repoName: string (lengths 7–77)
tree: string (lengths 0–2.85M)
readme: string (lengths 0–4.9M)
near_nearorg_container
.eslintrc.json .github workflows update_submodules.yml README.md next.config.js package.json public blog category a-post-from-illia-polosukhin feed index.xml index.html case-studies feed index.xml index.html page 2 index.html 3 index.html community feed index.xml index.html page 10 index.html 11 index.html 12 index.html 13 index.html 14 index.html 15 index.html 16 index.html 17 index.html 18 index.html 19 index.html 2 index.html 20 index.html 21 index.html 22 index.html 23 index.html 24 index.html 25 index.html 26 index.html 3 index.html 4 index.html 5 index.html 6 index.html 7 index.html 8 index.html 9 index.html developers feed index.xml index.html page 2 index.html 3 index.html 4 index.html 5 index.html 6 index.html 7 index.html 8 index.html 9 index.html near-foundation feed index.xml index.html page 10 index.html 11 index.html 12 index.html 13 index.html 14 index.html 15 index.html 16 index.html 17 index.html 18 index.html 19 index.html 2 index.html 20 index.html 21 index.html 22 index.html 23 index.html 3 index.html 4 index.html 5 index.html 6 index.html 7 index.html 8 index.html 9 index.html uncategorized feed index.xml index.html page 10 index.html 11 index.html 12 index.html 13 index.html 14 index.html 15 index.html 16 index.html 17 index.html 18 index.html 19 index.html 2 index.html 20 index.html 21 index.html 22 index.html 23 index.html 24 index.html 25 index.html 26 index.html 27 index.html 28 index.html 29 index.html 3 index.html 30 index.html 31 index.html 32 index.html 33 index.html 34 index.html 35 index.html 36 index.html 37 index.html 38 index.html 39 index.html 4 index.html 40 index.html 41 index.html 42 index.html 43 index.html 44 index.html 45 index.html 46 index.html 47 index.html 48 index.html 49 index.html 5 index.html 50 index.html 51 index.html 52 index.html 53 index.html 54 index.html 55 index.html 56 index.html 57 index.html 58 index.html 59 index.html 6 index.html 60 index.html 7 index.html 8 index.html 9 index.html press-releases-categories ecosystem feed index.xml index.html governance feed index.xml index.html inclusion feed index.xml index.html other feed index.xml index.html partner feed index.xml index.html page 2 index.html regional-hubs feed index.xml index.html technical feed index.xml index.html press-releases bitcoin-suisse-announces-full-support-of-near-for-institutional-investors index.html fireblocks-provides-custody-facility-for-institutional-investors-on-near index.html marieke-flament-appointed-ceo-of-near-foundation index.html mintbase-raises-7-5-million-in-series-a-and-5-million-grant-pool-to-pioneer-nft-infrastructure index.html near-and-circle-announce-usdc-support-for-multi-chain-ecosystem index.html near-foundation-and-forkast-unveil-shortlist-for-women-in-web3-changemakers-2022 index.html near-launches-regional-hub-in-kenya-to-lead-blockchain-innovation-and-talent-development-in-africa index.html near-launches-web3-regional-hub-in-korea index.html near-opens-submissions-for-women-in-web3-changemakers-2022 index.html near-protocol-enhances-its-ecosystem-with-nep-141-integration-on-binance-custody index.html near-releases-javascript-sdk-bringing-web3-to-20-million-developers index.html near-teams-with-google-cloud-to-accelerate-web3-startups index.html near-wallet-users-soar-to-20-million index.html sailgp-and-near-unveil-multi-year-global-league-partnership index.html tag accelerator feed index.xml index.html account-aggregation feed index.xml index.html ai-is-near feed index.xml index.html ai feed index.xml index.html alex-skidanov feed 
index.xml index.html alibaba-cloud feed index.xml index.html alpha-near-org feed index.xml index.html analytics feed index.xml index.html arbitrum feed index.xml index.html art feed index.xml index.html arterra-labs feed index.xml index.html artificial-intelligence feed index.xml index.html aurora-cloud feed index.xml index.html aurora-labs feed index.xml index.html aurora feed index.xml index.html b-o-s-web-push-notifications feed index.xml index.html b-o-s feed index.xml index.html berklee-college-of-music feed index.xml index.html berklee-raidar feed index.xml index.html bernoulli-locke feed index.xml index.html bigquery feed index.xml index.html blockchain-operating-system feed index.xml index.html blockchain feed index.xml index.html bora feed index.xml index.html bos feed index.xml index.html page 2 index.html captains-call feed index.xml index.html case-study feed index.xml index.html cathy-hackl feed index.xml index.html chain-abstraction feed index.xml index.html circle feed index.xml index.html coin98-super-app feed index.xml index.html coinbase feed index.xml index.html coingecko-raffle feed index.xml index.html collision feed index.xml index.html communication feed index.xml index.html community feed index.xml index.html page 2 index.html 3 index.html competition feed index.xml index.html consensus feed index.xml index.html core-protocol feed index.xml index.html page 2 index.html cosmose-ai feed index.xml index.html creatives feed index.xml index.html cricket-world-cup-2023 feed index.xml index.html daos feed index.xml index.html page 2 index.html decentralized-storage feed index.xml index.html defi feed index.xml index.html dev-rel feed index.xml index.html developer-tools feed index.xml index.html developers feed index.xml index.html dropt feed index.xml index.html ecosystem-funding feed index.xml index.html ecosystem feed index.xml index.html page 2 index.html 3 index.html 4 index.html 5 index.html 6 index.html 7 index.html 8 index.html eigen-labs feed index.xml index.html eigenlayer feed index.xml index.html emily-rose-dallara feed index.xml index.html encode-club feed index.xml index.html encode feed index.xml index.html entertainment feed index.xml index.html erica-kang feed index.xml index.html eth-denver feed index.xml index.html ethcc feed index.xml index.html ethdenver feed index.xml index.html ethereum-climate-alliance feed index.xml index.html ethereum-climate-platform feed index.xml index.html ethereum-rollups feed index.xml index.html ethereum feed index.xml index.html events feed index.xml index.html fan-engagement feed index.xml index.html fastauth-sdk feed index.xml index.html fastauth feed index.xml index.html few-and-far feed index.xml index.html fitness feed index.xml index.html flipside feed index.xml index.html founders feed index.xml index.html funding feed index.xml index.html galxe feed index.xml index.html gamefi feed index.xml index.html gaming feed index.xml index.html gig-economy feed index.xml index.html glass feed index.xml index.html governance feed index.xml index.html grants feed index.xml index.html grassroots-support-initiative feed index.xml index.html hackathon feed index.xml index.html horizon feed index.xml index.html i-am-human feed index.xml index.html icc-world-cup feed index.xml index.html icc feed index.xml index.html idos feed index.xml index.html illia-polosukhin feed index.xml index.html indexer feed index.xml index.html infrastructure feed index.xml index.html international-cricket-council feed index.xml index.html inven feed 
index.xml index.html investments feed index.xml index.html javascript-dapps feed index.xml index.html javascript feed index.xml index.html journey feed index.xml index.html kaikai feed index.xml index.html kaikainow feed index.xml index.html kakao-games feed index.xml index.html knaq feed index.xml index.html lafc feed index.xml index.html ledger-live feed index.xml index.html litenode feed index.xml index.html los-angeles-football-club feed index.xml index.html loyalty feed index.xml index.html machine-learning feed index.xml index.html mantle feed index.xml index.html mass-adoption feed index.xml index.html mastercard feed index.xml index.html media feed index.xml index.html metabuild feed index.xml index.html mintbase feed index.xml index.html mirea-asset feed index.xml index.html mission-vision feed index.xml index.html modularity feed index.xml index.html move-to-earn feed index.xml index.html multichain feed index.xml index.html music-licensing feed index.xml index.html music feed index.xml index.html ncon-bounties feed index.xml index.html ncon feed index.xml index.html ndc-funding feed index.xml index.html ndc feed index.xml index.html ndcfunding feed index.xml index.html near-2023 feed index.xml index.html near-2024 feed index.xml index.html near-apac feed index.xml index.html near-balkans feed index.xml index.html near-big-query feed index.xml index.html near-block-explorers feed index.xml index.html near-da-layer feed index.xml index.html near-da feed index.xml index.html near-data-availability-layer feed index.xml index.html near-day feed index.xml index.html near-digital-collective feed index.xml index.html near-discord feed index.xml index.html near-ecosystem feed index.xml index.html near-foundation-council feed index.xml index.html near-foundation-policy-priorities feed index.xml index.html near-foundation-update feed index.xml index.html near-foundation feed index.xml index.html near-horizon feed index.xml index.html near-in-review feed index.xml index.html near-korea-hub feed index.xml index.html near-korea feed index.xml index.html near-protocol feed index.xml index.html near-query-api feed index.xml index.html near-regional-hubs feed index.xml index.html near-tasks feed index.xml index.html near-vietnam feed index.xml index.html near-wallet-migration feed index.xml index.html near-wallet feed index.xml index.html near feed index.xml index.html nearcon-2023 feed index.xml index.html nearcon-23-3 feed index.xml index.html nearcon-23 feed index.xml index.html nearcon-early-bird-tickets feed index.xml index.html nearcon-highlights feed index.xml index.html nearcon-irl-hackathon feed index.xml index.html nearcon-speakers feed index.xml index.html nearcon feed index.xml index.html page 2 index.html nfts feed index.xml index.html page 2 index.html 3 index.html nightshade feed index.xml index.html okrs feed index.xml index.html onmachina feed index.xml index.html open-web feed index.xml index.html oracles feed index.xml index.html pagoda-product-roadmap feed index.xml index.html pagoda feed index.xml index.html partners feed index.xml index.html partnerships feed index.xml index.html page 2 index.html pipeflare feed index.xml index.html polygon-zkevm feed index.xml index.html press-start feed index.xml index.html press feed index.xml index.html protocol-roadmap feed index.xml index.html public-dataset feed index.xml index.html pyth-price-feeds feed index.xml index.html pyth feed index.xml index.html raidar feed index.xml index.html refer-and-earn feed index.xml index.html 
regional-hubs feed index.xml index.html research feed index.xml index.html retail feed index.xml index.html rewards feed index.xml index.html richmond-night-market feed index.xml index.html roadmaps feed index.xml index.html rownd feed index.xml index.html sailgp feed index.xml index.html satori feed index.xml index.html self-sovereignty feed index.xml index.html seracle feed index.xml index.html sharding feed index.xml index.html shemaroo feed index.xml index.html shopify feed index.xml index.html shred-sports feed index.xml index.html sk-inc-cc feed index.xml index.html skateboarding feed index.xml index.html space-id-voyage feed index.xml index.html startup-wise-guys feed index.xml index.html stateless-validation feed index.xml index.html statistics feed index.xml index.html sustainability feed index.xml index.html sweat-economy feed index.xml index.html sweat feed index.xml index.html sweatcoin feed index.xml index.html taco-labs feed index.xml index.html tekuno feed index.xml index.html the-dock feed index.xml index.html the-littles feed index.xml index.html thefunpass feed index.xml index.html ticketing feed index.xml index.html transfer-wizard feed index.xml index.html transparency-report feed index.xml index.html transparency feed index.xml index.html uk-crypto-asset-consultation feed index.xml index.html ukraine feed index.xml index.html usdc feed index.xml index.html user-owned-ai feed index.xml index.html validator-delegation feed index.xml index.html veronica-korzh feed index.xml index.html vortex-gaming feed index.xml index.html wallet feed index.xml index.html we-are-developers-world-congress feed index.xml index.html we-are-developers feed index.xml index.html web3-accelerator feed index.xml index.html web3-and-thrive-podcast feed index.xml index.html web3-b2b feed index.xml index.html web3-finance feed index.xml index.html web3-gaming feed index.xml index.html web3-incubation feed index.xml index.html web3-loyalty feed index.xml index.html web3-onboarding feed index.xml index.html women-in-web3-changemakers feed index.xml index.html women-of-web3-changemakers feed index.xml index.html wormhole feed index.xml index.html zero-knowledge feed index.xml index.html zk-proofs feed index.xml index.html унікальна-можливість-для-українців-б index.html images ecosystem get-funding bafkreia3zulk3xrmwc6grqcpxavzug6odwgkwzd5magctxvq4jvalbnkcy.svg bafkreicsmgaejlbmzvfbawdayiqljbxzi62tmvvktoveubuljijib6ezd4.svg bafkreictzvvz2irr7tr7fhkdne2i7xpr4mf7x5b5i2vhgoqdswb73lbyyu.svg bafkreifdzknpkboed3jmm4rgtbg3mqaocziagtjbznfp6o3hvgd5ix6brm.svg bafkreihmznoqcsq2ivkjck2iqpyaojmmrusma2pqapjwlvop2i7oyoebyu.svg manifest.json robots.txt src components middleware.ts navigation categories.ts icons close.svg near-icon.svg near-logo.svg return.svg search.svg hooks useBanner.ts useClickTracking.ts useCookiePreferences.ts useLatestEvents.ts useLatestNews.ts usePageAnalytics.ts useSocialDb.ts useStatistics.ts styles globals.css reset.css theme.css types rudderstack-analytics.ts utils analytics.ts config.ts constants.ts events.ts social-db.ts types.ts tsconfig.json
# nearorg_container

Top-level container app for near.org marketing and the apps.near.org gateway.
near_near-workspaces-js
.github ISSUE_TEMPLATE BOUNTY.yml workflows lint.yml tests-sandbox.yml typedoc-generator.yml .near-credentials workspaces testnet ro3evqruqecmi7q4uwux1651245117258.json .vscode extensions.json settings.json README.md __tests__ 01.basic-transactions.ava.ts 03.single-use-access-keys-with-linkdrop.ava.ts 04.cross-contract-calls-with-fungible-token.ava.ts 05.spoon-contract-to-sandbox.ava.ts 06.init-config.ava.ts 07.resue-worker.ava.ts 08.custom-network.ava.ts 08.fast-forward.ava.ts ci-ignore-02.patch-state.ava.ts examples simple-project README.md __tests__ test-simple-project.ava.js test-status-message.ava.js package.json lerna.json package.json packages js README.md __tests__ account-manager.ava.ts account.ava.ts jsonrpc.ava.js record-builder.ava.ts dist account account-manager.d.ts account-manager.js account.d.ts account.js index.d.ts index.js near-account-manager.d.ts near-account-manager.js near-account.d.ts near-account.js utils.d.ts utils.js contract-state.d.ts contract-state.js index.d.ts index.js internal-utils.d.ts internal-utils.js jsonrpc.d.ts jsonrpc.js record builder.d.ts builder.js index.d.ts index.js types.d.ts types.js server index.d.ts index.js server.d.ts server.js transaction-result.d.ts transaction-result.js transaction.d.ts transaction.js types.d.ts types.js utils.d.ts utils.js worker.d.ts worker.js package.json scripts delete-accounts.ts install.js src account account-manager.ts account.ts index.ts near-account-manager.ts near-account.ts utils.ts contract-state.ts index.ts internal-utils.ts jsonrpc.ts record builder.ts index.ts types.ts server index.ts node-port-check.d.ts server.ts transaction-result.ts transaction.ts types.ts utils.ts worker.ts tsconfig.json typedoc.json tsconfig.json typedoc.json
# Simple project This is the simplest project setup example with workspaces-js. You can copy it as a starting point when setting up your own project. ## Usage ``` yarn yarn test ``` ## Setup your project Assume you have already written your smart contract. Setting up and writing workspaces-js tests as in this project is easy: 1. Build the contract to `.wasm` and place it in `contracts/`. 2. Install `near-workspaces` and `ava` with `npm` or `yarn`. 3. Copy the ava.config.cjs to your project root directory. 4. Write tests, place them in `__tests__/`, and name them so they end with `.ava.js`. You can refer to `__tests__/test-status-message.ava.js` as an example. 5. We're done! Run the tests with `yarn test` and continue adding more tests! More info in our main README: https://github.com/near/workspaces-js
<div align="center"> <h1>NEAR Workspaces (TypeScript/JavaScript Edition)</h1> [![Project license](https://img.shields.io/badge/license-Apache2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Project license](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT) [![Discord](https://img.shields.io/discord/490367152054992913?label=discord)](https://discord.gg/Vyp7ETM) [![NPM version](https://img.shields.io/npm/v/near-workspaces.svg?style=flat-square)](https://npmjs.com/near-workspaces) [![Size on NPM](https://img.shields.io/bundlephobia/minzip/near-workspaces.svg?style=flat-square)](https://npmjs.com/near-workspaces) </div> `NEAR Workspaces` is a library for automating workflows and writing tests for NEAR smart contracts. You can use it as is or integrate it with the test runner of your choice (AVA, Jest, Mocha, etc.). If you don't have a preference, we suggest using AVA. Quick Start (without testing frameworks) =========== To get started with `NEAR Workspaces` you only need to do two things: 1. Initialize a `Worker`. ```ts const worker = await Worker.init(); const root = worker.rootAccount; const alice = await root.createSubAccount('alice'); const contract = await root.devDeploy('path/to/compiled.wasm'); ``` Let's step through this. 1. `Worker.init` initializes a new `SandboxWorker` or `TestnetWorker` depending on the config. `SandboxWorker` contains [NEAR Sandbox](https://github.com/near/sandbox), which is essentially a local mini-NEAR blockchain. You can create one `Worker` per test to get its own data directory and port (for Sandbox) or root account (for Testnet), so that tests can run in parallel without race conditions when accessing state. If there is no state interference, you can also reuse the `Worker` to speed up the tests. 2. The worker has a `root` account. For `SandboxWorker`, it's `test.near`. For `TestnetWorker`, it creates a unique account. The following accounts are created as subaccounts of the root account. The account names change between runs, so you should not refer to them by hard-coded names. You can access them via the account objects, such as `root`, `alice` and `contract` above. 3. `root.createSubAccount` creates a new subaccount of `root` with the given name, for example `alice.<root-account-name>`. 4. `root.devDeploy` creates an account with a random name, then deploys the specified Wasm file to it. 5. `path/to/compiled.wasm` will resolve relative to your project root. That is, the nearest directory with a `package.json` file, or your current working directory if no `package.json` is found. 
To construct a path relative to your test file, you can use `path.join(__dirname, '../etc/etc.wasm')` ([more info](https://nodejs.org/api/path.html#path_path_join_paths)). 6. `worker` contains a reference to this data directory, so that multiple tests can use it as a starting point. 7. If you're using a test framework, you can save the `worker` object and the account objects `root`, `alice`, `contract` to the test context to reuse them in subsequent tests. 8. At the end of a test, call `await worker.tearDown()` to shut down the Worker. It gracefully shuts down the Sandbox instance it ran in the background. However, it keeps the data directory around. That's what stores the state of the two accounts that were created (`alice` and `contract-account-name` with its deployed contract). 2. Writing tests. `near-workspaces` is designed for concurrency. Here's a simple way to get concurrent runs using plain JS: ```ts import {strict as assert} from 'assert'; await Promise.all([ async () => { await alice.call( contract, 'some_update_function', {some_string_argument: 'cool', some_number_argument: 42} ); const result = await contract.view( 'some_view_function', {account_id: alice} ); assert.equal(result, 'whatever'); }, async () => { const result = await contract.view( 'some_view_function', {account_id: alice} ); /* Note that we expect the value returned from `some_view_function` to be a default here, because this `fork` runs *at the same time* as the previous, in a separate local blockchain */ assert.equal(result, 'some default'); } ]); ``` Let's step through this. 1. `worker` and accounts such as `alice` were created before. 2. `call` syntax mirrors [near-cli](https://github.com/near/near-cli) and either returns the successful return value of the given function or throws the encountered error. If you want to inspect a full transaction and/or avoid the `throw` behavior, you can use `callRaw` instead. 3. While `call` is invoked on the account _doing the call_ (`alice.call(contract, …)`), `view` is invoked on the account _being viewed_ (`contract.view(…)`). This is because the caller of a view is irrelevant and ignored. See the [tests](https://github.com/near/workspaces-js/tree/main/__tests__) directory in this project for more examples. Quick Start with AVA =========== Since `near-workspaces` is designed for concurrency, AVA is a great fit, because it runs tests concurrently by default. To use `NEAR Workspaces` with AVA: 1. Start with the basic setup described [here](https://github.com/avajs/ava). 2. Add a custom script for running tests on Testnet (if needed). Check the instructions in the `Running on Testnet` section. 3. 
Add your tests following this example: ```ts import {Worker, NearAccount} from 'near-workspaces'; import anyTest, {TestFn} from 'ava'; const test = anyTest as TestFn<{ worker: Worker; accounts: Record<string, NearAccount>; }>; /* If using `test.before`, each test reuses the same worker; if you'd like to make a fresh copy of the worker for each test, use `beforeEach` and `afterEach`, which allows you to isolate the state for each test */ test.before(async t => { const worker = await Worker.init(); const root = worker.rootAccount; const contract = await root.devDeploy('path/to/contract/file.wasm'); /* Account that you will be able to use in your tests */ const ali = await root.createSubAccount('ali'); t.context.worker = worker; t.context.accounts = {root, contract, ali}; }) test('Test name', async t => { const {ali, contract} = t.context.accounts; await ali.call(contract, 'set_status', {message: 'hello'}); const result: string = await contract.view('get_status', {account_id: ali}); t.is(result, 'hello'); }); test.after(async t => { // Stop Sandbox server await t.context.worker.tearDown().catch(error => { console.log('Failed to tear down the worker:', error); }); }); ``` "Spooning" Contracts from Testnet and Mainnet ============================================= [Spooning a blockchain](https://coinmarketcap.com/alexandria/glossary/spoon-blockchain) is copying the data from one network into a different network. near-workspaces makes it easy to copy data from Mainnet or Testnet contracts into your local Sandbox environment: ```ts const refFinance = await root.importContract({ mainnetContract: 'v2.ref-finance.near', blockId: 50_000_000, withData: true, }); ``` This would copy the Wasm bytes and contract state from [v2.ref-finance.near](https://explorer.near.org/accounts/v2.ref-finance.near) to your local blockchain as they existed at block `50_000_000`. This makes use of Sandbox's special [patch state](#patch-state-on-the-fly) feature to keep the contract name the same, even though the top-level account might not exist locally (note that this means it only works in Sandbox testing mode). You can then interact with the contract in a deterministic way, the same way you interact with all other accounts created with near-workspaces. Gotcha: `withData` will only work out-of-the-box if the contract's data is 50kB or less. This is due to the default configuration of RPC servers; see [the "Heads Up" note here](https://docs.near.org/api/rpc/contracts#view-contract-state). Some teams at NEAR are hard at work giving you an easy way to run your own RPC server, at which point you can point tests at your custom RPC endpoint and get around the 50kB limit. See an [example of spooning](https://github.com/near/workspaces-js/blob/main/__tests__/05.spoon-contract-to-sandbox.ava.ts) contracts. Running on Testnet ================== near-workspaces is set up so that you can write tests once and run them against a local Sandbox node (the default behavior) or against [NEAR TestNet](https://docs.near.org/concepts/basics/networks). Some reasons this might be helpful: * Gives higher confidence that your contracts work as expected * You can test against deployed testnet contracts * If something seems off in Sandbox mode, you can compare it to testnet In order to use Workspaces JS in testnet mode you will need to have a testnet account. You can create one [here](https://wallet.testnet.near.org/). You can switch to testnet mode in three ways. 1. 
When creating the Worker, set the network to `testnet` and pass your master account: ```ts const worker = await Worker.init({ network: 'testnet', testnetMasterAccountId: '<yourAccountName>', }) ``` 2. Set the `NEAR_WORKSPACES_NETWORK` and `TESTNET_MASTER_ACCOUNT_ID` environment variables when running your tests: ```bash NEAR_WORKSPACES_NETWORK=testnet TESTNET_MASTER_ACCOUNT_ID=<your master account Id> node test.js ``` If you set these environment variables and also pass `{network: 'testnet', testnetMasterAccountId: <masterAccountId>}` to `Worker.init`, the config object takes precedence. 3. If using `near-workspaces` with AVA, you can use a custom config file. Other test runners allow similar config files; adjust the following instructions for your situation. Create a file in the same directory as your `package.json` called `ava.testnet.config.cjs` with the following contents: ```js module.exports = { ...require('near-workspaces/ava.testnet.config.cjs'), ...require('./ava.config.cjs'), }; module.exports.environmentVariables = { TESTNET_MASTER_ACCOUNT_ID: '<masterAccountId>', }; ``` The [near-workspaces/ava.testnet.config.cjs](https://github.com/near/workspaces-js/blob/main/ava.testnet.config.cjs) import sets the `NEAR_WORKSPACES_NETWORK` environment variable for you. A benefit of this approach is that you can then easily ignore files that should only run in Sandbox mode. Now you'll also want to add a `test:testnet` script to your `package.json`'s `scripts` section: ```diff "scripts": { "test": "ava", + "test:testnet": "ava --config ./ava.testnet.config.cjs" } ``` Stepping through a testnet example ---------------------------------- Let's revisit a shortened version of the earlier example, describing what will happen on Testnet. 1. Create a `Worker`. ```ts const worker = await Worker.init(); ``` `Worker.init` creates a unique testnet account as the root account. 2. Write tests. ```ts await Promise.all([ async () => { await alice.call( contract, 'some_update_function', {some_string_argument: 'cool', some_number_argument: 42} ); const result = await contract.view( 'some_view_function', {account_id: alice} ); assert.equal(result, 'whatever'); }, async () => { const result = await contract.view( 'some_view_function', {account_id: alice} ); assert.equal(result, 'some default'); } ]); ``` Note: sometimes account creation rate limits are reached on testnet; simply wait a little while and try again. Running tests only in Sandbox ------------------------------- If some of your tests take advantage of Sandbox-specific features, you can skip them on testnet in two ways: 1. You can skip entire sections of your files by checking `getNetworkFromEnv() === 'sandbox'`. ```ts const worker = await Worker.init(); // things that make sense on any network const root = worker.rootAccount; const alice = await root.createSubAccount('alice'); if (getNetworkFromEnv() === 'sandbox') { // things that only make sense with Sandbox } ``` 2. Use a separate testnet config file, as described under the "Running on Testnet" heading above. Specify the test files to include and exclude in the config file. Patch State on the Fly ====================== In Sandbox mode, you can add or modify any contract state, contract code, account or access key with `patchState`. You cannot perform arbitrary mutation of contract state with transactions, since transactions can only include contract calls that mutate state in a contract-programmed way. 
For example, with an NFT contract, you can perform operations on NFTs you own, but you cannot manipulate NFTs that are owned by other accounts, since the smart contract is coded with checks to reject that. This is the expected behavior of the NFT contract. However, you may want to change another person's NFT as part of a test setup. This is called "arbitrary mutation on contract state" and can be done with `patchState`. Alternatively you can stop the node, dump state at genesis, edit the genesis, and restart the node. The latter approach is more complicated and requires restarting the node. It is true that you can alter contract code, accounts, and access keys using normal transactions via the `DeployContract`, `CreateAccount`, and `AddKey` [actions](https://nomicon.io/RuntimeSpec/Actions.html?highlight=actions#actions). But this limits you to altering your own account or sub-accounts. `patchState` allows you to perform these operations on any account. To see an example of how to do this, see the [patch-state test](https://github.com/near/workspaces-js/blob/main/__tests__/02.patch-state.ava.ts). Time Traveling =============== In Sandbox mode, you can forward time-related state (the block height, timestamp and epoch height) with `fastForward`. This means contracts that require time-sensitive data do not need to sit and wait for the sandbox to produce blocks in real time; you can simply call the API to move further ahead in time. For an example, see the [fast-forward test](./__tests__/08.fast-forward.ava.ts). Note: `fastForward` does not speed up in-flight transactions. Pro Tips ======== * `NEAR_WORKSPACES_DEBUG=true` – run tests with this environment variable set to get copious debug output and a full log file for each Sandbox instance. * `Worker.init` [config](https://github.com/near/workspaces-js/blob/main/packages/js/src/interfaces.ts) – you can pass a config object as the first argument to `Worker.init`. This lets you do things like: * skip initialization if the specified data directory already exists (the default behavior) ```ts Worker.init( { rm: false, homeDir: './test-data/alice-owns-an-nft' }, ) ``` * always recreate the data directory instead with `rm: true` * specify which port to run on * and more! Env variables ======== ```text NEAR_CLI_MAINNET_RPC_SERVER_URL NEAR_CLI_TESTNET_RPC_SERVER_URL ``` Set these environment variables to point at a custom RPC server; clear them when you want to get back to the default RPC server. Example: ```shell export NEAR_CLI_MAINNET_RPC_SERVER_URL=<put_your_rpc_server_url_here> ``` Here is a test case: [jsonrpc.ava.js](./packages/js/__tests__/jsonrpc.ava.js)
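As a companion to the `call`/`callRaw` note in the walkthrough above, here is a minimal sketch of inspecting a full transaction result instead of relying on `call`'s throw-on-error behavior. It assumes the `alice` and `contract` accounts from the Quick Start, and that `TransactionResult` exposes `succeeded` and `logs` as in the current typings:

```ts
import {strict as assert} from 'assert';

// `alice` and `contract` are the accounts created in the Quick Start above.
// Unlike `call`, `callRaw` returns the full TransactionResult rather than the
// parsed return value, and it does not throw when the transaction fails.
const outcome = await alice.callRaw(
  contract,
  'some_update_function',
  {some_string_argument: 'cool', some_number_argument: 42},
);

// Inspect the outcome instead of letting a failure throw.
assert(outcome.succeeded, 'expected some_update_function to succeed');
console.log('logs emitted by the contract:', outcome.logs);
```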
EVANSMUOKI_Quadartic_Equation_Calculator
.vscode launch.json Cargo.toml README.md build.bat build.sh src lib.rs test.sh
# Near certification smart contract # Quadratic_equation_calculator The Smart Contract code is in `src/lib.rs`. The Smart Contract implements a quadratic_equation_calculator which takes input from the user: the coefficients of the three terms of a quadratic equation. The program uses the quadratic formula to return the roots of the equation as the output. The result may be one of three types, perfect, real or imaginary, depending on the coefficients entered by the user. Once executed, the calculator is reset to zero and prompts the user to input another set of coefficients. Tests are also contained in the smart contract. ## Required software Rust 1.58 + cargo Node.js NEAR CLI 3.1 ## Author EVANSMUOKI
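The root classification the README describes follows directly from the discriminant in the quadratic formula. Below is a minimal illustrative sketch of that logic, written in TypeScript purely for illustration; the contract's actual implementation lives in `src/lib.rs` and is not reproduced here:

```ts
// Classify and compute the roots of a*x^2 + b*x + c = 0 using the quadratic formula.
// "Perfect" = one repeated real root (zero discriminant), "real" = two distinct
// real roots, "imaginary" = a pair of complex-conjugate roots.
function solveQuadratic(a: number, b: number, c: number): string {
  if (a === 0) {
    throw new Error('coefficient a must be non-zero for a quadratic equation');
  }
  const discriminant = b * b - 4 * a * c;
  if (discriminant > 0) {
    const x1 = (-b + Math.sqrt(discriminant)) / (2 * a);
    const x2 = (-b - Math.sqrt(discriminant)) / (2 * a);
    return `real roots: x1 = ${x1}, x2 = ${x2}`;
  }
  if (discriminant === 0) {
    return `perfect (repeated) root: x = ${-b / (2 * a)}`;
  }
  const real = -b / (2 * a);
  const imaginary = Math.sqrt(-discriminant) / (2 * a);
  return `imaginary roots: x = ${real} ± ${imaginary}i`;
}

// Example: x^2 - 3x + 2 = 0 has roots 2 and 1.
console.log(solveQuadratic(1, -3, 2));
```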
esaminu_test-rs-boilerplate-10
.eslintrc.yml .github ISSUE_TEMPLATE 01_BUG_REPORT.md 02_FEATURE_REQUEST.md 03_CODEBASE_IMPROVEMENT.md 04_SUPPORT_QUESTION.md config.yml PULL_REQUEST_TEMPLATE.md labels.yml workflows codeql.yml deploy-to-console.yml labels.yml lock.yml pr-labels.yml stale.yml .gitpod.yml README.md contract Cargo.toml README.md build.sh deploy.sh src lib.rs docs CODE_OF_CONDUCT.md CONTRIBUTING.md SECURITY.md frontend App.js assets global.css logo-black.svg logo-white.svg index.html index.js near-interface.js near-wallet.js package.json start.sh ui-components.js integration-tests Cargo.toml src tests.rs package.json
# Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://www.rust-lang.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract to the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address to which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka a `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, which makes it a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay gas for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"greeting":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first log in to NEAR using: ```bash # Use near-cli to log in to your NEAR account near login ``` and then use the logged-in account to sign the transaction: `--accountId <your-account>`.
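For an end-to-end check of the two methods above, a sandbox test can deploy the compiled wasm and call them. Below is a hedged TypeScript sketch using the `near-workspaces` library covered earlier in this document; the wasm path and account names are illustrative assumptions, and this template's own integration tests actually live in `integration-tests/` and are written in Rust:

```ts
import {Worker, NearAccount} from 'near-workspaces';
import anyTest, {TestFn} from 'ava';

const test = anyTest as TestFn<{worker: Worker; accounts: Record<string, NearAccount>}>;

test.before(async t => {
  const worker = await Worker.init();
  const root = worker.rootAccount;
  // The path is an assumption; point it at the wasm produced by your contract build.
  const contract = await root.devDeploy('path/to/hello_near.wasm');
  const ali = await root.createSubAccount('ali');
  t.context.worker = worker;
  t.context.accounts = {root, contract, ali};
});

test('stores and returns a greeting', async t => {
  const {ali, contract} = t.context.accounts;
  t.is(await contract.view('get_greeting', {}), 'Hello');        // default greeting
  await ali.call(contract, 'set_greeting', {greeting: 'howdy'}); // change method, gas paid by ali
  t.is(await contract.view('get_greeting', {}), 'howdy');
});

test.after.always(async t => {
  // Shut down the sandbox instance started by Worker.init.
  await t.context.worker.tearDown();
});
```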
<h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is an easy-to-start React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start building your own project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads `/frontend/index.js`; this is your entry point for learning how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`; this runs the tests in the `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. 
When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx`. Ensure that it's installed with `near --version` (or `npx near --version`). Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: Deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `.wasm` file generated in the `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: Set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above (see the config sketch at the end of this README). const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** and/or support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. 
- Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._
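To make Step 3 above concrete, here is a hypothetical sketch of what a small config module could look like. The file name, the `getConfig` helper, and the fallback account are assumptions for illustration only; the template's actual file is `src/config.js`:

```ts
// Hypothetical config sketch for Step 3 (the template's actual file is src/config.js).
// The fallback account name is just a placeholder for your deployed contract account.
const CONTRACT_NAME = process.env.CONTRACT_NAME ?? 'near-blank-project.YOUR-NAME.testnet';

export function getConfig(env: string = process.env.NODE_ENV ?? 'development') {
  switch (env) {
    case 'production':
    case 'mainnet':
      return {
        networkId: 'mainnet',
        nodeUrl: 'https://rpc.mainnet.near.org',
        contractName: CONTRACT_NAME,
      };
    default:
      // Local development runs against testnet, matching `npm run deploy`.
      return {
        networkId: 'testnet',
        nodeUrl: 'https://rpc.testnet.near.org',
        contractName: CONTRACT_NAME,
      };
  }
}
```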
```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! 
```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. 
<h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. 
When you're ready to make it permanent, here's how:

Step 0: Install near-cli (optional)
-------------------------------------

[near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally:

    npm install --global near-cli

Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx`.

Ensure that it's installed with `near --version` (or `npx near --version`).

Step 1: Create an account for the contract
------------------------------------------

Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`:

1. Authorize NEAR CLI, following the commands it gives you:

       near login

2. Create a subaccount (replace `YOUR-NAME` below with your actual account name):

       near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet

Step 2: deploy the contract
---------------------------

Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `.wasm` file that was generated in the `contract` build directory.

    near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE

Step 3: set contract name in your frontend code
-----------------------------------------------

Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account ID you used above.

    const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet'

Troubleshooting
===============

On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details.

[create-near-app]: https://github.com/near/create-near-app
[Node.js]: https://nodejs.org/en/download/package-manager/
[jest]: https://jestjs.io/
[NEAR accounts]: https://docs.near.org/concepts/basics/account
[NEAR Wallet]: https://wallet.testnet.near.org/
[near-cli]: https://github.com/near/near-cli
[gh-pages]: https://github.com/tschaub/gh-pages

## Roadmap

See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues).

- [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction)
- [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction)
- [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug)

## Support

Reach out to the maintainer:

- [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+)

## Project assistance

If you want to say **thank you** and/or support active development of Rust Boilerplate Template:

- Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project.
- Tweet about the Rust Boilerplate Template.
- Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog.

Together, we can make Rust Boilerplate Template **better**!

## Contributing

First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**.

Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved!

## Authors & contributors

The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy).

For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors).

## Security

Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk.

_For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._
### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. 
Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. 
<h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. 
When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. 
- Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! 
### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. 
Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. 
<h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. 
When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. 
- Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! 
### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . 
# Hello NEAR Contract

The smart contract exposes two methods for storing and retrieving a greeting on the NEAR network.

```rust
use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::{log, near_bindgen};

const DEFAULT_GREETING: &str = "Hello";

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize)]
pub struct Contract {
    greeting: String,
}

impl Default for Contract {
    fn default() -> Self {
        Self { greeting: DEFAULT_GREETING.to_string() }
    }
}

#[near_bindgen]
impl Contract {
    // Public: Returns the stored greeting, defaulting to 'Hello'
    pub fn get_greeting(&self) -> String {
        self.greeting.clone()
    }

    // Public: Takes a greeting, such as 'howdy', and records it
    pub fn set_greeting(&mut self, greeting: String) {
        // Record a log permanently to the blockchain!
        log!("Saving greeting {}", greeting);
        self.greeting = greeting;
    }
}
```

<br />

# Quickstart

1. Make sure you have [Rust](https://www.rust-lang.org/) installed.
2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup).

<br />

## 1. Build and Deploy the Contract

You can automatically compile and deploy the contract to the NEAR testnet by running:

```bash
./deploy.sh
```

Once finished, check the `neardev/dev-account` file to find the address at which the contract was deployed:

```bash
cat ./neardev/dev-account
# e.g. dev-1659899566943-21539992274727
```

<br />

## 2. Retrieve the Greeting

`get_greeting` is a read-only method (aka a `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**!

```bash
# Use near-cli to get the greeting
near view <dev-account> get_greeting
```

<br />

## 3. Store a New Greeting

`set_greeting` changes the contract's state, which makes it a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay gas for the transaction.

```bash
# Use near-cli to set a new greeting
near call <dev-account> set_greeting '{"greeting":"howdy"}' --accountId <dev-account>
```

**Tip:** If you would like to call `set_greeting` using your own account, first log in to NEAR using:

```bash
# Use near-cli to log in to your NEAR account
near login
```

and then use the logged-in account to sign the transaction: `--accountId <your-account>`.
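If you want to sanity-check the contract logic without deploying anything, a plain Rust unit test can exercise the same two methods. This is a minimal sketch, assuming the contract code shown above lives in the crate you are testing (e.g. run it with `cargo test` inside the `contract` folder):

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn returns_default_greeting() {
        // A freshly-initialized contract should answer with the default greeting.
        let contract = Contract::default();
        assert_eq!(contract.get_greeting(), "Hello");
    }

    #[test]
    fn stores_a_new_greeting() {
        // set_greeting should overwrite the value returned by get_greeting.
        let mut contract = Contract::default();
        contract.set_greeting("howdy".to_string());
        assert_eq!(contract.get_greeting(), "howdy");
    }
}
```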
Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! 
### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. 
Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! 
### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. 
Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! 
### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . 
<a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. 
Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. 
Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. 
Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! 
### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
Troubleshooting
===============

On Windows, if you're seeing an error containing `EPERM`, it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details.

[create-near-app]: https://github.com/near/create-near-app
[Node.js]: https://nodejs.org/en/download/package-manager/
[jest]: https://jestjs.io/
[NEAR accounts]: https://docs.near.org/concepts/basics/account
[NEAR Wallet]: https://wallet.testnet.near.org/
[near-cli]: https://github.com/near/near-cli
[gh-pages]: https://github.com/tschaub/gh-pages

## Roadmap

See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues).

- [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction)
- [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction)
- [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug)

## Support

Reach out to the maintainer:

- [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+)

## Project assistance

If you want to say **thank you** and/or support active development of Rust Boilerplate Template:

- Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project.
- Tweet about the Rust Boilerplate Template.
- Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog.

Together, we can make Rust Boilerplate Template **better**!

## Contributing

First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**.

Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved!

## Authors & contributors

The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy).

For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors).

## Security

Rust Boilerplate Template follows good security practices, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk.

_For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._
<a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. 
Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. 
Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. 
Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! 
### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . 
<a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. 
Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. 
Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . 
<a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. 
Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. 
Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. 
Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). 
- [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ <h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . 
<a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. 
Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. 
Troubleshooting
===============

On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details.

[create-near-app]: https://github.com/near/create-near-app
[Node.js]: https://nodejs.org/en/download/package-manager/
[jest]: https://jestjs.io/
[NEAR accounts]: https://docs.near.org/concepts/basics/account
[NEAR Wallet]: https://wallet.testnet.near.org/
[near-cli]: https://github.com/near/near-cli
[gh-pages]: https://github.com/tschaub/gh-pages

## Roadmap

See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues).

- [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction)
- [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction)
- [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug)

## Support

Reach out to the maintainer:

- [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+)

## Project assistance

If you want to say **thank you** and/or support active development of Rust Boilerplate Template:

- Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project.
- Tweet about the Rust Boilerplate Template.
- Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog.

Together, we can make Rust Boilerplate Template **better**!

## Contributing

First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**.

Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved!

## Authors & contributors

The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy).

For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors).

## Security

Rust Boilerplate Template follows good security practices, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk.

_For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._

# Hello NEAR Contract

The smart contract exposes two methods to enable storing and retrieving a greeting on the NEAR network.

```rust
use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::{log, near_bindgen};

const DEFAULT_GREETING: &str = "Hello";

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize)]
pub struct Contract {
    greeting: String,
}

impl Default for Contract {
    fn default() -> Self {
        Self { greeting: DEFAULT_GREETING.to_string() }
    }
}

#[near_bindgen]
impl Contract {
    // Public: Returns the stored greeting, defaulting to 'Hello'
    pub fn get_greeting(&self) -> String {
        return self.greeting.clone();
    }

    // Public: Takes a greeting, such as 'howdy', and records it
    pub fn set_greeting(&mut self, greeting: String) {
        // Record a log permanently to the blockchain!
        log!("Saving greeting {}", greeting);
        self.greeting = greeting;
    }
}
```

<br />

# Quickstart

1. Make sure you have installed [rust](https://www.rust-lang.org/).
2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup).

<br />

## 1. Build and Deploy the Contract

You can automatically compile and deploy the contract to the NEAR testnet by running:

```bash
./deploy.sh
```

Once finished, check the `neardev/dev-account` file to find the address where the contract was deployed:

```bash
cat ./neardev/dev-account
# e.g. dev-1659899566943-21539992274727
```

<br />

## 2. Retrieve the Greeting

`get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**!

```bash
# Use near-cli to get the greeting
near view <dev-account> get_greeting
```

<br />

## 3. Store a New Greeting

`set_greeting` changes the contract's state, which makes it a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay gas for the transaction.

```bash
# Use near-cli to set a new greeting
near call <dev-account> set_greeting '{"greeting":"howdy"}' --accountId <dev-account>
```

**Tip:** If you would like to call `set_greeting` using your own account, first log in to NEAR using:

```bash
# Use near-cli to log in to your NEAR account
near login
```

and then use the logged-in account to sign the transaction: `--accountId <your-account>`.
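As noted earlier, the boilerplate's `npm test` script runs the tests in the `integration-tests` directory. The following is only a hedged sketch of what such a test could look like using [near-workspaces](https://github.com/near/workspaces-js) in TypeScript; the wasm path and account names are assumptions and will not match the template's actual test files.

```ts
// Hypothetical integration-test sketch using near-workspaces (local sandbox).
// Paths and names below are assumptions, not the template's real values.
import { Worker } from "near-workspaces";

async function main(): Promise<void> {
  const worker = await Worker.init();      // start a local NEAR sandbox
  const root = worker.rootAccount;

  // Create a throwaway account and deploy the compiled contract to it.
  const contract = await root.createSubAccount("hello-near");
  await contract.deploy("./contract/target/wasm32-unknown-unknown/release/hello_near.wasm"); // assumed path

  // View call: free, read-only.
  const before: string = await contract.view("get_greeting", {});
  console.log("greeting before:", before);  // "Hello"

  // Change call: signed by `root`, which pays gas for the transaction.
  await root.call(contract, "set_greeting", { greeting: "howdy" });
  console.log("greeting after:", await contract.view("get_greeting", {}));

  await worker.tearDown();                  // stop the sandbox
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```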
kopfadam_near_testnet
README.md as-pect.config.js asconfig.json package-lock.json package.json scripts 1.dev-deploy.sh 2.use-contract.sh 3.cleanup.sh README.md src as_types.d.ts simple __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts singleton __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts
# `near-sdk-as` Starter Kit

This is a good project to use as a starting point for your AssemblyScript project.

## Samples

This repository includes a complete project structure for AssemblyScript contracts targeting the NEAR platform.

The example here is very basic. It's a simple contract demonstrating the following concepts:

- a single contract
- the difference between `view` and `change` methods
- basic contract storage

There are 2 AssemblyScript contracts in this project, each in its own folder:

- **simple** in the `src/simple` folder
- **singleton** in the `src/singleton` folder

### Simple

We say that an AssemblyScript contract is written in the "simple style" when the `index.ts` file (the contract entry point) includes a series of exported functions.

In this case, all exported functions become public contract methods.

```ts
// return the string 'hello world'
export function helloWorld(): string {}

// read the given key from account (contract) storage
export function read(key: string): string {}

// write the given value at the given key to account (contract) storage
export function write(key: string, value: string): string {}

// helper function used by read() and write() above; not exported, so it stays private to the contract
function storageReport(): string {}
```

### Singleton

We say that an AssemblyScript contract is written in the "singleton style" when the `index.ts` file (the contract entry point) has a single exported class (the name of the class doesn't matter) that is decorated with `@nearBindgen`.

In this case, all methods on the class become public contract methods unless marked `private`. Also, all instance variables are stored as a serialized instance of the class under a special storage key named `STATE`. AssemblyScript uses JSON for storage serialization (as opposed to Rust contracts, which use a custom binary serialization format called borsh).

```ts
@nearBindgen
export class Contract {

  // return the string 'hello world'
  helloWorld(): string {}

  // read the given key from account (contract) storage
  read(key: string): string {}

  // write the given value at the given key to account (contract) storage
  @mutateState()
  write(key: string, value: string): string {}

  // private helper method used by read() and write() above
  private storageReport(): string {}
}
```
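The blocks above only show method signatures. As a hedged sketch of what a simple-style `index.ts` body might look like (not the actual `src/simple/assembly/index.ts` in this repo), using the `storage`, `logging` and `Context` helpers from `near-sdk-as`:

```ts
// Hypothetical simple-style contract body; the real src/simple/assembly/index.ts differs.
import { storage, logging, Context } from "near-sdk-as";

// return the string 'hello world'
export function helloWorld(): string {
  return "hello world";
}

// read the given key from account (contract) storage
export function read(key: string): string {
  if (!storage.hasKey(key)) {
    return "key [" + key + "] not found. " + storageReport();
  }
  return "value [" + storage.getString(key)! + "] found for key [" + key + "]. " + storageReport();
}

// write the given value at the given key to account (contract) storage
export function write(key: string, value: string): string {
  storage.setString(key, value);
  logging.log("saved [" + value + "] under key [" + key + "]");
  return "saved. " + storageReport();
}

// helper used by read() and write(); not exported, so it is not a contract method
function storageReport(): string {
  return "storage used: " + Context.storageUsage.toString() + " bytes";
}
```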
## Usage

### Getting started

(see below for video recordings of each of the following steps)

INSTALL `NEAR CLI` first like this: `npm i -g near-cli`

1. clone this repo to a local folder
2. run `yarn`
3. run `./scripts/1.dev-deploy.sh`
4. run `./scripts/2.use-contract.sh`
5. run `./scripts/2.use-contract.sh` (yes, run it again to see changes)
6. run `./scripts/3.cleanup.sh`

### Videos

**`1.dev-deploy.sh`**

This video shows the build and deployment of the contract.

[![asciicast](https://asciinema.org/a/409575.svg)](https://asciinema.org/a/409575)

**`2.use-contract.sh`**

This video shows contract methods being called. You should run the script twice to see the effect it has on contract state.

[![asciicast](https://asciinema.org/a/409577.svg)](https://asciinema.org/a/409577)

**`3.cleanup.sh`**

This video shows the cleanup script running. Make sure you add the `BENEFICIARY` environment variable. The script will remind you if you forget.

```sh
export BENEFICIARY=<your-account-here> # this account receives contract account balance
```

[![asciicast](https://asciinema.org/a/409580.svg)](https://asciinema.org/a/409580)

### Other documentation

- See `./scripts/README.md` for documentation about the scripts
- Watch this video where Willem Wyndham walks us through refactoring a simple example of a NEAR smart contract written in AssemblyScript: https://youtu.be/QP7aveSqRPo

```
There are 2 "styles" of implementing AssemblyScript NEAR contracts:
- the contract interface can either be a collection of exported functions
- or the contract interface can be the methods of an exported class

We call the second style "Singleton" because there is only one instance of the class which is serialized to the blockchain storage. Rust contracts written for NEAR do this by default with the contract struct.

0:00 noise (to cut)
0:10 Welcome
0:59 Create project starting with "npm init"
2:20 Customize the project for AssemblyScript development
9:25 Import the Counter example and get unit tests passing
18:30 Adapt the Counter example to a Singleton style contract
21:49 Refactoring unit tests to access the new methods
24:45 Review and summary
```

## The file system

```sh
├── README.md                          # this file
├── as-pect.config.js                  # configuration for as-pect (AssemblyScript unit testing)
├── asconfig.json                      # configuration for AssemblyScript compiler (supports multiple contracts)
├── package.json                       # NodeJS project manifest
├── scripts
│   ├── 1.dev-deploy.sh                # helper: build and deploy contracts
│   ├── 2.use-contract.sh              # helper: call methods on ContractPromise
│   ├── 3.cleanup.sh                   # helper: delete build and deploy artifacts
│   └── README.md                      # documentation for helper scripts
├── src
│   ├── as_types.d.ts                  # AssemblyScript headers for type hints
│   ├── simple                         # Contract 1: "Simple example"
│   │   ├── __tests__
│   │   │   ├── as-pect.d.ts           # as-pect unit testing headers for type hints
│   │   │   └── index.unit.spec.ts     # unit tests for contract 1
│   │   ├── asconfig.json              # configuration for AssemblyScript compiler (one per contract)
│   │   └── assembly
│   │       └── index.ts               # contract code for contract 1
│   ├── singleton                      # Contract 2: "Singleton-style example"
│   │   ├── __tests__
│   │   │   ├── as-pect.d.ts           # as-pect unit testing headers for type hints
│   │   │   └── index.unit.spec.ts     # unit tests for contract 2
│   │   ├── asconfig.json              # configuration for AssemblyScript compiler (one per contract)
│   │   └── assembly
│   │       └── index.ts               # contract code for contract 2
│   ├── tsconfig.json                  # Typescript configuration
│   └── utils.ts                       # common contract utility functions
└── yarn.lock                          # project manifest version lock
```

You may clone this repo to get started OR create everything from scratch.

Please note that, in order to create the AssemblyScript and tests folder structure, you may use the command `asp --init`, which will create the following folders and files:

```
./assembly/
./assembly/tests/
./assembly/tests/example.spec.ts
./assembly/tests/as-pect.d.ts
```
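For completeness, here is a hedged sketch of a unit test in the style of the `index.unit.spec.ts` files listed above, using the `describe`/`it`/`expect` globals provided by as-pect and assuming an implementation along the lines of the simple-style sketch shown earlier; the assertions in this repo's actual tests will differ.

```ts
// Hypothetical as-pect unit-test sketch for the simple contract.
import { helloWorld, read, write } from "../assembly";

describe("simple contract", () => {
  it("returns the expected greeting", () => {
    expect(helloWorld()).toStrictEqual("hello world");
  });

  it("writes and then reads back a value", () => {
    write("some-key", "some value");
    // read() reports on the stored value; assert it mentions the key we wrote
    expect(read("some-key").includes("some-key")).toBeTruthy();
  });
});
```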
## Setting up your terminal

The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. They use the following setup:

```sh
# set your terminal up to have 2 windows, A and B like this:
┌─────────────────────────────────┬─────────────────────────────────┐
│                                 │                                 │
│                                 │                                 │
│                A                │                B                │
│                                 │                                 │
│                                 │                                 │
└─────────────────────────────────┴─────────────────────────────────┘
```

### Terminal **A**

*This window is used to compile, deploy and control the contract*

- Environment

```sh
export CONTRACT=        # depends on deployment
export OWNER=           # any account you control

# for example
# export CONTRACT=dev-1615190770786-2702449
# export OWNER=sherif.testnet
```

- Commands

_helper scripts_

```sh
1.dev-deploy.sh         # helper: build and deploy contracts
2.use-contract.sh       # helper: call methods on ContractPromise
3.cleanup.sh            # helper: delete build and deploy artifacts
```

### Terminal **B**

*This window is used to render the contract account storage*

- Environment

```sh
export CONTRACT=        # depends on deployment

# for example
# export CONTRACT=dev-1615190770786-2702449
```

- Commands

```sh
# monitor contract storage using near-account-utils
# https://github.com/near-examples/near-account-utils
watch -d -n 1 yarn storage $CONTRACT
```

---

## OS Support

### Linux

- The `watch` command is supported natively on Linux
- To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com)

### MacOS

- Consider `brew info visionmedia-watch` (or `brew install watch`)

### Windows

- Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linux-watch-command#191068)
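The `yarn storage $CONTRACT` watch loop above renders the contract account's raw state. If you are curious what it is reading under the hood, the sketch below is only an assumption about the mechanism (not how near-account-utils is actually implemented): it fetches the same data directly from the NEAR RPC `view_state` endpoint with near-api-js.

```ts
// Hypothetical Node.js sketch: dump a contract account's raw key/value storage.
import { providers } from "near-api-js";

async function dumpStorage(contractId: string): Promise<void> {
  // Assumes a near-api-js version whose JsonRpcProvider takes a connection-info object.
  const provider = new providers.JsonRpcProvider({ url: "https://rpc.testnet.near.org" });

  const state: any = await provider.query({
    request_type: "view_state",
    finality: "final",
    account_id: contractId,
    prefix_base64: "",              // empty prefix = return every key
  });

  for (const pair of state.values) {
    // keys and values are returned base64-encoded
    const key = Buffer.from(pair.key, "base64").toString();
    const value = Buffer.from(pair.value, "base64").toString();
    console.log(`${key} => ${value}`);
  }
}

dumpStorage(process.env.CONTRACT || "dev-1615190770786-2702449").catch(console.error);
```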
ing-waldoat_CodingHope
.cache 00 c62409ca3d86638b58edc3bca7c937.json 01 cb1e1cc5f502bc59e2ff7c009a6cff.json 02 960fe6f943f643207e5758e927c5b6.json 9e41a704a7ebc97c68c5f4606bac50.json 03 1d87e942581dd7e561b25b0f2147aa.json 3f6e2cef26ee95ce0d8ce1ac0f1568.json 04 906dcc03451a408faa77d4c88395b1.json a0acd6ca5825ebe43d51b0386200ca.json 06 aa9c8a63b28f7ce8a9bbd9161467af.json 07 519d75bf12307e3aefba12afb34b46.json 08 f37e15d5838d694264adcada02e40c.json 09 6602d2c121a02123d5b01559ce8668.json 0a b8462efdffdb6e4f81571a4da442c7.json bf5a5e582b602fc7682d24bb877a3e.json 0c 3b901a71d3ca9315fdfe78b72cefa4.json 4a10f686fb7e839026ec7592b8ca63.json 0d f1d3046c1c0c574658597953cbc148.json 0e 2f169078c5fba4d8650c868c2bcb97.json 62afcd387c83c3d96a8df38c9eea22.json 0f 397d838327963747032bf3fb8d2160.json 10 00e1735d20b2a6edc9811e83c38336.json cf1888206908f6fabfb66365dbf880.json 11 e3aa2f9d8312af31cc71f2203faf7b.json 13 a783949e0cc65df4a3d3e4bbdcfa64.json b073e4c170b60b75d9376aa4aa2e59.json 15 189afa10604f44a340e80a2f26f0c4.json d1106a76d14b24d463f1d3bf004415.json 17 0b273ab017923e18191d8d1fc6eb08.json 130a77c9902a61805cdc3fbffa9df8.json 2853453f765bda2d72695afe7bf3f9.json 376672b42ad14639d54c39940af44d.json fad1fa445f867a2699a874a02798cd.json 18 f073733906346b05f4336de7d907d0.json 19 bc8b1944f4e407ca968a3941643743.json 1a afef8b55f8edcedc793db8f61c5f7f.json d105a06ef7f4654a1d456bcb752801.json 1c 8ecf99eb9cc0d37c53207c5a71d93a.json 1d 0c0f7234b41ac1b0b251d21d1045cb.json 1e 0f1693a446ac29906dac7611cf702b.json 1f 6dafc1757ac83806a7732ba759d20d.json 21 22c47b4d07999b26ae5376d0516b50.json e25d8d1264e495dbfe61e4ca96b77c.json 23 1c65a50e0218b95b4ecb38ab6b8d1b.json 3264d7c38299ae120cf77f301db3c3.json 92ed859ce2c7df375909bf706628e3.json 24 6e8b56cfdf6274a0461631d1c9b8ad.json c556bcbb761973db6df07ae3223f62.json ef97651a5c52d765e346201499488d.json 25 70f87a387779afaed6f38b73bd57a5.json 26 33652225a7755c359f22f178c0f203.json 29 1518131a1eb1fa69317776bfcc67ba.json 22c09d4b38321cb50ee88fb3fd4204.json 2b 9ac783b7d8530353cff2033b252ee9.json 2c 426faae20c1f7f7cacce9ef822f6fd.json 4d6874015ed476d8138ac214e3f614.json cdd484b7b8f6527db034b601b93c55.json 2d 6b275fed6d5b928d10d8c3655868ad.json acfd124582d6977bfe555185f0489c.json 30 844ac3b5ee1970c64f8fcbb76c7537.json ce079b11b3cdc7d52aa8d81b9a9a37.json f9aea5b7182b11b26979724010c3cc.json 33 71dc346332b66c682604b8b2ec3652.json 9a019aebc0100641846e1abe512fe1.json 34 3225ba079aae2d1a1204f76a597192.json 6197687429fb3af0faf7093ef8d75b.json 855e496d91e6704180f0c56fe7c1d8.json 9cf411a73dfa377a83beebe4022c21.json d6abf5b898820202da735d4555ba77.json 35 b2c1e66c75fcab6854d68f770392f3.json 37 78a6c1a7446f09f3cfcdba1b9eef44.json 38 142f257ac02c766ef1800fd483a679.json fffdf559546e96ff79f009fe44d398.json 3a c1cb3239e2a6ac5979ca8c30862293.json 3b d6b87e2b3fd689de7bcfb1733b8356.json 3c 3a9c9f1cc25fc69a7fb8c48b71f50e.json a18beb1261657da6a4f04b9e1e7453.json 3f 3d017315d721f59b49bc7f5b675a64.json 41 d4c10c407d95b9edd5d626190b18fa.json 42 a945dbbeb824c6a02ea9071f8892f6.json 46 a60dfb95dff1763054ae46fda28bad.json 47 60e771c8af1ab5c2af2d98bbac56a6.json e54b57271c9089942677fe6744fef6.json 48 80350f576b81663ff4ea202d0b308e.json d7de76b8826dbfd4e5253c62e3a0ea.json 49 13821624de3fa415229a672956db39.json 19a7b346237a2fadba0fbf77a851b8.json e16e5726f7af86854c103883da705e.json 4c 1379250d2987248ea8a5c7b2475b63.json 4d 652f6e7bc89516af47c69464d038f8.json d1ae7a945cb3e1c6c644613e435b7d.json 4e 2ee7e5e9e0a46ffe9a7733ba0f5a31.json 4450d45f524d547d6226052ed6124b.json 4f 4b06afea49bab9b9693415f53a7502.json 50 
7aa610e165ee4513e430165b1f356a.json dd637ecccd53e9d12a5ac50dcb9d7d.json 53 f1639fb64f2db13b784cce68eb1950.json 55 5310480117d44f0ce42ce3925fd61f.json 9b4dad66e1b09548154a4acb3232c6.json ad552165db5a1698b32b356e38f13c.json 56 245ca93648ca833ebd9fdd695b6656.json 2a2d61ba25de2f6ed950bb9d843460.json f9809bc072923cc7fb10cc0f8b9ae7.json 58 4ca48e18b44bf19999369f95a1733a.json a1c46122f6ceb547149927d9786234.json 5a b36e7acd0502fc87a2a6e2cd4ffcb5.json b3af7ce1a060f1bfaa44ceaae75eb3.json 5b bb34b8583439e169e27a5485e8967d.json 5d 407e9139dee6758efebbdfbe655da5.json 82c473ec6e8bbb8625c31556e1f7d9.json 5e 34eba70a2e1df9479efa00a675c4f4.json 60 d40b0ede5f54b7c6c2e6b1bc57fa35.json ea651d51f752c3d671696c1eb6711b.json 61 f355dbb2704114982176db7026a856.json 63 07c6ef9137a59170d1614241a4128b.json 8c12bfb2e21cb12de5ca6b1183e685.json 64 439cb26026c5c27ef21883a6d9b326.json a72638ed1bd3be6dcf88b41ab3cc29.json 65 d509cbd24deaae9f965d6e5a1c0311.json 66 823a8c0e01b4d76fc2ecdc1b19499a.json b3ee56a76eeb4d0a9c4dfce9ddf533.json 67 a619e84e78350a8c57b9f8d67660a7.json fe891dcaf13b85fbc356c735f8b59b.json 68 153aa01d6a00d0fdb5190cdfacbc66.json ed7808301153863f7829e1c8106209.json 69 05f2ac18764f8f46166ce74d096a0d.json 16e17733c1e82e720b0defbfbf1757.json 6a 1b06e4635e5af3a3f19d8f10ad3f28.json 3e6c1a74a4ff7f56a20046ea92fc2c.json 6c ba24b7670b45d50ef2cd1dbc2330dd.json e1b3c233aa07e369a590f910b42108.json 6d 675fde9dfc85641eac1c5219dd47fe.json ec25233e21d96598c87f5ddf56276d.json 6f 42ec5992753747c5e48b7b26b9a2d9.json 70 6ecfba96c6f71a8c82139fa2bcbc07.json a40a7e4c107ea82937ef522e121de5.json 71 a11ad7b25ac1374297a9a21484e86e.json d431514f34f98d5bf236ffb6223b11.json 73 4698378898f44dc5683964eaa99ecb.json 75 5823edf7810eaddf55dab36606d15a.json 77 38ffa2f5c458d065f35354202388c8.json 78 0c653d771dab102f951c0e9e693cf4.json e3f305c68ffca540d5b51c9f4c09f6.json 7a ea024216e656ebb4e0767c76203958.json 7b 3ccdf5fa33a2c8c010bf30bec87fc0.json 7d b83ebb57b273592286b4d03226207d.json 7e 10e98eb0f324a84d30d12b0f7a5706.json 57f9531d8d97ac43e7a1a79a3cbfa7.json 9260c1b4538954085fc29b9b5d5f9e.json dd516baf19f88c9a633187016fe409.json e5be6ab46b733410f5404bb14f5f25.json 80 b661ce9f60a6b34dc23ba073ad7c20.json 82 08c5ce73c6defc36310da97b4d3e5d.json f556dbaeba20e23ed5cd154a318d06.json 83 a93c6359b18e88561815028eccef59.json 84 9775d4555adbdee7efb1f7199a4984.json 85 a9919c05c1a35d0f3779ff87a1e7ff.json 87 04b980540652bcc1a8630c962eb4cb.json fd75390dc00a81ff1a9f79254ef496.json 89 50ffd2546a7af06597e0f32d7fa622.json 8a a48c17b6a5634f19d7e25b6c7befc5.json b6aff1072904678c92d00f30d06160.json 8b fb9d624c0048fbe3524e6b91f158d2.json 8c c17934389a70266a7d55093f644228.json ddc7c946a8e3a626ba25346b32fc47.json 8d 83a6cb772452bcb84cfb92692e00da.json 8e c4c46d9986679d5750e3d51253fbed.json cf5792b4cbc222a6cbfa832337e00b.json 8f d8b22a355c22ba7a4c0fdca78f5c4d.json e52f269c7036c23bb96784e73d8761.json e6f112cde632675b7962f756f43a4d.json 92 80fe511a2488d41320b3c3c3503691.json 93 15cc85abaabfe848dc4328ae0ad3dc.json 94 935a5413eaa6cabba8a9ae5fc60f1f.json 9bdc1feef02574167a99db3539e7a8.json 95 7cf66c65c96ae61adbd8bd90889078.json 98 32c8c09c9b2c46a90186ec56b2fba6.json 355906b441a58a80049c4b16f74919.json 3d48f8cad5483e21053999b5fbcc5b.json 65315b7b6d214e1edc10228e90e200.json b3dbddeb53efbf78b111bad68c9eb6.json ce7629c39b952e3639a11700b97175.json eae6958a1e2bfa9fa154aa091936c4.json f8b7fd1208f369b8d6ec977a3b1b77.json 99 a613d93de83f92848ad514933fed5f.json b3b02ffebf89f3ed4d68b8ebcd2870.json 9a 0e8af0b2dd30924878ff3f9306e9fd.json 7a8947d40f4946beea36e299c38113.json 
dc474109f6791e8fe847f18214c210.json 9c 0f0d28801ff1ce903faf7bcf7fc57b.json 34c1b0cb8a2a34d9f411ffb2e6808e.json 5e8fa3641b361ea17e210af976c213.json 5e9e352d7904ed12c97811a396d45b.json 9e c82777eeeea8f71e38c0bcae5e4325.json 9f 365391b65030342655d5880b36238d.json a1 3d83ee9d17f11ee212ce2e4e8d04cb.json a3 b2c2937eb3a244b17ab601022e6bf5.json a4 14eca676f4e803b8079983d474ceff.json f33402bf8ec55cb0ad0fc50a039ded.json a5 c54a6b527ac1957777b308cb711d6a.json a6 0c9eeb097ba5ecfdf409c90387916b.json 20f61e1bf7624a75a3ac9905b83403.json c81da9607bda7b0c66d7b40fa9373f.json a7 f54737a6f84bc989dcec5f6ece9eaa.json a8 14df6c32eaf517087a354610c07e48.json 6039f5985d9da4f3233d32a4ee536b.json a9 1b4f8d043679a18927ea794a8799b5.json 516a94570ec2a9ebe9b8c4784a092b.json aa 20e6b096a31897e9216ef2462b1085.json ab 4d6e0d16239306d756ef0efc45cf0f.json 8effa370409b1435db447ef095362f.json ad b100d7f36ceab0ff892cc0bc990045.json ae 1e9615c69518fe2c697b83cf4bd7af.json 8dd0b16942a7a9518e83e1fe68f83e.json af 429c3e1b4e27bbec31f2233967b3da.json 46061f98d800c1b5bcad5a537602f9.json b1 7dd8351f3493852d307d41786a61aa.json b2 be9d163cb36f592ca20b79ff40393f.json b3 5ab6f5641bda6f86c9535a0d57895c.json b4 894ecb802174424b5426398448692a.json b5 536d61704cf32e12e308c8194274bc.json b6 23c460cd02f31743660190895f514c.json 37e292d1f5bbe212f1223c52597b6c.json b7 cb504ffa8bed8350c9ca1bdd616130.json b8 571e55259a21fe02e8fc0984920b39.json b9 d8fc062be3d0ab298ff389becf64cf.json ba f8f3369aad690a1099251224f665e5.json bb 341e61ef85531f4861416420398835.json 9636ee1eebc51c97f2a4d56c5948c3.json bc d216bc202aa2420b487101c20f0926.json bd 8b3a4187b25362760ad4374f3eb783.json c4f0eaaa29f2522c876db0621a86e8.json be 95bc0859016b69c1f272f8abf8870f.json c1 a42fa5c4b8bd26fbee34d7ab1d8dd4.json d2adc6b8d2ff941b37361b0f2a875c.json c2 4922baece42c3d537d69b2a15cc35c.json 81f87ccfa10bc14e02f6d290a7ae0c.json c3 69da8192697df7bbf29a4be9ebdf37.json c4 04e898b9aaa417bca572045d9309e9.json 13fc0e192d689a21d528b94ef21b36.json c5 2daf0c3245e4a8cd940c7d9b26512f.json c6 197dfbfa09ea95726b2a467ca2a8cb.json 32f22a438e58daa55568a3841f95b1.json 7426258961e54d604d2175c5274708.json ac57f954bc006d3de4765a7afa41f8.json c7 264a7192fa63514db7f19593948881.json 8ea0bfcd30106aa2fbb4f99eb55b76.json c9 900acbec6e16f6e98c2f33d8130d33.json e16de7faf1d4237e3974a21a3bce86.json f2e67370deb4b54c1fce2e31cb1f39.json ca 03758f82753d35e9e3fed4a54dd5d1.json 3b6f2e676236d87af10e8f1e77554b.json cd 5f856ef63e18a53a35fc6c4cf06c7b.json 83e97fb39c301a9dc2c45ec6e78946.json d4504fd266c98d5c2936b849e75e30.json d0 631541e202708adad4cdfe1202896c.json d485e0a973fc046e1a6a7d78025e7b.json d1 e74e41f85d0d1460a53f2326aaf317.json d2 8cfc7986ee3c13e8eeda62a7bf94a2.json e618a28410b4b9a2fe800009be4282.json d3 8aa355a9d138f88de4343ed6a98e9a.json b3df15902415a111e2350cc325ab55.json d4 4ada6ee3be4b677e4758c4002cc6d2.json 5cdaf888a0fd9c9395f02e72c78df8.json d6 55a0077bf68613d5863d02fa30d3b5.json d7 7a811f2158c1f9ff62f09996a42731.json da c03cf287e347cf5295bc145d586263.json db 363a8ae12b36d8f3b82f9e60520f3d.json dc 14fada2329a373cc10813334ee8518.json 9a31ba4bcbb36234a46f9394942815.json dd 0a1453d26ed4058cf178e0476da124.json 928ecba5fdab13d7093ad4e5d790b9.json a5aac9fe75175a39b639a14f9abbec.json de 0075baf3dca0b932e1c0403ee1e9d1.json df 54789c2568e7e44cfe9eed80765736.json 722a0ee9e8e9def846d46f09d79112.json e0 700fa7e8464522efc0f7c8e4e67974.json e82fc7e21b343714c34ad1374909c1.json e1 01d57e85df46e9e0f8363024843f22.json e6fee33295ecf2313f21e892298ebd.json e2 40c9675dba8c98c32372e79fb92521.json e4 6fa65de0061a2056cfaa7485284550.json 
e5 262c2bc372937bb1159f75548ed5c5.json fac2ecb00fcdc57216ec3db7d5abb0.json e8 1eae153efb844c0e2d34d5523da84c.json 22bb5fab772fd057ef299f5955f5f6.json e9 007249fcae8984f9d5a29d1837d348.json ea 7cc7e27fcabcb7b08be34ec040c045.json eb 03c45ccb113d1a863ad896a95e68aa.json ec 12973455a8e45f97d798bd08f52ad8.json 6a8fec6cb86c242808fc543a7677f0.json bfa1ac6278b2074eda1ec388441153.json ed 1a4075411a1c823bc31b86dbd3f9d7.json 4661aecdb675962af26402e7f54d08.json ee 613091b71e2f67bd243a78e47e4371.json ef 2db2bd8b10ba6e9093130859c119fd.json 4bd3526d73a826a4635ee5fa2307f7.json f0 1db671973f8aca678c54aea3073033.json d3db616bde5be6bf45bc14d207ca89.json f1 53f0ac497637dbe524493822b41cc7.json f2 1a920c00fb4cb0404e25f4f5422c22.json 6bc9c55ff92e65cb8ebac74f85c83d.json f3 3ea46a1b3e9ca22530e940127cdcab.json f4 479d9b1fe84500f66c916abdf1a839.json f5 b50ba85be6be6e85dd8c8e726f23d8.json f6 c88c1bdff5008cb151f40380a79fc7.json f8 e0c3864ad1ff7a0f962dc93eda6ae0.json f9 08a2bc72a52ce1c902b8978239675f.json 5b52afc19c8a0ef8b792cb72318951.json 6f33362a10ca0a27fc9dfc48f0b09f.json fa 521b09690a0dd8da40a97e9510afee.json f024fab0ae8ffac74ba45f0262ad3b.json fb 07a4c1485b44b192d3e1a727f90e37.json 5997ed272edf37771076550f6abf05.json ff 382b35731ba16ad6c4c989ff69189e.json 610b93e319c49c7fa6fc62a57871c5.json e0672e475dfc5f09eb5daed5ae95b4.json File Name: style.css import Fonts import Files skeleton .gitpod.yml README.md contract README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts main.spec.ts as_types.d.ts index.ts models.ts tsconfig.json compile.js node_modules @as-covers assembly CONTRIBUTING.md README.md index.ts package.json tsconfig.json core CONTRIBUTING.md README.md package.json glue README.md lib index.d.ts index.js package.json transform README.md lib index.d.ts index.js util.d.ts util.js node_modules visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js toString.d.ts toString.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformRange.d.ts transformRange.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json package.json @as-pect assembly README.md assembly index.ts internal Actual.ts Expectation.ts Expected.ts Reflect.ts ReflectedValueType.ts Test.ts assert.ts call.ts comparison toIncludeComparison.ts toIncludeEqualComparison.ts log.ts noOp.ts package.json types as-pect.d.ts as-pect.portable.d.ts env.d.ts cli README.md init as-pect.config.js env.d.ts example.spec.ts init-types.d.ts portable-types.d.ts lib as-pect.cli.amd.d.ts as-pect.cli.amd.js help.d.ts help.js index.d.ts index.js init.d.ts init.js portable.d.ts portable.js run.d.ts run.js test.d.ts test.js types.d.ts types.js util CommandLineArg.d.ts CommandLineArg.js IConfiguration.d.ts IConfiguration.js asciiArt.d.ts asciiArt.js collectReporter.d.ts collectReporter.js getTestEntryFiles.d.ts getTestEntryFiles.js removeFile.d.ts removeFile.js strings.d.ts strings.js writeFile.d.ts writeFile.js worklets ICommand.d.ts ICommand.js compiler.d.ts compiler.js package.json core README.md lib as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js reporter CombinationReporter.d.ts CombinationReporter.js EmptyReporter.d.ts EmptyReporter.js 
IReporter.d.ts IReporter.js SummaryReporter.d.ts SummaryReporter.js VerboseReporter.d.ts VerboseReporter.js test IWarning.d.ts IWarning.js TestContext.d.ts TestContext.js TestNode.d.ts TestNode.js transform assemblyscript.d.ts assemblyscript.js createAddReflectedValueKeyValuePairsMember.d.ts createAddReflectedValueKeyValuePairsMember.js createGenericTypeParameter.d.ts createGenericTypeParameter.js createStrictEqualsMember.d.ts createStrictEqualsMember.js emptyTransformer.d.ts emptyTransformer.js hash.d.ts hash.js index.d.ts index.js util IAspectExports.d.ts IAspectExports.js IWriteable.d.ts IWriteable.js ReflectedValue.d.ts ReflectedValue.js TestNodeType.d.ts TestNodeType.js rTrace.d.ts rTrace.js stringifyReflectedValue.d.ts stringifyReflectedValue.js timeDifference.d.ts timeDifference.js wasmTools.d.ts wasmTools.js package.json csv-reporter index.ts lib as-pect.csv-reporter.amd.d.ts as-pect.csv-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json json-reporter index.ts lib as-pect.json-reporter.amd.d.ts as-pect.json-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json snapshots __tests__ snapshot.spec.ts jest.config.js lib Snapshot.d.ts Snapshot.js SnapshotDiff.d.ts SnapshotDiff.js SnapshotDiffResult.d.ts SnapshotDiffResult.js as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js parser grammar.d.ts grammar.js package.json src Snapshot.ts SnapshotDiff.ts SnapshotDiffResult.ts index.ts parser grammar.ts tsconfig.json @assemblyscript loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json @babel code-frame README.md lib index.js package.json helper-validator-identifier README.md lib identifier.js index.js keyword.js package.json scripts generate-identifier-regex.js highlight README.md lib index.js node_modules ansi-styles index.js package.json readme.md chalk index.js package.json readme.md templates.js types index.d.ts color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name .eslintrc.json README.md index.js package.json test.js escape-string-regexp index.js package.json readme.md has-flag index.js package.json readme.md supports-color browser.js index.js package.json readme.md package.json @eslint eslintrc CHANGELOG.md README.md conf config-schema.js environments.js eslint-all.js eslint-recommended.js lib cascading-config-array-factory.js config-array-factory.js config-array config-array.js config-dependency.js extracted-config.js ignore-pattern.js index.js override-tester.js flat-compat.js index.js shared ajv.js config-ops.js config-validator.js deprecation-warnings.js naming.js relative-module-resolver.js types.js node_modules ajv .tonic_example.js README.md dist ajv.bundle.js ajv.min.js lib ajv.d.ts ajv.js cache.js compile async.js equal.js error_classes.js formats.js index.js resolve.js rules.js schema_obj.js ucs2length.js util.js data.js definition_schema.js dotjs README.md _limit.js _limitItems.js _limitLength.js _limitProperties.js allOf.js anyOf.js comment.js const.js contains.js custom.js dependencies.js enum.js format.js if.js index.js items.js multipleOf.js not.js oneOf.js pattern.js properties.js propertyNames.js ref.js required.js uniqueItems.js validate.js keyword.js refs data.json json-schema-draft-04.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json package.json scripts .eslintrc.yml bundle.js compile-dots.js json-schema-traverse .eslintrc.yml .travis.yml README.md index.js package.json spec .eslintrc.yml fixtures schema.js 
index.spec.js package.json @humanwhocodes config-array README.md api.js package.json object-schema .eslintrc.js .travis.yml README.md package.json src index.js merge-strategy.js object-schema.js validation-strategy.js tests merge-strategy.js object-schema.js validation-strategy.js acorn-jsx README.md index.d.ts index.js package.json xhtml.js acorn CHANGELOG.md README.md dist acorn.d.ts acorn.js acorn.mjs.d.ts bin.js package.json ajv .runkit_example.js README.md dist 2019.d.ts 2019.js 2020.d.ts 2020.js ajv.d.ts ajv.js compile codegen code.d.ts code.js index.d.ts index.js scope.d.ts scope.js errors.d.ts errors.js index.d.ts index.js jtd parse.d.ts parse.js serialize.d.ts serialize.js types.d.ts types.js names.d.ts names.js ref_error.d.ts ref_error.js resolve.d.ts resolve.js rules.d.ts rules.js util.d.ts util.js validate applicability.d.ts applicability.js boolSchema.d.ts boolSchema.js dataType.d.ts dataType.js defaults.d.ts defaults.js index.d.ts index.js keyword.d.ts keyword.js subschema.d.ts subschema.js core.d.ts core.js jtd.d.ts jtd.js refs data.json json-schema-2019-09 index.d.ts index.js meta applicator.json content.json core.json format.json meta-data.json validation.json schema.json json-schema-2020-12 index.d.ts index.js meta applicator.json content.json core.json format-annotation.json meta-data.json unevaluated.json validation.json schema.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json jtd-schema.d.ts jtd-schema.js runtime equal.d.ts equal.js parseJson.d.ts parseJson.js quote.d.ts quote.js timestamp.d.ts timestamp.js ucs2length.d.ts ucs2length.js validation_error.d.ts validation_error.js standalone index.d.ts index.js instance.d.ts instance.js types index.d.ts index.js json-schema.d.ts json-schema.js jtd-schema.d.ts jtd-schema.js vocabularies applicator additionalItems.d.ts additionalItems.js additionalProperties.d.ts additionalProperties.js allOf.d.ts allOf.js anyOf.d.ts anyOf.js contains.d.ts contains.js dependencies.d.ts dependencies.js dependentSchemas.d.ts dependentSchemas.js if.d.ts if.js index.d.ts index.js items.d.ts items.js items2020.d.ts items2020.js not.d.ts not.js oneOf.d.ts oneOf.js patternProperties.d.ts patternProperties.js prefixItems.d.ts prefixItems.js properties.d.ts properties.js propertyNames.d.ts propertyNames.js thenElse.d.ts thenElse.js code.d.ts code.js core id.d.ts id.js index.d.ts index.js ref.d.ts ref.js discriminator index.d.ts index.js types.d.ts types.js draft2020.d.ts draft2020.js draft7.d.ts draft7.js dynamic dynamicAnchor.d.ts dynamicAnchor.js dynamicRef.d.ts dynamicRef.js index.d.ts index.js recursiveAnchor.d.ts recursiveAnchor.js recursiveRef.d.ts recursiveRef.js errors.d.ts errors.js format format.d.ts format.js index.d.ts index.js jtd discriminator.d.ts discriminator.js elements.d.ts elements.js enum.d.ts enum.js error.d.ts error.js index.d.ts index.js metadata.d.ts metadata.js nullable.d.ts nullable.js optionalProperties.d.ts optionalProperties.js properties.d.ts properties.js ref.d.ts ref.js type.d.ts type.js union.d.ts union.js values.d.ts values.js metadata.d.ts metadata.js next.d.ts next.js unevaluated index.d.ts index.js unevaluatedItems.d.ts unevaluatedItems.js unevaluatedProperties.d.ts unevaluatedProperties.js validation const.d.ts const.js dependentRequired.d.ts dependentRequired.js enum.d.ts enum.js index.d.ts index.js limitContains.d.ts limitContains.js limitItems.d.ts limitItems.js limitLength.d.ts limitLength.js limitNumber.d.ts limitNumber.js limitProperties.d.ts limitProperties.js 
multipleOf.d.ts multipleOf.js pattern.d.ts pattern.js required.d.ts required.js uniqueItems.d.ts uniqueItems.js lib 2019.ts 2020.ts ajv.ts compile codegen code.ts index.ts scope.ts errors.ts index.ts jtd parse.ts serialize.ts types.ts names.ts ref_error.ts resolve.ts rules.ts util.ts validate applicability.ts boolSchema.ts dataType.ts defaults.ts index.ts keyword.ts subschema.ts core.ts jtd.ts refs data.json json-schema-2019-09 index.ts meta applicator.json content.json core.json format.json meta-data.json validation.json schema.json json-schema-2020-12 index.ts meta applicator.json content.json core.json format-annotation.json meta-data.json unevaluated.json validation.json schema.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json jtd-schema.ts runtime equal.ts parseJson.ts quote.ts timestamp.ts ucs2length.ts validation_error.ts standalone index.ts instance.ts types index.ts json-schema.ts jtd-schema.ts vocabularies applicator additionalItems.ts additionalProperties.ts allOf.ts anyOf.ts contains.ts dependencies.ts dependentSchemas.ts if.ts index.ts items.ts items2020.ts not.ts oneOf.ts patternProperties.ts prefixItems.ts properties.ts propertyNames.ts thenElse.ts code.ts core id.ts index.ts ref.ts discriminator index.ts types.ts draft2020.ts draft7.ts dynamic dynamicAnchor.ts dynamicRef.ts index.ts recursiveAnchor.ts recursiveRef.ts errors.ts format format.ts index.ts jtd discriminator.ts elements.ts enum.ts error.ts index.ts metadata.ts nullable.ts optionalProperties.ts properties.ts ref.ts type.ts union.ts values.ts metadata.ts next.ts unevaluated index.ts unevaluatedItems.ts unevaluatedProperties.ts validation const.ts dependentRequired.ts enum.ts index.ts limitContains.ts limitItems.ts limitLength.ts limitNumber.ts limitProperties.ts multipleOf.ts pattern.ts required.ts uniqueItems.ts package.json ansi-colors README.md index.js package.json symbols.js types index.d.ts ansi-regex index.d.ts index.js package.json readme.md ansi-styles index.d.ts index.js package.json readme.md argparse CHANGELOG.md README.md index.js lib action.js action append.js append constant.js count.js help.js store.js store constant.js false.js true.js subparsers.js version.js action_container.js argparse.js argument error.js exclusive.js group.js argument_parser.js const.js help added_formatters.js formatter.js namespace.js utils.js package.json as-bignum README.md assembly __tests__ as-pect.d.ts i128.spec.as.ts safe_u128.spec.as.ts u128.spec.as.ts u256.spec.as.ts utils.ts fixed fp128.ts fp256.ts index.ts safe fp128.ts fp256.ts types.ts globals.ts index.ts integer i128.ts i256.ts index.ts safe i128.ts i256.ts i64.ts index.ts u128.ts u256.ts u64.ts u128.ts u256.ts tsconfig.json utils.ts package.json asbuild README.md dist cli.d.ts cli.js commands build.d.ts build.js fmt.d.ts fmt.js index.d.ts index.js init cmd.d.ts cmd.js files asconfigJson.d.ts asconfigJson.js aspecConfig.d.ts aspecConfig.js assembly_files.d.ts assembly_files.js eslintConfig.d.ts eslintConfig.js gitignores.d.ts gitignores.js index.d.ts index.js indexJs.d.ts indexJs.js packageJson.d.ts packageJson.js test_files.d.ts test_files.js index.d.ts index.js interfaces.d.ts interfaces.js run.d.ts run.js test.d.ts test.js index.d.ts index.js main.d.ts main.js utils.d.ts utils.js index.js node_modules cliui CHANGELOG.md LICENSE.txt README.md index.js package.json wrap-ansi index.js package.json readme.md y18n CHANGELOG.md README.md index.js package.json yargs-parser CHANGELOG.md LICENSE.txt README.md index.js lib 
tokenize-arg-string.js package.json yargs CHANGELOG.md README.md build lib apply-extends.d.ts apply-extends.js argsert.d.ts argsert.js command.d.ts command.js common-types.d.ts common-types.js completion-templates.d.ts completion-templates.js completion.d.ts completion.js is-promise.d.ts is-promise.js levenshtein.d.ts levenshtein.js middleware.d.ts middleware.js obj-filter.d.ts obj-filter.js parse-command.d.ts parse-command.js process-argv.d.ts process-argv.js usage.d.ts usage.js validation.d.ts validation.js yargs.d.ts yargs.js yerror.d.ts yerror.js index.js locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json yargs.js package.json assemblyscript-json .eslintrc.js .travis.yml README.md assembly JSON.ts decoder.ts encoder.ts index.ts tsconfig.json util index.ts index.js package.json temp-docs README.md classes decoderstate.md json.arr.md json.bool.md json.float.md json.integer.md json.null.md json.num.md json.obj.md json.str.md json.value.md jsondecoder.md jsonencoder.md jsonhandler.md throwingjsonhandler.md modules json.md assemblyscript-regex .eslintrc.js .github workflows benchmark.yml release.yml test.yml README.md as-pect.config.js asconfig.empty.json asconfig.json assembly __spec_tests__ generated.spec.ts __tests__ alterations.spec.ts as-pect.d.ts boundary-assertions.spec.ts capture-group.spec.ts character-classes.spec.ts character-sets.spec.ts characters.ts empty.ts quantifiers.spec.ts range-quantifiers.spec.ts regex.spec.ts utils.ts char.ts env.ts index.ts nfa matcher.ts nfa.ts types.ts walker.ts parser node.ts parser.ts string-iterator.ts walker.ts regexp.ts tsconfig.json util.ts benchmark benchmark.js package.json spec test-generator.js ts index.ts tsconfig.json assemblyscript-temporal .github workflows node.js.yml release.yml .vscode launch.json README.md as-pect.config.js asconfig.empty.json asconfig.json assembly __tests__ README.md as-pect.d.ts date.spec.ts duration.spec.ts empty.ts plaindate.spec.ts plaindatetime.spec.ts plainmonthday.spec.ts plaintime.spec.ts plainyearmonth.spec.ts timezone.spec.ts zoneddatetime.spec.ts constants.ts date.ts duration.ts enums.ts env.ts index.ts instant.ts now.ts plaindate.ts plaindatetime.ts plainmonthday.ts plaintime.ts plainyearmonth.ts timezone.ts tsconfig.json tz __tests__ index.spec.ts rule.spec.ts zone.spec.ts iana.ts index.ts rule.ts zone.ts utils.ts zoneddatetime.ts development.md package.json tzdb README.md iana theory.html zoneinfo2tdf.pl assemblyscript README.md cli README.md asc.d.ts asc.js asc.json shim README.md fs.js path.js process.js transform.d.ts transform.js util colors.d.ts colors.js find.d.ts find.js mkdirp.d.ts mkdirp.js options.d.ts options.js utf8.d.ts utf8.js dist asc.js assemblyscript.d.ts assemblyscript.js sdk.js index.d.ts index.js lib loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json rtrace README.md bin rtplot.js index.d.ts index.js package.json umd index.d.ts index.js package.json package-lock.json package.json std README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings Date.ts Math.ts Reflect.ts asyncify.ts console.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts polyfills.ts process.ts reference.ts regexp.ts 
rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts set.ts shared feature.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts uri.ts util casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts uri.ts vector.ts wasi index.ts portable.json portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json astral-regex index.d.ts index.js package.json readme.md axios CHANGELOG.md README.md UPGRADE_GUIDE.md dist axios.js axios.min.js index.d.ts index.js lib adapters README.md http.js xhr.js axios.js cancel Cancel.js CancelToken.js isCancel.js core Axios.js InterceptorManager.js README.md buildFullPath.js createError.js dispatchRequest.js enhanceError.js mergeConfig.js settle.js transformData.js defaults.js helpers README.md bind.js buildURL.js combineURLs.js cookies.js deprecatedMethod.js isAbsoluteURL.js isURLSameOrigin.js normalizeHeaderName.js parseHeaders.js spread.js utils.js package.json balanced-match .github FUNDING.yml LICENSE.md README.md index.js package.json base-x LICENSE.md README.md package.json src index.d.ts index.js binary-install README.md example binary.js package.json run.js index.js package.json src binary.js binaryen README.md index.d.ts package-lock.json package.json wasm.d.ts bn.js CHANGELOG.md README.md lib bn.js package.json brace-expansion README.md index.js package.json bs58 CHANGELOG.md README.md index.js package.json callsites index.d.ts index.js package.json readme.md camelcase index.d.ts index.js package.json readme.md chalk index.d.ts package.json readme.md source index.js templates.js util.js chownr README.md chownr.js package.json cliui CHANGELOG.md LICENSE.txt README.md build lib index.js string-utils.js package.json color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name README.md index.js package.json commander CHANGELOG.md Readme.md index.js package.json typings index.d.ts concat-map .travis.yml example map.js index.js package.json test map.js cross-spawn CHANGELOG.md README.md index.js lib enoent.js parse.js util escape.js readShebang.js resolveCommand.js package.json csv-stringify README.md lib browser index.js sync.js es5 index.d.ts index.js sync.d.ts sync.js index.d.ts index.js sync.d.ts sync.js package.json debug README.md package.json src browser.js common.js index.js node.js decamelize index.js package.json readme.md deep-is .travis.yml example cmp.js index.js package.json test NaN.js cmp.js neg-vs-pos-0.js diff CONTRIBUTING.md README.md dist diff.js lib convert dmp.js xml.js diff array.js base.js character.js css.js json.js line.js sentence.js word.js index.es6.js index.js patch apply.js create.js merge.js parse.js util array.js distance-iterator.js params.js package.json release-notes.md runtime.js discontinuous-range .travis.yml README.md index.js package.json test main-test.js doctrine CHANGELOG.md README.md lib doctrine.js typed.js utility.js package.json emoji-regex LICENSE-MIT.txt README.md es2015 index.js text.js index.d.ts index.js package.json text.js enquirer CHANGELOG.md README.md index.d.ts index.js lib ansi.js combos.js completer.js interpolate.js keypress.js placeholder.js prompt.js prompts autocomplete.js basicauth.js confirm.js editable.js form.js index.js input.js invisible.js list.js multiselect.js numeral.js password.js quiz.js scale.js select.js snippet.js sort.js survey.js 
text.js toggle.js render.js roles.js state.js styles.js symbols.js theme.js timer.js types array.js auth.js boolean.js index.js number.js string.js utils.js package.json env-paths index.d.ts index.js package.json readme.md escalade dist index.js index.d.ts package.json readme.md sync index.d.ts index.js escape-string-regexp index.d.ts index.js package.json readme.md eslint-scope CHANGELOG.md README.md lib definition.js index.js pattern-visitor.js reference.js referencer.js scope-manager.js scope.js variable.js package.json eslint-utils README.md index.js node_modules eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json package.json eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json eslint CHANGELOG.md README.md bin eslint.js conf category-list.json config-schema.js default-cli-options.js eslint-all.js eslint-recommended.js replacements.json lib api.js cli-engine cli-engine.js file-enumerator.js formatters checkstyle.js codeframe.js compact.js html.js jslint-xml.js json-with-metadata.js json.js junit.js stylish.js table.js tap.js unix.js visualstudio.js hash.js index.js lint-result-cache.js load-rules.js xml-escape.js cli.js config default-config.js flat-config-array.js flat-config-schema.js rule-validator.js eslint eslint.js index.js init autoconfig.js config-file.js config-initializer.js config-rule.js npm-utils.js source-code-utils.js linter apply-disable-directives.js code-path-analysis code-path-analyzer.js code-path-segment.js code-path-state.js code-path.js debug-helpers.js fork-context.js id-generator.js config-comment-parser.js index.js interpolate.js linter.js node-event-generator.js report-translator.js rule-fixer.js rules.js safe-emitter.js source-code-fixer.js timing.js options.js rule-tester index.js rule-tester.js rules accessor-pairs.js array-bracket-newline.js array-bracket-spacing.js array-callback-return.js array-element-newline.js arrow-body-style.js arrow-parens.js arrow-spacing.js block-scoped-var.js block-spacing.js brace-style.js callback-return.js camelcase.js capitalized-comments.js class-methods-use-this.js comma-dangle.js comma-spacing.js comma-style.js complexity.js computed-property-spacing.js consistent-return.js consistent-this.js constructor-super.js curly.js default-case-last.js default-case.js default-param-last.js dot-location.js dot-notation.js eol-last.js eqeqeq.js for-direction.js func-call-spacing.js func-name-matching.js func-names.js func-style.js function-call-argument-newline.js function-paren-newline.js generator-star-spacing.js getter-return.js global-require.js grouped-accessor-pairs.js guard-for-in.js handle-callback-err.js id-blacklist.js id-denylist.js id-length.js id-match.js implicit-arrow-linebreak.js indent-legacy.js indent.js index.js init-declarations.js jsx-quotes.js key-spacing.js keyword-spacing.js line-comment-position.js linebreak-style.js lines-around-comment.js lines-around-directive.js lines-between-class-members.js max-classes-per-file.js max-depth.js max-len.js max-lines-per-function.js max-lines.js max-nested-callbacks.js max-params.js max-statements-per-line.js max-statements.js multiline-comment-style.js multiline-ternary.js new-cap.js new-parens.js newline-after-var.js newline-before-return.js newline-per-chained-call.js no-alert.js no-array-constructor.js no-async-promise-executor.js no-await-in-loop.js no-bitwise.js no-buffer-constructor.js no-caller.js no-case-declarations.js no-catch-shadow.js no-class-assign.js no-compare-neg-zero.js 
no-cond-assign.js no-confusing-arrow.js no-console.js no-const-assign.js no-constant-condition.js no-constructor-return.js no-continue.js no-control-regex.js no-debugger.js no-delete-var.js no-div-regex.js no-dupe-args.js no-dupe-class-members.js no-dupe-else-if.js no-dupe-keys.js no-duplicate-case.js no-duplicate-imports.js no-else-return.js no-empty-character-class.js no-empty-function.js no-empty-pattern.js no-empty.js no-eq-null.js no-eval.js no-ex-assign.js no-extend-native.js no-extra-bind.js no-extra-boolean-cast.js no-extra-label.js no-extra-parens.js no-extra-semi.js no-fallthrough.js no-floating-decimal.js no-func-assign.js no-global-assign.js no-implicit-coercion.js no-implicit-globals.js no-implied-eval.js no-import-assign.js no-inline-comments.js no-inner-declarations.js no-invalid-regexp.js no-invalid-this.js no-irregular-whitespace.js no-iterator.js no-label-var.js no-labels.js no-lone-blocks.js no-lonely-if.js no-loop-func.js no-loss-of-precision.js no-magic-numbers.js no-misleading-character-class.js no-mixed-operators.js no-mixed-requires.js no-mixed-spaces-and-tabs.js no-multi-assign.js no-multi-spaces.js no-multi-str.js no-multiple-empty-lines.js no-native-reassign.js no-negated-condition.js no-negated-in-lhs.js no-nested-ternary.js no-new-func.js no-new-object.js no-new-require.js no-new-symbol.js no-new-wrappers.js no-new.js no-nonoctal-decimal-escape.js no-obj-calls.js no-octal-escape.js no-octal.js no-param-reassign.js no-path-concat.js no-plusplus.js no-process-env.js no-process-exit.js no-promise-executor-return.js no-proto.js no-prototype-builtins.js no-redeclare.js no-regex-spaces.js no-restricted-exports.js no-restricted-globals.js no-restricted-imports.js no-restricted-modules.js no-restricted-properties.js no-restricted-syntax.js no-return-assign.js no-return-await.js no-script-url.js no-self-assign.js no-self-compare.js no-sequences.js no-setter-return.js no-shadow-restricted-names.js no-shadow.js no-spaced-func.js no-sparse-arrays.js no-sync.js no-tabs.js no-template-curly-in-string.js no-ternary.js no-this-before-super.js no-throw-literal.js no-trailing-spaces.js no-undef-init.js no-undef.js no-undefined.js no-underscore-dangle.js no-unexpected-multiline.js no-unmodified-loop-condition.js no-unneeded-ternary.js no-unreachable-loop.js no-unreachable.js no-unsafe-finally.js no-unsafe-negation.js no-unsafe-optional-chaining.js no-unused-expressions.js no-unused-labels.js no-unused-vars.js no-use-before-define.js no-useless-backreference.js no-useless-call.js no-useless-catch.js no-useless-computed-key.js no-useless-concat.js no-useless-constructor.js no-useless-escape.js no-useless-rename.js no-useless-return.js no-var.js no-void.js no-warning-comments.js no-whitespace-before-property.js no-with.js nonblock-statement-body-position.js object-curly-newline.js object-curly-spacing.js object-property-newline.js object-shorthand.js one-var-declaration-per-line.js one-var.js operator-assignment.js operator-linebreak.js padded-blocks.js padding-line-between-statements.js prefer-arrow-callback.js prefer-const.js prefer-destructuring.js prefer-exponentiation-operator.js prefer-named-capture-group.js prefer-numeric-literals.js prefer-object-spread.js prefer-promise-reject-errors.js prefer-reflect.js prefer-regex-literals.js prefer-rest-params.js prefer-spread.js prefer-template.js quote-props.js quotes.js radix.js require-atomic-updates.js require-await.js require-jsdoc.js require-unicode-regexp.js require-yield.js rest-spread-spacing.js semi-spacing.js semi-style.js 
semi.js sort-imports.js sort-keys.js sort-vars.js space-before-blocks.js space-before-function-paren.js space-in-parens.js space-infix-ops.js space-unary-ops.js spaced-comment.js strict.js switch-colon-spacing.js symbol-description.js template-curly-spacing.js template-tag-spacing.js unicode-bom.js use-isnan.js utils ast-utils.js fix-tracker.js keywords.js lazy-loading-rule-map.js patterns letters.js unicode index.js is-combining-character.js is-emoji-modifier.js is-regional-indicator-symbol.js is-surrogate-pair.js valid-jsdoc.js valid-typeof.js vars-on-top.js wrap-iife.js wrap-regex.js yield-star-spacing.js yoda.js shared ajv.js ast-utils.js config-validator.js deprecation-warnings.js logging.js relative-module-resolver.js runtime-info.js string-utils.js traverser.js types.js source-code index.js source-code.js token-store backward-token-comment-cursor.js backward-token-cursor.js cursor.js cursors.js decorative-cursor.js filter-cursor.js forward-token-comment-cursor.js forward-token-cursor.js index.js limit-cursor.js padded-token-cursor.js skip-cursor.js utils.js messages all-files-ignored.js extend-config-missing.js failed-to-read-json.js file-not-found.js no-config-found.js plugin-conflict.js plugin-invalid.js plugin-missing.js print-config-with-directory-path.js whitespace-found.js node_modules ajv .tonic_example.js README.md dist ajv.bundle.js ajv.min.js lib ajv.d.ts ajv.js cache.js compile async.js equal.js error_classes.js formats.js index.js resolve.js rules.js schema_obj.js ucs2length.js util.js data.js definition_schema.js dotjs README.md _limit.js _limitItems.js _limitLength.js _limitProperties.js allOf.js anyOf.js comment.js const.js contains.js custom.js dependencies.js enum.js format.js if.js index.js items.js multipleOf.js not.js oneOf.js pattern.js properties.js propertyNames.js ref.js required.js uniqueItems.js validate.js keyword.js refs data.json json-schema-draft-04.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json package.json scripts .eslintrc.yml bundle.js compile-dots.js json-schema-traverse .eslintrc.yml .travis.yml README.md index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js package.json espree CHANGELOG.md README.md espree.js lib ast-node-types.js espree.js features.js options.js token-translator.js visitor-keys.js node_modules eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json package.json esprima README.md bin esparse.js esvalidate.js dist esprima.js package.json esquery README.md dist esquery.esm.js esquery.esm.min.js esquery.js esquery.lite.js esquery.lite.min.js esquery.min.js license.txt node_modules estraverse README.md estraverse.js gulpfile.js package.json package.json parser.js esrecurse README.md esrecurse.js gulpfile.babel.js node_modules estraverse README.md estraverse.js gulpfile.js package.json package.json estraverse README.md estraverse.js gulpfile.js package.json esutils README.md lib ast.js code.js keyword.js utils.js package.json fast-deep-equal README.md es6 index.d.ts index.js react.d.ts react.js index.d.ts index.js package.json react.d.ts react.js fast-json-stable-stringify .eslintrc.yml .github FUNDING.yml .travis.yml README.md benchmark index.js test.json example key_cmp.js nested.js str.js value_cmp.js index.d.ts index.js package.json test cmp.js nested.js str.js to-json.js fast-levenshtein LICENSE.md README.md levenshtein.js package.json file-entry-cache README.md cache.js changelog.md package.json find-up index.d.ts index.js package.json readme.md 
flat-cache README.md changelog.md package.json src cache.js del.js utils.js flatted .github FUNDING.yml README.md SPECS.md cjs index.js package.json es.js esm index.js index.js min.js package.json php flatted.php types.d.ts follow-redirects README.md http.js https.js index.js node_modules debug .coveralls.yml .travis.yml CHANGELOG.md README.md karma.conf.js node.js package.json src browser.js debug.js index.js node.js ms index.js license.md package.json readme.md package.json fs-minipass README.md index.js package.json fs.realpath README.md index.js old.js package.json function-bind .jscs.json .travis.yml README.md implementation.js index.js package.json test index.js functional-red-black-tree README.md bench test.js package.json rbtree.js test test.js get-caller-file LICENSE.md README.md index.d.ts index.js package.json glob-parent CHANGELOG.md README.md index.js package.json glob README.md common.js glob.js package.json sync.js globals globals.json index.d.ts index.js package.json readme.md has-flag index.d.ts index.js package.json readme.md has README.md package.json src index.js test index.js hasurl README.md index.js package.json ignore CHANGELOG.md README.md index.d.ts index.js legacy.js package.json import-fresh index.d.ts index.js package.json readme.md imurmurhash README.md imurmurhash.js imurmurhash.min.js package.json inflight README.md inflight.js package.json inherits README.md inherits.js inherits_browser.js package.json interpret README.md index.js mjs-stub.js package.json is-core-module CHANGELOG.md README.md core.json index.js package.json test index.js is-extglob README.md index.js package.json is-fullwidth-code-point index.d.ts index.js package.json readme.md is-glob README.md index.js package.json isarray .travis.yml README.md component.json index.js package.json test.js isexe README.md index.js mode.js package.json test basic.js windows.js isobject README.md index.js package.json js-base64 LICENSE.md README.md base64.d.ts base64.js package.json js-tokens CHANGELOG.md README.md index.js package.json js-yaml CHANGELOG.md README.md bin js-yaml.js dist js-yaml.js js-yaml.min.js index.js lib js-yaml.js js-yaml common.js dumper.js exception.js loader.js mark.js schema.js schema core.js default_full.js default_safe.js failsafe.js json.js type.js type binary.js bool.js float.js int.js js function.js regexp.js undefined.js map.js merge.js null.js omap.js pairs.js seq.js set.js str.js timestamp.js package.json json-schema-traverse .eslintrc.yml .github FUNDING.yml workflows build.yml publish.yml README.md index.d.ts index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js json-stable-stringify-without-jsonify .travis.yml example key_cmp.js nested.js str.js value_cmp.js index.js package.json test cmp.js nested.js replacer.js space.js str.js to-json.js levn README.md lib cast.js index.js parse-string.js package.json line-column README.md lib line-column.js package.json locate-path index.d.ts index.js package.json readme.md lodash.clonedeep README.md index.js package.json lodash.merge README.md index.js package.json lodash.sortby README.md index.js package.json lodash.truncate README.md index.js package.json long README.md dist long.js index.js package.json src long.js lru-cache README.md index.js package.json minimatch README.md minimatch.js package.json minimist .travis.yml example parse.js index.js package.json test all_bool.js bool.js dash.js default_bool.js dotted.js kv_short.js long.js num.js parse.js parse_modified.js proto.js short.js stop_early.js 
unknown.js whitespace.js minipass README.md index.js package.json minizlib README.md constants.js index.js package.json mkdirp bin cmd.js usage.txt index.js package.json moo README.md moo.js package.json ms index.js license.md package.json readme.md natural-compare README.md index.js package.json near-mock-vm assembly __tests__ main.ts context.ts index.ts outcome.ts vm.ts bin bin.js package.json pkg near_mock_vm.d.ts near_mock_vm.js package.json vm dist cli.d.ts cli.js context.d.ts context.js index.d.ts index.js memory.d.ts memory.js runner.d.ts runner.js utils.d.ts utils.js index.js near-sdk-as as-pect.config.js as_types.d.ts asconfig.json asp.asconfig.json assembly __tests__ as-pect.d.ts assert.spec.ts avl-tree.spec.ts bignum.spec.ts contract.spec.ts contract.ts data.txt empty.ts generic.ts includeBytes.spec.ts main.ts max-heap.spec.ts model.ts near.spec.ts persistent-set.spec.ts promise.spec.ts rollback.spec.ts roundtrip.spec.ts runtime.spec.ts unordered-map.spec.ts util.ts utils.spec.ts as_types.d.ts bindgen.ts index.ts json.lib.ts tsconfig.json vm __tests__ vm.include.ts index.ts compiler.js imports.js package.json near-sdk-bindgen README.md assembly index.ts compiler.js dist JSONBuilder.d.ts JSONBuilder.js classExporter.d.ts classExporter.js index.d.ts index.js transformer.d.ts transformer.js typeChecker.d.ts typeChecker.js utils.d.ts utils.js index.js package.json near-sdk-core README.md asconfig.json assembly as_types.d.ts base58.ts base64.ts bignum.ts collections avlTree.ts index.ts maxHeap.ts persistentDeque.ts persistentMap.ts persistentSet.ts persistentUnorderedMap.ts persistentVector.ts util.ts contract.ts datetime.ts env env.ts index.ts runtime_api.ts index.ts logging.ts math.ts promise.ts storage.ts tsconfig.json util.ts docs assets css main.css js main.js search.json classes _sdk_core_assembly_collections_avltree_.avltree.html _sdk_core_assembly_collections_avltree_.avltreenode.html _sdk_core_assembly_collections_avltree_.childparentpair.html _sdk_core_assembly_collections_avltree_.nullable.html _sdk_core_assembly_collections_persistentdeque_.persistentdeque.html _sdk_core_assembly_collections_persistentmap_.persistentmap.html _sdk_core_assembly_collections_persistentset_.persistentset.html _sdk_core_assembly_collections_persistentunorderedmap_.persistentunorderedmap.html _sdk_core_assembly_collections_persistentvector_.persistentvector.html _sdk_core_assembly_contract_.context-1.html _sdk_core_assembly_contract_.contractpromise.html _sdk_core_assembly_contract_.contractpromiseresult.html _sdk_core_assembly_math_.rng.html _sdk_core_assembly_promise_.contractpromisebatch.html _sdk_core_assembly_storage_.storage-1.html globals.html index.html modules _sdk_core_assembly_base58_.base58.html _sdk_core_assembly_base58_.html _sdk_core_assembly_base64_.base64.html _sdk_core_assembly_base64_.html _sdk_core_assembly_collections_avltree_.html _sdk_core_assembly_collections_index_.collections.html _sdk_core_assembly_collections_index_.html _sdk_core_assembly_collections_persistentdeque_.html _sdk_core_assembly_collections_persistentmap_.html _sdk_core_assembly_collections_persistentset_.html _sdk_core_assembly_collections_persistentunorderedmap_.html _sdk_core_assembly_collections_persistentvector_.html _sdk_core_assembly_collections_util_.html _sdk_core_assembly_contract_.html _sdk_core_assembly_env_env_.env.html _sdk_core_assembly_env_env_.html _sdk_core_assembly_env_index_.html _sdk_core_assembly_env_runtime_api_.html _sdk_core_assembly_index_.html _sdk_core_assembly_logging_.html 
_sdk_core_assembly_logging_.logging.html _sdk_core_assembly_math_.html _sdk_core_assembly_math_.math.html _sdk_core_assembly_promise_.html _sdk_core_assembly_storage_.html _sdk_core_assembly_util_.html _sdk_core_assembly_util_.util.html package.json near-sdk-simulator __tests__ avl-tree-contract.spec.ts cross.spec.ts empty.spec.ts exportAs.spec.ts singleton-no-constructor.spec.ts singleton.spec.ts asconfig.js asconfig.json assembly __tests__ avlTreeContract.ts empty.ts exportAs.ts model.ts sentences.ts singleton-fail.ts singleton-no-constructor.ts singleton.ts words.ts as_types.d.ts tsconfig.json dist bin.d.ts bin.js context.d.ts context.js index.d.ts index.js runtime.d.ts runtime.js types.d.ts types.js utils.d.ts utils.js jest.config.js out assembly __tests__ empty.ts exportAs.ts model.ts sentences.ts singleton copy.ts singleton-no-constructor.ts singleton.ts package.json src context.ts index.ts runtime.ts types.ts utils.ts tsconfig.json near-vm getBinary.js install.js package.json run.js uninstall.js nearley LICENSE.txt README.md bin nearley-railroad.js nearley-test.js nearley-unparse.js nearleyc.js lib compile.js generate.js lint.js nearley-language-bootstrapped.js nearley.js stream.js unparse.js package.json once README.md once.js package.json optionator CHANGELOG.md README.md lib help.js index.js util.js package.json p-limit index.d.ts index.js package.json readme.md p-locate index.d.ts index.js package.json readme.md p-try index.d.ts index.js package.json readme.md parent-module index.js package.json readme.md path-exists index.d.ts index.js package.json readme.md path-is-absolute index.js package.json readme.md path-key index.d.ts index.js package.json readme.md path-parse README.md index.js package.json prelude-ls CHANGELOG.md README.md lib Func.js List.js Num.js Obj.js Str.js index.js package.json progress CHANGELOG.md Readme.md index.js lib node-progress.js package.json punycode LICENSE-MIT.txt README.md package.json punycode.es6.js punycode.js railroad-diagrams README.md example.html generator.html package.json railroad-diagrams.css railroad-diagrams.js railroad_diagrams.py randexp README.md lib randexp.js package.json rechoir .travis.yml README.md index.js lib extension.js normalize.js register.js package.json regexpp README.md index.d.ts index.js package.json require-directory .travis.yml index.js package.json require-from-string index.js package.json readme.md require-main-filename CHANGELOG.md LICENSE.txt README.md index.js package.json resolve-from index.js package.json readme.md resolve SECURITY.md appveyor.yml example async.js sync.js index.js lib async.js caller.js core.js core.json is-core.js node-modules-paths.js normalize-options.js sync.js package.json test core.js dotdot.js dotdot abc index.js index.js faulty_basedir.js filter.js filter_sync.js mock.js mock_sync.js module_dir.js module_dir xmodules aaa index.js ymodules aaa index.js zmodules bbb main.js package.json node-modules-paths.js node_path.js node_path x aaa index.js ccc index.js y bbb index.js ccc index.js nonstring.js pathfilter.js pathfilter deep_ref main.js precedence.js precedence aaa.js aaa index.js main.js bbb.js bbb main.js resolver.js resolver baz doom.js package.json quux.js browser_field a.js b.js package.json cup.coffee dot_main index.js package.json dot_slash_main index.js package.json foo.js incorrect_main index.js package.json invalid_main package.json mug.coffee mug.js multirepo lerna.json package.json packages package-a index.js package.json package-b index.js package.json nested_symlinks 
mylib async.js package.json sync.js other_path lib other-lib.js root.js quux foo index.js same_names foo.js foo index.js symlinked _ node_modules foo.js package bar.js package.json without_basedir main.js resolver_sync.js shadowed_core.js shadowed_core node_modules util index.js subdirs.js symlinks.js ret README.md lib index.js positions.js sets.js types.js util.js package.json rimraf CHANGELOG.md README.md bin.js package.json rimraf.js safe-buffer README.md index.d.ts index.js package.json semver CHANGELOG.md README.md bin semver.js classes comparator.js index.js range.js semver.js functions clean.js cmp.js coerce.js compare-build.js compare-loose.js compare.js diff.js eq.js gt.js gte.js inc.js lt.js lte.js major.js minor.js neq.js parse.js patch.js prerelease.js rcompare.js rsort.js satisfies.js sort.js valid.js index.js internal constants.js debug.js identifiers.js parse-options.js re.js package.json preload.js ranges gtr.js intersects.js ltr.js max-satisfying.js min-satisfying.js min-version.js outside.js simplify.js subset.js to-comparators.js valid.js set-blocking CHANGELOG.md LICENSE.txt README.md index.js package.json shebang-command index.js package.json readme.md shebang-regex index.d.ts index.js package.json readme.md shelljs CHANGELOG.md README.md commands.js global.js make.js package.json plugin.js shell.js src cat.js cd.js chmod.js common.js cp.js dirs.js echo.js error.js exec-child.js exec.js find.js grep.js head.js ln.js ls.js mkdir.js mv.js popd.js pushd.js pwd.js rm.js sed.js set.js sort.js tail.js tempdir.js test.js to.js toEnd.js touch.js uniq.js which.js slice-ansi index.js package.json readme.md sprintf-js README.md bower.json demo angular.html dist angular-sprintf.min.js sprintf.min.js gruntfile.js package.json src angular-sprintf.js sprintf.js test test.js string-width index.d.ts index.js package.json readme.md strip-ansi index.d.ts index.js package.json readme.md strip-json-comments index.d.ts index.js package.json readme.md supports-color browser.js index.js package.json readme.md table README.md dist alignString.d.ts alignString.js alignTableData.d.ts alignTableData.js calculateCellHeight.d.ts calculateCellHeight.js calculateCellWidths.d.ts calculateCellWidths.js calculateColumnWidths.d.ts calculateColumnWidths.js calculateRowHeights.d.ts calculateRowHeights.js createStream.d.ts createStream.js drawBorder.d.ts drawBorder.js drawContent.d.ts drawContent.js drawHeader.d.ts drawHeader.js drawRow.d.ts drawRow.js drawTable.d.ts drawTable.js generated validators.d.ts validators.js getBorderCharacters.d.ts getBorderCharacters.js index.d.ts index.js makeStreamConfig.d.ts makeStreamConfig.js makeTableConfig.d.ts makeTableConfig.js mapDataUsingRowHeights.d.ts mapDataUsingRowHeights.js padTableData.d.ts padTableData.js stringifyTableData.d.ts stringifyTableData.js table.d.ts table.js truncateTableData.d.ts truncateTableData.js types api.d.ts api.js internal.d.ts internal.js utils.d.ts utils.js validateConfig.d.ts validateConfig.js validateTableData.d.ts validateTableData.js wrapCell.d.ts wrapCell.js wrapString.d.ts wrapString.js wrapWord.d.ts wrapWord.js package.json tar README.md index.js lib create.js extract.js get-write-flag.js header.js high-level-opt.js large-numbers.js list.js mkdir.js mode-fix.js normalize-windows-path.js pack.js parse.js path-reservations.js pax.js read-entry.js replace.js strip-absolute-path.js strip-trailing-slashes.js types.js unpack.js update.js warn-mixin.js winchars.js write-entry.js package.json text-table .travis.yml example align.js 
center.js dotalign.js doubledot.js table.js index.js package.json test align.js ansi-colors.js center.js dotalign.js doubledot.js table.js tr46 LICENSE.md README.md index.js lib mappingTable.json regexes.js package.json ts-mixer CHANGELOG.md README.md dist cjs decorator.js index.js mixin-tracking.js mixins.js proxy.js settings.js types.js util.js esm index.js index.min.js types decorator.d.ts index.d.ts mixin-tracking.d.ts mixins.d.ts proxy.d.ts settings.d.ts types.d.ts util.d.ts package.json type-check README.md lib check.js index.js parse-type.js package.json type-fest base.d.ts index.d.ts package.json readme.md source async-return-type.d.ts asyncify.d.ts basic.d.ts conditional-except.d.ts conditional-keys.d.ts conditional-pick.d.ts entries.d.ts entry.d.ts except.d.ts fixed-length-array.d.ts iterable-element.d.ts literal-union.d.ts merge-exclusive.d.ts merge.d.ts mutable.d.ts opaque.d.ts package-json.d.ts partial-deep.d.ts promisable.d.ts promise-value.d.ts readonly-deep.d.ts require-at-least-one.d.ts require-exactly-one.d.ts set-optional.d.ts set-required.d.ts set-return-type.d.ts stringified.d.ts tsconfig-json.d.ts union-to-intersection.d.ts utilities.d.ts value-of.d.ts ts41 camel-case.d.ts delimiter-case.d.ts index.d.ts kebab-case.d.ts pascal-case.d.ts snake-case.d.ts universal-url README.md browser.js index.js package.json uri-js README.md dist es5 uri.all.d.ts uri.all.js uri.all.min.d.ts uri.all.min.js esnext index.d.ts index.js regexps-iri.d.ts regexps-iri.js regexps-uri.d.ts regexps-uri.js schemes http.d.ts http.js https.d.ts https.js mailto.d.ts mailto.js urn-uuid.d.ts urn-uuid.js urn.d.ts urn.js ws.d.ts ws.js wss.d.ts wss.js uri.d.ts uri.js util.d.ts util.js package.json v8-compile-cache CHANGELOG.md README.md package.json v8-compile-cache.js visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js URLSearchParams-impl.js URLSearchParams.js infra.js public-api.js url-state-machine.js urlencoded.js utils.js package.json which-module CHANGELOG.md README.md index.js package.json which CHANGELOG.md README.md package.json which.js word-wrap README.md index.d.ts index.js package.json wrap-ansi index.js package.json readme.md wrappy README.md package.json wrappy.js y18n CHANGELOG.md README.md build lib cjs.js index.js platform-shims node.js package.json yallist README.md iterator.js package.json yallist.js yargs-parser CHANGELOG.md LICENSE.txt README.md browser.js build lib index.js string-utils.js tokenize-arg-string.js yargs-parser-types.js yargs-parser.js package.json yargs CHANGELOG.md README.md build lib argsert.js command.js completion-templates.js completion.js middleware.js parse-command.js typings common-types.js yargs-parser-types.js usage.js utils apply-extends.js is-promise.js levenshtein.js obj-filter.js process-argv.js set-blocking.js which-module.js validation.js yargs-factory.js yerror.js helpers index.js package.json locales be.json de.json en.json 
es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json package-lock.json package.json dist bootstrap.min.8c957710.css custom.ba64f85d.js global.eca22910.css index.html jquery.min.b2d9c366.js logo-black.3916bf24.svg logo-white.c927fc35.svg popper.min.6066f208.js solicitar_ayuda.html solicitar_ayuda_diseño.19bc99bf.js solicitar_ayuda_diseño.7f34c832.js src.0f970cdb.js style.c038c74d.css package-lock.json package.json src assets logo-black.svg logo-white.svg config.js css bootstrap.min.css style.css global.css index.html index.js index_original.html js bootstrap.bundle.min.js custom.js jquery-3.0.0.min.js jquery.mCustomScrollbar.concat.min.js jquery.min.js plugin.js popper.min.js main.test.js solicitar_ayuda.html solicitar_ayuda_diseño.js utils.js wallet login index.html
semver(1) -- The semantic versioner for npm =========================================== ## Install ```bash npm install semver ``` ## Usage As a node module: ```js const semver = require('semver') semver.valid('1.2.3') // '1.2.3' semver.valid('a.b.c') // null semver.clean(' =v1.2.3 ') // '1.2.3' semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true semver.gt('1.2.3', '9.8.7') // false semver.lt('1.2.3', '9.8.7') // true semver.minVersion('>=1.0.0') // '1.0.0' semver.valid(semver.coerce('v2')) // '2.0.0' semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7' ``` You can also just load the module for the function that you care about, if you'd like to minimize your footprint. ```js // load the whole API at once in a single object const semver = require('semver') // or just load the bits you need // all of them listed here, just pick and choose what you want // classes const SemVer = require('semver/classes/semver') const Comparator = require('semver/classes/comparator') const Range = require('semver/classes/range') // functions for working with versions const semverParse = require('semver/functions/parse') const semverValid = require('semver/functions/valid') const semverClean = require('semver/functions/clean') const semverInc = require('semver/functions/inc') const semverDiff = require('semver/functions/diff') const semverMajor = require('semver/functions/major') const semverMinor = require('semver/functions/minor') const semverPatch = require('semver/functions/patch') const semverPrerelease = require('semver/functions/prerelease') const semverCompare = require('semver/functions/compare') const semverRcompare = require('semver/functions/rcompare') const semverCompareLoose = require('semver/functions/compare-loose') const semverCompareBuild = require('semver/functions/compare-build') const semverSort = require('semver/functions/sort') const semverRsort = require('semver/functions/rsort') // low-level comparators between versions const semverGt = require('semver/functions/gt') const semverLt = require('semver/functions/lt') const semverEq = require('semver/functions/eq') const semverNeq = require('semver/functions/neq') const semverGte = require('semver/functions/gte') const semverLte = require('semver/functions/lte') const semverCmp = require('semver/functions/cmp') const semverCoerce = require('semver/functions/coerce') // working with ranges const semverSatisfies = require('semver/functions/satisfies') const semverMaxSatisfying = require('semver/ranges/max-satisfying') const semverMinSatisfying = require('semver/ranges/min-satisfying') const semverToComparators = require('semver/ranges/to-comparators') const semverMinVersion = require('semver/ranges/min-version') const semverValidRange = require('semver/ranges/valid') const semverOutside = require('semver/ranges/outside') const semverGtr = require('semver/ranges/gtr') const semverLtr = require('semver/ranges/ltr') const semverIntersects = require('semver/ranges/intersects') const simplifyRange = require('semver/ranges/simplify') const rangeSubset = require('semver/ranges/subset') ``` As a command-line utility: ``` $ semver -h A JavaScript implementation of the https://semver.org/ specification Copyright Isaac Z. Schlueter Usage: semver [options] <version> [<version> [...]] Prints valid versions sorted by SemVer precedence Options: -r --range <range> Print versions that match the specified range. -i --increment [<level>] Increment a version by the specified level.
Level can be one of: major, minor, patch, premajor, preminor, prepatch, or prerelease. Default level is 'patch'. Only one version may be specified. --preid <identifier> Identifier to be used to prefix premajor, preminor, prepatch or prerelease version increments. -l --loose Interpret versions and ranges loosely -p --include-prerelease Always include prerelease versions in range matching -c --coerce Coerce a string into SemVer if possible (does not imply --loose) --rtl Coerce version strings right to left --ltr Coerce version strings left to right (default) Program exits successfully if any valid version satisfies all supplied ranges, and prints all satisfying versions. If no satisfying versions are found, then exits failure. Versions are printed in ascending order, so supplying multiple versions to the utility will just sort them. ``` ## Versions A "version" is described by the `v2.0.0` specification found at <https://semver.org/>. A leading `"="` or `"v"` character is stripped off and ignored. ## Ranges A `version range` is a set of `comparators` which specify versions that satisfy the range. A `comparator` is composed of an `operator` and a `version`. The set of primitive `operators` is: * `<` Less than * `<=` Less than or equal to * `>` Greater than * `>=` Greater than or equal to * `=` Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included. For example, the comparator `>=1.2.7` would match the versions `1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` or `1.1.0`. Comparators can be joined by whitespace to form a `comparator set`, which is satisfied by the **intersection** of all of the comparators it includes. A range is composed of one or more comparator sets, joined by `||`. A version matches a range if and only if every comparator in at least one of the `||`-separated comparator sets is satisfied by the version. For example, the range `>=1.2.7 <1.3.0` would match the versions `1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, or `1.1.0`. The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, `1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. ### Prerelease Tags If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then it will only be allowed to satisfy comparator sets if at least one comparator with the same `[major, minor, patch]` tuple also has a prerelease tag. For example, the range `>1.2.3-alpha.3` would be allowed to match the version `1.2.3-alpha.7`, but it would *not* be satisfied by `3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater than" `1.2.3-alpha.3` according to the SemVer sort rules. The version range only accepts prerelease tags on the `1.2.3` version. The version `3.4.5` *would* satisfy the range, because it does not have a prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author's design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics. Second, a user who has opted into using a prerelease version has clearly indicated the intent to use *that specific* set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. 
However, it is still not appropriate to assume that they have opted into taking a similar risk on the *next* set of prerelease versions. Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the `includePrerelease` flag on the options object to any [functions](https://github.com/npm/node-semver#functions) that do range matching. #### Prerelease Identifiers The method `.inc` takes an additional `identifier` string argument that will append the value of the string as a prerelease identifier: ```javascript semver.inc('1.2.3', 'prerelease', 'beta') // '1.2.4-beta.0' ``` command-line example: ```bash $ semver 1.2.3 -i prerelease --preid beta 1.2.4-beta.0 ``` Which then can be used to increment further: ```bash $ semver 1.2.4-beta.0 -i prerelease 1.2.4-beta.1 ``` ### Advanced Range Syntax Advanced range syntax desugars to primitive comparators in deterministic ways. Advanced ranges may be combined in the same way as primitive comparators using white space or `||`. #### Hyphen Ranges `X.Y.Z - A.B.C` Specifies an inclusive set. * `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes. * `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts. * `1.2.3 - 2.3` := `>=1.2.3 <2.4.0-0` * `1.2.3 - 2` := `>=1.2.3 <3.0.0-0` #### X-Ranges `1.2.x` `1.X` `1.2.*` `*` Any of `X`, `x`, or `*` may be used to "stand in" for one of the numeric values in the `[major, minor, patch]` tuple. * `*` := `>=0.0.0` (Any version satisfies) * `1.x` := `>=1.0.0 <2.0.0-0` (Matching major version) * `1.2.x` := `>=1.2.0 <1.3.0-0` (Matching major and minor versions) A partial version range is treated as an X-Range, so the special character is in fact optional. * `""` (empty string) := `*` := `>=0.0.0` * `1` := `1.x.x` := `>=1.0.0 <2.0.0-0` * `1.2` := `1.2.x` := `>=1.2.0 <1.3.0-0` #### Tilde Ranges `~1.2.3` `~1.2` `~1` Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not. * `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0-0` * `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0-0` (Same as `1.2.x`) * `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0-0` (Same as `1.x`) * `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0-0` * `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0-0` (Same as `0.2.x`) * `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0-0` (Same as `0.x`) * `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0-0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. #### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` Allows changes that do not modify the left-most non-zero element in the `[major, minor, patch]` tuple. In other words, this allows patch and minor updates for versions `1.0.0` and above, patch updates for versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. Many authors treat a `0.x` version as if the `x` were the major "breaking-change" indicator. 
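The tilde and caret desugarings above are easiest to verify directly with `semver.satisfies`. Here is a minimal sketch, assuming the `semver` package documented here is installed; the specific version strings are illustrative only:

```js
const semver = require('semver')

// Tilde: only patch-level changes are allowed when a minor version is given
semver.satisfies('1.2.9', '~1.2.3') // true  (~1.2.3 := >=1.2.3 <1.3.0-0)
semver.satisfies('1.3.0', '~1.2.3') // false

// Caret: the left-most non-zero element may not change
semver.satisfies('1.9.0', '^1.2.3') // true  (^1.2.3 := >=1.2.3 <2.0.0-0)
semver.satisfies('0.2.9', '^0.2.3') // true  (^0.2.3 := >=0.2.3 <0.3.0-0)
semver.satisfies('0.3.0', '^0.2.3') // false
semver.satisfies('0.0.4', '^0.0.3') // false (^0.0.3 := >=0.0.3 <0.0.4-0)
```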
Caret ranges are ideal when an author may make breaking changes between `0.2.4` and `0.3.0` releases, which is a common practice. However, it presumes that there will *not* be breaking changes between `0.2.4` and `0.2.5`. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices. * `^1.2.3` := `>=1.2.3 <2.0.0-0` * `^0.2.3` := `>=0.2.3 <0.3.0-0` * `^0.0.3` := `>=0.0.3 <0.0.4-0` * `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0-0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. * `^0.0.3-beta` := `>=0.0.3-beta <0.0.4-0` Note that prereleases in the `0.0.3` version *only* will be allowed, if they are greater than or equal to `beta`. So, `0.0.3-pr.2` would be allowed. When parsing caret ranges, a missing `patch` value desugars to the number `0`, but will allow flexibility within that value, even if the major and minor versions are both `0`. * `^1.2.x` := `>=1.2.0 <2.0.0-0` * `^0.0.x` := `>=0.0.0 <0.1.0-0` * `^0.0` := `>=0.0.0 <0.1.0-0` A missing `minor` and `patch` values will desugar to zero, but also allow flexibility within those values, even if the major version is zero. * `^1.x` := `>=1.0.0 <2.0.0-0` * `^0.x` := `>=0.0.0 <1.0.0-0` ### Range Grammar Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors: ```bnf range-set ::= range ( logical-or range ) * logical-or ::= ( ' ' ) * '||' ( ' ' ) * range ::= hyphen | simple ( ' ' simple ) * | '' hyphen ::= partial ' - ' partial simple ::= primitive | partial | tilde | caret primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )? xr ::= 'x' | 'X' | '*' | nr nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) * tilde ::= '~' partial caret ::= '^' partial qualifier ::= ( '-' pre )? ( '+' build )? pre ::= parts build ::= parts parts ::= part ( '.' part ) * part ::= nr | [-0-9A-Za-z]+ ``` ## Functions All methods and classes take a final `options` object argument. All options in this object are `false` by default. The options supported are: - `loose` Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the `options` argument is a boolean value instead of an object, it is interpreted to be the `loose` param. - `includePrerelease` Set to suppress the [default behavior](https://github.com/npm/node-semver#prerelease-tags) of excluding prerelease tagged versions from ranges unless they are explicitly opted into. Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse. * `valid(v)`: Return the parsed version, or null if it's not valid. * `inc(v, release)`: Return the version incremented by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if it's not valid * `premajor` in one call will bump the version up to the next major version and down to a prerelease of that major version. `preminor`, and `prepatch` work the same way. * If called from a non-prerelease version, the `prerelease` will work the same as `prepatch`. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it. 
* `prerelease(v)`: Returns an array of prerelease components, or null if none exist. Example: `prerelease('1.2.3-alpha.1') -> ['alpha', 1]` * `major(v)`: Return the major version number. * `minor(v)`: Return the minor version number. * `patch(v)`: Return the patch version number. * `intersects(r1, r2, loose)`: Return true if the two supplied ranges or comparators intersect. * `parse(v)`: Attempt to parse a string as a semantic version, returning either a `SemVer` object or `null`. ### Comparison * `gt(v1, v2)`: `v1 > v2` * `gte(v1, v2)`: `v1 >= v2` * `lt(v1, v2)`: `v1 < v2` * `lte(v1, v2)`: `v1 <= v2` * `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent, even if they're not the exact same string. You already know how to compare strings. * `neq(v1, v2)`: `v1 != v2` The opposite of `eq`. * `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call the corresponding function above. `"==="` and `"!=="` do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided. * `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if `v2` is greater. Sorts in ascending order if passed to `Array.sort()`. * `rcompare(v1, v2)`: The reverse of compare. Sorts an array of versions in descending order when passed to `Array.sort()`. * `compareBuild(v1, v2)`: The same as `compare` but considers `build` when two versions are equal. Sorts in ascending order if passed to `Array.sort()`. * `diff(v1, v2)`: Returns the difference between two versions by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if the versions are the same. ### Comparators * `intersects(comparator)`: Return true if the comparators intersect ### Ranges * `validRange(range)`: Return the valid range or null if it's not valid * `satisfies(version, range)`: Return true if the version satisfies the range. * `maxSatisfying(versions, range)`: Return the highest version in the list that satisfies the range, or `null` if none of them do. * `minSatisfying(versions, range)`: Return the lowest version in the list that satisfies the range, or `null` if none of them do. * `minVersion(range)`: Return the lowest version that can possibly match the given range. * `gtr(version, range)`: Return `true` if version is greater than all the versions possible in the range. * `ltr(version, range)`: Return `true` if version is less than all the versions possible in the range. * `outside(version, range, hilo)`: Return true if the version is outside the bounds of the range in either the high or low direction. The `hilo` argument must be either the string `'>'` or `'<'`. (This is the function called by `gtr` and `ltr`.) * `intersects(range)`: Return true if any of the range's comparators intersect * `simplifyRange(versions, range)`: Return a "simplified" range that matches the same items in the `versions` list as the range specified. Note that it does *not* guarantee that it would match the same versions in all cases, only for the set of versions provided. This is useful when generating ranges by joining together multiple versions with `||` programmatically, to provide the user with something a bit more ergonomic. If the provided range is shorter in string-length than the generated range, then that is returned. * `subset(subRange, superRange)`: Return `true` if the `subRange` range is entirely contained by the `superRange` range.
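A short, hedged sketch of the range helpers listed above in use; the `versions` array and the ranges are made up for illustration and the commented results follow the definitions above:

```js
const semver = require('semver')

const versions = ['1.2.3', '1.2.9', '1.3.0', '2.0.0-rc.1']

semver.validRange('1.x || >=2.5.0')       // normalized range string; null if invalid
semver.maxSatisfying(versions, '1.2.x')   // '1.2.9'
semver.minSatisfying(versions, '>=1.2.4') // '1.2.9' (prerelease 2.0.0-rc.1 is excluded)
semver.minVersion('>=1.2.0 <2.0.0')       // '1.2.0'
semver.gtr('2.0.0', '~1.2.3')             // true (above every version in the range)
semver.ltr('1.0.0', '~1.2.3')             // true (below every version in the range)
semver.outside('1.0.0', '~1.2.3', '<')    // true (the check gtr/ltr delegate to)
```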
Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, *or* satisfy a range! For example, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9` until `2.0.0`, so the version `1.2.10` would not be greater than the range (because `2.0.1` satisfies, which is higher), nor less than the range (since `1.2.8` satisfies, which is lower), and it also does not satisfy the range.

If you want to know if a version satisfies or does not satisfy a range, use the `satisfies(version, range)` function.

### Coercion

* `coerce(version, options)`: Coerces a string to semver if possible

This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., `1`, `1.2`, `1.2.3`) up to the max permitted length (256 characters). Longer versions are simply truncated (`4.6.3.9.2-alpha2` becomes `4.6.3`). All surrounding text is simply ignored (`v3.4 replaces v3.3.1` becomes `3.4.0`). Only text which lacks digits will fail coercion (`version one` is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (`10000000000000000.4.7.4` becomes `4.7.4`). The maximum value for any semver component is `Number.MAX_SAFE_INTEGER || (2**53 - 1)`; higher value components are invalid (`9999999999999999.4.7.4` is likely invalid).

If the `options.rtl` flag is set, then `coerce` will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, `1.2.3.4` will return `2.3.4` in rtl mode, not `4.0.0`. `1.2.3/4` will return `4.0.0`, because the `4` is not a part of any other overlapping SemVer tuple.

### Clean

* `clean(version)`: Clean a string to be a valid semver if possible

This will return a cleaned and trimmed semver version. If the provided version is not valid, `null` will be returned. This does not work for ranges. For example:

* `s.clean(' = v 2.1.5foo')`: `null`
* `s.clean(' = v 2.1.5foo', { loose: true })`: `'2.1.5-foo'`
* `s.clean(' = v 2.1.5-foo')`: `null`
* `s.clean(' = v 2.1.5-foo', { loose: true })`: `'2.1.5-foo'`
* `s.clean('=v2.1.5')`: `'2.1.5'`
* `s.clean(' =v2.1.5')`: `'2.1.5'`
* `s.clean(' 2.1.5 ')`: `'2.1.5'`
* `s.clean('~1.0.0')`: `null`

## Exported Modules

<!-- TODO: Make sure that all of these items are documented (classes aren't, eg), and then pull the module name into the documentation for that specific thing. -->

You may pull in just the part of this semver utility that you need, if you are sensitive to packing and tree-shaking concerns. The main `require('semver')` export uses getter functions to lazily load the parts of the API that are used.
The following modules are available: * `require('semver')` * `require('semver/classes')` * `require('semver/classes/comparator')` * `require('semver/classes/range')` * `require('semver/classes/semver')` * `require('semver/functions/clean')` * `require('semver/functions/cmp')` * `require('semver/functions/coerce')` * `require('semver/functions/compare')` * `require('semver/functions/compare-build')` * `require('semver/functions/compare-loose')` * `require('semver/functions/diff')` * `require('semver/functions/eq')` * `require('semver/functions/gt')` * `require('semver/functions/gte')` * `require('semver/functions/inc')` * `require('semver/functions/lt')` * `require('semver/functions/lte')` * `require('semver/functions/major')` * `require('semver/functions/minor')` * `require('semver/functions/neq')` * `require('semver/functions/parse')` * `require('semver/functions/patch')` * `require('semver/functions/prerelease')` * `require('semver/functions/rcompare')` * `require('semver/functions/rsort')` * `require('semver/functions/satisfies')` * `require('semver/functions/sort')` * `require('semver/functions/valid')` * `require('semver/ranges/gtr')` * `require('semver/ranges/intersects')` * `require('semver/ranges/ltr')` * `require('semver/ranges/max-satisfying')` * `require('semver/ranges/min-satisfying')` * `require('semver/ranges/min-version')` * `require('semver/ranges/outside')` * `require('semver/ranges/to-comparators')` * `require('semver/ranges/valid')` # cliui ![ci](https://github.com/yargs/cliui/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/cliui) easily create complex multi-column command-line-interfaces. ## Example ```js const ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` ## Deno/ESM Support As of `v7` `cliui` supports [Deno](https://github.com/denoland/deno) and [ESM](https://nodejs.org/api/esm.html#esm_ecmascript_modules): ```typescript import cliui from "https://deno.land/x/cliui/deno.ts"; const ui = cliui({}) ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div({ text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. 
If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. # wrappy Callback wrapping utility ## USAGE ```javascript var wrappy = require("wrappy") // var wrapper = wrappy(wrapperFunction) // make sure a cb is called only once // See also: http://npm.im/once for this specific use case var once = wrappy(function (cb) { var called = false return function () { if (called) return called = true return cb.apply(this, arguments) } }) function printBoo () { console.log('boo') } // has some rando property printBoo.iAmBooPrinter = true var onlyPrintOnce = once(printBoo) onlyPrintOnce() // prints 'boo' onlyPrintOnce() // does nothing // random property is retained! assert.equal(onlyPrintOnce.iAmBooPrinter, true) ``` # line-column [![Build Status](https://travis-ci.org/io-monad/line-column.svg?branch=master)](https://travis-ci.org/io-monad/line-column) [![Coverage Status](https://coveralls.io/repos/github/io-monad/line-column/badge.svg?branch=master)](https://coveralls.io/github/io-monad/line-column?branch=master) [![npm version](https://badge.fury.io/js/line-column.svg)](https://badge.fury.io/js/line-column) Node module to convert efficiently index to/from line-column in a string. ## Install npm install line-column ## Usage ### lineColumn(str, options = {}) Returns a `LineColumnFinder` instance for given string `str`. #### Options | Key | Description | Default | | ------- | ----------- | ------- | | `origin` | The origin value of line number and column number | `1` | ### lineColumn(str, index) This is just a shorthand for `lineColumn(str).fromIndex(index)`. ### LineColumnFinder#fromIndex(index) Find line and column from index in the string. Parameters: - `index` - `number` Index in the string. (0-origin) Returns: - `{ line: x, col: y }` Found line number and column number. - `null` if the given index is out of range. ### LineColumnFinder#toIndex(line, column) Find index from line and column in the string. Parameters: - `line` - `number` Line number in the string. - `column` - `number` Column number in the string. or - `{ line: x, col: y }` - `Object` line and column numbers in the string.<br>A key name `column` can be used instead of `col`. or - `[ line, col ]` - `Array` line and column numbers in the string. Returns: - `number` Found index in the string. - `-1` if the given line or column is out of range. 
## Example

```js
var lineColumn = require("line-column");

var testString = [
  "ABCDEFG\n",        // line:0, index:0
  "HIJKLMNOPQRSTU\n", // line:1, index:8
  "VWXYZ\n",          // line:2, index:23
  "日本語の文字\n",     // line:3, index:29
  "English words"     // line:4, index:36
].join("");           // length:49

lineColumn(testString).fromIndex(3)   // { line: 1, col: 4 }
lineColumn(testString).fromIndex(33)  // { line: 4, col: 5 }
lineColumn(testString).toIndex(1, 4)  // 3
lineColumn(testString).toIndex(4, 5)  // 33

// Shorthand of .fromIndex (compatible with find-line-column)
lineColumn(testString, 33)  // { line: 4, col: 5 }

// Object or Array is also acceptable
lineColumn(testString).toIndex({ line: 4, col: 5 })     // 33
lineColumn(testString).toIndex({ line: 4, column: 5 })  // 33
lineColumn(testString).toIndex([4, 5])                  // 33

// You can cache the finder for the same string. It is very efficient. (See benchmark)
var finder = lineColumn(testString);
finder.fromIndex(33)  // { line: 4, col: 5 }
finder.toIndex(4, 5)  // 33

// For 0-origin line and column numbers
var zeroOrigin = lineColumn(testString, { origin: 0 });
zeroOrigin.fromIndex(33)  // { line: 3, col: 4 }
zeroOrigin.toIndex(3, 4)  // 33
```

## Testing

    npm test

## Benchmark

The popular package [find-line-column](https://www.npmjs.com/package/find-line-column) provides the same "index to line-column" feature. Here is some benchmarking on `line-column` vs `find-line-column`. You can run this benchmark by `npm run benchmark`. See [benchmark/](benchmark/) for the source code.

```
long text + line-column (not cached)  x 72,989 ops/sec ±0.83% (89 runs sampled)
long text + line-column (cached)      x 13,074,242 ops/sec ±0.32% (89 runs sampled)
long text + find-line-column          x 33,887 ops/sec ±0.54% (84 runs sampled)
short text + line-column (not cached) x 1,636,766 ops/sec ±0.77% (82 runs sampled)
short text + line-column (cached)     x 21,699,686 ops/sec ±1.04% (82 runs sampled)
short text + find-line-column         x 382,145 ops/sec ±1.04% (85 runs sampled)
```

As you might have noticed, even the non-cached version of `line-column` is 2x to 4x faster than `find-line-column`, and the cached version of `line-column` is a remarkable 50x to 380x faster.

## Contributing

1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -am 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5. Submit a pull request :D

## License

MIT (See LICENSE)

# [nearley](http://nearley.js.org) ↗️

[![JS.ORG](https://img.shields.io/badge/js.org-nearley-ffb400.svg?style=flat-square)](http://js.org) [![npm version](https://badge.fury.io/js/nearley.svg)](https://badge.fury.io/js/nearley)

nearley is a simple, fast and powerful parsing toolkit. It consists of:

1. [A powerful, modular DSL for describing languages](https://nearley.js.org/docs/grammar)
2. [An efficient, lightweight Earley parser](https://nearley.js.org/docs/parser)
3. [Loads of tools, editor plug-ins, and other goodies!](https://nearley.js.org/docs/tooling)

nearley is a **streaming** parser with support for catching **errors** gracefully and providing _all_ parsings for **ambiguous** grammars. It is compatible with a variety of **lexers** (we recommend [moo](http://github.com/tjvr/moo)). It comes with tools for creating **tests**, **railroad diagrams** and **fuzzers** from your grammars, and has support for a variety of editors and platforms. It works in both node and the browser.

Unlike most other parser generators, nearley can handle *any* grammar you can define in BNF (and more!).
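As a minimal sketch of what driving the runtime parser looks like, the snippet below assumes a grammar has already been compiled with `nearleyc` into a hypothetical `./grammar.js`; the file name and the input string are purely illustrative:

```js
const nearley = require('nearley')
const grammar = require('./grammar.js') // hypothetical output of `nearleyc arithmetic.ne -o grammar.js`

// Build a parser from the compiled grammar, then stream input into it.
const parser = new nearley.Parser(nearley.Grammar.fromCompiled(grammar))
parser.feed('1+2*3')

// `results` holds every parsing of the input; an ambiguous grammar yields more than one.
console.log(parser.results)
```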
In particular, while most existing JS parsers such as PEGjs and Jison choke on certain grammars (e.g. [left recursive ones](http://en.wikipedia.org/wiki/Left_recursion)), nearley handles them easily and efficiently by using the [Earley parsing algorithm](https://en.wikipedia.org/wiki/Earley_parser).

nearley is used by a wide variety of projects:

- [artificial intelligence](https://github.com/ChalmersGU-AI-course/shrdlite-course-project) and [computational linguistics](https://wiki.eecs.yorku.ca/course_archive/2014-15/W/6339/useful_handouts) classes at universities;
- [file format parsers](https://github.com/raymond-h/node-dmi);
- [data-driven markup languages](https://github.com/idyll-lang/idyll-compiler);
- [compilers for real-world programming languages](https://github.com/sizigi/lp5562);
- and nearley itself! The nearley compiler is bootstrapped.

nearley is an npm [staff pick](https://www.npmjs.com/package/npm-collection-staff-picks).

## Documentation

Please visit our website https://nearley.js.org to get started! You will find a tutorial, detailed reference documents, and links to several real-world examples to get inspired.

## Contributing

Please read [this document](.github/CONTRIBUTING.md) *before* working on nearley. If you are interested in contributing but unsure where to start, take a look at the issues labeled "up for grabs" on the issue tracker, or message a maintainer (@kach or @tjvr on Github).

nearley is MIT licensed.

A big thanks to Nathan Dinsmore for teaching me how to Earley, Aria Stewart for helping structure nearley into a mature module, and Robin Windels for bootstrapping the grammar. Additionally, Jacob Edelman wrote an experimental JavaScript parser with nearley and contributed ideas for EBNF support. Joshua T. Corbin refactored the compiler to be much, much prettier. Bojidar Marinov implemented postprocessors-in-other-languages. Shachar Itzhaky fixed a subtle bug with nullables.

## Citing nearley

If you are citing nearley in academic work, please use the following BibTeX entry.

```bibtex
@misc{nearley,
    author = "Kartik Chandra and Tim Radvan",
    title = "{nearley}: a parsing toolkit for {JavaScript}",
    year = {2014},
    doi = {10.5281/zenodo.3897993},
    url = {https://github.com/kach/nearley}
}
```

### esutils

[![Build Status](https://secure.travis-ci.org/estools/esutils.svg)](http://travis-ci.org/estools/esutils)

esutils ([esutils](http://github.com/estools/esutils)) is a utility box for ECMAScript language tools.

### API

### ast

#### ast.isExpression(node)

Returns true if `node` is an Expression as defined in ECMA262 edition 5.1 section [11](https://es5.github.io/#x11).

#### ast.isStatement(node)

Returns true if `node` is a Statement as defined in ECMA262 edition 5.1 section [12](https://es5.github.io/#x12).

#### ast.isIterationStatement(node)

Returns true if `node` is an IterationStatement as defined in ECMA262 edition 5.1 section [12.6](https://es5.github.io/#x12.6).

#### ast.isSourceElement(node)

Returns true if `node` is a SourceElement as defined in ECMA262 edition 5.1 section [14](https://es5.github.io/#x14).

#### ast.trailingStatement(node)

Returns `Statement?` if `node` has a trailing `Statement`.

```js
if (cond) consequent;
```

When taking this `IfStatement`, returns the `consequent;` statement.

#### ast.isProblematicIfStatement(node)

Returns true if `node` is a problematic IfStatement. If `node` is a problematic `IfStatement`, `node` cannot be represented as one-to-one JavaScript code.
```js { type: 'IfStatement', consequent: { type: 'WithStatement', body: { type: 'IfStatement', consequent: {type: 'EmptyStatement'} } }, alternate: {type: 'EmptyStatement'} } ``` The above node cannot be represented as a JavaScript code, since the top level `else` alternate belongs to an inner `IfStatement`. ### code #### code.isDecimalDigit(code) Return true if provided code is decimal digit. #### code.isHexDigit(code) Return true if provided code is hexadecimal digit. #### code.isOctalDigit(code) Return true if provided code is octal digit. #### code.isWhiteSpace(code) Return true if provided code is white space. White space characters are formally defined in ECMA262. #### code.isLineTerminator(code) Return true if provided code is line terminator. Line terminator characters are formally defined in ECMA262. #### code.isIdentifierStart(code) Return true if provided code can be the first character of ECMA262 Identifier. They are formally defined in ECMA262. #### code.isIdentifierPart(code) Return true if provided code can be the trailing character of ECMA262 Identifier. They are formally defined in ECMA262. ### keyword #### keyword.isKeywordES5(id, strict) Returns `true` if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 sections [7.6.1.1](http://es5.github.io/#x7.6.1.1) and [7.6.1.2](http://es5.github.io/#x7.6.1.2), respectively. If the `strict` flag is truthy, this function additionally checks whether `id` is a Keyword or Future Reserved Word under strict mode. #### keyword.isKeywordES6(id, strict) Returns `true` if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 sections [11.6.2.1](http://ecma-international.org/ecma-262/6.0/#sec-keywords) and [11.6.2.2](http://ecma-international.org/ecma-262/6.0/#sec-future-reserved-words), respectively. If the `strict` flag is truthy, this function additionally checks whether `id` is a Keyword or Future Reserved Word under strict mode. #### keyword.isReservedWordES5(id, strict) Returns `true` if provided identifier string is a Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 section [7.6.1](http://es5.github.io/#x7.6.1). If the `strict` flag is truthy, this function additionally checks whether `id` is a Reserved Word under strict mode. #### keyword.isReservedWordES6(id, strict) Returns `true` if provided identifier string is a Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 section [11.6.2](http://ecma-international.org/ecma-262/6.0/#sec-reserved-words). If the `strict` flag is truthy, this function additionally checks whether `id` is a Reserved Word under strict mode. #### keyword.isRestrictedWord(id) Returns `true` if provided identifier string is one of `eval` or `arguments`. They are restricted in strict mode code throughout ECMA262 edition 5.1 and in ECMA262 edition 6 section [12.1.1](http://ecma-international.org/ecma-262/6.0/#sec-identifiers-static-semantics-early-errors). #### keyword.isIdentifierNameES5(id) Return true if provided identifier string is an IdentifierName as specified in ECMA262 edition 5.1 section [7.6](https://es5.github.io/#x7.6). #### keyword.isIdentifierNameES6(id) Return true if provided identifier string is an IdentifierName as specified in ECMA262 edition 6 section [11.6](http://ecma-international.org/ecma-262/6.0/#sec-names-and-keywords). 
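As a rough illustration before the remaining identifier checks below, here is a small sketch of how these predicates are typically called; the module is consumed as a single object exposing `ast`, `code`, and `keyword`, and the results shown follow from the definitions in this section:

```js
const esutils = require('esutils')

esutils.code.isWhiteSpace(0x20)                    // true: U+0020 is white space
esutils.keyword.isKeywordES5('if', false)          // true: always a Keyword
esutils.keyword.isKeywordES5('implements', false)  // false: only reserved in strict mode
esutils.keyword.isKeywordES5('implements', true)   // true
esutils.keyword.isRestrictedWord('arguments')      // true
esutils.keyword.isIdentifierNameES5('foo')         // true
esutils.keyword.isIdentifierNameES5('42')          // false: cannot start with a digit
```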
#### keyword.isIdentifierES5(id, strict) Return true if provided identifier string is an Identifier as specified in ECMA262 edition 5.1 section [7.6](https://es5.github.io/#x7.6). If the `strict` flag is truthy, this function additionally checks whether `id` is an Identifier under strict mode. #### keyword.isIdentifierES6(id, strict) Return true if provided identifier string is an Identifier as specified in ECMA262 edition 6 section [12.1](http://ecma-international.org/ecma-262/6.0/#sec-identifiers). If the `strict` flag is truthy, this function additionally checks whether `id` is an Identifier under strict mode. ### License Copyright (C) 2013 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # y18n [![Build Status][travis-image]][travis-url] [![Coverage Status][coveralls-image]][coveralls-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js var __ = require('y18n').__ console.log(__('my awesome string %s', 'foo')) ``` output: `my awesome string foo` _using tagged template literals_ ```js var __ = require('y18n').__ var str = 'foo' console.log(__`my awesome string ${str}`) ``` output: `my awesome string foo` _pluralization support:_ ```js var __n = require('y18n').__n console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')) ``` output: `2 fishes foo` ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. 
`en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## License ISC [travis-url]: https://travis-ci.org/yargs/y18n [travis-image]: https://img.shields.io/travis/yargs/y18n.svg [coveralls-url]: https://coveralls.io/github/yargs/y18n [coveralls-image]: https://img.shields.io/coveralls/yargs/y18n.svg [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard [![Build Status](https://travis-ci.org/isaacs/rimraf.svg?branch=master)](https://travis-ci.org/isaacs/rimraf) [![Dependency Status](https://david-dm.org/isaacs/rimraf.svg)](https://david-dm.org/isaacs/rimraf) [![devDependency Status](https://david-dm.org/isaacs/rimraf/dev-status.svg)](https://david-dm.org/isaacs/rimraf#info=devDependencies) The [UNIX command](http://en.wikipedia.org/wiki/Rm_(Unix)) `rm -rf` for node. Install with `npm install rimraf`, or just drop rimraf.js somewhere. ## API `rimraf(f, [opts], callback)` The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with `opts.disableGlob` (defaults to `false`). This might be handy, for instance, if you have filenames that contain globbing wildcard characters. The callback will be called with an error if there is one. Certain errors are handled for you: * Windows: `EBUSY` and `ENOTEMPTY` - rimraf will back off a maximum of `opts.maxBusyTries` times before giving up, adding 100ms of wait between each attempt. The default `maxBusyTries` is 3. * `ENOENT` - If the file doesn't exist, rimraf will return successfully, since your desired outcome is already the case. * `EMFILE` - Since `readdir` requires opening a file descriptor, it's possible to hit `EMFILE` if too many file descriptors are in use. In the sync case, there's nothing to be done for this. But in the async case, rimraf will gradually back off with timeouts up to `opts.emfileWait` ms, which defaults to 1000. ## options * unlink, chmod, stat, lstat, rmdir, readdir, unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync In order to use a custom file system library, you can override specific fs functions on the options object. If any of these functions are present on the options object, then the supplied function will be used instead of the default fs method. Sync methods are only relevant for `rimraf.sync()`, of course. For example: ```javascript var myCustomFS = require('some-custom-fs') rimraf('some-thing', myCustomFS, callback) ``` * maxBusyTries If an `EBUSY`, `ENOTEMPTY`, or `EPERM` error code is encountered on Windows systems, then rimraf will retry with a linear backoff wait of 100ms longer on each try. The default maxBusyTries is 3. 
Only relevant for async usage. * emfileWait If an `EMFILE` error is encountered, then rimraf will retry repeatedly with a linear backoff of 1ms longer on each try, until the timeout counter hits this max. The default limit is 1000. If you repeatedly encounter `EMFILE` errors, then consider using [graceful-fs](http://npm.im/graceful-fs) in your program. Only relevant for async usage. * glob Set to `false` to disable [glob](http://npm.im/glob) pattern matching. Set to an object to pass options to the glob module. The default glob options are `{ nosort: true, silent: true }`. Glob version 6 is used in this module. Relevant for both sync and async usage. * disableGlob Set to any non-falsey value to disable globbing entirely. (Equivalent to setting `glob: false`.) ## rimraf.sync It can remove stuff synchronously, too. But that's not so good. Use the async API. It's better. ## CLI If installed with `npm install rimraf -g` it can be used as a global command `rimraf <path> [<path> ...]` which is useful for cross platform support. ## mkdirp If you need to create a directory recursively, check out [mkdirp](https://github.com/substack/node-mkdirp). # rechoir [![Build Status](https://secure.travis-ci.org/tkellen/js-rechoir.png)](http://travis-ci.org/tkellen/js-rechoir) > Require any supported file as a node module. [![NPM](https://nodei.co/npm/rechoir.png)](https://nodei.co/npm/rechoir/) ## What is it? This module, in conjunction with [interpret]-like objects can register any file type the npm ecosystem has a module loader for. This library is a dependency of [Liftoff]. ## API ### prepare(config, filepath, requireFrom) Look for a module loader associated with the provided file and attempt require it. If necessary, run any setup required to inject it into [require.extensions](http://nodejs.org/api/globals.html#globals_require_extensions). `config` An [interpret]-like configuration object. `filepath` A file whose type you'd like to register a module loader for. `requireFrom` An optional path to start searching for the module required to load the requested file. Defaults to the directory of `filepath`. If calling this method is successful (aka: it doesn't throw), you can now require files of the type you requested natively. An error with a `failures` property will be thrown if the module loader(s) configured for a given extension cannot be registered. If a loader is already registered, this will simply return `true`. **Note:** While rechoir will automatically load and register transpilers like `coffee-script`, you must provide a local installation. The transpilers are **not** bundled with this module. #### Usage ```js const config = require('interpret').extensions; const rechoir = require('rechoir'); rechoir.prepare(config, './test/fixtures/test.coffee'); rechoir.prepare(config, './test/fixtures/test.csv'); rechoir.prepare(config, './test/fixtures/test.toml'); console.log(require('./test/fixtures/test.coffee')); console.log(require('./test/fixtures/test.csv')); console.log(require('./test/fixtures/test.toml')); ``` [interpret]: http://github.com/tkellen/js-interpret [Liftoff]: http://github.com/tkellen/js-liftoff # path-parse [![Build Status](https://travis-ci.org/jbgutierrez/path-parse.svg?branch=master)](https://travis-ci.org/jbgutierrez/path-parse) > Node.js [`path.parse(pathString)`](https://nodejs.org/api/path.html#path_path_parse_pathstring) [ponyfill](https://ponyfill.com). 
## Install ``` $ npm install --save path-parse ``` ## Usage ```js var pathParse = require('path-parse'); pathParse('/home/user/dir/file.txt'); //=> { // root : "/", // dir : "/home/user/dir", // base : "file.txt", // ext : ".txt", // name : "file" // } ``` ## API See [`path.parse(pathString)`](https://nodejs.org/api/path.html#path_path_parse_pathstring) docs. ### pathParse(path) ### pathParse.posix(path) The Posix specific version. ### pathParse.win32(path) The Windows specific version. ## License MIT © [Javier Blanco](http://jbgutierrez.info) # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows note On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Note that PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Then, run the program to be debugged as usual. ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. 
#### Node.js

In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors.

<img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png">

#### Web Browser

Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version).

<img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png">

## Millisecond diff

When actively developing an application it can be useful to see the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well; the "+NNNms" will show you how much time was spent between calls.

<img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png">

When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below:

<img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png">

## Conventions

If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debugger you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output.

## Wildcards

The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", and "connect:session". Instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`.

You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:".

## Environment Variables

When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging:

| Name | Purpose |
|-----------|-------------------------------------------------|
| `DEBUG` | Enables/disables specific debugging namespaces. |
| `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). |
| `DEBUG_COLORS` | Whether or not to use colors in the debug output. |
| `DEBUG_DEPTH` | Object inspection depth. |
| `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. |

__Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list.
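The same wildcard and exclusion syntax can also be applied from code: the module exposes an `enable()` function that accepts the same namespace list as the `DEBUG` variable. A short sketch, with illustrative namespace names:

```js
const createDebug = require('debug')

// Equivalent to running with DEBUG=connect:*,-connect:session
createDebug.enable('connect:*,-connect:session')

const bodyParser = createDebug('connect:bodyParser')
const session = createDebug('connect:session')

bodyParser('parsed request body') // printed: matches connect:*
session('saved session')          // suppressed: excluded by -connect:session
```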
## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. 
[[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img 
src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # fast-deep-equal The fastest deep equal with ES6 Map, Set and Typed arrays support. 
[![Build Status](https://travis-ci.org/epoberezkin/fast-deep-equal.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-deep-equal) [![npm](https://img.shields.io/npm/v/fast-deep-equal.svg)](https://www.npmjs.com/package/fast-deep-equal) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-deep-equal/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-deep-equal?branch=master) ## Install ```bash npm install fast-deep-equal ``` ## Features - ES5 compatible - works in node.js (8+) and browsers (IE9+) - checks equality of Date and RegExp objects by value. ES6 equal (`require('fast-deep-equal/es6')`) also supports: - Maps - Sets - Typed arrays ## Usage ```javascript var equal = require('fast-deep-equal'); console.log(equal({foo: 'bar'}, {foo: 'bar'})); // true ``` To support ES6 Maps, Sets and Typed arrays equality use: ```javascript var equal = require('fast-deep-equal/es6'); console.log(equal(Int16Array([1, 2]), Int16Array([1, 2]))); // true ``` To use with React (avoiding the traversal of React elements' _owner property that contains circular references and is not needed when comparing the elements - borrowed from [react-fast-compare](https://github.com/FormidableLabs/react-fast-compare)): ```javascript var equal = require('fast-deep-equal/react'); var equal = require('fast-deep-equal/es6/react'); ``` ## Performance benchmark Node.js v12.6.0: ``` fast-deep-equal x 261,950 ops/sec ±0.52% (89 runs sampled) fast-deep-equal/es6 x 212,991 ops/sec ±0.34% (92 runs sampled) fast-equals x 230,957 ops/sec ±0.83% (85 runs sampled) nano-equal x 187,995 ops/sec ±0.53% (88 runs sampled) shallow-equal-fuzzy x 138,302 ops/sec ±0.49% (90 runs sampled) underscore.isEqual x 74,423 ops/sec ±0.38% (89 runs sampled) lodash.isEqual x 36,637 ops/sec ±0.72% (90 runs sampled) deep-equal x 2,310 ops/sec ±0.37% (90 runs sampled) deep-eql x 35,312 ops/sec ±0.67% (91 runs sampled) ramda.equals x 12,054 ops/sec ±0.40% (91 runs sampled) util.isDeepStrictEqual x 46,440 ops/sec ±0.43% (90 runs sampled) assert.deepStrictEqual x 456 ops/sec ±0.71% (88 runs sampled) The fastest is fast-deep-equal ``` To run benchmark (requires node.js 6+): ```bash npm run benchmark ``` __Please note__: this benchmark runs against the available test cases. To choose the most performant library for your application, it is recommended to benchmark against your data and to NOT expect this benchmark to reflect the performance difference in your application. ## Enterprise support fast-deep-equal package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-deep-equal?utm_source=npm-fast-deep-equal&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. 
## License [MIT](https://github.com/epoberezkin/fast-deep-equal/blob/master/LICENSE) # ts-mixer [version-badge]: https://badgen.net/npm/v/ts-mixer [version-link]: https://npmjs.com/package/ts-mixer [build-badge]: https://img.shields.io/github/workflow/status/tannerntannern/ts-mixer/ts-mixer%20CI [build-link]: https://github.com/tannerntannern/ts-mixer/actions [ts-versions]: https://badgen.net/badge/icon/3.8,3.9,4.0,4.1,4.2?icon=typescript&label&list=| [node-versions]: https://badgen.net/badge/node/10%2C12%2C14/blue/?list=| [![npm version][version-badge]][version-link] [![github actions][build-badge]][build-link] [![TS Versions][ts-versions]][build-link] [![Node.js Versions][node-versions]][build-link] [![Minified Size](https://badgen.net/bundlephobia/min/ts-mixer)](https://bundlephobia.com/result?p=ts-mixer) [![Conventional Commits](https://badgen.net/badge/conventional%20commits/1.0.0/yellow)](https://conventionalcommits.org) ## Overview `ts-mixer` brings mixins to TypeScript. "Mixins" to `ts-mixer` are just classes, so you already know how to write them, and you can probably mix classes from your favorite library without trouble. The mixin problem is more nuanced than it appears. I've seen countless code snippets that work for certain situations, but fail in others. `ts-mixer` tries to take the best from all these solutions while accounting for the situations you might not have considered. [Quick start guide](#quick-start) ### Features * mixes plain classes * mixes classes that extend other classes * mixes classes that were mixed with `ts-mixer` * supports static properties * supports protected/private properties (the popular function-that-returns-a-class solution does not) * mixes abstract classes (with caveats [[1](#caveats)]) * mixes generic classes (with caveats [[2](#caveats)]) * supports class, method, and property decorators (with caveats [[3, 6](#caveats)]) * mostly supports the complexity presented by constructor functions (with caveats [[4](#caveats)]) * comes with an `instanceof`-like replacement (with caveats [[5, 6](#caveats)]) * [multiple mixing strategies](#settings) (ES6 proxies vs hard copy) ### Caveats 1. Mixing abstract classes requires a bit of a hack that may break in future versions of TypeScript. See [mixing abstract classes](#mixing-abstract-classes) below. 2. Mixing generic classes requires a more cumbersome notation, but it's still possible. See [mixing generic classes](#mixing-generic-classes) below. 3. Using decorators in mixed classes also requires a more cumbersome notation. See [mixing with decorators](#mixing-with-decorators) below. 4. ES6 made it impossible to use `.apply(...)` on class constructors (or any means of calling them without `new`), which makes it impossible for `ts-mixer` to pass the proper `this` to your constructors. This may or may not be an issue for your code, but there are options to work around it. See [dealing with constructors](#dealing-with-constructors) below. 5. `ts-mixer` does not support `instanceof` for mixins, but it does offer a replacement. See the [hasMixin function](#hasmixin) for more details. 6. Certain features (specifically, `@decorator` and `hasMixin`) make use of ES6 `Map`s, which means you must either use ES6+ or polyfill `Map` to use them. If you don't need these features, you should be fine without. 
## Quick Start ### Installation ``` $ npm install ts-mixer ``` or if you prefer [Yarn](https://yarnpkg.com): ``` $ yarn add ts-mixer ``` ### Basic Example ```typescript import { Mixin } from 'ts-mixer'; class Foo { protected makeFoo() { return 'foo'; } } class Bar { protected makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { public makeFooBar() { return this.makeFoo() + this.makeBar(); } } const fooBar = new FooBar(); console.log(fooBar.makeFooBar()); // "foobar" ``` ## Special Cases ### Mixing Abstract Classes Abstract classes, by definition, cannot be constructed, which means they cannot take on the type, `new(...args) => any`, and by extension, are incompatible with `ts-mixer`. BUT, you can "trick" TypeScript into giving you all the benefits of an abstract class without making it technically abstract. The trick is just some strategic `// @ts-ignore`'s: ```typescript import { Mixin } from 'ts-mixer'; // note that Foo is not marked as an abstract class class Foo { // @ts-ignore: "Abstract methods can only appear within an abstract class" public abstract makeFoo(): string; } class Bar { public makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { // we still get all the benefits of abstract classes here, because TypeScript // will still complain if this method isn't implemented public makeFoo() { return 'foo'; } } ``` Do note that while this does work quite well, it is a bit of a hack and I can't promise that it will continue to work in future TypeScript versions. ### Mixing Generic Classes Frustratingly, it is _impossible_ for generic parameters to be referenced in base class expressions. No matter what, you will eventually run into `Base class expressions cannot reference class type parameters.` The way to get around this is to leverage [declaration merging](https://www.typescriptlang.org/docs/handbook/declaration-merging.html), and a slightly different mixing function from ts-mixer: `mix`. It works exactly like `Mixin`, except it's a decorator, which means it doesn't affect the type information of the class being decorated. See it in action below: ```typescript import { mix } from 'ts-mixer'; class Foo<T> { public fooMethod(input: T): T { return input; } } class Bar<T> { public barMethod(input: T): T { return input; } } interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { } @mix(Foo, Bar) class FooBar<T1, T2> { public fooBarMethod(input1: T1, input2: T2) { return [this.fooMethod(input1), this.barMethod(input2)]; } } ``` Key takeaways from this example: * `interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { }` makes sure `FooBar` has the typing we want, thanks to declaration merging * `@mix(Foo, Bar)` wires things up "on the JavaScript side", since the interface declaration has nothing to do with runtime behavior. * The reason we have to use the `mix` decorator is that the typing produced by `Mixin(Foo, Bar)` would conflict with the typing of the interface. `mix` has no effect "on the TypeScript side," thus avoiding type conflicts. ### Mixing with Decorators Popular libraries such as [class-validator](https://github.com/typestack/class-validator) and [TypeORM](https://github.com/typeorm/typeorm) use decorators to add functionality. Unfortunately, `ts-mixer` has no way of knowing what these libraries do with the decorators behind the scenes. So if you want these decorators to be "inherited" with classes you plan to mix, you first have to wrap them with a special `decorate` function exported by `ts-mixer`. 
Here's an example using `class-validator`: ```typescript import { IsBoolean, IsIn, validate } from 'class-validator'; import { Mixin, decorate } from 'ts-mixer'; class Disposable { @decorate(IsBoolean()) // instead of @IsBoolean() isDisposed: boolean = false; } class Statusable { @decorate(IsIn(['red', 'green'])) // instead of @IsIn(['red', 'green']) status: string = 'green'; } class ExtendedObject extends Mixin(Disposable, Statusable) {} const extendedObject = new ExtendedObject(); extendedObject.status = 'blue'; validate(extendedObject).then(errors => { console.log(errors); }); ``` ### Dealing with Constructors As mentioned in the [caveats section](#caveats), ES6 disallowed calling constructor functions without `new`. This means that the only way for `ts-mixer` to mix instance properties is to instantiate each base class separately, then copy the instance properties into a common object. The consequence of this is that constructors mixed by `ts-mixer` will _not_ receive the proper `this`. **This very well may not be an issue for you!** It only means that your constructors need to be "mostly pure" in terms of how they handle `this`. Specifically, your constructors cannot produce [side effects](https://en.wikipedia.org/wiki/Side_effect_%28computer_science%29) involving `this`, _other than adding properties to `this`_ (the most common side effect in JavaScript constructors). If you simply cannot eliminate `this` side effects from your constructor, there is a workaround available: `ts-mixer` will automatically forward constructor parameters to a predesignated init function (`settings.initFunction`) if it's present on the class. Unlike constructors, functions can be called with an arbitrary `this`, so this predesignated init function _will_ have the proper `this`. Here's a basic example: ```typescript import { Mixin, settings } from 'ts-mixer'; settings.initFunction = 'init'; class Person { public static allPeople: Set<Person> = new Set(); protected init() { Person.allPeople.add(this); } } type PartyAffiliation = 'democrat' | 'republican'; class PoliticalParticipant { public static democrats: Set<PoliticalParticipant> = new Set(); public static republicans: Set<PoliticalParticipant> = new Set(); public party: PartyAffiliation; // note that these same args will also be passed to init function public constructor(party: PartyAffiliation) { this.party = party; } protected init(party: PartyAffiliation) { if (party === 'democrat') PoliticalParticipant.democrats.add(this); else PoliticalParticipant.republicans.add(this); } } class Voter extends Mixin(Person, PoliticalParticipant) {} const v1 = new Voter('democrat'); const v2 = new Voter('democrat'); const v3 = new Voter('republican'); const v4 = new Voter('republican'); ``` Note the above `.add(this)` statements. These would not work as expected if they were placed in the constructor instead, since `this` is not the same between the constructor and `init`, as explained above. ## Other Features ### hasMixin As mentioned above, `ts-mixer` does not support `instanceof` for mixins. While it is possible to implement [custom `instanceof` behavior](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/hasInstance), this library does not do so because it would require modifying the source classes, which is deliberately avoided. You can fill this missing functionality with `hasMixin(instance, mixinClass)` instead. 
See the below example:

```typescript
import { Mixin, hasMixin } from 'ts-mixer';

class Foo {}
class Bar {}
class FooBar extends Mixin(Foo, Bar) {}

const instance = new FooBar();

// doesn't work with instanceof...
console.log(instance instanceof FooBar) // true
console.log(instance instanceof Foo)    // false
console.log(instance instanceof Bar)    // false

// but everything works nicely with hasMixin!
console.log(hasMixin(instance, FooBar)) // true
console.log(hasMixin(instance, Foo))    // true
console.log(hasMixin(instance, Bar))    // true
```

`hasMixin(instance, mixinClass)` will work anywhere that `instance instanceof mixinClass` works. Additionally, like `instanceof`, you get the same [type narrowing benefits](https://www.typescriptlang.org/docs/handbook/advanced-types.html#instanceof-type-guards):

```typescript
if (hasMixin(instance, Foo)) {
    // inferred type of instance is "Foo"
}
if (hasMixin(instance, Bar)) {
    // inferred type of instance is "Bar"
}
```

## Settings

ts-mixer has multiple strategies for mixing classes, which can be configured by modifying `settings` from ts-mixer. For example:

```typescript
import { settings, Mixin } from 'ts-mixer';

settings.prototypeStrategy = 'proxy';

// then use `Mixin` as normal...
```

### `settings.prototypeStrategy`
* Determines how ts-mixer will mix class prototypes together
* Possible values:
    - `'copy'` (default) - Copies all methods from the classes being mixed into a new prototype object. (This will include all methods up the prototype chains as well.) This is the default for ES5 compatibility, but it has the downside of stale references. For example, if you mix `Foo` and `Bar` to make `FooBar`, then redefine a method on `Foo`, `FooBar` will not have the latest methods from `Foo`. If this is not a concern for you, `'copy'` is the best value for this setting.
    - `'proxy'` - Uses an ES6 Proxy to "soft mix" prototypes. Unlike `'copy'`, updates to the base classes _will_ be reflected in the mixed class, which may be desirable. The downside is that method access is not as performant, nor is it ES5 compatible.

### `settings.staticsStrategy`
* Determines how static properties are inherited
* Possible values:
    - `'copy'` (default) - Simply copies all properties (minus `prototype`) from the base classes/constructor functions onto the mixed class. Like `settings.prototypeStrategy = 'copy'`, this strategy also suffers from stale references, but shouldn't be a concern if you don't redefine static methods after mixing.
    - `'proxy'` - Similar to `settings.prototypeStrategy`, proxies static method access to the base classes. Has the same benefits/downsides.
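To make the stale-reference trade-off of the default `'copy'` strategy concrete, here is a small hedged sketch (plain JavaScript with `require`; it simply illustrates the behaviour described above):

```javascript
// Sketch: with 'copy', methods are captured at mix time, so later changes to a
// source class's prototype are not seen by the mixed class.
const { Mixin, settings } = require('ts-mixer');

settings.prototypeStrategy = 'copy'; // the default

class Foo { hello() { return 'hi'; } }
class Bar {}
class FooBar extends Mixin(Foo, Bar) {}

// Redefine a method on Foo *after* mixing:
Foo.prototype.hello = function () { return 'hello'; };

console.log(new Foo().hello());    // 'hello' (Foo itself sees the update)
console.log(new FooBar().hello()); // 'hi'    (FooBar kept the copy made at mix time)

// With settings.prototypeStrategy = 'proxy' set *before* mixing,
// FooBar would pick up the redefined method instead.
```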
### `settings.initFunction` * If set, `ts-mixer` will automatically call the function with this name upon construction * Possible values: - `null` (default) - disables the behavior - a string - function name to call upon construction * Read more about why you would want this in [dealing with constructors](#dealing-with-constructors) ### `settings.decoratorInheritance` * Determines how decorators are inherited from classes passed to `Mixin(...)` * Possible values: - `'deep'` (default) - Deeply inherits decorators from all given classes and their ancestors - `'direct'` - Only inherits decorators defined directly on the given classes - `'none'` - Skips decorator inheritance # Author Tanner Nielsen <tannerntannern@gmail.com> * Website - [tannernielsen.com](http://tannernielsen.com) * Github - [tannerntannern](https://github.com/tannerntannern) # is-extglob [![NPM version](https://img.shields.io/npm/v/is-extglob.svg?style=flat)](https://www.npmjs.com/package/is-extglob) [![NPM downloads](https://img.shields.io/npm/dm/is-extglob.svg?style=flat)](https://npmjs.org/package/is-extglob) [![Build Status](https://img.shields.io/travis/jonschlinkert/is-extglob.svg?style=flat)](https://travis-ci.org/jonschlinkert/is-extglob) > Returns true if a string has an extglob. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-extglob ``` ## Usage ```js var isExtglob = require('is-extglob'); ``` **True** ```js isExtglob('?(abc)'); isExtglob('@(abc)'); isExtglob('!(abc)'); isExtglob('*(abc)'); isExtglob('+(abc)'); ``` **False** Escaped extglobs: ```js isExtglob('\\?(abc)'); isExtglob('\\@(abc)'); isExtglob('\\!(abc)'); isExtglob('\\*(abc)'); isExtglob('\\+(abc)'); ``` Everything else... ```js isExtglob('foo.js'); isExtglob('!foo.js'); isExtglob('*.js'); isExtglob('**/abc.js'); isExtglob('abc/*.js'); isExtglob('abc/(aaa|bbb).js'); isExtglob('abc/[a-z].js'); isExtglob('abc/{a,b}.js'); isExtglob('abc/?.js'); isExtglob('abc.js'); isExtglob('abc/def/ghi.js'); ``` ## History **v2.0** Adds support for escaping. Escaped exglobs no longer return true. ## About ### Related projects * [has-glob](https://www.npmjs.com/package/has-glob): Returns `true` if an array has a glob pattern. | [homepage](https://github.com/jonschlinkert/has-glob "Returns `true` if an array has a glob pattern.") * [is-glob](https://www.npmjs.com/package/is-glob): Returns `true` if the given string looks like a glob pattern or an extglob pattern… [more](https://github.com/jonschlinkert/is-glob) | [homepage](https://github.com/jonschlinkert/is-glob "Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a bet") * [micromatch](https://www.npmjs.com/package/micromatch): Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | [homepage](https://github.com/jonschlinkert/micromatch "Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch.") ### Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Building docs _(This document was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme) (a [verb](https://github.com/verbose/verb) generator), please don't edit the readme directly. 
Any changes to the readme must be made in [.verb.md](.verb.md).)_ To generate the readme and API documentation with [verb](https://github.com/verbose/verb): ```sh $ npm install -g verb verb-generate-readme && verb ``` ### Running tests Install dev dependencies: ```sh $ npm install -d && npm test ``` ### Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](http://twitter.com/jonschlinkert) ### License Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT license](https://github.com/jonschlinkert/is-extglob/blob/master/LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.1.31, on October 12, 2016._ The AssemblyScript Runtime ========================== The runtime provides the functionality necessary to dynamically allocate and deallocate memory of objects, arrays and buffers, as well as collect garbage that is no longer used. The current implementation is either a Two-Color Mark & Sweep (TCMS) garbage collector that must be called manually when the execution stack is unwound or an Incremental Tri-Color Mark & Sweep (ITCMS) garbage collector that is fully automated with a shadow stack, implemented on top of a Two-Level Segregate Fit (TLSF) memory manager. It's not designed to be the fastest of its kind, but intentionally focuses on simplicity and ease of integration until we can replace it with the real deal, i.e. Wasm GC. Interface --------- ### Garbage collector / `--exportRuntime` * **__new**(size: `usize`, id: `u32` = 0): `usize`<br /> Dynamically allocates a GC object of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. GC-allocated objects cannot be used with `__realloc` and `__free`. * **__pin**(ptr: `usize`): `usize`<br /> Pins the object pointed to by `ptr` externally so it and its directly reachable members and indirectly reachable objects do not become garbage collected. * **__unpin**(ptr: `usize`): `void`<br /> Unpins the object pointed to by `ptr` externally so it can become garbage collected. * **__collect**(): `void`<br /> Performs a full garbage collection. ### Internals * **__alloc**(size: `usize`): `usize`<br /> Dynamically allocates a chunk of memory of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. * **__realloc**(ptr: `usize`, size: `usize`): `usize`<br /> Dynamically changes the size of a chunk of memory, possibly moving it to a new address. * **__free**(ptr: `usize`): `void`<br /> Frees a dynamically allocated chunk of memory by its address. * **__renew**(ptr: `usize`, size: `usize`): `usize`<br /> Like `__realloc`, but for `__new`ed GC objects. * **__link**(parentPtr: `usize`, childPtr: `usize`, expectMultiple: `bool`): `void`<br /> Introduces a link from a parent object to a child object, i.e. upon `parent.field = child`. * **__visit**(ptr: `usize`, cookie: `u32`): `void`<br /> Concrete visitor implementation called during traversal. Cookie can be used to indicate one of multiple operations. * **__visit_globals**(cookie: `u32`): `void`<br /> Calls `__visit` on each global that is of a managed type. * **__visit_members**(ptr: `usize`, cookie: `u32`): `void`<br /> Calls `__visit` on each member of the object pointed to by `ptr`. * **__typeinfo**(id: `u32`): `RTTIFlags`<br /> Obtains the runtime type information for objects with the specified runtime id. 
Runtime type information is a set of flags indicating whether a type is managed, an array or similar, and what the relevant alignments when creating an instance externally are etc. * **__instanceof**(ptr: `usize`, classId: `u32`): `bool`<br /> Tests if the object pointed to by `ptr` is an instance of the specified class id. ITCMS / `--runtime incremental` ----- The Incremental Tri-Color Mark & Sweep garbage collector maintains a separate shadow stack of managed values in the background to achieve full automation. Maintaining another stack introduces some overhead compared to the simpler Two-Color Mark & Sweep garbage collector, but makes it independent of whether the execution stack is unwound or not when it is invoked, so the garbage collector can run interleaved with the program. There are several constants one can experiment with to tweak ITCMS's automation: * `--use ASC_GC_GRANULARITY=1024`<br /> How often to interrupt. The default of 1024 means "interrupt each 1024 bytes allocated". * `--use ASC_GC_STEPFACTOR=200`<br /> How long to interrupt. The default of 200% means "run at double the speed of allocations". * `--use ASC_GC_IDLEFACTOR=200`<br /> How long to idle. The default of 200% means "wait for memory to double before kicking in again". * `--use ASC_GC_MARKCOST=1`<br /> How costly it is to mark one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. * `--use ASC_GC_SWEEPCOST=10`<br /> How costly it is to sweep one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. TCMS / `--runtime minimal` ---- If automation and low pause times aren't strictly necessary, using the Two-Color Mark & Sweep garbage collector instead by invoking collection manually at appropriate times when the execution stack is unwound may be more performant as it simpler and has less overhead. The execution stack is typically unwound when invoking the collector externally, at a place that is not indirectly called from Wasm. STUB / `--runtime stub` ---- The stub is a maximally minimal runtime substitute, consisting of a simple and fast bump allocator with no means of freeing up memory again, except when freeing the respective most recently allocated object on top of the bump. Useful where memory is not a concern, and/or where it is sufficient to destroy the whole module including any potential garbage after execution. See also: [Garbage collection](https://www.assemblyscript.org/garbage-collection.html) # levn [![Build Status](https://travis-ci.org/gkz/levn.png)](https://travis-ci.org/gkz/levn) <a name="levn" /> __Light ECMAScript (JavaScript) Value Notation__ Levn is a library which allows you to parse a string into a JavaScript value based on an expected type. It is meant for short amounts of human entered data (eg. config files, command line arguments). Levn aims to concisely describe JavaScript values in text, and allow for the extraction and validation of those values. Levn uses [type-check](https://github.com/gkz/type-check) for its type format, and to validate the results. MIT license. Version 0.4.1. __How is this different than JSON?__ levn is meant to be written by humans only, is (due to the previous point) much more concise, can be validated against supplied types, has regex and date literals, and can easily be extended with custom types. On the other hand, it is probably slower and thus less efficient at transporting large amounts of data, which is fine since this is not its purpose. 
    npm install levn

For updates on levn, [follow me on twitter](https://twitter.com/gkzahariev).

## Quick Examples

```js
var parse = require('levn').parse;
parse('Number', '2');      // 2
parse('String', '2');      // '2'
parse('String', 'levn');   // 'levn'
parse('String', 'a b');    // 'a b'
parse('Boolean', 'true');  // true

parse('Date', '#2011-11-11#'); // (Date object)
parse('Date', '2011-11-11');   // (Date object)
parse('RegExp', '/[a-z]/gi');  // /[a-z]/gi
parse('RegExp', 're');         // /re/
parse('Int', '2');             // 2

parse('Number | String', 'str'); // 'str'
parse('Number | String', '2');   // 2

parse('[Number]', '[1,2,3]');              // [1,2,3]
parse('(String, Boolean)', '(hi, false)'); // ['hi', false]
parse('{a: String, b: Number}', '{a: str, b: 2}'); // {a: 'str', b: 2}

// at the top level, you can omit surrounding delimiters
parse('[Number]', '1,2,3');              // [1,2,3]
parse('(String, Boolean)', 'hi, false'); // ['hi', false]
parse('{a: String, b: Number}', 'a: str, b: 2'); // {a: 'str', b: 2}

// wildcard - auto choose type
parse('*', '[hi,(null,[42]),{k: true}]'); // ['hi', [null, [42]], {k: true}]
```

## Usage

`require('levn');` returns an object that exposes three properties. `VERSION` is the current version of the library as a string. `parse` and `parsedTypeParse` are functions.

```js
// parse(type, input, options);
parse('[Number]', '1,2,3'); // [1, 2, 3]

// parsedTypeParse(parsedType, input, options);
var parsedType = require('type-check').parseType('[Number]');
parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3]
```

### parse(type, input, options)

`parse` casts the string `input` into a JavaScript value according to the specified `type` in the [type format](https://github.com/gkz/type-check#type-format) (and taking into account the optional `options`) and returns the resulting JavaScript value.

##### arguments
* type - `String` - the type written in the [type format](https://github.com/gkz/type-check#type-format) which to check against
* input - `String` - the value written in the [levn format](#levn-format)
* options - `Maybe Object` - an optional parameter specifying additional [options](#options)

##### returns
`*` - the resulting JavaScript value

##### example
```js
parse('[Number]', '1,2,3'); // [1, 2, 3]
```

### parsedTypeParse(parsedType, input, options)

`parsedTypeParse` casts the string `input` into a JavaScript value according to the specified `type` which has already been parsed (and taking into account the optional `options`) and returns the resulting JavaScript value. You can parse a type using the [type-check](https://github.com/gkz/type-check) library's `parseType` function.

##### arguments
* type - `Object` - the type in the parsed type format which to check against
* input - `String` - the value written in the [levn format](#levn-format)
* options - `Maybe Object` - an optional parameter specifying additional [options](#options)

##### returns
`*` - the resulting JavaScript value

##### example
```js
var parsedType = require('type-check').parseType('[Number]');
parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3]
```

## Levn Format

Levn can use the type information you provide to choose the appropriate value to produce from the input. For the same input, it will choose a different output value depending on the type provided. For example, `parse('Number', '2')` will produce the number `2`, but `parse('String', '2')` will produce the string `"2"`.
If you do not provide type information, and simply use `*`, levn will parse the input according to the unambiguous "explicit" mode, which we will now detail - you can also set the `explicit` option to true manually in the [options](#options).

* `"string"`, `'string'` are parsed as a String, eg. `"a msg"` is `"a msg"`
* `#date#` is parsed as a Date, eg. `#2011-11-11#` is `new Date('2011-11-11')`
* `/regexp/flags` is parsed as a RegExp, eg. `/re/gi` is `/re/gi`
* `undefined`, `null`, `NaN`, `true`, and `false` are all their JavaScript equivalents
* `[element1, element2, etc]` is an Array, and the casting procedure is recursively applied to each element. Eg. `[1,2,3]` is `[1,2,3]`.
* `(element1, element2, etc)` is a tuple, and the casting procedure is recursively applied to each element. Eg. `(1, a)` is `(1, a)` (is `[1, 'a']`).
* `{key1: val1, key2: val2, ...}` is an Object, and the casting procedure is recursively applied to each property. Eg. `{a: 1, b: 2}` is `{a: 1, b: 2}`.
* Any text which does not fall under the above, and which does not contain special characters (`[``]``(``)``{``}``:``,`) is a string, eg. `$12- blah` is `"$12- blah"`.

If you do provide type information, you can make your input more concise as the program already has some information about what it expects. Please see the [type format](https://github.com/gkz/type-check#type-format) section of [type-check](https://github.com/gkz/type-check) for more information about how to specify types. There are some rules about what levn can do with the information:

* If a String is expected, and only a String, all characters of the input (including any special ones) will become part of the output. Eg. `[({})]` is `"[({})]"`, and `"hi"` is `'"hi"'`.
* If a Date is expected, the surrounding `#` can be omitted from date literals. Eg. `2011-11-11` is `new Date('2011-11-11')`.
* If a RegExp is expected, no flags need to be specified, and if the regex does not use any of the special characters, the opening and closing `/` can be omitted - this will have the effect of setting the source of the regex to the input. Eg. `regex` is `/regex/`.
* If an Array is expected, and it is the root node (at the top level), the opening `[` and closing `]` can be omitted. Eg. `1,2,3` is `[1,2,3]`.
* If a tuple is expected, and it is the root node (at the top level), the opening `(` and closing `)` can be omitted. Eg. `1, a` is `(1, a)` (is `[1, 'a']`).
* If an Object is expected, and it is the root node (at the top level), the opening `{` and closing `}` can be omitted. Eg. `a: 1, b: 2` is `{a: 1, b: 2}`.

If you list multiple types (eg. `Number | String`), it will first attempt to cast to the first type and then validate - if the validation fails it will move on to the next type and so forth, left to right. You must be careful as some types will succeed with any input, such as String. Thus, put String at the end of your list. In non-explicit mode, Date and RegExp will succeed with a large variety of input - also be careful with these and list them near the end if not last in your list.

Whitespace between special characters and elements is inconsequential.

## Options

Options is an object. It is an optional parameter to the `parse` and `parsedTypeParse` functions.

### Explicit

A `Boolean`. By default it is `false`.

__Example:__

```js
parse('RegExp', 're', {explicit: false}); // /re/
parse('RegExp', 're', {explicit: true});  // Error: ... does not type check...
parse('RegExp | String', 're', {explicit: true}); // 're'
```

`explicit` sets whether to be in explicit mode or not. Using `*` automatically activates explicit mode. For more information, read the [levn format](#levn-format) section.

### customTypes

An `Object`. Empty `{}` by default.

__Example:__

```js
var options = {
  customTypes: {
    Even: {
      typeOf: 'Number',
      validate: function (x) { return x % 2 === 0; },
      cast: function (x) { return {type: 'Just', value: parseInt(x)}; }
    }
  }
};
parse('Even', '2', options); // 2
parse('Even', '3', options); // Error: Value: "3" does not type check...
```

__Another Example:__

```js
function Person(name, age){ this.name = name; this.age = age; }
var options = {
  customTypes: {
    Person: {
      typeOf: 'Object',
      validate: function (x) { return x instanceof Person; },
      cast: function (value, options, typesCast) {
        var name, age;
        if ({}.toString.call(value).slice(8, -1) !== 'Object') {
          return {type: 'Nothing'};
        }
        name = typesCast(value.name, [{type: 'String'}], options);
        age = typesCast(value.age, [{type: 'Number'}], options);
        return {type: 'Just', value: new Person(name, age)};
      }
    }
  }
};
parse('Person', '{name: Laura, age: 25}', options); // Person {name: 'Laura', age: 25}
```

`customTypes` is an object whose keys are the names of the types, and whose values are an object with three properties, `typeOf`, `validate`, and `cast`. For more information about `typeOf` and `validate`, please see the [custom types](https://github.com/gkz/type-check#custom-types) section of type-check.

`cast` is a function which receives three arguments: the value under question, options, and the typesCast function. In `cast`, attempt to cast the value into the specified type. If you are successful, return an object in the format `{type: 'Just', value: CAST-VALUE}`; if you know it won't work, return `{type: 'Nothing'}`. You can use the `typesCast` function to cast any child values. Remember to pass `options` to it. In your function you can also check for `options.explicit` and act accordingly.

## Technical About

`levn` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It uses [type-check](https://github.com/gkz/type-check) to both parse types and validate values. It also uses the [prelude.ls](http://preludels.com/) library.

# safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url]

[travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg
[travis-url]: https://travis-ci.org/feross/safe-buffer
[npm-image]: https://img.shields.io/npm/v/safe-buffer.svg
[npm-url]: https://npmjs.org/package/safe-buffer
[downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg
[downloads-url]: https://npmjs.org/package/safe-buffer
[standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg
[standard-url]: https://standardjs.com

#### Safer Node.js Buffer API

**Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.**

**Uses the built-in implementation when available.**

## install

```
npm install safe-buffer
```

## usage

The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`.
You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. 
```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. 
The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. 
It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. 
**When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. 
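Tying this back to the vulnerable hex service shown earlier, here is one possible hardened variant (a hedged sketch, not an official example): it rejects non-string input and uses the explicit `Buffer.from` API from `safe-buffer`.

```js
var http = require('http')
var Buffer = require('safe-buffer').Buffer

var server = http.createServer(function (req, res) {
  var data = ''
  req.setEncoding('utf8')
  req.on('data', function (chunk) { data += chunk })
  req.on('end', function () {
    var body = JSON.parse(data)
    if (typeof body.str !== 'string') {
      // refuse anything that is not a string instead of handing it to Buffer
      res.statusCode = 400
      return res.end('"str" must be a string')
    }
    // Buffer.from(string) never returns uninitialized memory
    res.end(Buffer.from(body.str, 'utf8').toString('hex'))
  })
})
server.listen(8080)
```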
## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) long.js ======= A Long class for representing a 64 bit two's-complement integer value derived from the [Closure Library](https://github.com/google/closure-library) for stand-alone use and extended with unsigned support. [![Build Status](https://travis-ci.org/dcodeIO/long.js.svg)](https://travis-ci.org/dcodeIO/long.js) Background ---------- As of [ECMA-262 5th Edition](http://ecma262-5.com/ELS5_HTML.htm#Section_8.5), "all the positive and negative integers whose magnitude is no greater than 2<sup>53</sup> are representable in the Number type", which is "representing the doubleprecision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic". The [maximum safe integer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) in JavaScript is 2<sup>53</sup>-1. Example: 2<sup>64</sup>-1 is 1844674407370955**1615** but in JavaScript it evaluates to 1844674407370955**2000**. Furthermore, bitwise operators in JavaScript "deal only with integers in the range −2<sup>31</sup> through 2<sup>31</sup>−1, inclusive, or in the range 0 through 2<sup>32</sup>−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2<sup>32</sup> integer values." In some use cases, however, it is required to be able to reliably work with and perform bitwise operations on the full 64 bits. This is where long.js comes into play. Usage ----- The class is compatible with CommonJS and AMD loaders and is exposed globally as `Long` if neither is available. ```javascript var Long = require("long"); var longVal = new Long(0xFFFFFFFF, 0x7FFFFFFF); console.log(longVal.toString()); ... ``` API --- ### Constructor * new **Long**(low: `number`, high: `number`, unsigned?: `boolean`)<br /> Constructs a 64 bit two's-complement integer, given its low and high 32 bit values as *signed* integers. See the from* functions below for more convenient ways of constructing Longs. ### Fields * Long#**low**: `number`<br /> The low 32 bits as a signed value. * Long#**high**: `number`<br /> The high 32 bits as a signed value. * Long#**unsigned**: `boolean`<br /> Whether unsigned or not. ### Constants * Long.**ZERO**: `Long`<br /> Signed zero. * Long.**ONE**: `Long`<br /> Signed one. * Long.**NEG_ONE**: `Long`<br /> Signed negative one. * Long.**UZERO**: `Long`<br /> Unsigned zero. * Long.**UONE**: `Long`<br /> Unsigned one. * Long.**MAX_VALUE**: `Long`<br /> Maximum signed value. * Long.**MIN_VALUE**: `Long`<br /> Minimum signed value. * Long.**MAX_UNSIGNED_VALUE**: `Long`<br /> Maximum unsigned value. 
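As a quick illustration of the precision problem described in the background section, using only the constants and `from*` helpers listed above (a hedged sketch; expected outputs are noted in the comments):

```javascript
var Long = require("long");

// 2^64-1 cannot be represented exactly as a JS number, but MAX_UNSIGNED_VALUE can hold it:
console.log(Long.MAX_UNSIGNED_VALUE.toString()); // '18446744073709551615'

// Beyond Number.MAX_SAFE_INTEGER, plain numbers silently lose precision;
// going through a string keeps every digit:
var big = Long.fromString("9007199254740993"); // 2^53 + 1
console.log(big.toString());   // '9007199254740993'
console.log(9007199254740993); // 9007199254740992 (rounded by the Number type)
```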
### Utility * Long.**isLong**(obj: `*`): `boolean`<br /> Tests if the specified object is a Long. * Long.**fromBits**(lowBits: `number`, highBits: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the 64 bit integer that comes by concatenating the given low and high bits. Each is assumed to use 32 bits. * Long.**fromBytes**(bytes: `number[]`, unsigned?: `boolean`, le?: `boolean`): `Long`<br /> Creates a Long from its byte representation. * Long.**fromBytesLE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its little endian byte representation. * Long.**fromBytesBE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its big endian byte representation. * Long.**fromInt**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given 32 bit integer value. * Long.**fromNumber**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given value, provided that it is a finite number. Otherwise, zero is returned. * Long.**fromString**(str: `string`, unsigned?: `boolean`, radix?: `number`)<br /> Long.**fromString**(str: `string`, radix: `number`)<br /> Returns a Long representation of the given string, written using the specified radix. * Long.**fromValue**(val: `*`, unsigned?: `boolean`): `Long`<br /> Converts the specified value to a Long using the appropriate from* function for its type. ### Methods * Long#**add**(addend: `Long | number | string`): `Long`<br /> Returns the sum of this and the specified Long. * Long#**and**(other: `Long | number | string`): `Long`<br /> Returns the bitwise AND of this Long and the specified. * Long#**compare**/**comp**(other: `Long | number | string`): `number`<br /> Compares this Long's value with the specified's. Returns `0` if they are the same, `1` if the this is greater and `-1` if the given one is greater. * Long#**divide**/**div**(divisor: `Long | number | string`): `Long`<br /> Returns this Long divided by the specified. * Long#**equals**/**eq**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value equals the specified's. * Long#**getHighBits**(): `number`<br /> Gets the high 32 bits as a signed integer. * Long#**getHighBitsUnsigned**(): `number`<br /> Gets the high 32 bits as an unsigned integer. * Long#**getLowBits**(): `number`<br /> Gets the low 32 bits as a signed integer. * Long#**getLowBitsUnsigned**(): `number`<br /> Gets the low 32 bits as an unsigned integer. * Long#**getNumBitsAbs**(): `number`<br /> Gets the number of bits needed to represent the absolute value of this Long. * Long#**greaterThan**/**gt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than the specified's. * Long#**greaterThanOrEqual**/**gte**/**ge**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than or equal the specified's. * Long#**isEven**(): `boolean`<br /> Tests if this Long's value is even. * Long#**isNegative**(): `boolean`<br /> Tests if this Long's value is negative. * Long#**isOdd**(): `boolean`<br /> Tests if this Long's value is odd. * Long#**isPositive**(): `boolean`<br /> Tests if this Long's value is positive. * Long#**isZero**/**eqz**(): `boolean`<br /> Tests if this Long's value equals zero. * Long#**lessThan**/**lt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than the specified's. 
* Long#**lessThanOrEqual**/**lte**/**le**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than or equal the specified's. * Long#**modulo**/**mod**/**rem**(divisor: `Long | number | string`): `Long`<br /> Returns this Long modulo the specified. * Long#**multiply**/**mul**(multiplier: `Long | number | string`): `Long`<br /> Returns the product of this and the specified Long. * Long#**negate**/**neg**(): `Long`<br /> Negates this Long's value. * Long#**not**(): `Long`<br /> Returns the bitwise NOT of this Long. * Long#**notEquals**/**neq**/**ne**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value differs from the specified's. * Long#**or**(other: `Long | number | string`): `Long`<br /> Returns the bitwise OR of this Long and the specified. * Long#**shiftLeft**/**shl**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits shifted to the left by the given amount. * Long#**shiftRight**/**shr**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits arithmetically shifted to the right by the given amount. * Long#**shiftRightUnsigned**/**shru**/**shr_u**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits logically shifted to the right by the given amount. * Long#**subtract**/**sub**(subtrahend: `Long | number | string`): `Long`<br /> Returns the difference of this and the specified Long. * Long#**toBytes**(le?: `boolean`): `number[]`<br /> Converts this Long to its byte representation. * Long#**toBytesLE**(): `number[]`<br /> Converts this Long to its little endian byte representation. * Long#**toBytesBE**(): `number[]`<br /> Converts this Long to its big endian byte representation. * Long#**toInt**(): `number`<br /> Converts the Long to a 32 bit integer, assuming it is a 32 bit integer. * Long#**toNumber**(): `number`<br /> Converts the Long to a the nearest floating-point representation of this value (double, 53 bit mantissa). * Long#**toSigned**(): `Long`<br /> Converts this Long to signed. * Long#**toString**(radix?: `number`): `string`<br /> Converts the Long to a string written in the specified radix. * Long#**toUnsigned**(): `Long`<br /> Converts this Long to unsigned. * Long#**xor**(other: `Long | number | string`): `Long`<br /> Returns the bitwise XOR of this Long and the given one. Building -------- To build an UMD bundle to `dist/long.js`, run: ``` $> npm install $> npm run build ``` Running the [tests](./tests): ``` $> npm test ``` # lodash.clonedeep v4.5.0 The [lodash](https://lodash.com/) method `_.cloneDeep` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.clonedeep ``` In Node.js: ```js var cloneDeep = require('lodash.clonedeep'); ``` See the [documentation](https://lodash.com/docs#cloneDeep) or [package source](https://github.com/lodash/lodash/blob/4.5.0-npm-packages/lodash.clonedeep) for more details. # tr46.js > An implementation of the [Unicode TR46 specification](http://unicode.org/reports/tr46/). ## Installation [Node.js](http://nodejs.org) `>= 6` is required. To install, type this at the command line: ```shell npm install tr46 ``` ## API ### `toASCII(domainName[, options])` Converts a string of Unicode symbols to a case-folded Punycode string of ASCII symbols. 
Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`processingOption`](#processingOption) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) * [`verifyDNSLength`](#verifyDNSLength) ### `toUnicode(domainName[, options])` Converts a case-folded Punycode string of ASCII symbols to a string of Unicode symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) ## Options ### `checkBidi` Type: `Boolean` Default value: `false` When set to `true`, any bi-directional text within the input will be checked for validation. ### `checkHyphens` Type: `Boolean` Default value: `false` When set to `true`, the positions of any hyphen characters within the input will be checked for validation. ### `checkJoiners` Type: `Boolean` Default value: `false` When set to `true`, any word joiner characters within the input will be checked for validation. ### `processingOption` Type: `String` Default value: `"nontransitional"` When set to `"transitional"`, symbols within the input will be validated according to the older IDNA2003 protocol. When set to `"nontransitional"`, the current IDNA2008 protocol will be used. ### `useSTD3ASCIIRules` Type: `Boolean` Default value: `false` When set to `true`, input will be validated according to [STD3 Rules](http://unicode.org/reports/tr46/#STD3_Rules). ### `verifyDNSLength` Type: `Boolean` Default value: `false` When set to `true`, the length of each DNS label within the input will be checked for validation. # Punycode.js [![Build status](https://travis-ci.org/bestiejs/punycode.js.svg?branch=master)](https://travis-ci.org/bestiejs/punycode.js) [![Code coverage status](http://img.shields.io/codecov/c/github/bestiejs/punycode.js.svg)](https://codecov.io/gh/bestiejs/punycode.js) [![Dependency status](https://gemnasium.com/bestiejs/punycode.js.svg)](https://gemnasium.com/bestiejs/punycode.js) Punycode.js is a robust Punycode converter that fully complies to [RFC 3492](https://tools.ietf.org/html/rfc3492) and [RFC 5891](https://tools.ietf.org/html/rfc5891). This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm: * [The C example code from RFC 3492](https://tools.ietf.org/html/rfc3492#appendix-C) * [`punycode.c` by _Markus W. Scherer_ (IBM)](http://opensource.apple.com/source/ICU/ICU-400.42/icuSources/common/punycode.c) * [`punycode.c` by _Ben Noordhuis_](https://github.com/bnoordhuis/punycode/blob/master/punycode.c) * [JavaScript implementation by _some_](http://stackoverflow.com/questions/183485/can-anyone-recommend-a-good-free-javascript-for-punycode-to-unicode-conversion/301287#301287) * [`punycode.js` by _Ben Noordhuis_](https://github.com/joyent/node/blob/426298c8c1c0d5b5224ac3658c41e7c2a3fe9377/lib/punycode.js) (note: [not fully compliant](https://github.com/joyent/node/issues/2072)) This project was [bundled](https://github.com/joyent/node/blob/master/lib/punycode.js) with Node.js from [v0.6.2+](https://github.com/joyent/node/compare/975f1930b1...61e796decc) until [v7](https://github.com/nodejs/node/pull/7941) (soft-deprecated). The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. 
For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see [v1.4.1](https://github.com/bestiejs/punycode.js/releases/tag/v1.4.1). ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install punycode --save ``` In [Node.js](https://nodejs.org/): ```js const punycode = require('punycode'); ``` ## API ### `punycode.decode(string)` Converts a Punycode string of ASCII symbols to a string of Unicode symbols. ```js // decode domain name parts punycode.decode('maana-pta'); // 'mañana' punycode.decode('--dqo34k'); // '☃-⌘' ``` ### `punycode.encode(string)` Converts a string of Unicode symbols to a Punycode string of ASCII symbols. ```js // encode domain name parts punycode.encode('mañana'); // 'maana-pta' punycode.encode('☃-⌘'); // '--dqo34k' ``` ### `punycode.toUnicode(input)` Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode. ```js // decode domain names punycode.toUnicode('xn--maana-pta.com'); // → 'mañana.com' punycode.toUnicode('xn----dqo34k.com'); // → '☃-⌘.com' // decode email addresses punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'); // → 'джумла@джpумлатест.bрфa' ``` ### `punycode.toASCII(input)` Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII. ```js // encode domain names punycode.toASCII('mañana.com'); // → 'xn--maana-pta.com' punycode.toASCII('☃-⌘.com'); // → 'xn----dqo34k.com' // encode email addresses punycode.toASCII('джумла@джpумлатест.bрфa'); // → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq' ``` ### `punycode.ucs2` #### `punycode.ucs2.decode(string)` Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16. ```js punycode.ucs2.decode('abc'); // → [0x61, 0x62, 0x63] // surrogate pair for U+1D306 TETRAGRAM FOR CENTRE: punycode.ucs2.decode('\uD834\uDF06'); // → [0x1D306] ``` #### `punycode.ucs2.encode(codePoints)` Creates a string based on an array of numeric code point values. ```js punycode.ucs2.encode([0x61, 0x62, 0x63]); // → 'abc' punycode.ucs2.encode([0x1D306]); // → '\uD834\uDF06' ``` ### `punycode.version` A string representing the current Punycode.js version number. ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License Punycode.js is available under the [MIT](https://mths.be/mit) license. 
# axios [![npm version](https://img.shields.io/npm/v/axios.svg?style=flat-square)](https://www.npmjs.org/package/axios) [![build status](https://img.shields.io/travis/axios/axios/master.svg?style=flat-square)](https://travis-ci.org/axios/axios) [![code coverage](https://img.shields.io/coveralls/mzabriskie/axios.svg?style=flat-square)](https://coveralls.io/r/mzabriskie/axios) [![install size](https://packagephobia.now.sh/badge?p=axios)](https://packagephobia.now.sh/result?p=axios) [![npm downloads](https://img.shields.io/npm/dm/axios.svg?style=flat-square)](http://npm-stat.com/charts.html?package=axios) [![gitter chat](https://img.shields.io/gitter/room/mzabriskie/axios.svg?style=flat-square)](https://gitter.im/mzabriskie/axios) [![code helpers](https://www.codetriage.com/axios/axios/badges/users.svg)](https://www.codetriage.com/axios/axios) Promise based HTTP client for the browser and node.js ## Features - Make [XMLHttpRequests](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) from the browser - Make [http](http://nodejs.org/api/http.html) requests from node.js - Supports the [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) API - Intercept request and response - Transform request and response data - Cancel requests - Automatic transforms for JSON data - Client side support for protecting against [XSRF](http://en.wikipedia.org/wiki/Cross-site_request_forgery) ## Browser Support ![Chrome](https://raw.github.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.github.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![Safari](https://raw.github.com/alrra/browser-logos/master/src/safari/safari_48x48.png) | ![Opera](https://raw.github.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Edge](https://raw.github.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![IE](https://raw.github.com/alrra/browser-logos/master/src/archive/internet-explorer_9-11/internet-explorer_9-11_48x48.png) | --- | --- | --- | --- | --- | --- | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | 11 ✔ | [![Browser Matrix](https://saucelabs.com/open_sauce/build_matrix/axios.svg)](https://saucelabs.com/u/axios) ## Installing Using npm: ```bash $ npm install axios ``` Using bower: ```bash $ bower install axios ``` Using yarn: ```bash $ yarn add axios ``` Using cdn: ```html <script src="https://unpkg.com/axios/dist/axios.min.js"></script> ``` ## Example ### note: CommonJS usage In order to gain the TypeScript typings (for intellisense / autocomplete) while using CommonJS imports with `require()` use the following approach: ```js const axios = require('axios').default; // axios.<method> will now provide autocomplete and parameter typings ``` Performing a `GET` request ```js const axios = require('axios'); // Make a request for a user with a given ID axios.get('/user?ID=12345') .then(function (response) { // handle success console.log(response); }) .catch(function (error) { // handle error console.log(error); }) .finally(function () { // always executed }); // Optionally the request above could also be done as axios.get('/user', { params: { ID: 12345 } }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }) .finally(function () { // always executed }); // Want to use async/await? Add the `async` keyword to your outer function/method. 
async function getUser() { try { const response = await axios.get('/user?ID=12345'); console.log(response); } catch (error) { console.error(error); } } ``` > **NOTE:** `async/await` is part of ECMAScript 2017 and is not supported in Internet > Explorer and older browsers, so use with caution. Performing a `POST` request ```js axios.post('/user', { firstName: 'Fred', lastName: 'Flintstone' }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }); ``` Performing multiple concurrent requests ```js function getUserAccount() { return axios.get('/user/12345'); } function getUserPermissions() { return axios.get('/user/12345/permissions'); } axios.all([getUserAccount(), getUserPermissions()]) .then(axios.spread(function (acct, perms) { // Both requests are now complete })); ``` ## axios API Requests can be made by passing the relevant config to `axios`. ##### axios(config) ```js // Send a POST request axios({ method: 'post', url: '/user/12345', data: { firstName: 'Fred', lastName: 'Flintstone' } }); ``` ```js // GET request for remote image axios({ method: 'get', url: 'http://bit.ly/2mTM3nY', responseType: 'stream' }) .then(function (response) { response.data.pipe(fs.createWriteStream('ada_lovelace.jpg')) }); ``` ##### axios(url[, config]) ```js // Send a GET request (default method) axios('/user/12345'); ``` ### Request method aliases For convenience aliases have been provided for all supported request methods. ##### axios.request(config) ##### axios.get(url[, config]) ##### axios.delete(url[, config]) ##### axios.head(url[, config]) ##### axios.options(url[, config]) ##### axios.post(url[, data[, config]]) ##### axios.put(url[, data[, config]]) ##### axios.patch(url[, data[, config]]) ###### NOTE When using the alias methods `url`, `method`, and `data` properties don't need to be specified in config. ### Concurrency Helper functions for dealing with concurrent requests. ##### axios.all(iterable) ##### axios.spread(callback) ### Creating an instance You can create a new instance of axios with a custom config. ##### axios.create([config]) ```js const instance = axios.create({ baseURL: 'https://some-domain.com/api/', timeout: 1000, headers: {'X-Custom-Header': 'foobar'} }); ``` ### Instance methods The available instance methods are listed below. The specified config will be merged with the instance config. ##### axios#request(config) ##### axios#get(url[, config]) ##### axios#delete(url[, config]) ##### axios#head(url[, config]) ##### axios#options(url[, config]) ##### axios#post(url[, data[, config]]) ##### axios#put(url[, data[, config]]) ##### axios#patch(url[, data[, config]]) ##### axios#getUri([config]) ## Request Config These are the available config options for making requests. Only the `url` is required. Requests will default to `GET` if `method` is not specified. ```js { // `url` is the server URL that will be used for the request url: '/user', // `method` is the request method to be used when making the request method: 'get', // default // `baseURL` will be prepended to `url` unless `url` is absolute. // It can be convenient to set `baseURL` for an instance of axios to pass relative URLs // to methods of that instance. 
baseURL: 'https://some-domain.com/api/', // `transformRequest` allows changes to the request data before it is sent to the server // This is only applicable for request methods 'PUT', 'POST', 'PATCH' and 'DELETE' // The last function in the array must return a string or an instance of Buffer, ArrayBuffer, // FormData or Stream // You may modify the headers object. transformRequest: [function (data, headers) { // Do whatever you want to transform the data return data; }], // `transformResponse` allows changes to the response data to be made before // it is passed to then/catch transformResponse: [function (data) { // Do whatever you want to transform the data return data; }], // `headers` are custom headers to be sent headers: {'X-Requested-With': 'XMLHttpRequest'}, // `params` are the URL parameters to be sent with the request // Must be a plain object or a URLSearchParams object params: { ID: 12345 }, // `paramsSerializer` is an optional function in charge of serializing `params` // (e.g. https://www.npmjs.com/package/qs, http://api.jquery.com/jquery.param/) paramsSerializer: function (params) { return Qs.stringify(params, {arrayFormat: 'brackets'}) }, // `data` is the data to be sent as the request body // Only applicable for request methods 'PUT', 'POST', and 'PATCH' // When no `transformRequest` is set, must be of one of the following types: // - string, plain object, ArrayBuffer, ArrayBufferView, URLSearchParams // - Browser only: FormData, File, Blob // - Node only: Stream, Buffer data: { firstName: 'Fred' }, // syntax alternative to send data into the body // method post // only the value is sent, not the key data: 'Country=Brasil&City=Belo Horizonte', // `timeout` specifies the number of milliseconds before the request times out. // If the request takes longer than `timeout`, the request will be aborted. timeout: 1000, // default is `0` (no timeout) // `withCredentials` indicates whether or not cross-site Access-Control requests // should be made using credentials withCredentials: false, // default // `adapter` allows custom handling of requests which makes testing easier. // Return a promise and supply a valid response (see lib/adapters/README.md). adapter: function (config) { /* ... */ }, // `auth` indicates that HTTP Basic auth should be used, and supplies credentials. // This will set an `Authorization` header, overwriting any existing // `Authorization` custom headers you have set using `headers`. // Please note that only HTTP Basic auth is configurable through this parameter. // For Bearer tokens and such, use `Authorization` custom headers instead. 
auth: { username: 'janedoe', password: 's00pers3cret' }, // `responseType` indicates the type of data that the server will respond with // options are: 'arraybuffer', 'document', 'json', 'text', 'stream' // browser only: 'blob' responseType: 'json', // default // `responseEncoding` indicates encoding to use for decoding responses // Note: Ignored for `responseType` of 'stream' or client-side requests responseEncoding: 'utf8', // default // `xsrfCookieName` is the name of the cookie to use as a value for xsrf token xsrfCookieName: 'XSRF-TOKEN', // default // `xsrfHeaderName` is the name of the http header that carries the xsrf token value xsrfHeaderName: 'X-XSRF-TOKEN', // default // `onUploadProgress` allows handling of progress events for uploads onUploadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `onDownloadProgress` allows handling of progress events for downloads onDownloadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `maxContentLength` defines the max size of the http response content in bytes allowed maxContentLength: 2000, // `validateStatus` defines whether to resolve or reject the promise for a given // HTTP response status code. If `validateStatus` returns `true` (or is set to `null` // or `undefined`), the promise will be resolved; otherwise, the promise will be // rejected. validateStatus: function (status) { return status >= 200 && status < 300; // default }, // `maxRedirects` defines the maximum number of redirects to follow in node.js. // If set to 0, no redirects will be followed. maxRedirects: 5, // default // `socketPath` defines a UNIX Socket to be used in node.js. // e.g. '/var/run/docker.sock' to send requests to the docker daemon. // Only either `socketPath` or `proxy` can be specified. // If both are specified, `socketPath` is used. socketPath: null, // default // `httpAgent` and `httpsAgent` define a custom agent to be used when performing http // and https requests, respectively, in node.js. This allows options to be added like // `keepAlive` that are not enabled by default. httpAgent: new http.Agent({ keepAlive: true }), httpsAgent: new https.Agent({ keepAlive: true }), // 'proxy' defines the hostname and port of the proxy server. // You can also define your proxy using the conventional `http_proxy` and // `https_proxy` environment variables. If you are using environment variables // for your proxy configuration, you can also define a `no_proxy` environment // variable as a comma-separated list of domains that should not be proxied. // Use `false` to disable proxies, ignoring environment variables. // `auth` indicates that HTTP Basic auth should be used to connect to the proxy, and // supplies credentials. // This will set an `Proxy-Authorization` header, overwriting any existing // `Proxy-Authorization` custom headers you have set using `headers`. proxy: { host: '127.0.0.1', port: 9000, auth: { username: 'mikeymike', password: 'rapunz3l' } }, // `cancelToken` specifies a cancel token that can be used to cancel the request // (see Cancellation section below for details) cancelToken: new CancelToken(function (cancel) { }) } ``` ## Response Schema The response for a request contains the following information. 
```js { // `data` is the response that was provided by the server data: {}, // `status` is the HTTP status code from the server response status: 200, // `statusText` is the HTTP status message from the server response statusText: 'OK', // `headers` the headers that the server responded with // All header names are lower cased headers: {}, // `config` is the config that was provided to `axios` for the request config: {}, // `request` is the request that generated this response // It is the last ClientRequest instance in node.js (in redirects) // and an XMLHttpRequest instance in the browser request: {} } ``` When using `then`, you will receive the response as follows: ```js axios.get('/user/12345') .then(function (response) { console.log(response.data); console.log(response.status); console.log(response.statusText); console.log(response.headers); console.log(response.config); }); ``` When using `catch`, or passing a [rejection callback](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/then) as second parameter of `then`, the response will be available through the `error` object as explained in the [Handling Errors](#handling-errors) section. ## Config Defaults You can specify config defaults that will be applied to every request. ### Global axios defaults ```js axios.defaults.baseURL = 'https://api.example.com'; axios.defaults.headers.common['Authorization'] = AUTH_TOKEN; axios.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded'; ``` ### Custom instance defaults ```js // Set config defaults when creating the instance const instance = axios.create({ baseURL: 'https://api.example.com' }); // Alter defaults after instance has been created instance.defaults.headers.common['Authorization'] = AUTH_TOKEN; ``` ### Config order of precedence Config will be merged with an order of precedence. The order is library defaults found in [lib/defaults.js](https://github.com/axios/axios/blob/master/lib/defaults.js#L28), then `defaults` property of the instance, and finally `config` argument for the request. The latter will take precedence over the former. Here's an example. ```js // Create an instance using the config defaults provided by the library // At this point the timeout config value is `0` as is the default for the library const instance = axios.create(); // Override timeout default for the library // Now all requests using this instance will wait 2.5 seconds before timing out instance.defaults.timeout = 2500; // Override timeout for this request as it's known to take a long time instance.get('/longRequest', { timeout: 5000 }); ``` ## Interceptors You can intercept requests or responses before they are handled by `then` or `catch`. ```js // Add a request interceptor axios.interceptors.request.use(function (config) { // Do something before request is sent return config; }, function (error) { // Do something with request error return Promise.reject(error); }); // Add a response interceptor axios.interceptors.response.use(function (response) { // Any status code that lie within the range of 2xx cause this function to trigger // Do something with response data return response; }, function (error) { // Any status codes that falls outside the range of 2xx cause this function to trigger // Do something with response error return Promise.reject(error); }); ``` If you need to remove an interceptor later you can. 
```js const myInterceptor = axios.interceptors.request.use(function () {/*...*/}); axios.interceptors.request.eject(myInterceptor); ``` You can add interceptors to a custom instance of axios. ```js const instance = axios.create(); instance.interceptors.request.use(function () {/*...*/}); ``` ## Handling Errors ```js axios.get('/user/12345') .catch(function (error) { if (error.response) { // The request was made and the server responded with a status code // that falls out of the range of 2xx console.log(error.response.data); console.log(error.response.status); console.log(error.response.headers); } else if (error.request) { // The request was made but no response was received // `error.request` is an instance of XMLHttpRequest in the browser and an instance of // http.ClientRequest in node.js console.log(error.request); } else { // Something happened in setting up the request that triggered an Error console.log('Error', error.message); } console.log(error.config); }); ``` Using the `validateStatus` config option, you can define HTTP code(s) that should throw an error. ```js axios.get('/user/12345', { validateStatus: function (status) { return status < 500; // Reject only if the status code is greater than or equal to 500 } }) ``` Using `toJSON` you get an object with more information about the HTTP error. ```js axios.get('/user/12345') .catch(function (error) { console.log(error.toJSON()); }); ``` ## Cancellation You can cancel a request using a *cancel token*. > The axios cancel token API is based on the withdrawn [cancelable promises proposal](https://github.com/tc39/proposal-cancelable-promises). You can create a cancel token using the `CancelToken.source` factory as shown below: ```js const CancelToken = axios.CancelToken; const source = CancelToken.source(); axios.get('/user/12345', { cancelToken: source.token }).catch(function (thrown) { if (axios.isCancel(thrown)) { console.log('Request canceled', thrown.message); } else { // handle error } }); axios.post('/user/12345', { name: 'new name' }, { cancelToken: source.token }) // cancel the request (the message parameter is optional) source.cancel('Operation canceled by the user.'); ``` You can also create a cancel token by passing an executor function to the `CancelToken` constructor: ```js const CancelToken = axios.CancelToken; let cancel; axios.get('/user/12345', { cancelToken: new CancelToken(function executor(c) { // An executor function receives a cancel function as a parameter cancel = c; }) }); // cancel the request cancel(); ``` > Note: you can cancel several requests with the same cancel token. ## Using application/x-www-form-urlencoded format By default, axios serializes JavaScript objects to `JSON`. To send data in the `application/x-www-form-urlencoded` format instead, you can use one of the following options. ### Browser In a browser, you can use the [`URLSearchParams`](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams) API as follows: ```js const params = new URLSearchParams(); params.append('param1', 'value1'); params.append('param2', 'value2'); axios.post('/foo', params); ``` > Note that `URLSearchParams` is not supported by all browsers (see [caniuse.com](http://www.caniuse.com/#feat=urlsearchparams)), but there is a [polyfill](https://github.com/WebReflection/url-search-params) available (make sure to polyfill the global environment). 
Alternatively, you can encode data using the [`qs`](https://github.com/ljharb/qs) library: ```js const qs = require('qs'); axios.post('/foo', qs.stringify({ 'bar': 123 })); ``` Or in another way (ES6), ```js import qs from 'qs'; const data = { 'bar': 123 }; const options = { method: 'POST', headers: { 'content-type': 'application/x-www-form-urlencoded' }, data: qs.stringify(data), url, }; axios(options); ``` ### Node.js In node.js, you can use the [`querystring`](https://nodejs.org/api/querystring.html) module as follows: ```js const querystring = require('querystring'); axios.post('http://something.com/', querystring.stringify({ foo: 'bar' })); ``` You can also use the [`qs`](https://github.com/ljharb/qs) library. ###### NOTE The `qs` library is preferable if you need to stringify nested objects, as the `querystring` method has known issues with that use case (https://github.com/nodejs/node-v0.x-archive/issues/1665). ## Semver Until axios reaches a `1.0` release, breaking changes will be released with a new minor version. For example `0.5.1`, and `0.5.4` will have the same API, but `0.6.0` will have breaking changes. ## Promises axios depends on a native ES6 Promise implementation to be [supported](http://caniuse.com/promises). If your environment doesn't support ES6 Promises, you can [polyfill](https://github.com/jakearchibald/es6-promise). ## TypeScript axios includes [TypeScript](http://typescriptlang.org) definitions. ```typescript import axios from 'axios'; axios.get('/user?ID=12345'); ``` ## Resources * [Changelog](https://github.com/axios/axios/blob/master/CHANGELOG.md) * [Upgrade Guide](https://github.com/axios/axios/blob/master/UPGRADE_GUIDE.md) * [Ecosystem](https://github.com/axios/axios/blob/master/ECOSYSTEM.md) * [Contributing Guide](https://github.com/axios/axios/blob/master/CONTRIBUTING.md) * [Code of Conduct](https://github.com/axios/axios/blob/master/CODE_OF_CONDUCT.md) ## Credits axios is heavily inspired by the [$http service](https://docs.angularjs.org/api/ng/service/$http) provided in [Angular](https://angularjs.org/). Ultimately axios is an effort to provide a standalone `$http`-like service for use outside of Angular. ## License [MIT](LICENSE) # emoji-regex [![Build status](https://travis-ci.org/mathiasbynens/emoji-regex.svg?branch=master)](https://travis-ci.org/mathiasbynens/emoji-regex) _emoji-regex_ offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard. This repository contains a script that generates this regular expression based on [the data from Unicode v12](https://github.com/mathiasbynens/unicode-12.0.0). Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard. ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install emoji-regex ``` In [Node.js](https://nodejs.org/): ```js const emojiRegex = require('emoji-regex'); // Note: because the regular expression has the global flag set, this module // exports a function that returns the regex rather than exporting the regular // expression itself, to make it impossible to (accidentally) mutate the // original regular expression. 
const text = ` \u{231A}: ⌚ default emoji presentation character (Emoji_Presentation) \u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji \u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base) \u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier `; const regex = emojiRegex(); let match; while (match = regex.exec(text)) { const emoji = match[0]; console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`); } ``` Console output: ``` Matched sequence ⌚ — code points: 1 Matched sequence ⌚ — code points: 1 Matched sequence ↔️ — code points: 2 Matched sequence ↔️ — code points: 2 Matched sequence 👩 — code points: 1 Matched sequence 👩 — code points: 1 Matched sequence 👩🏿 — code points: 2 Matched sequence 👩🏿 — code points: 2 ``` To match emoji in their textual representation as well (i.e. emoji that are not `Emoji_Presentation` symbols and that aren’t forced to render as emoji by a variation selector), `require` the other regex: ```js const emojiRegex = require('emoji-regex/text.js'); ``` Additionally, in environments which support ES2015 Unicode escapes, you may `require` ES2015-style versions of the regexes: ```js const emojiRegex = require('emoji-regex/es2015/index.js'); const emojiRegexText = require('emoji-regex/es2015/text.js'); ``` ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License _emoji-regex_ is available under the [MIT](https://mths.be/mit) license. # AssemblyScript Rtrace A tiny utility to sanitize the AssemblyScript runtime. Records allocations and frees performed by the runtime and emits an error if something is off. Also checks for leaks. Instructions ------------ Compile your module that uses the full or half runtime with `--use ASC_RTRACE=1 --explicitStart` and include an instance of this module as the import named `rtrace`. ```js const rtrace = new Rtrace({ onerror(err, info) { // handle error }, oninfo(msg) { // print message, optional }, getMemory() { // obtain the module's memory, // e.g. with --explicitStart: return instance.exports.memory; } }); const { module, instance } = await WebAssembly.instantiate(..., rtrace.install({ ...imports... }) ); instance.exports._start(); ... if (rtrace.active) { let leakCount = rtrace.check(); if (leakCount) { // handle error } } ``` Note that references in globals which are not cleared before collection is performed appear as leaks, including their inner members. A TypedArray would leak itself and its backing ArrayBuffer in this case, for example. This is perfectly normal, and clearing all globals avoids this. # inflight Add callbacks to requests in flight to avoid async duplication ## USAGE ```javascript var inflight = require('inflight') // some request that does some stuff function req(key, callback) { // key is any random string. like a url or filename or whatever. // // will return either a falsey value, indicating that the // request for this key is already in flight, or a new callback // which when called will call all callbacks passed to inflight // with the same key callback = inflight(key, callback) // If we got a falsey value back, then there's already a req going if (!callback) return // this is where you'd fetch the url or whatever // callback is also once()-ified, so it can safely be assigned // to multiple events etc. First call wins.
setTimeout(function() { callback(null, key) }, 100) } // only assigns a single setTimeout // when it dings, all cbs get called req('foo', cb1) req('foo', cb2) req('foo', cb3) req('foo', cb4) ``` # binary-install Install .tar.gz binary applications via npm ## Usage This library provides a single class `Binary` that takes a download URL and some optional arguments. You **must** provide either `name` or `installDirectory` when creating your `Binary`. | option | description | | ---------------- | --------------------------------------------- | | name | The name of your binary | | installDirectory | A path to the directory to install the binary | If an `installDirectory` is not provided, the binary will be installed in your OS-specific config directory. On MacOS it defaults to `~/Library/Preferences/${name}-nodejs` After your `Binary` has been created, you can run `.install()` to install the binary, and `.run()` to run it. ### Example This is meant to be used as a library - create your `Binary` with your desired options, then call `.install()` in the `postinstall` of your `package.json`, `.run()` in the `bin` section of your `package.json`, and `.uninstall()` in the `preuninstall` section of your `package.json`. See [this example project](/example) to see how to create an npm package that installs and runs a binary using the GitHub releases API. ### Esrecurse [![Build Status](https://travis-ci.org/estools/esrecurse.svg?branch=master)](https://travis-ci.org/estools/esrecurse) Esrecurse ([esrecurse](https://github.com/estools/esrecurse)) is [ECMAScript](https://www.ecma-international.org/publications/standards/Ecma-262.htm) recursive traversing functionality. ### Example Usage The following code will output all variables declared at the root of a file. ```javascript esrecurse.visit(ast, { XXXStatement: function (node) { this.visit(node.left); // do something... this.visit(node.right); } }); ``` We can use a `Visitor` instance. ```javascript var visitor = new esrecurse.Visitor({ XXXStatement: function (node) { this.visit(node.left); // do something... this.visit(node.right); } }); visitor.visit(ast); ``` We can also inherit from `Visitor` easily. ```javascript class Derived extends esrecurse.Visitor { constructor() { super(null); } XXXStatement(node) { } } ``` ```javascript function DerivedVisitor() { esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */); } util.inherits(DerivedVisitor, esrecurse.Visitor); DerivedVisitor.prototype.XXXStatement = function (node) { this.visit(node.left); // do something... this.visit(node.right); }; ``` And you can invoke the default visiting operation inside a custom visit operation. ```javascript function DerivedVisitor() { esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */); } util.inherits(DerivedVisitor, esrecurse.Visitor); DerivedVisitor.prototype.XXXStatement = function (node) { // do something... this.visitChildren(node); }; ``` The `childVisitorKeys` option customizes the behaviour of `this.visitChildren(node)`. We can use user-defined node types. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; esrecurse.visit( ast, { Literal: function (node) { // do something...
} }, { // Extending the existing traversing rules. childVisitorKeys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } } ); ``` We can use the `fallback` option as well. If the `fallback` option is `"iteration"`, `esrecurse` would visit all enumerable properties of unknown nodes. Please note circular references cause the stack overflow. AST might have circular references in additional properties for some purpose (e.g. `node.parent`). ```javascript esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { fallback: 'iteration' } ); ``` If the `fallback` option is a function, `esrecurse` calls this function to determine the enumerable properties of unknown nodes. Please note circular references cause the stack overflow. AST might have circular references in additional properties for some purpose (e.g. `node.parent`). ```javascript esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { fallback: function (node) { return Object.keys(node).filter(function(key) { return key !== 'argument' }); } } ); ``` ### License Copyright (C) 2014 [Yusuke Suzuki](https://github.com/Constellation) (twitter: [@Constellation](https://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # assemblyscript-regex A regex engine for AssemblyScript. [AssemblyScript](https://www.assemblyscript.org/) is a new language, based on TypeScript, that runs on WebAssembly. AssemblyScript has a lightweight standard library, but lacks support for Regular Expression. The project fills that gap! This project exposes an API that mirrors the JavaScript [RegExp](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp) class: ```javascript const regex = new RegExp("fo*", "g"); const str = "table football, foul"; let match: Match | null = regex.exec(str); while (match != null) { // first iteration // match.index = 6 // match.matches[0] = "foo" // second iteration // match.index = 16 // match.matches[0] = "fo" match = regex.exec(str); } ``` ## Project status The initial focus of this implementation has been feature support and functionality over performance. 
It currently supports a sufficient number of regex features to be considered useful, including most character classes, common assertions, groups, alternations, capturing groups and quantifiers. The next phase of development will focus on more extensive testing and performance. The project currently has reasonable unit test coverage, focused on positive and negative test cases on a per-feature basis. It also includes a more exhaustive test suite with test cases borrowed from another regex library. ### Feature support Based on the classification within the [MDN cheatsheet](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions/Cheatsheet): **Character sets** - [x] . - [x] \d - [x] \D - [x] \w - [x] \W - [x] \s - [x] \S - [x] \t - [x] \r - [x] \n - [x] \v - [x] \f - [ ] [\b] - [ ] \0 - [ ] \cX - [x] \xhh - [x] \uhhhh - [ ] \u{hhhh} or \u{hhhhh} - [x] \ **Assertions** - [x] ^ - [x] $ - [ ] \b - [ ] \B **Other assertions** - [ ] x(?=y) Lookahead assertion - [ ] x(?!y) Negative lookahead assertion - [ ] (?<=y)x Lookbehind assertion - [ ] (?<!y)x Negative lookbehind assertion **Groups and ranges** - [x] x|y - [x] [xyz][a-c] - [x] [^xyz][^a-c] - [x] (x) capturing group - [ ] \n back reference - [ ] (?<Name>x) named capturing group - [x] (?:x) Non-capturing group **Quantifiers** - [x] x\* - [x] x+ - [x] x? - [x] x{n} - [x] x{n,} - [x] x{n,m} - [ ] x\*? / x+? / ... **RegExp** - [x] global - [ ] sticky - [x] case insensitive - [x] multiline - [x] dotAll - [ ] unicode ### Development This project is open source, MIT licensed, and your contributions are very much welcome. To get started, check out the repository and install dependencies: ``` $ npm install ``` A few general points about the tools and processes this project uses: - This project uses prettier for code formatting and eslint to provide additional syntactic checks. These are both run on `npm test` and as part of the CI build. - The unit tests are executed using [as-pect](https://github.com/jtenner/as-pect) - a native AssemblyScript test runner - The specification tests are within the `spec` folder. The `npm run test:generate` target transforms these tests into as-pect tests which execute as part of the standard build / test cycle - In order to support improved debugging you can execute this library as TypeScript (rather than WebAssembly), via the `npm run tsrun` target. ESQuery is a library for querying the AST output by Esprima for patterns of syntax using a CSS style selector system.
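As a quick illustration before the full selector list, here is a minimal sketch of running a selector against an AST; it assumes `esprima` is installed alongside `esquery` to produce the AST, and calls the package's default export directly with an AST and a selector string:

```js
const esprima = require('esprima');
const esquery = require('esquery');

// Parse a small program into an ESTree-compatible AST.
const ast = esprima.parseScript('function add(a, b) { return a + b; }');

// Select the Identifier sitting in the `id` field of a FunctionDeclaration.
const matches = esquery(ast, 'FunctionDeclaration > Identifier.id');

console.log(matches.map(node => node.name)); // [ 'add' ]
```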
Check out the demo: [demo](https://estools.github.io/esquery/) The following selectors are supported: * AST node type: `ForStatement` * [wildcard](http://dev.w3.org/csswg/selectors4/#universal-selector): `*` * [attribute existence](http://dev.w3.org/csswg/selectors4/#attribute-selectors): `[attr]` * [attribute value](http://dev.w3.org/csswg/selectors4/#attribute-selectors): `[attr="foo"]` or `[attr=123]` * attribute regex: `[attr=/foo.*/]` or (with flags) `[attr=/foo.*/is]` * attribute conditions: `[attr!="foo"]`, `[attr>2]`, `[attr<3]`, `[attr>=2]`, or `[attr<=3]` * nested attribute: `[attr.level2="foo"]` * field: `FunctionDeclaration > Identifier.id` * [First](http://dev.w3.org/csswg/selectors4/#the-first-child-pseudo) or [last](http://dev.w3.org/csswg/selectors4/#the-last-child-pseudo) child: `:first-child` or `:last-child` * [nth-child](http://dev.w3.org/csswg/selectors4/#the-nth-child-pseudo) (no ax+b support): `:nth-child(2)` * [nth-last-child](http://dev.w3.org/csswg/selectors4/#the-nth-last-child-pseudo) (no ax+b support): `:nth-last-child(1)` * [descendant](http://dev.w3.org/csswg/selectors4/#descendant-combinators): `ancestor descendant` * [child](http://dev.w3.org/csswg/selectors4/#child-combinators): `parent > child` * [following sibling](http://dev.w3.org/csswg/selectors4/#general-sibling-combinators): `node ~ sibling` * [adjacent sibling](http://dev.w3.org/csswg/selectors4/#adjacent-sibling-combinators): `node + adjacent` * [negation](http://dev.w3.org/csswg/selectors4/#negation-pseudo): `:not(ForStatement)` * [has](https://drafts.csswg.org/selectors-4/#has-pseudo): `:has(ForStatement)` * [matches-any](http://dev.w3.org/csswg/selectors4/#matches): `:matches([attr] > :first-child, :last-child)` * [subject indicator](http://dev.w3.org/csswg/selectors4/#subject): `!IfStatement > [name="foo"]` * class of AST node: `:statement`, `:expression`, `:declaration`, `:function`, or `:pattern` [![Build Status](https://travis-ci.org/estools/esquery.png?branch=master)](https://travis-ci.org/estools/esquery) [![NPM version][npm-image]][npm-url] [![build status][travis-image]][travis-url] [![Test coverage][coveralls-image]][coveralls-url] [![Downloads][downloads-image]][downloads-url] [![Join the chat at https://gitter.im/eslint/doctrine](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/eslint/doctrine?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) # Doctrine Doctrine is a [JSDoc](http://usejsdoc.org) parser that parses documentation comments from JavaScript (you need to pass in the comment, not a whole JavaScript file). ## Installation You can install Doctrine using [npm](https://npmjs.com): ``` $ npm install doctrine --save-dev ``` Doctrine can also be used in web browsers using [Browserify](http://browserify.org). ## Usage Require doctrine inside of your JavaScript: ```js var doctrine = require("doctrine"); ``` ### parse() The primary method is `parse()`, which accepts two arguments: the JSDoc comment to parse and an optional options object. The available options are: * `unwrap` - set to `true` to delete the leading `/**`, any `*` that begins a line, and the trailing `*/` from the source text. Default: `false`. * `tags` - an array of tags to return. When specified, Doctrine returns only tags in this array. For example, if `tags` is `["param"]`, then only `@param` tags will be returned. Default: `null`. * `recoverable` - set to `true` to keep parsing even when syntax errors occur. Default: `false`. 
* `sloppy` - set to `true` to allow optional parameters to be specified in brackets (`@param {string} [foo]`). Default: `false`. * `lineNumbers` - set to `true` to add `lineNumber` to each node, specifying the line on which the node is found in the source. Default: `false`. * `range` - set to `true` to add `range` to each node, specifying the start and end index of the node in the original comment. Default: `false`. Here's a simple example: ```js var ast = doctrine.parse( [ "/**", " * This function comment is parsed by doctrine", " * @param {{ok:String}} userName", "*/" ].join('\n'), { unwrap: true }); ``` This example returns the following AST: { "description": "This function comment is parsed by doctrine", "tags": [ { "title": "param", "description": null, "type": { "type": "RecordType", "fields": [ { "type": "FieldType", "key": "ok", "value": { "type": "NameExpression", "name": "String" } } ] }, "name": "userName" } ] } See the [demo page](http://eslint.org/doctrine/demo/) more detail. ## Team These folks keep the project moving and are resources for help: * Nicholas C. Zakas ([@nzakas](https://github.com/nzakas)) - project lead * Yusuke Suzuki ([@constellation](https://github.com/constellation)) - reviewer ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/doctrine/issues). ## Frequently Asked Questions ### Can I pass a whole JavaScript file to Doctrine? No. Doctrine can only parse JSDoc comments, so you'll need to pass just the JSDoc comment to Doctrine in order to work. ### License #### doctrine Copyright JS Foundation and other contributors, https://js.foundation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. #### esprima some of functions is derived from esprima Copyright (C) 2012, 2011 [Ariya Hidayat](http://ariya.ofilabs.com/about) (twitter: [@ariyahidayat](http://twitter.com/ariyahidayat)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #### closure-compiler some of extensions is derived from closure-compiler Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ ### Where to ask for help? Join our [Chatroom](https://gitter.im/eslint/doctrine) [npm-image]: https://img.shields.io/npm/v/doctrine.svg?style=flat-square [npm-url]: https://www.npmjs.com/package/doctrine [travis-image]: https://img.shields.io/travis/eslint/doctrine/master.svg?style=flat-square [travis-url]: https://travis-ci.org/eslint/doctrine [coveralls-image]: https://img.shields.io/coveralls/eslint/doctrine/master.svg?style=flat-square [coveralls-url]: https://coveralls.io/r/eslint/doctrine?branch=master [downloads-image]: http://img.shields.io/npm/dm/doctrine.svg?style=flat-square [downloads-url]: https://www.npmjs.com/package/doctrine # brace-expansion [Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html), as known from sh/bash, in JavaScript. [![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion) [![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion) [![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/) [![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion) ## Example ```js var expand = require('brace-expansion'); expand('file-{a,b,c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('-v{,,}') // => ['-v', '-v', '-v'] expand('file{0..2}.jpg') // => ['file0.jpg', 'file1.jpg', 'file2.jpg'] expand('file-{a..c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('file{2..0}.jpg') // => ['file2.jpg', 'file1.jpg', 'file0.jpg'] expand('file{0..4..2}.jpg') // => ['file0.jpg', 'file2.jpg', 'file4.jpg'] expand('file-{a..e..2}.jpg') // => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg'] expand('file{00..10..5}.jpg') // => ['file00.jpg', 'file05.jpg', 'file10.jpg'] expand('{{A..C},{a..c}}') // => ['A', 'B', 'C', 'a', 'b', 'c'] expand('ppp{,config,oe{,conf}}') // => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf'] ``` ## API ```js var expand = require('brace-expansion'); ``` ### var expanded = expand(str) Return an array of all possible and valid expansions of `str`. If none are found, `[str]` is returned. Valid expansions are: ```js /^(.*,)+(.+)?$/ // {a,b,...} ``` A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` A numeric sequence from `x` to `y` inclusive, with optional increment. If `x` or `y` start with a leading `0`, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` An alphabetic sequence from `x` to `y` inclusive, with optional increment. `x` and `y` must be exactly one character, and if given, `incr` must be a number. 
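For example, both sequence forms can be exercised directly; this is a small illustrative snippet consistent with the Example section above (the expansion results are arrays of strings):

```js
var expand = require('brace-expansion');

// numeric sequence with an increment of 3
expand('{1..9..3}')
// => ['1', '4', '7']

// alphabetic sequence with an increment of 2
expand('{a..e..2}')
// => ['a', 'c', 'e']
```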
For compatibility reasons, the string `${` is not eligible for brace expansion. ## Installation With [npm](https://npmjs.org) do: ```bash npm install brace-expansion ``` ## Contributors - [Julian Gruber](https://github.com/juliangruber) - [Isaac Z. Schlueter](https://github.com/isaacs) ## Sponsors This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)! Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)! ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # yargs-parser ![ci](https://github.com/yargs/yargs-parser/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/yargs-parser) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. 
<img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/main/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js const argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```console $ node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js const argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```console { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js const parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## Deno Example As of `v19` `yargs-parser` supports [Deno](https://github.com/denoland/deno): ```typescript import parser from "https://deno.land/x/yargs_parser/deno.ts"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` ## ESM Example As of `v19` `yargs-parser` supports ESM (_both in Node.js and in the browser_): **Node.js:** ```js import parser from 'yargs-parser' const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` **Browsers:** ```html <!doctype html> <body> <script type="module"> import parser from "https://unpkg.com/yargs-parser@19.0.0/browser.js"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) </script> </body> ``` ## API ### parser(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. 
* `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```console $ node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```console $ node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```console $ node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```console $ node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```console $ node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```console $ node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```console $ node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```console $ node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### parse positional numbers * default: `true` * key: `parse-positional-numbers` Should positional keys that look like numbers be treated as such. ```console $ node example.js 99.3 { _: [99.3] } ``` _if disabled:_ ```console $ node example.js 99.3 { _: ['99.3'] } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```console $ node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```console $ node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. 
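The original text gives no inline example for this option, so here is a hedged sketch of what combining might look like; the `config.json` file, its contents, and the ordering of the merged values are illustrative assumptions rather than documented behavior:

```js
// config.json (hypothetical contents): { "tag": ["from-config"] }
const parser = require('yargs-parser')

const argv = parser('--settings config.json --tag from-cli', {
  array: ['tag'],
  config: ['settings'],                      // '--settings' points at a config file to load
  configuration: { 'combine-arrays': true }  // keep array values from both sources
})

// With 'combine-arrays' enabled, `argv.tag` is expected to contain both
// 'from-cli' and 'from-config'; with it disabled, one source simply overrides the other.
console.log(argv.tag)
```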
### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```console $ node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```console $ node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```console $ node example --arr 1 2 { _: [], arr: [1, 2] } ``` _if disabled:_ ```console $ node example --arr 1 2 { _: [2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```console $ node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```console $ node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```console $ node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```console $ node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```console $ node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```console $ node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. 
_If disabled_

```console
$ node example.js --unknown-option --known-option 2 --string-option --unknown-option2
{ _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true }
```

_If enabled_

```console
$ node example.js --unknown-option --known-option 2 --string-option --unknown-option2
{ _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' }
```

## Supported Node.js Versions

Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a).

## Special Thanks

The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/

## License

ISC

# cliui

[![Build Status](https://travis-ci.org/yargs/cliui.svg)](https://travis-ci.org/yargs/cliui)
[![Coverage Status](https://coveralls.io/repos/yargs/cliui/badge.svg?branch=)](https://coveralls.io/r/yargs/cliui?branch=)
[![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui)
[![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version)

Easily create complex multi-column command-line interfaces.

## Example

```js
var chalk = require('chalk') // chalk provides the coloring used below; it was missing from the original snippet
var ui = require('cliui')()

ui.div('Usage: $0 [command] [options]')

ui.div({
  text: 'Options:',
  padding: [2, 0, 2, 0]
})

ui.div(
  {
    text: "-f, --file",
    width: 20,
    padding: [0, 4, 0, 4]
  },
  {
    text: "the file to load." + chalk.green("(if this description is long it wraps)."),
    width: 20
  },
  {
    text: chalk.red("[required]"),
    align: 'right'
  }
)

console.log(ui.toString())
```

<img width="500" src="screenshot.png">

## Layout DSL

cliui exposes a simple layout DSL. If you create a single `ui.div`, passing a string rather than an object:

* `\n`: characters will be interpreted as new rows.
* `\t`: characters will be interpreted as new columns.
* `\s`: characters will be interpreted as padding.

**as an example...**

```js
var ui = require('./')({
  width: 60
})

ui.div(
  'Usage: node ./bin/foo.js\n' +
  '  <regex>\t provide a regex\n' +
  '  <glob>\t provide a glob\t [required]'
)

console.log(ui.toString())
```

**will output:**

```shell
Usage: node ./bin/foo.js
  <regex>  provide a regex
  <glob>   provide a glob    [required]
```

## Methods

```js
cliui = require('cliui')
```

### cliui({width: integer})

Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`.

### cliui({wrap: boolean})

Enable or disable the wrapping of text in a column.

### cliui.div(column, column, column)

Create a row with any number of columns. A column can either be a string, or an object with the following options:

* **text:** some text to place in the column.
* **width:** the width of a column.
* **align:** alignment, `right` or `center`.
* **padding:** `[top, right, bottom, left]`.
* **border:** should a border be placed around the div?

### cliui.span(column, column, column)

Similar to `div`, except the next row will be appended without a new line being created.

### cliui.resetOutput()

Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`.

# 1. What is SeedingHope?
_SeedingHope is a smart contract built on the NEAR protocol. Its goal is to let people who start a social program to help their community request donations so they can carry the program out and meet its objectives._

**SeedingHope lets you:**
1. Create a social program
2. Remove a social program
3. List the social programs registered in the application
4. Receive donations for social programs.

## 2. Local Installation

_To install and run SeedingHope locally (on your own machine) you need to meet a series of prerequisites and follow the steps below in strict order._

### 3. Prerequisites

1. Make sure you have installed [Node.js] ≥ 12 (we recommend using [nvm])
2. Make sure you have installed yarn:
```
npm install -g yarn
```
3. Install the dependencies:
```
yarn install
```
4. Create a NEAR testnet account
5. Install the NEAR CLI globally:
```
yarn install --global near-cli
```

### 4. Clone SeedingHope onto your machine

1. Open the repository hosted on GitHub:
2. Click **Fork** to create a copy under your own GitHub account
3. Click **Code** and copy the repository URL (https://)
4. Go to the Ubuntu terminal and run the following command to clone SeedingHope:
```
git clone <repository URL>
```
Example:
```
git clone https://github.com/usuariogit/SeedingHope-E9.git
```
5. Once cloned, move into the project folder from the terminal:
```
cd SeedingHope
```
6. Log in with your testnet account by running the following command in the Ubuntu terminal:
```
near login
```
7. A browser window will open asking you to sign in with your account; once you have done so, return to the terminal, where the procedure will be reported as successful.

### 5. Run the SeedingHope smart contract

1. Inside the project folder, still in the terminal, install the [Node.js] dependencies:
```
npm install
```
2. Build the smart contract and deploy it locally:
```
yarn build && near dev-deploy
```

With the steps above completed, you can now run the SeedingHope smart contract on your machine.

## Unit tests

**1. Register a social program**

Users interested in receiving donations for their social project must register the program and state the amount they want to raise. To register a social program from the Ubuntu command line:
```
near call username.testnet nuevoProyecto '{"nombre":"nombre","descripcion":"descripcion","cantidadMeta":monto}' --account-id username.testnet
```

**2. Remove a social program**

Social programs that have already reached their goal, or whose objective was not met, can be removed as follows from the Ubuntu terminal:
```
near call username.testnet borrarProyecto '{"id": id}' --account-id username.testnet
```

**3. List social programs**

Generate a list of the previously registered social programs from the Ubuntu terminal:
```
near view username.testnet mostrarProyectos --account-id username.testnet
```

**4. Donate to a social program**

Donors contribute NEAR to support a social program so it can reach its goal and be carried out:
```
near call username.testnet fondearProyecto '{"id": id, "cantidad": cantidad}' --account-id username.testnet
```
**5. Demo video showing the contract's functionality:** https://youtu.be/OlqeNEvVXII

# sprintf.js

**sprintf.js** is a complete open source JavaScript sprintf implementation for the *browser* and *node.js*.

Its prototype is simple:

    string sprintf(string format , [mixed arg1 [, mixed arg2 [ ,...]]])

The placeholders in the format string are marked by `%` and are followed by one or more of these elements, in this order:

* An optional number followed by a `$` sign that selects which argument index to use for the value. If not specified, arguments will be placed in the same order as the placeholders in the input string.
* An optional `+` sign that forces the result to be preceded by a plus or minus sign on numeric values. By default, only the `-` sign is used on negative numbers.
* An optional padding specifier that says what character to use for padding (if specified). Possible values are `0` or any other character preceded by a `'` (single quote). The default is to pad with *spaces*.
* An optional `-` sign, that causes sprintf to left-align the result of this placeholder. The default is to right-align the result.
* An optional number, that says how many characters the result should have. If the value to be returned is shorter than this number, the result will be padded. When used with the `j` (JSON) type specifier, the padding length specifies the tab size used for indentation.
* An optional precision modifier, consisting of a `.` (dot) followed by a number, that says how many digits should be displayed for floating point numbers. When used with the `g` type specifier, it specifies the number of significant digits. When used on a string, it causes the result to be truncated.
* A type specifier that can be any of:
    * `%` — yields a literal `%` character
    * `b` — yields an integer as a binary number
    * `c` — yields an integer as the character with that ASCII value
    * `d` or `i` — yields an integer as a signed decimal number
    * `e` — yields a float using scientific notation
    * `u` — yields an integer as an unsigned decimal number
    * `f` — yields a float as is; see notes on precision above
    * `g` — yields a float as is; see notes on precision above
    * `o` — yields an integer as an octal number
    * `s` — yields a string as is
    * `x` — yields an integer as a hexadecimal number (lower-case)
    * `X` — yields an integer as a hexadecimal number (upper-case)
    * `j` — yields a JavaScript object or array as a JSON encoded string

## JavaScript `vsprintf`

`vsprintf` is the same as `sprintf` except that it accepts an array of arguments, rather than a variable number of arguments:

    vsprintf("The first 4 letters of the English alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"])

## Argument swapping

You can also swap the arguments. That is, the order of the placeholders doesn't have to match the order of the arguments. You can do that by simply indicating in the format string which arguments the placeholders refer to:

    sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants")

And, of course, you can repeat the placeholders without having to increase the number of arguments.

## Named arguments

Format strings may contain replacement fields rather than positional placeholders. Instead of referring to a certain argument, you can now refer to a certain key within an object.
Replacement fields are surrounded by rounded parentheses - `(` and `)` - and begin with a keyword that refers to a key: var user = { name: "Dolly" } sprintf("Hello %(name)s", user) // Hello Dolly Keywords in replacement fields can be optionally followed by any number of keywords or indexes: var users = [ {name: "Dolly"}, {name: "Molly"}, {name: "Polly"} ] sprintf("Hello %(users[0].name)s, %(users[1].name)s and %(users[2].name)s", {users: users}) // Hello Dolly, Molly and Polly Note: mixing positional and named placeholders is not (yet) supported ## Computed values You can pass in a function as a dynamic value and it will be invoked (with no arguments) in order to compute the value on-the-fly. sprintf("Current timestamp: %d", Date.now) // Current timestamp: 1398005382890 sprintf("Current date and time: %s", function() { return new Date().toString() }) # AngularJS You can now use `sprintf` and `vsprintf` (also aliased as `fmt` and `vfmt` respectively) in your AngularJS projects. See `demo/`. # Installation ## Via Bower bower install sprintf ## Or as a node.js module npm install sprintf-js ### Usage var sprintf = require("sprintf-js").sprintf, vsprintf = require("sprintf-js").vsprintf sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants") vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"]) # License **sprintf.js** is licensed under the terms of the 3-clause BSD license. # cross-spawn [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Build status][appveyor-image]][appveyor-url] [![Coverage Status][codecov-image]][codecov-url] [![Dependency status][david-dm-image]][david-dm-url] [![Dev Dependency status][david-dm-dev-image]][david-dm-dev-url] [npm-url]:https://npmjs.org/package/cross-spawn [downloads-image]:https://img.shields.io/npm/dm/cross-spawn.svg [npm-image]:https://img.shields.io/npm/v/cross-spawn.svg [travis-url]:https://travis-ci.org/moxystudio/node-cross-spawn [travis-image]:https://img.shields.io/travis/moxystudio/node-cross-spawn/master.svg [appveyor-url]:https://ci.appveyor.com/project/satazor/node-cross-spawn [appveyor-image]:https://img.shields.io/appveyor/ci/satazor/node-cross-spawn/master.svg [codecov-url]:https://codecov.io/gh/moxystudio/node-cross-spawn [codecov-image]:https://img.shields.io/codecov/c/github/moxystudio/node-cross-spawn/master.svg [david-dm-url]:https://david-dm.org/moxystudio/node-cross-spawn [david-dm-image]:https://img.shields.io/david/moxystudio/node-cross-spawn.svg [david-dm-dev-url]:https://david-dm.org/moxystudio/node-cross-spawn?type=dev [david-dm-dev-image]:https://img.shields.io/david/dev/moxystudio/node-cross-spawn.svg A cross platform solution to node's spawn and spawnSync. 
## Installation Node.js version 8 and up: `$ npm install cross-spawn` Node.js version 7 and under: `$ npm install cross-spawn@6` ## Why Node has issues when using spawn on Windows: - It ignores [PATHEXT](https://github.com/joyent/node/issues/2318) - It does not support [shebangs](https://en.wikipedia.org/wiki/Shebang_(Unix)) - Has problems running commands with [spaces](https://github.com/nodejs/node/issues/7367) - Has problems running commands with posix relative paths (e.g.: `./my-folder/my-executable`) - Has an [issue](https://github.com/moxystudio/node-cross-spawn/issues/82) with command shims (files in `node_modules/.bin/`), where arguments with quotes and parenthesis would result in [invalid syntax error](https://github.com/moxystudio/node-cross-spawn/blob/e77b8f22a416db46b6196767bcd35601d7e11d54/test/index.test.js#L149) - No `options.shell` support on node `<v4.8` All these issues are handled correctly by `cross-spawn`. There are some known modules, such as [win-spawn](https://github.com/ForbesLindesay/win-spawn), that try to solve this but they are either broken or provide faulty escaping of shell arguments. ## Usage Exactly the same way as node's [`spawn`](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options) or [`spawnSync`](https://nodejs.org/api/child_process.html#child_process_child_process_spawnsync_command_args_options), so it's a drop in replacement. ```js const spawn = require('cross-spawn'); // Spawn NPM asynchronously const child = spawn('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' }); // Spawn NPM synchronously const result = spawn.sync('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' }); ``` ## Caveats ### Using `options.shell` as an alternative to `cross-spawn` Starting from node `v4.8`, `spawn` has a `shell` option that allows you run commands from within a shell. This new option solves the [PATHEXT](https://github.com/joyent/node/issues/2318) issue but: - It's not supported in node `<v4.8` - You must manually escape the command and arguments which is very error prone, specially when passing user input - There are a lot of other unresolved issues from the [Why](#why) section that you must take into account If you are using the `shell` option to spawn a command in a cross platform way, consider using `cross-spawn` instead. You have been warned. ### `options.shell` support While `cross-spawn` adds support for `options.shell` in node `<v4.8`, all of its enhancements are disabled. This mimics the Node.js behavior. More specifically, the command and its arguments will not be automatically escaped nor shebang support will be offered. This is by design because if you are using `options.shell` you are probably targeting a specific platform anyway and you don't want things to get into your way. ### Shebangs support While `cross-spawn` handles shebangs on Windows, its support is limited. More specifically, it just supports `#!/usr/bin/env <program>` where `<program>` must not contain any arguments. If you would like to have the shebang support improved, feel free to contribute via a pull-request. Remember to always test your code on Windows! ## Tests `$ npm test` `$ npm test -- --watch` during development ## License Released under the [MIT License](https://www.opensource.org/licenses/mit-license.php). 
<p align="center"> <a href="http://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # interpret [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] A dictionary of file extensions and associated module loaders. ## What is it This is used by [Liftoff](http://github.com/tkellen/node-liftoff) to automatically require dependencies for configuration files, and by [rechoir](http://github.com/tkellen/node-rechoir) for registering module loaders. ## API ### extensions Map file types to modules which provide a [require.extensions] loader. ```js { '.babel.js': [ { module: '@babel/register', register: function(hook) { // register on .js extension due to https://github.com/joyent/node/blob/v0.12.0/lib/module.js#L353 // which only captures the final extension (.babel.js -> .js) hook({ extensions: '.js' }); }, }, { module: 'babel-register', register: function(hook) { hook({ extensions: '.js' }); }, }, { module: 'babel-core/register', register: function(hook) { hook({ extensions: '.js' }); }, }, { module: 'babel/register', register: function(hook) { hook({ extensions: '.js' }); }, }, ], '.babel.ts': [ { module: '@babel/register', register: function(hook) { hook({ extensions: '.ts' }); }, }, ], '.buble.js': 'buble/register', '.cirru': 'cirru-script/lib/register', '.cjsx': 'node-cjsx/register', '.co': 'coco', '.coffee': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'], '.coffee.md': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'], '.csv': 'require-csv', '.eg': 'earlgrey/register', '.esm.js': { module: 'esm', register: function(hook) { // register on .js extension due to https://github.com/joyent/node/blob/v0.12.0/lib/module.js#L353 // which only captures the final extension (.babel.js -> .js) var esmLoader = hook(module); require.extensions['.js'] = esmLoader('module')._extensions['.js']; }, }, '.iced': ['iced-coffee-script/register', 'iced-coffee-script'], '.iced.md': 'iced-coffee-script/register', '.ini': 'require-ini', '.js': null, '.json': null, '.json5': 'json5/lib/require', '.jsx': [ { module: '@babel/register', register: function(hook) { hook({ extensions: '.jsx' }); }, }, { module: 'babel-register', register: function(hook) { hook({ extensions: '.jsx' }); }, }, { module: 'babel-core/register', register: function(hook) { hook({ extensions: '.jsx' }); }, }, { module: 'babel/register', register: function(hook) { hook({ extensions: '.jsx' }); }, }, { module: 'node-jsx', register: function(hook) { hook.install({ extension: '.jsx', harmony: true }); }, }, ], '.litcoffee': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'], '.liticed': 'iced-coffee-script/register', '.ls': ['livescript', 'LiveScript'], '.mjs': '/absolute/path/to/interpret/mjs-stub.js', '.node': null, '.toml': { module: 'toml-require', register: function(hook) { hook.install(); }, }, '.ts': [ 'ts-node/register', 'typescript-node/register', 'typescript-register', 'typescript-require', 'sucrase/register/ts', { module: '@babel/register', register: function(hook) { hook({ extensions: '.ts' }); }, }, ], '.tsx': [ 'ts-node/register', 'typescript-node/register', 'sucrase/register', { module: '@babel/register', register: function(hook) { 
hook({ extensions: '.tsx' }); }, }, ], '.wisp': 'wisp/engine/node', '.xml': 'require-xml', '.yaml': 'require-yaml', '.yml': 'require-yaml', } ``` ### jsVariants Same as above, but only include the extensions which are javascript variants. ## How to use it Consumers should use the exported `extensions` or `jsVariants` object to determine which module should be loaded for a given extension. If a matching extension is found, consumers should do the following: 1. If the value is null, do nothing. 2. If the value is a string, try to require it. 3. If the value is an object, try to require the `module` property. If successful, the `register` property (a function) should be called with the module passed as the first argument. 4. If the value is an array, iterate over it, attempting step #2 or #3 until one of the attempts does not throw. [require.extensions]: http://nodejs.org/api/globals.html#globals_require_extensions [downloads-image]: http://img.shields.io/npm/dm/interpret.svg [npm-url]: https://www.npmjs.com/package/interpret [npm-image]: http://img.shields.io/npm/v/interpret.svg [travis-url]: https://travis-ci.org/gulpjs/interpret [travis-image]: http://img.shields.io/travis/gulpjs/interpret.svg?label=travis-ci [appveyor-url]: https://ci.appveyor.com/project/gulpjs/interpret [appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/interpret.svg?label=appveyor [coveralls-url]: https://coveralls.io/r/gulpjs/interpret [coveralls-image]: http://img.shields.io/coveralls/gulpjs/interpret/master.svg [gitter-url]: https://gitter.im/gulpjs/gulp [gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg Standard library ================ Standard library components for use with `tsc` (portable) and `asc` (assembly). Base configurations (.json) and definition files (.d.ts) are relevant to `tsc` only and not used by `asc`. # fast-levenshtein - Levenshtein algorithm in Javascript [![Build Status](https://secure.travis-ci.org/hiddentao/fast-levenshtein.png)](http://travis-ci.org/hiddentao/fast-levenshtein) [![NPM module](https://badge.fury.io/js/fast-levenshtein.png)](https://badge.fury.io/js/fast-levenshtein) [![NPM downloads](https://img.shields.io/npm/dm/fast-levenshtein.svg?maxAge=2592000)](https://www.npmjs.com/package/fast-levenshtein) [![Follow on Twitter](https://img.shields.io/twitter/url/http/shields.io.svg?style=social&label=Follow&maxAge=2592000)](https://twitter.com/hiddentao) An efficient Javascript implementation of the [Levenshtein algorithm](http://en.wikipedia.org/wiki/Levenshtein_distance) with locale-specific collator support. ## Features * Works in node.js and in the browser. * Better performance than other implementations by not needing to store the whole matrix ([more info](http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm)). * Locale-sensitive string comparisions if needed. * Comprehensive test suite and performance benchmark. * Small: <1 KB minified and gzipped ## Installation ### node.js Install using [npm](http://npmjs.org/): ```bash $ npm install fast-levenshtein ``` ### Browser Using bower: ```bash $ bower install fast-levenshtein ``` If you are not using any module loader system then the API will then be accessible via the `window.Levenshtein` object. 
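To make the `window.Levenshtein` global mentioned above concrete, here is a minimal sketch; it assumes the browser bundle has already been loaded on the page via a `<script>` tag:

```javascript
// Without a module loader, the library attaches itself to the window object
// and exposes the same `get` method shown in the examples below.
var distance = window.Levenshtein.get('back', 'book'); // 2
```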
## Examples **Default usage** ```javascript var levenshtein = require('fast-levenshtein'); var distance = levenshtein.get('back', 'book'); // 2 var distance = levenshtein.get('我愛你', '我叫你'); // 1 ``` **Locale-sensitive string comparisons** It supports using [Intl.Collator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Collator) for locale-sensitive string comparisons: ```javascript var levenshtein = require('fast-levenshtein'); levenshtein.get('mikailovitch', 'Mikhaïlovitch', { useCollator: true}); // 1 ``` ## Building and Testing To build the code and run the tests: ```bash $ npm install -g grunt-cli $ npm install $ npm run build ``` ## Performance _Thanks to [Titus Wormer](https://github.com/wooorm) for [encouraging me](https://github.com/hiddentao/fast-levenshtein/issues/1) to do this._ Benchmarked against other node.js levenshtein distance modules (on Macbook Air 2012, Core i7, 8GB RAM): ```bash Running suite Implementation comparison [benchmark/speed.js]... >> levenshtein-edit-distance x 234 ops/sec ±3.02% (73 runs sampled) >> levenshtein-component x 422 ops/sec ±4.38% (83 runs sampled) >> levenshtein-deltas x 283 ops/sec ±3.83% (78 runs sampled) >> natural x 255 ops/sec ±0.76% (88 runs sampled) >> levenshtein x 180 ops/sec ±3.55% (86 runs sampled) >> fast-levenshtein x 1,792 ops/sec ±2.72% (95 runs sampled) Benchmark done. Fastest test is fast-levenshtein at 4.2x faster than levenshtein-component ``` You can run this benchmark yourself by doing: ```bash $ npm install $ npm run build $ npm run benchmark ``` ## Contributing If you wish to submit a pull request please update and/or create new tests for any changes you make and ensure the grunt build passes. See [CONTRIBUTING.md](https://github.com/hiddentao/fast-levenshtein/blob/master/CONTRIBUTING.md) for details. ## License MIT - see [LICENSE.md](https://github.com/hiddentao/fast-levenshtein/blob/master/LICENSE.md) # base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. **WARNING:** This module is **NOT RFC3548** compliant, it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. ## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base. 
Base | Alphabet
------------- | -------------
2 | `01`
8 | `01234567`
11 | `0123456789a`
16 | `0123456789abcdef`
32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ`
32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32)
36 | `0123456789abcdefghijklmnopqrstuvwxyz`
58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz`
62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ`
64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/`
66 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~`

## How it works

It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. Then for every leading zero in the input (not significant as a number) it will encode as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base. For example, a base58 alphabet packs roughly 5.858 bits per character.

This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character.

While unusual, this does mean that no padding is required and it works for bases like 43.

## LICENSE [MIT](LICENSE)

A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets.

# URI.js

URI.js is an [RFC 3986](http://www.ietf.org/rfc/rfc3986.txt) compliant, scheme extendable URI parsing/validating/resolving library for all JavaScript environments (browsers, Node.js, etc). It is also compliant with the IRI ([RFC 3987](http://www.ietf.org/rfc/rfc3987.txt)), IDNA ([RFC 5890](http://www.ietf.org/rfc/rfc5890.txt)), IPv6 Address ([RFC 5952](http://www.ietf.org/rfc/rfc5952.txt)), IPv6 Zone Identifier ([RFC 6874](http://www.ietf.org/rfc/rfc6874.txt)) specifications.

URI.js has an extensive test suite, and works in all (Node.js, web) environments. It weighs in at 6.4kb (gzipped, 17kb deflated).
## API ### Parsing URI.parse("uri://user:pass@example.com:123/one/two.three?q1=a1&q2=a2#body"); //returns: //{ // scheme : "uri", // userinfo : "user:pass", // host : "example.com", // port : 123, // path : "/one/two.three", // query : "q1=a1&q2=a2", // fragment : "body" //} ### Serializing URI.serialize({scheme : "http", host : "example.com", fragment : "footer"}) === "http://example.com/#footer" ### Resolving URI.resolve("uri://a/b/c/d?q", "../../g") === "uri://a/g" ### Normalizing URI.normalize("HTTP://ABC.com:80/%7Esmith/home.html") === "http://abc.com/~smith/home.html" ### Comparison URI.equal("example://a/b/c/%7Bfoo%7D", "eXAMPLE://a/./b/../b/%63/%7bfoo%7d") === true ### IP Support //IPv4 normalization URI.normalize("//192.068.001.000") === "//192.68.1.0" //IPv6 normalization URI.normalize("//[2001:0:0DB8::0:0001]") === "//[2001:0:db8::1]" //IPv6 zone identifier support URI.parse("//[2001:db8::7%25en1]"); //returns: //{ // host : "2001:db8::7%en1" //} ### IRI Support //convert IRI to URI URI.serialize(URI.parse("http://examplé.org/rosé")) === "http://xn--exampl-gva.org/ros%C3%A9" //convert URI to IRI URI.serialize(URI.parse("http://xn--exampl-gva.org/ros%C3%A9"), {iri:true}) === "http://examplé.org/rosé" ### Options All of the above functions can accept an additional options argument that is an object that can contain one or more of the following properties: * `scheme` (string) Indicates the scheme that the URI should be treated as, overriding the URI's normal scheme parsing behavior. * `reference` (string) If set to `"suffix"`, it indicates that the URI is in the suffix format, and the validator will use the option's `scheme` property to determine the URI's scheme. * `tolerant` (boolean, false) If set to `true`, the parser will relax URI resolving rules. * `absolutePath` (boolean, false) If set to `true`, the serializer will not resolve a relative `path` component. * `iri` (boolean, false) If set to `true`, the serializer will unescape non-ASCII characters as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `unicodeSupport` (boolean, false) If set to `true`, the parser will unescape non-ASCII characters in the parsed output as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `domainHost` (boolean, false) If set to `true`, the library will treat the `host` component as a domain name, and convert IDNs (International Domain Names) as per [RFC 5891](http://www.ietf.org/rfc/rfc5891.txt). ## Scheme Extendable URI.js supports inserting custom [scheme](http://en.wikipedia.org/wiki/URI_scheme) dependent processing rules. 
Currently, URI.js has built in support for the following schemes: * http \[[RFC 2616](http://www.ietf.org/rfc/rfc2616.txt)\] * https \[[RFC 2818](http://www.ietf.org/rfc/rfc2818.txt)\] * ws \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\] * wss \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\] * mailto \[[RFC 6068](http://www.ietf.org/rfc/rfc6068.txt)\] * urn \[[RFC 2141](http://www.ietf.org/rfc/rfc2141.txt)\] * urn:uuid \[[RFC 4122](http://www.ietf.org/rfc/rfc4122.txt)\] ### HTTP/HTTPS Support URI.equal("HTTP://ABC.COM:80", "http://abc.com/") === true URI.equal("https://abc.com", "HTTPS://ABC.COM:443/") === true ### WS/WSS Support URI.parse("wss://example.com/foo?bar=baz"); //returns: //{ // scheme : "wss", // host: "example.com", // resourceName: "/foo?bar=baz", // secure: true, //} URI.equal("WS://ABC.COM:80/chat#one", "ws://abc.com/chat") === true ### Mailto Support URI.parse("mailto:alpha@example.com,bravo@example.com?subject=SUBSCRIBE&body=Sign%20me%20up!"); //returns: //{ // scheme : "mailto", // to : ["alpha@example.com", "bravo@example.com"], // subject : "SUBSCRIBE", // body : "Sign me up!" //} URI.serialize({ scheme : "mailto", to : ["alpha@example.com"], subject : "REMOVE", body : "Please remove me", headers : { cc : "charlie@example.com" } }) === "mailto:alpha@example.com?cc=charlie@example.com&subject=REMOVE&body=Please%20remove%20me" ### URN Support URI.parse("urn:example:foo"); //returns: //{ // scheme : "urn", // nid : "example", // nss : "foo", //} #### URN UUID Support URI.parse("urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6"); //returns: //{ // scheme : "urn", // nid : "uuid", // uuid : "f81d4fae-7dec-11d0-a765-00a0c91e6bf6", //} ## Usage To load in a browser, use the following tag: <script type="text/javascript" src="uri-js/dist/es5/uri.all.min.js"></script> To load in a CommonJS/Module environment, first install with npm/yarn by running on the command line: npm install uri-js # OR yarn add uri-js Then, in your code, load it using: const URI = require("uri-js"); If you are writing your code in ES6+ (ESNEXT) or TypeScript, you would load it using: import * as URI from "uri-js"; Or you can load just what you need using named exports: import { parse, serialize, resolve, resolveComponents, normalize, equal, removeDotSegments, pctEncChar, pctDecChars, escapeComponent, unescapeComponent } from "uri-js"; ## Breaking changes ### Breaking changes from 3.x URN parsing has been completely changed to better align with the specification. Scheme is now always `urn`, but has two new properties: `nid` which contains the Namspace Identifier, and `nss` which contains the Namespace Specific String. The `nss` property will be removed by higher order scheme handlers, such as the UUID URN scheme handler. The UUID of a URN can now be found in the `uuid` property. ### Breaking changes from 2.x URI validation has been removed as it was slow, exposed a vulnerabilty, and was generally not useful. ### Breaking changes from 1.x The `errors` array on parsed components is now an `error` string. ## assemblyscript-temporal An implementation of temporal within AssemblyScript, with an initial focus on non-timezone-aware classes and functionality. ### Why? AssemblyScript has minimal `Date` support, however, the JS Date API itself is terrible and people tend not to use it that often. As a result libraries like moment / luxon have become staple replacements. 
However, there is now a [relatively mature TC39 proposal](https://github.com/tc39/proposal-temporal) that adds greatly improved date support to JS. The goal of this project is to implement Temporal for AssemblyScript.

### Usage

This library currently supports the following types:

#### `PlainDateTime`

A `PlainDateTime` represents a calendar date and wall-clock time that does not carry time zone information, e.g. December 7th, 1995 at 3:00 PM (in the Gregorian calendar). For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaindatetime.html); this implementation follows the specification as closely as possible.

You can create a `PlainDateTime` from individual components, a string or an object literal:

```javascript
datetime = new PlainDateTime(1976, 11, 18, 15, 23, 30, 123, 456, 789);
datetime.year; // 1976
datetime.month; // 11
// ...
datetime.nanosecond; // 789

datetime = PlainDateTime.from("1976-11-18T12:34:56");
datetime.toString(); // "1976-11-18T12:34:56"

datetime = PlainDateTime.from({ year: 1966, month: 3, day: 3 });
datetime.toString(); // "1966-03-03T00:00:00"
```

There are various ways you can manipulate a date:

```javascript
// use 'with' to copy a date but with various property values overridden
datetime = new PlainDateTime(1976, 11, 18, 15, 23, 30, 123, 456, 789);
datetime.with({ year: 2019 }).toString(); // "2019-11-18T15:23:30.123456789"

// use 'add' or 'subtract' to add / subtract a duration
datetime = PlainDateTime.from("2020-01-12T15:00");
datetime.add({ months: 1 }).toString(); // "2020-02-12T15:00:00"

// add / subtract support Duration objects or object literals
datetime.add(new Duration(1)).toString(); // "2021-01-12T15:00:00"
```

You can compare dates and check for equality:

```javascript
dt1 = PlainDateTime.from("1976-11-18");
dt2 = PlainDateTime.from("2019-10-29");
PlainDateTime.compare(dt1, dt1); // 0
PlainDateTime.compare(dt1, dt2); // -1
dt1.equals(dt1); // true
```

Currently `PlainDateTime` only supports the ISO 8601 (Gregorian) calendar.

#### `PlainDate`

A `PlainDate` object represents a calendar date that is not associated with a particular time or time zone, e.g. August 24th, 2006. For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaindate.html); this implementation follows the specification as closely as possible.

The `PlainDate` API is almost identical to `PlainDateTime`, so see above for API usage examples.

#### `PlainTime`

A `PlainTime` object represents a wall-clock time that is not associated with a particular date or time zone, e.g. 7:39 PM. For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaintime.html); this implementation follows the specification as closely as possible.

The `PlainTime` API is almost identical to `PlainDateTime`, so see above for API usage examples.

#### `PlainMonthDay`

A date without a year component. This is useful to express things like "Bastille Day is on the 14th of July". For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plainmonthday.html); this implementation follows the specification as closely as possible.

```javascript
const monthDay = PlainMonthDay.from({ month: 7, day: 14 }); // => 07-14
const date = monthDay.toPlainDate({ year: 2030 }); // => 2030-07-14
date.dayOfWeek; // => 7
```

The `PlainMonthDay` API is almost identical to `PlainDateTime`, so see above for more API usage examples.
#### `PlainYearMonth` A date without a day component. This is useful to express things like "the October 2020 meeting". For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plainyearmonth.html) , this implementation follows the specification as closely as possible. The `PlainYearMonth` API is almost identical to `PlainDateTime`, so see above for API usage examples. #### `now` The `now` object has several methods which give information about the current time and date. ```javascript dateTime = now.plainDateTimeISO(); dateTime.toString(); // 2021-04-01T12:05:47.357 ``` ## Contributing This project is open source, MIT licensed and your contributions are very much welcomed. There is a [brief document that outlines implementation progress and priorities](./development.md). functional-red-black-tree ========================= A [fully persistent](http://en.wikipedia.org/wiki/Persistent_data_structure) [red-black tree](http://en.wikipedia.org/wiki/Red%E2%80%93black_tree) written 100% in JavaScript. Works both in node.js and in the browser via [browserify](http://browserify.org/). Functional (or fully presistent) data structures allow for non-destructive updates. So if you insert an element into the tree, it returns a new tree with the inserted element rather than destructively updating the existing tree in place. Doing this requires using extra memory, and if one were naive it could cost as much as reallocating the entire tree. Instead, this data structure saves some memory by recycling references to previously allocated subtrees. This requires using only O(log(n)) additional memory per update instead of a full O(n) copy. Some advantages of this is that it is possible to apply insertions and removals to the tree while still iterating over previous versions of the tree. Functional and persistent data structures can also be useful in many geometric algorithms like point location within triangulations or ray queries, and can be used to analyze the history of executing various algorithms. This added power though comes at a cost, since it is generally a bit slower to use a functional data structure than an imperative version. However, if your application needs this behavior then you may consider using this module. 
# Install npm install functional-red-black-tree # Example Here is an example of some basic usage: ```javascript //Load the library var createTree = require("functional-red-black-tree") //Create a tree var t1 = createTree() //Insert some items into the tree var t2 = t1.insert(1, "foo") var t3 = t2.insert(2, "bar") //Remove something var t4 = t3.remove(1) ``` # API ```javascript var createTree = require("functional-red-black-tree") ``` ## Overview - [Tree methods](#tree-methods) - [`var tree = createTree([compare])`](#var-tree-=-createtreecompare) - [`tree.keys`](#treekeys) - [`tree.values`](#treevalues) - [`tree.length`](#treelength) - [`tree.get(key)`](#treegetkey) - [`tree.insert(key, value)`](#treeinsertkey-value) - [`tree.remove(key)`](#treeremovekey) - [`tree.find(key)`](#treefindkey) - [`tree.ge(key)`](#treegekey) - [`tree.gt(key)`](#treegtkey) - [`tree.lt(key)`](#treeltkey) - [`tree.le(key)`](#treelekey) - [`tree.at(position)`](#treeatposition) - [`tree.begin`](#treebegin) - [`tree.end`](#treeend) - [`tree.forEach(visitor(key,value)[, lo[, hi]])`](#treeforEachvisitorkeyvalue-lo-hi) - [`tree.root`](#treeroot) - [Node properties](#node-properties) - [`node.key`](#nodekey) - [`node.value`](#nodevalue) - [`node.left`](#nodeleft) - [`node.right`](#noderight) - [Iterator methods](#iterator-methods) - [`iter.key`](#iterkey) - [`iter.value`](#itervalue) - [`iter.node`](#iternode) - [`iter.tree`](#itertree) - [`iter.index`](#iterindex) - [`iter.valid`](#itervalid) - [`iter.clone()`](#iterclone) - [`iter.remove()`](#iterremove) - [`iter.update(value)`](#iterupdatevalue) - [`iter.next()`](#iternext) - [`iter.prev()`](#iterprev) - [`iter.hasNext`](#iterhasnext) - [`iter.hasPrev`](#iterhasprev) ## Tree methods ### `var tree = createTree([compare])` Creates an empty functional tree * `compare` is an optional comparison function, same semantics as array.sort() **Returns** An empty tree ordered by `compare` ### `tree.keys` A sorted array of all the keys in the tree ### `tree.values` An array array of all the values in the tree ### `tree.length` The number of items in the tree ### `tree.get(key)` Retrieves the value associated to the given key * `key` is the key of the item to look up **Returns** The value of the first node associated to `key` ### `tree.insert(key, value)` Creates a new tree with the new pair inserted. * `key` is the key of the item to insert * `value` is the value of the item to insert **Returns** A new tree with `key` and `value` inserted ### `tree.remove(key)` Removes the first item with `key` in the tree * `key` is the key of the item to remove **Returns** A new tree with the given item removed if it exists ### `tree.find(key)` Returns an iterator pointing to the first item in the tree with `key`, otherwise `null`. ### `tree.ge(key)` Find the first item in the tree whose key is `>= key` * `key` is the key to search for **Returns** An iterator at the given element. 
### `tree.gt(key)` Finds the first item in the tree whose key is `> key` * `key` is the key to search for **Returns** An iterator at the given element ### `tree.lt(key)` Finds the last item in the tree whose key is `< key` * `key` is the key to search for **Returns** An iterator at the given element ### `tree.le(key)` Finds the last item in the tree whose key is `<= key` * `key` is the key to search for **Returns** An iterator at the given element ### `tree.at(position)` Finds an iterator starting at the given element * `position` is the index at which the iterator gets created **Returns** An iterator starting at position ### `tree.begin` An iterator pointing to the first element in the tree ### `tree.end` An iterator pointing to the last element in the tree ### `tree.forEach(visitor(key,value)[, lo[, hi]])` Walks a visitor function over the nodes of the tree in order. * `visitor(key,value)` is a callback that gets executed on each node. If a truthy value is returned from the visitor, then iteration is stopped. * `lo` is an optional start of the range to visit (inclusive) * `hi` is an optional end of the range to visit (non-inclusive) **Returns** The last value returned by the callback ### `tree.root` Returns the root node of the tree ## Node properties Each node of the tree has the following properties: ### `node.key` The key associated to the node ### `node.value` The value associated to the node ### `node.left` The left subtree of the node ### `node.right` The right subtree of the node ## Iterator methods ### `iter.key` The key of the item referenced by the iterator ### `iter.value` The value of the item referenced by the iterator ### `iter.node` The value of the node at the iterator's current position. `null` is iterator is node valid. ### `iter.tree` The tree associated to the iterator ### `iter.index` Returns the position of this iterator in the sequence. ### `iter.valid` Checks if the iterator is valid ### `iter.clone()` Makes a copy of the iterator ### `iter.remove()` Removes the item at the position of the iterator **Returns** A new binary search tree with `iter`'s item removed ### `iter.update(value)` Updates the value of the node in the tree at this iterator **Returns** A new binary search tree with the corresponding node updated ### `iter.next()` Advances the iterator to the next position ### `iter.prev()` Moves the iterator backward one element ### `iter.hasNext` If true, then the iterator is not at the end of the sequence ### `iter.hasPrev` If true, then the iterator is not at the beginning of the sequence # Credits (c) 2013 Mikola Lysenko. MIT License JS-YAML - YAML 1.2 parser / writer for JavaScript ================================================= [![Build Status](https://travis-ci.org/nodeca/js-yaml.svg?branch=master)](https://travis-ci.org/nodeca/js-yaml) [![NPM version](https://img.shields.io/npm/v/js-yaml.svg)](https://www.npmjs.org/package/js-yaml) __[Online Demo](http://nodeca.github.com/js-yaml/)__ This is an implementation of [YAML](http://yaml.org/), a human-friendly data serialization language. Started as [PyYAML](http://pyyaml.org/) port, it was completely rewritten from scratch. Now it's very fast, and supports 1.2 spec. 
Installation ------------ ### YAML module for node.js ``` npm install js-yaml ``` ### CLI executable If you want to inspect your YAML files from CLI, install js-yaml globally: ``` npm install -g js-yaml ``` #### Usage ``` usage: js-yaml [-h] [-v] [-c] [-t] file Positional arguments: file File with YAML document(s) Optional arguments: -h, --help Show this help message and exit. -v, --version Show program's version number and exit. -c, --compact Display errors in compact mode -t, --trace Show stack trace on error ``` ### Bundled YAML library for browsers ``` html <!-- esprima required only for !!js/function --> <script src="esprima.js"></script> <script src="js-yaml.min.js"></script> <script type="text/javascript"> var doc = jsyaml.load('greeting: hello\nname: world'); </script> ``` Browser support was done mostly for the online demo. If you find any errors - feel free to send pull requests with fixes. Also note, that IE and other old browsers needs [es5-shims](https://github.com/kriskowal/es5-shim) to operate. Notes: 1. We have no resources to support browserified version. Don't expect it to be well tested. Don't expect fast fixes if something goes wrong there. 2. `!!js/function` in browser bundle will not work by default. If you really need it - load `esprima` parser first (via amd or directly). 3. `!!bin` in browser will return `Array`, because browsers do not support node.js `Buffer` and adding Buffer shims is completely useless on practice. API --- Here we cover the most 'useful' methods. If you need advanced details (creating your own tags), see [wiki](https://github.com/nodeca/js-yaml/wiki) and [examples](https://github.com/nodeca/js-yaml/tree/master/examples) for more info. ``` javascript const yaml = require('js-yaml'); const fs = require('fs'); // Get document, or throw exception on error try { const doc = yaml.safeLoad(fs.readFileSync('/home/ixti/example.yml', 'utf8')); console.log(doc); } catch (e) { console.log(e); } ``` ### safeLoad (string [ , options ]) **Recommended loading way.** Parses `string` as single YAML document. Returns either a plain object, a string or `undefined`, or throws `YAMLException` on error. By default, does not support regexps, functions and undefined. This method is safe for untrusted data. options: - `filename` _(default: null)_ - string to be used as a file path in error/warning messages. - `onWarning` _(default: null)_ - function to call on warning messages. Loader will call this function with an instance of `YAMLException` for each warning. - `schema` _(default: `DEFAULT_SAFE_SCHEMA`)_ - specifies a schema to use. - `FAILSAFE_SCHEMA` - only strings, arrays and plain objects: http://www.yaml.org/spec/1.2/spec.html#id2802346 - `JSON_SCHEMA` - all JSON-supported types: http://www.yaml.org/spec/1.2/spec.html#id2803231 - `CORE_SCHEMA` - same as `JSON_SCHEMA`: http://www.yaml.org/spec/1.2/spec.html#id2804923 - `DEFAULT_SAFE_SCHEMA` - all supported YAML types, without unsafe ones (`!!js/undefined`, `!!js/regexp` and `!!js/function`): http://yaml.org/type/ - `DEFAULT_FULL_SCHEMA` - all supported YAML types. - `json` _(default: false)_ - compatibility with JSON.parse behaviour. If true, then duplicate keys in a mapping will override values rather than throwing an error. NOTE: This function **does not** understand multi-document sources, it throws exception on those. NOTE: JS-YAML **does not** support schema-specific tag resolution restrictions. So, the JSON schema is not as strictly defined in the YAML specification. 
It allows numbers in any notation, use `Null` and `NULL` as `null`, etc. The core schema also has no such restrictions. It allows binary notation for integers. ### load (string [ , options ]) **Use with care with untrusted sources**. The same as `safeLoad()` but uses `DEFAULT_FULL_SCHEMA` by default - adds some JavaScript-specific types: `!!js/function`, `!!js/regexp` and `!!js/undefined`. For untrusted sources, you must additionally validate object structure to avoid injections: ``` javascript const untrusted_code = '"toString": !<tag:yaml.org,2002:js/function> "function (){very_evil_thing();}"'; // I'm just converting that string, what could possibly go wrong? require('js-yaml').load(untrusted_code) + '' ``` ### safeLoadAll (string [, iterator] [, options ]) Same as `safeLoad()`, but understands multi-document sources. Applies `iterator` to each document if specified, or returns array of documents. ``` javascript const yaml = require('js-yaml'); yaml.safeLoadAll(data, function (doc) { console.log(doc); }); ``` ### loadAll (string [, iterator] [ , options ]) Same as `safeLoadAll()` but uses `DEFAULT_FULL_SCHEMA` by default. ### safeDump (object [ , options ]) Serializes `object` as a YAML document. Uses `DEFAULT_SAFE_SCHEMA`, so it will throw an exception if you try to dump regexps or functions. However, you can disable exceptions by setting the `skipInvalid` option to `true`. options: - `indent` _(default: 2)_ - indentation width to use (in spaces). - `noArrayIndent` _(default: false)_ - when true, will not add an indentation level to array elements - `skipInvalid` _(default: false)_ - do not throw on invalid types (like function in the safe schema) and skip pairs and single values with such types. - `flowLevel` (default: -1) - specifies level of nesting, when to switch from block to flow style for collections. -1 means block style everwhere - `styles` - "tag" => "style" map. Each tag may have own set of styles. - `schema` _(default: `DEFAULT_SAFE_SCHEMA`)_ specifies a schema to use. - `sortKeys` _(default: `false`)_ - if `true`, sort keys when dumping YAML. If a function, use the function to sort the keys. - `lineWidth` _(default: `80`)_ - set max line width. - `noRefs` _(default: `false`)_ - if `true`, don't convert duplicate objects into references - `noCompatMode` _(default: `false`)_ - if `true` don't try to be compatible with older yaml versions. Currently: don't quote "yes", "no" and so on, as required for YAML 1.1 - `condenseFlow` _(default: `false`)_ - if `true` flow sequences will be condensed, omitting the space between `a, b`. Eg. `'[a,b]'`, and omitting the space between `key: value` and quoting the key. Eg. `'{"a":b}'` Can be useful when using yaml for pretty URL query params as spaces are %-encoded. The following table show availlable styles (e.g. "canonical", "binary"...) available for each tag (.e.g. !!null, !!int ...). 
YAML output is shown on the right side after `=>` (default setting) or `->`:

``` none
!!null
  "canonical"   -> "~"
  "lowercase"   => "null"
  "uppercase"   -> "NULL"
  "camelcase"   -> "Null"

!!int
  "binary"      -> "0b1", "0b101010", "0b1110001111010"
  "octal"       -> "01", "052", "016172"
  "decimal"     => "1", "42", "7290"
  "hexadecimal" -> "0x1", "0x2A", "0x1C7A"

!!bool
  "lowercase"   => "true", "false"
  "uppercase"   -> "TRUE", "FALSE"
  "camelcase"   -> "True", "False"

!!float
  "lowercase"   => ".nan", '.inf'
  "uppercase"   -> ".NAN", '.INF'
  "camelcase"   -> ".NaN", '.Inf'
```

Example:

``` javascript
safeDump (object, {
  'styles': {
    '!!null': 'canonical' // dump null as ~
  },
  'sortKeys': true        // sort object keys
});
```

### dump (object [ , options ])

Same as `safeDump()` but without limits (uses `DEFAULT_FULL_SCHEMA` by default).

Supported YAML types
--------------------

The list of standard YAML tags and corresponding JavaScript types. See also [YAML tag discussion](http://pyyaml.org/wiki/YAMLTagDiscussion) and [YAML types repository](http://yaml.org/type/).

```
!!null ''                   # null
!!bool 'yes'                # bool
!!int '3...'                # number
!!float '3.14...'           # number
!!binary '...base64...'     # buffer
!!timestamp 'YYYY-...'      # date
!!omap [ ... ]              # array of key-value pairs
!!pairs [ ... ]             # array of array pairs
!!set { ... }               # array of objects with given keys and null values
!!str '...'                 # string
!!seq [ ... ]               # array
!!map { ... }               # object
```

**JavaScript-specific tags**

```
!!js/regexp /pattern/gim            # RegExp
!!js/undefined ''                   # Undefined
!!js/function 'function () {...}'   # Function
```

Caveats
-------

Note that if you use arrays or objects as keys in JS-YAML, they cannot be preserved: JS does not allow objects or arrays as keys, so they are stringified (by calling their `toString()` method) at the moment they are added.

``` yaml
---
? [ foo, bar ]
: - baz
? { foo: bar }
: - baz
  - baz
```

``` javascript
{ "foo,bar": ["baz"], "[object Object]": ["baz", "baz"] }
```

Also, reading of properties on implicit block mapping keys is not supported yet. So, the following YAML document cannot be loaded.

``` yaml
&anchor foo:
  foo: bar
  *anchor: duplicate key
  baz: bat
  *anchor: duplicate key
```

js-yaml for enterprise
----------------------

Available as part of the Tidelift Subscription

The maintainers of js-yaml and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. [Learn more.](https://tidelift.com/subscription/pkg/npm-js-yaml?utm_source=npm-js-yaml&utm_medium=referral&utm_campaign=enterprise&utm_term=repo)

# axios // core

The modules found in `core/` should be modules that are specific to the domain logic of axios. These modules would most likely not make sense to be consumed outside of the axios module, as their logic is too specific. Some examples of core modules are:

- Dispatching requests
- Managing interceptors
- Handling config

[![Build Status](https://api.travis-ci.org/adaltas/node-csv-stringify.svg)](https://travis-ci.org/#!/adaltas/node-csv-stringify) [![NPM](https://img.shields.io/npm/dm/csv-stringify)](https://www.npmjs.com/package/csv-stringify) [![NPM](https://img.shields.io/npm/v/csv-stringify)](https://www.npmjs.com/package/csv-stringify)

This package is a stringifier converting records into a CSV text and implementing the Node.js [`stream.Transform` API](https://nodejs.org/api/stream.html).
It also provides simpler synchronous and callback-based APIs for convenience. It is both extremely easy to use and powerful. It was first released in 2010 and is tested against big data sets by a large community.

## Documentation

* [Project homepage](http://csv.js.org/stringify/)
* [API](http://csv.js.org/stringify/api/)
* [Options](http://csv.js.org/stringify/options/)
* [Examples](http://csv.js.org/stringify/examples/)

## Main features

* Follow the Node.js streaming API
* Simplicity with the optional callback API
* Support for custom formatters, delimiters, quotes, escape characters and header
* Support big datasets
* Complete test coverage and samples for inspiration
* Only 1 external dependency
* to be used conjointly with `csv-generate`, `csv-parse` and `stream-transform`
* MIT License

## Usage

The module is built on the Node.js Stream API. For the sake of simplicity, a simple callback API is also provided. To give you a quick look, here's an example of the callback API:

```javascript
const stringify = require('csv-stringify')
const assert = require('assert')
// import stringify from 'csv-stringify'
// import assert from 'assert/strict'

const input = [ [ '1', '2', '3', '4' ], [ 'a', 'b', 'c', 'd' ] ]
stringify(input, function(err, output) {
  const expected = '1,2,3,4\na,b,c,d\n'
  assert.strictEqual(output, expected, `output.should.eql ${expected}`)
  console.log("Passed.", output)
})
```

## Development

Tests are executed with mocha. To install it, run `npm install` followed by `npm test`. It will install mocha and its dependencies in your project "node_modules" directory and run the test suite. The tests run against the CoffeeScript source files. To generate the JavaScript files, run `npm run build`.

The test suite is run online with [Travis](https://travis-ci.org/#!/adaltas/node-csv-stringify). See the [Travis definition file](https://github.com/adaltas/node-csv-stringify/blob/master/.travis.yml) to view the tested Node.js versions.

## Contributors

* David Worms: <https://github.com/wdavidw>

[csv_home]: https://github.com/adaltas/node-csv
[stream_transform]: http://nodejs.org/api/stream.html#stream_class_stream_transform
[examples]: http://csv.js.org/stringify/examples/
[csv]: https://github.com/adaltas/node-csv

<table><thead> <tr> <th>Linux</th> <th>OS X</th> <th>Windows</th> <th>Coverage</th> <th>Downloads</th> </tr> </thead><tbody><tr> <td colspan="2" align="center"> <a href="https://travis-ci.org/kaelzhang/node-ignore"> <img src="https://travis-ci.org/kaelzhang/node-ignore.svg?branch=master" alt="Build Status" /></a> </td> <td align="center"> <a href="https://ci.appveyor.com/project/kaelzhang/node-ignore"> <img src="https://ci.appveyor.com/api/projects/status/github/kaelzhang/node-ignore?branch=master&svg=true" alt="Windows Build Status" /></a> </td> <td align="center"> <a href="https://codecov.io/gh/kaelzhang/node-ignore"> <img src="https://codecov.io/gh/kaelzhang/node-ignore/branch/master/graph/badge.svg" alt="Coverage Status" /></a> </td> <td align="center"> <a href="https://www.npmjs.org/package/ignore"> <img src="http://img.shields.io/npm/dm/ignore.svg" alt="npm module downloads per month" /></a> </td> </tr></tbody></table>

# ignore

`ignore` is a manager, filter and parser which is implemented in pure JavaScript according to the .gitignore [spec](http://git-scm.com/docs/gitignore).

Pay attention that [`minimatch`](https://www.npmjs.org/package/minimatch) does not work in the gitignore way. To filter filenames according to a .gitignore file, I recommend this module.
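For example, gitignore-style negation is honored, which a plain glob matcher will not give you. A minimal sketch (the file names are illustrative):

```js
import ignore from 'ignore'

// '!important.log' re-includes a file that '*.log' would otherwise exclude,
// exactly as the same two lines in a .gitignore would
const ig = ignore().add(['*.log', '!important.log'])

ig.ignores('debug.log')     // true  - filtered out
ig.ignores('important.log') // false - kept
```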
##### Tested on

- Linux + Node: `0.8` - `7.x`
- Windows + Node: `0.10` - `7.x`, node < `0.10` is not tested due to the lack of AppVeyor support.

Actually, `ignore` does not rely on any particular version of node.

Since `4.0.0`, ignore will no longer support `node < 6` by default. To use it in node < 6, `require('ignore/legacy')`. For details, see [CHANGELOG](https://github.com/kaelzhang/node-ignore/blob/master/CHANGELOG.md).

## Table Of Main Contents

- [Usage](#usage)
- [`Pathname` Conventions](#pathname-conventions)
- [Guide for 2.x -> 3.x](#upgrade-2x---3x)
- [Guide for 3.x -> 4.x](#upgrade-3x---4x)
- See Also:
  - [`glob-gitignore`](https://www.npmjs.com/package/glob-gitignore) matches files using patterns and filters them according to gitignore rules.

## Usage

```js
import ignore from 'ignore'
const ig = ignore().add(['.abc/*', '!.abc/d/'])
```

### Filter the given paths

```js
const paths = [
  '.abc/a.js',    // filtered out
  '.abc/d/e.js'   // included
]

ig.filter(paths)        // ['.abc/d/e.js']
ig.ignores('.abc/a.js') // true
```

### As the filter function

```js
paths.filter(ig.createFilter()); // ['.abc/d/e.js']
```

### Win32 paths will be handled

```js
ig.filter(['.abc\\a.js', '.abc\\d\\e.js'])
// if the code above runs on windows, the result will be
// ['.abc\\d\\e.js']
```

## Why another ignore?

- `ignore` is a standalone module, and is much simpler so that it can easily work with other programs, unlike [isaacs](https://npmjs.org/~isaacs)'s [fstream-ignore](https://npmjs.org/package/fstream-ignore) which must work with the modules of the fstream family.

- `ignore` only contains utility methods to filter paths according to the specified ignore rules, so
  - `ignore` never tries to find out ignore rules by traversing directories or fetching from git configurations.
  - `ignore` doesn't care about sub-modules of git projects.

- Exactly according to the [gitignore man page](http://git-scm.com/docs/gitignore), fixes some known matching issues of fstream-ignore, such as:
  - '`/*.js`' should only match '`a.js`', but not '`abc/a.js`'.
  - '`**/foo`' should match '`foo`' anywhere.
  - Prevent re-including a file if a parent directory of that file is excluded.
  - Handle trailing whitespaces:
    - `'a '`(one space) should not match `'a '`(two spaces).
    - `'a \ '` matches `'a '`
  - All test cases are verified with the result of `git check-ignore`.

# Methods

## .add(pattern: string | Ignore): this
## .add(patterns: Array<string | Ignore>): this

- **pattern** `String | Ignore` An ignore pattern string, or the `Ignore` instance
- **patterns** `Array<String | Ignore>` Array of ignore patterns.

Adds a rule or several rules to the current manager. Returns `this`.

Notice that a line starting with `'#'`(hash) is treated as a comment. Put a backslash (`'\'`) in front of the first hash for patterns that begin with a hash, if you want to ignore a file with a hash at the beginning of the filename.

```js
ignore().add('#abc').ignores('#abc')    // false
ignore().add('\#abc').ignores('#abc')   // true
```

`pattern` could either be a line of ignore pattern or a string of multiple ignore patterns, which means we could just `ignore().add()` the content of an ignore file:

```js
ignore()
  .add(fs.readFileSync(filenameOfGitignore).toString())
  .filter(filenames)
```

`pattern` could also be an `ignore` instance, so that we could easily inherit the rules of another `Ignore` instance.

## <strike>.addIgnoreFile(path)</strike>

REMOVED in `3.x` for now.
To upgrade `ignore@2.x` up to `3.x`, use

```js
import fs from 'fs'

if (fs.existsSync(filename)) {
  ignore().add(fs.readFileSync(filename).toString())
}
```

instead.

## .filter(paths: Array<Pathname>): Array<Pathname>

```ts
type Pathname = string
```

Filters the given array of pathnames, and returns the filtered array.

- **paths** `Array.<Pathname>` The array of `pathname`s to be filtered.

### `Pathname` Conventions:

#### 1. `Pathname` should be a `path.relative()`d pathname

`Pathname` should be a string that has been `path.join()`ed, or the return value of `path.relative()` to the current directory.

```js
// WRONG
ig.ignores('./abc')

// WRONG, for it will never happen.
// If the gitignore rule locates at the root directory,
// `'/abc'` should be changed to `'abc'`.
// ```
// path.relative('/', '/abc')  -> 'abc'
// ```
ig.ignores('/abc')

// Right
ig.ignores('abc')

// Right
ig.ignores(path.join('./abc'))  // path.join('./abc') -> 'abc'
```

In other words, each `Pathname` here should be a relative path to the directory of the gitignore rules.

Suppose the dir structure is:

```
/path/to/your/repo
|-- a
|   |-- a.js
|-- .b
|-- .c
|    |-- .DS_store
```

Then the `paths` might be like this:

```js
[
  'a/a.js',
  '.b',
  '.c/.DS_store'
]
```

Usually, you could use [`glob`](http://npmjs.org/package/glob) with `option.mark = true` to fetch the structure of the current directory:

```js
import glob from 'glob'

glob('**', {
  // Adds a / character to directory matches.
  mark: true
}, (err, files) => {
  if (err) {
    return console.error(err)
  }

  let filtered = ignore().add(patterns).filter(files)
  console.log(filtered)
})
```

#### 2. filenames and dirnames

`node-ignore` does NO `fs.stat` during path matching, so for the example below:

```js
ig.add('config/')

// `ig` does NOT know if 'config' is a normal file, directory or something
ig.ignores('config')    // And it returns `false`

ig.ignores('config/')   // returns `true`
```

This is especially important to understand for people who develop libraries based on `node-ignore`.

## .ignores(pathname: Pathname): boolean

> new in 3.2.0

Returns `Boolean` whether `pathname` should be ignored.

```js
ig.ignores('.abc/a.js')    // true
```

## .createFilter()

Creates a filter function which could filter an array of paths with `Array.prototype.filter`.

Returns `function(path)` the filter function.

## `options.ignorecase` since 4.0.0

Similar to the `core.ignorecase` option of [git-config](https://git-scm.com/docs/git-config), `node-ignore` will be case insensitive if `options.ignorecase` is set to `true` (default value), otherwise case sensitive.

```js
const ig = ignore({
  ignorecase: false
})

ig.add('*.png')

ig.ignores('*.PNG')  // false
```

****

# Upgrade Guide

## Upgrade 2.x -> 3.x

- All `options` of 2.x are unnecessary and removed, so just remove them.
- `ignore()` instance is no longer an [`EventEmitter`](nodejs.org/api/events.html), and all events are unnecessary and removed.
- `.addIgnoreFile()` is removed, see the [.addIgnoreFile](#addignorefilepath) section for details.
## Upgrade 3.x -> 4.x Since `4.0.0`, `ignore` will no longer support node < 6, to use `ignore` in node < 6: ```js var ignore = require('ignore/legacy') ``` **** # Collaborators - [@whitecolor](https://github.com/whitecolor) *Alex* - [@SamyPesse](https://github.com/SamyPesse) *Samy Pessé* - [@azproduction](https://github.com/azproduction) *Mikhail Davydov* - [@TrySound](https://github.com/TrySound) *Bogdan Chadkin* - [@JanMattner](https://github.com/JanMattner) *Jan Mattner* - [@ntwb](https://github.com/ntwb) *Stephen Edgar* - [@kasperisager](https://github.com/kasperisager) *Kasper Isager* - [@sandersn](https://github.com/sandersn) *Nathan Shively-Sanders* # which Like the unix `which` utility. Finds the first instance of a specified executable in the PATH environment variable. Does not cache the results, so `hash -r` is not needed when the PATH changes. ## USAGE ```javascript var which = require('which') // async usage which('node', function (er, resolvedPath) { // er is returned if no "node" is found on the PATH // if it is found, then the absolute path to the exec is returned }) // or promise which('node').then(resolvedPath => { ... }).catch(er => { ... not found ... }) // sync usage // throws if not found var resolved = which.sync('node') // if nothrow option is used, returns null if not found resolved = which.sync('node', {nothrow: true}) // Pass options to override the PATH and PATHEXT environment vars. which('node', { path: someOtherPath }, function (er, resolved) { if (er) throw er console.log('found at %j', resolved) }) ``` ## CLI USAGE Same as the BSD `which(1)` binary. ``` usage: which [-as] program ... ``` ## OPTIONS You may pass an options object as the second argument. - `path`: Use instead of the `PATH` environment variable. - `pathExt`: Use instead of the `PATHEXT` environment variable. - `all`: Return all matches, instead of just the first one. Note that this means the function returns an array of strings instead of a single string. <p align="center"> <img width="250" src="https://raw.githubusercontent.com/yargs/yargs/master/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> ![ci](https://github.com/yargs/yargs/workflows/ci/badge.svg) [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments: ``` mocha [spec..] Run tests with Mocha Commands mocha inspect [spec..] Run tests with Mocha [default] mocha init <path> create a client-side Mocha setup at <path> Rules & Behavior --allow-uncaught Allow uncaught errors to propagate [boolean] --async-only, -A Require all tests to use a callback (async) or return a Promise [boolean] ``` * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). 
## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage ### Simple Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') const argv = yargs(hideBin(process.argv)).argv if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') yargs(hideBin(process.argv)) .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## Supported Platforms ### TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ### Deno As of `v16`, `yargs` supports [Deno](https://github.com/denoland/deno): ```typescript import yargs from 'https://deno.land/x/yargs/deno.ts' import { Arguments } from 'https://deno.land/x/yargs/deno-types.ts' yargs(Deno.args) .command('download <files...>', 'download a list of files', (yargs: any) => { return yargs.positional('files', { describe: 'a list of files to do something with' }) }, (argv: Arguments) => { console.info(argv) }) .strictCommands() .demandCommand(1) .argv ``` ### ESM As of `v16`,`yargs` supports ESM imports: ```js import yargs from 'yargs' import { hideBin } from 'yargs/helpers' yargs(hideBin(process.argv)) .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => { console.info(argv) }) .demandCommand(1) .argv ``` ### Usage in Browser See examples of using yargs in the browser in [docs](/docs/browser.md). ## Community Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). ## Documentation ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Bundling yargs](/docs/bundling.md) * [Contributing](/contributing.md) ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). 
[npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc Railroad-diagram Generator ========================== This is a small js library for generating railroad diagrams (like what [JSON.org](http://json.org) uses) using SVG. Railroad diagrams are a way of visually representing a grammar in a form that is more readable than using regular expressions or BNF. I think (though I haven't given it a lot of thought yet) that if it's easy to write a context-free grammar for the language, the corresponding railroad diagram will be easy as well. There are several railroad-diagram generators out there, but none of them had the visual appeal I wanted. [Here's an example of how they look!](http://www.xanthir.com/etc/railroad-diagrams/example.html) And [here's an online generator for you to play with and get SVG code from!](http://www.xanthir.com/etc/railroad-diagrams/generator.html) The library now exists in a Python port as well! See the information further down. Details ------- To use the library, just include the js and css files, and then call the Diagram() function. Its arguments are the components of the diagram (Diagram is a special form of Sequence). An alternative to Diagram() is ComplexDiagram() which is used to describe a complex type diagram. Components are either leaves or containers. The leaves: * Terminal(text) or a bare string - represents literal text * NonTerminal(text) - represents an instruction or another production * Comment(text) - a comment * Skip() - an empty line The containers: * Sequence(children) - like simple concatenation in a regex * Choice(index, children) - like | in a regex. The index argument specifies which child is the "normal" choice and should go in the middle * Optional(child, skip) - like ? in a regex. A shorthand for `Choice(1, [Skip(), child])`. If the optional `skip` parameter has the value `"skip"`, it instead puts the Skip() in the straight-line path, for when the "normal" behavior is to omit the item. * OneOrMore(child, repeat) - like + in a regex. The 'repeat' argument is optional, and specifies something that must go between the repetitions. * ZeroOrMore(child, repeat, skip) - like * in a regex. A shorthand for `Optional(OneOrMore(child, repeat))`. The optional `skip` parameter is identical to Optional(). For convenience, each component can be called with or without `new`. If called without `new`, the container components become n-ary; that is, you can say either `new Sequence([A, B])` or just `Sequence(A,B)`. After constructing a Diagram, call `.format(...padding)` on it, specifying 0-4 padding values (just like CSS) for some additional "breathing space" around the diagram (the paddings default to 20px). The result can either be `.toString()`'d for the markup, or `.toSVG()`'d for an `<svg>` element, which can then be immediately inserted to the document. 
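For instance, here is a small sketch (the grammar fragment and the target element id are made up for illustration) that builds a diagram and inserts its markup into the page:

```js
// assumes railroad-diagrams.js and its CSS are already included on the page,
// so Diagram, Sequence, Terminal, NonTerminal, Optional, ... are in scope
var markup = Diagram(
  Terminal('let'),
  NonTerminal('identifier'),
  Optional(Sequence(Terminal('='), NonTerminal('expression')))
).format(20).toString();   // 20px of padding on every side

document.getElementById('target').innerHTML = markup;  // 'target' is a placeholder id
```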
As a convenience, Diagram also has an `.addTo(element)` method, which immediately converts it to SVG and appends it to the referenced element with default paddings. `element` defaults to `document.body`. Options ------- There are a few options you can tweak, at the bottom of the file. Just tweak either until the diagram looks like what you want. You can also change the CSS file - feel free to tweak to your heart's content. Note, though, that if you change the text sizes in the CSS, you'll have to go adjust the metrics for the leaf nodes as well. * VERTICAL_SEPARATION - sets the minimum amount of vertical separation between two items. Note that the stroke width isn't counted when computing the separation; this shouldn't be relevant unless you have a very small separation or very large stroke width. * ARC_RADIUS - the radius of the arcs used in the branching containers like Choice. This has a relatively large effect on the size of non-trivial diagrams. Both tight and loose values look good, depending on what you're going for. * DIAGRAM_CLASS - the class set on the root `<svg>` element of each diagram, for use in the CSS stylesheet. * STROKE_ODD_PIXEL_LENGTH - the default stylesheet uses odd pixel lengths for 'stroke'. Due to rasterization artifacts, they look best when the item has been translated half a pixel in both directions. If you change the styling to use a stroke with even pixel lengths, you'll want to set this variable to `false`. * INTERNAL_ALIGNMENT - when some branches of a container are narrower than others, this determines how they're aligned in the extra space. Defaults to "center", but can be set to "left" or "right". Caveats ------- At this early stage, the generator is feature-complete and works as intended, but still has several TODOs: * The font-sizes are hard-coded right now, and the font handling in general is very dumb - I'm just guessing at some metrics that are probably "good enough" rather than measuring things properly. Python Port ----------- In addition to the canonical JS version, the library now exists as a Python library as well. Using it is basically identical. The config variables are globals in the file, and so may be adjusted either manually or via tweaking from inside your program. The main difference from the JS port is how you extract the string from the Diagram. You'll find a `writeSvg(writerFunc)` method on `Diagram`, which takes a callback of one argument and passes it the string form of the diagram. For example, it can be used like `Diagram(...).writeSvg(sys.stdout.write)` to write to stdout. **Note**: the callback will be called multiple times as it builds up the string, not just once with the whole thing. If you need it all at once, consider something like a `StringIO` as an easy way to collect it into a single string. License ------- This document and all associated files in the github project are licensed under [CC0](http://creativecommons.org/publicdomain/zero/1.0/) ![](http://i.creativecommons.org/p/zero/1.0/80x15.png). This means you can reuse, remix, or otherwise appropriate this project for your own use **without restriction**. (The actual legal meaning can be found at the above link.) Don't ask me for permission to use any part of this project, **just use it**. I would appreciate attribution, but that is not required by the license. # color-convert [![Build Status](https://travis-ci.org/Qix-/color-convert.svg?branch=master)](https://travis-ci.org/Qix-/color-convert) Color-convert is a color conversion library for JavaScript and node. 
It converts all ways between `rgb`, `hsl`, `hsv`, `hwb`, `cmyk`, `ansi`, `ansi16`, `hex` strings, and CSS `keyword`s (will round to closest): ```js var convert = require('color-convert'); convert.rgb.hsl(140, 200, 100); // [96, 48, 59] convert.keyword.rgb('blue'); // [0, 0, 255] var rgbChannels = convert.rgb.channels; // 3 var cmykChannels = convert.cmyk.channels; // 4 var ansiChannels = convert.ansi16.channels; // 1 ``` # Install ```console $ npm install color-convert ``` # API Simply get the property of the _from_ and _to_ conversion that you're looking for. All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on `.raw` to the function. All 'from' functions have a hidden property called `.channels` that indicates the number of channels the function expects (not including alpha). ```js var convert = require('color-convert'); // Hex to LAB convert.hex.lab('DEADBF'); // [ 76, 21, -2 ] convert.hex.lab.raw('DEADBF'); // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ] // RGB to CMYK convert.rgb.cmyk(167, 255, 4); // [ 35, 0, 98, 0 ] convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ] ``` ### Arrays All functions that accept multiple arguments also support passing an array. Note that this does **not** apply to functions that convert from a color that only requires one value (e.g. `keyword`, `ansi256`, `hex`, etc.) ```js var convert = require('color-convert'); convert.rgb.hex(123, 45, 67); // '7B2D43' convert.rgb.hex([123, 45, 67]); // '7B2D43' ``` ## Routing Conversions that don't have an _explicitly_ defined conversion (in [conversions.js](conversions.js)), but can be converted by means of sub-conversions (e.g. XYZ -> **RGB** -> CMYK), are automatically routed together. This allows just about any color model supported by `color-convert` to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> **LAB** -> **XYZ** -> **RGB** -> Hex). Keep in mind that extensive conversions _may_ result in a loss of precision, and exist only to be complete. For a list of "direct" (single-step) conversions, see [conversions.js](conversions.js). # Contribute If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request. # License Copyright &copy; 2011-2016, Heather Arthur and Josh Junon. Licensed under the [MIT License](LICENSE). <p align="center"> <a href="https://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # glob-parent [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Azure Pipelines Build Status][azure-pipelines-image]][azure-pipelines-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] Extract the non-magic parent path from a glob string. ## Usage ```js var globParent = require('glob-parent'); globParent('path/to/*.js'); // 'path/to' globParent('/root/path/to/*.js'); // '/root/path/to' globParent('/*.js'); // '/' globParent('*.js'); // '.' globParent('**/*.js'); // '.' 
globParent('path/{to,from}'); // 'path' globParent('path/!(to|from)'); // 'path' globParent('path/?(to|from)'); // 'path' globParent('path/+(to|from)'); // 'path' globParent('path/*(to|from)'); // 'path' globParent('path/@(to|from)'); // 'path' globParent('path/**/*'); // 'path' // if provided a non-glob path, returns the nearest dir globParent('path/foo/bar.js'); // 'path/foo' globParent('path/foo/'); // 'path/foo' globParent('path/foo'); // 'path' (see issue #3 for details) ``` ## API ### `globParent(maybeGlobString, [options])` Takes a string and returns the part of the path before the glob begins. Be aware of Escaping rules and Limitations below. #### options ```js { // Disables the automatic conversion of slashes for Windows flipBackslashes: true } ``` ## Escaping The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters: - `?` (question mark) unless used as a path segment alone - `*` (asterisk) - `|` (pipe) - `(` (opening parenthesis) - `)` (closing parenthesis) - `{` (opening curly brace) - `}` (closing curly brace) - `[` (opening bracket) - `]` (closing bracket) **Example** ```js globParent('foo/[bar]/') // 'foo' globParent('foo/\\[bar]/') // 'foo/[bar]' ``` ## Limitations ### Braces & Brackets This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with [expand-braces] and/or [expand-brackets]. ### Windows Backslashes are not valid path separators for globs. If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators). This cannot be used in conjunction with escape characters. ```js // BAD globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)' // GOOD globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)' ``` If you are using escape characters for a pattern without path parts (i.e. relative to `cwd`), prefix with `./` to avoid confusing glob-parent. ```js // BAD globParent('foo \\[bar]') // 'foo ' globParent('foo \\[bar]*') // 'foo ' // GOOD globParent('./foo \\[bar]') // 'foo [bar]' globParent('./foo \\[bar]*') // '.' 
``` ## License ISC [expand-braces]: https://github.com/jonschlinkert/expand-braces [expand-brackets]: https://github.com/jonschlinkert/expand-brackets [downloads-image]: https://img.shields.io/npm/dm/glob-parent.svg [npm-url]: https://www.npmjs.com/package/glob-parent [npm-image]: https://img.shields.io/npm/v/glob-parent.svg [azure-pipelines-url]: https://dev.azure.com/gulpjs/gulp/_build/latest?definitionId=2&branchName=master [azure-pipelines-image]: https://dev.azure.com/gulpjs/gulp/_apis/build/status/glob-parent?branchName=master [travis-url]: https://travis-ci.org/gulpjs/glob-parent [travis-image]: https://img.shields.io/travis/gulpjs/glob-parent.svg?label=travis-ci [appveyor-url]: https://ci.appveyor.com/project/gulpjs/glob-parent [appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/glob-parent.svg?label=appveyor [coveralls-url]: https://coveralls.io/r/gulpjs/glob-parent [coveralls-image]: https://img.shields.io/coveralls/gulpjs/glob-parent/master.svg [gitter-url]: https://gitter.im/gulpjs/gulp [gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg # json-schema-traverse Traverse JSON Schema passing each schema object to callback [![Build Status](https://travis-ci.org/epoberezkin/json-schema-traverse.svg?branch=master)](https://travis-ci.org/epoberezkin/json-schema-traverse) [![npm version](https://badge.fury.io/js/json-schema-traverse.svg)](https://www.npmjs.com/package/json-schema-traverse) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master) ## Install ``` npm install json-schema-traverse ``` ## Usage ```javascript const traverse = require('json-schema-traverse'); const schema = { properties: { foo: {type: 'string'}, bar: {type: 'integer'} } }; traverse(schema, {cb}); // cb is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // Or: traverse(schema, {cb: {pre, post}}); // pre is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // // post is called 3 times with: // 1. {type: 'string'} // 2. {type: 'integer'} // 3. root schema ``` Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved, they are passed as is. Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed. Callback is passed these parameters: - _schema_: the current schema object - _JSON pointer_: from the root schema to the current schema object - _root schema_: the schema passed to `traverse` object - _parent JSON pointer_: from the root schema to the parent schema object (see below) - _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.) - _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema - _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'` ## Traverse objects in all unknown keywords ```javascript const traverse = require('json-schema-traverse'); const schema = { mySchema: { minimum: 1, maximum: 2 } }; traverse(schema, {allKeys: true, cb}); // cb is called 2 times with: // 1. root schema // 2. 
mySchema ``` Without option `allKeys: true` callback will be called only with root schema. ## License [MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE) # Acorn A tiny, fast JavaScript parser written in JavaScript. ## Community Acorn is open source software released under an [MIT license](https://github.com/acornjs/acorn/blob/master/acorn/LICENSE). You are welcome to [report bugs](https://github.com/acornjs/acorn/issues) or create pull requests on [github](https://github.com/acornjs/acorn). For questions and discussion, please use the [Tern discussion forum](https://discuss.ternjs.net). ## Installation The easiest way to install acorn is from [`npm`](https://www.npmjs.com/): ```sh npm install acorn ``` Alternately, you can download the source and build acorn yourself: ```sh git clone https://github.com/acornjs/acorn.git cd acorn npm install ``` ## Interface **parse**`(input, options)` is the main interface to the library. The `input` parameter is a string, `options` can be undefined or an object setting some of the options listed below. The return value will be an abstract syntax tree object as specified by the [ESTree spec](https://github.com/estree/estree). ```javascript let acorn = require("acorn"); console.log(acorn.parse("1 + 1")); ``` When encountering a syntax error, the parser will raise a `SyntaxError` object with a meaningful message. The error object will have a `pos` property that indicates the string offset at which the error occurred, and a `loc` object that contains a `{line, column}` object referring to that same position. Options can be provided by passing a second argument, which should be an object containing any of these fields: - **ecmaVersion**: Indicates the ECMAScript version to parse. Must be either 3, 5, 6 (2015), 7 (2016), 8 (2017), 9 (2018), 10 (2019) or 11 (2020, partial support). This influences support for strict mode, the set of reserved words, and support for new syntax features. Default is 10. **NOTE**: Only 'stage 4' (finalized) ECMAScript features are being implemented by Acorn. Other proposed new features can be implemented through plugins. - **sourceType**: Indicate the mode the code should be parsed in. Can be either `"script"` or `"module"`. This influences global strict mode and parsing of `import` and `export` declarations. **NOTE**: If set to `"module"`, then static `import` / `export` syntax will be valid, even if `ecmaVersion` is less than 6. - **onInsertedSemicolon**: If given a callback, that callback will be called whenever a missing semicolon is inserted by the parser. The callback will be given the character offset of the point where the semicolon is inserted as argument, and if `locations` is on, also a `{line, column}` object representing this position. - **onTrailingComma**: Like `onInsertedSemicolon`, but for trailing commas. - **allowReserved**: If `false`, using a reserved word will generate an error. Defaults to `true` for `ecmaVersion` 3, `false` for higher versions. When given the value `"never"`, reserved words and keywords can also not be used as property names (as in Internet Explorer's old parser). - **allowReturnOutsideFunction**: By default, a return statement at the top level raises an error. Set this to `true` to accept such code. - **allowImportExportEverywhere**: By default, `import` and `export` declarations can only appear at a program's top level. Setting this option to `true` allows them anywhere where a statement is allowed. 
- **allowAwaitOutsideFunction**: By default, `await` expressions can only appear inside `async` functions. Setting this option to `true` allows top-level `await` expressions. They are still not allowed in non-`async` functions, though.
- **allowHashBang**: When this is enabled (off by default), if the code starts with the characters `#!` (as in a shellscript), the first line will be treated as a comment.
- **locations**: When `true`, each node has a `loc` object attached with `start` and `end` subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form. Default is `false`.
- **onToken**: If a function is passed for this option, each found token will be passed in the same format as tokens returned from `tokenizer().getToken()`. If an array is passed, each found token is pushed to it. Note that you are not allowed to call the parser from the callback—that will corrupt its internal state.
- **onComment**: If a function is passed for this option, whenever a comment is encountered the function will be called with the following parameters:
  - `block`: `true` if the comment is a block comment, false if it is a line comment.
  - `text`: The content of the comment.
  - `start`: Character offset of the start of the comment.
  - `end`: Character offset of the end of the comment.

  When the `locations` option is on, the `{line, column}` locations of the comment’s start and end are passed as two additional parameters.

  If an array is passed for this option, each found comment is pushed to it as an object in Esprima format:

  ```javascript
  {
    "type": "Line" | "Block",
    "value": "comment text",
    "start": Number,
    "end": Number,
    // If `locations` option is on:
    "loc": {
      "start": {line: Number, column: Number},
      "end": {line: Number, column: Number}
    },
    // If `ranges` option is on:
    "range": [Number, Number]
  }
  ```

  Note that you are not allowed to call the parser from the callback—that will corrupt its internal state.
- **ranges**: Nodes have their start and end character offsets recorded in `start` and `end` properties (directly on the node, rather than in the `loc` object, which holds line/column data). To also add a [semi-standardized](https://bugzilla.mozilla.org/show_bug.cgi?id=745678) `range` property holding a `[start, end]` array with the same numbers, set the `ranges` option to `true`.
- **program**: It is possible to parse multiple files into a single AST by passing the tree produced by parsing the first file as the `program` option in subsequent parses. This will add the toplevel forms of the parsed file to the "Program" (top) node of an existing parse tree.
- **sourceFile**: When the `locations` option is `true`, you can pass this option to add a `source` attribute in every node’s `loc` object. Note that the contents of this option are not examined or processed in any way; you are free to use whatever format you choose.
- **directSourceFile**: Like `sourceFile`, but a `sourceFile` property will be added (regardless of the `locations` option) directly to the nodes, rather than the `loc` object.
- **preserveParens**: If this option is `true`, parenthesized expressions are represented by (non-standard) `ParenthesizedExpression` nodes that have a single `expression` property containing the expression inside parentheses.

**parseExpressionAt**`(input, offset, options)` will parse a single expression in a string, and return its AST. It will not complain if there is more of the string left after the expression.
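As a quick illustration (the input string and offset here are arbitrary), the call below parses just the object literal beginning at offset 4 and ignores the rest of the string:

```javascript
let acorn = require("acorn");

// offset 4 points at the '{' in the string below
let node = acorn.parseExpressionAt("x = {a: 1}; done()", 4, {ecmaVersion: 11});

console.log(node.type); // "ObjectExpression"
console.log(node.end);  // 10, the offset just past the parsed expression
```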
**tokenizer**`(input, options)` returns an object with a `getToken` method that can be called repeatedly to get the next token, a `{start, end, type, value}` object (with added `loc` property when the `locations` option is enabled and `range` property when the `ranges` option is enabled). When the token's type is `tokTypes.eof`, you should stop calling the method, since it will keep returning that same token forever.

In an ES6 environment, the returned result can be used as any other protocol-compliant iterable:

```javascript
for (let token of acorn.tokenizer(str)) {
  // iterate over the tokens
}

// transform code to array of tokens:
var tokens = [...acorn.tokenizer(str)];
```

**tokTypes** holds an object mapping names to the token type objects that end up in the `type` properties of tokens.

**getLineInfo**`(input, offset)` can be used to get a `{line, column}` object for a given program string and offset.

### The `Parser` class

Instances of the **`Parser`** class contain all the state and logic that drives a parse. It has static methods `parse`, `parseExpressionAt`, and `tokenizer` that match the top-level functions by the same name. When extending the parser with plugins, you need to call these methods on the extended version of the class. To extend a parser with plugins, you can use its static `extend` method.

```javascript
var acorn = require("acorn");
var jsx = require("acorn-jsx");
var JSXParser = acorn.Parser.extend(jsx());
JSXParser.parse("foo(<bar/>)");
```

The `extend` method takes any number of plugin values, and returns a new `Parser` class that includes the extra parser logic provided by the plugins.

## Command line interface

The `bin/acorn` utility can be used to parse a file from the command line. It accepts as arguments its input file and the following options:

- `--ecma3|--ecma5|--ecma6|--ecma7|--ecma8|--ecma9|--ecma10`: Sets the ECMAScript version to parse. Default is version 9.
- `--module`: Sets the parsing mode to `"module"`. Is set to `"script"` otherwise.
- `--locations`: Attaches a "loc" object to each node with "start" and "end" subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form.
- `--allow-hash-bang`: If the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment.
- `--compact`: No whitespace is used in the AST output.
- `--silent`: Do not output the AST, just return the exit status.
- `--help`: Print the usage information and quit.

The utility spits out the syntax tree as JSON data.

## Existing plugins

- [`acorn-jsx`](https://github.com/RReverser/acorn-jsx): Parse [Facebook JSX syntax extensions](https://github.com/facebook/jsx)

Plugins for ECMAScript proposals:

- [`acorn-stage3`](https://github.com/acornjs/acorn-stage3): Parse most stage 3 proposals, bundling:
  - [`acorn-class-fields`](https://github.com/acornjs/acorn-class-fields): Parse [class fields proposal](https://github.com/tc39/proposal-class-fields)
  - [`acorn-import-meta`](https://github.com/acornjs/acorn-import-meta): Parse [import.meta proposal](https://github.com/tc39/proposal-import-meta)
  - [`acorn-private-methods`](https://github.com/acornjs/acorn-private-methods): parse [private methods, getters and setters proposal](https://github.com/tc39/proposal-private-methods)

# yallist

Yet Another Linked List

There are many doubly-linked list implementations like it, but this one is mine.

For when an array would be too big, and a Map can't be iterated in reverse order.
[![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist) ## basic usage ```javascript var yallist = require('yallist') var myList = yallist.create([1, 2, 3]) myList.push('foo') myList.unshift('bar') // of course pop() and shift() are there, too console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo'] myList.forEach(function (k) { // walk the list head to tail }) myList.forEachReverse(function (k, index, list) { // walk the list tail to head }) var myDoubledList = myList.map(function (k) { return k + k }) // now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo'] // mapReverse is also a thing var myDoubledListReverse = myList.mapReverse(function (k) { return k + k }) // ['foofoo', 6, 4, 2, 'barbar'] var reduced = myList.reduce(function (set, entry) { set += entry return set }, 'start') console.log(reduced) // 'startfoo123bar' ``` ## api The whole API is considered "public". Functions with the same name as an Array method work more or less the same way. There's reverse versions of most things because that's the point. ### Yallist Default export, the class that holds and manages a list. Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list. The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots. ### Yallist.create(..) Alias for Yallist function. Some people like factories. #### yallist.head The first node in the list #### yallist.tail The last node in the list #### yallist.length The number of nodes in the list. (Change this at your peril. It is not magic like Array length.) #### yallist.toArray() Convert the list to an array. #### yallist.forEach(fn, [thisp]) Call a function on each item in the list. #### yallist.forEachReverse(fn, [thisp]) Call a function on each item in the list, in reverse order. #### yallist.get(n) Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array. #### yallist.getReverse(n) Get the data at position `n`, counting from the tail. #### yallist.map(fn, thisp) Create a new Yallist with the result of calling the function on each item. #### yallist.mapReverse(fn, thisp) Same as `map`, but in reverse. #### yallist.pop() Get the data from the list tail, and remove the tail from the list. #### yallist.push(item, ...) Insert one or more items to the tail of the list. #### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. #### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. 
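A small sketch of that behavior (the list contents are arbitrary): moving the tail node of one list onto the head of another removes it from the first list automatically.

```javascript
var yallist = require('yallist')

var a = yallist.create([1, 2, 3])
var b = yallist.create(['x'])

// a.tail is the node holding 3; unshiftNode pulls it out of `a`
// and makes it the new head of `b`
b.unshiftNode(a.tail)

console.log(a.toArray()) // [1, 2]
console.log(b.toArray()) // [3, 'x']
```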
#### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) <img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/images/ajv_logo.png"> # Ajv: Another JSON Schema Validator The fastest JSON Schema validator for Node.js and browser. Supports draft-04/06/07. [![Build Status](https://travis-ci.org/ajv-validator/ajv.svg?branch=master)](https://travis-ci.org/ajv-validator/ajv) [![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv) [![npm (beta)](https://img.shields.io/npm/v/ajv/beta)](https://www.npmjs.com/package/ajv/v/7.0.0-beta.0) [![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv) [![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master) [![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv) [![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin) ## Ajv v7 beta is released [Ajv version 7.0.0-beta.0](https://github.com/ajv-validator/ajv/tree/v7-beta) is released with these changes: - to reduce the mistakes in JSON schemas and unexpected validation results, [strict mode](./docs/strict-mode.md) is added - it prohibits ignored or ambiguous JSON Schema elements. - to make code injection from untrusted schemas impossible, [code generation](./docs/codegen.md) is fully re-written to be safe. - to simplify Ajv extensions, the new keyword API that is used by pre-defined keywords is available to user-defined keywords - it is much easier to define any keywords now, especially with subschemas. - schemas are compiled to ES6 code (ES5 code generation is supported with an option). - to improve reliability and maintainability the code is migrated to TypeScript. **Please note**: - the support for JSON-Schema draft-04 is removed - if you have schemas using "id" attributes you have to replace them with "\$id" (or continue using version 6 that will be supported until 02/28/2021). - all formats are separated to ajv-formats package - they have to be explicitely added if you use them. See [release notes](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0) for the details. To install the new version: ```bash npm install ajv@beta ``` See [Getting started with v7](https://github.com/ajv-validator/ajv/tree/v7-beta#usage) for code example. 
## Mozilla MOSS grant and OpenJS Foundation [<img src="https://www.poberezkin.com/images/mozilla.png" width="240" height="68">](https://www.mozilla.org/en-US/moss/) &nbsp;&nbsp;&nbsp; [<img src="https://www.poberezkin.com/images/openjs.png" width="220" height="68">](https://openjsf.org/blog/2020/08/14/ajv-joins-openjs-foundation-as-an-incubation-project/) Ajv has been awarded a grant from Mozilla’s [Open Source Support (MOSS) program](https://www.mozilla.org/en-US/moss/) in the “Foundational Technology” track! It will sponsor the development of Ajv support of [JSON Schema version 2019-09](https://tools.ietf.org/html/draft-handrews-json-schema-02) and of [JSON Type Definition](https://tools.ietf.org/html/draft-ucarion-json-type-definition-04). Ajv also joined [OpenJS Foundation](https://openjsf.org/) – having this support will help ensure the longevity and stability of Ajv for all its users. This [blog post](https://www.poberezkin.com/posts/2020-08-14-ajv-json-validator-mozilla-open-source-grant-openjs-foundation.html) has more details. I am looking for the long term maintainers of Ajv – working with [ReadySet](https://www.thereadyset.co/), also sponsored by Mozilla, to establish clear guidelines for the role of a "maintainer" and the contribution standards, and to encourage a wider, more inclusive, contribution from the community. ## Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin) Since I asked to support Ajv development 40 people and 6 organizations contributed via GitHub and OpenCollective - this support helped receiving the MOSS grant! Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released. Please sponsor Ajv via: - [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it) - [Ajv Open Collective️](https://opencollective.com/ajv) Thank you. #### Open Collective sponsors <a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a> <a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a> ## Using version 6 [JSON Schema draft-07](http://json-schema.org/latest/json-schema-validation.html) is published. 
[Ajv version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0) that supports draft-07 is released. It may require either migrating your schemas or updating your code (to continue using draft-04 and v5 schemas, draft-06 schemas will be supported without changes). __Please note__: To use Ajv with draft-06 schemas you need to explicitly add the meta-schema to the validator instance: ```javascript ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-06.json')); ``` To use Ajv with draft-04 schemas in addition to explicitly adding meta-schema you also need to use option schemaId: ```javascript var ajv = new Ajv({schemaId: 'id'}); // If you want to use both draft-04 and draft-06/07 schemas: // var ajv = new Ajv({schemaId: 'auto'}); ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-04.json')); ``` ## Contents - [Performance](#performance) - [Features](#features) - [Getting started](#getting-started) - [Frequently Asked Questions](https://github.com/ajv-validator/ajv/blob/master/FAQ.md) - [Using in browser](#using-in-browser) - [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp) - [Command line interface](#command-line-interface) - Validation - [Keywords](#validation-keywords) - [Annotation keywords](#annotation-keywords) - [Formats](#formats) - [Combining schemas with $ref](#ref) - [$data reference](#data-reference) - NEW: [$merge and $patch keywords](#merge-and-patch-keywords) - [Defining custom keywords](#defining-custom-keywords) - [Asynchronous schema compilation](#asynchronous-schema-compilation) - [Asynchronous validation](#asynchronous-validation) - [Security considerations](#security-considerations) - [Security contact](#security-contact) - [Untrusted schemas](#untrusted-schemas) - [Circular references in objects](#circular-references-in-javascript-objects) - [Trusted schemas](#security-risks-of-trusted-schemas) - [ReDoS attack](#redos-attack) - Modifying data during validation - [Filtering data](#filtering-data) - [Assigning defaults](#assigning-defaults) - [Coercing data types](#coercing-data-types) - API - [Methods](#api) - [Options](#options) - [Validation errors](#validation-errors) - [Plugins](#plugins) - [Related packages](#related-packages) - [Some packages using Ajv](#some-packages-using-ajv) - [Tests, Contributing, Changes history](#tests) - [Support, Code of conduct, License](#open-source-software-support) ## Performance Ajv generates code using [doT templates](https://github.com/olado/doT) to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization. 
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=32,4,1&chs=600x416&chxl=-1:|djv|ajv|json-schema-validator-generator|jsen|is-my-json-valid|themis|z-schema|jsck|skeemas|json-schema-library|tv4&chd=t:100,98,72.1,66.8,50.1,15.1,6.1,3.8,1.2,0.7,0.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements full JSON Schema [draft-06/07](http://json-schema.org/) and draft-04 standards: - all validation keywords (see [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md)) - full support of remote refs (remote schemas have to be added with `addSchema` or compiled to be available) - support of circular references between schemas - correct string lengths for strings with unicode pairs (can be turned off) - [formats](#formats) defined by JSON Schema draft-07 standard and custom formats (can be turned off) - [validates schemas against meta-schema](#api-validateschema) - supports [browsers](#using-in-browser) and Node.js 0.10-14.x - [asynchronous loading](#asynchronous-schema-compilation) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](#options) - [error messages with parameters](#validation-errors) describing error reasons to allow creating custom error messages - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [filtering data](#filtering-data) from additional properties - [assigning defaults](#assigning-defaults) to missing properties and items - [coercing data](#coercing-data-types) to the types specified in `type` keywords - [custom keywords](#defining-custom-keywords) - draft-06/07 keywords `const`, `contains`, `propertyNames` and `if/then/else` - draft-06 boolean schemas (`true`/`false` as a schema to always pass/fail). - keywords `switch`, `patternRequired`, `formatMaximum` / `formatMinimum` and `formatExclusiveMaximum` / `formatExclusiveMinimum` from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [$data reference](#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](#asynchronous-validation) of custom formats and keywords ## Install ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://tonicdev.com/npm/ajv The fastest validation call: ```javascript // Node.js require: var Ajv = require('ajv'); // or ESM/TypeScript import import Ajv from 'ajv'; var ajv = new Ajv(); // options can be passed, e.g. {allErrors: true} var validate = ajv.compile(schema); var valid = validate(data); if (!valid) console.log(validate.errors); ``` or with less code ```javascript // ... var valid = ajv.validate(schema, data); if (!valid) console.log(ajv.errors); // ... 
```

or

```javascript
// ...
var valid = ajv.addSchema(schema, 'mySchema')
              .validate('mySchema', data);
if (!valid) console.log(ajv.errorsText());
// ...
```

See [API](#api) and [Options](#options) for more details.

Ajv compiles schemas to functions and caches them in all cases (using the schema serialized with [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) or a custom function as a key), so that the next time the same schema is used (not necessarily the same object instance) it won't be compiled again.

The best performance is achieved when using compiled functions returned by the `compile` or `getSchema` methods (there is no additional function call).

__Please note__: every time a validation function or `ajv.validate` is called, the `errors` property is overwritten. You need to copy the `errors` array reference to another variable if you want to use it later (e.g., in a callback). See [Validation errors](#validation-errors).

__Note for TypeScript users__: `ajv` provides its own TypeScript declarations out of the box, so you don't need to install the deprecated `@types/ajv` module.

## Using in browser

You can require Ajv directly from the code you browserify - in this case Ajv will be a part of your bundle.

If you need to use Ajv in several bundles you can create a separate UMD bundle using the `npm run bundle` script (thanks to [siddo420](https://github.com/siddo420)).

Then you need to load Ajv in the browser:

```html
<script src="ajv.min.js"></script>
```

This bundle can be used with different module systems; it creates global `Ajv` if no module system is found.

The browser bundle is available on [cdnjs](https://cdnjs.com/libraries/ajv).

Ajv is tested with these browsers:

[![Sauce Test Status](https://saucelabs.com/browser-matrix/epoberezkin.svg)](https://saucelabs.com/u/epoberezkin)

__Please note__: some frameworks, e.g. Dojo, may redefine the global `require` in a way that is not compatible with the CommonJS module format. In that case the Ajv bundle has to be loaded before the framework, after which you can use the global `Ajv` (see issue [#234](https://github.com/ajv-validator/ajv/issues/234)).

### Ajv and Content Security Policies (CSP)

If you're using Ajv to compile a schema (the typical use) in a browser document that is loaded with a Content Security Policy (CSP), that policy will require a `script-src` directive that includes the value `'unsafe-eval'`.
:warning: NOTE, however, that `unsafe-eval` is NOT recommended in a secure CSP[[1]](https://developer.chrome.com/extensions/contentSecurityPolicy#relaxing-eval), as it has the potential to open the document to cross-site scripting (XSS) attacks.

In order to make use of Ajv without relaxing your CSP, you can [pre-compile a schema using the CLI](https://github.com/ajv-validator/ajv-cli#compile-schemas). This will transpile the schema JSON into a JavaScript file that exports a `validate` function that works similarly to a schema compiled at runtime.

Note that pre-compilation of schemas is performed using [ajv-pack](https://github.com/ajv-validator/ajv-pack) and there are [some limitations to the schema features it can compile](https://github.com/ajv-validator/ajv-pack#limitations). A successfully pre-compiled schema is equivalent to the same schema compiled at runtime.

## Command line interface

CLI is available as a separate npm package [ajv-cli](https://github.com/ajv-validator/ajv-cli).
It supports: - compiling JSON Schemas to test their validity - BETA: generating standalone module exporting a validation function to be used without Ajv (using [ajv-pack](https://github.com/ajv-validator/ajv-pack)) - migrate schemas to draft-07 (using [json-schema-migrate](https://github.com/epoberezkin/json-schema-migrate)) - validating data file(s) against JSON Schema - testing expected validity of data against JSON Schema - referenced schemas - custom meta-schemas - files in JSON, JSON5, YAML, and JavaScript format - all Ajv options - reporting changes in data after validation in [JSON-patch](https://tools.ietf.org/html/rfc6902) format ## Validation keywords Ajv supports all validation keywords from draft-07 of JSON Schema standard: - [type](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#type) - [for numbers](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-numbers) - maximum, minimum, exclusiveMaximum, exclusiveMinimum, multipleOf - [for strings](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-strings) - maxLength, minLength, pattern, format - [for arrays](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-arrays) - maxItems, minItems, uniqueItems, items, additionalItems, [contains](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#contains) - [for objects](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-objects) - maxProperties, minProperties, required, properties, patternProperties, additionalProperties, dependencies, [propertyNames](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#propertynames) - [for all types](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-all-types) - enum, [const](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#const) - [compound keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#compound-keywords) - not, oneOf, anyOf, allOf, [if/then/else](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#ifthenelse) With [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package Ajv also supports validation keywords from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) for JSON Schema standard: - [patternRequired](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#patternrequired-proposed) - like `required` but with patterns that some property should match. - [formatMaximum, formatMinimum, formatExclusiveMaximum, formatExclusiveMinimum](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#formatmaximum--formatminimum-and-exclusiveformatmaximum--exclusiveformatminimum-proposed) - setting limits for date, time, etc. See [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md) for more details. ## Annotation keywords JSON Schema specification defines several annotation keywords that describe schema itself but do not perform any validation. - `title` and `description`: information about the data represented by that schema - `$comment` (NEW in draft-07): information for developers. With option `$comment` Ajv logs or passes the comment string to the user-supplied function. See [Options](#options). - `default`: a default value of the data instance, see [Assigning defaults](#assigning-defaults). - `examples` (NEW in draft-06): an array of data instances. Ajv does not check the validity of these instances against the schema. 
- `readOnly` and `writeOnly` (NEW in draft-07): marks data-instance as read-only or write-only in relation to the source of the data (database, api, etc.). - `contentEncoding`: [RFC 2045](https://tools.ietf.org/html/rfc2045#section-6.1 ), e.g., "base64". - `contentMediaType`: [RFC 2046](https://tools.ietf.org/html/rfc2046), e.g., "image/png". __Please note__: Ajv does not implement validation of the keywords `examples`, `contentEncoding` and `contentMediaType` but it reserves them. If you want to create a plugin that implements some of them, it should remove these keywords from the instance. ## Formats Ajv implements formats defined by JSON Schema specification and several other formats. It is recommended NOT to use "format" keyword implementations with untrusted data, as they use potentially unsafe regular expressions - see [ReDoS attack](#redos-attack). __Please note__: if you need to use "format" keyword to validate untrusted data, you MUST assess their suitability and safety for your validation scenarios. The following formats are implemented for string validation with "format" keyword: - _date_: full-date according to [RFC3339](http://tools.ietf.org/html/rfc3339#section-5.6). - _time_: time with optional time-zone. - _date-time_: date-time from the same source (time-zone is mandatory). `date`, `time` and `date-time` validate ranges in `full` mode and only regexp in `fast` mode (see [options](#options)). - _uri_: full URI. - _uri-reference_: URI reference, including full and relative URIs. - _uri-template_: URI template according to [RFC6570](https://tools.ietf.org/html/rfc6570) - _url_ (deprecated): [URL record](https://url.spec.whatwg.org/#concept-url). - _email_: email address. - _hostname_: host name according to [RFC1034](http://tools.ietf.org/html/rfc1034#section-3.5). - _ipv4_: IP address v4. - _ipv6_: IP address v6. - _regex_: tests whether a string is a valid regular expression by passing it to RegExp constructor. - _uuid_: Universally Unique IDentifier according to [RFC4122](http://tools.ietf.org/html/rfc4122). - _json-pointer_: JSON-pointer according to [RFC6901](https://tools.ietf.org/html/rfc6901). - _relative-json-pointer_: relative JSON-pointer according to [this draft](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00). __Please note__: JSON Schema draft-07 also defines formats `iri`, `iri-reference`, `idn-hostname` and `idn-email` for URLs, hostnames and emails with international characters. Ajv does not implement these formats. If you create Ajv plugin that implements them please make a PR to mention this plugin here. There are two modes of format validation: `fast` and `full`. This mode affects formats `date`, `time`, `date-time`, `uri`, `uri-reference`, and `email`. See [Options](#options) for details. You can add additional formats and replace any of the formats above using [addFormat](#api-addformat) method. The option `unknownFormats` allows changing the default behaviour when an unknown format is encountered. In this case Ajv can either fail schema compilation (default) or ignore it (default in versions before 5.0.0). You also can allow specific format(s) that will be ignored. See [Options](#options) for details. You can find regular expressions used for format validation and the sources that were used in [formats.js](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js). 
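As a quick illustration of the above, built-in formats and formats added via `addFormat` can be mixed in one schema. The `semver` name and pattern here are just an example, not a built-in format:

```javascript
var Ajv = require('ajv');
var ajv = new Ajv(); // format: 'fast' is the default; use {format: 'full'} for stricter checks

// "semver" is not a built-in format - the name and RegExp are illustrative only
ajv.addFormat('semver', /^\d+\.\d+\.\d+$/);

var validate = ajv.compile({
  type: 'object',
  properties: {
    released: {type: 'string', format: 'date'},  // built-in format
    version:  {type: 'string', format: 'semver'} // custom format added above
  }
});

console.log(validate({released: '2020-01-01', version: '1.2.3'})); // true
console.log(validate({released: 'not a date', version: '1.2'}));   // false
```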
## <a name="ref"></a>Combining schemas with $ref You can structure your validation logic across multiple schema files and have schemas reference each other using `$ref` keyword. Example: ```javascript var schema = { "$id": "http://example.com/schemas/schema.json", "type": "object", "properties": { "foo": { "$ref": "defs.json#/definitions/int" }, "bar": { "$ref": "defs.json#/definitions/str" } } }; var defsSchema = { "$id": "http://example.com/schemas/defs.json", "definitions": { "int": { "type": "integer" }, "str": { "type": "string" } } }; ``` Now to compile your schema you can either pass all schemas to Ajv instance: ```javascript var ajv = new Ajv({schemas: [schema, defsSchema]}); var validate = ajv.getSchema('http://example.com/schemas/schema.json'); ``` or use `addSchema` method: ```javascript var ajv = new Ajv; var validate = ajv.addSchema(defsSchema) .compile(schema); ``` See [Options](#options) and [addSchema](#api) method. __Please note__: - `$ref` is resolved as the uri-reference using schema $id as the base URI (see the example). - References can be recursive (and mutually recursive) to implement the schemas for different data structures (such as linked lists, trees, graphs, etc.). - You don't have to host your schema files at the URIs that you use as schema $id. These URIs are only used to identify the schemas, and according to JSON Schema specification validators should not expect to be able to download the schemas from these URIs. - The actual location of the schema file in the file system is not used. - You can pass the identifier of the schema as the second parameter of `addSchema` method or as a property name in `schemas` option. This identifier can be used instead of (or in addition to) schema $id. - You cannot have the same $id (or the schema identifier) used for more than one schema - the exception will be thrown. - You can implement dynamic resolution of the referenced schemas using `compileAsync` method. In this way you can store schemas in any system (files, web, database, etc.) and reference them without explicitly adding to Ajv instance. See [Asynchronous schema compilation](#asynchronous-schema-compilation). ## $data reference With `$data` option you can use values from the validated data as the values for the schema keywords. See [proposal](https://github.com/json-schema-org/json-schema-spec/issues/51) for more information about how it works. `$data` reference is supported in the keywords: const, enum, format, maximum/minimum, exclusiveMaximum / exclusiveMinimum, maxLength / minLength, maxItems / minItems, maxProperties / minProperties, formatMaximum / formatMinimum, formatExclusiveMaximum / formatExclusiveMinimum, multipleOf, pattern, required, uniqueItems. The value of "$data" should be a [JSON-pointer](https://tools.ietf.org/html/rfc6901) to the data (the root is always the top level data object, even if the $data reference is inside a referenced subschema) or a [relative JSON-pointer](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00) (it is relative to the current point in data; if the $data reference is inside a referenced subschema it cannot point to the data outside of the root level for this subschema). Examples. 
This schema requires that the value in property `smaller` is less than or equal to the value in the property `larger`:

```javascript
var ajv = new Ajv({$data: true});

var schema = {
  "properties": {
    "smaller": {
      "type": "number",
      "maximum": { "$data": "1/larger" }
    },
    "larger": { "type": "number" }
  }
};

var validData = {
  smaller: 5,
  larger: 7
};

ajv.validate(schema, validData); // true
```

This schema requires that the properties have the same format as their field names:

```javascript
var schema = {
  "additionalProperties": {
    "type": "string",
    "format": { "$data": "0#" }
  }
};

var validData = {
  'date-time': '1963-06-19T08:30:06.283185Z',
  email: 'joe.bloggs@example.com'
}
```

`$data` reference is resolved safely - it won't throw even if some property is undefined. If `$data` resolves to `undefined` the validation succeeds (with the exception of the `const` keyword). If `$data` resolves to an incorrect type (e.g. not "number" for the `maximum` keyword) the validation fails.

## $merge and $patch keywords

With the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) you can use the keywords `$merge` and `$patch` that allow extending JSON Schemas with patches using formats [JSON Merge Patch (RFC 7396)](https://tools.ietf.org/html/rfc7396) and [JSON Patch (RFC 6902)](https://tools.ietf.org/html/rfc6902).

To add keywords `$merge` and `$patch` to the Ajv instance use this code:

```javascript
require('ajv-merge-patch')(ajv);
```

Examples.

Using `$merge`:

```json
{
  "$merge": {
    "source": {
      "type": "object",
      "properties": { "p": { "type": "string" } },
      "additionalProperties": false
    },
    "with": {
      "properties": { "q": { "type": "number" } }
    }
  }
}
```

Using `$patch`:

```json
{
  "$patch": {
    "source": {
      "type": "object",
      "properties": { "p": { "type": "string" } },
      "additionalProperties": false
    },
    "with": [
      { "op": "add", "path": "/properties/q", "value": { "type": "number" } }
    ]
  }
}
```

The schemas above are equivalent to this schema:

```json
{
  "type": "object",
  "properties": {
    "p": { "type": "string" },
    "q": { "type": "number" }
  },
  "additionalProperties": false
}
```

The properties `source` and `with` in the keywords `$merge` and `$patch` can use absolute or relative `$ref` to point to other schemas previously added to the Ajv instance or to fragments of the current schema.

See the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) for more information.

## Defining custom keywords

The advantages of using custom keywords are:

- allow creating validation scenarios that cannot be expressed using JSON Schema
- simplify your schemas
- help bring a bigger part of the validation logic to your schemas
- make your schemas more expressive, less verbose and closer to your application domain
- implement custom data processors that modify your data (the `modifying` option MUST be used in the keyword definition) and/or create side effects while the data is being validated

If a keyword is used only for side-effects and its validation result is pre-defined, use option `valid: true/false` in the keyword definition to simplify both the generated code (no error handling in case of `valid: true`) and your keyword functions (no need to return any validation result).

The concerns you have to be aware of when extending the JSON Schema standard with custom keywords are the portability and understanding of your schemas. You will have to support these custom keywords on other platforms and properly document them so that everybody can understand them in your schemas.
You can define custom keywords with [addKeyword](#api-addkeyword) method. Keywords are defined on the `ajv` instance level - new instances will not have previously defined keywords. Ajv allows defining keywords with: - validation function - compilation function - macro function - inline compilation function that should return code (as string) that will be inlined in the currently compiled schema. Example. `range` and `exclusiveRange` keywords using compiled schema: ```javascript ajv.addKeyword('range', { type: 'number', compile: function (sch, parentSchema) { var min = sch[0]; var max = sch[1]; return parentSchema.exclusiveRange === true ? function (data) { return data > min && data < max; } : function (data) { return data >= min && data <= max; } } }); var schema = { "range": [2, 4], "exclusiveRange": true }; var validate = ajv.compile(schema); console.log(validate(2.01)); // true console.log(validate(3.99)); // true console.log(validate(2)); // false console.log(validate(4)); // false ``` Several custom keywords (typeof, instanceof, range and propertyNames) are defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - they can be used for your schemas and as a starting point for your own custom keywords. See [Defining custom keywords](https://github.com/ajv-validator/ajv/blob/master/CUSTOM.md) for more details. ## Asynchronous schema compilation During asynchronous compilation remote references are loaded using supplied function. See `compileAsync` [method](#api-compileAsync) and `loadSchema` [option](#options). Example: ```javascript var ajv = new Ajv({ loadSchema: loadSchema }); ajv.compileAsync(schema).then(function (validate) { var valid = validate(data); // ... }); function loadSchema(uri) { return request.json(uri).then(function (res) { if (res.statusCode >= 400) throw new Error('Loading error: ' + res.statusCode); return res.body; }); } ``` __Please note__: [Option](#options) `missingRefs` should NOT be set to `"ignore"` or `"fail"` for asynchronous compilation to work. ## Asynchronous validation Example in Node.js REPL: https://tonicdev.com/esp/ajv-asynchronous-validation You can define custom formats and keywords that perform validation asynchronously by accessing database or some other service. You should add `async: true` in the keyword or format definition (see [addFormat](#api-addformat), [addKeyword](#api-addkeyword) and [Defining custom keywords](#defining-custom-keywords)). If your schema uses asynchronous formats/keywords or refers to some schema that contains them it should have `"$async": true` keyword so that Ajv can compile it correctly. If asynchronous format/keyword or reference to asynchronous schema is used in the schema without `$async` keyword Ajv will throw an exception during schema compilation. __Please note__: all asynchronous subschemas that are referenced from the current or other schemas should have `"$async": true` keyword as well, otherwise the schema compilation will fail. Validation function for an asynchronous custom format/keyword should return a promise that resolves with `true` or `false` (or rejects with `new Ajv.ValidationError(errors)` if you want to return custom errors from the keyword function). Ajv compiles asynchronous schemas to [es7 async functions](http://tc39.github.io/ecmascript-asyncawait/) that can optionally be transpiled with [nodent](https://github.com/MatAtBread/nodent). Async functions are supported in Node.js 7+ and all modern browsers. 
You can also supply any other transpiler as a function via `processCode` option. See [Options](#options). The compiled validation function has `$async: true` property (if the schema is asynchronous), so you can differentiate these functions if you are using both synchronous and asynchronous schemas. Validation result will be a promise that resolves with validated data or rejects with an exception `Ajv.ValidationError` that contains the array of validation errors in `errors` property. Example: ```javascript var ajv = new Ajv; // require('ajv-async')(ajv); ajv.addKeyword('idExists', { async: true, type: 'number', validate: checkIdExists }); function checkIdExists(schema, data) { return knex(schema.table) .select('id') .where('id', data) .then(function (rows) { return !!rows.length; // true if record is found }); } var schema = { "$async": true, "properties": { "userId": { "type": "integer", "idExists": { "table": "users" } }, "postId": { "type": "integer", "idExists": { "table": "posts" } } } }; var validate = ajv.compile(schema); validate({ userId: 1, postId: 19 }) .then(function (data) { console.log('Data is valid', data); // { userId: 1, postId: 19 } }) .catch(function (err) { if (!(err instanceof Ajv.ValidationError)) throw err; // data is invalid console.log('Validation errors:', err.errors); }); ``` ### Using transpilers with asynchronous validation functions. [ajv-async](https://github.com/ajv-validator/ajv-async) uses [nodent](https://github.com/MatAtBread/nodent) to transpile async functions. To use another transpiler you should separately install it (or load its bundle in the browser). #### Using nodent ```javascript var ajv = new Ajv; require('ajv-async')(ajv); // in the browser if you want to load ajv-async bundle separately you can: // window.ajvAsync(ajv); var validate = ajv.compile(schema); // transpiled es7 async function validate(data).then(successFunc).catch(errorFunc); ``` #### Using other transpilers ```javascript var ajv = new Ajv({ processCode: transpileFunc }); var validate = ajv.compile(schema); // transpiled es7 async function validate(data).then(successFunc).catch(errorFunc); ``` See [Options](#options). ## Security considerations JSON Schema, if properly used, can replace data sanitisation. It doesn't replace other API security considerations. It also introduces additional security aspects to consider. ##### Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ##### Untrusted schemas Ajv treats JSON schemas as trusted as your application code. This security model is based on the most common use case, when the schemas are static and bundled together with the application. If your schemas are received from untrusted sources (or generated from untrusted data) there are several scenarios you need to prevent: - compiling schemas can cause stack overflow (if they are too deep) - compiling schemas can be slow (e.g. [#557](https://github.com/ajv-validator/ajv/issues/557)) - validating certain data can be slow It is difficult to predict all the scenarios, but at the very least it may help to limit the size of untrusted schemas (e.g. limit JSON string length) and also the maximum schema object depth (that can be high for relatively small JSON strings). You also may want to mitigate slow regular expressions in `pattern` and `patternProperties` keywords. 
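For example, one possible pre-check along these lines (not part of Ajv - the limits and helper below are purely illustrative) rejects overly large or overly deep schemas before compiling them:

```javascript
// One possible pre-check for schemas received from untrusted sources (not part
// of Ajv): reject very long schema strings and very deep schema objects before
// compiling. The limits are arbitrary examples.
function safeCompile(ajv, schemaJson, maxLength, maxDepth) {
  if (schemaJson.length > maxLength) throw new Error('schema too large');
  var schema = JSON.parse(schemaJson);
  if (depth(schema) > maxDepth) throw new Error('schema too deep');
  return ajv.compile(schema);
}

function depth(node) {
  if (node === null || typeof node !== 'object') return 0;
  var max = 0;
  for (var key in node) {
    var d = depth(node[key]);
    if (d > max) max = d;
  }
  return 1 + max;
}
```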
Regardless of the measures you take, using untrusted schemas increases security risks.

##### Circular references in JavaScript objects

Ajv does not support schemas and validated data that have circular references in objects. See [issue #802](https://github.com/ajv-validator/ajv/issues/802).

An attempt to compile such schemas or validate such data would cause a stack overflow (or would not complete in the case of asynchronous validation). Depending on the parser you use, untrusted data can lead to circular references.

##### Security risks of trusted schemas

Some keywords in JSON Schemas can lead to very slow validation for certain data. These keywords include (but may not be limited to):

- `pattern` and `format` for large strings - in some cases using `maxLength` can help mitigate it, but certain regular expressions can lead to exponential validation time even with relatively short strings (see [ReDoS attack](#redos-attack)).
- `patternProperties` for large property names - use `propertyNames` to mitigate, but some regular expressions can have exponential evaluation time as well.
- `uniqueItems` for large non-scalar arrays - use `maxItems` to mitigate

__Please note__: The suggestions above to prevent slow validation would only work if you do NOT use `allErrors: true` in production code (using it would continue validation after validation errors).

You can validate your JSON schemas against [this meta-schema](https://github.com/ajv-validator/ajv/blob/master/lib/refs/json-schema-secure.json) to check that these recommendations are followed:

```javascript
const isSchemaSecure = ajv.compile(require('ajv/lib/refs/json-schema-secure.json'));

const schema1 = {format: 'email'};
isSchemaSecure(schema1); // false

const schema2 = {format: 'email', maxLength: MAX_LENGTH};
isSchemaSecure(schema2); // true
```

__Please note__: following all these recommendations is not a guarantee that validation of untrusted data is safe - it can still lead to some undesirable results.

##### Content Security Policies (CSP)

See [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp)

## ReDoS attack

Certain regular expressions can lead to exponential evaluation time even with relatively short strings.

Please assess the regular expressions you use in the schemas on their vulnerability to this attack - see [safe-regex](https://github.com/substack/safe-regex), for example.

__Please note__: some formats that Ajv implements use [regular expressions](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js) that can be vulnerable to ReDoS attack, so if you use Ajv to validate data from untrusted sources __it is strongly recommended__ to consider the following:

- making an assessment of the "format" implementations in Ajv.
- using the `format: 'fast'` option that simplifies some of the regular expressions (although it does not guarantee that they are safe).
- replacing the format implementations provided by Ajv with your own implementations of the "format" keyword that either use different regular expressions or another approach to format validation. Please see the [addFormat](#api-addformat) method.
- disabling format validation by ignoring the "format" keyword with the option `format: false`

Whatever mitigation you choose, please treat all formats provided by Ajv as potentially unsafe and make your own assessment of their suitability for your validation scenarios.

## Filtering data

With [option `removeAdditional`](#options) (added by [andyscott](https://github.com/andyscott)) you can filter data during the validation.
This option modifies original data.

Example:

```javascript
var ajv = new Ajv({ removeAdditional: true });
var schema = {
  "additionalProperties": false,
  "properties": {
    "foo": { "type": "number" },
    "bar": {
      "additionalProperties": { "type": "number" },
      "properties": {
        "baz": { "type": "string" }
      }
    }
  }
}

var data = {
  "foo": 0,
  "additional1": 1, // will be removed; `additionalProperties` == false
  "bar": {
    "baz": "abc",
    "additional2": 2 // will NOT be removed; `additionalProperties` != false
  },
}

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // { "foo": 0, "bar": { "baz": "abc", "additional2": 2 } }
```

If the `removeAdditional` option in the example above were `"all"` then both `additional1` and `additional2` properties would have been removed.

If the option were `"failing"` then property `additional1` would have been removed regardless of its value and property `additional2` would have been removed only if its value failed the schema in the inner `additionalProperties` (so in the example above it would have stayed because it passes the schema, but any non-number would have been removed).

__Please note__: If you use the `removeAdditional` option with the `additionalProperties` keyword inside `anyOf`/`oneOf` keywords your validation can fail with this schema, for example:

```json
{
  "type": "object",
  "oneOf": [
    {
      "properties": {
        "foo": { "type": "string" }
      },
      "required": [ "foo" ],
      "additionalProperties": false
    },
    {
      "properties": {
        "bar": { "type": "integer" }
      },
      "required": [ "bar" ],
      "additionalProperties": false
    }
  ]
}
```

The intention of the schema above is to allow objects with either the string property "foo" or the integer property "bar", but not with both and not with any other properties.

With the option `removeAdditional: true` the validation will pass for the object `{ "foo": "abc"}` but will fail for the object `{"bar": 1}`. It happens because while the first subschema in `oneOf` is validated, the property `bar` is removed - it is an additional property according to the standard, because it is not included in the `properties` keyword of the same schema.

While this behaviour is unexpected (issues [#129](https://github.com/ajv-validator/ajv/issues/129), [#134](https://github.com/ajv-validator/ajv/issues/134)), it is correct. To have the expected behaviour (both objects are allowed and additional properties are removed) the schema has to be refactored in this way:

```json
{
  "type": "object",
  "properties": {
    "foo": { "type": "string" },
    "bar": { "type": "integer" }
  },
  "additionalProperties": false,
  "oneOf": [
    { "required": [ "foo" ] },
    { "required": [ "bar" ] }
  ]
}
```

The schema above is also more efficient - it will compile into a faster function.

## Assigning defaults

With [option `useDefaults`](#options) Ajv will assign values from the `default` keyword in the schemas of `properties` and `items` (when it is the array of schemas) to the missing properties and items.

With the option value `"empty"` properties and items equal to `null` or `""` (empty string) will be considered missing and assigned defaults.

This option modifies original data.

__Please note__: the default value is inserted in the generated validation code as a literal, so the value inserted in the data will be the deep clone of the default in the schema.
Example 1 (`default` in `properties`): ```javascript var ajv = new Ajv({ useDefaults: true }); var schema = { "type": "object", "properties": { "foo": { "type": "number" }, "bar": { "type": "string", "default": "baz" } }, "required": [ "foo", "bar" ] }; var data = { "foo": 1 }; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": 1, "bar": "baz" } ``` Example 2 (`default` in `items`): ```javascript var schema = { "type": "array", "items": [ { "type": "number" }, { "type": "string", "default": "foo" } ] } var data = [ 1 ]; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // [ 1, "foo" ] ``` `default` keywords in other cases are ignored: - not in `properties` or `items` subschemas - in schemas inside `anyOf`, `oneOf` and `not` (see [#42](https://github.com/ajv-validator/ajv/issues/42)) - in `if` subschema of `switch` keyword - in schemas generated by custom macro keywords The [`strictDefaults` option](#options) customizes Ajv's behavior for the defaults that Ajv ignores (`true` raises an error, and `"log"` outputs a warning). ## Coercing data types When you are validating user inputs all your data properties are usually strings. The option `coerceTypes` allows you to have your data types coerced to the types specified in your schema `type` keywords, both to pass the validation and to use the correctly typed data afterwards. This option modifies original data. __Please note__: if you pass a scalar value to the validating function its type will be coerced and it will pass the validation, but the value of the variable you pass won't be updated because scalars are passed by value. Example 1: ```javascript var ajv = new Ajv({ coerceTypes: true }); var schema = { "type": "object", "properties": { "foo": { "type": "number" }, "bar": { "type": "boolean" } }, "required": [ "foo", "bar" ] }; var data = { "foo": "1", "bar": "false" }; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": 1, "bar": false } ``` Example 2 (array coercions): ```javascript var ajv = new Ajv({ coerceTypes: 'array' }); var schema = { "properties": { "foo": { "type": "array", "items": { "type": "number" } }, "bar": { "type": "boolean" } } }; var data = { "foo": "1", "bar": ["false"] }; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": [1], "bar": false } ``` The coercion rules, as you can see from the example, are different from JavaScript both to validate user input as expected and to have the coercion reversible (to correctly validate cases where different types are defined in subschemas of "anyOf" and other compound keywords). See [Coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md) for details. ## API ##### new Ajv(Object options) -&gt; Object Create Ajv instance. ##### .compile(Object schema) -&gt; Function&lt;Object data&gt; Generate validating function and cache the compiled schema for future use. Validating function returns a boolean value. This function has properties `errors` and `schema`. Errors encountered during the last validation are assigned to `errors` property (it is assigned `null` if there was no errors). `schema` property contains the reference to the original schema. The schema passed to this method will be validated against meta-schema unless `validateSchema` option is false. If schema is invalid, an error will be thrown. See [options](#options). 
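For example (reusing the `ajv` instance from the examples above), the compiled function exposes both properties, and `errors` is replaced on every call, so keep a reference if you need it later:

```javascript
var validate = ajv.compile({type: 'object', required: ['name']});

console.log(validate.schema); // { type: 'object', required: [ 'name' ] }

if (!validate({})) {
  var firstErrors = validate.errors; // errors from this call
  validate({name: 'ok'});            // overwrites validate.errors (it becomes null)
  console.log(firstErrors[0].keyword); // 'required'
}
```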
##### <a name="api-compileAsync"></a>.compileAsync(Object schema [, Boolean meta] [, Function callback]) -&gt; Promise Asynchronous version of `compile` method that loads missing remote schemas using asynchronous function in `options.loadSchema`. This function returns a Promise that resolves to a validation function. An optional callback passed to `compileAsync` will be called with 2 parameters: error (or null) and validating function. The returned promise will reject (and the callback will be called with an error) when: - missing schema can't be loaded (`loadSchema` returns a Promise that rejects). - a schema containing a missing reference is loaded, but the reference cannot be resolved. - schema (or some loaded/referenced schema) is invalid. The function compiles schema and loads the first missing schema (or meta-schema) until all missing schemas are loaded. You can asynchronously compile meta-schema by passing `true` as the second parameter. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### .validate(Object schema|String key|String ref, data) -&gt; Boolean Validate data using passed schema (it will be compiled and cached). Instead of the schema you can use the key that was previously passed to `addSchema`, the schema id if it was present in the schema or any previously resolved reference. Validation errors will be available in the `errors` property of Ajv instance (`null` if there were no errors). __Please note__: every time this method is called the errors are overwritten so you need to copy them to another variable if you want to use them later. If the schema is asynchronous (has `$async` keyword on the top level) this method returns a Promise. See [Asynchronous validation](#asynchronous-validation). ##### .addSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Add schema(s) to validator instance. This method does not compile schemas (but it still validates them). Because of that dependencies can be added in any order and circular dependencies are supported. It also prevents unnecessary compilation of schemas that are containers for other schemas but not used as a whole. Array of schemas can be passed (schemas should have ids), the second parameter will be ignored. Key can be passed that can be used to reference the schema and will be used as the schema id if there is no id inside the schema. If the key is not passed, the schema id will be used as the key. Once the schema is added, it (and all the references inside it) can be referenced in other schemas and used to validate data. Although `addSchema` does not compile schemas, explicit compilation is not required - the schema will be compiled when it is used first time. By default the schema is validated against meta-schema before it is added, and if the schema does not pass validation the exception is thrown. This behaviour is controlled by `validateSchema` option. __Please note__: Ajv uses the [method chaining syntax](https://en.wikipedia.org/wiki/Method_chaining) for all methods with the prefix `add*` and `remove*`. This allows you to do nice things like the following. ```javascript var validate = new Ajv().addSchema(schema).addFormat(name, regex).getSchema(uri); ``` ##### .addMetaSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Adds meta schema(s) that can be used to validate other schemas. 
That function should be used instead of `addSchema` because there may be instance options that would compile a meta schema incorrectly (at the moment it is the `removeAdditional` option).

There is no need to explicitly add the draft-07 meta schema (http://json-schema.org/draft-07/schema) - it is added by default, unless option `meta` is set to `false`. You only need to use it if you have a changed meta-schema that you want to use to validate your schemas. See `validateSchema`.

##### <a name="api-validateschema"></a>.validateSchema(Object schema) -&gt; Boolean

Validates schema. This method should be used to validate schemas rather than `validate` due to the inconsistency of `uri` format in JSON Schema standard.

By default this method is called automatically when the schema is added, so you rarely need to use it directly.

If schema doesn't have `$schema` property, it is validated against the draft-07 meta-schema (option `meta` should not be false).

If schema has `$schema` property, then the schema with this id (that should be previously added) is used to validate the passed schema.

Errors will be available at `ajv.errors`.

##### .getSchema(String key) -&gt; Function&lt;Object data&gt;

Retrieve compiled schema previously added with `addSchema` by the key passed to `addSchema` or by its full reference (id). The returned validating function has `schema` property with the reference to the original schema.

##### .removeSchema([Object schema|String key|String ref|RegExp pattern]) -&gt; Ajv

Remove added/cached schema. Even if schema is referenced by other schemas it can be safely removed as dependent schemas have local references.

Schema can be removed using:

- key passed to `addSchema`
- its full reference (id)
- RegExp that should match schema id or key (meta-schemas won't be removed)
- actual schema object that will be stable-stringified to remove schema from cache

If no parameter is passed all schemas but meta-schemas will be removed and the cache will be cleared.

##### <a name="api-addformat"></a>.addFormat(String name, String|RegExp|Function|Object format) -&gt; Ajv

Add custom format to validate strings or numbers. It can also be used to replace pre-defined formats for the Ajv instance.

Strings are converted to RegExp.

Function should return validation result as `true` or `false`.

If object is passed it should have properties `validate`, `compare` and `async`:

- _validate_: a string, RegExp or a function as described above.
- _compare_: an optional comparison function that accepts two strings and compares them according to the format meaning. This function is used with keywords `formatMaximum`/`formatMinimum` (defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package). It should return `1` if the first value is bigger than the second value, `-1` if it is smaller and `0` if it is equal.
- _async_: an optional `true` value if `validate` is an asynchronous function; in this case it should return a promise that resolves with a value `true` or `false`.
- _type_: an optional type of data that the format applies to. It can be `"string"` (default) or `"number"` (see https://github.com/ajv-validator/ajv/issues/291#issuecomment-259923858). If the type of data is different, the validation will pass.

Custom formats can be also added via the `formats` option.

##### <a name="api-addkeyword"></a>.addKeyword(String keyword, Object definition) -&gt; Ajv

Add custom validation keyword to Ajv instance.
Keyword should be different from all standard JSON Schema keywords and different from previously defined keywords. There is no way to redefine keywords or to remove keyword definition from the instance. Keyword must start with a letter, `_` or `$`, and may continue with letters, numbers, `_`, `$`, or `-`. It is recommended to use an application-specific prefix for keywords to avoid current and future name collisions. Example Keywords: - `"xyz-example"`: valid, and uses prefix for the xyz project to avoid name collisions. - `"example"`: valid, but not recommended as it could collide with future versions of JSON Schema etc. - `"3-example"`: invalid as numbers are not allowed to be the first character in a keyword Keyword definition is an object with the following properties: - _type_: optional string or array of strings with data type(s) that the keyword applies to. If not present, the keyword will apply to all types. - _validate_: validating function - _compile_: compiling function - _macro_: macro function - _inline_: compiling function that returns code (as string) - _schema_: an optional `false` value used with "validate" keyword to not pass schema - _metaSchema_: an optional meta-schema for keyword schema - _dependencies_: an optional list of properties that must be present in the parent schema - it will be checked during schema compilation - _modifying_: `true` MUST be passed if keyword modifies data - _statements_: `true` can be passed in case inline keyword generates statements (as opposed to expression) - _valid_: pass `true`/`false` to pre-define validation result, the result returned from validation function will be ignored. This option cannot be used with macro keywords. - _$data_: an optional `true` value to support [$data reference](#data-reference) as the value of custom keyword. The reference will be resolved at validation time. If the keyword has meta-schema it would be extended to allow $data and it will be used to validate the resolved value. Supporting $data reference requires that keyword has validating function (as the only option or in addition to compile, macro or inline function). - _async_: an optional `true` value if the validation function is asynchronous (whether it is compiled or passed in _validate_ property); in this case it should return a promise that resolves with a value `true` or `false`. This option is ignored in case of "macro" and "inline" keywords. - _errors_: an optional boolean or string `"full"` indicating whether keyword returns errors. If this property is not set Ajv will determine if the errors were set in case of failed validation. _compile_, _macro_ and _inline_ are mutually exclusive, only one should be used at a time. _validate_ can be used separately or in addition to them to support $data reference. __Please note__: If the keyword is validating data type that is different from the type(s) in its definition, the validation function will not be called (and expanded macro will not be used), so there is no need to check for data type inside validation function or inside schema returned by macro function (unless you want to enforce a specific type and for some reason do not want to use a separate `type` keyword for that). In the same way as standard keywords work, if the keyword does not apply to the data type being validated, the validation of this keyword will succeed. See [Defining custom keywords](#defining-custom-keywords) for more details. 
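To illustrate the `macro` option listed above, here is a short sketch of a hypothetical `even` keyword (the name and semantics are made up for this example; it reuses the `ajv` instance from earlier examples). The macro function returns a schema that Ajv applies in place of the keyword:

```javascript
// Sketch of a macro keyword: the returned schema is expanded and validated
// in addition to the rest of the parent schema.
ajv.addKeyword('even', {
  type: 'number',
  macro: function (schema) {
    // schema is the keyword value: true requires an even number, false an odd one
    return schema ? {multipleOf: 2} : {not: {multipleOf: 2}};
  }
});

var validate = ajv.compile({type: 'number', even: true});
console.log(validate(4)); // true
console.log(validate(5)); // false
```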
##### .getKeyword(String keyword) -&gt; Object|Boolean Returns custom keyword definition, `true` for pre-defined keywords and `false` if the keyword is unknown. ##### .removeKeyword(String keyword) -&gt; Ajv Removes custom or pre-defined keyword so you can redefine them. While this method can be used to extend pre-defined keywords, it can also be used to completely change their meaning - it may lead to unexpected results. __Please note__: schemas compiled before the keyword is removed will continue to work without changes. To recompile schemas use `removeSchema` method and compile them again. ##### .errorsText([Array&lt;Object&gt; errors [, Object options]]) -&gt; String Returns the text with all errors in a String. Options can have properties `separator` (string used to separate errors, ", " by default) and `dataVar` (the variable name that dataPaths are prefixed with, "data" by default). ## Options Defaults: ```javascript { // validation and reporting options: $data: false, allErrors: false, verbose: false, $comment: false, // NEW in Ajv version 6.0 jsonPointers: false, uniqueItems: true, unicode: true, nullable: false, format: 'fast', formats: {}, unknownFormats: true, schemas: {}, logger: undefined, // referenced schema options: schemaId: '$id', missingRefs: true, extendRefs: 'ignore', // recommended 'fail' loadSchema: undefined, // function(uri: string): Promise {} // options to modify validated data: removeAdditional: false, useDefaults: false, coerceTypes: false, // strict mode options strictDefaults: false, strictKeywords: false, strictNumbers: false, // asynchronous validation options: transpile: undefined, // requires ajv-async package // advanced options: meta: true, validateSchema: true, addUsedSchema: true, inlineRefs: true, passContext: false, loopRequired: Infinity, ownProperties: false, multipleOfPrecision: false, errorDataPath: 'object', // deprecated messages: true, sourceCode: false, processCode: undefined, // function (str: string, schema: object): string {} cache: new Cache, serialize: undefined } ``` ##### Validation and reporting options - _$data_: support [$data references](#data-reference). Draft 6 meta-schema that is added by default will be extended to allow them. If you want to use another meta-schema you need to use $dataMetaSchema method to add support for $data reference. See [API](#api). - _allErrors_: check all rules collecting all errors. Default is to return after the first error. - _verbose_: include the reference to the part of the schema (`schema` and `parentSchema`) and validated data in errors (false by default). - _$comment_ (NEW in Ajv version 6.0): log or pass the value of `$comment` keyword to a function. Option values: - `false` (default): ignore $comment keyword. - `true`: log the keyword value to console. - function: pass the keyword value, its schema path and root schema to the specified function - _jsonPointers_: set `dataPath` property of errors using [JSON Pointers](https://tools.ietf.org/html/rfc6901) instead of JavaScript property access notation. - _uniqueItems_: validate `uniqueItems` keyword (true by default). - _unicode_: calculate correct length of strings with unicode pairs (true by default). Pass `false` to use `.length` of strings that is faster, but gives "incorrect" lengths of strings with unicode pairs - each unicode pair is counted as two characters. - _nullable_: support keyword "nullable" from [Open API 3 specification](https://swagger.io/docs/specification/data-models/data-types/). - _format_: formats validation mode. 
Option values: - `"fast"` (default) - simplified and fast validation (see [Formats](#formats) for details of which formats are available and affected by this option). - `"full"` - more restrictive and slow validation. E.g., 25:00:00 and 2015/14/33 will be invalid time and date in 'full' mode but it will be valid in 'fast' mode. - `false` - ignore all format keywords. - _formats_: an object with custom formats. Keys and values will be passed to `addFormat` method. - _keywords_: an object with custom keywords. Keys and values will be passed to `addKeyword` method. - _unknownFormats_: handling of unknown formats. Option values: - `true` (default) - if an unknown format is encountered the exception is thrown during schema compilation. If `format` keyword value is [$data reference](#data-reference) and it is unknown the validation will fail. - `[String]` - an array of unknown format names that will be ignored. This option can be used to allow usage of third party schemas with format(s) for which you don't have definitions, but still fail if another unknown format is used. If `format` keyword value is [$data reference](#data-reference) and it is not in this array the validation will fail. - `"ignore"` - to log warning during schema compilation and always pass validation (the default behaviour in versions before 5.0.0). This option is not recommended, as it allows to mistype format name and it won't be validated without any error message. This behaviour is required by JSON Schema specification. - _schemas_: an array or object of schemas that will be added to the instance. In case you pass the array the schemas must have IDs in them. When the object is passed the method `addSchema(value, key)` will be called for each schema in this object. - _logger_: sets the logging method. Default is the global `console` object that should have methods `log`, `warn` and `error`. See [Error logging](#error-logging). Option values: - custom logger - it should have methods `log`, `warn` and `error`. If any of these methods is missing an exception will be thrown. - `false` - logging is disabled. ##### Referenced schema options - _schemaId_: this option defines which keywords are used as schema URI. Option value: - `"$id"` (default) - only use `$id` keyword as schema URI (as specified in JSON Schema draft-06/07), ignore `id` keyword (if it is present a warning will be logged). - `"id"` - only use `id` keyword as schema URI (as specified in JSON Schema draft-04), ignore `$id` keyword (if it is present a warning will be logged). - `"auto"` - use both `$id` and `id` keywords as schema URI. If both are present (in the same schema object) and different the exception will be thrown during schema compilation. - _missingRefs_: handling of missing referenced schemas. Option values: - `true` (default) - if the reference cannot be resolved during compilation the exception is thrown. The thrown error has properties `missingRef` (with hash fragment) and `missingSchema` (without it). Both properties are resolved relative to the current base id (usually schema id, unless it was substituted). - `"ignore"` - to log error during compilation and always pass validation. - `"fail"` - to log error and successfully compile schema but fail validation if this rule is checked. - _extendRefs_: validation of other keywords when `$ref` is present in the schema. 
Option values: - `"ignore"` (default) - when `$ref` is used other keywords are ignored (as per [JSON Reference](https://tools.ietf.org/html/draft-pbryan-zyp-json-ref-03#section-3) standard). A warning will be logged during the schema compilation. - `"fail"` (recommended) - if other validation keywords are used together with `$ref` the exception will be thrown when the schema is compiled. This option is recommended to make sure schema has no keywords that are ignored, which can be confusing. - `true` - validate all keywords in the schemas with `$ref` (the default behaviour in versions before 5.0.0). - _loadSchema_: asynchronous function that will be used to load remote schemas when `compileAsync` [method](#api-compileAsync) is used and some reference is missing (option `missingRefs` should NOT be 'fail' or 'ignore'). This function should accept remote schema uri as a parameter and return a Promise that resolves to a schema. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### Options to modify validated data - _removeAdditional_: remove additional properties - see example in [Filtering data](#filtering-data). This option is not used if schema is added with `addMetaSchema` method. Option values: - `false` (default) - not to remove additional properties - `"all"` - all additional properties are removed, regardless of `additionalProperties` keyword in schema (and no validation is made for them). - `true` - only additional properties with `additionalProperties` keyword equal to `false` are removed. - `"failing"` - additional properties that fail schema validation will be removed (where `additionalProperties` keyword is `false` or schema). - _useDefaults_: replace missing or undefined properties and items with the values from corresponding `default` keywords. Default behaviour is to ignore `default` keywords. This option is not used if schema is added with `addMetaSchema` method. See examples in [Assigning defaults](#assigning-defaults). Option values: - `false` (default) - do not use defaults - `true` - insert defaults by value (object literal is used). - `"empty"` - in addition to missing or undefined, use defaults for properties and items that are equal to `null` or `""` (an empty string). - `"shared"` (deprecated) - insert defaults by reference. If the default is an object, it will be shared by all instances of validated data. If you modify the inserted default in the validated data, it will be modified in the schema as well. - _coerceTypes_: change data type of data to match `type` keyword. See the example in [Coercing data types](#coercing-data-types) and [coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md). Option values: - `false` (default) - no type coercion. - `true` - coerce scalar data types. - `"array"` - in addition to coercions between scalar types, coerce scalar data to an array with one element and vice versa (as required by the schema). ##### Strict mode options - _strictDefaults_: report ignored `default` keywords in schemas. Option values: - `false` (default) - ignored defaults are not reported - `true` - if an ignored default is present, throw an error - `"log"` - if an ignored default is present, log warning - _strictKeywords_: report unknown keywords in schemas. 
Option values: - `false` (default) - unknown keywords are not reported - `true` - if an unknown keyword is present, throw an error - `"log"` - if an unknown keyword is present, log warning - _strictNumbers_: validate numbers strictly, failing validation for NaN and Infinity. Option values: - `false` (default) - NaN or Infinity will pass validation for numeric types - `true` - NaN or Infinity will not pass validation for numeric types ##### Asynchronous validation options - _transpile_: Requires [ajv-async](https://github.com/ajv-validator/ajv-async) package. It determines whether Ajv transpiles compiled asynchronous validation function. Option values: - `undefined` (default) - transpile with [nodent](https://github.com/MatAtBread/nodent) if async functions are not supported. - `true` - always transpile with nodent. - `false` - do not transpile; if async functions are not supported an exception will be thrown. ##### Advanced options - _meta_: add [meta-schema](http://json-schema.org/documentation.html) so it can be used by other schemas (true by default). If an object is passed, it will be used as the default meta-schema for schemas that have no `$schema` keyword. This default meta-schema MUST have `$schema` keyword. - _validateSchema_: validate added/compiled schemas against meta-schema (true by default). `$schema` property in the schema can be http://json-schema.org/draft-07/schema or absent (draft-07 meta-schema will be used) or can be a reference to the schema previously added with `addMetaSchema` method. Option values: - `true` (default) - if the validation fails, throw the exception. - `"log"` - if the validation fails, log error. - `false` - skip schema validation. - _addUsedSchema_: by default methods `compile` and `validate` add schemas to the instance if they have `$id` (or `id`) property that doesn't start with "#". If `$id` is present and it is not unique the exception will be thrown. Set this option to `false` to skip adding schemas to the instance and the `$id` uniqueness check when these methods are used. This option does not affect `addSchema` method. - _inlineRefs_: Affects compilation of referenced schemas. Option values: - `true` (default) - the referenced schemas that don't have refs in them are inlined, regardless of their size - that substantially improves performance at the cost of the bigger size of compiled schema functions. - `false` - to not inline referenced schemas (they will be compiled as separate functions). - integer number - to limit the maximum number of keywords of the schema that will be inlined. - _passContext_: pass validation context to custom keyword functions. If this option is `true` and you pass some context to the compiled validation function with `validate.call(context, data)`, the `context` will be available as `this` in your custom keywords. By default `this` is Ajv instance. - _loopRequired_: by default `required` keyword is compiled into a single expression (or a sequence of statements in `allErrors` mode). In case of a very large number of properties in this keyword it may result in a very big validation function. Pass integer to set the number of properties above which `required` keyword will be validated in a loop - smaller validation function size but also worse performance. - _ownProperties_: by default Ajv iterates over all enumerable object properties; when this option is `true` only own enumerable object properties (i.e. found directly on the object rather than on its prototype) are iterated. Contributed by @mbroadst. 
- _multipleOfPrecision_: by default `multipleOf` keyword is validated by comparing the result of division with parseInt() of that result. It works for dividers that are bigger than 1. For small dividers such as 0.01 the result of the division is usually not integer (even when it should be integer, see issue [#84](https://github.com/ajv-validator/ajv/issues/84)). If you need to use fractional dividers set this option to some positive integer N to have `multipleOf` validated using this formula: `Math.abs(Math.round(division) - division) < 1e-N` (it is slower but allows for float arithmetics deviations). - _errorDataPath_ (deprecated): set `dataPath` to point to 'object' (default) or to 'property' when validating keywords `required`, `additionalProperties` and `dependencies`. - _messages_: Include human-readable messages in errors. `true` by default. `false` can be passed when custom messages are used (e.g. with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n)). - _sourceCode_: add `sourceCode` property to validating function (for debugging; this code can be different from the result of toString call). - _processCode_: an optional function to process generated code before it is passed to Function constructor. It can be used to either beautify (the validating function is generated without line-breaks) or to transpile code. Starting from version 5.0.0 this option replaced options: - `beautify` that formatted the generated function using [js-beautify](https://github.com/beautify-web/js-beautify). If you want to beautify the generated code pass a function calling `require('js-beautify').js_beautify` as `processCode: code => js_beautify(code)`. - `transpile` that transpiled asynchronous validation function. You can still use `transpile` option with [ajv-async](https://github.com/ajv-validator/ajv-async) package. See [Asynchronous validation](#asynchronous-validation) for more information. - _cache_: an optional instance of cache to store compiled schemas using stable-stringified schema as a key. For example, set-associative cache [sacjs](https://github.com/epoberezkin/sacjs) can be used. If not passed then a simple hash is used which is good enough for the common use case (a limited number of statically defined schemas). Cache should have methods `put(key, value)`, `get(key)`, `del(key)` and `clear()`. - _serialize_: an optional function to serialize schema to cache key. Pass `false` to use schema itself as a key (e.g., if WeakMap used as a cache). By default [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) is used. ## Validation errors In case of validation failure, Ajv assigns the array of errors to `errors` property of validation function (or to `errors` property of Ajv instance when `validate` or `validateSchema` methods were called). In case of [asynchronous validation](#asynchronous-validation), the returned promise is rejected with exception `Ajv.ValidationError` that has `errors` property. ### Error objects Each error is an object with the following properties: - _keyword_: validation keyword. - _dataPath_: the path to the part of the data that was validated. By default `dataPath` uses JavaScript property access notation (e.g., `".prop[1].subProp"`). When the option `jsonPointers` is true (see [Options](#options)) `dataPath` will be set using JSON pointer standard (e.g., `"/prop/1/subProp"`). - _schemaPath_: the path (JSON-pointer as a URI fragment) to the schema of the keyword that failed validation. 
- _params_: the object with the additional information about error that can be used to create custom error messages (e.g., using [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package). See below for parameters set by all keywords.
- _message_: the standard error message (can be excluded with option `messages` set to false).
- _schema_: the schema of the keyword (added with `verbose` option).
- _parentSchema_: the schema containing the keyword (added with `verbose` option).
- _data_: the data validated by the keyword (added with `verbose` option).

__Please note__: `propertyNames` keyword schema validation errors have an additional property `propertyName`; `dataPath` points to the object. After schema validation for each property name, if it is invalid an additional error is added with the property `keyword` equal to `"propertyNames"`.

### Error parameters

Properties of `params` object in errors depend on the keyword that failed validation.

- `maxItems`, `minItems`, `maxLength`, `minLength`, `maxProperties`, `minProperties` - property `limit` (number, the schema of the keyword).
- `additionalItems` - property `limit` (the maximum number of allowed items in case when `items` keyword is an array of schemas and `additionalItems` is false).
- `additionalProperties` - property `additionalProperty` (the property not used in `properties` and `patternProperties` keywords).
- `dependencies` - properties:
  - `property` (dependent property),
  - `missingProperty` (required missing dependency - only the first one is reported currently),
  - `deps` (required dependencies, comma separated list as a string),
  - `depsCount` (the number of required dependencies).
- `format` - property `format` (the schema of the keyword).
- `maximum`, `minimum` - properties:
  - `limit` (number, the schema of the keyword),
  - `exclusive` (boolean, the schema of `exclusiveMaximum` or `exclusiveMinimum`),
  - `comparison` (string, comparison operation to compare the data to the limit, with the data on the left and the limit on the right; can be "<", "<=", ">", ">=")
- `multipleOf` - property `multipleOf` (the schema of the keyword)
- `pattern` - property `pattern` (the schema of the keyword)
- `required` - property `missingProperty` (required property that is missing).
- `propertyNames` - property `propertyName` (an invalid property name).
- `patternRequired` (in ajv-keywords) - property `missingPattern` (required pattern that did not match any property).
- `type` - property `type` (required type(s), a string, can be a comma-separated list)
- `uniqueItems` - properties `i` and `j` (indices of duplicate items).
- `const` - property `allowedValue` pointing to the value (the schema of the keyword).
- `enum` - property `allowedValues` pointing to the array of values (the schema of the keyword).
- `$ref` - property `ref` with the referenced schema URI.
- `oneOf` - property `passingSchemas` (array of indices of passing schemas, null if no schema passes).
- custom keywords (in case keyword definition doesn't create errors) - property `keyword` (the keyword name).

### Error logging

Using the `logger` option when initializing Ajv will allow you to define custom logging. Here you can build upon the existing logging. The use of other logging packages is supported as long as the package or its associated wrapper exposes the required methods. If any of the required methods are missing an exception will be thrown.
- **Required Methods**: `log`, `warn`, `error` ```javascript var otherLogger = new OtherLogger(); var ajv = new Ajv({ logger: { log: console.log.bind(console), warn: function warn() { otherLogger.logWarn.apply(otherLogger, arguments); }, error: function error() { otherLogger.logError.apply(otherLogger, arguments); console.error.apply(console, arguments); } } }); ``` ## Plugins Ajv can be extended with plugins that add custom keywords, formats or functions to process generated code. When such plugin is published as npm package it is recommended that it follows these conventions: - it exports a function - this function accepts ajv instance as the first parameter and returns the same instance to allow chaining - this function can accept an optional configuration as the second parameter If you have published a useful plugin please submit a PR to add it to the next section. ## Related packages - [ajv-async](https://github.com/ajv-validator/ajv-async) - plugin to configure async validation mode - [ajv-bsontype](https://github.com/BoLaMN/ajv-bsontype) - plugin to validate mongodb's bsonType formats - [ajv-cli](https://github.com/jessedc/ajv-cli) - command line interface - [ajv-errors](https://github.com/ajv-validator/ajv-errors) - plugin for custom error messages - [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) - internationalised error messages - [ajv-istanbul](https://github.com/ajv-validator/ajv-istanbul) - plugin to instrument generated validation code to measure test coverage of your schemas - [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) - plugin with custom validation keywords (select, typeof, etc.) - [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) - plugin with keywords $merge and $patch - [ajv-pack](https://github.com/ajv-validator/ajv-pack) - produces a compact module exporting validation functions - [ajv-formats-draft2019](https://github.com/luzlab/ajv-formats-draft2019) - format validators for draft2019 that aren't already included in ajv (ie. `idn-hostname`, `idn-email`, `iri`, `iri-reference` and `duration`). ## Some packages using Ajv - [webpack](https://github.com/webpack/webpack) - a module bundler. 
Its main purpose is to bundle JavaScript files for usage in a browser - [jsonscript-js](https://github.com/JSONScript/jsonscript-js) - the interpreter for [JSONScript](http://www.jsonscript.org) - scripted processing of existing endpoints and services - [osprey-method-handler](https://github.com/mulesoft-labs/osprey-method-handler) - Express middleware for validating requests and responses based on a RAML method object, used in [osprey](https://github.com/mulesoft/osprey) - validating API proxy generated from a RAML definition - [har-validator](https://github.com/ahmadnassri/har-validator) - HTTP Archive (HAR) validator - [jsoneditor](https://github.com/josdejong/jsoneditor) - a web-based tool to view, edit, format, and validate JSON http://jsoneditoronline.org - [JSON Schema Lint](https://github.com/nickcmaynard/jsonschemalint) - a web tool to validate JSON/YAML document against a single JSON Schema http://jsonschemalint.com - [objection](https://github.com/vincit/objection.js) - SQL-friendly ORM for Node.js - [table](https://github.com/gajus/table) - formats data into a string table - [ripple-lib](https://github.com/ripple/ripple-lib) - a JavaScript API for interacting with [Ripple](https://ripple.com) in Node.js and the browser - [restbase](https://github.com/wikimedia/restbase) - distributed storage with REST API & dispatcher for backend services built to provide a low-latency & high-throughput API for Wikipedia / Wikimedia content - [hippie-swagger](https://github.com/CacheControl/hippie-swagger) - [Hippie](https://github.com/vesln/hippie) wrapper that provides end to end API testing with swagger validation - [react-form-controlled](https://github.com/seeden/react-form-controlled) - React controlled form components with validation - [rabbitmq-schema](https://github.com/tjmehta/rabbitmq-schema) - a schema definition module for RabbitMQ graphs and messages - [@query/schema](https://www.npmjs.com/package/@query/schema) - stream filtering with a URI-safe query syntax parsing to JSON Schema - [chai-ajv-json-schema](https://github.com/peon374/chai-ajv-json-schema) - chai plugin to us JSON Schema with expect in mocha tests - [grunt-jsonschema-ajv](https://github.com/SignpostMarv/grunt-jsonschema-ajv) - Grunt plugin for validating files against JSON Schema - [extract-text-webpack-plugin](https://github.com/webpack-contrib/extract-text-webpack-plugin) - extract text from bundle into a file - [electron-builder](https://github.com/electron-userland/electron-builder) - a solution to package and build a ready for distribution Electron app - [addons-linter](https://github.com/mozilla/addons-linter) - Mozilla Add-ons Linter - [gh-pages-generator](https://github.com/epoberezkin/gh-pages-generator) - multi-page site generator converting markdown files to GitHub pages - [ESLint](https://github.com/eslint/eslint) - the pluggable linting utility for JavaScript and JSX ## Tests ``` npm install git submodule update --init npm test ``` ## Contributing All validation functions are generated using doT templates in [dot](https://github.com/ajv-validator/ajv/tree/master/lib/dot) folder. Templates are precompiled so doT is not a run-time dependency. `npm run build` - compiles templates to [dotjs](https://github.com/ajv-validator/ajv/tree/master/lib/dotjs) folder. 
`npm run watch` - automatically compiles templates when files in dot folder change Please see [Contributing guidelines](https://github.com/ajv-validator/ajv/blob/master/CONTRIBUTING.md) ## Changes history See https://github.com/ajv-validator/ajv/releases __Please note__: [Changes in version 7.0.0-beta](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0) [Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0). ## Code of conduct Please review and follow the [Code of conduct](https://github.com/ajv-validator/ajv/blob/master/CODE_OF_CONDUCT.md). Please report any unacceptable behaviour to ajv.validator@gmail.com - it will be reviewed by the project team. ## Open-source software support Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides a centralised support to open-source software users, in addition to the support provided by software maintainers. ## License [MIT](https://github.com/ajv-validator/ajv/blob/master/LICENSE) Compiler frontend for node.js ============================= Usage ----- For an up to date list of available command line options, see: ``` $> asc --help ``` API --- The API accepts the same options as the CLI but also lets you override stdout and stderr and/or provide a callback. Example: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { asc.main([ "myModule.ts", "--binaryFile", "myModule.wasm", "--optimize", "--sourceMap", "--measure" ], { stdout: process.stdout, stderr: process.stderr }, function(err) { if (err) throw err; ... }); }); ``` Available command line options can also be obtained programmatically: ```js const options = require("assemblyscript/cli/asc.json"); ... ``` You can also compile a source string directly, for example in a browser environment: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { const { binary, text, stdout, stderr } = asc.compileString(`...`, { optimize: 2 }); }); ... ``` ## Follow Redirects Drop-in replacement for Nodes `http` and `https` that automatically follows redirects. [![npm version](https://img.shields.io/npm/v/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) [![Build Status](https://travis-ci.org/follow-redirects/follow-redirects.svg?branch=master)](https://travis-ci.org/follow-redirects/follow-redirects) [![Coverage Status](https://coveralls.io/repos/follow-redirects/follow-redirects/badge.svg?branch=master)](https://coveralls.io/r/follow-redirects/follow-redirects?branch=master) [![Dependency Status](https://david-dm.org/follow-redirects/follow-redirects.svg)](https://david-dm.org/follow-redirects/follow-redirects) [![npm downloads](https://img.shields.io/npm/dm/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) `follow-redirects` provides [request](https://nodejs.org/api/http.html#http_http_request_options_callback) and [get](https://nodejs.org/api/http.html#http_http_get_options_callback) methods that behave identically to those found on the native [http](https://nodejs.org/api/http.html#http_http_request_options_callback) and [https](https://nodejs.org/api/https.html#https_https_request_options_callback) modules, with the exception that they will seamlessly follow redirects. 
```javascript var http = require('follow-redirects').http; var https = require('follow-redirects').https; http.get('http://bit.ly/900913', function (response) { response.on('data', function (chunk) { console.log(chunk); }); }).on('error', function (err) { console.error(err); }); ``` You can inspect the final redirected URL through the `responseUrl` property on the `response`. If no redirection happened, `responseUrl` is the original request URL. ```javascript https.request({ host: 'bitly.com', path: '/UHfDGO', }, function (response) { console.log(response.responseUrl); // 'http://duckduckgo.com/robots.txt' }); ``` ## Options ### Global options Global options are set directly on the `follow-redirects` module: ```javascript var followRedirects = require('follow-redirects'); followRedirects.maxRedirects = 10; followRedirects.maxBodyLength = 20 * 1024 * 1024; // 20 MB ``` The following global options are supported: - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. ### Per-request options Per-request options are set by passing an `options` object: ```javascript var url = require('url'); var followRedirects = require('follow-redirects'); var options = url.parse('http://bit.ly/900913'); options.maxRedirects = 10; http.request(options); ``` In addition to the [standard HTTP](https://nodejs.org/api/http.html#http_http_request_options_callback) and [HTTPS options](https://nodejs.org/api/https.html#https_https_request_options_callback), the following per-request options are supported: - `followRedirects` (default: `true`) – whether redirects should be followed. - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. - `agents` (default: `undefined`) – sets the `agent` option per protocol, since HTTP and HTTPS use different agents. Example value: `{ http: new http.Agent(), https: new https.Agent() }` - `trackRedirects` (default: `false`) – whether to store the redirected response details into the `redirects` array on the response object. ### Advanced usage By default, `follow-redirects` will use the Node.js default implementations of [`http`](https://nodejs.org/api/http.html) and [`https`](https://nodejs.org/api/https.html). To enable features such as caching and/or intermediate request tracking, you might instead want to wrap `follow-redirects` around custom protocol implementations: ```javascript var followRedirects = require('follow-redirects').wrap({ http: require('your-custom-http'), https: require('your-custom-https'), }); ``` Such custom protocols only need an implementation of the `request` method. ## Browserify Usage Due to the way `XMLHttpRequest` works, the `browserify` versions of `http` and `https` already follow redirects. If you are *only* targeting the browser, then this library has little value for you. If you want to write cross platform code for node and the browser, `follow-redirects` provides a great solution for making the native node modules behave the same as they do in browserified builds in the browser. To avoid bundling unnecessary code you should tell browserify to swap out `follow-redirects` with the standard modules when bundling. 
To make this easier, you need to change how you require the modules:

```javascript
var http = require('follow-redirects/http');
var https = require('follow-redirects/https');
```

You can then replace `follow-redirects` in your browserify configuration like so:

```javascript
"browser": {
  "follow-redirects/http" : "http",
  "follow-redirects/https" : "https"
}
```

The `browserify-http` module has not kept pace with node development, and no longer behaves identically to the native module when running in the browser. If you are experiencing problems, you may want to check out [browserify-http-2](https://www.npmjs.com/package/http-browserify-2). It is more actively maintained and attempts to address a few of the shortcomings of `browserify-http`. In that case, your browserify config should look something like this:

```javascript
"browser": {
  "follow-redirects/http" : "browserify-http-2/http",
  "follow-redirects/https" : "browserify-http-2/https"
}
```

## Contributing

Pull Requests are always welcome. Please [file an issue](https://github.com/follow-redirects/follow-redirects/issues) detailing your proposal before you invest your valuable time. Additional features and bug fixes should be accompanied by tests. You can run the test suite locally with a simple `npm test` command.

## Debug Logging

`follow-redirects` uses the excellent [debug](https://www.npmjs.com/package/debug) for logging. To turn on logging set the environment variable `DEBUG=follow-redirects` for debug output from just this module. When running the test suite it is sometimes advantageous to set `DEBUG=*` to see output from the express server as well.

## Authors

- Olivier Lalonde (olalonde@gmail.com)
- James Talmage (james@talmage.io)
- [Ruben Verborgh](https://ruben.verborgh.org/)

## License

[MIT License](https://github.com/follow-redirects/follow-redirects/blob/master/LICENSE)

# eslint-utils

[![npm version](https://img.shields.io/npm/v/eslint-utils.svg)](https://www.npmjs.com/package/eslint-utils) [![Downloads/month](https://img.shields.io/npm/dm/eslint-utils.svg)](http://www.npmtrends.com/eslint-utils) [![Build Status](https://github.com/mysticatea/eslint-utils/workflows/CI/badge.svg)](https://github.com/mysticatea/eslint-utils/actions) [![Coverage Status](https://codecov.io/gh/mysticatea/eslint-utils/branch/master/graph/badge.svg)](https://codecov.io/gh/mysticatea/eslint-utils) [![Dependency Status](https://david-dm.org/mysticatea/eslint-utils.svg)](https://david-dm.org/mysticatea/eslint-utils)

## 🏁 Goal

This package provides utility functions and classes for making ESLint custom rules. For example:

- [getStaticValue](https://eslint-utils.mysticatea.dev/api/ast-utils.html#getstaticvalue) evaluates static value on AST.
- [ReferenceTracker](https://eslint-utils.mysticatea.dev/api/scope-utils.html#referencetracker-class) checks the members of modules/globals as handling assignments and destructuring.

## 📖 Usage

See [documentation](https://eslint-utils.mysticatea.dev/).

## 📰 Changelog

See [releases](https://github.com/mysticatea/eslint-utils/releases).

## ❤️ Contributing

Contributions are welcome! Please use GitHub's Issues/PRs.

### Development Tools

- `npm test` runs tests and measures coverage.
- `npm run clean` removes the coverage result of `npm test` command.
- `npm run coverage` shows the coverage result of the last `npm test` command.
- `npm run lint` runs ESLint.
- `npm run watch` runs tests on each file change.
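To make the Goal section above a little more concrete, here is a minimal sketch of a custom ESLint rule built on `getStaticValue`. Only `getStaticValue(node, initialScope)` comes from the package's documented API; the rule name, message, and overall shape are illustrative assumptions.

```js
// Illustrative custom rule (hypothetical name: no-static-eval) built on
// eslint-utils. getStaticValue returns an object like { value } when the
// expression can be evaluated statically, or null when it cannot.
const { getStaticValue } = require("eslint-utils");

module.exports = {
  meta: {
    type: "suggestion",
    messages: { avoidEval: "Avoid calling eval() with a statically known string." },
    schema: [],
  },
  create(context) {
    return {
      CallExpression(node) {
        if (node.callee.type !== "Identifier" || node.callee.name !== "eval") {
          return;
        }
        const arg = node.arguments[0];
        // Pass the current scope so identifiers can be resolved statically.
        const evaluated = arg ? getStaticValue(arg, context.getScope()) : null;
        if (evaluated !== null) {
          context.report({ node, messageId: "avoidEval" });
        }
      },
    };
  },
};
```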
CodingHope Smart Contract ================== A [smart contract] written in [AssemblyScript] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install [Node.js] ≥ 12 Exploring The Code ================== 1. The main smart contract code lives in `assembly/index.ts`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard AssemblyScript tests using [as-pect]. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [AssemblyScript]: https://www.assemblyscript.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [as-pect]: https://www.npmjs.com/package/@as-pect/cli # axios // helpers The modules found in `helpers/` should be generic modules that are _not_ specific to the domain logic of axios. These modules could theoretically be published to npm on their own and consumed by other modules or apps. Some examples of generic modules are things like: - Browser polyfills - Managing cookies - Parsing HTTP headers <p align="center"> <a href="https://assemblyscript.org" target="_blank" rel="noopener"><img width="100" src="https://avatars1.githubusercontent.com/u/28916798?s=200&v=4" alt="AssemblyScript logo"></a> </p> <p align="center"> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3ATest"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Test/master?label=test&logo=github" alt="Test status" /></a> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3APublish"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Publish/master?label=publish&logo=github" alt="Publish status" /></a> <a href="https://www.npmjs.com/package/assemblyscript"><img src="https://img.shields.io/npm/v/assemblyscript.svg?label=compiler&color=007acc&logo=npm" alt="npm compiler version" /></a> <a href="https://www.npmjs.com/package/@assemblyscript/loader"><img src="https://img.shields.io/npm/v/@assemblyscript/loader.svg?label=loader&color=007acc&logo=npm" alt="npm loader version" /></a> <a href="https://discord.gg/assemblyscript"><img src="https://img.shields.io/discord/721472913886281818.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2" alt="Discord online" /></a> </p> <p align="justify"><strong>AssemblyScript</strong> compiles a strict variant of <a href="http://www.typescriptlang.org">TypeScript</a> (basically JavaScript with types) to <a href="http://webassembly.org">WebAssembly</a> using <a href="https://github.com/WebAssembly/binaryen">Binaryen</a>. 
It generates lean and mean WebAssembly modules while being just an <code>npm install</code> away.</p> <h3 align="center"> <a href="https://assemblyscript.org">About</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/introduction.html">Introduction</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/quick-start.html">Quick&nbsp;start</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/examples.html">Examples</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/development.html">Development&nbsp;instructions</a> </h3> <br> <h2 align="center">Contributors</h2> <p align="center"> <a href="https://assemblyscript.org/#contributors"><img src="https://assemblyscript.org/contributors.svg" alt="Contributor logos" width="720" /></a> </p> <h2 align="center">Thanks to our sponsors!</h2> <p align="justify">Most of the core team members and most contributors do this open source work in their free time. If you use AssemblyScript for a serious task or plan to do so, and you'd like us to invest more time on it, <a href="https://opencollective.com/assemblyscript/donate" target="_blank" rel="noopener">please donate</a> to our <a href="https://opencollective.com/assemblyscript" target="_blank" rel="noopener">OpenCollective</a>. By sponsoring this project, your logo will show up below. Thank you so much for your support!</p> <p align="center"> <a href="https://assemblyscript.org/#sponsors"><img src="https://assemblyscript.org/sponsors.svg" alt="Sponsor logos" width="720" /></a> </p> # is-glob [![NPM version](https://img.shields.io/npm/v/is-glob.svg?style=flat)](https://www.npmjs.com/package/is-glob) [![NPM monthly downloads](https://img.shields.io/npm/dm/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![NPM total downloads](https://img.shields.io/npm/dt/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![Build Status](https://img.shields.io/github/workflow/status/micromatch/is-glob/dev)](https://github.com/micromatch/is-glob/actions) > Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a better user experience. Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-glob ``` You might also be interested in [is-valid-glob](https://github.com/jonschlinkert/is-valid-glob) and [has-glob](https://github.com/jonschlinkert/has-glob). 
## Usage ```js var isGlob = require('is-glob'); ``` ### Default behavior **True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js'); isGlob('*.js'); isGlob('**/abc.js'); isGlob('abc/*.js'); isGlob('abc/(aaa|bbb).js'); isGlob('abc/[a-z].js'); isGlob('abc/{a,b}.js'); //=> true ``` Extglobs ```js isGlob('abc/@(a).js'); isGlob('abc/!(a).js'); isGlob('abc/+(a).js'); isGlob('abc/*(a).js'); isGlob('abc/?(a).js'); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('abc/\\@(a).js'); isGlob('abc/\\!(a).js'); isGlob('abc/\\+(a).js'); isGlob('abc/\\*(a).js'); isGlob('abc/\\?(a).js'); isGlob('\\!foo.js'); isGlob('\\*.js'); isGlob('\\*\\*/abc.js'); isGlob('abc/\\*.js'); isGlob('abc/\\(aaa|bbb).js'); isGlob('abc/\\[a-z].js'); isGlob('abc/\\{a,b}.js'); //=> false ``` Patterns that do not have glob patterns return `false`: ```js isGlob('abc.js'); isGlob('abc/def/ghi.js'); isGlob('foo.js'); isGlob('abc/@.js'); isGlob('abc/+.js'); isGlob('abc/?.js'); isGlob(); isGlob(null); //=> false ``` Arrays are also `false` (If you want to check if an array has a glob pattern, use [has-glob](https://github.com/jonschlinkert/has-glob)): ```js isGlob(['**/*.js']); isGlob(['foo.js']); //=> false ``` ### Option strict When `options.strict === false` the behavior is less strict in determining if a pattern is a glob. Meaning that some patterns that would return `false` may return `true`. This is done so that matching libraries like [micromatch](https://github.com/micromatch/micromatch) have a chance at determining if the pattern is a glob or not. **True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js', {strict: false}); isGlob('*.js', {strict: false}); isGlob('**/abc.js', {strict: false}); isGlob('abc/*.js', {strict: false}); isGlob('abc/(aaa|bbb).js', {strict: false}); isGlob('abc/[a-z].js', {strict: false}); isGlob('abc/{a,b}.js', {strict: false}); //=> true ``` Extglobs ```js isGlob('abc/@(a).js', {strict: false}); isGlob('abc/!(a).js', {strict: false}); isGlob('abc/+(a).js', {strict: false}); isGlob('abc/*(a).js', {strict: false}); isGlob('abc/?(a).js', {strict: false}); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('\\!foo.js', {strict: false}); isGlob('\\*.js', {strict: false}); isGlob('\\*\\*/abc.js', {strict: false}); isGlob('abc/\\*.js', {strict: false}); isGlob('abc/\\(aaa|bbb).js', {strict: false}); isGlob('abc/\\[a-z].js', {strict: false}); isGlob('abc/\\{a,b}.js', {strict: false}); //=> false ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. 
Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [assemble](https://www.npmjs.com/package/assemble): Get the rocks out of your socks! Assemble makes you fast at creating web projects… [more](https://github.com/assemble/assemble) | [homepage](https://github.com/assemble/assemble "Get the rocks out of your socks! Assemble makes you fast at creating web projects. Assemble is used by thousands of projects for rapid prototyping, creating themes, scaffolds, boilerplates, e-books, UI components, API documentation, blogs, building websit") * [base](https://www.npmjs.com/package/base): Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks | [homepage](https://github.com/node-base/base "Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks") * [update](https://www.npmjs.com/package/update): Be scalable! Update is a new, open source developer framework and CLI for automating updates… [more](https://github.com/update/update) | [homepage](https://github.com/update/update "Be scalable! Update is a new, open source developer framework and CLI for automating updates of any kind in code projects.") * [verb](https://www.npmjs.com/package/verb): Documentation generator for GitHub projects. Verb is extremely powerful, easy to use, and is used… [more](https://github.com/verbose/verb) | [homepage](https://github.com/verbose/verb "Documentation generator for GitHub projects. Verb is extremely powerful, easy to use, and is used on hundreds of projects of all sizes to generate everything from API docs to readmes.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 47 | [jonschlinkert](https://github.com/jonschlinkert) | | 5 | [doowb](https://github.com/doowb) | | 1 | [phated](https://github.com/phated) | | 1 | [danhper](https://github.com/danhper) | | 1 | [paulmillr](https://github.com/paulmillr) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on March 27, 2019._ ### Estraverse [![Build Status](https://secure.travis-ci.org/estools/estraverse.svg)](http://travis-ci.org/estools/estraverse) Estraverse ([estraverse](http://github.com/estools/estraverse)) is [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) traversal functions from [esmangle project](http://github.com/estools/esmangle). ### Documentation You can find usage docs at [wiki page](https://github.com/estools/estraverse/wiki/Usage). ### Example Usage The following code will output all variables declared at the root of a file. 
```javascript estraverse.traverse(ast, { enter: function (node, parent) { if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration') return estraverse.VisitorOption.Skip; }, leave: function (node, parent) { if (node.type == 'VariableDeclarator') console.log(node.id.name); } }); ``` We can use `this.skip`, `this.remove` and `this.break` functions instead of using Skip, Remove and Break. ```javascript estraverse.traverse(ast, { enter: function (node) { this.break(); } }); ``` And estraverse provides `estraverse.replace` function. When returning node from `enter`/`leave`, current node is replaced with it. ```javascript result = estraverse.replace(tree, { enter: function (node) { // Replace it with replaced. if (node.type === 'Literal') return replaced; } }); ``` By passing `visitor.keys` mapping, we can extend estraverse traversing functionality. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Extending the existing traversing rules. keys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } }); ``` By passing `visitor.fallback` option, we can control the behavior when encountering unknown nodes. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Iterating the child **nodes** of unknown nodes. fallback: 'iteration' }); ``` When `visitor.fallback` is a function, we can determine which keys to visit on each node. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Skip the `argument` property of each node fallback: function(node) { return Object.keys(node).filter(function(key) { return key !== 'argument'; }); } }); ``` ### License Copyright (C) 2012-2016 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Overview [![Build Status](https://travis-ci.org/lydell/js-tokens.svg?branch=master)](https://travis-ci.org/lydell/js-tokens) ======== A regex that tokenizes JavaScript. ```js var jsTokens = require("js-tokens").default var jsString = "var foo=opts.foo;\n..." jsString.match(jsTokens) // ["var", " ", "foo", "=", "opts", ".", "foo", ";", "\n", ...] ``` Installation ============ `npm install js-tokens` ```js import jsTokens from "js-tokens" // or: var jsTokens = require("js-tokens").default ``` Usage ===== ### `jsTokens` ### A regex with the `g` flag that matches JavaScript tokens. The regex _always_ matches, even invalid JavaScript and the empty string. The next match is always directly after the previous. ### `var token = matchToToken(match)` ### ```js import {matchToToken} from "js-tokens" // or: var matchToToken = require("js-tokens").matchToToken ``` Takes a `match` returned by `jsTokens.exec(string)`, and returns a `{type: String, value: String}` object. The following types are available: - string - comment - regex - number - name - punctuator - whitespace - invalid Multi-line comments and strings also have a `closed` property indicating if the token was closed or not (see below). Comments and strings both come in several flavors. To distinguish them, check if the token starts with `//`, `/*`, `'`, `"` or `` ` ``. Names are ECMAScript IdentifierNames, that is, including both identifiers and keywords. You may use [is-keyword-js] to tell them apart. Whitespace includes both line terminators and other whitespace. [is-keyword-js]: https://github.com/crissdev/is-keyword-js ECMAScript support ================== The intention is to always support the latest ECMAScript version whose feature set has been finalized. If adding support for a newer version requires changes, a new version with a major verion bump will be released. Currently, ECMAScript 2018 is supported. Invalid code handling ===================== Unterminated strings are still matched as strings. JavaScript strings cannot contain (unescaped) newlines, so unterminated strings simply end at the end of the line. Unterminated template strings can contain unescaped newlines, though, so they go on to the end of input. Unterminated multi-line comments are also still matched as comments. They simply go on to the end of the input. Unterminated regex literals are likely matched as division and whatever is inside the regex. Invalid ASCII characters have their own capturing group. Invalid non-ASCII characters are treated as names, to simplify the matching of names (except unicode spaces which are treated as whitespace). Note: See also the [ES2018](#es2018) section. Regex literals may contain invalid regex syntax. They are still matched as regex literals. They may also contain repeated regex flags, to keep the regex simple. Strings may contain invalid escape sequences. Limitations =========== Tokenizing JavaScript using regexes—in fact, _one single regex_—won’t be perfect. But that’s not the point either. 
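Before the caveats below, it may help to see how the two exports described above fit together. This is a minimal usage sketch assuming Node.js with `js-tokens` installed; the empty-match guard is a defensive assumption, not part of the documented API.

```js
var jsTokens = require("js-tokens").default;
var matchToToken = require("js-tokens").matchToToken;

var source = "var answer = 6 * 7; // the answer";

// Token values only, using the String#match pattern shown earlier.
console.log(source.match(jsTokens));

// Token objects via exec + matchToToken. The guard stops the loop in case the
// regex yields an empty match at the end of input (it is documented to match
// even the empty string).
var match;
jsTokens.lastIndex = 0;
while ((match = jsTokens.exec(source)) && match[0] !== "") {
  console.log(matchToToken(match)); // e.g. { type: "name", value: "answer" }
}
```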
You may compare jsTokens with [esprima] by using `esprima-compare.js`. See `npm run esprima-compare`!

[esprima]: http://esprima.org/

### Template string interpolation ###

Template strings are matched as single tokens, from the starting `` ` `` to the ending `` ` ``, including interpolations (whose tokens are not matched individually). Matching template string interpolations requires recursive balancing of `{` and `}`—something that JavaScript regexes cannot do. Only one level of nesting is supported.

### Division and regex literals collision ###

Consider this example:

```js
var g = 9.82
var number = bar / 2/g

var regex = / 2/g
```

A human can easily understand that in the `number` line we’re dealing with division, and in the `regex` line we’re dealing with a regex literal. How come? Because humans can look at the whole code to put the `/` characters in context. A JavaScript regex cannot. It only sees forwards. (Well, ES2018 regexes can also look backwards. See the [ES2018](#es2018) section).

When the `jsTokens` regex scans through the above, it will see the following at the end of both the `number` and `regex` rows:

```js
/ 2/g
```

It is then impossible to know if that is a regex literal, or part of an expression dealing with division.

Here is a similar case:

```js
foo /= 2/g
foo(/= 2/g)
```

The first line divides the `foo` variable with `2/g`. The second line calls the `foo` function with the regex literal `/= 2/g`. Again, since `jsTokens` only sees forwards, it cannot tell the two cases apart.

There are some cases where we _can_ tell division and regex literals apart, though.

First off, we have the simple cases where there’s only one slash in the line:

```js
var foo = 2/g
foo /= 2
```

Regex literals cannot contain newlines, so the above cases are correctly identified as division. Things are only problematic when there is more than one non-comment slash in a single line.

Secondly, not every character is a valid regex flag.

```js
var number = bar / 2/e
```

The above example is also correctly identified as division, because `e` is not a valid regex flag. I initially wanted to future-proof by allowing `[a-zA-Z]*` (any letter) as flags, but it is not worth it since it increases the amount of ambiguous cases. So only the standard `g`, `m`, `i`, `y` and `u` flags are allowed. This means that the above example will be identified as division as long as you don’t rename the `e` variable to some permutation of `gmiyus` 1 to 6 characters long.

Lastly, we can look _forward_ for information.

- If the token following what looks like a regex literal is not valid after a regex literal, but is valid in a division expression, then the regex literal is treated as division instead. For example, a flagless regex cannot be followed by a string, number or name, but all of those three can be the denominator of a division.
- Generally, if what looks like a regex literal is followed by an operator, the regex literal is treated as division instead. This is because regexes are seldom used with operators (such as `+`, `*`, `&&` and `==`), but division could likely be part of such an expression.

Please consult the regex source and the test cases for precise information on when regex or division is matched (should you need to know). In short, you could sum it up as:

If the end of a statement looks like a regex literal (even if it isn’t), it will be treated as one. Otherwise it should work as expected (if you write sane code).

### ES2018 ###

ES2018 added some nice regex improvements to the language.
- [Unicode property escapes] should allow telling names and invalid non-ASCII characters apart without blowing up the regex size. - [Lookbehind assertions] should allow matching telling division and regex literals apart in more cases. - [Named capture groups] might simplify some things. These things would be nice to do, but are not critical. They probably have to wait until the oldest maintained Node.js LTS release supports those features. [Unicode property escapes]: http://2ality.com/2017/07/regexp-unicode-property-escapes.html [Lookbehind assertions]: http://2ality.com/2017/05/regexp-lookbehind-assertions.html [Named capture groups]: http://2ality.com/2017/05/regexp-named-capture-groups.html License ======= [MIT](LICENSE). <a name="table"></a> # Table > Produces a string that represents array data in a text table. [![Travis build status](http://img.shields.io/travis/gajus/table/master.svg?style=flat-square)](https://travis-ci.org/gajus/table) [![Coveralls](https://img.shields.io/coveralls/gajus/table.svg?style=flat-square)](https://coveralls.io/github/gajus/table) [![NPM version](http://img.shields.io/npm/v/table.svg?style=flat-square)](https://www.npmjs.org/package/table) [![Canonical Code Style](https://img.shields.io/badge/code%20style-canonical-blue.svg?style=flat-square)](https://github.com/gajus/canonical) [![Twitter Follow](https://img.shields.io/twitter/follow/kuizinas.svg?style=social&label=Follow)](https://twitter.com/kuizinas) * [Table](#table) * [Features](#table-features) * [Install](#table-install) * [Usage](#table-usage) * [API](#table-api) * [table](#table-api-table-1) * [createStream](#table-api-createstream) * [getBorderCharacters](#table-api-getbordercharacters) ![Demo of table displaying a list of missions to the Moon.](./.README/demo.png) <a name="table-features"></a> ## Features * Works with strings containing [fullwidth](https://en.wikipedia.org/wiki/Halfwidth_and_fullwidth_forms) characters. * Works with strings containing [ANSI escape codes](https://en.wikipedia.org/wiki/ANSI_escape_code). * Configurable border characters. * Configurable content alignment per column. * Configurable content padding per column. * Configurable column width. * Text wrapping. <a name="table-install"></a> ## Install ```bash npm install table ``` [![Buy Me A Coffee](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/gajus) [![Become a Patron](https://c5.patreon.com/external/logo/become_a_patron_button.png)](https://www.patreon.com/gajus) <a name="table-usage"></a> ## Usage ```js import { table } from 'table'; // Using commonjs? // const { table } = require('table'); const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; console.log(table(data)); ``` ``` ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════╧════╝ ``` <a name="table-api"></a> ## API <a name="table-api-table-1"></a> ### table Returns the string in the table format **Parameters:** - **_data_:** The data to display - Type: `any[][]` - Required: `true` - **_config_:** Table configuration - Type: `object` - Required: `false` <a name="table-api-table-1-config-border"></a> ##### config.border Type: `{ [type: string]: string }`\ Default: `honeywell` [template](#getbordercharacters) Custom borders. 
The keys are any of:

- `topLeft`, `topRight`, `topBody`,`topJoin`
- `bottomLeft`, `bottomRight`, `bottomBody`, `bottomJoin`
- `joinLeft`, `joinRight`, `joinBody`, `joinJoin`
- `bodyLeft`, `bodyRight`, `bodyJoin`
- `headerJoin`

```js
const data = [
  ['0A', '0B', '0C'],
  ['1A', '1B', '1C'],
  ['2A', '2B', '2C']
];

const config = {
  border: {
    topBody: `─`,
    topJoin: `┬`,
    topLeft: `┌`,
    topRight: `┐`,
    bottomBody: `─`,
    bottomJoin: `┴`,
    bottomLeft: `└`,
    bottomRight: `┘`,
    bodyLeft: `│`,
    bodyRight: `│`,
    bodyJoin: `│`,
    joinBody: `─`,
    joinLeft: `├`,
    joinRight: `┤`,
    joinJoin: `┼`
  }
};

console.log(table(data, config));
```

```
┌────┬────┬────┐
│ 0A │ 0B │ 0C │
├────┼────┼────┤
│ 1A │ 1B │ 1C │
├────┼────┼────┤
│ 2A │ 2B │ 2C │
└────┴────┴────┘
```

<a name="table-api-table-1-config-drawverticalline"></a>
##### config.drawVerticalLine

Type: `(lineIndex: number, columnCount: number) => boolean`\
Default: `() => true`

It is used to tell whether to draw a vertical line. This callback is called for each vertical border of the table. If the table has `n` columns, then the `lineIndex` parameter receives all numbers in the range `[0, n]` inclusively.

```js
const data = [
  ['0A', '0B', '0C'],
  ['1A', '1B', '1C'],
  ['2A', '2B', '2C'],
  ['3A', '3B', '3C'],
  ['4A', '4B', '4C']
];

const config = {
  drawVerticalLine: (lineIndex, columnCount) => {
    return lineIndex === 0 || lineIndex === columnCount;
  }
};

console.log(table(data, config));
```

```
╔════════════╗
║ 0A 0B 0C ║
╟────────────╢
║ 1A 1B 1C ║
╟────────────╢
║ 2A 2B 2C ║
╟────────────╢
║ 3A 3B 3C ║
╟────────────╢
║ 4A 4B 4C ║
╚════════════╝
```

<a name="table-api-table-1-config-drawhorizontalline"></a>
##### config.drawHorizontalLine

Type: `(lineIndex: number, rowCount: number) => boolean`\
Default: `() => true`

It is used to tell whether to draw a horizontal line. This callback is called for each horizontal border of the table. If the table has `n` rows, then the `lineIndex` parameter receives all numbers in the range `[0, n]` inclusively. If the table has `n` rows and contains the header, then the range will be `[0, n+1]` inclusively.

```js
const data = [
  ['0A', '0B', '0C'],
  ['1A', '1B', '1C'],
  ['2A', '2B', '2C'],
  ['3A', '3B', '3C'],
  ['4A', '4B', '4C']
];

const config = {
  drawHorizontalLine: (lineIndex, rowCount) => {
    return lineIndex === 0 || lineIndex === 1 || lineIndex === rowCount - 1 || lineIndex === rowCount;
  }
};

console.log(table(data, config));
```

```
╔════╤════╤════╗
║ 0A │ 0B │ 0C ║
╟────┼────┼────╢
║ 1A │ 1B │ 1C ║
║ 2A │ 2B │ 2C ║
║ 3A │ 3B │ 3C ║
╟────┼────┼────╢
║ 4A │ 4B │ 4C ║
╚════╧════╧════╝
```

<a name="table-api-table-1-config-singleline"></a>
##### config.singleLine

Type: `boolean`\
Default: `false`

If `true`, horizontal lines inside the table are not drawn. This option also overrides the `config.drawHorizontalLine` if specified.
```js const data = [ ['-rw-r--r--', '1', 'pandorym', 'staff', '1529', 'May 23 11:25', 'LICENSE'], ['-rw-r--r--', '1', 'pandorym', 'staff', '16327', 'May 23 11:58', 'README.md'], ['drwxr-xr-x', '76', 'pandorym', 'staff', '2432', 'May 23 12:02', 'dist'], ['drwxr-xr-x', '634', 'pandorym', 'staff', '20288', 'May 23 11:54', 'node_modules'], ['-rw-r--r--', '1,', 'pandorym', 'staff', '525688', 'May 23 11:52', 'package-lock.json'], ['-rw-r--r--@', '1', 'pandorym', 'staff', '2440', 'May 23 11:25', 'package.json'], ['drwxr-xr-x', '27', 'pandorym', 'staff', '864', 'May 23 11:25', 'src'], ['drwxr-xr-x', '20', 'pandorym', 'staff', '640', 'May 23 11:25', 'test'], ]; const config = { singleLine: true }; console.log(table(data, config)); ``` ``` ╔═════════════╤═════╤══════════╤═══════╤════════╤══════════════╤═══════════════════╗ ║ -rw-r--r-- │ 1 │ pandorym │ staff │ 1529 │ May 23 11:25 │ LICENSE ║ ║ -rw-r--r-- │ 1 │ pandorym │ staff │ 16327 │ May 23 11:58 │ README.md ║ ║ drwxr-xr-x │ 76 │ pandorym │ staff │ 2432 │ May 23 12:02 │ dist ║ ║ drwxr-xr-x │ 634 │ pandorym │ staff │ 20288 │ May 23 11:54 │ node_modules ║ ║ -rw-r--r-- │ 1, │ pandorym │ staff │ 525688 │ May 23 11:52 │ package-lock.json ║ ║ -rw-r--r--@ │ 1 │ pandorym │ staff │ 2440 │ May 23 11:25 │ package.json ║ ║ drwxr-xr-x │ 27 │ pandorym │ staff │ 864 │ May 23 11:25 │ src ║ ║ drwxr-xr-x │ 20 │ pandorym │ staff │ 640 │ May 23 11:25 │ test ║ ╚═════════════╧═════╧══════════╧═══════╧════════╧══════════════╧═══════════════════╝ ``` <a name="table-api-table-1-config-columns"></a> ##### config.columns Type: `Column[] | { [columnIndex: number]: Column }` Column specific configurations. <a name="table-api-table-1-config-columns-config-columns-width"></a> ###### config.columns[*].width Type: `number`\ Default: the maximum cell widths of the column Column width (excluding the paddings). 
```js
const data = [
  ['0A', '0B', '0C'],
  ['1A', '1B', '1C'],
  ['2A', '2B', '2C']
];

const config = {
  columns: {
    1: { width: 10 }
  }
};

console.log(table(data, config));
```

```
╔════╤════════════╤════╗
║ 0A │ 0B         │ 0C ║
╟────┼────────────┼────╢
║ 1A │ 1B         │ 1C ║
╟────┼────────────┼────╢
║ 2A │ 2B         │ 2C ║
╚════╧════════════╧════╝
```

<a name="table-api-table-1-config-columns-config-columns-alignment"></a>
###### config.columns[*].alignment

Type: `'center' | 'justify' | 'left' | 'right'`\
Default: `'left'`

Cell content horizontal alignment.

```js
const data = [
  ['0A', '0B', '0C', '0D 0E 0F'],
  ['1A', '1B', '1C', '1D 1E 1F'],
  ['2A', '2B', '2C', '2D 2E 2F'],
];

const config = {
  columnDefault: {
    width: 10,
  },
  columns: [
    { alignment: 'left' },
    { alignment: 'center' },
    { alignment: 'right' },
    { alignment: 'justify' }
  ],
};

console.log(table(data, config));
```

```
╔════════════╤════════════╤════════════╤════════════╗
║ 0A         │     0B     │         0C │ 0D  0E  0F ║
╟────────────┼────────────┼────────────┼────────────╢
║ 1A         │     1B     │         1C │ 1D  1E  1F ║
╟────────────┼────────────┼────────────┼────────────╢
║ 2A         │     2B     │         2C │ 2D  2E  2F ║
╚════════════╧════════════╧════════════╧════════════╝
```

<a name="table-api-table-1-config-columns-config-columns-verticalalignment"></a>
###### config.columns[*].verticalAlignment

Type: `'top' | 'middle' | 'bottom'`\
Default: `'top'`

Cell content vertical alignment.

```js
const data = [
  ['A', 'B', 'C', 'DEF'],
];

const config = {
  columnDefault: {
    width: 1,
  },
  columns: [
    { verticalAlignment: 'top' },
    { verticalAlignment: 'middle' },
    { verticalAlignment: 'bottom' },
  ],
};

console.log(table(data, config));
```

```
╔═══╤═══╤═══╤═══╗
║ A │   │   │ D ║
║   │ B │   │ E ║
║   │   │ C │ F ║
╚═══╧═══╧═══╧═══╝
```

<a name="table-api-table-1-config-columns-config-columns-paddingleft"></a>
###### config.columns[*].paddingLeft

Type: `number`\
Default: `1`

The number of whitespaces used to pad the content on the left.

<a name="table-api-table-1-config-columns-config-columns-paddingright"></a>
###### config.columns[*].paddingRight

Type: `number`\
Default: `1`

The number of whitespaces used to pad the content on the right.

The `paddingLeft` and `paddingRight` values are not counted toward the column width, so a column with `width = 5`, `paddingLeft = 2` and `paddingRight = 2` will have a total width of `9`.

```js
const data = [
  ['0A', 'AABBCC', '0C'],
  ['1A', '1B', '1C'],
  ['2A', '2B', '2C']
];

const config = {
  columns: [
    { paddingLeft: 3 },
    { width: 2, paddingRight: 3 }
  ]
};

console.log(table(data, config));
```

```
╔══════╤══════╤════╗
║   0A │ AA   │ 0C ║
║      │ BB   │    ║
║      │ CC   │    ║
╟──────┼──────┼────╢
║   1A │ 1B   │ 1C ║
╟──────┼──────┼────╢
║   2A │ 2B   │ 2C ║
╚══════╧══════╧════╝
```

<a name="table-api-table-1-config-columns-config-columns-truncate"></a>
###### config.columns[*].truncate

Type: `number`\
Default: `Infinity`

The number of characters at which the content will be truncated. To handle content that overflows the container width, the `table` package implements [text wrapping](#config.columns[*].wrapWord). However, sometimes you may want to truncate content that is too long to be displayed in the table.

```js
const data = [
  ['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.']
];

const config = {
  columns: [
    {
      width: 20,
      truncate: 100
    }
  ]
};

console.log(table(data, config));
```

```
╔══════════════════════╗
║ Lorem ipsum dolor si ║
║ t amet, consectetur  ║
║ adipiscing elit. Pha ║
║ sellus pulvinar nibh ║
║ sed mauris convall…  ║
╚══════════════════════╝
```

<a name="table-api-table-1-config-columns-config-columns-wrapword"></a>
###### config.columns[*].wrapWord

Type: `boolean`\
Default: `false`

The `table` package implements auto text wrapping, i.e., text whose width is greater than the container width will be separated into multiple lines at the nearest space or at one of the special characters: `\|/_.,;-`.

When `wrapWord` is `false`:

```js
const data = [
  ['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.']
];

const config = {
  columns: [
    { width: 20 }
  ]
};

console.log(table(data, config));
```

```
╔══════════════════════╗
║ Lorem ipsum dolor si ║
║ t amet, consectetur  ║
║ adipiscing elit. Pha ║
║ sellus pulvinar nibh ║
║ sed mauris convallis ║
║ dapibus. Nunc venena ║
║ tis tempus nulla sit ║
║ amet viverra.        ║
╚══════════════════════╝
```

When `wrapWord` is `true`:

```
╔══════════════════════╗
║ Lorem ipsum dolor    ║
║ sit amet,            ║
║ consectetur          ║
║ adipiscing elit.     ║
║ Phasellus pulvinar   ║
║ nibh sed mauris      ║
║ convallis dapibus.   ║
║ Nunc venenatis       ║
║ tempus nulla sit     ║
║ amet viverra.        ║
╚══════════════════════╝
```

<a name="table-api-table-1-config-columndefault"></a>
##### config.columnDefault

Type: `Column`\
Default: `{}`

The default configuration for all columns. Column-specific settings overwrite the default values.

<a name="table-api-table-1-config-header"></a>
##### config.header

Type: `object`

Header configuration.

The header configuration inherits most of the column options, except:

- `content` **{string}**: the header content.
- `width`: calculated automatically based on the content width.
- `alignment`: `center` by default.
- `verticalAlignment`: not supported.
- `config.border.topJoin` will be `config.border.topBody` for a prettier result.

```js
const data = [
  ['0A', '0B', '0C'],
  ['1A', '1B', '1C'],
  ['2A', '2B', '2C'],
];

const config = {
  columnDefault: {
    width: 10,
  },
  header: {
    alignment: 'center',
    content: 'THE HEADER\nThis is the table about something',
  },
}

console.log(table(data, config));
```

```
╔══════════════════════════════════════╗
║              THE HEADER              ║
║  This is the table about something   ║
╟────────────┬────────────┬────────────╢
║ 0A         │ 0B         │ 0C         ║
╟────────────┼────────────┼────────────╢
║ 1A         │ 1B         │ 1C         ║
╟────────────┼────────────┼────────────╢
║ 2A         │ 2B         │ 2C         ║
╚════════════╧════════════╧════════════╝
```

<a name="table-api-createstream"></a>
### createStream

The `table` package exports a `createStream` function used to draw a table and append rows.

**Parameter:**

- _**config:**_ the same as `table`'s, except that `config.columnDefault.width` and `config.columnCount` must be provided.

```js
import { createStream } from 'table';

const config = {
  columnDefault: {
    width: 50
  },
  columnCount: 1
};

const stream = createStream(config);

setInterval(() => {
  stream.write([new Date()]);
}, 500);
```

![Streaming current date.](./.README/api/stream/streaming.gif)

The `table` package uses ANSI escape codes to overwrite the output of the last line when a new row is printed.

The underlying implementation is explained in this [Stack Overflow answer](http://stackoverflow.com/a/32938658/368691).

Streaming supports all of the configuration properties and functionality of a static table (such as auto text wrapping, alignment and padding), e.g.
```js
import { createStream } from 'table';

import _ from 'lodash';

const config = {
  columnDefault: {
    width: 50
  },
  columnCount: 3,
  columns: [
    {
      width: 10,
      alignment: 'right'
    },
    { alignment: 'center' },
    { width: 10 }
  ]
};

const stream = createStream(config);

let i = 0;

setInterval(() => {
  let random;

  random = _.sampleSize('abcdefghijklmnopqrstuvwxyz', _.random(1, 30)).join('');

  stream.write([i++, new Date(), random]);
}, 500);
```

![Streaming random data.](./.README/api/stream/streaming-random.gif)

<a name="table-api-getbordercharacters"></a>
### getBorderCharacters

**Parameter:**

- **_template_**
  - Type: `'honeywell' | 'norc' | 'ramac' | 'void'`
  - Required: `true`

You can load one of the predefined border templates using the `getBorderCharacters` function.

```js
import { table, getBorderCharacters } from 'table';

const data = [
  ['0A', '0B', '0C'],
  ['1A', '1B', '1C'],
  ['2A', '2B', '2C']
];

const config = {
  border: getBorderCharacters(`name of the template`)
};

console.log(table(data, config));
```

```
# honeywell

╔════╤════╤════╗
║ 0A │ 0B │ 0C ║
╟────┼────┼────╢
║ 1A │ 1B │ 1C ║
╟────┼────┼────╢
║ 2A │ 2B │ 2C ║
╚════╧════╧════╝

# norc

┌────┬────┬────┐
│ 0A │ 0B │ 0C │
├────┼────┼────┤
│ 1A │ 1B │ 1C │
├────┼────┼────┤
│ 2A │ 2B │ 2C │
└────┴────┴────┘

# ramac (ASCII; for use in terminals that do not support Unicode characters)

+----+----+----+
| 0A | 0B | 0C |
|----|----|----|
| 1A | 1B | 1C |
|----|----|----|
| 2A | 2B | 2C |
+----+----+----+

# void (no borders; see "borderless table" section of the documentation)

 0A  0B  0C 
 1A  1B  1C 
 2A  2B  2C 
```

Raise [an issue](https://github.com/gajus/table/issues) if you'd like to contribute a new border template.

<a name="table-api-getbordercharacters-borderless-table"></a>
#### Borderless Table

Simply using the `void` border character template creates a table with a lot of unnecessary spacing.

To create a table that is more pleasing to the eye, reset the padding and remove the joining rows, e.g.

```js
const output = table(data, {
  border: getBorderCharacters('void'),
  columnDefault: {
    paddingLeft: 0,
    paddingRight: 1
  },
  drawHorizontalLine: () => false
});

console.log(output);
```

```
0A 0B 0C
1A 1B 1C
2A 2B 2C
```

# set-blocking

[![Build Status](https://travis-ci.org/yargs/set-blocking.svg)](https://travis-ci.org/yargs/set-blocking)
[![NPM version](https://img.shields.io/npm/v/set-blocking.svg)](https://www.npmjs.com/package/set-blocking)
[![Coverage Status](https://coveralls.io/repos/yargs/set-blocking/badge.svg?branch=)](https://coveralls.io/r/yargs/set-blocking?branch=master)
[![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version)

Set blocking on `stdout` and `stderr`, ensuring that terminal output does not truncate.

```js
const setBlocking = require('set-blocking')
setBlocking(true)
console.log(someLargeStringToOutput)
```

## Historical Context/Word of Warning

This was created as a shim to address the bug discussed in [node #6456](https://github.com/nodejs/node/issues/6456). This bug crops up on newer versions of Node.js (`0.12+`), truncating terminal output.

You should be mindful of the side-effects caused by using `set-blocking`:

* if your module sets blocking to `true`, it will affect other modules consuming your library. In [yargs](https://github.com/yargs/yargs/blob/master/yargs.js#L653) we only call `setBlocking(true)` once we already know we are about to call `process.exit(code)`.
* this patch will not apply to subprocesses spawned with `isTTY = true`, which is the [default `spawn()` behavior](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options).

## License

ISC

bs58
====

[![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58)

JavaScript component to compute base 58 encoding. This encoding is typically used for cryptocurrencies such as Bitcoin.

**Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library.

Install
-------

    npm i --save bs58

API
---

### encode(input)

`input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`.

**example**:

```js
const bs58 = require('bs58')

const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex')
const address = bs58.encode(bytes)
console.log(address)
// => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS
```

### decode(input)

`input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html).

**example**:

```js
const bs58 = require('bs58')

const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS'
const bytes = bs58.decode(address)
console.log(bytes.toString('hex'))
// => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187
```

Hack / Test
-----------

Uses JavaScript standard style. Read more:

[![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard)

Credits
-------

- [Mike Hearn](https://github.com/mikehearn) for original Java implementation
- [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript
- [Stephan Pair](https://github.com/gasteve) for buffer improvements
- [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib
- [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency

License
-------

MIT

# universal-url

[![NPM Version][npm-image]][npm-url]
[![Build Status][travis-image]][travis-url]
[![Dependency Monitor][greenkeeper-image]][greenkeeper-url]

> WHATWG [`URL`](https://developer.mozilla.org/en/docs/Web/API/URL) for Node & Browser.

* For Node.js versions `>= 8`, the native implementation will be used.
* For Node.js versions `< 8`, a [shim](https://npmjs.com/whatwg-url) will be used.
* For web browsers without a native implementation, the same shim will be used.

## Installation

[Node.js](http://nodejs.org/) `>= 6` is required. To install, type this at the command line:

```shell
npm install universal-url
```

## Usage

```js
const {URL, URLSearchParams} = require('universal-url');

const url = new URL('http://domain/');
const params = new URLSearchParams('?param=value');
```

Global shim:

```js
require('universal-url').shim();

const url = new URL('http://domain/');
const params = new URLSearchParams('?param=value');
```

## Browserify/etc

The bundled file size of this library can be large for a web browser. If this is a problem, try using [universal-url-lite](https://npmjs.com/universal-url-lite) in your build as an alias for this module.
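Whichever implementation ends up being loaded, the exported classes follow the WHATWG URL standard, so the usual `URL`/`URLSearchParams` operations behave the same in Node and in the browser. A minimal sketch (the URL and query parameters below are illustrative only):

```js
const {URL, URLSearchParams} = require('universal-url');

// Parse a URL and manipulate its query string.
const url = new URL('https://example.com/path?foo=1');
url.searchParams.append('bar', '2');

console.log(url.hostname); // => 'example.com'
console.log(url.href);     // => 'https://example.com/path?foo=1&bar=2'

// URLSearchParams can also be built up independently.
const params = new URLSearchParams({foo: '1', bar: '2'});
console.log(params.toString()); // => 'foo=1&bar=2'
```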
[npm-image]: https://img.shields.io/npm/v/universal-url.svg
[npm-url]: https://npmjs.org/package/universal-url
[travis-image]: https://img.shields.io/travis/stevenvachon/universal-url.svg
[travis-url]: https://travis-ci.org/stevenvachon/universal-url
[greenkeeper-image]: https://badges.greenkeeper.io/stevenvachon/universal-url.svg
[greenkeeper-url]: https://greenkeeper.io/

# isexe

Minimal module to check if a file is executable, and a normal file.

Uses `fs.stat` and tests against the `PATHEXT` environment variable on Windows.

## USAGE

```javascript
var isexe = require('isexe')

isexe('some-file-name', function (err, isExe) {
  if (err) {
    console.error('probably file does not exist or something', err)
  } else if (isExe) {
    console.error('this thing can be run')
  } else {
    console.error('cannot be run')
  }
})

// same thing but synchronous, throws errors
var isExe = isexe.sync('some-file-name')

// treat errors as just "not executable"
isexe('maybe-missing-file', { ignoreErrors: true }, callback)
var isExe = isexe.sync('maybe-missing-file', { ignoreErrors: true })
```

## API

### `isexe(path, [options], [callback])`

Check if the path is executable. If no callback is provided, and a global `Promise` object is available, then a Promise will be returned.

Will raise whatever errors may be raised by `fs.stat`, unless `options.ignoreErrors` is set to true.

### `isexe.sync(path, [options])`

Same as `isexe` but returns the value and throws any errors raised.

### Options

* `ignoreErrors` Treat all errors as "no, this is not executable", but don't raise them.
* `uid` Number to use as the user id
* `gid` Number to use as the group id
* `pathExt` List of path extensions to use instead of the `PATHEXT` environment variable on Windows.

binaryen.js
===========

**binaryen.js** is a port of [Binaryen](https://github.com/WebAssembly/binaryen) to the Web, allowing you to generate [WebAssembly](https://webassembly.org) using a JavaScript API.
<a href="https://github.com/AssemblyScript/binaryen.js/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/binaryen.js/Build/master?label=build&logo=github" alt="Build status" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen.svg?label=latest&color=007acc&logo=npm" alt="npm version" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen/nightly.svg?label=nightly&color=007acc&logo=npm" alt="npm nightly version" /></a> Usage ----- ``` $> npm install binaryen ``` ```js var binaryen = require("binaryen"); // Create a module with a single function var myModule = new binaryen.Module(); myModule.addFunction("add", binaryen.createType([ binaryen.i32, binaryen.i32 ]), binaryen.i32, [ binaryen.i32 ], myModule.block(null, [ myModule.local.set(2, myModule.i32.add( myModule.local.get(0, binaryen.i32), myModule.local.get(1, binaryen.i32) ) ), myModule.return( myModule.local.get(2, binaryen.i32) ) ]) ); myModule.addFunctionExport("add", "add"); // Optimize the module using default passes and levels myModule.optimize(); // Validate the module if (!myModule.validate()) throw new Error("validation error"); // Generate text format and binary var textData = myModule.emitText(); var wasmData = myModule.emitBinary(); // Example usage with the WebAssembly API var compiled = new WebAssembly.Module(wasmData); var instance = new WebAssembly.Instance(compiled, {}); console.log(instance.exports.add(41, 1)); ``` The buildbot also publishes nightly versions once a day if there have been changes. The latest nightly can be installed through ``` $> npm install binaryen@nightly ``` or you can use one of the [previous versions](https://github.com/AssemblyScript/binaryen.js/tags) instead if necessary. ### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/AssemblyScript/binaryen.js@VERSION/index.js` * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/binaryen@VERSION/index.js` * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/binaryen@VERSION/index.js` Replace `VERSION` with a [specific version](https://github.com/AssemblyScript/binaryen.js/releases) or omit it (not recommended in production) to use master/latest. API --- **Please note** that the Binaryen API is evolving fast and that definitions and documentation provided by the package tend to get out of sync despite our best efforts. It's a bot after all. If you rely on binaryen.js and spot an issue, please consider sending a PR our way by updating [index.d.ts](./index.d.ts) and [README.md](./README.md) to reflect the [current API](https://github.com/WebAssembly/binaryen/blob/master/src/js/binaryen.js-post.js). 
<!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ### Contents - [Types](#types) - [Module construction](#module-construction) - [Module manipulation](#module-manipulation) - [Module validation](#module-validation) - [Module optimization](#module-optimization) - [Module creation](#module-creation) - [Expression construction](#expression-construction) - [Control flow](#control-flow) - [Variable accesses](#variable-accesses) - [Integer operations](#integer-operations) - [Floating point operations](#floating-point-operations) - [Datatype conversions](#datatype-conversions) - [Function calls](#function-calls) - [Linear memory accesses](#linear-memory-accesses) - [Host operations](#host-operations) - [Vector operations 🦄](#vector-operations-) - [Atomic memory accesses 🦄](#atomic-memory-accesses-) - [Atomic read-modify-write operations 🦄](#atomic-read-modify-write-operations-) - [Atomic wait and notify operations 🦄](#atomic-wait-and-notify-operations-) - [Sign extension operations 🦄](#sign-extension-operations-) - [Multi-value operations 🦄](#multi-value-operations-) - [Exception handling operations 🦄](#exception-handling-operations-) - [Reference types operations 🦄](#reference-types-operations-) - [Expression manipulation](#expression-manipulation) - [Relooper](#relooper) - [Source maps](#source-maps) - [Debugging](#debugging) <!-- END doctoc generated TOC please keep comment here to allow auto update --> [Future features](http://webassembly.org/docs/future-features/) 🦄 might not be supported by all runtimes. ### Types * **none**: `Type`<br /> The none type, e.g., `void`. * **i32**: `Type`<br /> 32-bit integer type. * **i64**: `Type`<br /> 64-bit integer type. * **f32**: `Type`<br /> 32-bit float type. * **f64**: `Type`<br /> 64-bit float (double) type. * **v128**: `Type`<br /> 128-bit vector type. 🦄 * **funcref**: `Type`<br /> A function reference. 🦄 * **anyref**: `Type`<br /> Any host reference. 🦄 * **nullref**: `Type`<br /> A null reference. 🦄 * **exnref**: `Type`<br /> An exception reference. 🦄 * **unreachable**: `Type`<br /> Special type indicating unreachable code when obtaining information about an expression. * **auto**: `Type`<br /> Special type used in **Module#block** exclusively. Lets the API figure out a block's result type automatically. * **createType**(types: `Type[]`): `Type`<br /> Creates a multi-value type from an array of types. * **expandType**(type: `Type`): `Type[]`<br /> Expands a multi-value type to an array of types. ### Module construction * new **Module**()<br /> Constructs a new module. * **parseText**(text: `string`): `Module`<br /> Creates a module from Binaryen's s-expression text format (not official stack-style text format). * **readBinary**(data: `Uint8Array`): `Module`<br /> Creates a module from binary data. ### Module manipulation * Module#**addFunction**(name: `string`, params: `Type`, results: `Type`, vars: `Type[]`, body: `ExpressionRef`): `FunctionRef`<br /> Adds a function. `vars` indicate additional locals, in the given order. * Module#**getFunction**(name: `string`): `FunctionRef`<br /> Gets a function, by name, * Module#**removeFunction**(name: `string`): `void`<br /> Removes a function, by name. * Module#**getNumFunctions**(): `number`<br /> Gets the number of functions within the module. * Module#**getFunctionByIndex**(index: `number`): `FunctionRef`<br /> Gets the function at the specified index. 
* Module#**addFunctionImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, params: `Type`, results: `Type`): `void`<br /> Adds a function import.
* Module#**addTableImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a table import. There's just one table for now, using name `"0"`.
* Module#**addMemoryImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a memory import. There's just one memory for now, using name `"0"`.
* Module#**addGlobalImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, globalType: `Type`): `void`<br /> Adds a global variable import. Imported globals must be immutable.
* Module#**addFunctionExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a function export.
* Module#**addTableExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a table export. There's just one table for now, using name `"0"`.
* Module#**addMemoryExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a memory export. There's just one memory for now, using name `"0"`.
* Module#**addGlobalExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a global variable export. Exported globals must be immutable.
* Module#**getNumExports**(): `number`<br /> Gets the number of exports within the module.
* Module#**getExportByIndex**(index: `number`): `ExportRef`<br /> Gets the export at the specified index.
* Module#**removeExport**(externalName: `string`): `void`<br /> Removes an export, by external name.
* Module#**addGlobal**(name: `string`, type: `Type`, mutable: `number`, value: `ExpressionRef`): `GlobalRef`<br /> Adds a global instance variable.
* Module#**getGlobal**(name: `string`): `GlobalRef`<br /> Gets a global, by name.
* Module#**removeGlobal**(name: `string`): `void`<br /> Removes a global, by name.
* Module#**setFunctionTable**(initial: `number`, maximum: `number`, funcs: `string[]`, offset?: `ExpressionRef`): `void`<br /> Sets the contents of the function table. There's just one table for now, using name `"0"`.
* Module#**getFunctionTable**(): `{ imported: boolean, segments: TableElement[] }`<br /> Gets the contents of the function table.
  * TableElement#**offset**: `ExpressionRef`
  * TableElement#**names**: `string[]`
* Module#**setMemory**(initial: `number`, maximum: `number`, exportName: `string | null`, segments: `MemorySegment[]`, flags?: `number[]`, shared?: `boolean`): `void`<br /> Sets the memory. There's just one memory for now, using name `"0"`. Providing `exportName` also creates a memory export.
  * MemorySegment#**offset**: `ExpressionRef`
  * MemorySegment#**data**: `Uint8Array`
  * MemorySegment#**passive**: `boolean`
* Module#**getNumMemorySegments**(): `number`<br /> Gets the number of memory segments within the module.
* Module#**getMemorySegmentInfoByIndex**(index: `number`): `MemorySegmentInfo`<br /> Gets information about the memory segment at the specified index.
  * MemorySegmentInfo#**offset**: `number`
  * MemorySegmentInfo#**data**: `Uint8Array`
  * MemorySegmentInfo#**passive**: `boolean`
* Module#**setStart**(start: `FunctionRef`): `void`<br /> Sets the start function.
* Module#**getFeatures**(): `Features`<br /> Gets the WebAssembly features enabled for this module. Note that the return value may be a bitmask indicating multiple features.
  Possible feature flags are:

  * Features.**MVP**: `Features`
  * Features.**Atomics**: `Features`
  * Features.**BulkMemory**: `Features`
  * Features.**MutableGlobals**: `Features`
  * Features.**NontrappingFPToInt**: `Features`
  * Features.**SignExt**: `Features`
  * Features.**SIMD128**: `Features`
  * Features.**ExceptionHandling**: `Features`
  * Features.**TailCall**: `Features`
  * Features.**ReferenceTypes**: `Features`
  * Features.**Multivalue**: `Features`
  * Features.**All**: `Features`

* Module#**setFeatures**(features: `Features`): `void`<br /> Sets the WebAssembly features enabled for this module.
* Module#**addCustomSection**(name: `string`, contents: `Uint8Array`): `void`<br /> Adds a custom section to the binary.
* Module#**autoDrop**(): `void`<br /> Enables automatic insertion of `drop` operations where needed. Lets you not worry about dropping when creating your code.
* **getFunctionInfo**(ftype: `FunctionRef`): `FunctionInfo`<br /> Obtains information about a function.
  * FunctionInfo#**name**: `string`
  * FunctionInfo#**module**: `string | null` (if imported)
  * FunctionInfo#**base**: `string | null` (if imported)
  * FunctionInfo#**params**: `Type`
  * FunctionInfo#**results**: `Type`
  * FunctionInfo#**vars**: `Type`
  * FunctionInfo#**body**: `ExpressionRef`
* **getGlobalInfo**(global: `GlobalRef`): `GlobalInfo`<br /> Obtains information about a global.
  * GlobalInfo#**name**: `string`
  * GlobalInfo#**module**: `string | null` (if imported)
  * GlobalInfo#**base**: `string | null` (if imported)
  * GlobalInfo#**type**: `Type`
  * GlobalInfo#**mutable**: `boolean`
  * GlobalInfo#**init**: `ExpressionRef`
* **getExportInfo**(export_: `ExportRef`): `ExportInfo`<br /> Obtains information about an export.
  * ExportInfo#**kind**: `ExternalKind`
  * ExportInfo#**name**: `string`
  * ExportInfo#**value**: `string`

  Possible `ExternalKind` values are:

  * **ExternalFunction**: `ExternalKind`
  * **ExternalTable**: `ExternalKind`
  * **ExternalMemory**: `ExternalKind`
  * **ExternalGlobal**: `ExternalKind`
  * **ExternalEvent**: `ExternalKind`

* **getEventInfo**(event: `EventRef`): `EventInfo`<br /> Obtains information about an event.
  * EventInfo#**name**: `string`
  * EventInfo#**module**: `string | null` (if imported)
  * EventInfo#**base**: `string | null` (if imported)
  * EventInfo#**attribute**: `number`
  * EventInfo#**params**: `Type`
  * EventInfo#**results**: `Type`
* **getSideEffects**(expr: `ExpressionRef`, features: `FeatureFlags`): `SideEffects`<br /> Gets the side effects of the specified expression.
  * SideEffects.**None**: `SideEffects`
  * SideEffects.**Branches**: `SideEffects`
  * SideEffects.**Calls**: `SideEffects`
  * SideEffects.**ReadsLocal**: `SideEffects`
  * SideEffects.**WritesLocal**: `SideEffects`
  * SideEffects.**ReadsGlobal**: `SideEffects`
  * SideEffects.**WritesGlobal**: `SideEffects`
  * SideEffects.**ReadsMemory**: `SideEffects`
  * SideEffects.**WritesMemory**: `SideEffects`
  * SideEffects.**ImplicitTrap**: `SideEffects`
  * SideEffects.**IsAtomic**: `SideEffects`
  * SideEffects.**Throws**: `SideEffects`
  * SideEffects.**Any**: `SideEffects`

### Module validation

* Module#**validate**(): `boolean`<br /> Validates the module. Returns `true` if valid, otherwise prints validation errors and returns `false`.

### Module optimization

* Module#**optimize**(): `void`<br /> Optimizes the module using the default optimization passes.
* Module#**optimizeFunction**(func: `FunctionRef | string`): `void`<br /> Optimizes a single function using the default optimization passes.
* Module#**runPasses**(passes: `string[]`): `void`<br /> Runs the specified passes on the module. * Module#**runPassesOnFunction**(func: `FunctionRef | string`, passes: `string[]`): `void`<br /> Runs the specified passes on a single function. * **getOptimizeLevel**(): `number`<br /> Gets the currently set optimize level. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **setOptimizeLevel**(level: `number`): `void`<br /> Sets the optimization level to use. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **getShrinkLevel**(): `number`<br /> Gets the currently set shrink level. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **setShrinkLevel**(level: `number`): `void`<br /> Sets the shrink level to use. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **getDebugInfo**(): `boolean`<br /> Gets whether generating debug information is currently enabled or not. * **setDebugInfo**(on: `boolean`): `void`<br /> Enables or disables debug information in emitted binaries. * **getLowMemoryUnused**(): `boolean`<br /> Gets whether the low 1K of memory can be considered unused when optimizing. * **setLowMemoryUnused**(on: `boolean`): `void`<br /> Enables or disables whether the low 1K of memory can be considered unused when optimizing. * **getPassArgument**(key: `string`): `string | null`<br /> Gets the value of the specified arbitrary pass argument. * **setPassArgument**(key: `string`, value: `string | null`): `void`<br /> Sets the value of the specified arbitrary pass argument. Removes the respective argument if `value` is `null`. * **clearPassArguments**(): `void`<br /> Clears all arbitrary pass arguments. * **getAlwaysInlineMaxSize**(): `number`<br /> Gets the function size at which we always inline. * **setAlwaysInlineMaxSize**(size: `number`): `void`<br /> Sets the function size at which we always inline. * **getFlexibleInlineMaxSize**(): `number`<br /> Gets the function size which we inline when functions are lightweight. * **setFlexibleInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when functions are lightweight. * **getOneCallerInlineMaxSize**(): `number`<br /> Gets the function size which we inline when there is only one caller. * **setOneCallerInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when there is only one caller. ### Module creation * Module#**emitBinary**(): `Uint8Array`<br /> Returns the module in binary format. * Module#**emitBinary**(sourceMapUrl: `string | null`): `BinaryWithSourceMap`<br /> Returns the module in binary format with its source map. If `sourceMapUrl` is `null`, source map generation is skipped. * BinaryWithSourceMap#**binary**: `Uint8Array` * BinaryWithSourceMap#**sourceMap**: `string | null` * Module#**emitText**(): `string`<br /> Returns the module in Binaryen's s-expression text format (not official stack-style text format). * Module#**emitAsmjs**(): `string`<br /> Returns the [asm.js](http://asmjs.org/) representation of the module. * Module#**dispose**(): `void`<br /> Releases the resources held by the module once it isn't needed anymore. ### Expression construction #### [Control flow](http://webassembly.org/docs/semantics/#control-constructs-and-instructions) * Module#**block**(label: `string | null`, children: `ExpressionRef[]`, resultType?: `Type`): `ExpressionRef`<br /> Creates a block. `resultType` defaults to `none`. 
* Module#**if**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse?: `ExpressionRef`): `ExpressionRef`<br /> Creates an if or if/else combination. * Module#**loop**(label: `string | null`, body: `ExpressionRef`): `ExpressionRef`<br /> Creates a loop. * Module#**br**(label: `string`, condition?: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a branch (br) to a label. * Module#**switch**(labels: `string[]`, defaultLabel: `string`, condition: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a switch (br_table). * Module#**nop**(): `ExpressionRef`<br /> Creates a no-operation (nop) instruction. * Module#**return**(value?: `ExpressionRef`): `ExpressionRef` Creates a return. * Module#**unreachable**(): `ExpressionRef`<br /> Creates an [unreachable](http://webassembly.org/docs/semantics/#unreachable) instruction that will always trap. * Module#**drop**(value: `ExpressionRef`): `ExpressionRef`<br /> Creates a [drop](http://webassembly.org/docs/semantics/#type-parametric-operators) of a value. * Module#**select**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse: `ExpressionRef`, type?: `Type`): `ExpressionRef`<br /> Creates a [select](http://webassembly.org/docs/semantics/#type-parametric-operators) of one of two values. #### [Variable accesses](http://webassembly.org/docs/semantics/#local-variables) * Module#**local.get**(index: `number`, type: `Type`): `ExpressionRef`<br /> Creates a local.get for the local at the specified index. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**local.set**(index: `number`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a local.set for the local at the specified index. * Module#**local.tee**(index: `number`, value: `ExpressionRef`, type: `Type`): `ExpressionRef`<br /> Creates a local.tee for the local at the specified index. A tee differs from a set in that the value remains on the stack. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**global.get**(name: `string`, type: `Type`): `ExpressionRef`<br /> Creates a global.get for the global with the specified name. Note that we must specify the type here as we may not have created the global being accessed yet. * Module#**global.set**(name: `string`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a global.set for the global with the specified name. 
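To make the variable-access builders above concrete, here is a minimal sketch, not taken from the upstream documentation, that wires a mutable global into a small exported function; the `counter` and `bump` names are illustrative only:

```js
var binaryen = require("binaryen");

var m = new binaryen.Module();

// A mutable i32 global, initialized to 0 (illustrative name).
m.addGlobal("counter", binaryen.i32, true, m.i32.const(0));

// bump(): increments the global and returns its new value.
m.addFunction("bump", binaryen.none, binaryen.i32, [],
  m.block(null, [
    // counter = counter + 1
    m.global.set("counter",
      m.i32.add(
        m.global.get("counter", binaryen.i32),
        m.i32.const(1)
      )
    ),
    // return counter
    m.return(m.global.get("counter", binaryen.i32))
  ])
);
m.addFunctionExport("bump", "bump");

if (!m.validate()) throw new Error("validation error");
console.log(m.emitText());
```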
#### [Integer operations](http://webassembly.org/docs/semantics/#32-bit-integer-operators) * Module#i32.**const**(value: `number`): `ExpressionRef` * Module#i32.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i64.**const**(low: `number`, high: `number`): `ExpressionRef` * Module#i64.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` 
* Module#i64.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Floating point operations](http://webassembly.org/docs/semantics/#floating-point-operators) * Module#f32.**const**(value: `number`): `ExpressionRef` * Module#f32.**const_bits**(value: `number`): `ExpressionRef` * Module#f32.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#f64.**const**(value: `number`): `ExpressionRef` * Module#f64.**const_bits**(value: `number`): `ExpressionRef` * Module#f64.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**ceil**(value: `ExpressionRef`): `ExpressionRef` * 
Module#f64.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Datatype conversions](http://webassembly.org/docs/semantics/#datatype-conversions-truncations-reinterpretations-promotions-and-demotions) * Module#i32.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**wrap**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**demote**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**promote**(value: `ExpressionRef`): `ExpressionRef` #### [Function calls](http://webassembly.org/docs/semantics/#calls) * Module#**call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef` Creates a call to a function. 
Note that we must specify the return type here as we may not have created the function being called yet. * Module#**return_call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef`<br /> Like **call**, but creates a tail-call. 🦄 * Module#**call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Similar to **call**, but calls indirectly, i.e., via a function pointer, so an expression replaces the name as the called value. * Module#**return_call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Like **call_indirect**, but creates a tail-call. 🦄 #### [Linear memory accesses](http://webassembly.org/docs/semantics/#linear-memory-accesses) * Module#i32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> > * Module#i64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store32**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Host operations](http://webassembly.org/docs/semantics/#resizing) * Module#**memory.size**(): `ExpressionRef` * Module#**memory.grow**(value: `number`): `ExpressionRef` #### [Vector 
operations](https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md) 🦄 * Module#v128.**const**(bytes: `Uint8Array`): `ExpressionRef` * Module#v128.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#v128.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#v128.**not**(value: `ExpressionRef`): `ExpressionRef` * Module#v128.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**andnot**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**bitselect**(left: `ExpressionRef`, right: `ExpressionRef`, cond: `ExpressionRef`): `ExpressionRef` > * Module#i8x16.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_u**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i16x8.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_u**(value: 
`ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**dot_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): 
`ExpressionRef` * Module#i64x2.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#f32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**lt**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#v8x16.**shuffle**(left: `ExpressionRef`, right: `ExpressionRef`, mask: `Uint8Array`): `ExpressionRef` * Module#v8x16.**swizzle**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v8x16.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v16x8.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v32x4.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v64x2.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` #### [Atomic memory accesses](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#atomic-memory-accesses) 🦄 * Module#i32.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load32_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store32**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` 
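The sketch below is not an official example; it is a rough illustration that relies only on the signatures listed in this reference plus `Module#block`, `Module#drop` and `Module#i32.const`, showing how these builders produce expression nodes. A real module would additionally need a shared memory and the atomics feature enabled before it validates.

```js
const binaryen = require("binaryen");

const mod = new binaryen.Module();

// Build atomic access expression nodes using the signatures listed above.
const ptr = mod.i32.const(16);
const load = mod.i32.atomic.load(0, ptr);                          // i32.atomic.load at offset 0
const store = mod.i32.atomic.store(0, ptr, mod.i32.const(42));     // i32.atomic.store of 42

// Group them into a block; in practice this would become a function body.
const body = mod.block("atomics-demo", [store, mod.drop(load)]);
console.log(binaryen.getExpressionId(body) === binaryen.BlockId);  // true
```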
#### [Atomic read-modify-write operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#read-modify-write) 🦄 * Module#i32.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.add**(offset: 
`number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` #### [Atomic wait and notify operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#wait-and-notify-operators) 🦄 * Module#i32.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#**atomic.notify**(ptr: `ExpressionRef`, notifyCount: `ExpressionRef`): `ExpressionRef` * Module#**atomic.fence**(): `ExpressionRef` #### [Sign extension operations](https://github.com/WebAssembly/sign-extension-ops/blob/master/proposals/sign-extension-ops/Overview.md) 🦄 * Module#i32.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend32_s**(value: `ExpressionRef`): `ExpressionRef` #### [Multi-value 
operations](https://github.com/WebAssembly/multi-value/blob/master/proposals/multi-value/Overview.md) 🦄 Note that these are pseudo instructions enabling Binaryen to reason about multiple values on the stack. * Module#**push**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**pop**(): `ExpressionRef` * Module#i64.**pop**(): `ExpressionRef` * Module#f32.**pop**(): `ExpressionRef` * Module#f64.**pop**(): `ExpressionRef` * Module#v128.**pop**(): `ExpressionRef` * Module#funcref.**pop**(): `ExpressionRef` * Module#anyref.**pop**(): `ExpressionRef` * Module#nullref.**pop**(): `ExpressionRef` * Module#exnref.**pop**(): `ExpressionRef` * Module#tuple.**make**(elements: `ExpressionRef[]`): `ExpressionRef` * Module#tuple.**extract**(tuple: `ExpressionRef`, index: `number`): `ExpressionRef` #### [Exception handling operations](https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md) 🦄 * Module#**try**(body: `ExpressionRef`, catchBody: `ExpressionRef`): `ExpressionRef` * Module#**throw**(event: `string`, operands: `ExpressionRef[]`): `ExpressionRef` * Module#**rethrow**(exnref: `ExpressionRef`): `ExpressionRef` * Module#**br_on_exn**(label: `string`, event: `string`, exnref: `ExpressionRef`): `ExpressionRef` > * Module#**addEvent**(name: `string`, attribute: `number`, params: `Type`, results: `Type`): `Event` * Module#**getEvent**(name: `string`): `Event` * Module#**removeEvent**(name: `string`): `void` * Module#**addEventImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, attribute: `number`, params: `Type`, results: `Type`): `void` * Module#**addEventExport**(internalName: `string`, externalName: `string`): `ExportRef` #### [Reference types operations](https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md) 🦄 * Module#ref.**null**(): `ExpressionRef` * Module#ref.**is_null**(value: `ExpressionRef`): `ExpressionRef` * Module#ref.**func**(name: `string`): `ExpressionRef` ### Expression manipulation * **getExpressionId**(expr: `ExpressionRef`): `ExpressionId`<br /> Gets the id (kind) of the specified expression.
Possible values are: * **InvalidId**: `ExpressionId` * **BlockId**: `ExpressionId` * **IfId**: `ExpressionId` * **LoopId**: `ExpressionId` * **BreakId**: `ExpressionId` * **SwitchId**: `ExpressionId` * **CallId**: `ExpressionId` * **CallIndirectId**: `ExpressionId` * **LocalGetId**: `ExpressionId` * **LocalSetId**: `ExpressionId` * **GlobalGetId**: `ExpressionId` * **GlobalSetId**: `ExpressionId` * **LoadId**: `ExpressionId` * **StoreId**: `ExpressionId` * **ConstId**: `ExpressionId` * **UnaryId**: `ExpressionId` * **BinaryId**: `ExpressionId` * **SelectId**: `ExpressionId` * **DropId**: `ExpressionId` * **ReturnId**: `ExpressionId` * **HostId**: `ExpressionId` * **NopId**: `ExpressionId` * **UnreachableId**: `ExpressionId` * **AtomicCmpxchgId**: `ExpressionId` * **AtomicRMWId**: `ExpressionId` * **AtomicWaitId**: `ExpressionId` * **AtomicNotifyId**: `ExpressionId` * **AtomicFenceId**: `ExpressionId` * **SIMDExtractId**: `ExpressionId` * **SIMDReplaceId**: `ExpressionId` * **SIMDShuffleId**: `ExpressionId` * **SIMDTernaryId**: `ExpressionId` * **SIMDShiftId**: `ExpressionId` * **SIMDLoadId**: `ExpressionId` * **MemoryInitId**: `ExpressionId` * **DataDropId**: `ExpressionId` * **MemoryCopyId**: `ExpressionId` * **MemoryFillId**: `ExpressionId` * **RefNullId**: `ExpressionId` * **RefIsNullId**: `ExpressionId` * **RefFuncId**: `ExpressionId` * **TryId**: `ExpressionId` * **ThrowId**: `ExpressionId` * **RethrowId**: `ExpressionId` * **BrOnExnId**: `ExpressionId` * **PushId**: `ExpressionId` * **PopId**: `ExpressionId` * **getExpressionType**(expr: `ExpressionRef`): `Type`<br /> Gets the type of the specified expression. * **getExpressionInfo**(expr: `ExpressionRef`): `ExpressionInfo`<br /> Obtains information about an expression, always including: * Info#**id**: `ExpressionId` * Info#**type**: `Type` Additional properties depend on the expression's `id` and are usually equivalent to the respective parameters when creating such an expression: * BlockInfo#**name**: `string` * BlockInfo#**children**: `ExpressionRef[]` > * IfInfo#**condition**: `ExpressionRef` * IfInfo#**ifTrue**: `ExpressionRef` * IfInfo#**ifFalse**: `ExpressionRef | null` > * LoopInfo#**name**: `string` * LoopInfo#**body**: `ExpressionRef` > * BreakInfo#**name**: `string` * BreakInfo#**condition**: `ExpressionRef | null` * BreakInfo#**value**: `ExpressionRef | null` > * SwitchInfo#**names**: `string[]` * SwitchInfo#**defaultName**: `string | null` * SwitchInfo#**condition**: `ExpressionRef` * SwitchInfo#**value**: `ExpressionRef | null` > * CallInfo#**target**: `string` * CallInfo#**operands**: `ExpressionRef[]` > * CallImportInfo#**target**: `string` * CallImportInfo#**operands**: `ExpressionRef[]` > * CallIndirectInfo#**target**: `ExpressionRef` * CallIndirectInfo#**operands**: `ExpressionRef[]` > * LocalGetInfo#**index**: `number` > * LocalSetInfo#**isTee**: `boolean` * LocalSetInfo#**index**: `number` * LocalSetInfo#**value**: `ExpressionRef` > * GlobalGetInfo#**name**: `string` > * GlobalSetInfo#**name**: `string` * GlobalSetInfo#**value**: `ExpressionRef` > * LoadInfo#**isAtomic**: `boolean` * LoadInfo#**isSigned**: `boolean` * LoadInfo#**offset**: `number` * LoadInfo#**bytes**: `number` * LoadInfo#**align**: `number` * LoadInfo#**ptr**: `ExpressionRef` > * StoreInfo#**isAtomic**: `boolean` * StoreInfo#**offset**: `number` * StoreInfo#**bytes**: `number` * StoreInfo#**align**: `number` * StoreInfo#**ptr**: `ExpressionRef` * StoreInfo#**value**: `ExpressionRef` > * ConstInfo#**value**: `number | { low: number, high: number 
}` > * UnaryInfo#**op**: `number` * UnaryInfo#**value**: `ExpressionRef` > * BinaryInfo#**op**: `number` * BinaryInfo#**left**: `ExpressionRef` * BinaryInfo#**right**: `ExpressionRef` > * SelectInfo#**ifTrue**: `ExpressionRef` * SelectInfo#**ifFalse**: `ExpressionRef` * SelectInfo#**condition**: `ExpressionRef` > * DropInfo#**value**: `ExpressionRef` > * ReturnInfo#**value**: `ExpressionRef | null` > * NopInfo > * UnreachableInfo > * HostInfo#**op**: `number` * HostInfo#**nameOperand**: `string | null` * HostInfo#**operands**: `ExpressionRef[]` > * AtomicRMWInfo#**op**: `number` * AtomicRMWInfo#**bytes**: `number` * AtomicRMWInfo#**offset**: `number` * AtomicRMWInfo#**ptr**: `ExpressionRef` * AtomicRMWInfo#**value**: `ExpressionRef` > * AtomicCmpxchgInfo#**bytes**: `number` * AtomicCmpxchgInfo#**offset**: `number` * AtomicCmpxchgInfo#**ptr**: `ExpressionRef` * AtomicCmpxchgInfo#**expected**: `ExpressionRef` * AtomicCmpxchgInfo#**replacement**: `ExpressionRef` > * AtomicWaitInfo#**ptr**: `ExpressionRef` * AtomicWaitInfo#**expected**: `ExpressionRef` * AtomicWaitInfo#**timeout**: `ExpressionRef` * AtomicWaitInfo#**expectedType**: `Type` > * AtomicNotifyInfo#**ptr**: `ExpressionRef` * AtomicNotifyInfo#**notifyCount**: `ExpressionRef` > * AtomicFenceInfo > * SIMDExtractInfo#**op**: `Op` * SIMDExtractInfo#**vec**: `ExpressionRef` * SIMDExtractInfo#**index**: `ExpressionRef` > * SIMDReplaceInfo#**op**: `Op` * SIMDReplaceInfo#**vec**: `ExpressionRef` * SIMDReplaceInfo#**index**: `ExpressionRef` * SIMDReplaceInfo#**value**: `ExpressionRef` > * SIMDShuffleInfo#**left**: `ExpressionRef` * SIMDShuffleInfo#**right**: `ExpressionRef` * SIMDShuffleInfo#**mask**: `Uint8Array` > * SIMDTernaryInfo#**op**: `Op` * SIMDTernaryInfo#**a**: `ExpressionRef` * SIMDTernaryInfo#**b**: `ExpressionRef` * SIMDTernaryInfo#**c**: `ExpressionRef` > * SIMDShiftInfo#**op**: `Op` * SIMDShiftInfo#**vec**: `ExpressionRef` * SIMDShiftInfo#**shift**: `ExpressionRef` > * SIMDLoadInfo#**op**: `Op` * SIMDLoadInfo#**offset**: `number` * SIMDLoadInfo#**align**: `number` * SIMDLoadInfo#**ptr**: `ExpressionRef` > * MemoryInitInfo#**segment**: `number` * MemoryInitInfo#**dest**: `ExpressionRef` * MemoryInitInfo#**offset**: `ExpressionRef` * MemoryInitInfo#**size**: `ExpressionRef` > * MemoryDropInfo#**segment**: `number` > * MemoryCopyInfo#**dest**: `ExpressionRef` * MemoryCopyInfo#**source**: `ExpressionRef` * MemoryCopyInfo#**size**: `ExpressionRef` > * MemoryFillInfo#**dest**: `ExpressionRef` * MemoryFillInfo#**value**: `ExpressionRef` * MemoryFillInfo#**size**: `ExpressionRef` > * TryInfo#**body**: `ExpressionRef` * TryInfo#**catchBody**: `ExpressionRef` > * RefNullInfo > * RefIsNullInfo#**value**: `ExpressionRef` > * RefFuncInfo#**func**: `string` > * ThrowInfo#**event**: `string` * ThrowInfo#**operands**: `ExpressionRef[]` > * RethrowInfo#**exnref**: `ExpressionRef` > * BrOnExnInfo#**name**: `string` * BrOnExnInfo#**event**: `string` * BrOnExnInfo#**exnref**: `ExpressionRef` > * PopInfo > * PushInfo#**value**: `ExpressionRef` * **emitText**(expression: `ExpressionRef`): `string`<br /> Emits the expression in Binaryen's s-expression text format (not official stack-style text format). * **copyExpression**(expression: `ExpressionRef`): `ExpressionRef`<br /> Creates a deep copy of an expression. ### Relooper * new **Relooper**()<br /> Constructs a relooper instance. This lets you provide an arbitrary CFG, and the relooper will structure it for WebAssembly. 
* Relooper#**addBlock**(code: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block to the CFG, containing the provided code as its body. * Relooper#**addBranch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, condition: `ExpressionRef`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block to another block, with a condition (or nothing, if this is the default branch to take from the origin - each block must have one such branch), and optional code to execute on the branch (useful for phis). * Relooper#**addBlockWithSwitch**(code: `ExpressionRef`, condition: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block, which ends with a switch/br_table, with provided code and condition (that determines where we go in the switch). * Relooper#**addBranchForSwitch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, indexes: `number[]`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block ending in a switch, to another block, using an array of indexes that determine where to go, and optional code to execute on the branch. * Relooper#**renderAndDispose**(entry: `RelooperBlockRef`, labelHelper: `number`, module: `Module`): `ExpressionRef`<br /> Renders and cleans up the Relooper instance. Call this after you have created all the blocks and branches, giving it the entry block (where control flow begins), a label helper variable (an index of a local we can use, necessary for irreducible control flow), and the module. This returns an expression - normal WebAssembly code - that you can use normally anywhere. ### Source maps * Module#**addDebugInfoFileName**(filename: `string`): `number`<br /> Adds a debug info file name to the module and returns its index. * Module#**getDebugInfoFileName**(index: `number`): `string | null` <br /> Gets the name of the debug info file at the specified index. * Module#**setDebugLocation**(func: `FunctionRef`, expr: `ExpressionRef`, fileIndex: `number`, lineNumber: `number`, columnNumber: `number`): `void`<br /> Sets the debug location of the specified `ExpressionRef` within the specified `FunctionRef`. ### Debugging * Module#**interpret**(): `void`<br /> Runs the module in the interpreter, calling the start function. ### Estraverse [![Build Status](https://secure.travis-ci.org/estools/estraverse.svg)](http://travis-ci.org/estools/estraverse) Estraverse ([estraverse](http://github.com/estools/estraverse)) provides [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) traversal functions from the [esmangle project](http://github.com/estools/esmangle). ### Documentation You can find usage docs on the [wiki page](https://github.com/estools/estraverse/wiki/Usage). ### Example Usage The following code will output all variables declared at the root of a file. ```javascript estraverse.traverse(ast, { enter: function (node, parent) { if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration') return estraverse.VisitorOption.Skip; }, leave: function (node, parent) { if (node.type == 'VariableDeclarator') console.log(node.id.name); } }); ``` We can use the `this.skip`, `this.remove` and `this.break` functions instead of returning Skip, Remove and Break. ```javascript estraverse.traverse(ast, { enter: function (node) { this.break(); } }); ``` estraverse also provides an `estraverse.replace` function. When a node is returned from `enter`/`leave`, the current node is replaced with it. ```javascript result = estraverse.replace(tree, { enter: function (node) { // Replace it with replaced.
if (node.type === 'Literal') return replaced; } }); ``` By passing `visitor.keys` mapping, we can extend estraverse traversing functionality. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Extending the existing traversing rules. keys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } }); ``` By passing `visitor.fallback` option, we can control the behavior when encountering unknown nodes. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Iterating the child **nodes** of unknown nodes. fallback: 'iteration' }); ``` When `visitor.fallback` is a function, we can determine which keys to visit on each node. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Skip the `argument` property of each node fallback: function(node) { return Object.keys(node).filter(function(key) { return key !== 'argument'; }); } }); ``` ### License Copyright (C) 2012-2016 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
<p align="center"> <img width="250" src="/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> [![Build Status][travis-image]][travis-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description : Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments. > <img width="400" src="/screen.png"> * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage : ### Simple Example ```javascript #!/usr/bin/env node const {argv} = require('yargs') if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node require('yargs') // eslint-disable-line .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ## Webpack See usage examples of yargs with webpack in [docs](/docs/webpack.md). ## Community : Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). 
## Documentation : ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Contributing](/contributing.md) [travis-url]: https://travis-ci.org/yargs/yargs [travis-image]: https://img.shields.io/travis/yargs/yargs/master.svg [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # <img src="./logo.png" alt="bn.js" width="160" height="160" /> > BigNum in pure javascript [![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js) ## Install `npm install --save bn.js` ## Usage ```js const BN = require('bn.js'); var a = new BN('dead', 16); var b = new BN('101010', 2); var res = a.add(b); console.log(res.toString(10)); // 57047 ``` **Note**: decimals are not supported in this library. ## Notation ### Prefixes There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name: * `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs * `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like `mod()`. In such cases, if the result would be negative, the modulo will be added to the result to make it positive ### Postfixes * `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported. * `rn` - both argument and return value of the function are plain JavaScript Numbers. Decimals are not supported. ### Examples * `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a` * `a.umod(b)` - reduce `a` modulo `b`, returning positive value * `a.iushln(13)` - shift bits of `a` left by 13 ## Instructions Prefixes/postfixes are put in parens at the end of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian).
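As a quick illustration of this notation (a sketch, not from the upstream docs; the plain `mod` keeping the dividend's sign is exactly the behaviour the `u` prefix avoids):

```js
const BN = require('bn.js');

const a = new BN(-7);
const b = new BN(3);

console.log(a.mod(b).toString());  // '-1'  (plain mod: result keeps the dividend's sign)
console.log(a.umod(b).toString()); // '2'   (`u` prefix: result is always positive)

const c = new BN(10);
c.iaddn(5);                        // `i` + `n`: in-place addition of a plain JavaScript Number
console.log(c.toString());         // '15'
```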
### Utilities * `a.clone()` - clone number * `a.toString(base, length)` - convert to base-string and pad with zeroes * `a.toNumber()` - convert to Javascript Number (limited to 53 bits) * `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`) * `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding * `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array` * `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)` * `a.bitLength()` - get number of bits occupied * `a.zeroBits()` - return number of less-significant consequent zero bits (example: `1010000` has 4 zero bits) * `a.byteLength()` - return number of bytes occupied * `a.isNeg()` - true if the number is negative * `a.isEven()` - no comments * `a.isOdd()` - no comments * `a.isZero()` - no comments * `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`) * `a.lt(b)` - `a` less than `b` (`n`) * `a.lte(b)` - `a` less than or equals `b` (`n`) * `a.gt(b)` - `a` greater than `b` (`n`) * `a.gte(b)` - `a` greater than or equals `b` (`n`) * `a.eq(b)` - `a` equals `b` (`n`) * `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width * `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width * `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance * `BN.max(a, b)` - return `a` if `a` bigger than `b` * `BN.min(a, b)` - return `a` if `a` less than `b` ### Arithmetics * `a.neg()` - negate sign (`i`) * `a.abs()` - absolute value (`i`) * `a.add(b)` - addition (`i`, `n`, `in`) * `a.sub(b)` - subtraction (`i`, `n`, `in`) * `a.mul(b)` - multiply (`i`, `n`, `in`) * `a.sqr()` - square (`i`) * `a.pow(b)` - raise `a` to the power of `b` * `a.div(b)` - divide (`divn`, `idivn`) * `a.mod(b)` - reduct (`u`, `n`) (but no `umodn`) * `a.divmod(b)` - quotient and modulus obtained by dividing * `a.divRound(b)` - rounded division ### Bit operations * `a.or(b)` - or (`i`, `u`, `iu`) * `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future) * `a.xor(b)` - xor (`i`, `u`, `iu`) * `a.setn(b, value)` - set specified bit to `value` * `a.shln(b)` - shift left (`i`, `u`, `iu`) * `a.shrn(b)` - shift right (`i`, `u`, `iu`) * `a.testn(b)` - test if specified bit is set * `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`) * `a.bincn(b)` - add `1 << b` to the number * `a.notn(w)` - not (for the width specified by `w`) (`i`) ### Reduction * `a.gcd(b)` - GCD * `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., gcd: ... }`) * `a.invm(b)` - inverse `a` modulo `b` ## Fast reduction When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using special algorithm for [Mersenne Prime][1]. ### Reduction context To enable this tricks one should create a reduction context: ```js var red = BN.red(num); ``` where `num` is just a BN instance. Or: ```js var red = BN.red(primeName); ``` Where `primeName` is either of these [Mersenne Primes][1]: * `'k256'` * `'p224'` * `'p192'` * `'p25519'` Or: ```js var red = BN.mont(num); ``` To reduce numbers with [Montgomery trick][0]. 
`.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`. ### Converting numbers Before performing anything in reduction context - numbers should be converted to it. Usually, this means that one should: * Convert inputs to reducted ones * Operate on them in reduction context * Convert outputs back from the reduction context Here is how one may convert numbers to `red`: ```js var redA = a.toRed(red); ``` Where `red` is a reduction context created using instructions above Here is how to convert them back: ```js var a = redA.fromRed(); ``` ### Red instructions Most of the instructions from the very start of this readme have their counterparts in red context: * `a.redAdd(b)`, `a.redIAdd(b)` * `a.redSub(b)`, `a.redISub(b)` * `a.redShl(num)` * `a.redMul(b)`, `a.redIMul(b)` * `a.redSqr()`, `a.redISqr()` * `a.redSqrt()` - square root modulo reduction context's prime * `a.redInvm()` - modular inverse of the number * `a.redNeg()` * `a.redPow(b)` - modular exponentiation ### Number Size Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers. ## LICENSE This software is licensed under the MIT License. [0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication [1]: https://en.wikipedia.org/wiki/Mersenne_prime # lru cache A cache object that deletes the least-recently-used items. [![Build Status](https://travis-ci.org/isaacs/node-lru-cache.svg?branch=master)](https://travis-ci.org/isaacs/node-lru-cache) [![Coverage Status](https://coveralls.io/repos/isaacs/node-lru-cache/badge.svg?service=github)](https://coveralls.io/github/isaacs/node-lru-cache) ## Installation: ```javascript npm install lru-cache --save ``` ## Usage: ```javascript var LRU = require("lru-cache") , options = { max: 500 , length: function (n, key) { return n * 2 + key.length } , dispose: function (key, n) { n.close() } , maxAge: 1000 * 60 * 60 } , cache = new LRU(options) , otherCache = new LRU(50) // sets just the max size cache.set("key", "value") cache.get("key") // "value" // non-string keys ARE fully supported // but note that it must be THE SAME object, not // just a JSON-equivalent object. var someObject = { a: 1 } cache.set(someObject, 'a value') // Object keys are not toString()-ed cache.set('[object Object]', 'a different value') assert.equal(cache.get(someObject), 'a value') // A similar object with same keys/values won't work, // because it's a different object identity assert.equal(cache.get({ a: 1 }), undefined) cache.reset() // empty the cache ``` If you put more stuff in it, then items will fall out. If you try to put an oversized thing in it, then it'll fall out right away. ## Options * `max` The maximum size of the cache, checked by applying the length function to all values in the cache. Not setting this is kind of silly, since that's the whole purpose of this lib, but it defaults to `Infinity`. Setting it to a non-number or negative number will throw a `TypeError`. Setting it to 0 makes it be `Infinity`. * `maxAge` Maximum age in ms. Items are not pro-actively pruned out as they age, but if you try to get an item that is too old, it'll drop it and return undefined instead of giving it to you. Setting this to a negative value will make everything seem old! Setting it to a non-number will throw a `TypeError`. * `length` Function that is used to calculate the length of stored items. If you're storing strings or buffers, then you probably want to do something like `function(n, key){return n.length}`. 
The default is `function(){return 1}`, which is fine if you want to store `max` like-sized things. The item is passed as the first argument, and the key is passed as the second argument. * `dispose` Function that is called on items when they are dropped from the cache. This can be handy if you want to close file descriptors or do other cleanup tasks when items are no longer accessible. Called with `key, value`. It's called *before* actually removing the item from the internal cache, so if you want to immediately put it back in, you'll have to do that in a `nextTick` or `setTimeout` callback or it won't do anything. * `stale` By default, if you set a `maxAge`, it'll only actually pull stale items out of the cache when you `get(key)`. (That is, it's not pre-emptively doing a `setTimeout` or anything.) If you set `stale:true`, it'll return the stale value before deleting it. If you don't set this, then it'll return `undefined` when you try to get a stale entry, as if it had already been deleted. * `noDisposeOnSet` By default, if you set a `dispose()` method, then it'll be called whenever a `set()` operation overwrites an existing key. If you set this option, `dispose()` will only be called when a key falls out of the cache, not when it is overwritten. * `updateAgeOnGet` When using time-expiring entries with `maxAge`, setting this to `true` will make each item's effective time update to the current time whenever it is retrieved from cache, causing it to not expire. (It can still fall out of cache based on recency of use, of course.) ## API * `set(key, value, maxAge)` * `get(key) => value` Both of these will update the "recently used"-ness of the key. They do what you think. `maxAge` is optional and overrides the cache `maxAge` option if provided. If the key is not found, `get()` will return `undefined`. The key and val can be any value. * `peek(key)` Returns the key value (or `undefined` if not found) without updating the "recently used"-ness of the key. (If you find yourself using this a lot, you *might* be using the wrong sort of data structure, but there are some use cases where it's handy.) * `del(key)` Deletes a key out of the cache. * `reset()` Clear the cache entirely, throwing away all values. * `has(key)` Check if a key is in the cache, without updating the recent-ness or deleting it for being stale. * `forEach(function(value,key,cache), [thisp])` Just like `Array.prototype.forEach`. Iterates over all the keys in the cache, in order of recent-ness. (Ie, more recently used items are iterated over first.) * `rforEach(function(value,key,cache), [thisp])` The same as `cache.forEach(...)` but items are iterated over in reverse order. (ie, less recently used items are iterated over first.) * `keys()` Return an array of the keys in the cache. * `values()` Return an array of the values in the cache. * `length` Return total length of objects in cache taking into account `length` options function. * `itemCount` Return total quantity of objects currently in cache. Note that `stale` (see options) items are returned as part of this item count. * `dump()` Return an array of the cache entries ready for serialization and usage with `destinationCache.load(arr)`. * `load(cacheEntriesArray)` Loads another cache entries array, obtained with `sourceCache.dump()`, into the cache.
The destination cache is reset before loading new entries * `prune()` Manually iterates over the entire cache proactively pruning old entries # assemblyscript-json ## Table of contents ### Namespaces - [JSON](modules/json.md) ### Classes - [DecoderState](classes/decoderstate.md) - [JSONDecoder](classes/jsondecoder.md) - [JSONEncoder](classes/jsonencoder.md) - [JSONHandler](classes/jsonhandler.md) - [ThrowingJSONHandler](classes/throwingjsonhandler.md) # file-entry-cache > Super simple cache for file metadata, useful for processes that work on a given series of files > and that only need to repeat the job on the ones that changed since the previous run of the process [![NPM Version](http://img.shields.io/npm/v/file-entry-cache.svg?style=flat)](https://npmjs.org/package/file-entry-cache) [![Build Status](http://img.shields.io/travis/royriojas/file-entry-cache.svg?style=flat)](https://travis-ci.org/royriojas/file-entry-cache) ## install ```bash npm i --save file-entry-cache ``` ## Usage The module exposes two functions `create` and `createFromFile`. ## `create(cacheName, [directory, useCheckSum])` - **cacheName**: the name of the cache to be created - **directory**: Optional. The directory to load the cache from - **useCheckSum**: Whether to use an md5 checksum to verify if the file changed. If false, the default will be to use the mtime and size of the file. ## `createFromFile(pathToCache, [useCheckSum])` - **pathToCache**: the path to the cache file (this combines the cache name and directory) - **useCheckSum**: Whether to use an md5 checksum to verify if the file changed. If false, the default will be to use the mtime and size of the file. ```js // loads the cache, if one does not exist for the given // Id a new one will be prepared to be created var fileEntryCache = require('file-entry-cache'); var cache = fileEntryCache.create('testCache'); var files = expand('../fixtures/*.txt'); // the first time this method is called, it will return all the files var oFiles = cache.getUpdatedFiles(files); // this will persist the cache to disk, checking each file's stats and // updating the meta attributes `size` and `mtime`. // custom fields could also be added to the meta object and will be persisted // in order to retrieve them later cache.reconcile(); // use this if you want the non visited file entries to be kept in the cache // for more than one execution // // cache.reconcile( true /* noPrune */) // on a second run var cache2 = fileEntryCache.create('testCache'); // will now return only the files that were modified, or none // if no files were modified previous to the execution of this function var oFiles = cache.getUpdatedFiles(files); // if you want to prevent a file from being considered non modified // something useful if a file failed some sort of validation // you can then remove the entry from the cache doing cache.removeEntry('path/to/file'); // path to file should be the same path of the file received on `getUpdatedFiles` // that will effectively make the file appear again as modified until the validation is passed.
// In that case you should not remove it from the cache // if you need all the files, so you can determine what to do with the changed ones // you can call var oFiles = cache.normalizeEntries(files); // oFiles will be an array of objects like the following entry = { key: 'some/name/file', // the path to the file changed: true, // if the file was changed since previous run meta: { size: 3242, // the size of the file mtime: 231231231, // the modification time of the file data: {} // some extra field stored for this file (useful to save the result of a transformation on the file) } } ``` ## Motivation for this module I needed a super simple and dumb **in-memory cache** with optional disk persistence (write-back cache) in order to make a script that beautifies files with `esformatter` execute only on the files that were changed since the last run. In doing so the process of beautifying files was reduced from several seconds to a small fraction of a second. This module uses [flat-cache](https://www.npmjs.com/package/flat-cache), a super simple `key/value` cache storage with optional file persistence. The main idea is to read the files when the task begins, apply the transforms required, and if the process succeeds, then store the new state of the files. The next time `getChangedFiles` is requested, this module will return only the files that were modified, making the process end faster. This module could also be used by processes that modify the files applying a transform, in that case the result of the transform could be stored in the `meta` field of the entries. Anything added to the meta field will be persisted. Those processes won't need to call `getChangedFiles`; they will instead call `normalizeEntries`, which will return the entries with a `changed` field that can be used to determine if the file was changed or not. If it was not changed the transformed stored data could be used instead of actually applying the transformation, saving time when only a few files have changed. In the worst case scenario all the files will be processed. In the best case scenario only a few of them will be processed. ## Important notes - The values set on the meta attribute of the entries should be `stringify-able` ones if possible, flat-cache uses `circular-json` to try to persist circular structures, but this should be considered experimental. The best results are always obtained with non-circular values - All the changes to the cache state are done to memory first and only persisted after reconcile. ## License MIT # balanced-match Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well!
[![build status](https://secure.travis-ci.org/juliangruber/balanced-match.svg)](http://travis-ci.org/juliangruber/balanced-match) [![downloads](https://img.shields.io/npm/dm/balanced-match.svg)](https://www.npmjs.org/package/balanced-match) [![testling badge](https://ci.testling.com/juliangruber/balanced-match.png)](https://ci.testling.com/juliangruber/balanced-match) ## Example Get the first matching pair of braces: ```js var balanced = require('balanced-match'); console.log(balanced('{', '}', 'pre{in{nested}}post')); console.log(balanced('{', '}', 'pre{first}between{second}post')); console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post')); ``` The matches are: ```bash $ node example.js { start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' } { start: 3, end: 9, pre: 'pre', body: 'first', post: 'between{second}post' } { start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' } ``` ## API ### var m = balanced(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an object with those keys: * **start** the index of the first match of `a` * **end** the index of the matching `b` * **pre** the preamble, `a` and `b` not included * **body** the match, `a` and `b` not included * **post** the postscript, `a` and `b` not included If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`. ### var r = balanced.range(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an array with indexes: `[ <a index>, <b index> ]`. If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`. ## Installation With [npm](https://npmjs.org) do: ```bash npm install balanced-match ``` ## Security contact information To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
# regexpp [![npm version](https://img.shields.io/npm/v/regexpp.svg)](https://www.npmjs.com/package/regexpp) [![Downloads/month](https://img.shields.io/npm/dm/regexpp.svg)](http://www.npmtrends.com/regexpp) [![Build Status](https://github.com/mysticatea/regexpp/workflows/CI/badge.svg)](https://github.com/mysticatea/regexpp/actions) [![codecov](https://codecov.io/gh/mysticatea/regexpp/branch/master/graph/badge.svg)](https://codecov.io/gh/mysticatea/regexpp) [![Dependency Status](https://david-dm.org/mysticatea/regexpp.svg)](https://david-dm.org/mysticatea/regexpp) A regular expression parser for ECMAScript. ## 💿 Installation ```bash $ npm install regexpp ``` - require Node.js 8 or newer. ## 📖 Usage ```ts import { AST, RegExpParser, RegExpValidator, RegExpVisitor, parseRegExpLiteral, validateRegExpLiteral, visitRegExpAST } from "regexpp" ``` ### parseRegExpLiteral(source, options?) Parse a given regular expression literal then make AST object. This is equivalent to `new RegExpParser(options).parseLiteral(source)`. - **Parameters:** - `source` (`string | RegExp`) The source code to parse. - `options?` ([`RegExpParser.Options`]) The options to parse. - **Return:** - The AST of the regular expression. ### validateRegExpLiteral(source, options?) Validate a given regular expression literal. This is equivalent to `new RegExpValidator(options).validateLiteral(source)`. - **Parameters:** - `source` (`string`) The source code to validate. - `options?` ([`RegExpValidator.Options`]) The options to validate. ### visitRegExpAST(ast, handlers) Visit each node of a given AST. This is equivalent to `new RegExpVisitor(handlers).visit(ast)`. - **Parameters:** - `ast` ([`AST.Node`]) The AST to visit. - `handlers` ([`RegExpVisitor.Handlers`]) The callbacks. ### RegExpParser #### new RegExpParser(options?) - **Parameters:** - `options?` ([`RegExpParser.Options`]) The options to parse. #### parser.parseLiteral(source, start?, end?) Parse a regular expression literal. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"/abc/g"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - **Return:** - The AST of the regular expression. #### parser.parsePattern(source, start?, end?, uFlag?) Parse a regular expression pattern. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"abc"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - `uFlag?` (`boolean`) The flag to enable Unicode mode. - **Return:** - The AST of the regular expression pattern. #### parser.parseFlags(source, start?, end?) Parse a regular expression flags. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"gim"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - **Return:** - The AST of the regular expression flags. ### RegExpValidator #### new RegExpValidator(options) - **Parameters:** - `options` ([`RegExpValidator.Options`]) The options to validate. #### validator.validateLiteral(source, start, end) Validate a regular expression literal. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. 
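Tying the Usage imports above to a concrete call, here is a minimal sketch of `parseRegExpLiteral` and `visitRegExpAST` (the pattern and the group-counting handler are purely illustrative):

```js
const { parseRegExpLiteral, visitRegExpAST } = require("regexpp")

// Parse a regular expression literal into an AST, then walk it,
// counting capturing groups along the way.
const ast = parseRegExpLiteral("/(foo|bar)+/u")

let capturingGroups = 0
visitRegExpAST(ast, {
  onCapturingGroupEnter() {
    capturingGroups += 1
  },
})

console.log(capturingGroups) // 1
```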
#### validator.validatePattern(source, start, end, uFlag) Validate a regular expression pattern. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - `uFlag?` (`boolean`) The flag to enable Unicode mode. #### validator.validateFlags(source, start, end) Validate a regular expression flags. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. ### RegExpVisitor #### new RegExpVisitor(handlers) - **Parameters:** - `handlers` ([`RegExpVisitor.Handlers`]) The callbacks. #### visitor.visit(ast) Validate a regular expression literal. - **Parameters:** - `ast` ([`AST.Node`]) The AST to visit. ## 📰 Changelog - [GitHub Releases](https://github.com/mysticatea/regexpp/releases) ## 🍻 Contributing Welcome contributing! Please use GitHub's Issues/PRs. ### Development Tools - `npm test` runs tests and measures coverage. - `npm run build` compiles TypeScript source code to `index.js`, `index.js.map`, and `index.d.ts`. - `npm run clean` removes the temporary files which are created by `npm test` and `npm run build`. - `npm run lint` runs ESLint. - `npm run update:test` updates test fixtures. - `npm run update:ids` updates `src/unicode/ids.ts`. - `npm run watch` runs tests with `--watch` option. [`AST.Node`]: src/ast.ts#L4 [`RegExpParser.Options`]: src/parser.ts#L539 [`RegExpValidator.Options`]: src/validator.ts#L127 [`RegExpVisitor.Handlers`]: src/visitor.ts#L204 # node-tar [![Build Status](https://travis-ci.org/npm/node-tar.svg?branch=master)](https://travis-ci.org/npm/node-tar) [Fast](./benchmarks) and full-featured Tar for Node.js The API is designed to mimic the behavior of `tar(1)` on unix systems. If you are familiar with how tar works, most of this will hopefully be straightforward for you. If not, then hopefully this module can teach you useful unix skills that may come in handy someday :) ## Background A "tar file" or "tarball" is an archive of file system entries (directories, files, links, etc.) The name comes from "tape archive". If you run `man tar` on almost any Unix command line, you'll learn quite a bit about what it can do, and its history. Tar has 5 main top-level commands: * `c` Create an archive * `r` Replace entries within an archive * `u` Update entries within an archive (ie, replace if they're newer) * `t` List out the contents of an archive * `x` Extract an archive to disk The other flags and options modify how this top level function works. ## High-Level API These 5 functions are the high-level API. All of them have a single-character name (for unix nerds familiar with `tar(1)`) as well as a long name (for everyone else). All the high-level functions take the following arguments, all three of which are optional and may be omitted. 1. `options` - An optional object specifying various options 2. `paths` - An array of paths to add or extract 3. `callback` - Called when the command is completed, if async. (If sync or no file specified, providing a callback throws a `TypeError`.) If the command is sync (ie, if `options.sync=true`), then the callback is not allowed, since the action will be completed immediately. If a `file` argument is specified, and the command is async, then a `Promise` is returned. 
In this case, if async, a callback may be provided which is called when the command is completed.

If a `file` option is not specified, then a stream is returned. For `create`, this is a readable stream of the generated archive. For `list` and `extract`, this is a writable stream that an archive should be written into. If a file is not specified, then a callback is not allowed, because you're already getting a stream to work with.

`replace` and `update` only work on existing archives, and so require a `file` argument.

Sync commands without a file argument return a stream that acts on its input immediately in the same tick. For readable streams, this means that all of the data is immediately available by calling `stream.read()`. For writable streams, it will be acted upon as soon as it is provided, but this can be at any time.

### Warnings and Errors

Tar emits warnings and errors for recoverable and unrecoverable situations, respectively. In many cases, a warning only affects a single entry in an archive, or is simply informing you that it's modifying an entry to comply with the settings provided.

Unrecoverable warnings will always raise an error (ie, emit `'error'` on streaming actions, throw for non-streaming sync actions, reject the returned Promise for non-streaming async operations, or call a provided callback with an `Error` as the first argument). Recoverable errors will raise an error only if `strict: true` is set in the options.

Respond to (recoverable) warnings by listening to the `warn` event. Handlers receive 3 arguments:

- `code` String. One of the error codes below. This may not match `data.code`, which preserves the original error code from fs and zlib.
- `message` String. More details about the error.
- `data` Metadata about the error. An `Error` object for errors raised by fs and zlib. All fields are attached to errors raised by tar. Typically contains the following fields, as relevant:
  - `tarCode` The tar error code.
  - `code` Either the tar error code, or the error code set by the underlying system.
  - `file` The archive file being read or written.
  - `cwd` Working directory for creation and extraction operations.
  - `entry` The entry object (if it could be created) for `TAR_ENTRY_INFO`, `TAR_ENTRY_INVALID`, and `TAR_ENTRY_ERROR` warnings.
  - `header` The header object (if it could be created, and the entry could not be created) for `TAR_ENTRY_INFO` and `TAR_ENTRY_INVALID` warnings.
  - `recoverable` Boolean. If `false`, then the warning will emit an `error`, even in non-strict mode.

#### Error Codes

* `TAR_ENTRY_INFO` An informative error indicating that an entry is being modified, but otherwise processed normally. For example, removing `/` or `C:\` from absolute paths if `preservePaths` is not set.
* `TAR_ENTRY_INVALID` An indication that a given entry is not a valid tar archive entry, and will be skipped. This occurs when:
  - a checksum fails,
  - a `linkpath` is missing for a link type, or
  - a `linkpath` is provided for a non-link type.

  If every entry in a parsed archive raises a `TAR_ENTRY_INVALID` error, then the archive is presumed to be unrecoverably broken, and `TAR_BAD_ARCHIVE` will be raised.
* `TAR_ENTRY_ERROR` The entry appears to be a valid tar archive entry, but encountered an error which prevented it from being unpacked. This occurs when:
  - an unrecoverable fs error happens during unpacking,
  - an entry has `..` in the path and `preservePaths` is not set, or
  - an entry is extracting through a symbolic link, when `preservePaths` is not set.
* `TAR_ENTRY_UNSUPPORTED` An indication that a given entry is a valid archive entry, but of a type that is unsupported, and so will be skipped in archive creation or extracting.
* `TAR_ABORT` When parsing gzipped-encoded archives, the parser will abort the parse process and raise a warning for any zlib errors encountered. Aborts are considered unrecoverable for both parsing and unpacking.
* `TAR_BAD_ARCHIVE` The archive file is totally hosed. This can happen for a number of reasons, and always occurs at the end of a parse or extract:
  - An entry body was truncated before seeing the full number of bytes.
  - The archive contained only invalid entries, indicating that it is likely not an archive, or at least, not an archive this library can parse.

  `TAR_BAD_ARCHIVE` is considered informative for parse operations, but unrecoverable for extraction. Note that, if encountered at the end of an extraction, tar WILL still have extracted as much as it could from the archive, so there may be some garbage files to clean up.

Errors that occur deeper in the system (ie, either the filesystem or zlib) will have their error codes left intact, and a `tarCode` matching one of the above will be added to the warning metadata or the raised error object.

Errors generated by tar will have one of the above codes set as the `error.code` field as well, but since errors originating in zlib or fs will have their original codes, it's better to read `error.tarCode` if you wish to see how tar is handling the issue.

### Examples

The API mimics the `tar(1)` command line functionality, with aliases for more human-readable option and function names. The goal is that if you know how to use `tar(1)` in Unix, then you know how to use `require('tar')` in JavaScript.

To replicate `tar czf my-tarball.tgz files and folders`, you'd do:

```js
tar.c(
  {
    gzip: <true|gzip options>,
    file: 'my-tarball.tgz'
  },
  ['some', 'files', 'and', 'folders']
).then(_ => { .. tarball has been created .. })
```

To replicate `tar cz files and folders > my-tarball.tgz`, you'd do:

```js
tar.c( // or tar.create
  {
    gzip: <true|gzip options>
  },
  ['some', 'files', 'and', 'folders']
).pipe(fs.createWriteStream('my-tarball.tgz'))
```

To replicate `tar xf my-tarball.tgz` you'd do:

```js
tar.x(  // or tar.extract(
  {
    file: 'my-tarball.tgz'
  }
).then(_=> { .. tarball has been dumped in cwd .. })
```

To replicate `cat my-tarball.tgz | tar x -C some-dir --strip=1`:

```js
fs.createReadStream('my-tarball.tgz').pipe(
  tar.x({
    strip: 1,
    C: 'some-dir' // alias for cwd:'some-dir', also ok
  })
)
```

To replicate `tar tf my-tarball.tgz`, do this:

```js
tar.t({
  file: 'my-tarball.tgz',
  onentry: entry => { .. do whatever with it .. }
})
```

To replicate `cat my-tarball.tgz | tar t` do:

```js
fs.createReadStream('my-tarball.tgz')
  .pipe(tar.t())
  .on('entry', entry => { .. do whatever with it .. })
```

To do anything synchronous, add `sync: true` to the options. Note that sync functions don't take a callback and don't return a promise. When the function returns, it's already done. Sync methods without a file argument return a sync stream, which flushes immediately. But, of course, it still won't be done until you `.end()` it.

To filter entries, add `filter: <function>` to the options. Tar-creating methods call the filter with `filter(path, stat)`. Tar-reading methods (including extraction) call the filter with `filter(path, entry)`. The filter is called in the `this`-context of the `Pack` or `Unpack` stream object.
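As a quick sketch of the two filter signatures described above (the archive name and the predicates are only illustrative):

```js
const tar = require('tar')

// Creation: `stat` is an fs.Stats-like object. Keep directories so their
// contents are still walked, and otherwise only pack .js files.
tar.c(
  { gzip: true, file: 'scripts.tgz', filter: (path, stat) => stat.isDirectory() || path.endsWith('.js') },
  ['src']
)

// Extraction: `entry` is a tar.ReadEntry. Skip anything 1 MB or larger.
tar.x({ file: 'scripts.tgz', filter: (path, entry) => entry.size < 1024 * 1024 })
```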
The arguments list to `tar t` and `tar x` specify a list of filenames to extract or list, so they're equivalent to a filter that tests if the file is in the list. For those who _aren't_ fans of tar's single-character command names: ``` tar.c === tar.create tar.r === tar.replace (appends to archive, file is required) tar.u === tar.update (appends if newer, file is required) tar.x === tar.extract tar.t === tar.list ``` Keep reading for all the command descriptions and options, as well as the low-level API that they are built on. ### tar.c(options, fileList, callback) [alias: tar.create] Create a tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Write the tarball archive to the specified filename. If this is specified, then the callback will be fired when the file has been written, and a promise will be returned that resolves when the file is written. If a filename is not specified, then a Readable Stream will be returned which will emit the file data. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. If this is set, and a file is not provided, then the resulting stream will already have the data ready to `read` or `emit('data')` as soon as you request it. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `mode` The mode to set on the created file archive - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. 
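As a rough sketch of how a few of these creation options combine (the file names, prefix, and fixed date are only examples):

```js
const tar = require('tar')

// Build a reproducible, gzipped archive of ./dist with every entry
// placed under a "package/" prefix.
tar.c(
  {
    gzip: true,
    file: 'package.tgz',
    cwd: 'dist',
    prefix: 'package',
    portable: true,                          // drop system-specific metadata
    mtime: new Date('2020-01-01T00:00:00Z')  // force a fixed mtime for every entry
  },
  ['.']
).then(() => { /* package.tgz has been written */ })
```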
The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. ### tar.x(options, fileList, callback) [alias: tar.extract] Extract a tarball archive. The `fileList` is an array of paths to extract from the tarball. If no paths are provided, then all the entries are extracted. If the archive is gzipped, then tar will detect this and unzip it. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. Most extraction errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then the extraction will fail completely. The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. [Alias: `C`] - `file` The archive file to extract. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Create files and directories synchronously. - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. [Alias: `keep-newer`, `keep-newer-files`] - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. [Alias: `k`, `keep-existing`] - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. [Alias: `P`] - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. [Alias: `U`] - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. [Alias: `strip-components`, `stripComponents`] - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. 
[Alias: `p`] - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. [Alias: `m`, `no-mtime`] - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync extractions. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### tar.t(options, fileList, callback) [alias: tar.list] List the contents of a tarball archive. The `fileList` is an array of paths to list from the tarball. If no paths are provided, then all the entries are listed. If the archive is gzipped, then tar will detect this and unzip it. Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. However, they don't emit `'data'` or `'end'` events. (If you want to get actual readable entries, use the `tar.Parse` class instead.) The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. [Alias: `C`] - `file` The archive file to list. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Read the specified file synchronously. (This has no effect when a file option isn't specified, because entries are emitted as fast as they are parsed from the stream anyway.) - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. This is important for when both `file` and `sync` are set, because it will be called synchronously. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noResume` By default, `entry` streams are resumed immediately after the call to `onentry`. Set `noResume: true` to suppress this behavior. 
Note that by opting into this, the stream will never complete until the entry data is consumed. ### tar.u(options, fileList, callback) [alias: tar.update] Add files to an archive if they are newer than the entry already in the tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ### tar.r(options, fileList, callback) [alias: tar.replace] Add files to an existing archive. Because later entries override earlier entries, this effectively replaces any existing entries. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. 
(See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ## Low-Level API ### class tar.Pack A readable tar stream. Has all the standard readable stream interface stuff. `'data'` and `'end'` events, `read()` method, `pause()` and `resume()`, etc. #### constructor(options) The following options are supported: - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. 
- `noDirRecurse` Do not recursively archive the contents of directories. - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. #### add(path) Adds an entry to the archive. Returns the Pack stream. #### write(path) Adds an entry to the archive. Returns true if flushed. #### end() Finishes the archive. ### class tar.Pack.Sync Synchronous version of `tar.Pack`. ### class tar.Unpack A writable stream that unpacks a tar archive onto the file system. All the normal writable stream stuff is supported. `write()` and `end()` methods, `'drain'` events, etc. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. `'close'` is emitted when it's done writing stuff to the file system. Most unpack errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then an error will be emitted. #### constructor(options) - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. 
If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. - `win32` True if on a windows platform. Causes behavior where filenames containing `<|>?` chars are converted to windows-compatible values while being unpacked. - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `strict` Treat warnings as crash-worthy errors. Default false. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") ### class tar.Unpack.Sync Synchronous version of `tar.Unpack`. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync unpack streams. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### class tar.Parse A writable stream that parses a tar archive stream. All the standard writable stream stuff is supported. If the archive is gzipped, then tar will detect this and unzip it. Emits `'entry'` events with `tar.ReadEntry` objects, which are themselves readable streams that you can pipe wherever. Each `entry` will not emit until the one before it is flushed through, so make sure to either consume the data (with `on('data', ...)` or `.pipe(...)`) or throw it away with `.resume()` to keep the stream flowing. #### constructor(options) Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. The following options are supported: - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") #### abort(error) Stop all parsing activities. This is called when there are zlib errors. It also emits an unrecoverable warning with the error provided. ### class tar.ReadEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being read out of a tar archive. It has the following fields: - `extended` The extended metadata object provided to the constructor. 
- `globalExtended` The global extended metadata object provided to the constructor. - `remain` The number of bytes remaining to be written into the stream. - `blockRemain` The number of 512-byte blocks remaining to be written into the stream. - `ignore` Whether this entry should be ignored. - `meta` True if this represents metadata about the next entry, false if it represents a filesystem object. - All the fields from the header, extended header, and global extended header are added to the ReadEntry object. So it has `path`, `type`, `size, `mode`, and so on. #### constructor(header, extended, globalExtended) Create a new ReadEntry object with the specified header, extended header, and global extended header values. ### class tar.WriteEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being written from the file system into a tar archive. Emits data for the Header, and for the Pax Extended Header if one is required, as well as any body data. Creating a WriteEntry for a directory does not also create WriteEntry objects for all of the directory contents. It has the following fields: - `path` The path field that will be written to the archive. By default, this is also the path from the cwd to the file system object. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `myuid` If supported, the uid of the user running the current process. - `myuser` The `env.USER` string if set, or `''`. Set as the entry `uname` field if the file's `uid` matches `this.myuid`. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly. - `strict` Treat warnings as crash-worthy errors. Default false. - `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/` and filenames containing the windows-compatible forms of `<|>?:` characters are converted to actual `<|>?:` characters in the archive. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. #### constructor(path, options) `path` is the path of the entry as it is written in the archive. The following options are supported: - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. 
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly. - `strict` Treat warnings as crash-worthy errors. Default false. - `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `umask` Set to restrict the modes on the entries in the archive, somewhat like how umask works on file creation. Defaults to `process.umask()` on unix systems, or `0o22` on Windows. #### warn(message, data) If strict, emit an error with the provided message. Othewise, emit a `'warn'` event with the provided message and data. ### class tar.WriteEntry.Sync Synchronous version of tar.WriteEntry ### class tar.WriteEntry.Tar A version of tar.WriteEntry that gets its data from a tar.ReadEntry instead of from the filesystem. #### constructor(readEntry, options) `readEntry` is the entry being read out of another archive. The following options are supported: - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `strict` Treat warnings as crash-worthy errors. Default false. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. ### class tar.Header A class for reading and writing header blocks. It has the following fields: - `nullBlock` True if decoding a block which is entirely composed of `0x00` null bytes. (Useful because tar files are terminated by at least 2 null blocks.) - `cksumValid` True if the checksum in the header is valid, false otherwise. - `needPax` True if the values, as encoded, will require a Pax extended header. - `path` The path of the entry. - `mode` The 4 lowest-order octal digits of the file mode. That is, read/write/execute permissions for world, group, and owner, and the setuid, setgid, and sticky bits. - `uid` Numeric user id of the file owner - `gid` Numeric group id of the file owner - `size` Size of the file in bytes - `mtime` Modified time of the file - `cksum` The checksum of the header. This is generated by adding all the bytes of the header block, treating the checksum field itself as all ascii space characters (that is, `0x20`). 
- `type` The human-readable name of the type of entry this represents, or the alphanumeric key if unknown.
- `typeKey` The alphanumeric key for the type of entry this header represents.
- `linkpath` The target of Link and SymbolicLink entries.
- `uname` Human-readable user name of the file owner
- `gname` Human-readable group name of the file owner
- `devmaj` The major portion of the device number. Always `0` for files, directories, and links.
- `devmin` The minor portion of the device number. Always `0` for files, directories, and links.
- `atime` File access time.
- `ctime` File change time.

#### constructor(data, [offset=0])

`data` is optional. It is either a Buffer that should be interpreted as a tar Header starting at the specified offset and continuing for 512 bytes, or a data object of keys and values to set on the header object, and eventually encode as a tar Header.

#### decode(block, offset)

Decode the provided buffer starting at the specified offset. Buffer length must be greater than 512 bytes.

#### set(data)

Set the fields in the data object.

#### encode(buffer, offset)

Encode the header fields into the buffer at the specified offset. Returns `this.needPax` to indicate whether a Pax Extended Header is required to properly encode the specified data.

### class tar.Pax

An object representing a set of key-value pairs in a Pax extended header entry.

It has the following fields. Where the same name is used, they have the same semantics as the tar.Header field of the same name.

- `global` True if this represents a global extended header, or false if it is for a single entry.
- `atime`
- `charset`
- `comment`
- `ctime`
- `gid`
- `gname`
- `linkpath`
- `mtime`
- `path`
- `size`
- `uid`
- `uname`
- `dev`
- `ino`
- `nlink`

#### constructor(object, global)

Set the fields set in the object. `global` is a boolean that defaults to false.

#### encode()

Return a Buffer containing the header and body for the Pax extended header entry, or `null` if there is nothing to encode.

#### encodeBody()

Return a string representing the body of the pax extended header entry.

#### encodeField(fieldName)

Return a string representing the key/value encoding for the specified fieldName, or `''` if the field is unset.

### tar.Pax.parse(string, extended, global)

Return a new Pax object created by parsing the contents of the string provided. If the `extended` object is set, then also add the fields from that object. (This is necessary because multiple metadata entries can occur in sequence.)

### tar.types

A translation table for the `type` field in tar headers.

#### tar.types.name.get(code)

Get the human-readable name for a given alphanumeric code.

#### tar.types.code.get(name)

Get the alphanumeric code for a given human-readable name.

# near-sdk-core

This package contains a convenient interface for interacting with NEAR's host runtime. To see the functions that are provided by the host node, see [`env.ts`](./assembly/env/env.ts).
# flat-cache

> A stupidly simple key/value storage using files to persist the data

[![NPM Version](http://img.shields.io/npm/v/flat-cache.svg?style=flat)](https://npmjs.org/package/flat-cache) [![Build Status](https://api.travis-ci.org/royriojas/flat-cache.svg?branch=master)](https://travis-ci.org/royriojas/flat-cache)

## install

```bash
npm i --save flat-cache
```

## Usage

```js
var flatCache = require('flat-cache')
// loads the cache, if one does not exist for the given
// Id a new one will be prepared to be created
var cache = flatCache.load('cacheId');

// sets a key on the cache
cache.setKey('key', { foo: 'var' });

// get a key from the cache
cache.getKey('key') // { foo: 'var' }

// fetch the entire persisted object
cache.all() // { 'key': { foo: 'var' } }

// remove a key
cache.removeKey('key'); // removes a key from the cache

// save it to disk
cache.save(); // very important, if you don't save, no changes will be persisted.
// cache.save( true /* noPrune */) // can be used to prevent the removal of non visited keys

// loads the cache from a given directory, if one does
// not exist for the given Id a new one will be prepared to be created
var cache = flatCache.load('cacheId', path.resolve('./path/to/folder'));

// The following methods are useful to clear the cache
// delete a given cache
flatCache.clearCacheById('cacheId') // removes the cacheId document if one exists.

// delete all cache
flatCache.clearAll(); // remove the cache directory
```

## Motivation for this module

I needed a super simple and dumb **in-memory cache** with optional disk persistence in order to make a script that beautifies files with `esformatter` execute only on the files that were changed since the last run. To make that possible we need to store the `fileSize` and `modificationTime` of the files. So a simple `key/value` storage was needed and Bam! this module was born.

## Important notes

- If no directory is specified when the `load` method is called, a folder named `.cache` will be created inside the module directory when `cache.save` is called. If you're committing your `node_modules` to any vcs, you might want to ignore the default `.cache` folder, or specify a custom directory.
- The values set on the keys of the cache should be `stringify-able` ones, meaning no circular references
- All the changes to the cache state are done to memory
- I could have used a timer or `Object.observe` to deliver the changes to disk, but I wanted to keep this module intentionally dumb and simple
- Non-visited keys are removed when `cache.save()` is called. If this is not desired, you can pass `true` to the save call like: `cache.save( true /* noPrune */ )`.

## License

MIT

## Changelog

[changelog](./changelog.md)

# minimatch

A minimal matching utility.

[![Build Status](https://secure.travis-ci.org/isaacs/minimatch.svg)](http://travis-ci.org/isaacs/minimatch)

This is the matching library used internally by npm. It works by converting glob expressions into JavaScript `RegExp` objects.

## Usage

```javascript
var minimatch = require("minimatch")

minimatch("bar.foo", "*.foo") // true!
minimatch("bar.foo", "*.bar") // false!
minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy!
```

## Features

Supports these glob features:

* Brace Expansion
* Extended glob matching
* "Globstar" `**` matching

See:

* `man sh`
* `man bash`
* `man 3 fnmatch`
* `man 5 gitignore`

## Minimatch Class

Create a minimatch object by instantiating the `minimatch.Minimatch` class.
```javascript var Minimatch = require("minimatch").Minimatch var mm = new Minimatch(pattern, options) ``` ### Properties * `pattern` The original pattern the minimatch object represents. * `options` The options supplied to the constructor. * `set` A 2-dimensional array of regexp or string expressions. Each row in the array corresponds to a brace-expanded pattern. Each item in the row corresponds to a single path-part. For example, the pattern `{a,b/c}/d` would expand to a set of patterns like: [ [ a, d ] , [ b, c, d ] ] If a portion of the pattern doesn't have any "magic" in it (that is, it's something like `"foo"` rather than `fo*o?`), then it will be left as a string rather than converted to a regular expression. * `regexp` Created by the `makeRe` method. A single regular expression expressing the entire pattern. This is useful in cases where you wish to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled. * `negate` True if the pattern is negated. * `comment` True if the pattern is a comment. * `empty` True if the pattern is `""`. ### Methods * `makeRe` Generate the `regexp` member if necessary, and return it. Will return `false` if the pattern is invalid. * `match(fname)` Return true if the filename matches the pattern, or false otherwise. * `matchOne(fileArray, patternArray, partial)` Take a `/`-split filename, and match it against a single row in the `regExpSet`. This method is mainly for internal use, but is exposed so that it can be used by a glob-walker that needs to avoid excessive filesystem calls. All other methods are internal, and will be called as necessary. ### minimatch(path, pattern, options) Main export. Tests a path against the pattern using the options. ```javascript var isJS = minimatch(file, "*.js", { matchBase: true }) ``` ### minimatch.filter(pattern, options) Returns a function that tests its supplied argument, suitable for use with `Array.filter`. Example: ```javascript var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true})) ``` ### minimatch.match(list, pattern, options) Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself. ```javascript var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})) ``` ### minimatch.makeRe(pattern, options) Make a regular expression object from the pattern. ## Options All options are `false` by default. ### debug Dump a ton of stuff to stderr. ### nobrace Do not expand `{a,b}` and `{1..3}` brace sets. ### noglobstar Disable `**` matching against multiple folder names. ### dot Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot. Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot` is set. ### noext Disable "extglob" style patterns like `+(a|b)`. ### nocase Perform a case-insensitive match. ### nonull When a match is not found by `minimatch.match`, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches. ### matchBase If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. ### nocomment Suppress the behavior of treating `#` at the start of a pattern as a comment. ### nonegate Suppress the behavior of treating a leading `!` character as negation. 
### flipNegate

Returns from negate expressions the same as if they were not negated. (Ie, true on a hit, false on a miss.)

## Comparisons to other fnmatch/glob implementations

While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional.

If the pattern starts with a `!` character, then it is negated. Set the `nonegate` flag to suppress this behavior, and treat leading `!` characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like `!(a|B)`. Multiple `!` characters at the start of a pattern will negate the pattern multiple times.

If a pattern starts with `#`, then it is treated as a comment, and will not match anything. Use `\#` to match a literal `#` at the start of a line, or set the `nocomment` flag to suppress this behavior.

The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.1, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not.

If an escaped pattern has no matches, and the `nonull` flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, `minimatch.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters.

If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds.

## Timezone support

In order to provide support for timezones, without relying on the JavaScript host or any other time-zone aware environment, this library makes use of the IANA Timezone Database directly: https://www.iana.org/time-zones

The database files are parsed by the scripts in this folder, which emit AssemblyScript code which is used to process the various rules at runtime.

# is-core-module

<sup>[![Version Badge][2]][1]</sup>

[![github actions][actions-image]][actions-url] [![coverage][codecov-image]][codecov-url] [![dependency status][5]][6] [![dev dependency status][7]][8] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url]

[![npm badge][11]][1]

Is this specifier a node.js core module? Optionally provide a node version to check; defaults to the current node version.
## Example ```js var isCore = require('is-core-module'); var assert = require('assert'); assert(isCore('fs')); assert(!isCore('butts')); ``` ## Tests Clone the repo, `npm install`, and run `npm test` [1]: https://npmjs.org/package/is-core-module [2]: https://versionbadg.es/inspect-js/is-core-module.svg [5]: https://david-dm.org/inspect-js/is-core-module.svg [6]: https://david-dm.org/inspect-js/is-core-module [7]: https://david-dm.org/inspect-js/is-core-module/dev-status.svg [8]: https://david-dm.org/inspect-js/is-core-module#info=devDependencies [11]: https://nodei.co/npm/is-core-module.png?downloads=true&stars=true [license-image]: https://img.shields.io/npm/l/is-core-module.svg [license-url]: LICENSE [downloads-image]: https://img.shields.io/npm/dm/is-core-module.svg [downloads-url]: https://npm-stat.com/charts.html?package=is-core-module [codecov-image]: https://codecov.io/gh/inspect-js/is-core-module/branch/main/graphs/badge.svg [codecov-url]: https://app.codecov.io/gh/inspect-js/is-core-module/ [actions-image]: https://img.shields.io/endpoint?url=https://github-actions-badge-u3jn4tfpocch.runkit.sh/inspect-js/is-core-module [actions-url]: https://github.com/inspect-js/is-core-module/actions # fast-json-stable-stringify Deterministic `JSON.stringify()` - a faster version of [@substack](https://github.com/substack)'s json-stable-stringify without [jsonify](https://github.com/substack/jsonify). You can also pass in a custom comparison function. [![Build Status](https://travis-ci.org/epoberezkin/fast-json-stable-stringify.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-json-stable-stringify) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-json-stable-stringify/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-json-stable-stringify?branch=master) # example ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; console.log(stringify(obj)); ``` output: ``` {"a":3,"b":[{"x":4,"y":5,"z":6},7],"c":8} ``` # methods ``` js var stringify = require('fast-json-stable-stringify') ``` ## var str = stringify(obj, opts) Return a deterministic stringified string `str` from the object `obj`. ## options ### cmp If `opts` is given, you can supply an `opts.cmp` to have a custom comparison function for object keys. Your function `opts.cmp` is called with these parameters: ``` js opts.cmp({ key: akey, value: avalue }, { key: bkey, value: bvalue }) ``` For example, to sort on the object key names in reverse order you could write: ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; var s = stringify(obj, function (a, b) { return a.key < b.key ? 1 : -1; }); console.log(s); ``` which results in the output string: ``` {"c":8,"b":[{"z":6,"y":5,"x":4},7],"a":3} ``` Or if you wanted to sort on the object values in reverse order, you could write: ``` var stringify = require('fast-json-stable-stringify'); var obj = { d: 6, c: 5, b: [{z:3,y:2,x:1},9], a: 10 }; var s = stringify(obj, function (a, b) { return a.value < b.value ? 1 : -1; }); console.log(s); ``` which outputs: ``` {"d":6,"c":5,"b":[{"z":3,"y":2,"x":1},9],"a":10} ``` ### cycles Pass `true` in `opts.cycles` to stringify circular properties as `__cycle__` - the result will not be a valid JSON string in this case. A TypeError will be thrown for a circular object without this option.
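To make the `cycles` option concrete, here is a minimal sketch; the object shape and variable names are illustrative, not part of the library:

```js
var stringify = require('fast-json-stable-stringify');

var obj = { name: 'root' };
obj.self = obj; // circular reference

// Without { cycles: true } this would throw a TypeError.
console.log(stringify(obj, { cycles: true }));
// => {"name":"root","self":"__cycle__"}
```

The placeholder keeps stringification from throwing, but as noted above the result can no longer be parsed back into the original circular structure.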
# install With [npm](https://npmjs.org) do: ``` npm install fast-json-stable-stringify ``` # benchmark To run benchmark (requires Node.js 6+): ``` node benchmark ``` Results: ``` fast-json-stable-stringify x 17,189 ops/sec ±1.43% (83 runs sampled) json-stable-stringify x 13,634 ops/sec ±1.39% (85 runs sampled) fast-stable-stringify x 20,212 ops/sec ±1.20% (84 runs sampled) faster-stable-stringify x 15,549 ops/sec ±1.12% (84 runs sampled) The fastest is fast-stable-stringify ``` ## Enterprise support fast-json-stable-stringify package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-json-stable-stringify?utm_source=npm-fast-json-stable-stringify&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. # license [MIT](https://github.com/epoberezkin/fast-json-stable-stringify/blob/master/LICENSE) # v8-compile-cache [![Build Status](https://travis-ci.org/zertosh/v8-compile-cache.svg?branch=master)](https://travis-ci.org/zertosh/v8-compile-cache) `v8-compile-cache` attaches a `require` hook to use [V8's code cache](https://v8project.blogspot.com/2015/07/code-caching.html) to speed up instantiation time. The "code cache" is the work of parsing and compiling done by V8. The ability to tap into V8 to produce/consume this cache was introduced in [Node v5.7.0](https://nodejs.org/en/blog/release/v5.7.0/). ## Usage 1. Add the dependency: ```sh $ npm install --save v8-compile-cache ``` 2. Then, in your entry module add: ```js require('v8-compile-cache'); ``` **Requiring `v8-compile-cache` in Node <5.7.0 is a noop – but you need at least Node 4.0.0 to support the ES2015 syntax used by `v8-compile-cache`.** ## Options Set the environment variable `DISABLE_V8_COMPILE_CACHE=1` to disable the cache. Cache directory is defined by environment variable `V8_COMPILE_CACHE_CACHE_DIR` or defaults to `<os.tmpdir()>/v8-compile-cache-<V8_VERSION>`. ## Internals Cache files are suffixed `.BLOB` and `.MAP` corresponding to the entry module that required `v8-compile-cache`. The cache is _entry module specific_ because it is faster to load the entire code cache into memory at once, than it is to read it from disk on a file-by-file basis. ## Benchmarks See https://github.com/zertosh/v8-compile-cache/tree/master/bench. 
**Load Times:**

| Module | Without Cache | With Cache |
| ---------------- | -------------:| ----------:|
| `babel-core` | `218ms` | `185ms` |
| `yarn` | `153ms` | `113ms` |
| `yarn` (bundled) | `228ms` | `105ms` |

_^ Includes the overhead of loading the cache itself._ ## Acknowledgements * `FileSystemBlobStore` and `NativeCompileCache` are based on Atom's implementation of their v8 compile cache: - https://github.com/atom/atom/blob/b0d7a8a/src/file-system-blob-store.js - https://github.com/atom/atom/blob/b0d7a8a/src/native-compile-cache.js * `mkdirpSync` is based on: - https://github.com/substack/node-mkdirp/blob/f2003bb/index.js#L55-L98 # jsdiff [![Build Status](https://secure.travis-ci.org/kpdecker/jsdiff.svg)](http://travis-ci.org/kpdecker/jsdiff) [![Sauce Test Status](https://saucelabs.com/buildstatus/jsdiff)](https://saucelabs.com/u/jsdiff) A javascript text differencing implementation. Based on the algorithm proposed in ["An O(ND) Difference Algorithm and its Variations" (Myers, 1986)](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6927). ## Installation ```bash npm install diff --save ``` ## API * `Diff.diffChars(oldStr, newStr[, options])` - diffs two blocks of text, comparing character by character. Returns a list of change objects (See below). Options * `ignoreCase`: `true` to ignore casing difference. Defaults to `false`. * `Diff.diffWords(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, ignoring whitespace. Returns a list of change objects (See below). Options * `ignoreCase`: Same as in `diffChars`. * `Diff.diffWordsWithSpace(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, treating whitespace as significant. Returns a list of change objects (See below). * `Diff.diffLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line. Options * `ignoreWhitespace`: `true` to ignore leading and trailing whitespace. This is the same as `diffTrimmedLines` * `newlineIsToken`: `true` to treat newline characters as separate tokens. This allows for changes to the newline structure to occur independently of the line content and to be treated as such. In general this is the more human friendly form of `diffLines` and `diffLines` is better suited for patches and other computer friendly output. Returns a list of change objects (See below). * `Diff.diffTrimmedLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line, ignoring leading and trailing whitespace. Returns a list of change objects (See below). * `Diff.diffSentences(oldStr, newStr[, options])` - diffs two blocks of text, comparing sentence by sentence. Returns a list of change objects (See below). * `Diff.diffCss(oldStr, newStr[, options])` - diffs two blocks of text, comparing CSS tokens. Returns a list of change objects (See below). * `Diff.diffJson(oldObj, newObj[, options])` - diffs two JSON objects, comparing the fields defined on each. The order of fields, etc does not matter in this comparison. Returns a list of change objects (See below). * `Diff.diffArrays(oldArr, newArr[, options])` - diffs two arrays, comparing each item for strict equality (===). Options * `comparator`: `function(left, right)` for custom equality checks Returns a list of change objects (See below). * `Diff.createTwoFilesPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch.
Parameters: * `oldFileName` : String to be output in the filename section of the patch for the removals * `newFileName` : String to be output in the filename section of the patch for the additions * `oldStr` : Original string value * `newStr` : New string value * `oldHeader` : Additional information to include in the old file header * `newHeader` : Additional information to include in the new file header * `options` : An object with options. Currently, only `context` is supported and describes how many lines of context should be included. * `Diff.createPatch(fileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Just like Diff.createTwoFilesPatch, but with oldFileName being equal to newFileName. * `Diff.structuredPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options)` - returns an object with an array of hunk objects. This method is similar to createTwoFilesPatch, but returns a data structure suitable for further processing. Parameters are the same as createTwoFilesPatch. The data structure returned may look like this: ```js { oldFileName: 'oldfile', newFileName: 'newfile', oldHeader: 'header1', newHeader: 'header2', hunks: [{ oldStart: 1, oldLines: 3, newStart: 1, newLines: 3, lines: [' line2', ' line3', '-line4', '+line5', '\\ No newline at end of file'], }] } ``` * `Diff.applyPatch(source, patch[, options])` - applies a unified diff patch. Returns a string containing the new version of the provided data. `patch` may be a string diff or the output from the `parsePatch` or `structuredPatch` methods. The optional `options` object may have the following keys: - `fuzzFactor`: Number of lines that are allowed to differ before rejecting a patch. Defaults to 0. - `compareLine(lineNumber, line, operation, patchContent)`: Callback used to compare the given lines to determine if they should be considered equal when patching. Defaults to strict equality but may be overridden to provide fuzzier comparison. Should return false if the lines should be rejected. * `Diff.applyPatches(patch, options)` - applies one or more patches. This method will iterate over the contents of the patch and apply them to the data provided through callbacks. The general flow for each patch index is: - `options.loadFile(index, callback)` is called. The caller should then load the contents of the file and then pass that to the `callback(err, data)` callback. Passing an `err` will terminate further patch execution. - `options.patched(index, content, callback)` is called once the patch has been applied. `content` will be the return value from `applyPatch`. When it's ready, the caller should call the `callback(err)` callback. Passing an `err` will terminate further patch execution. Once all patches have been applied or an error occurs, the `options.complete(err)` callback is made. * `Diff.parsePatch(diffStr)` - Parses a patch into structured data. Returns a JSON object representation of the given patch, suitable for use with the `applyPatch` method. This parses to the same structure returned by `Diff.structuredPatch`. * `convertChangesToXML(changes)` - converts a list of changes to a serialized XML format. All methods above which accept the optional `callback` method will run in sync mode when that parameter is omitted and in async mode when supplied. This allows for larger diffs without blocking the event loop. This may be passed either directly as the final parameter or as the `callback` field in the `options` object. ### Change Objects Many of the methods above return change objects.
These objects consist of the following fields: * `value`: Text content * `added`: True if the value was inserted into the new string * `removed`: True if the value was removed from the old string Note that some cases may omit a particular flag field. Comparison on the flag fields should always be done in a truthy or falsy manner. ## Examples Basic example in Node ```js require('colors'); const Diff = require('diff'); const one = 'beep boop'; const other = 'beep boob blah'; const diff = Diff.diffChars(one, other); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; process.stderr.write(part.value[color]); }); console.log(); ``` Running the above program should yield <img src="images/node_example.png" alt="Node Example"> Basic example in a web page ```html <pre id="display"></pre> <script src="diff.js"></script> <script> const one = 'beep boop', other = 'beep boob blah', color = ''; let span = null; const diff = Diff.diffChars(one, other), display = document.getElementById('display'), fragment = document.createDocumentFragment(); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; span = document.createElement('span'); span.style.color = color; span.appendChild(document .createTextNode(part.value)); fragment.appendChild(span); }); display.appendChild(fragment); </script> ``` Open the above .html file in a browser and you should see <img src="images/web_example.png" alt="Web Example"> **[Full online demo](http://kpdecker.github.com/jsdiff)** ## Compatibility [![Sauce Test Status](https://saucelabs.com/browser-matrix/jsdiff.svg)](https://saucelabs.com/u/jsdiff) jsdiff supports all ES3 environments with some known issues on IE8 and below. Under these browsers some diff algorithms such as word diff and others may fail due to lack of support for capturing groups in the `split` operation. ## License See [LICENSE](https://github.com/kpdecker/jsdiff/blob/master/LICENSE). # Near Bindings Generator Transforms the AssemblyScript AST to serialize exported functions and add `encode` and `decode` functions for generating and parsing JSON strings. ## Using via CLI After installing it (`npm install nearprotocol/near-bindgen-as`), add it to the CLI arguments of the AssemblyScript compiler as follows: ```bash asc <file> --transform near-bindgen-as ... ``` This module also adds a binary `near-asc` which adds the default arguments required to build near contracts as well as the transformer. ```bash near-asc <input file> <output file> ``` ## Using a script to compile Another way is to add a file such as `asconfig.js`, for example: ```js const compile = require("near-bindgen-as/compiler").compile; compile("assembly/index.ts", // input file "out/index.wasm", // output file [ // "-O1", // Optional arguments "--debug", "--measure" ], // Prints out the final cli arguments passed to compiler. {verbose: true} ); ``` It can then be built with `node asconfig.js`. There is an example of this in the test directory. These files are compiled dot templates from dot folder. Do NOT edit them directly, edit the templates and run `npm run build` from main ajv folder. # randexp.js randexp will generate a random string that matches a given RegExp Javascript object.
[![Build Status](https://secure.travis-ci.org/fent/randexp.js.svg)](http://travis-ci.org/fent/randexp.js) [![Dependency Status](https://david-dm.org/fent/randexp.js.svg)](https://david-dm.org/fent/randexp.js) [![codecov](https://codecov.io/gh/fent/randexp.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/randexp.js) # Usage ```js var RandExp = require('randexp'); // supports grouping and piping new RandExp(/hello+ (world|to you)/).gen(); // => hellooooooooooooooooooo world // sets and ranges and references new RandExp(/<([a-z]\w{0,20})>foo<\1>/).gen(); // => <m5xhdg>foo<m5xhdg> // wildcard new RandExp(/random stuff: .+/).gen(); // => random stuff: l3m;Hf9XYbI [YPaxV>U*4-_F!WXQh9>;rH3i l!8.zoh?[utt1OWFQrE ^~8zEQm]~tK // ignore case new RandExp(/xxx xtreme dragon warrior xxx/i).gen(); // => xxx xtReME dRAGON warRiOR xXX // dynamic regexp shortcut new RandExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i'); // is the same as new RandExp(new RegExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i')); ``` If you're only going to use `gen()` once with a regexp and want slightly shorter syntax for it ```js var randexp = require('randexp').randexp; randexp(/[1-6]/); // 4 randexp('great|good( job)?|excellent'); // great ``` If you miss the old syntax ```js require('randexp').sugar(); /yes|no|maybe|i don't know/.gen(); // maybe ``` # Motivation Regular expressions are used in every language, and every programmer is familiar with them. Regex can be used to easily express complex strings. What better way to generate a random string than with a language you can use to express the string you want? Thanks to [String-Random](http://search.cpan.org/~steve/String-Random-0.22/lib/String/Random.pm) for giving me the idea to make this in the first place and [randexp](https://github.com/benburkert/randexp) for the sweet `.gen()` syntax. # Default Range The default generated character range includes printable ASCII. In order to add or remove characters, a `defaultRange` attribute is exposed. You can `subtract(from, to)` and `add(from, to)` ```js var randexp = new RandExp(/random stuff: .+/); randexp.defaultRange.subtract(32, 126); randexp.defaultRange.add(0, 65535); randexp.gen(); // => random stuff: 湐箻ໜ䫴␩⶛㳸長邓蕲뤀쑡篷皇硬剈궦佔칗븛뀃匫鴔事좍ﯣ⭼ꝏ䭍詳蒂䥂뽭 ``` # Custom PRNG The default randomness is provided by `Math.random()`. If you need to use a seedable or cryptographic PRNG, you can override `RandExp.prototype.randInt` or `randexp.randInt` (where `randexp` is an instance of `RandExp`). `randInt(from, to)` accepts an inclusive range and returns a randomly selected number within that range. # Infinite Repetitionals Repetitional tokens such as `*`, `+`, and `{3,}` have an infinite max range. In this case, randexp looks at its min and adds 100 to it to get a usable max value. If you want to use an int other than 100 you can change the `max` property in `RandExp.prototype` or the RandExp instance. ```js var randexp = new RandExp(/no{1,}/); randexp.max = 1000000; ``` With `RandExp.sugar()` ```js var regexp = /(hi)*/; regexp.max = 1000000; ``` # Bad Regular Expressions There are some regular expressions which can never match any string. * Ones with badly placed positionals such as `/a^/` and `/$c/m`. Randexp will ignore positional tokens. * Back references to non-existing groups like `/(a)\1\2/`. Randexp will ignore those references, returning an empty string for them. If the group exists only after the reference is used such as in `/\1 (hey)/`, it too will be ignored.
* Custom negated character sets with two sets inside that cancel each other out. Example: `/[^\w\W]/`. If you give this to randexp, it will return an empty string for this set since it can't match anything. # Projects based on randexp.js ## JSON-Schema Faker Use generators to populate JSON Schema samples. See: [jsf on github](https://github.com/json-schema-faker/json-schema-faker/) and [jsf demo page](http://json-schema-faker.js.org/). # Install ### Node.js npm install randexp ### Browser Download the [minified version](https://github.com/fent/randexp.js/releases) from the latest release. # Tests Tests are written with [mocha](https://mochajs.org) ```bash npm test ``` # License MIT # flatted [![Downloads](https://img.shields.io/npm/dm/flatted.svg)](https://www.npmjs.com/package/flatted) [![Coverage Status](https://coveralls.io/repos/github/WebReflection/flatted/badge.svg?branch=main)](https://coveralls.io/github/WebReflection/flatted?branch=main) [![Build Status](https://travis-ci.com/WebReflection/flatted.svg?branch=main)](https://travis-ci.com/WebReflection/flatted) [![License: ISC](https://img.shields.io/badge/License-ISC-yellow.svg)](https://opensource.org/licenses/ISC) ![WebReflection status](https://offline.report/status/webreflection.svg) ![snow flake](./flatted.jpg) <sup>**Social Media Photo by [Matt Seymour](https://unsplash.com/@mattseymour) on [Unsplash](https://unsplash.com/)**</sup> A super light (0.5K) and fast circular JSON parser, directly from the creator of [CircularJSON](https://github.com/WebReflection/circular-json/#circularjson). Now available also for **[PHP](./php/flatted.php)**. ```js npm i flatted ``` Usable via [CDN](https://unpkg.com/flatted) or as a regular module. ```js // ESM import {parse, stringify, toJSON, fromJSON} from 'flatted'; // CJS const {parse, stringify, toJSON, fromJSON} = require('flatted'); const a = [{}]; a[0].a = a; a.push(a); stringify(a); // [["1","0"],{"a":"0"}] ``` ## toJSON and fromJSON If you'd like to implicitly survive JSON serialization, these two helpers help: ```js import {toJSON, fromJSON} from 'flatted'; class RecursiveMap extends Map { static fromJSON(any) { return new this(fromJSON(any)); } toJSON() { return toJSON([...this.entries()]); } } const recursive = new RecursiveMap; const same = {}; same.same = same; recursive.set('same', same); const asString = JSON.stringify(recursive); const asMap = RecursiveMap.fromJSON(JSON.parse(asString)); asMap.get('same') === asMap.get('same').same; // true ``` ## Flatted VS JSON As it is for every other specialized format capable of serializing and deserializing circular data, you should never `JSON.parse(Flatted.stringify(data))`, and you should never `Flatted.parse(JSON.stringify(data))`. The only way this could work is to `Flatted.parse(Flatted.stringify(data))`, as it is also for _CircularJSON_ or any other, otherwise there's no guaranteed data integrity. Also please note this project serializes and deserializes only data compatible with JSON, so that sockets, or anything else with internal classes different from those allowed by JSON standard, won't be serialized and unserialized as expected. ### New in V1: Exact same JSON API * Added a [reviver](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#Syntax) parameter to `.parse(string, reviver)` and revive your own objects.
* Added a [replacer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#Syntax) and a `space` parameter to `.stringify(object, replacer, space)` for feature parity with JSON signature. ### Compatibility All ECMAScript engines compatible with `Map`, `Set`, `Object.keys`, and `Array.prototype.reduce` will work, even if polyfilled. ### How does it work ? While stringifying, all Objects, including Arrays, and strings, are flattened out and replaced as unique index. `*` Once parsed, all indexes will be replaced through the flattened collection. <sup><sub>`*` represented as string to avoid conflicts with numbers</sub></sup> ```js // logic example var a = [{one: 1}, {two: '2'}]; a[0].a = a; // a is the main object, will be at index '0' // {one: 1} is the second object, index '1' // {two: '2'} the third, in '2', and it has a string // which will be found at index '3' Flatted.stringify(a); // [["1","2"],{"one":1,"a":"0"},{"two":"3"},"2"] // a[one,two] {one: 1, a} {two: '2'} '2' ``` # y18n [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js const __ = require('y18n')().__; console.log(__('my awesome string %s', 'foo')); ``` output: `my awesome string foo` _using tagged template literals_ ```js const __ = require('y18n')().__; const str = 'foo'; console.log(__`my awesome string ${str}`); ``` output: `my awesome string foo` _pluralization support:_ ```js const __n = require('y18n')().__n; console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')); ``` output: `2 fishes foo` ## Deno Example As of `v5` `y18n` supports [Deno](https://github.com/denoland/deno): ```typescript import y18n from "https://deno.land/x/y18n/deno.ts"; const __ = y18n({ locale: 'pirate', directory: './test/locales' }).__ console.info(__`Hi, ${'Ben'} ${'Coe'}!`) ``` You will need to run with `--allow-read` to load alternative locales. ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? 
### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## License ISC [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard # fs-minipass Filesystem streams based on [minipass](http://npm.im/minipass). 4 classes are exported: - ReadStream - ReadStreamSync - WriteStream - WriteStreamSync When using `ReadStreamSync`, all of the data is made available immediately upon consuming the stream. Nothing is buffered in memory when the stream is constructed. If the stream is piped to a writer, then it will synchronously `read()` and emit data into the writer as fast as the writer can consume it. (That is, it will respect backpressure.) If you call `stream.read()` then it will read the entire file and return the contents. When using `WriteStreamSync`, every write is flushed to the file synchronously. If your writes all come in a single tick, then it'll write it all out in a single tick. It's as synchronous as you are. The async versions work much like their node builtin counterparts, with the exception of introducing significantly less Stream machinery overhead. ## USAGE It's just streams, you pipe them or read() them or write() to them. ```js const fsm = require('fs-minipass') const readStream = new fsm.ReadStream('file.txt') const writeStream = new fsm.WriteStream('output.txt') writeStream.write('some file header or whatever\n') readStream.pipe(writeStream) ``` ## ReadStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `readSize` The size of reads to do, defaults to 16MB - `size` The size of the file, if known. Prevents zero-byte read() call at the end. - `autoClose` Set to `false` to prevent the file descriptor from being closed when the file is done being read. ## WriteStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `mode` The mode to create the file with. Defaults to `0o666`. - `start` The position in the file to start writing. If not specified, then the file will start writing at position zero, and be truncated by default. - `autoClose` Set to `false` to prevent the file descriptor from being closed when the stream is ended. - `flags` Flags to use when opening the file. Irrelevant if `fd` is passed in, since file won't be opened in that case. Defaults to `'a'` if a `pos` is specified, or `'w'` otherwise.
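As a rough illustration of the options above, here is a minimal sketch; the file names and option values are assumptions chosen for the example, not defaults or recommendations:

```js
const fsm = require('fs-minipass')

// Read in 1MB chunks instead of the 16MB default, and keep the
// file descriptor open after the stream ends.
const readStream = new fsm.ReadStream('input.log', {
  readSize: 1024 * 1024,
  autoClose: false,
})

// Create the output file with an explicit mode.
const writeStream = new fsm.WriteStream('output.log', {
  mode: 0o644,
})

readStream.pipe(writeStream)
```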
# eslint-visitor-keys [![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys) Constants and utilities about visitor keys to traverse AST. ## 💿 Installation Use [npm] to install. ```bash $ npm install eslint-visitor-keys ``` ### Requirements - [Node.js] 4.0.0 or later. ## 📖 Usage ```js const evk = require("eslint-visitor-keys") ``` ### evk.KEYS > type: `{ [type: string]: string[] | undefined }` Visitor keys. These keys are frozen. This is an object. Keys are the type of [ESTree] nodes. Their values are an array of property names which have child nodes. For example: ``` console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"] ``` ### evk.getKeys(node) > type: `(node: object) => string[]` Get the visitor keys of a given AST node. This is similar to `Object.keys(node)` of ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`. This will be used to traverse unknown nodes. For example: ``` const node = { type: "AssignmentExpression", left: { type: "Identifier", name: "foo" }, right: { type: "Literal", value: 0 } } console.log(evk.getKeys(node)) // → ["type", "left", "right"] ``` ### evk.unionWith(additionalKeys) > type: `(additionalKeys: object) => { [type: string]: string[] | undefined }` Make the union set with `evk.KEYS` and the given keys. - The order of keys is: `additionalKeys` first, then `evk.KEYS` concatenated after that. - It removes duplicated keys, keeping the first one. For example: ``` console.log(evk.unionWith({ MethodDefinition: ["decorators"] })) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... } ``` ## 📰 Change log See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases). ## 🍻 Contributing Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/). ### Development commands - `npm test` runs tests and measures code coverage. - `npm run lint` checks source code with ESLint. - `npm run coverage` opens the code coverage report of the previous test with your default browser. - `npm run release` publishes this package to the [npm] registry. [npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree # function-bind <!-- [![build status][travis-svg]][travis-url] [![NPM version][npm-badge-svg]][npm-url] [![Coverage Status][5]][6] [![gemnasium Dependency Status][7]][8] [![Dependency status][deps-svg]][deps-url] [![Dev Dependency status][dev-deps-svg]][dev-deps-url] --> <!-- [![browser support][11]][12] --> Implementation of function.prototype.bind ## Example I mainly do this for unit tests I run on phantomjs.
PhantomJS does not have Function.prototype.bind :( ```js Function.prototype.bind = require("function-bind") ``` ## Installation `npm install function-bind` ## Contributors - Raynos ## MIT Licenced [travis-svg]: https://travis-ci.org/Raynos/function-bind.svg [travis-url]: https://travis-ci.org/Raynos/function-bind [npm-badge-svg]: https://badge.fury.io/js/function-bind.svg [npm-url]: https://npmjs.org/package/function-bind [5]: https://coveralls.io/repos/Raynos/function-bind/badge.png [6]: https://coveralls.io/r/Raynos/function-bind [7]: https://gemnasium.com/Raynos/function-bind.png [8]: https://gemnasium.com/Raynos/function-bind [deps-svg]: https://david-dm.org/Raynos/function-bind.svg [deps-url]: https://david-dm.org/Raynos/function-bind [dev-deps-svg]: https://david-dm.org/Raynos/function-bind/dev-status.svg [dev-deps-url]: https://david-dm.org/Raynos/function-bind#info=devDependencies [11]: https://ci.testling.com/Raynos/function-bind.png [12]: https://ci.testling.com/Raynos/function-bind # AssemblyScript Loader A convenient loader for [AssemblyScript](https://assemblyscript.org) modules. Demangles module exports to a friendly object structure compatible with TypeScript definitions and provides useful utility to read/write data from/to memory. [Documentation](https://assemblyscript.org/loader.html) ![](cow.png) Moo! ==== Moo is a highly-optimised tokenizer/lexer generator. Use it to tokenize your strings, before parsing 'em with a parser like [nearley](https://github.com/hardmath123/nearley) or whatever else you're into. * [Fast](#is-it-fast) * [Convenient](#usage) * uses [Regular Expressions](#on-regular-expressions) * tracks [Line Numbers](#line-numbers) * handles [Keywords](#keywords) * supports [States](#states) * custom [Errors](#errors) * is even [Iterable](#iteration) * has no dependencies * 4KB minified + gzipped * Moo! Is it fast? ----------- Yup! Flying-cows-and-singed-steak fast. Moo is the fastest JS tokenizer around. It's **~2–10x** faster than most other tokenizers; it's a **couple orders of magnitude** faster than some of the slower ones. Define your tokens **using regular expressions**. Moo will compile 'em down to a **single RegExp for performance**. It uses the new ES6 **sticky flag** where possible to make things faster; otherwise it falls back to an almost-as-efficient workaround. (For more than you ever wanted to know about this, read [adventures in the land of substrings and RegExps](http://mrale.ph/blog/2016/11/23/making-less-dart-faster.html).) You _might_ be able to go faster still by writing your lexer by hand rather than using RegExps, but that's icky. Oh, and it [avoids parsing RegExps by itself](https://hackernoon.com/the-madness-of-parsing-real-world-javascript-regexps-d9ee336df983#.2l8qu3l76). Because that would be horrible. Usage ----- First, you need to do the needful: `$ npm install moo`, or whatever will ship this code to your computer. Alternatively, grab the `moo.js` file by itself and slap it into your web page via a `<script>` tag; moo is completely standalone. 
Then you can start roasting your very own lexer/tokenizer: ```js const moo = require('moo') let lexer = moo.compile({ WS: /[ \t]+/, comment: /\/\/.*?$/, number: /0|[1-9][0-9]*/, string: /"(?:\\["\\]|[^\n"\\])*"/, lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], NL: { match: /\n/, lineBreaks: true }, }) ``` And now throw some text at it: ```js lexer.reset('while (10) cows\nmoo') lexer.next() // -> { type: 'keyword', value: 'while' } lexer.next() // -> { type: 'WS', value: ' ' } lexer.next() // -> { type: 'lparen', value: '(' } lexer.next() // -> { type: 'number', value: '10' } // ... ``` When you reach the end of Moo's internal buffer, next() will return `undefined`. You can always `reset()` it and feed it more data when that happens. On Regular Expressions ---------------------- RegExps are nifty for making tokenizers, but they can be a bit of a pain. Here are some things to be aware of: * You often want to use **non-greedy quantifiers**: e.g. `*?` instead of `*`. Otherwise your tokens will be longer than you expect: ```js let lexer = moo.compile({ string: /".*"/, // greedy quantifier * // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo" "bar' } ``` Better: ```js let lexer = moo.compile({ string: /".*?"/, // non-greedy quantifier *? // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo' } lexer.next() // -> { type: 'space', value: ' ' } lexer.next() // -> { type: 'string', value: 'bar' } ``` * The **order of your rules** matters. Earlier ones will take precedence. ```js moo.compile({ identifier: /[a-z0-9]+/, number: /[0-9]+/, }).reset('42').next() // -> { type: 'identifier', value: '42' } moo.compile({ number: /[0-9]+/, identifier: /[a-z0-9]+/, }).reset('42').next() // -> { type: 'number', value: '42' } ``` * Moo uses **multiline RegExps**. This has a few quirks: for example, the **dot `/./` doesn't include newlines**. Use `[^]` instead if you want to match newlines too. * Since an excluding character ranges like `/[^ ]/` (which matches anything but a space) _will_ include newlines, you have to be careful not to include them by accident! In particular, the whitespace metacharacter `\s` includes newlines. Line Numbers ------------ Moo tracks detailed information about the input for you. It will track line numbers, as long as you **apply the `lineBreaks: true` option to any rules which might contain newlines**. Moo will try to warn you if you forget to do this. Note that this is `false` by default, for performance reasons: counting the number of lines in a matched token has a small cost. For optimal performance, only match newlines inside a dedicated token: ```js newline: {match: '\n', lineBreaks: true}, ``` ### Token Info ### Token objects (returned from `next()`) have the following attributes: * **`type`**: the name of the group, as passed to compile. * **`text`**: the string that was matched. * **`value`**: the string that was matched, transformed by your `value` function (if any). * **`offset`**: the number of bytes from the start of the buffer where the match starts. * **`lineBreaks`**: the number of line breaks found in the match. (Always zero if this rule has `lineBreaks: false`.) * **`line`**: the line number of the beginning of the match, starting from 1. * **`col`**: the column where the match begins, starting from 1. ### Value vs. Text ### The `value` is the same as the `text`, unless you provide a [value transform](#transform). 
```js const moo = require('moo') const lexer = moo.compile({ ws: /[ \t]+/, string: {match: /"(?:\\["\\]|[^\n"\\])*"/, value: s => s.slice(1, -1)}, }) lexer.reset('"test"') lexer.next() /* { value: 'test', text: '"test"', ... } */ ``` ### Reset ### Calling `reset()` on your lexer will empty its internal buffer, and set the line, column, and offset counts back to their initial value. If you don't want this, you can `save()` the state, and later pass it as the second argument to `reset()` to explicitly control the internal state of the lexer. ```js    lexer.reset('some line\n') let info = lexer.save() // -> { line: 10 } lexer.next() // -> { line: 10 } lexer.next() // -> { line: 11 } // ... lexer.reset('a different line\n', info) lexer.next() // -> { line: 10 } ``` Keywords -------- Moo makes it convenient to define literals. ```js moo.compile({ lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], }) ``` It'll automatically compile them into regular expressions, escaping them where necessary. **Keywords** should be written using the `keywords` transform. ```js moo.compile({ IDEN: {match: /[a-zA-Z]+/, type: moo.keywords({ KW: ['while', 'if', 'else', 'moo', 'cows'], })}, SPACE: {match: /\s+/, lineBreaks: true}, }) ``` ### Why? ### You need to do this to ensure the **longest match** principle applies, even in edge cases. Imagine trying to parse the input `className` with the following rules: ```js keyword: ['class'], identifier: /[a-zA-Z]+/, ``` You'll get _two_ tokens — `['class', 'Name']` -- which is _not_ what you want! If you swap the order of the rules, you'll fix this example; but now you'll lex `class` wrong (as an `identifier`). The keywords helper checks matches against the list of keywords; if any of them match, it uses the type `'keyword'` instead of `'identifier'` (for this example). ### Keyword Types ### Keywords can also have **individual types**. ```js let lexer = moo.compile({ name: {match: /[a-zA-Z]+/, type: moo.keywords({ 'kw-class': 'class', 'kw-def': 'def', 'kw-if': 'if', })}, // ... }) lexer.reset('def foo') lexer.next() // -> { type: 'kw-def', value: 'def' } lexer.next() // space lexer.next() // -> { type: 'name', value: 'foo' } ``` You can use [itt](https://github.com/nathan/itt)'s iterator adapters to make constructing keyword objects easier: ```js itt(['class', 'def', 'if']) .map(k => ['kw-' + k, k]) .toObject() ``` States ------ Moo allows you to define multiple lexer **states**. Each state defines its own separate set of token rules. Your lexer will start off in the first state given to `moo.states({})`. Rules can be annotated with `next`, `push`, and `pop`, to change the current state after that token is matched. A "stack" of past states is kept, which is used by `push` and `pop`. * **`next: 'bar'`** moves to the state named `bar`. (The stack is not changed.) * **`push: 'bar'`** moves to the state named `bar`, and pushes the old state onto the stack. * **`pop: 1`** removes one state from the top of the stack, and moves to that state. (Only `1` is supported.) Only rules from the current state can be matched. You need to copy your rule into all the states you want it to be matched in. 
For example, to tokenize JS-style string interpolation such as `a${{c: d}}e`, you might use: ```js let lexer = moo.states({ main: { strstart: {match: '`', push: 'lit'}, ident: /\w+/, lbrace: {match: '{', push: 'main'}, rbrace: {match: '}', pop: true}, colon: ':', space: {match: /\s+/, lineBreaks: true}, }, lit: { interp: {match: '${', push: 'main'}, escape: /\\./, strend: {match: '`', pop: true}, const: {match: /(?:[^$`]|\$(?!\{))+/, lineBreaks: true}, }, }) // <= `a${{c: d}}e` // => strstart const interp lbrace ident colon space ident rbrace rbrace const strend ``` The `rbrace` rule is annotated with `pop`, so it moves from the `main` state into either `lit` or `main`, depending on the stack. Errors ------ If none of your rules match, Moo will throw an Error, since it doesn't know what else to do. If you prefer, you can have moo return an error token instead of throwing an exception. The error token will contain the whole of the rest of the buffer. ```js const lexer = moo.compile({ // ... myError: moo.error, }) lexer.reset('invalid') lexer.next() // -> { type: 'myError', value: 'invalid', text: 'invalid', offset: 0, lineBreaks: 0, line: 1, col: 1 } lexer.next() // -> undefined ``` You can have a token type that both matches tokens _and_ contains error values. ```js moo.compile({ // ... myError: {match: /[\$?`]/, error: true}, }) ``` ### Formatting errors ### If you want to throw an error from your parser, you might find `formatError` helpful. Call it with the offending token: ```js throw new Error(lexer.formatError(token, "invalid syntax")) ``` It returns a string with a pretty error message. ``` Error: invalid syntax at line 2 col 15: totally valid `syntax` ^ ``` Iteration --------- Iterators: we got 'em. ```js for (let here of lexer) { // here = { type: 'number', value: '123', ... } } ``` Create an array of tokens. ```js let tokens = Array.from(lexer); ``` Use [itt](https://github.com/nathan/itt)'s iteration tools with Moo. ```js for (let [here, next] of itt(lexer).lookahead()) { // pass a number if you need more tokens // enjoy! } ``` Transform --------- Moo doesn't allow capturing groups, but you can supply a transform function, `value()`, which will be called on the value before storing it in the Token object. ```js moo.compile({ STRING: [ {match: /"""[^]*?"""/, lineBreaks: true, value: x => x.slice(3, -3)}, {match: /"(?:\\["\\rn]|[^"\\])*?"/, lineBreaks: true, value: x => x.slice(1, -1)}, {match: /'(?:\\['\\rn]|[^'\\])*?'/, lineBreaks: true, value: x => x.slice(1, -1)}, ], // ... }) ``` Contributing ------------ Do check the [FAQ](https://github.com/tjvr/moo/issues?q=label%3Aquestion). Before submitting an issue, [remember...](https://github.com/tjvr/moo/blob/master/.github/CONTRIBUTING.md) [Build]: http://img.shields.io/travis/litejs/natural-compare-lite.png [Coverage]: http://img.shields.io/coveralls/litejs/natural-compare-lite.png [1]: https://travis-ci.org/litejs/natural-compare-lite [2]: https://coveralls.io/r/litejs/natural-compare-lite [npm package]: https://npmjs.org/package/natural-compare-lite [GitHub repo]: https://github.com/litejs/natural-compare-lite @version 1.4.0 @date 2015-10-26 @stability 3 - Stable Natural Compare &ndash; [![Build][]][1] [![Coverage][]][2] =============== Compare strings containing a mix of letters and numbers in the way a human being would in sort order. This is described as a "natural ordering".
```text
Standard sorting:   Natural order sorting:
  img1.png            img1.png
  img10.png           img2.png
  img12.png           img10.png
  img2.png            img12.png
```
String.naturalCompare returns a number indicating whether a reference string comes before or after or is the same as the given string in sort order. Use it with the builtin sort() function. ### Installation - In browser ```html <script src=min.natural-compare.js></script> ``` - In node.js: `npm install natural-compare-lite` ```javascript require("natural-compare-lite") ``` ### Usage ```javascript // Simple case sensitive example var a = ["z1.doc", "z10.doc", "z17.doc", "z2.doc", "z23.doc", "z3.doc"]; a.sort(String.naturalCompare); // ["z1.doc", "z2.doc", "z3.doc", "z10.doc", "z17.doc", "z23.doc"] // Use wrapper function for case insensitivity a.sort(function(a, b){ return String.naturalCompare(a.toLowerCase(), b.toLowerCase()); }) // In most cases we want to sort an array of objects var a = [ {"street":"350 5th Ave", "room":"A-1021"} , {"street":"350 5th Ave", "room":"A-21046-b"} ]; // sort by street, then by room a.sort(function(a, b){ return String.naturalCompare(a.street, b.street) || String.naturalCompare(a.room, b.room); }) // When text transformation is needed (eg toLowerCase()), // it is best for performance to keep // the transformed key in that object. // There is no need to do text transformation // on each comparison when sorting. var a = [ {"make":"Audi", "model":"A6"} , {"make":"Kia", "model":"Rio"} ]; // sort by make, then by model a.map(function(car){ car.sort_key = (car.make + " " + car.model).toLowerCase(); }) a.sort(function(a, b){ return String.naturalCompare(a.sort_key, b.sort_key); }) ``` - Works well with dates in ISO format eg "Rev 2012-07-26.doc". ### Custom alphabet It is possible to configure a custom alphabet to achieve a desired order. ```javascript // Estonian alphabet String.alphabet = "ABDEFGHIJKLMNOPRSŠZŽTUVÕÄÖÜXYabdefghijklmnoprsšzžtuvõäöüxy" ["t", "z", "x", "õ"].sort(String.naturalCompare) // ["z", "t", "õ", "x"] // Russian alphabet String.alphabet = "АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдеёжзийклмнопрстуфхцчшщъыьэюя" ["Ё", "А", "Б"].sort(String.naturalCompare) // ["А", "Б", "Ё"] ``` External links -------------- - [GitHub repo](https://github.com/litejs/natural-compare-lite) - [jsperf test](http://jsperf.com/natural-sort-2/12) Licence ------- Copyright (c) 2012-2015 Lauri Rooden &lt;lauri@rooden.ee&gt; [The MIT License](http://lauri.rooden.ee/mit-license.txt) # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Specification conformance whatwg-url is currently up to date with the URL spec up to commit [7ae1c69](https://github.com/whatwg/url/commit/7ae1c691c96f0d82fafa24c33aa1e8df9ffbf2bc). For `file:` URLs, whose [origin is left unspecified](https://url.spec.whatwg.org/#concept-url-origin), whatwg-url chooses to use a new opaque origin (which serializes to `"null"`). ## API ### The `URL` and `URLSearchParams` classes The main API is provided by the [`URL`](https://url.spec.whatwg.org/#url-class) and [`URLSearchParams`](https://url.spec.whatwg.org/#interface-urlsearchparams) exports, which follow the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use these.
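For most consumers, usage mirrors the browser globals of the same name. A minimal sketch (the URLs are illustrative):

```js
const { URL, URLSearchParams } = require("whatwg-url");

const url = new URL("/path?a=1", "https://example.com");
console.log(url.href);     // "https://example.com/path?a=1"
console.log(url.hostname); // "example.com"

const params = new URLSearchParams(url.search);
params.append("b", "2");
console.log(params.toString()); // "a=1&b=2"
```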
### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They mostly operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. - [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/origin.html#ascii-serialisation-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` - [Percent decode](https://url.spec.whatwg.org/#percent-decode): `percentDecode(buffer)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - [`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - 
[`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by `null`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ `null`. ## Development instructions First, install [Node.js](https://nodejs.org/). Then, fetch the dependencies of whatwg-url, by running from this directory: npm install To run tests: npm test To generate a coverage report: npm run coverage To build and run the live viewer: npm run build npm run build-live-viewer Serve the contents of the `live-viewer` directory using any web server. ## Supporting whatwg-url The jsdom project (including whatwg-url) is a community-driven project maintained by a team of [volunteers](https://github.com/orgs/jsdom/people). You could support us by: - [Getting professional support for whatwg-url](https://tidelift.com/subscription/pkg/npm-whatwg-url?utm_source=npm-whatwg-url&utm_medium=referral&utm_campaign=readme) as part of a Tidelift subscription. Tidelift helps making open source sustainable for us while giving teams assurances for maintenance, licensing, and security. - Contributing directly to the project. # axios // adapters The modules under `adapters/` are modules that handle dispatching a request and settling a returned `Promise` once a response is received. ## Example ```js var settle = require('./../core/settle'); module.exports = function myAdapter(config) { // At this point: // - config has been merged with defaults // - request transformers have already run // - request interceptors have already run // Make the request using config provided // Upon response settle the Promise return new Promise(function(resolve, reject) { var response = { data: responseData, status: request.status, statusText: request.statusText, headers: responseHeaders, config: config, request: request }; settle(resolve, reject, response); // From here: // - response transformers will run // - response interceptors will run }); } ``` Like `chown -R`. Takes the same arguments as `fs.chown()` Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor). This package exports standard `inherits` from node.js `util` module in node environment, but also provides alternative browser-friendly implementation through [browser field](https://gist.github.com/shtylman/4339901). Alternative implementation is a literal copy of standard one located in standalone module to avoid requiring of `util`. It also has a shim for old browsers with no `Object.create` support. 
While ensuring that you are using the standard `inherits` implementation in a node.js environment, it allows bundlers such as [browserify](https://github.com/substack/node-browserify) to not include the full `util` package in your client code if all you need is just the `inherits` function. This is worthwhile, because the browser shim for the `util` package is large and `inherits` is often the only function you need from it. It's recommended to use this package instead of `require('util').inherits` for any code that has chances to be used not only in node.js but in browser too. ## usage ```js var inherits = require('inherits'); // then use exactly as the standard one ``` ## note on version ~1.0 Version ~1.0 had a completely different motivation and is compatible neither with 2.0 nor with the standard node.js `inherits`. If you are using version ~1.0 and planning to switch to ~2.0, be careful: * the new version uses `super_` instead of `super` for referencing the superclass * the new version overwrites the current prototype while the old one preserves any existing fields on it <img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/img/ajv.svg"> &nbsp; # Ajv JSON schema validator The fastest JSON validator for Node.js and browser. Supports JSON Schema draft-04/06/07/2019-09/2020-12 ([draft-04 support](https://ajv.js.org/json-schema.html#draft-04) requires ajv-draft-04 package) and JSON Type Definition [RFC8927](https://datatracker.ietf.org/doc/rfc8927/). [![build](https://github.com/ajv-validator/ajv/workflows/build/badge.svg)](https://github.com/ajv-validator/ajv/actions?query=workflow%3Abuild) [![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv) [![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv) [![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master) [![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv) [![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin) ## Platinum sponsors [<img src="https://ajv.js.org/img/mozilla.svg" width="45%">](https://www.mozilla.org)<img src="https://ajv.js.org/img/gap.svg" width="8%">[<img src="https://ajv.js.org/img/reserved.svg" width="45%">](https://opencollective.com/ajv) ## Ajv online event - May 20, 10am PT / 6pm UK We will talk about: - new features of Ajv version 8. - the improvements sponsored by Mozilla's MOSS grant. - how Ajv is used in JavaScript applications. Speakers: - [Evgeny Poberezkin](https://github.com/epoberezkin), the creator of Ajv. - [Mehan Jayasuriya](https://github.com/mehan), Program Officer at Mozilla Foundation, leading the [MOSS](https://www.mozilla.org/en-US/moss/) and other programs investing in the open source and community ecosystems. - [Matteo Collina](https://github.com/mcollina), Technical Director at NearForm and Node.js Technical Steering Committee member, creator of Fastify web framework. - [Kin Lane](https://github.com/kinlane), Chief Evangelist at Postman. Studying the tech, business & politics of APIs since 2010. Presidential Innovation Fellow during the Obama administration. - [Ulysse Carion](https://github.com/ucarion), the creator of the JSON Type Definition specification. [Gajus Kuizinas](https://github.com/gajus) will host the event. Please [register here](https://us02web.zoom.us/webinar/register/2716192553618/WN_erJ_t4ICTHOnGC1SOybNnw).
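For readers new to the library, a minimal JSON Schema validation sketch follows; the schema and data are illustrative, and the snippet assumes the Ajv v8 CommonJS API described in the documentation linked below:

```js
const Ajv = require("ajv")
const ajv = new Ajv() // options can be passed, e.g. {allErrors: true}

const schema = {
  type: "object",
  properties: {
    foo: {type: "integer"},
    bar: {type: "string"},
  },
  required: ["foo"],
  additionalProperties: false,
}

const validate = ajv.compile(schema)

if (!validate({foo: 1, bar: "abc"})) {
  console.log(validate.errors) // array of error objects when validation fails
}
```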
## Contributing More than 100 people have contributed to Ajv, and we would love to have you join the development. We welcome implementations of new features that will benefit many users, as well as ideas to improve our documentation. Please review the [Contributing guidelines](./CONTRIBUTING.md) and [Code components](https://ajv.js.org/components.html). ## Documentation All documentation is available on the [Ajv website](https://ajv.js.org). Some useful site links: - [Getting started](https://ajv.js.org/guide/getting-started.html) - [JSON Schema vs JSON Type Definition](https://ajv.js.org/guide/schema-language.html) - [API reference](https://ajv.js.org/api.html) - [Strict mode](https://ajv.js.org/strict-mode.html) - [Standalone validation code](https://ajv.js.org/standalone.html) - [Security considerations](https://ajv.js.org/security.html) - [Command line interface](https://ajv.js.org/packages/ajv-cli.html) - [Frequently Asked Questions](https://ajv.js.org/faq.html) ## <a name="sponsors"></a>Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin) Since I asked for support for Ajv development, 40 people and 6 organizations have contributed via GitHub and OpenCollective - this support helped us receive the MOSS grant! Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released. Please sponsor Ajv via: - [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it) - [Ajv Open Collective](https://opencollective.com/ajv) Thank you. #### Open Collective sponsors <a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a> <a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a> ## Performance Ajv generates code to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization.
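In practice that means a schema should be compiled once and the resulting validation function reused for every value you check; a minimal sketch (the schema and data below are illustrative):

```javascript
const Ajv = require("ajv")
const ajv = new Ajv()

// Compile once, up front: this is when the optimized validation code is generated.
const validate = ajv.compile({
  type: "object",
  properties: {id: {type: "integer"}},
  required: ["id"],
})

// Reuse the compiled function in hot paths instead of recompiling per call.
for (const item of [{id: 1}, {id: "oops"}]) {
  if (!validate(item)) console.log(validate.errors)
}
```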
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=62,4,1&chs=600x416&chxl=-1:|ajv|@exodus&#x2F;schemasafe|is-my-json-valid|djv|@cfworker&#x2F;json-schema|jsonschema&chd=t:100,69.2,51.5,13.1,5.1,1.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements JSON Schema [draft-06/07/2019-09/2020-12](http://json-schema.org/) standards (draft-04 is supported in v6): - all validation keywords (see [JSON Schema validation keywords](https://ajv.js.org/json-schema.html)) - [OpenAPI](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.3.md) extensions: - NEW: keyword [discriminator](https://ajv.js.org/json-schema.html#discriminator). - keyword [nullable](https://ajv.js.org/json-schema.html#nullable). - full support of remote references (remote schemas have to be added with `addSchema` or compiled to be available) - support of recursive references between schemas - correct string lengths for strings with unicode pairs - JSON Schema [formats](https://ajv.js.org/guide/formats.html) (with [ajv-formats](https://github.com/ajv-validator/ajv-formats) plugin). - [validates schemas against meta-schema](https://ajv.js.org/api.html#api-validateschema) - NEW: supports [JSON Type Definition](https://datatracker.ietf.org/doc/rfc8927/): - all keywords (see [JSON Type Definition schema forms](https://ajv.js.org/json-type-definition.html)) - meta-schema for JTD schemas - "union" keyword and user-defined keywords (can be used inside "metadata" member of the schema) - supports [browsers](https://ajv.js.org/guide/environments.html#browsers) and Node.js 10.x - current - [asynchronous loading](https://ajv.js.org/guide/managing-schemas.html#asynchronous-schema-loading) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](https://ajv.js.org/options.html#allerrors) - [error messages with parameters](https://ajv.js.org/api.html#validation-errors) describing error reasons to allow error message generation - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [removing-additional-properties](https://ajv.js.org/guide/modifying-data.html#removing-additional-properties) - [assigning defaults](https://ajv.js.org/guide/modifying-data.html#assigning-defaults) to missing properties and items - [coercing data](https://ajv.js.org/guide/modifying-data.html#coercing-data-types) to the types specified in `type` keywords - [user-defined keywords](https://ajv.js.org/guide/user-keywords.html) - additional extension keywords with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [\$data reference](https://ajv.js.org/guide/combining-schemas.html#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](https://ajv.js.org/guide/async-validation.html) of user-defined formats and 
keywords ## Install To install version 8: ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://runkit.com/npm/ajv In JavaScript: ```javascript // or ESM/TypeScript import import Ajv from "ajv" // Node.js require: const Ajv = require("ajv") const ajv = new Ajv() // options can be passed, e.g. {allErrors: true} const schema = { type: "object", properties: { foo: {type: "integer"}, bar: {type: "string"} }, required: ["foo"], additionalProperties: false, } const data = { foo: 1, bar: "abc" } const validate = ajv.compile(schema) const valid = validate(data) if (!valid) console.log(validate.errors) ``` Learn how to use Ajv and see more examples in the [Guide: getting started](https://ajv.js.org/guide/getting-started.html) ## Changes history See [https://github.com/ajv-validator/ajv/releases](https://github.com/ajv-validator/ajv/releases) **Please note**: [Changes in version 8.0.0](https://github.com/ajv-validator/ajv/releases/tag/v8.0.0) [Version 7.0.0](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0) [Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0). ## Code of conduct Please review and follow the [Code of conduct](./CODE_OF_CONDUCT.md). Please report any unacceptable behaviour to ajv.validator@gmail.com - it will be reviewed by the project team. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ## Open-source software support Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides a centralised support to open-source software users, in addition to the support provided by software maintainers. ## License [MIT](./LICENSE) # ShellJS - Unix shell commands for Node.js [![Travis](https://img.shields.io/travis/shelljs/shelljs/master.svg?style=flat-square&label=unix)](https://travis-ci.org/shelljs/shelljs) [![AppVeyor](https://img.shields.io/appveyor/ci/shelljs/shelljs/master.svg?style=flat-square&label=windows)](https://ci.appveyor.com/project/shelljs/shelljs/branch/master) [![Codecov](https://img.shields.io/codecov/c/github/shelljs/shelljs/master.svg?style=flat-square&label=coverage)](https://codecov.io/gh/shelljs/shelljs) [![npm version](https://img.shields.io/npm/v/shelljs.svg?style=flat-square)](https://www.npmjs.com/package/shelljs) [![npm downloads](https://img.shields.io/npm/dm/shelljs.svg?style=flat-square)](https://www.npmjs.com/package/shelljs) ShellJS is a portable **(Windows/Linux/OS X)** implementation of Unix shell commands on top of the Node.js API. You can use it to eliminate your shell script's dependency on Unix while still keeping its familiar and powerful commands. You can also install it globally so you can run it from outside Node projects - say goodbye to those gnarly Bash scripts! ShellJS is proudly tested on every node release since `v4`! 
The project is [unit-tested](http://travis-ci.org/shelljs/shelljs) and battle-tested in projects like: + [Firebug](http://getfirebug.com/) - Firefox's infamous debugger + [JSHint](http://jshint.com) & [ESLint](http://eslint.org/) - popular JavaScript linters + [Zepto](http://zeptojs.com) - jQuery-compatible JavaScript library for modern browsers + [Yeoman](http://yeoman.io/) - Web application stack and development tool + [Deployd.com](http://deployd.com) - Open source PaaS for quick API backend generation + And [many more](https://npmjs.org/browse/depended/shelljs). If you have feedback, suggestions, or need help, feel free to post in our [issue tracker](https://github.com/shelljs/shelljs/issues). Think ShellJS is cool? Check out some related projects in our [Wiki page](https://github.com/shelljs/shelljs/wiki)! Upgrading from an older version? Check out our [breaking changes](https://github.com/shelljs/shelljs/wiki/Breaking-Changes) page to see what changes to watch out for while upgrading. ## Command line use If you just want cross platform UNIX commands, checkout our new project [shelljs/shx](https://github.com/shelljs/shx), a utility to expose `shelljs` to the command line. For example: ``` $ shx mkdir -p foo $ shx touch foo/bar.txt $ shx rm -rf foo ``` ## Plugin API ShellJS now supports third-party plugins! You can learn more about using plugins and writing your own ShellJS commands in [the wiki](https://github.com/shelljs/shelljs/wiki/Using-ShellJS-Plugins). ## A quick note about the docs For documentation on all the latest features, check out our [README](https://github.com/shelljs/shelljs). To read docs that are consistent with the latest release, check out [the npm page](https://www.npmjs.com/package/shelljs) or [shelljs.org](http://documentup.com/shelljs/shelljs). ## Installing Via npm: ```bash $ npm install [-g] shelljs ``` ## Examples ```javascript var shell = require('shelljs'); if (!shell.which('git')) { shell.echo('Sorry, this script requires git'); shell.exit(1); } // Copy files to release dir shell.rm('-rf', 'out/Release'); shell.cp('-R', 'stuff/', 'out/Release'); // Replace macros in each .js file shell.cd('lib'); shell.ls('*.js').forEach(function (file) { shell.sed('-i', 'BUILD_VERSION', 'v0.1.2', file); shell.sed('-i', /^.*REMOVE_THIS_LINE.*$/, '', file); shell.sed('-i', /.*REPLACE_LINE_WITH_MACRO.*\n/, shell.cat('macro.js'), file); }); shell.cd('..'); // Run external tool synchronously if (shell.exec('git commit -am "Auto-commit"').code !== 0) { shell.echo('Error: Git commit failed'); shell.exit(1); } ``` ## Exclude options If you need to pass a parameter that looks like an option, you can do so like: ```js shell.grep('--', '-v', 'path/to/file'); // Search for "-v", no grep options shell.cp('-R', '-dir', 'outdir'); // If already using an option, you're done ``` ## Global vs. Local We no longer recommend using a global-import for ShellJS (i.e. `require('shelljs/global')`). While still supported for convenience, this pollutes the global namespace, and should therefore only be used with caution. Instead, we recommend a local import (standard for npm packages): ```javascript var shell = require('shelljs'); shell.echo('hello world'); ``` <!-- DO NOT MODIFY BEYOND THIS POINT - IT'S AUTOMATICALLY GENERATED --> ## Command reference All commands run synchronously, unless otherwise stated. All commands accept standard bash globbing characters (`*`, `?`, etc.), compatible with the [node `glob` module](https://github.com/isaacs/node-glob). 
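For a quick, hedged illustration of how globbing applies across commands (file names below are hypothetical):

```javascript
var shell = require('shelljs');

// Glob patterns are expanded before the command runs.
shell.cat('docs/*.md');            // every Markdown file in docs/
shell.rm('-f', 'build/*.tmp');     // remove matching temp files
shell.cp('file?.txt', 'backup/');  // ? matches a single character
```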
For less-commonly used commands and features, please check out our [wiki page](https://github.com/shelljs/shelljs/wiki). ### cat([options,] file [, file ...]) ### cat([options,] file_array) Available options: + `-n`: number all output lines Examples: ```javascript var str = cat('file*.txt'); var str = cat('file1', 'file2'); var str = cat(['file1', 'file2']); // same as above ``` Returns a string containing the given file, or a concatenated string containing the files if more than one file is given (a newline character is introduced between each file). ### cd([dir]) Changes to directory `dir` for the duration of the script. Changes to the home directory if no argument is supplied. ### chmod([options,] octal_mode || octal_string, file) ### chmod([options,] symbolic_mode, file) Available options: + `-v`: output a diagnostic for every file processed + `-c`: like verbose, but report only when a change is made + `-R`: change files and directories recursively Examples: ```javascript chmod(755, '/Users/brandon'); chmod('755', '/Users/brandon'); // same as above chmod('u+x', '/Users/brandon'); chmod('-R', 'a-w', '/Users/brandon'); ``` Alters the permissions of a file or directory by either specifying the absolute permissions in octal form or expressing the changes in symbols. This command tries to mimic the POSIX behavior as much as possible. Notable exceptions: + In symbolic modes, `a-r` and `-r` are identical. No consideration is given to the `umask`. + There is no "quiet" option, since default behavior is to run silent. ### cp([options,] source [, source ...], dest) ### cp([options,] source_array, dest) Available options: + `-f`: force (default behavior) + `-n`: no-clobber + `-u`: only copy if `source` is newer than `dest` + `-r`, `-R`: recursive + `-L`: follow symlinks + `-P`: don't follow symlinks Examples: ```javascript cp('file1', 'dir1'); cp('-R', 'path/to/dir/', '~/newCopy/'); cp('-Rf', '/tmp/*', '/usr/local/*', '/home/tmp'); cp('-Rf', ['/tmp/*', '/usr/local/*'], '/home/tmp'); // same as above ``` Copies files. ### pushd([options,] [dir | '-N' | '+N']) Available options: + `-n`: Suppresses the normal change of directory when adding directories to the stack, so that only the stack is manipulated. + `-q`: Suppresses output to the console. Arguments: + `dir`: Sets the current working directory to the top of the stack, then executes the equivalent of `cd dir`. + `+N`: Brings the Nth directory (counting from the left of the list printed by dirs, starting with zero) to the top of the list by rotating the stack. + `-N`: Brings the Nth directory (counting from the right of the list printed by dirs, starting with zero) to the top of the list by rotating the stack. Examples: ```javascript // process.cwd() === '/usr' pushd('/etc'); // Returns /etc /usr pushd('+1'); // Returns /usr /etc ``` Saves the current directory on the top of the directory stack and then `cd`s to `dir`. With no arguments, `pushd` exchanges the top two directories. Returns an array of paths in the stack. ### popd([options,] ['-N' | '+N']) Available options: + `-n`: Suppresses the normal directory change when removing directories from the stack, so that only the stack is manipulated. + `-q`: Suppresses output to the console. Arguments: + `+N`: Removes the Nth directory (counting from the left of the list printed by dirs), starting with zero. + `-N`: Removes the Nth directory (counting from the right of the list printed by dirs), starting with zero.
Examples: ```javascript echo(process.cwd()); // '/usr' pushd('/etc'); // '/etc /usr' echo(process.cwd()); // '/etc' popd(); // '/usr' echo(process.cwd()); // '/usr' ``` When no arguments are given, `popd` removes the top directory from the stack and performs a `cd` to the new top directory. The elements are numbered from 0, starting at the first directory listed with dirs (i.e., `popd` is equivalent to `popd +0`). Returns an array of paths in the stack. ### dirs([options | '+N' | '-N']) Available options: + `-c`: Clears the directory stack by deleting all of the elements. + `-q`: Suppresses output to the console. Arguments: + `+N`: Displays the Nth directory (counting from the left of the list printed by dirs when invoked without options), starting with zero. + `-N`: Displays the Nth directory (counting from the right of the list printed by dirs when invoked without options), starting with zero. Displays the list of currently remembered directories. Returns an array of paths in the stack, or a single path if `+N` or `-N` was specified. See also: `pushd`, `popd` ### echo([options,] string [, string ...]) Available options: + `-e`: interpret backslash escapes (default) + `-n`: remove trailing newline from output Examples: ```javascript echo('hello world'); var str = echo('hello world'); echo('-n', 'no newline at end'); ``` Prints `string` to stdout, and returns the string with additional utility methods like `.to()`. ### exec(command [, options] [, callback]) Available options: + `async`: Asynchronous execution. If a callback is provided, it will be set to `true`, regardless of the passed value (default: `false`). + `silent`: Do not echo program output to console (default: `false`). + `encoding`: Character encoding to use. Affects the values returned to stdout and stderr, and what is written to stdout and stderr when not in silent mode (default: `'utf8'`). + and any option available to Node.js's [`child_process.exec()`](https://nodejs.org/api/child_process.html#child_process_child_process_exec_command_options_callback) Examples: ```javascript var version = exec('node --version', {silent:true}).stdout; var child = exec('some_long_running_process', {async:true}); child.stdout.on('data', function(data) { /* ... do something with data ... */ }); exec('some_long_running_process', function(code, stdout, stderr) { console.log('Exit code:', code); console.log('Program output:', stdout); console.log('Program stderr:', stderr); }); ``` Executes the given `command` _synchronously_, unless otherwise specified. When in synchronous mode, this returns a `ShellString` (compatible with ShellJS v0.6.x, which returns an object of the form `{ code:..., stdout:..., stderr:... }`). Otherwise, this returns the child process object, and the `callback` receives the arguments `(code, stdout, stderr)`. Not seeing the behavior you want? `exec()` runs everything through `sh` by default (or `cmd.exe` on Windows), which differs from `bash`. If you need bash-specific behavior, try out the `{shell: 'path/to/bash'}` option. ### find(path [, path ...]) ### find(path_array) Examples: ```javascript find('src', 'lib'); find(['src', 'lib']); // same as above find('.').filter(function(file) { return file.match(/\.js$/); }); ``` Returns an array of all files (however deep) in the given paths. The main difference from `ls('-R', path)` is that the resulting file names include the base directories (e.g., `lib/resources/file1` instead of just `file1`).
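A small, hypothetical illustration of that difference:

```javascript
// Hypothetical layout: lib/resources/file1
find('lib');     // e.g. [ ..., 'lib/resources/file1' ]  (paths include the base directory)
ls('-R', 'lib'); // e.g. [ ..., 'resources/file1' ]      (paths are relative to the argument)
```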
### grep([options,] regex_filter, file [, file ...]) ### grep([options,] regex_filter, file_array) Available options: + `-v`: Invert `regex_filter` (only print non-matching lines). + `-l`: Print only filenames of matching files. + `-i`: Ignore case. Examples: ```javascript grep('-v', 'GLOBAL_VARIABLE', '*.js'); grep('GLOBAL_VARIABLE', '*.js'); ``` Reads input string from given files and returns a string containing all lines of the file that match the given `regex_filter`. ### head([{'-n': \<num\>},] file [, file ...]) ### head([{'-n': \<num\>},] file_array) Available options: + `-n <num>`: Show the first `<num>` lines of the files Examples: ```javascript var str = head({'-n': 1}, 'file*.txt'); var str = head('file1', 'file2'); var str = head(['file1', 'file2']); // same as above ``` Read the start of a file. ### ln([options,] source, dest) Available options: + `-s`: symlink + `-f`: force Examples: ```javascript ln('file', 'newlink'); ln('-sf', 'file', 'existing'); ``` Links `source` to `dest`. Use `-f` to force the link, should `dest` already exist. ### ls([options,] [path, ...]) ### ls([options,] path_array) Available options: + `-R`: recursive + `-A`: all files (include files beginning with `.`, except for `.` and `..`) + `-L`: follow symlinks + `-d`: list directories themselves, not their contents + `-l`: list objects representing each file, each with fields containing `ls -l` output fields. See [`fs.Stats`](https://nodejs.org/api/fs.html#fs_class_fs_stats) for more info Examples: ```javascript ls('projs/*.js'); ls('-R', '/users/me', '/tmp'); ls('-R', ['/users/me', '/tmp']); // same as above ls('-l', 'file.txt'); // { name: 'file.txt', mode: 33188, nlink: 1, ...} ``` Returns array of files in the given `path`, or files in the current directory if no `path` is provided. ### mkdir([options,] dir [, dir ...]) ### mkdir([options,] dir_array) Available options: + `-p`: full path (and create intermediate directories, if necessary) Examples: ```javascript mkdir('-p', '/tmp/a/b/c/d', '/tmp/e/f/g'); mkdir('-p', ['/tmp/a/b/c/d', '/tmp/e/f/g']); // same as above ``` Creates directories. ### mv([options ,] source [, source ...], dest') ### mv([options ,] source_array, dest') Available options: + `-f`: force (default behavior) + `-n`: no-clobber Examples: ```javascript mv('-n', 'file', 'dir/'); mv('file1', 'file2', 'dir/'); mv(['file1', 'file2'], 'dir/'); // same as above ``` Moves `source` file(s) to `dest`. ### pwd() Returns the current directory. ### rm([options,] file [, file ...]) ### rm([options,] file_array) Available options: + `-f`: force + `-r, -R`: recursive Examples: ```javascript rm('-rf', '/tmp/*'); rm('some_file.txt', 'another_file.txt'); rm(['some_file.txt', 'another_file.txt']); // same as above ``` Removes files. ### sed([options,] search_regex, replacement, file [, file ...]) ### sed([options,] search_regex, replacement, file_array) Available options: + `-i`: Replace contents of `file` in-place. _Note that no backups will be created!_ Examples: ```javascript sed('-i', 'PROGRAM_VERSION', 'v0.1.3', 'source.js'); sed(/.*DELETE_THIS_LINE.*\n/, '', 'source.js'); ``` Reads an input string from `file`s, and performs a JavaScript `replace()` on the input using the given `search_regex` and `replacement` string or function. Returns the new string after replacement. Note: Like unix `sed`, ShellJS `sed` supports capture groups. 
Capture groups are specified using the `$n` syntax: ```javascript sed(/(\w+)\s(\w+)/, '$2, $1', 'file.txt'); ``` ### set(options) Available options: + `+/-e`: exit upon error (`config.fatal`) + `+/-v`: verbose: show all commands (`config.verbose`) + `+/-f`: disable filename expansion (globbing) Examples: ```javascript set('-e'); // exit upon first error set('+e'); // this undoes a "set('-e')" ``` Sets global configuration variables. ### sort([options,] file [, file ...]) ### sort([options,] file_array) Available options: + `-r`: Reverse the results + `-n`: Compare according to numerical value Examples: ```javascript sort('foo.txt', 'bar.txt'); sort('-r', 'foo.txt'); ``` Return the contents of the `file`s, sorted line-by-line. Sorting multiple files mixes their content (just as unix `sort` does). ### tail([{'-n': \<num\>},] file [, file ...]) ### tail([{'-n': \<num\>},] file_array) Available options: + `-n <num>`: Show the last `<num>` lines of `file`s Examples: ```javascript var str = tail({'-n': 1}, 'file*.txt'); var str = tail('file1', 'file2'); var str = tail(['file1', 'file2']); // same as above ``` Read the end of a `file`. ### tempdir() Examples: ```javascript var tmp = tempdir(); // "/tmp" for most *nix platforms ``` Searches and returns string containing a writeable, platform-dependent temporary directory. Follows Python's [tempfile algorithm](http://docs.python.org/library/tempfile.html#tempfile.tempdir). ### test(expression) Available expression primaries: + `'-b', 'path'`: true if path is a block device + `'-c', 'path'`: true if path is a character device + `'-d', 'path'`: true if path is a directory + `'-e', 'path'`: true if path exists + `'-f', 'path'`: true if path is a regular file + `'-L', 'path'`: true if path is a symbolic link + `'-p', 'path'`: true if path is a pipe (FIFO) + `'-S', 'path'`: true if path is a socket Examples: ```javascript if (test('-d', path)) { /* do something with dir */ }; if (!test('-f', path)) continue; // skip if it's a regular file ``` Evaluates `expression` using the available primaries and returns corresponding value. ### ShellString.prototype.to(file) Examples: ```javascript cat('input.txt').to('output.txt'); ``` Analogous to the redirection operator `>` in Unix, but works with `ShellStrings` (such as those returned by `cat`, `grep`, etc.). _Like Unix redirections, `to()` will overwrite any existing file!_ ### ShellString.prototype.toEnd(file) Examples: ```javascript cat('input.txt').toEnd('output.txt'); ``` Analogous to the redirect-and-append operator `>>` in Unix, but works with `ShellStrings` (such as those returned by `cat`, `grep`, etc.). ### touch([options,] file [, file ...]) ### touch([options,] file_array) Available options: + `-a`: Change only the access time + `-c`: Do not create any files + `-m`: Change only the modification time + `-d DATE`: Parse `DATE` and use it instead of current time + `-r FILE`: Use `FILE`'s times instead of current time Examples: ```javascript touch('source.js'); touch('-c', '/path/to/some/dir/source.js'); touch({ '-r': FILE }, '/path/to/some/dir/source.js'); ``` Update the access and modification times of each `FILE` to the current time. A `FILE` argument that does not exist is created empty, unless `-c` is supplied. This is a partial implementation of [`touch(1)`](http://linux.die.net/man/1/touch). 
### uniq([options,] [input, [output]]) Available options: + `-i`: Ignore case while comparing + `-c`: Prefix lines by the number of occurrences + `-d`: Only print duplicate lines, one for each group of identical lines Examples: ```javascript uniq('foo.txt'); uniq('-i', 'foo.txt'); uniq('-cd', 'foo.txt', 'bar.txt'); ``` Filter adjacent matching lines from `input`. ### which(command) Examples: ```javascript var nodeExec = which('node'); ``` Searches for `command` in the system's `PATH`. On Windows, this uses the `PATHEXT` variable to append the extension if it's not already executable. Returns string containing the absolute path to `command`. ### exit(code) Exits the current process with the given exit `code`. ### error() Tests if error occurred in the last command. Returns a truthy value if an error returned, or a falsy value otherwise. **Note**: do not rely on the return value to be an error message. If you need the last error message, use the `.stderr` attribute from the last command's return value instead. ### ShellString(str) Examples: ```javascript var foo = ShellString('hello world'); ``` Turns a regular string into a string-like object similar to what each command returns. This has special methods, like `.to()` and `.toEnd()`. ### env['VAR_NAME'] Object containing environment variables (both getter and setter). Shortcut to `process.env`. ### Pipes Examples: ```javascript grep('foo', 'file1.txt', 'file2.txt').sed(/o/g, 'a').to('output.txt'); echo('files with o\'s in the name:\n' + ls().grep('o')); cat('test.js').exec('node'); // pipe to exec() call ``` Commands can send their output to another command in a pipe-like fashion. `sed`, `grep`, `cat`, `exec`, `to`, and `toEnd` can appear on the right-hand side of a pipe. Pipes can be chained. ## Configuration ### config.silent Example: ```javascript var sh = require('shelljs'); var silentState = sh.config.silent; // save old silent state sh.config.silent = true; /* ... */ sh.config.silent = silentState; // restore old silent state ``` Suppresses all command output if `true`, except for `echo()` calls. Default is `false`. ### config.fatal Example: ```javascript require('shelljs/global'); config.fatal = true; // or set('-e'); cp('this_file_does_not_exist', '/dev/null'); // throws Error here /* more commands... */ ``` If `true`, the script will throw a Javascript error when any shell.js command encounters an error. Default is `false`. This is analogous to Bash's `set -e`. ### config.verbose Example: ```javascript config.verbose = true; // or set('-v'); cd('dir/'); rm('-rf', 'foo.txt', 'bar.txt'); exec('echo hello'); ``` Will print each command as follows: ``` cd dir/ rm -rf foo.txt bar.txt exec echo hello ``` ### config.globOptions Example: ```javascript config.globOptions = {nodir: true}; ``` Use this value for calls to `glob.sync()` instead of the default options. ### config.reset() Example: ```javascript var shell = require('shelljs'); // Make changes to shell.config, and do stuff... /* ... */ shell.config.reset(); // reset to original state // Do more stuff, but with original settings /* ... 
*/ ``` Reset `shell.config` to the defaults: ```javascript { fatal: false, globOptions: {}, maxdepth: 255, noglob: false, silent: false, verbose: false, } ``` ## Team | [![Nate Fischer](https://avatars.githubusercontent.com/u/5801521?s=130)](https://github.com/nfischer) | [![Brandon Freitag](https://avatars1.githubusercontent.com/u/5988055?v=3&s=130)](http://github.com/freitagbr) | |:---:|:---:| | [Nate Fischer](https://github.com/nfischer) | [Brandon Freitag](http://github.com/freitagbr) | <h1 align="center">Enquirer</h1> <p align="center"> <a href="https://npmjs.org/package/enquirer"> <img src="https://img.shields.io/npm/v/enquirer.svg" alt="version"> </a> <a href="https://travis-ci.org/enquirer/enquirer"> <img src="https://img.shields.io/travis/enquirer/enquirer.svg" alt="travis"> </a> <a href="https://npmjs.org/package/enquirer"> <img src="https://img.shields.io/npm/dm/enquirer.svg" alt="downloads"> </a> </p> <br> <br> <p align="center"> <b>Stylish CLI prompts that are user-friendly, intuitive and easy to create.</b><br> <sub>>_ Prompts should be more like conversations than inquisitions▌</sub> </p> <br> <p align="center"> <sub>(Example shows Enquirer's <a href="#survey-prompt">Survey Prompt</a>)</a></sub> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/survey-prompt.gif" alt="Enquirer Survey Prompt" width="750"><br> <sub>The terminal in all examples is <a href="https://hyper.is/">Hyper</a>, theme is <a href="https://github.com/jonschlinkert/hyper-monokai-extended">hyper-monokai-extended</a>.</sub><br><br> <a href="#built-in-prompts"><strong>See more prompt examples</strong></a> </p> <br> <br> Created by [jonschlinkert](https://github.com/jonschlinkert) and [doowb](https://github.com/doowb), Enquirer is fast, easy to use, and lightweight enough for small projects, while also being powerful and customizable enough for the most advanced use cases. * **Fast** - [Loads in ~4ms](#-performance) (that's about _3-4 times faster than a [single frame of a HD movie](http://www.endmemo.com/sconvert/framespersecondframespermillisecond.php) at 60fps_) * **Lightweight** - Only one dependency, the excellent [ansi-colors](https://github.com/doowb/ansi-colors) by [Brian Woodward](https://github.com/doowb). * **Easy to implement** - Uses promises and async/await and sensible defaults to make prompts easy to create and implement. * **Easy to use** - Thrill your users with a better experience! Navigating around input and choices is a breeze. You can even create [quizzes](examples/fun/countdown.js), or [record](examples/fun/record.js) and [playback](examples/fun/play.js) key bindings to aid with tutorials and videos. * **Intuitive** - Keypress combos are available to simplify usage. * **Flexible** - All prompts can be used standalone or chained together. * **Stylish** - Easily override semantic styles and symbols for any part of the prompt. * **Extensible** - Easily create and use custom prompts by extending Enquirer's built-in [prompts](#-prompts). * **Pluggable** - Add advanced features to Enquirer using plugins. * **Validation** - Optionally validate user input with any prompt. * **Well tested** - All prompts are well-tested, and tests are easy to create without having to use brittle, hacky solutions to spy on prompts or "inject" values. * **Examples** - There are numerous [examples](examples) available to help you get started. If you like Enquirer, please consider starring or tweeting about this project to show your support. Thanks! 
<br> <p align="center"> <b>>_ Ready to start making prompts your users will love? ▌</b><br> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/heartbeat.gif" alt="Enquirer Select Prompt with heartbeat example" width="750"> </p> <br> <br> ## ❯ Getting started Get started with Enquirer, the most powerful and easy-to-use Node.js library for creating interactive CLI prompts. * [Install](#-install) * [Usage](#-usage) * [Enquirer](#-enquirer) * [Prompts](#-prompts) - [Built-in Prompts](#-prompts) - [Custom Prompts](#-custom-prompts) * [Key Bindings](#-key-bindings) * [Options](#-options) * [Release History](#-release-history) * [Performance](#-performance) * [About](#-about) <br> ## ❯ Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install enquirer --save ``` Install with [yarn](https://yarnpkg.com/en/): ```sh $ yarn add enquirer ``` <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/npm-install.gif" alt="Install Enquirer with NPM" width="750"> </p> _(Requires Node.js 8.6 or higher. Please let us know if you need support for an earlier version by creating an [issue](../../issues/new).)_ <br> ## ❯ Usage ### Single prompt The easiest way to get started with enquirer is to pass a [question object](#prompt-options) to the `prompt` method. ```js const { prompt } = require('enquirer'); const response = await prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); // { username: 'jonschlinkert' } ``` _(Examples with `await` need to be run inside an `async` function)_ ### Multiple prompts Pass an array of ["question" objects](#prompt-options) to run a series of prompts. ```js const response = await prompt([ { type: 'input', name: 'name', message: 'What is your name?' }, { type: 'input', name: 'username', message: 'What is your username?' } ]); console.log(response); // { name: 'Edward Chan', username: 'edwardmchan' } ``` ### Different ways to run enquirer #### 1. By importing the specific `built-in prompt` ```js const { Confirm } = require('enquirer'); const prompt = new Confirm({ name: 'question', message: 'Did you like enquirer?' }); prompt.run() .then(answer => console.log('Answer:', answer)); ``` #### 2. By passing the options to `prompt` ```js const { prompt } = require('enquirer'); prompt({ type: 'confirm', name: 'question', message: 'Did you like enquirer?' }) .then(answer => console.log('Answer:', answer)); ``` **Jump to**: [Getting Started](#-getting-started) · [Prompts](#-prompts) · [Options](#-options) · [Key Bindings](#-key-bindings) <br> ## ❯ Enquirer **Enquirer is a prompt runner** Add Enquirer to your JavaScript project with following line of code. ```js const Enquirer = require('enquirer'); ``` The main export of this library is the `Enquirer` class, which has methods and features designed to simplify running prompts. ```js const { prompt } = require('enquirer'); const question = [ { type: 'input', name: 'username', message: 'What is your username?' }, { type: 'password', name: 'password', message: 'What is your password?' } ]; let answers = await prompt(question); console.log(answers); ``` **Prompts control how values are rendered and returned** Each individual prompt is a class with special features and functionality for rendering the types of values you want to show users in the terminal, and subsequently returning the types of values you need to use in your application. 
**How can I customize prompts?** Below in this guide you will find information about creating [custom prompts](#-custom-prompts). For now, we'll focus on how to customize an existing prompt. All of the individual [prompt classes](#built-in-prompts) in this library are exposed as static properties on Enquirer. This allows them to be used directly without using `enquirer.prompt()`. Use this approach if you need to modify a prompt instance, or listen for events on the prompt. **Example** ```js const { Input } = require('enquirer'); const prompt = new Input({ name: 'username', message: 'What is your username?' }); prompt.run() .then(answer => console.log('Username:', answer)) .catch(console.error); ``` ### [Enquirer](index.js#L20) Create an instance of `Enquirer`. **Params** * `options` **{Object}**: (optional) Options to use with all prompts. * `answers` **{Object}**: (optional) Answers object to initialize with. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); ``` ### [register()](index.js#L42) Register a custom prompt type. **Params** * `type` **{String}** * `fn` **{Function|Prompt}**: `Prompt` class, or a function that returns a `Prompt` class. * `returns` **{Object}**: Returns the Enquirer instance **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); enquirer.register('customType', require('./custom-prompt')); ``` ### [prompt()](index.js#L78) Prompt function that takes a "question" object or array of question objects, and returns an object with responses from the user. **Params** * `questions` **{Array|Object}**: Options objects for one or more prompts to run. * `returns` **{Promise}**: Promise that returns an "answers" object with the user's responses. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); const response = await enquirer.prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); ``` ### [use()](index.js#L160) Use an enquirer plugin. **Params** * `plugin` **{Function}**: Plugin function that takes an instance of Enquirer. * `returns` **{Object}**: Returns the Enquirer instance. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); const plugin = enquirer => { // do stuff to enquire instance }; enquirer.use(plugin); ``` ### [Enquirer#prompt](index.js#L210) Prompt function that takes a "question" object or array of question objects, and returns an object with responses from the user. **Params** * `questions` **{Array|Object}**: Options objects for one or more prompts to run. * `returns` **{Promise}**: Promise that returns an "answers" object with the user's responses. **Example** ```js const { prompt } = require('enquirer'); const response = await prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); ``` <br> ## ❯ Prompts This section is about Enquirer's prompts: what they look like, how they work, how to run them, available options, and how to customize the prompts or create your own prompt concept. **Getting started with Enquirer's prompts** * [Prompt](#prompt) - The base `Prompt` class used by other prompts - [Prompt Options](#prompt-options) * [Built-in prompts](#built-in-prompts) * [Prompt Types](#prompt-types) - The base `Prompt` class used by other prompts * [Custom prompts](#%E2%9D%AF-custom-prompts) - Enquirer 2.0 introduced the concept of prompt "types", with the goal of making custom prompts easier than ever to create and use. 
### Prompt The base `Prompt` class is used to create all other prompts. ```js const { Prompt } = require('enquirer'); class MyCustomPrompt extends Prompt {} ``` See the documentation for [creating custom prompts](#-custom-prompts) to learn more about how this works. #### Prompt Options Each prompt takes an options object (aka "question" object), that implements the following interface: ```js { // required type: string | function, name: string | function, message: string | function | async function, // optional skip: boolean | function | async function, initial: string | function | async function, format: function | async function, result: function | async function, validate: function | async function, } ``` Each property of the options object is described below: | **Property** | **Required?** | **Type** | **Description** | | ------------ | ------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `type` | yes | `string\|function` | Enquirer uses this value to determine the type of prompt to run, but it's optional when prompts are run directly. | | `name` | yes | `string\|function` | Used as the key for the answer on the returned values (answers) object. | | `message` | yes | `string\|function` | The message to display when the prompt is rendered in the terminal. | | `skip` | no | `boolean\|function` | If `true` it will not ask that prompt. | | `initial` | no | `string\|function` | The default value to return if the user does not supply a value. | | `format` | no | `function` | Function to format user input in the terminal. | | `result` | no | `function` | Function to format the final submitted value before it's returned. | | `validate` | no | `function` | Function to validate the submitted value before it's returned. This function may return a boolean or a string. If a string is returned it will be used as the validation error message. | **Example usage** ```js const { prompt } = require('enquirer'); const question = { type: 'input', name: 'username', message: 'What is your username?' }; prompt(question) .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` <br> ### Built-in prompts * [AutoComplete Prompt](#autocomplete-prompt) * [BasicAuth Prompt](#basicauth-prompt) * [Confirm Prompt](#confirm-prompt) * [Form Prompt](#form-prompt) * [Input Prompt](#input-prompt) * [Invisible Prompt](#invisible-prompt) * [List Prompt](#list-prompt) * [MultiSelect Prompt](#multiselect-prompt) * [Numeral Prompt](#numeral-prompt) * [Password Prompt](#password-prompt) * [Quiz Prompt](#quiz-prompt) * [Survey Prompt](#survey-prompt) * [Scale Prompt](#scale-prompt) * [Select Prompt](#select-prompt) * [Sort Prompt](#sort-prompt) * [Snippet Prompt](#snippet-prompt) * [Toggle Prompt](#toggle-prompt) ### AutoComplete Prompt Prompt that auto-completes as the user types, and returns the selected value as a string. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/autocomplete-prompt.gif" alt="Enquirer AutoComplete Prompt" width="750"> </p> **Example Usage** ```js const { AutoComplete } = require('enquirer'); const prompt = new AutoComplete({ name: 'flavor', message: 'Pick your favorite flavor', limit: 10, initial: 2, choices: [ 'Almond', 'Apple', 'Banana', 'Blackberry', 'Blueberry', 'Cherry', 'Chocolate', 'Cinnamon', 'Coconut', 'Cranberry', 'Grape', 'Nougat', 'Orange', 'Pear', 'Pineapple', 'Raspberry', 'Strawberry', 'Vanilla', 'Watermelon', 'Wintergreen' ] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **AutoComplete Options** | Option | Type | Default | Description | | ----------- | ---------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | | `highlight` | `function` | `dim` version of primary style | The color to use when "highlighting" characters in the list that match user input. | | `multiple` | `boolean` | `false` | Allow multiple choices to be selected. | | `suggest` | `function` | Greedy match, returns true if choice message contains input string. | Function that filters choices. Takes user input and a choices array, and returns a list of matching choices. | | `initial` | `number` | 0 | Preselected item in the list of choices. | | `footer` | `function` | None | Function that displays [footer text](https://github.com/enquirer/enquirer/blob/6c2819518a1e2ed284242a99a685655fbaabfa28/examples/autocomplete/option-footer.js#L10) | **Related prompts** * [Select](#select-prompt) * [MultiSelect](#multiselect-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### BasicAuth Prompt Prompt that asks for username and password to authenticate the user. The default implementation of `authenticate` function in `BasicAuth` prompt is to compare the username and password with the values supplied while running the prompt. The implementer is expected to override the `authenticate` function with a custom logic such as making an API request to a server to authenticate the username and password entered and expect a token back. <p align="center"> <img src="https://user-images.githubusercontent.com/13731210/61570485-7ffd9c00-aaaa-11e9-857a-d47dc7008284.gif" alt="Enquirer BasicAuth Prompt" width="750"> </p> **Example Usage** ```js const { BasicAuth } = require('enquirer'); const prompt = new BasicAuth({ name: 'password', message: 'Please enter your password', username: 'rajat-sr', password: '123', showPassword: true }); prompt .run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Confirm Prompt Prompt that returns `true` or `false`. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/confirm-prompt.gif" alt="Enquirer Confirm Prompt" width="750"> </p> **Example Usage** ```js const { Confirm } = require('enquirer'); const prompt = new Confirm({ name: 'question', message: 'Want to answer?' 
}); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Numeral](#numeral-prompt) * [Password](#password-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Form Prompt Prompt that allows the user to enter and submit multiple values on a single terminal screen. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/form-prompt.gif" alt="Enquirer Form Prompt" width="750"> </p> **Example Usage** ```js const { Form } = require('enquirer'); const prompt = new Form({ name: 'user', message: 'Please provide the following information:', choices: [ { name: 'firstname', message: 'First Name', initial: 'Jon' }, { name: 'lastname', message: 'Last Name', initial: 'Schlinkert' }, { name: 'username', message: 'GitHub username', initial: 'jonschlinkert' } ] }); prompt.run() .then(value => console.log('Answer:', value)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Input Prompt Prompt that takes user input and returns a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/input-prompt.gif" alt="Enquirer Input Prompt" width="750"> </p> **Example Usage** ```js const { Input } = require('enquirer'); const prompt = new Input({ message: 'What is your username?', initial: 'jonschlinkert' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.log); ``` You can use [data-store](https://github.com/jonschlinkert/data-store) to store [input history](https://github.com/enquirer/enquirer/blob/master/examples/input/option-history.js) that the user can cycle through (see [source](https://github.com/enquirer/enquirer/blob/8407dc3579123df5e6e20215078e33bb605b0c37/lib/prompts/input.js)). **Related prompts** * [Confirm](#confirm-prompt) * [Numeral](#numeral-prompt) * [Password](#password-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Invisible Prompt Prompt that takes user input, hides it from the terminal, and returns a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/invisible-prompt.gif" alt="Enquirer Invisible Prompt" width="750"> </p> **Example Usage** ```js const { Invisible } = require('enquirer'); const prompt = new Invisible({ name: 'secret', message: 'What is your secret?' }); prompt.run() .then(answer => console.log('Answer:', { secret: answer })) .catch(console.error); ``` **Related prompts** * [Password](#password-prompt) * [Input](#input-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### List Prompt Prompt that returns a list of values, created by splitting the user input. The default split character is `,` with optional trailing whitespace. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/list-prompt.gif" alt="Enquirer List Prompt" width="750"> </p> **Example Usage** ```js const { List } = require('enquirer'); const prompt = new List({ name: 'keywords', message: 'Type comma-separated keywords' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Sort](#sort-prompt) * [Select](#select-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### MultiSelect Prompt Prompt that allows the user to select multiple items from a list of options. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/multiselect-prompt.gif" alt="Enquirer MultiSelect Prompt" width="750"> </p> **Example Usage** ```js const { MultiSelect } = require('enquirer'); const prompt = new MultiSelect({ name: 'value', message: 'Pick your favorite colors', limit: 7, choices: [ { name: 'aqua', value: '#00ffff' }, { name: 'black', value: '#000000' }, { name: 'blue', value: '#0000ff' }, { name: 'fuchsia', value: '#ff00ff' }, { name: 'gray', value: '#808080' }, { name: 'green', value: '#008000' }, { name: 'lime', value: '#00ff00' }, { name: 'maroon', value: '#800000' }, { name: 'navy', value: '#000080' }, { name: 'olive', value: '#808000' }, { name: 'purple', value: '#800080' }, { name: 'red', value: '#ff0000' }, { name: 'silver', value: '#c0c0c0' }, { name: 'teal', value: '#008080' }, { name: 'white', value: '#ffffff' }, { name: 'yellow', value: '#ffff00' } ] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); // Answer: ['aqua', 'blue', 'fuchsia'] ``` **Example key-value pairs** Optionally, pass a `result` function and use the `.map` method to return an object of key-value pairs of the selected names and values: [example](./examples/multiselect/option-result.js) ```js const { MultiSelect } = require('enquirer'); const prompt = new MultiSelect({ name: 'value', message: 'Pick your favorite colors', limit: 7, choices: [ { name: 'aqua', value: '#00ffff' }, { name: 'black', value: '#000000' }, { name: 'blue', value: '#0000ff' }, { name: 'fuchsia', value: '#ff00ff' }, { name: 'gray', value: '#808080' }, { name: 'green', value: '#008000' }, { name: 'lime', value: '#00ff00' }, { name: 'maroon', value: '#800000' }, { name: 'navy', value: '#000080' }, { name: 'olive', value: '#808000' }, { name: 'purple', value: '#800080' }, { name: 'red', value: '#ff0000' }, { name: 'silver', value: '#c0c0c0' }, { name: 'teal', value: '#008080' }, { name: 'white', value: '#ffffff' }, { name: 'yellow', value: '#ffff00' } ], result(names) { return this.map(names); } }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); // Answer: { aqua: '#00ffff', blue: '#0000ff', fuchsia: '#ff00ff' } ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Numeral Prompt Prompt that takes a number as input. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/numeral-prompt.gif" alt="Enquirer Numeral Prompt" width="750"> </p> **Example Usage** ```js const { NumberPrompt } = require('enquirer'); const prompt = new NumberPrompt({ name: 'number', message: 'Please enter a number' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Confirm](#confirm-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Password Prompt Prompt that takes user input and masks it in the terminal. Also see the [invisible prompt](#invisible-prompt) <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/password-prompt.gif" alt="Enquirer Password Prompt" width="750"> </p> **Example Usage** ```js const { Password } = require('enquirer'); const prompt = new Password({ name: 'password', message: 'What is your password?' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Invisible](#invisible-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Quiz Prompt Prompt that allows the user to play multiple-choice quiz questions. <p align="center"> <img src="https://user-images.githubusercontent.com/13731210/61567561-891d4780-aa6f-11e9-9b09-3d504abd24ed.gif" alt="Enquirer Quiz Prompt" width="750"> </p> **Example Usage** ```js const { Quiz } = require('enquirer'); const prompt = new Quiz({ name: 'countries', message: 'How many countries are there in the world?', choices: ['165', '175', '185', '195', '205'], correctChoice: 3 }); prompt .run() .then(answer => { if (answer.correct) { console.log('Correct!'); } else { console.log(`Wrong! Correct answer is ${answer.correctAnswer}`); } }) .catch(console.error); ``` **Quiz Options** | Option | Type | Required | Description | | ----------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------------------ | | `choices` | `array` | Yes | The list of possible answers to the quiz question. | | `correctChoice`| `number` | Yes | Index of the correct choice from the `choices` array. | **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Survey Prompt Prompt that allows the user to provide feedback for a list of questions. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/survey-prompt.gif" alt="Enquirer Survey Prompt" width="750"> </p> **Example Usage** ```js const { Survey } = require('enquirer'); const prompt = new Survey({ name: 'experience', message: 'Please rate your experience', scale: [ { name: '1', message: 'Strongly Disagree' }, { name: '2', message: 'Disagree' }, { name: '3', message: 'Neutral' }, { name: '4', message: 'Agree' }, { name: '5', message: 'Strongly Agree' } ], margin: [0, 0, 2, 1], choices: [ { name: 'interface', message: 'The website has a friendly interface.' }, { name: 'navigation', message: 'The website is easy to navigate.' }, { name: 'images', message: 'The website usually has good images.' }, { name: 'upload', message: 'The website makes it easy to upload images.' }, { name: 'colors', message: 'The website has a pleasing color palette.' 
} ] }); prompt.run() .then(value => console.log('ANSWERS:', value)) .catch(console.error); ``` **Related prompts** * [Scale](#scale-prompt) * [Snippet](#snippet-prompt) * [Select](#select-prompt) *** ### Scale Prompt A more compact version of the [Survey prompt](#survey-prompt), the Scale prompt allows the user to quickly provide feedback using a [Likert Scale](https://en.wikipedia.org/wiki/Likert_scale). <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/scale-prompt.gif" alt="Enquirer Scale Prompt" width="750"> </p> **Example Usage** ```js const { Scale } = require('enquirer'); const prompt = new Scale({ name: 'experience', message: 'Please rate your experience', scale: [ { name: '1', message: 'Strongly Disagree' }, { name: '2', message: 'Disagree' }, { name: '3', message: 'Neutral' }, { name: '4', message: 'Agree' }, { name: '5', message: 'Strongly Agree' } ], margin: [0, 0, 2, 1], choices: [ { name: 'interface', message: 'The website has a friendly interface.', initial: 2 }, { name: 'navigation', message: 'The website is easy to navigate.', initial: 2 }, { name: 'images', message: 'The website usually has good images.', initial: 2 }, { name: 'upload', message: 'The website makes it easy to upload images.', initial: 2 }, { name: 'colors', message: 'The website has a pleasing color palette.', initial: 2 } ] }); prompt.run() .then(value => console.log('ANSWERS:', value)) .catch(console.error); ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Select Prompt Prompt that allows the user to select from a list of options. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/select-prompt.gif" alt="Enquirer Select Prompt" width="750"> </p> **Example Usage** ```js const { Select } = require('enquirer'); const prompt = new Select({ name: 'color', message: 'Pick a flavor', choices: ['apple', 'grape', 'watermelon', 'cherry', 'orange'] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [MultiSelect](#multiselect-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Sort Prompt Prompt that allows the user to sort items in a list. **Example** In this [example](https://github.com/enquirer/enquirer/raw/master/examples/sort/prompt.js), custom styling is applied to the returned values to make it easier to see what's happening. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/sort-prompt.gif" alt="Enquirer Sort Prompt" width="750"> </p> **Example Usage** ```js const colors = require('ansi-colors'); const { Sort } = require('enquirer'); const prompt = new Sort({ name: 'colors', message: 'Sort the colors in order of preference', hint: 'Top is best, bottom is worst', numbered: true, choices: ['red', 'white', 'green', 'cyan', 'yellow'].map(n => ({ name: n, message: colors[n](n) })) }); prompt.run() .then(function(answer = []) { console.log(answer); console.log('Your preferred order of colors is:'); console.log(answer.map(key => colors[key](key)).join('\n')); }) .catch(console.error); ``` **Related prompts** * [List](#list-prompt) * [Select](#select-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Snippet Prompt Prompt that allows the user to replace placeholders in a snippet of code or text. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/snippet-prompt.gif" alt="Prompts" width="750"> </p> **Example Usage** ```js const semver = require('semver'); const { Snippet } = require('enquirer'); const prompt = new Snippet({ name: 'username', message: 'Fill out the fields in package.json', required: true, fields: [ { name: 'author_name', message: 'Author Name' }, { name: 'version', validate(value, state, item, index) { if (item && item.name === 'version' && !semver.valid(value)) { return prompt.styles.danger('version should be a valid semver value'); } return true; } } ], template: `{ "name": "\${name}", "description": "\${description}", "version": "\${version}", "homepage": "https://github.com/\${username}/\${name}", "author": "\${author_name} (https://github.com/\${username})", "repository": "\${username}/\${name}", "license": "\${license:ISC}" } ` }); prompt.run() .then(answer => console.log('Answer:', answer.result)) .catch(console.error); ``` **Related prompts** * [Survey](#survey-prompt) * [AutoComplete](#autocomplete-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Toggle Prompt Prompt that allows the user to toggle between two values then returns `true` or `false`. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/toggle-prompt.gif" alt="Enquirer Toggle Prompt" width="750"> </p> **Example Usage** ```js const { Toggle } = require('enquirer'); const prompt = new Toggle({ message: 'Want to answer?', enabled: 'Yep', disabled: 'Nope' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Confirm](#confirm-prompt) * [Input](#input-prompt) * [Sort](#sort-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Prompt Types There are 5 (soon to be 6!) type classes: * [ArrayPrompt](#arrayprompt) - [Options](#options) - [Properties](#properties) - [Methods](#methods) - [Choices](#choices) - [Defining choices](#defining-choices) - [Choice properties](#choice-properties) - [Related prompts](#related-prompts) * [AuthPrompt](#authprompt) * [BooleanPrompt](#booleanprompt) * DatePrompt (Coming Soon!) * [NumberPrompt](#numberprompt) * [StringPrompt](#stringprompt) Each type is a low-level class that may be used as a starting point for creating higher level prompts. Continue reading to learn how. ### ArrayPrompt The `ArrayPrompt` class is used for creating prompts that display a list of choices in the terminal. 
For example, Enquirer uses this class as the basis for the [Select](#select-prompt) and [Survey](#survey-prompt) prompts.

#### Options

In addition to the [options](#options) available to all prompts, Array prompts also support the following options.

| **Option**  | **Required?** | **Type**         | **Description**                                                                                                          |
| ----------- | ------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------ |
| `autofocus` | `no`          | `string\|number` | The index or name of the choice that should have focus when the prompt loads. Only one choice may have focus at a time. |
| `stdin`     | `no`          | `stream`         | The input stream to use for emitting keypress events. Defaults to `process.stdin`.                                      |
| `stdout`    | `no`          | `stream`         | The output stream to use for writing the prompt to the terminal. Defaults to `process.stdout`.                          |

#### Properties

Array prompts have the following instance properties and getters.

| **Property name** | **Type** | **Description** |
| ----------------- | -------- | --------------- |
| `choices`  | `array`  | Array of choices that have been normalized from choices passed on the prompt options. |
| `cursor`   | `number` | Position of the cursor relative to the _user input (string)_. |
| `enabled`  | `array`  | Returns an array of enabled choices. |
| `focused`  | `array`  | Returns the currently selected choice in the visible list of choices (equivalent to `prompt.choices[prompt.index]`). This is similar to the concept of focus in HTML and CSS. Focused choices are always visible (on-screen). When a list of choices is longer than the list of visible choices, and an off-screen choice is _focused_, the list will scroll to the focused choice and re-render. |
| `index`    | `number` | Position of the pointer in the _visible list (array) of choices_. |
| `limit`    | `number` | The number of choices to display on-screen. |
| `selected` | `array`  | Either a list of enabled choices (when `options.multiple` is true) or the currently focused choice. |
| `visible`  | `string` | |

#### Methods

| **Method**    | **Description** |
| ------------- | --------------- |
| `pointer()`   | Returns the visual symbol used to identify the choice that currently has focus. The `❯` symbol is often used for this. The pointer is not always visible, as with the `autocomplete` prompt. |
| `indicator()` | Returns the visual symbol that indicates whether or not a choice is checked/enabled. |
| `focus()`     | Sets focus on a choice, if it can be focused. |

#### Choices

Array prompts support the `choices` option, which is the array of choices users will be able to select from when rendered in the terminal.
**Type**: `string|object` **Example** ```js const { prompt } = require('enquirer'); const questions = [{ type: 'select', name: 'color', message: 'Favorite color?', initial: 1, choices: [ { name: 'red', message: 'Red', value: '#ff0000' }, //<= choice object { name: 'green', message: 'Green', value: '#00ff00' }, //<= choice object { name: 'blue', message: 'Blue', value: '#0000ff' } //<= choice object ] }]; let answers = await prompt(questions); console.log('Answer:', answers.color); ``` #### Defining choices Whether defined as a string or object, choices are normalized to the following interface: ```js { name: string; message: string | undefined; value: string | undefined; hint: string | undefined; disabled: boolean | string | undefined; } ``` **Example** ```js const question = { name: 'fruit', message: 'Favorite fruit?', choices: ['Apple', 'Orange', 'Raspberry'] }; ``` Normalizes to the following when the prompt is run: ```js const question = { name: 'fruit', message: 'Favorite fruit?', choices: [ { name: 'Apple', message: 'Apple', value: 'Apple' }, { name: 'Orange', message: 'Orange', value: 'Orange' }, { name: 'Raspberry', message: 'Raspberry', value: 'Raspberry' } ] }; ``` #### Choice properties The following properties are supported on `choice` objects. | **Option** | **Type** | **Description** | | ----------- | ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `name` | `string` | The unique key to identify a choice | | `message` | `string` | The message to display in the terminal. `name` is used when this is undefined. | | `value` | `string` | Value to associate with the choice. Useful for creating key-value pairs from user choices. `name` is used when this is undefined. | | `choices` | `array` | Array of "child" choices. | | `hint` | `string` | Help message to display next to a choice. | | `role` | `string` | Determines how the choice will be displayed. Currently the only role supported is `separator`. Additional roles may be added in the future (like `heading`, etc). Please create a [feature request] | | `enabled` | `boolean` | Enabled a choice by default. This is only supported when `options.multiple` is true or on prompts that support multiple choices, like [MultiSelect](#-multiselect). | | `disabled` | `boolean\|string` | Disable a choice so that it cannot be selected. This value may either be `true`, `false`, or a message to display. | | `indicator` | `string\|function` | Custom indicator to render for a choice (like a check or radio button). | #### Related prompts * [AutoComplete](#autocomplete-prompt) * [Form](#form-prompt) * [MultiSelect](#multiselect-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) *** ### AuthPrompt The `AuthPrompt` is used to create prompts to log in user using any authentication method. For example, Enquirer uses this class as the basis for the [BasicAuth Prompt](#basicauth-prompt). You can also find prompt examples in `examples/auth/` folder that utilizes `AuthPrompt` to create OAuth based authentication prompt or a prompt that authenticates using time-based OTP, among others. `AuthPrompt` has a factory function that creates an instance of `AuthPrompt` class and it expects an `authenticate` function, as an argument, which overrides the `authenticate` function of the `AuthPrompt` class. 
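As a minimal sketch of that factory pattern (the `token` choice and `expectedToken` option below are illustrative, not part of the bundled examples):

```js
const { AuthPrompt } = require('enquirer');

// AuthPrompt.create(fn) returns a prompt class whose `authenticate`
// method is replaced by `fn`. `value` holds what the user entered for
// each choice; `this` is the prompt instance, so options are reachable.
const TokenAuthPrompt = AuthPrompt.create(function authenticate(value, state) {
  return value.token === this.options.expectedToken; // illustrative option
});

const prompt = new TokenAuthPrompt({
  name: 'auth',
  message: 'Please enter your access token',
  expectedToken: 'secret-token', // illustrative value
  choices: [{ name: 'token', message: 'token' }]
});

prompt.run()
  .then(ok => console.log('Authenticated?', ok))
  .catch(console.error);
```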
#### Methods | **Method** | **Description** | | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `authenticate()` | Contain all the authentication logic. This function should be overridden to implement custom authentication logic. The default `authenticate` function throws an error if no other function is provided. | #### Choices Auth prompt supports the `choices` option, which is the similar to the choices used in [Form Prompt](#form-prompt). **Example** ```js const { AuthPrompt } = require('enquirer'); function authenticate(value, state) { if (value.username === this.options.username && value.password === this.options.password) { return true; } return false; } const CustomAuthPrompt = AuthPrompt.create(authenticate); const prompt = new CustomAuthPrompt({ name: 'password', message: 'Please enter your password', username: 'rajat-sr', password: '1234567', choices: [ { name: 'username', message: 'username' }, { name: 'password', message: 'password' } ] }); prompt .run() .then(answer => console.log('Authenticated?', answer)) .catch(console.error); ``` #### Related prompts * [BasicAuth Prompt](#basicauth-prompt) *** ### BooleanPrompt The `BooleanPrompt` class is used for creating prompts that display and return a boolean value. ```js const { BooleanPrompt } = require('enquirer'); const prompt = new BooleanPrompt({ header: '========================', message: 'Do you love enquirer?', footer: '========================', }); prompt.run() .then(answer => console.log('Selected:', answer)) .catch(console.error); ``` **Returns**: `boolean` *** ### NumberPrompt The `NumberPrompt` class is used for creating prompts that display and return a numerical value. ```js const { NumberPrompt } = require('enquirer'); const prompt = new NumberPrompt({ header: '************************', message: 'Input the Numbers:', footer: '************************', }); prompt.run() .then(answer => console.log('Numbers are:', answer)) .catch(console.error); ``` **Returns**: `string|number` (number, or number formatted as a string) *** ### StringPrompt The `StringPrompt` class is used for creating prompts that display and return a string value. ```js const { StringPrompt } = require('enquirer'); const prompt = new StringPrompt({ header: '************************', message: 'Input the String:', footer: '************************' }); prompt.run() .then(answer => console.log('String is:', answer)) .catch(console.error); ``` **Returns**: `string` <br> ## ❯ Custom prompts With Enquirer 2.0, custom prompts are easier than ever to create and use. **How do I create a custom prompt?** Custom prompts are created by extending either: * Enquirer's `Prompt` class * one of the built-in [prompts](#-prompts), or * low-level [types](#-types). <!-- Example: HaiKarate Custom Prompt --> ```js const { Prompt } = require('enquirer'); class HaiKarate extends Prompt { constructor(options = {}) { super(options); this.value = options.initial || 0; this.cursorHide(); } up() { this.value++; this.render(); } down() { this.value--; this.render(); } render() { this.clear(); // clear previously rendered prompt from the terminal this.write(`${this.state.message}: ${this.value}`); } } // Use the prompt by creating an instance of your custom prompt class. 
const prompt = new HaiKarate({ message: 'How many sprays do you want?', initial: 10 }); prompt.run() .then(answer => console.log('Sprays:', answer)) .catch(console.error); ``` If you want to be able to specify your prompt by `type` so that it may be used alongside other prompts, you will need to first create an instance of `Enquirer`. ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); ``` Then use the `.register()` method to add your custom prompt. ```js enquirer.register('haikarate', HaiKarate); ``` Now you can do the following when defining "questions". ```js let spritzer = require('cologne-drone'); let answers = await enquirer.prompt([ { type: 'haikarate', name: 'cologne', message: 'How many sprays do you need?', initial: 10, async onSubmit(name, value) { await spritzer.activate(value); //<= activate drone return value; } } ]); ``` <br> ## ❯ Key Bindings ### All prompts These key combinations may be used with all prompts. | **command** | **description** | | -------------------------------- | -------------------------------------- | | <kbd>ctrl</kbd> + <kbd>c</kbd> | Cancel the prompt. | | <kbd>ctrl</kbd> + <kbd>g</kbd> | Reset the prompt to its initial state. | <br> ### Move cursor These combinations may be used on prompts that support user input (eg. [input prompt](#input-prompt), [password prompt](#password-prompt), and [invisible prompt](#invisible-prompt)). | **command** | **description** | | ------------------------------ | ---------------------------------------- | | <kbd>left</kbd> | Move the cursor back one character. | | <kbd>right</kbd> | Move the cursor forward one character. | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move cursor to the start of the line | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move cursor to the end of the line | | <kbd>ctrl</kbd> + <kbd>b</kbd> | Move cursor back one character | | <kbd>ctrl</kbd> + <kbd>f</kbd> | Move cursor forward one character | | <kbd>ctrl</kbd> + <kbd>x</kbd> | Toggle between first and cursor position | <br> ### Edit Input These key combinations may be used on prompts that support user input (eg. [input prompt](#input-prompt), [password prompt](#password-prompt), and [invisible prompt](#invisible-prompt)). | **command** | **description** | | ------------------------------ | ---------------------------------------- | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move cursor to the start of the line | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move cursor to the end of the line | | <kbd>ctrl</kbd> + <kbd>b</kbd> | Move cursor back one character | | <kbd>ctrl</kbd> + <kbd>f</kbd> | Move cursor forward one character | | <kbd>ctrl</kbd> + <kbd>x</kbd> | Toggle between first and cursor position | <br> | **command (Mac)** | **command (Windows)** | **description** | | ----------------------------------- | -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | | <kbd>delete</kbd> | <kbd>backspace</kbd> | Delete one character to the left. | | <kbd>fn</kbd> + <kbd>delete</kbd> | <kbd>delete</kbd> | Delete one character to the right. | | <kbd>option</kbd> + <kbd>up</kbd> | <kbd>alt</kbd> + <kbd>up</kbd> | Scroll to the previous item in history ([Input prompt](#input-prompt) only, when [history is enabled](examples/input/option-history.js)). 
| | <kbd>option</kbd> + <kbd>down</kbd> | <kbd>alt</kbd> + <kbd>down</kbd> | Scroll to the next item in history ([Input prompt](#input-prompt) only, when [history is enabled](examples/input/option-history.js)). | ### Select choices These key combinations may be used on prompts that support _multiple_ choices, such as the [multiselect prompt](#multiselect-prompt), or the [select prompt](#select-prompt) when the `multiple` options is true. | **command** | **description** | | ----------------- | -------------------------------------------------------------------------------------------------------------------- | | <kbd>space</kbd> | Toggle the currently selected choice when `options.multiple` is true. | | <kbd>number</kbd> | Move the pointer to the choice at the given index. Also toggles the selected choice when `options.multiple` is true. | | <kbd>a</kbd> | Toggle all choices to be enabled or disabled. | | <kbd>i</kbd> | Invert the current selection of choices. | | <kbd>g</kbd> | Toggle the current choice group. | <br> ### Hide/show choices | **command** | **description** | | ------------------------------- | ---------------------------------------------- | | <kbd>fn</kbd> + <kbd>up</kbd> | Decrease the number of visible choices by one. | | <kbd>fn</kbd> + <kbd>down</kbd> | Increase the number of visible choices by one. | <br> ### Move/lock Pointer | **command** | **description** | | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------- | | <kbd>number</kbd> | Move the pointer to the choice at the given index. Also toggles the selected choice when `options.multiple` is true. | | <kbd>up</kbd> | Move the pointer up. | | <kbd>down</kbd> | Move the pointer down. | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move the pointer to the first _visible_ choice. | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move the pointer to the last _visible_ choice. | | <kbd>shift</kbd> + <kbd>up</kbd> | Scroll up one choice without changing pointer position (locks the pointer while scrolling). | | <kbd>shift</kbd> + <kbd>down</kbd> | Scroll down one choice without changing pointer position (locks the pointer while scrolling). | <br> | **command (Mac)** | **command (Windows)** | **description** | | -------------------------------- | --------------------- | ---------------------------------------------------------- | | <kbd>fn</kbd> + <kbd>left</kbd> | <kbd>home</kbd> | Move the pointer to the first choice in the choices array. | | <kbd>fn</kbd> + <kbd>right</kbd> | <kbd>end</kbd> | Move the pointer to the last choice in the choices array. | <br> ## ❯ Release History Please see [CHANGELOG.md](CHANGELOG.md). ## ❯ Performance ### System specs MacBook Pro, Intel Core i7, 2.5 GHz, 16 GB. ### Load time Time it takes for the module to load the first time (average of 3 runs): ``` enquirer: 4.013ms inquirer: 286.717ms ``` <br> ## ❯ About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Todo We're currently working on documentation for the following items. Please star and watch the repository for updates! 
* [ ] Customizing symbols * [ ] Customizing styles (palette) * [ ] Customizing rendered input * [ ] Customizing returned values * [ ] Customizing key bindings * [ ] Question validation * [ ] Choice validation * [ ] Skipping questions * [ ] Async choices * [ ] Async timers: loaders, spinners and other animations * [ ] Links to examples </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` ```sh $ yarn && yarn test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> #### Contributors | **Commits** | **Contributor** | | --- | --- | | 283 | [jonschlinkert](https://github.com/jonschlinkert) | | 82 | [doowb](https://github.com/doowb) | | 32 | [rajat-sr](https://github.com/rajat-sr) | | 20 | [318097](https://github.com/318097) | | 15 | [g-plane](https://github.com/g-plane) | | 12 | [pixelass](https://github.com/pixelass) | | 5 | [adityavyas611](https://github.com/adityavyas611) | | 5 | [satotake](https://github.com/satotake) | | 3 | [tunnckoCore](https://github.com/tunnckoCore) | | 3 | [Ovyerus](https://github.com/Ovyerus) | | 3 | [sw-yx](https://github.com/sw-yx) | | 2 | [DanielRuf](https://github.com/DanielRuf) | | 2 | [GabeL7r](https://github.com/GabeL7r) | | 1 | [AlCalzone](https://github.com/AlCalzone) | | 1 | [hipstersmoothie](https://github.com/hipstersmoothie) | | 1 | [danieldelcore](https://github.com/danieldelcore) | | 1 | [ImgBotApp](https://github.com/ImgBotApp) | | 1 | [jsonkao](https://github.com/jsonkao) | | 1 | [knpwrs](https://github.com/knpwrs) | | 1 | [yeskunall](https://github.com/yeskunall) | | 1 | [mischah](https://github.com/mischah) | | 1 | [renarsvilnis](https://github.com/renarsvilnis) | | 1 | [sbugert](https://github.com/sbugert) | | 1 | [stephencweiss](https://github.com/stephencweiss) | | 1 | [skellock](https://github.com/skellock) | | 1 | [whxaxes](https://github.com/whxaxes) | #### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) #### Credit Thanks to [derhuerst](https://github.com/derhuerst), creator of prompt libraries such as [prompt-skeleton](https://github.com/derhuerst/prompt-skeleton), which influenced some of the concepts we used in our prompts. #### License Copyright © 2018-present, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). # minizlib A fast zlib stream built on [minipass](http://npm.im/minipass) and Node.js's zlib binding. This module was created to serve the needs of [node-tar](http://npm.im/tar) and [minipass-fetch](http://npm.im/minipass-fetch). Brotli is supported in versions of node with a Brotli binding. ## How does this differ from the streams in `require('zlib')`? First, there are no convenience methods to compress or decompress a buffer. If you want those, use the built-in `zlib` module. This is only streams. 
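For instance, a small sketch of the difference, assuming a gzip round-trip of an in-memory buffer (the core module offers one-shot helpers such as `gzipSync`; minizlib always goes through a stream):

```js
const zlib = require('zlib')          // core module: has convenience methods
const minizlib = require('minizlib')  // stream-only

// Core convenience method: one synchronous call, returns a Buffer.
const compressed = zlib.gzipSync(Buffer.from('hello world'))

// minizlib: always a stream, but synchronous under the hood, so
// end() followed by read() hands back the decompressed result directly.
const gunzip = new minizlib.Gunzip()
gunzip.end(compressed)
console.log(gunzip.read().toString()) // 'hello world'
```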
That being said, Minipass streams to make it fairly easy to use as one-liners: `new zlib.Deflate().end(data).read()` will return the deflate compressed result. This module compresses and decompresses the data as fast as you feed it in. It is synchronous, and runs on the main process thread. Zlib and Brotli operations can be high CPU, but they're very fast, and doing it this way means much less bookkeeping and artificial deferral. Node's built in zlib streams are built on top of `stream.Transform`. They do the maximally safe thing with respect to consistent asynchrony, buffering, and backpressure. See [Minipass](http://npm.im/minipass) for more on the differences between Node.js core streams and Minipass streams, and the convenience methods provided by that class. ## Classes - Deflate - Inflate - Gzip - Gunzip - DeflateRaw - InflateRaw - Unzip - BrotliCompress (Node v10 and higher) - BrotliDecompress (Node v10 and higher) ## USAGE ```js const zlib = require('minizlib') const input = sourceOfCompressedData() const decode = new zlib.BrotliDecompress() const output = whereToWriteTheDecodedData() input.pipe(decode).pipe(output) ``` ## REPRODUCIBLE BUILDS To create reproducible gzip compressed files across different operating systems, set `portable: true` in the options. This causes minizlib to set the `OS` indicator in byte 9 of the extended gzip header to `0xFF` for 'unknown'. argparse ======== [![Build Status](https://secure.travis-ci.org/nodeca/argparse.svg?branch=master)](http://travis-ci.org/nodeca/argparse) [![NPM version](https://img.shields.io/npm/v/argparse.svg)](https://www.npmjs.org/package/argparse) CLI arguments parser for node.js. Javascript port of python's [argparse](http://docs.python.org/dev/library/argparse.html) module (original version 3.2). That's a full port, except some very rare options, recorded in issue tracker. **NB. Difference with original.** - Method names changed to camelCase. See [generated docs](http://nodeca.github.com/argparse/). - Use `defaultValue` instead of `default`. - Use `argparse.Const.REMAINDER` instead of `argparse.REMAINDER`, and similarly for constant values `OPTIONAL`, `ZERO_OR_MORE`, and `ONE_OR_MORE` (aliases for `nargs` values `'?'`, `'*'`, `'+'`, respectively), and `SUPPRESS`. Example ======= test.js file: ```javascript #!/usr/bin/env node 'use strict'; var ArgumentParser = require('../lib/argparse').ArgumentParser; var parser = new ArgumentParser({ version: '0.0.1', addHelp:true, description: 'Argparse example' }); parser.addArgument( [ '-f', '--foo' ], { help: 'foo bar' } ); parser.addArgument( [ '-b', '--bar' ], { help: 'bar foo' } ); parser.addArgument( '--baz', { help: 'baz bar' } ); var args = parser.parseArgs(); console.dir(args); ``` Display help: ``` $ ./test.js -h usage: example.js [-h] [-v] [-f FOO] [-b BAR] [--baz BAZ] Argparse example Optional arguments: -h, --help Show this help message and exit. -v, --version Show program's version number and exit. -f FOO, --foo FOO foo bar -b BAR, --bar BAR bar foo --baz BAZ baz bar ``` Parse arguments: ``` $ ./test.js -f=3 --bar=4 --baz 5 { foo: '3', bar: '4', baz: '5' } ``` More [examples](https://github.com/nodeca/argparse/tree/master/examples). ArgumentParser objects ====================== ``` new ArgumentParser({parameters hash}); ``` Creates a new ArgumentParser object. **Supported params:** - ```description``` - Text to display before the argument help. - ```epilog``` - Text to display after the argument help. - ```addHelp``` - Add a -h/–help option to the parser. 
(default: true) - ```argumentDefault``` - Set the global default value for arguments. (default: null) - ```parents``` - A list of ArgumentParser objects whose arguments should also be included. - ```prefixChars``` - The set of characters that prefix optional arguments. (default: ‘-‘) - ```formatterClass``` - A class for customizing the help output. - ```prog``` - The name of the program (default: `path.basename(process.argv[1])`) - ```usage``` - The string describing the program usage (default: generated) - ```conflictHandler``` - Usually unnecessary, defines strategy for resolving conflicting optionals. **Not supported yet** - ```fromfilePrefixChars``` - The set of characters that prefix files from which additional arguments should be read. Details in [original ArgumentParser guide](http://docs.python.org/dev/library/argparse.html#argumentparser-objects) addArgument() method ==================== ``` ArgumentParser.addArgument(name or flag or [name] or [flags...], {options}) ``` Defines how a single command-line argument should be parsed. - ```name or flag or [name] or [flags...]``` - Either a positional name (e.g., `'foo'`), a single option (e.g., `'-f'` or `'--foo'`), an array of a single positional name (e.g., `['foo']`), or an array of options (e.g., `['-f', '--foo']`). Options: - ```action``` - The basic type of action to be taken when this argument is encountered at the command line. - ```nargs```- The number of command-line arguments that should be consumed. - ```constant``` - A constant value required by some action and nargs selections. - ```defaultValue``` - The value produced if the argument is absent from the command line. - ```type``` - The type to which the command-line argument should be converted. - ```choices``` - A container of the allowable values for the argument. - ```required``` - Whether or not the command-line option may be omitted (optionals only). - ```help``` - A brief description of what the argument does. - ```metavar``` - A name for the argument in usage messages. - ```dest``` - The name of the attribute to be added to the object returned by parseArgs(). Details in [original add_argument guide](http://docs.python.org/dev/library/argparse.html#the-add-argument-method) Action (some details) ================ ArgumentParser objects associate command-line arguments with actions. These actions can do just about anything with the command-line arguments associated with them, though most actions simply add an attribute to the object returned by parseArgs(). The action keyword argument specifies how the command-line arguments should be handled. The supported actions are: - ```store``` - Just stores the argument’s value. This is the default action. - ```storeConst``` - Stores value, specified by the const keyword argument. (Note that the const keyword argument defaults to the rather unhelpful None.) The 'storeConst' action is most commonly used with optional arguments, that specify some sort of flag. - ```storeTrue``` and ```storeFalse``` - Stores values True and False respectively. These are special cases of 'storeConst'. - ```append``` - Stores a list, and appends each argument value to the list. This is useful to allow an option to be specified multiple times. - ```appendConst``` - Stores a list, and appends value, specified by the const keyword argument to the list. (Note, that the const keyword argument defaults is None.) The 'appendConst' action is typically used when multiple arguments need to store constants to the same list. 
- ```count``` - Counts the number of times a keyword argument occurs. For example, used for increasing verbosity levels. - ```help``` - Prints a complete help message for all the options in the current parser and then exits. By default a help action is automatically added to the parser. See ArgumentParser for details of how the output is created. - ```version``` - Prints version information and exit. Expects a `version=` keyword argument in the addArgument() call. Details in [original action guide](http://docs.python.org/dev/library/argparse.html#action) Sub-commands ============ ArgumentParser.addSubparsers() Many programs split their functionality into a number of sub-commands, for example, the svn program can invoke sub-commands like `svn checkout`, `svn update`, and `svn commit`. Splitting up functionality this way can be a particularly good idea when a program performs several different functions which require different kinds of command-line arguments. `ArgumentParser` supports creation of such sub-commands with `addSubparsers()` method. The `addSubparsers()` method is normally called with no arguments and returns an special action object. This object has a single method `addParser()`, which takes a command name and any `ArgumentParser` constructor arguments, and returns an `ArgumentParser` object that can be modified as usual. Example: sub_commands.js ```javascript #!/usr/bin/env node 'use strict'; var ArgumentParser = require('../lib/argparse').ArgumentParser; var parser = new ArgumentParser({ version: '0.0.1', addHelp:true, description: 'Argparse examples: sub-commands', }); var subparsers = parser.addSubparsers({ title:'subcommands', dest:"subcommand_name" }); var bar = subparsers.addParser('c1', {addHelp:true}); bar.addArgument( [ '-f', '--foo' ], { action: 'store', help: 'foo3 bar3' } ); var bar = subparsers.addParser( 'c2', {aliases:['co'], addHelp:true} ); bar.addArgument( [ '-b', '--bar' ], { action: 'store', type: 'int', help: 'foo3 bar3' } ); var args = parser.parseArgs(); console.dir(args); ``` Details in [original sub-commands guide](http://docs.python.org/dev/library/argparse.html#sub-commands) Contributors ============ - [Eugene Shkuropat](https://github.com/shkuropat) - [Paul Jacobson](https://github.com/hpaulj) [others](https://github.com/nodeca/argparse/graphs/contributors) License ======= Copyright (c) 2012 [Vitaly Puzrin](https://github.com/puzrin). Released under the MIT license. See [LICENSE](https://github.com/nodeca/argparse/blob/master/LICENSE) for details. # yargs-parser [![Build Status](https://travis-ci.org/yargs/yargs-parser.svg)](https://travis-ci.org/yargs/yargs-parser) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. 
<img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js var argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```sh node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js var argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```sh { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js var parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## API ### require('yargs-parser')(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. 
* `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```sh node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```sh node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```sh node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```sh node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```sh node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```sh node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```sh node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```sh node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```sh node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```sh node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. ### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```sh node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```sh node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```sh node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```sh node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. 
```sh node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```sh node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```sh node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```sh node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```sh node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```sh node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. _If disabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC ### Estraverse [![Build Status](https://secure.travis-ci.org/estools/estraverse.svg)](http://travis-ci.org/estools/estraverse) Estraverse ([estraverse](http://github.com/estools/estraverse)) is [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) traversal functions from [esmangle project](http://github.com/estools/esmangle). ### Documentation You can find usage docs at [wiki page](https://github.com/estools/estraverse/wiki/Usage). ### Example Usage The following code will output all variables declared at the root of a file. 
```javascript estraverse.traverse(ast, { enter: function (node, parent) { if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration') return estraverse.VisitorOption.Skip; }, leave: function (node, parent) { if (node.type == 'VariableDeclarator') console.log(node.id.name); } }); ``` We can use `this.skip`, `this.remove` and `this.break` functions instead of using Skip, Remove and Break. ```javascript estraverse.traverse(ast, { enter: function (node) { this.break(); } }); ``` And estraverse provides `estraverse.replace` function. When returning node from `enter`/`leave`, current node is replaced with it. ```javascript result = estraverse.replace(tree, { enter: function (node) { // Replace it with replaced. if (node.type === 'Literal') return replaced; } }); ``` By passing `visitor.keys` mapping, we can extend estraverse traversing functionality. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Extending the existing traversing rules. keys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } }); ``` By passing `visitor.fallback` option, we can control the behavior when encountering unknown nodes. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Iterating the child **nodes** of unknown nodes. fallback: 'iteration' }); ``` When `visitor.fallback` is a function, we can determine which keys to visit on each node. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Skip the `argument` property of each node fallback: function(node) { return Object.keys(node).filter(function(key) { return key !== 'argument'; }); } }); ``` ### License Copyright (C) 2012-2016 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. [![NPM version](https://img.shields.io/npm/v/esprima.svg)](https://www.npmjs.com/package/esprima) [![npm download](https://img.shields.io/npm/dm/esprima.svg)](https://www.npmjs.com/package/esprima) [![Build Status](https://img.shields.io/travis/jquery/esprima/master.svg)](https://travis-ci.org/jquery/esprima) [![Coverage Status](https://img.shields.io/codecov/c/github/jquery/esprima/master.svg)](https://codecov.io/github/jquery/esprima) **Esprima** ([esprima.org](http://esprima.org), BSD license) is a high performance, standard-compliant [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) parser written in ECMAScript (also popularly known as [JavaScript](https://en.wikipedia.org/wiki/JavaScript)). Esprima is created and maintained by [Ariya Hidayat](https://twitter.com/ariyahidayat), with the help of [many contributors](https://github.com/jquery/esprima/contributors). ### Features - Full support for ECMAScript 2017 ([ECMA-262 8th Edition](http://www.ecma-international.org/publications/standards/Ecma-262.htm)) - Sensible [syntax tree format](https://github.com/estree/estree/blob/master/es5.md) as standardized by [ESTree project](https://github.com/estree/estree) - Experimental support for [JSX](https://facebook.github.io/jsx/), a syntax extension for [React](https://facebook.github.io/react/) - Optional tracking of syntax node location (index-based and line-column) - [Heavily tested](http://esprima.org/test/ci.html) (~1500 [unit tests](https://github.com/jquery/esprima/tree/master/test/fixtures) with [full code coverage](https://codecov.io/github/jquery/esprima)) ### API Esprima can be used to perform [lexical analysis](https://en.wikipedia.org/wiki/Lexical_analysis) (tokenization) or [syntactic analysis](https://en.wikipedia.org/wiki/Parsing) (parsing) of a JavaScript program. A simple example on Node.js REPL: ```javascript > var esprima = require('esprima'); > var program = 'const answer = 42'; > esprima.tokenize(program); [ { type: 'Keyword', value: 'const' }, { type: 'Identifier', value: 'answer' }, { type: 'Punctuator', value: '=' }, { type: 'Numeric', value: '42' } ] > esprima.parseScript(program); { type: 'Program', body: [ { type: 'VariableDeclaration', declarations: [Object], kind: 'const' } ], sourceType: 'script' } ``` For more information, please read the [complete documentation](http://esprima.org/doc). # isobject [![NPM version](https://img.shields.io/npm/v/isobject.svg?style=flat)](https://www.npmjs.com/package/isobject) [![NPM downloads](https://img.shields.io/npm/dm/isobject.svg?style=flat)](https://npmjs.org/package/isobject) [![Build Status](https://img.shields.io/travis/jonschlinkert/isobject.svg?style=flat)](https://travis-ci.org/jonschlinkert/isobject) Returns true if the value is an object and not an array or null. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install isobject --save ``` Use [is-plain-object](https://github.com/jonschlinkert/is-plain-object) if you want only objects that are created by the `Object` constructor. 
## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install isobject ``` Install with [bower](http://bower.io/) ```sh $ bower install isobject ``` ## Usage ```js var isObject = require('isobject'); ``` **True** All of the following return `true`: ```js isObject({}); isObject(Object.create({})); isObject(Object.create(Object.prototype)); isObject(Object.create(null)); isObject({}); isObject(new Foo); isObject(/foo/); ``` **False** All of the following return `false`: ```js isObject(); isObject(function () {}); isObject(1); isObject([]); isObject(undefined); isObject(null); ``` ## Related projects You might also be interested in these projects: [merge-deep](https://www.npmjs.com/package/merge-deep): Recursively merge values in a javascript object. | [homepage](https://github.com/jonschlinkert/merge-deep) * [extend-shallow](https://www.npmjs.com/package/extend-shallow): Extend an object with the properties of additional objects. node.js/javascript util. | [homepage](https://github.com/jonschlinkert/extend-shallow) * [is-plain-object](https://www.npmjs.com/package/is-plain-object): Returns true if an object was created by the `Object` constructor. | [homepage](https://github.com/jonschlinkert/is-plain-object) * [kind-of](https://www.npmjs.com/package/kind-of): Get the native type of a value. | [homepage](https://github.com/jonschlinkert/kind-of) ## Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](https://github.com/jonschlinkert/isobject/issues/new). ## Building docs Generate readme and API documentation with [verb](https://github.com/verbose/verb): ```sh $ npm install verb && npm run docs ``` Or, if [verb](https://github.com/verbose/verb) is installed globally: ```sh $ verb ``` ## Running tests Install dev dependencies: ```sh $ npm install -d && npm test ``` ## Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](http://twitter.com/jonschlinkert) ## License Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT license](https://github.com/jonschlinkert/isobject/blob/master/LICENSE). *** _This file was generated by [verb](https://github.com/verbose/verb), v0.9.0, on April 25, 2016._ # lodash.truncate v4.4.2 The [lodash](https://lodash.com/) method `_.truncate` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.truncate ``` In Node.js: ```js var truncate = require('lodash.truncate'); ``` See the [documentation](https://lodash.com/docs#truncate) or [package source](https://github.com/lodash/lodash/blob/4.4.2-npm-packages/lodash.truncate) for more details. # lodash.merge v4.6.2 The [Lodash](https://lodash.com/) method `_.merge` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.merge ``` In Node.js: ```js var merge = require('lodash.merge'); ``` See the [documentation](https://lodash.com/docs#merge) or [package source](https://github.com/lodash/lodash/blob/4.6.2-npm-packages/lodash.merge) for more details. # hasurl [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] > Determine whether Node.js' native [WHATWG `URL`](https://nodejs.org/api/url.html#url_the_whatwg_url_api) implementation is available. ## Installation [Node.js](http://nodejs.org/) `>= 4` is required. 
To install, type this at the command line: ```shell npm install hasurl ``` ## Usage ```js const hasURL = require('hasurl'); if (hasURL()) { // supported } else { // fallback } ``` [npm-image]: https://img.shields.io/npm/v/hasurl.svg [npm-url]: https://npmjs.org/package/hasurl [travis-image]: https://img.shields.io/travis/stevenvachon/hasurl.svg [travis-url]: https://travis-ci.org/stevenvachon/hasurl # Glob Match files using the patterns the shell uses, like stars and stuff. [![Build Status](https://travis-ci.org/isaacs/node-glob.svg?branch=master)](https://travis-ci.org/isaacs/node-glob/) [![Build Status](https://ci.appveyor.com/api/projects/status/kd7f3yftf7unxlsx?svg=true)](https://ci.appveyor.com/project/isaacs/node-glob) [![Coverage Status](https://coveralls.io/repos/isaacs/node-glob/badge.svg?branch=master&service=github)](https://coveralls.io/github/isaacs/node-glob?branch=master) This is a glob implementation in JavaScript. It uses the `minimatch` library to do its matching. ![a fun cartoon logo made of glob characters](logo/glob.png) ## Usage Install with npm ``` npm i glob ``` ```javascript var glob = require("glob") // options is optional glob("**/*.js", options, function (er, files) { // files is an array of filenames. // If the `nonull` option is set, and nothing // was found, then files is ["**/*.js"] // er is an error object or null. }) ``` ## Glob Primer "Globs" are the patterns you type when you do stuff like `ls *.js` on the command line, or put `build/*` in a `.gitignore` file. Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with `{` and end with `}`, with any number of comma-delimited sections within. Braced sections may contain slash characters, so `a{/b/c,bcd}` would expand into `a/b/c` and `abcd`. The following characters have special magic meaning when used in a path portion: * `*` Matches 0 or more characters in a single path portion * `?` Matches 1 character * `[...]` Matches a range of characters, similar to a RegExp range. If the first character of the range is `!` or `^` then it matches any character not in the range. * `!(pattern|pattern|pattern)` Matches anything that does not match any of the patterns provided. * `?(pattern|pattern|pattern)` Matches zero or one occurrence of the patterns provided. * `+(pattern|pattern|pattern)` Matches one or more occurrences of the patterns provided. * `*(a|b|c)` Matches zero or more occurrences of the patterns provided * `@(pattern|pat*|pat?erN)` Matches exactly one of the patterns provided * `**` If a "globstar" is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories. ### Dots If a file or directory path portion has a `.` as the first character, then it will not match any glob pattern unless that pattern's corresponding path part also has a `.` as its first character. For example, the pattern `a/.*/c` would match the file at `a/.b/c`. However the pattern `a/*/c` would not, because `*` does not start with a dot character. You can make glob treat dots as normal characters by setting `dot:true` in the options. ### Basename Matching If you set `matchBase:true` in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, `*.js` would match `test/simple/basic.js`. ### Empty Sets If no matching files are found, then an empty array is returned. 
This differs from the shell, where the pattern itself is returned. For example: $ echo a*s*d*f a*s*d*f To get the bash-style behavior, set the `nonull:true` in the options. ### See Also: * `man sh` * `man bash` (Search for "Pattern Matching") * `man 3 fnmatch` * `man 5 gitignore` * [minimatch documentation](https://github.com/isaacs/minimatch) ## glob.hasMagic(pattern, [options]) Returns `true` if there are any special characters in the pattern, and `false` otherwise. Note that the options affect the results. If `noext:true` is set in the options object, then `+(a|b)` will not be considered a magic pattern. If the pattern has a brace expansion, like `a/{b/c,x/y}` then that is considered magical, unless `nobrace:true` is set in the options. ## glob(pattern, [options], cb) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * `cb` `{Function}` * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Perform an asynchronous glob search. ## glob.sync(pattern, [options]) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * return: `{Array<String>}` filenames found matching the pattern Perform a synchronous glob search. ## Class: glob.Glob Create a Glob object by instantiating the `glob.Glob` class. ```javascript var Glob = require("glob").Glob var mg = new Glob(pattern, options, cb) ``` It's an EventEmitter, and starts walking the filesystem to find matches immediately. ### new glob.Glob(pattern, [options], [cb]) * `pattern` `{String}` pattern to search for * `options` `{Object}` * `cb` `{Function}` Called when an error occurs, or matches are found * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Note that if the `sync` flag is set in the options, then matches will be immediately available on the `g.found` member. ### Properties * `minimatch` The minimatch object that the glob uses. * `options` The options object passed in. * `aborted` Boolean which is set to true when calling `abort()`. There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls. * `cache` Convenience object. Each field has the following possible values: * `false` - Path does not exist * `true` - Path exists * `'FILE'` - Path exists, and is not a directory * `'DIR'` - Path exists, and is a directory * `[file, entries, ...]` - Path exists, is a directory, and the array value is the results of `fs.readdir` * `statCache` Cache of `fs.stat` results, to prevent statting the same path multiple times. * `symlinks` A record of which paths are symbolic links, which is relevant in resolving `**` patterns. * `realpathCache` An optional object which is passed to `fs.realpath` to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used. ### Events * `end` When the matching is finished, this is emitted with all the matches found. If the `nonull` option is set, and no match was found, then the `matches` list contains the original pattern. The matches are sorted, unless the `nosort` flag is set. * `match` Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath. * `error` Emitted when an unexpected error is encountered, or whenever any fs error occurs if `options.strict` is set. * `abort` When `abort()` is called, this event is raised. 
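For example, a minimal sketch of wiring up these events (the pattern and option shown are illustrative):

```js
var Glob = require("glob").Glob

// Walk the tree for JS files and react to events as they arrive.
var g = new Glob("**/*.js", { nodir: true })

g.on("match", function (file) {
  console.log("matched:", file) // fired once per match, not deduplicated
})

g.on("end", function (files) {
  console.log("found", files.length, "files") // sorted unless nosort is set
})

g.on("error", function (er) {
  console.error("glob error:", er)
})
```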
### Methods * `pause` Temporarily stop the search * `resume` Resume the search * `abort` Stop the search forever ### Options All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications. All options are false by default, unless otherwise noted. All options are added to the Glob object, as well. If you are running many `glob` operations, you can pass a Glob object as the `options` argument to a subsequent operation to shortcut some `stat` and `readdir` calls. At the very least, you may pass in shared `symlinks`, `statCache`, `realpathCache`, and `cache` options, so that parallel glob operations will be sped up by sharing information about the filesystem. * `cwd` The current working directory in which to search. Defaults to `process.cwd()`. * `root` The place where patterns starting with `/` will be mounted onto. Defaults to `path.resolve(options.cwd, "/")` (`/` on Unix systems, and `C:\` or some such on Windows.) * `dot` Include `.dot` files in normal matches and `globstar` matches. Note that an explicit dot in a portion of the pattern will always match dot files. * `nomount` By default, a pattern starting with a forward-slash will be "mounted" onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior. * `mark` Add a `/` character to directory matches. Note that this requires additional stat calls. * `nosort` Don't sort the results. * `stat` Set to true to stat *all* results. This reduces performance somewhat, and is completely unnecessary, unless `readdir` is presumed to be an untrustworthy indicator of file existence. * `silent` When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the `silent` option to true to suppress these warnings. * `strict` When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the `strict` option to raise an error in these cases. * `cache` See `cache` property above. Pass in a previously generated cache object to save some fs calls. * `statCache` A cache of results of filesystem information, to prevent unnecessary stat calls. While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See "Race Conditions" below.) * `symlinks` A cache of known symbolic links. You may pass in a previously generated `symlinks` object to save `lstat` calls when resolving `**` matches. * `sync` DEPRECATED: use `glob.sync(pattern, opts)` instead. * `nounique` In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior. * `nonull` Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3). * `debug` Set to enable debug logging in minimatch and glob. * `nobrace` Do not expand `{a,b}` and `{1..3}` brace sets. * `noglobstar` Do not match `**` against multiple filenames. (Ie, treat it as a normal `*` instead.) * `noext` Do not match `+(a|b)` "extglob" patterns. * `nocase` Perform a case-insensitive match. 
Note: on case-insensitive filesystems, non-magic patterns will match by default, since `stat` and `readdir` will not raise errors. * `matchBase` Perform a basename-only match if the pattern does not contain any slash characters. That is, `*.js` would be treated as equivalent to `**/*.js`, matching all js files in all directories. * `nodir` Do not match directories, only files. (Note: to match *only* directories, simply put a `/` at the end of the pattern.) * `ignore` Add a pattern or an array of glob patterns to exclude matches. Note: `ignore` patterns are *always* in `dot:true` mode, regardless of any other settings. * `follow` Follow symlinked directories when expanding `**` patterns. Note that this can result in a lot of duplicate references in the presence of cyclic links. * `realpath` Set to true to call `fs.realpath` on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink) * `absolute` Set to true to always receive absolute paths for matched files. Unlike `realpath`, this also affects the values returned in the `match` event. * `fs` File-system object with Node's `fs` API. By default, the built-in `fs` module will be used. Set to a volume provided by a library like `memfs` to avoid using the "real" file-system. ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.3, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. Note that symlinked directories are not crawled as part of a `**`, though their contents may match against subsequent portions of the pattern. This prevents infinite loops and duplicates and the like. If an escaped pattern has no matches, and the `nonull` flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, `glob.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. ### Comments and Negation Previously, this module let you mark a pattern as a "comment" if it started with a `#` character, or a "negated" pattern if it started with a `!` character. These options were deprecated in version 5, and removed in version 6. To specify things that should not match, use the `ignore` option. ## Windows **Please only use forward-slashes in glob expressions.** Though windows uses either `/` or `\` as its path separator, only `/` characters are used by this glob implementation. You must use forward-slashes **only** in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators. Results from absolute patterns such as `/foo/*` are mounted onto the root setting using `path.join`. 
On windows, this will by default result in `/foo/*` matching `C:\foo\bar.txt`. ## Race Conditions Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such. As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result. As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls. Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem. ## Glob Logo Glob's logo was created by [Tanya Brassie](http://tanyabrassie.com/). Logo files can be found [here](https://github.com/isaacs/node-glob/tree/master/logo). The logo is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). ## Contributing Any change to behavior (including bugfixes) must come with a test. Patches that fail tests or reduce performance will be rejected. ``` # to run tests npm test # to re-generate test fixtures npm run test-regen # to benchmark against bash/zsh npm run bench # to profile javascript npm run prof ``` ![](oh-my-glob.gif) # ESLint Scope ESLint Scope is the [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) scope analyzer used in ESLint. It is a fork of [escope](http://github.com/estools/escope). ## Usage Install: ``` npm i eslint-scope --save ``` Example: ```js var eslintScope = require('eslint-scope'); var espree = require('espree'); var estraverse = require('estraverse'); var ast = espree.parse(code); var scopeManager = eslintScope.analyze(ast); var currentScope = scopeManager.acquire(ast); // global scope estraverse.traverse(ast, { enter: function(node, parent) { // do stuff if (/Function/.test(node.type)) { currentScope = scopeManager.acquire(node); // get current function scope } }, leave: function(node, parent) { if (/Function/.test(node.type)) { currentScope = currentScope.upper; // set to parent scope } // do stuff } }); ``` ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/eslint-scope/issues). ## Build Commands * `npm test` - run all linting and tests * `npm run lint` - run all linting ## License ESLint Scope is licensed under a permissive BSD 2-clause license. 
[![npm version](https://img.shields.io/npm/v/eslint.svg)](https://www.npmjs.com/package/eslint) [![Downloads](https://img.shields.io/npm/dm/eslint.svg)](https://www.npmjs.com/package/eslint) [![Build Status](https://github.com/eslint/eslint/workflows/CI/badge.svg)](https://github.com/eslint/eslint/actions) [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint.svg?type=shield)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint?ref=badge_shield) <br /> [![Open Collective Backers](https://img.shields.io/opencollective/backers/eslint)](https://opencollective.com/eslint) [![Open Collective Sponsors](https://img.shields.io/opencollective/sponsors/eslint)](https://opencollective.com/eslint) [![Follow us on Twitter](https://img.shields.io/twitter/follow/geteslint?label=Follow&style=social)](https://twitter.com/intent/user?screen_name=geteslint) # ESLint [Website](https://eslint.org) | [Configuring](https://eslint.org/docs/user-guide/configuring) | [Rules](https://eslint.org/docs/rules/) | [Contributing](https://eslint.org/docs/developer-guide/contributing) | [Reporting Bugs](https://eslint.org/docs/developer-guide/contributing/reporting-bugs) | [Code of Conduct](https://eslint.org/conduct) | [Twitter](https://twitter.com/geteslint) | [Mailing List](https://groups.google.com/group/eslint) | [Chat Room](https://eslint.org/chat) ESLint is a tool for identifying and reporting on patterns found in ECMAScript/JavaScript code. In many ways, it is similar to JSLint and JSHint with a few exceptions: * ESLint uses [Espree](https://github.com/eslint/espree) for JavaScript parsing. * ESLint uses an AST to evaluate patterns in code. * ESLint is completely pluggable, every single rule is a plugin and you can add more at runtime. ## Table of Contents 1. [Installation and Usage](#installation-and-usage) 2. [Configuration](#configuration) 3. [Code of Conduct](#code-of-conduct) 4. [Filing Issues](#filing-issues) 5. [Frequently Asked Questions](#faq) 6. [Releases](#releases) 7. [Security Policy](#security-policy) 8. [Semantic Versioning Policy](#semantic-versioning-policy) 9. [Stylistic Rule Updates](#stylistic-rule-updates) 10. [License](#license) 11. [Team](#team) 12. [Sponsors](#sponsors) 13. [Technology Sponsors](#technology-sponsors) ## <a name="installation-and-usage"></a>Installation and Usage Prerequisites: [Node.js](https://nodejs.org/) (`^10.12.0`, or `>=12.0.0`) built with SSL support. (If you are using an official Node.js distribution, SSL is always built in.) You can install ESLint using npm: ``` $ npm install eslint --save-dev ``` You should then set up a configuration file: ``` $ ./node_modules/.bin/eslint --init ``` After that, you can run ESLint on any file or directory like this: ``` $ ./node_modules/.bin/eslint yourfile.js ``` ## <a name="configuration"></a>Configuration After running `eslint --init`, you'll have a `.eslintrc` file in your directory. In it, you'll see some rules configured like this: ```json { "rules": { "semi": ["error", "always"], "quotes": ["error", "double"] } } ``` The names `"semi"` and `"quotes"` are the names of [rules](https://eslint.org/docs/rules) in ESLint. 
The first value is the error level of the rule and can be one of these values: * `"off"` or `0` - turn the rule off * `"warn"` or `1` - turn the rule on as a warning (doesn't affect exit code) * `"error"` or `2` - turn the rule on as an error (exit code will be 1) The three error levels allow you fine-grained control over how ESLint applies rules (for more configuration options and details, see the [configuration docs](https://eslint.org/docs/user-guide/configuring)). ## <a name="code-of-conduct"></a>Code of Conduct ESLint adheres to the [JS Foundation Code of Conduct](https://eslint.org/conduct). ## <a name="filing-issues"></a>Filing Issues Before filing an issue, please be sure to read the guidelines for what you're reporting: * [Bug Report](https://eslint.org/docs/developer-guide/contributing/reporting-bugs) * [Propose a New Rule](https://eslint.org/docs/developer-guide/contributing/new-rules) * [Proposing a Rule Change](https://eslint.org/docs/developer-guide/contributing/rule-changes) * [Request a Change](https://eslint.org/docs/developer-guide/contributing/changes) ## <a name="faq"></a>Frequently Asked Questions ### I'm using JSCS, should I migrate to ESLint? Yes. [JSCS has reached end of life](https://eslint.org/blog/2016/07/jscs-end-of-life) and is no longer supported. We have prepared a [migration guide](https://eslint.org/docs/user-guide/migrating-from-jscs) to help you convert your JSCS settings to an ESLint configuration. We are now at or near 100% compatibility with JSCS. If you try ESLint and believe we are not yet compatible with a JSCS rule/configuration, please create an issue (mentioning that it is a JSCS compatibility issue) and we will evaluate it as per our normal process. ### Does Prettier replace ESLint? No, ESLint does both traditional linting (looking for problematic patterns) and style checking (enforcement of conventions). You can use ESLint for everything, or you can combine both using Prettier to format your code and ESLint to catch possible errors. ### Why can't ESLint find my plugins? * Make sure your plugins (and ESLint) are both in your project's `package.json` as devDependencies (or dependencies, if your project uses ESLint at runtime). * Make sure you have run `npm install` and all your dependencies are installed. * Make sure your plugins' peerDependencies have been installed as well. You can use `npm view eslint-plugin-myplugin peerDependencies` to see what peer dependencies `eslint-plugin-myplugin` has. ### Does ESLint support JSX? Yes, ESLint natively supports parsing JSX syntax (this must be enabled in [configuration](https://eslint.org/docs/user-guide/configuring)). Please note that supporting JSX syntax *is not* the same as supporting React. React applies specific semantics to JSX syntax that ESLint doesn't recognize. We recommend using [eslint-plugin-react](https://www.npmjs.com/package/eslint-plugin-react) if you are using React and want React semantics. ### What ECMAScript versions does ESLint support? ESLint has full support for ECMAScript 3, 5 (default), 2015, 2016, 2017, 2018, 2019, and 2020. You can set your desired ECMAScript syntax (and other settings, like global variables or your target environments) through [configuration](https://eslint.org/docs/user-guide/configuring). ### What about experimental features? ESLint's parser only officially supports the latest final ECMAScript standard. 
We will make changes to core rules in order to avoid crashes on stage 3 ECMAScript syntax proposals (as long as they are implemented using the correct experimental ESTree syntax). We may make changes to core rules to better work with language extensions (such as JSX, Flow, and TypeScript) on a case-by-case basis. In other cases (including if rules need to warn on more or fewer cases due to new syntax, rather than just not crashing), we recommend you use other parsers and/or rule plugins. If you are using Babel, you can use the [babel-eslint](https://github.com/babel/babel-eslint) parser and [eslint-plugin-babel](https://github.com/babel/eslint-plugin-babel) to use any option available in Babel. Once a language feature has been adopted into the ECMAScript standard (stage 4 according to the [TC39 process](https://tc39.github.io/process-document/)), we will accept issues and pull requests related to the new feature, subject to our [contributing guidelines](https://eslint.org/docs/developer-guide/contributing). Until then, please use the appropriate parser and plugin(s) for your experimental feature. ### Where to ask for help? Join our [Mailing List](https://groups.google.com/group/eslint) or [Chatroom](https://eslint.org/chat). ### Why doesn't ESLint lock dependency versions? Lock files like `package-lock.json` are helpful for deployed applications. They ensure that dependencies are consistent between environments and across deployments. Packages like `eslint` that get published to the npm registry do not include lock files. `npm install eslint` as a user will respect version constraints in ESLint's `package.json`. ESLint and its dependencies will be included in the user's lock file if one exists, but ESLint's own lock file would not be used. We intentionally don't lock dependency versions so that we have the latest compatible dependency versions in development and CI that our users get when installing ESLint in a project. The Twilio blog has a [deeper dive](https://www.twilio.com/blog/lockfiles-nodejs) to learn more. ## <a name="releases"></a>Releases We have scheduled releases every two weeks on Friday or Saturday. You can follow a [release issue](https://github.com/eslint/eslint/issues?q=is%3Aopen+is%3Aissue+label%3Arelease) for updates about the scheduling of any particular release. ## <a name="security-policy"></a>Security Policy ESLint takes security seriously. We work hard to ensure that ESLint is safe for everyone and that security issues are addressed quickly and responsibly. Read the full [security policy](https://github.com/eslint/.github/blob/master/SECURITY.md). ## <a name="semantic-versioning-policy"></a>Semantic Versioning Policy ESLint follows [semantic versioning](https://semver.org). However, due to the nature of ESLint as a code quality tool, it's not always clear when a minor or major version bump occurs. To help clarify this for everyone, we've defined the following semantic versioning policy for ESLint: * Patch release (intended to not break your lint build) * A bug fix in a rule that results in ESLint reporting fewer linting errors. * A bug fix to the CLI or core (including formatters). * Improvements to documentation. * Non-user-facing changes such as refactoring code, adding, deleting, or modifying tests, and increasing test coverage. * Re-releasing after a failed release (i.e., publishing a release that doesn't work for anyone). * Minor release (might break your lint build) * A bug fix in a rule that results in ESLint reporting more linting errors. 
* A new rule is created. * A new option to an existing rule that does not result in ESLint reporting more linting errors by default. * A new addition to an existing rule to support a newly-added language feature (within the last 12 months) that will result in ESLint reporting more linting errors by default. * An existing rule is deprecated. * A new CLI capability is created. * New capabilities to the public API are added (new classes, new methods, new arguments to existing methods, etc.). * A new formatter is created. * `eslint:recommended` is updated and will result in strictly fewer linting errors (e.g., rule removals). * Major release (likely to break your lint build) * `eslint:recommended` is updated and may result in new linting errors (e.g., rule additions, most rule option updates). * A new option to an existing rule that results in ESLint reporting more linting errors by default. * An existing formatter is removed. * Part of the public API is removed or changed in an incompatible way. The public API includes: * Rule schemas * Configuration schema * Command-line options * Node.js API * Rule, formatter, parser, plugin APIs According to our policy, any minor update may report more linting errors than the previous release (ex: from a bug fix). As such, we recommend using the tilde (`~`) in `package.json` e.g. `"eslint": "~3.1.0"` to guarantee the results of your builds. ## <a name="stylistic-rule-updates"></a>Stylistic Rule Updates Stylistic rules are frozen according to [our policy](https://eslint.org/blog/2020/05/changes-to-rules-policies) on how we evaluate new rules and rule changes. This means: * **Bug fixes**: We will still fix bugs in stylistic rules. * **New ECMAScript features**: We will also make sure stylistic rules are compatible with new ECMAScript features. * **New options**: We will **not** add any new options to stylistic rules unless an option is the only way to fix a bug or support a newly-added ECMAScript feature. ## <a name="license"></a>License [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint.svg?type=large)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint?ref=badge_large) ## <a name="team"></a>Team These folks keep the project moving and are resources for help. <!-- NOTE: This section is autogenerated. Do not manually edit.--> <!--teamstart--> ### Technical Steering Committee (TSC) The people who manage releases, review feature requests, and meet regularly to ensure ESLint is properly maintained. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/nzakas"> <img src="https://github.com/nzakas.png?s=75" width="75" height="75"><br /> Nicholas C. Zakas </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/btmills"> <img src="https://github.com/btmills.png?s=75" width="75" height="75"><br /> Brandon Mills </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/mdjermanovic"> <img src="https://github.com/mdjermanovic.png?s=75" width="75" height="75"><br /> Milos Djermanovic </a> </td></tr></tbody></table> ### Reviewers The people who review and implement new features. 
<table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/mysticatea"> <img src="https://github.com/mysticatea.png?s=75" width="75" height="75"><br /> Toru Nagashima </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/aladdin-add"> <img src="https://github.com/aladdin-add.png?s=75" width="75" height="75"><br /> 薛定谔的猫 </a> </td></tr></tbody></table> ### Committers The people who review and fix bugs and help triage issues. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/brettz9"> <img src="https://github.com/brettz9.png?s=75" width="75" height="75"><br /> Brett Zamir </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/bmish"> <img src="https://github.com/bmish.png?s=75" width="75" height="75"><br /> Bryan Mishkin </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/g-plane"> <img src="https://github.com/g-plane.png?s=75" width="75" height="75"><br /> Pig Fang </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/anikethsaha"> <img src="https://github.com/anikethsaha.png?s=75" width="75" height="75"><br /> Anix </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/yeonjuan"> <img src="https://github.com/yeonjuan.png?s=75" width="75" height="75"><br /> YeonJuan </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/snitin315"> <img src="https://github.com/snitin315.png?s=75" width="75" height="75"><br /> Nitin Kumar </a> </td></tr></tbody></table> <!--teamend--> ## <a name="sponsors"></a>Sponsors The following companies, organizations, and individuals support ESLint's ongoing maintenance and development. [Become a Sponsor](https://opencollective.com/eslint) to get your logo on our README and website. <!-- NOTE: This section is autogenerated. 
Do not manually edit.--> <!--sponsorsstart--> <h3>Platinum Sponsors</h3> <p><a href="https://automattic.com"><img src="https://images.opencollective.com/photomatt/d0ef3e1/logo.png" alt="Automattic" height="undefined"></a></p><h3>Gold Sponsors</h3> <p><a href="https://nx.dev"><img src="https://images.opencollective.com/nx/0efbe42/logo.png" alt="Nx (by Nrwl)" height="96"></a> <a href="https://google.com/chrome"><img src="https://images.opencollective.com/chrome/dc55bd4/logo.png" alt="Chrome's Web Framework & Tools Performance Fund" height="96"></a> <a href="https://www.salesforce.com"><img src="https://images.opencollective.com/salesforce/ca8f997/logo.png" alt="Salesforce" height="96"></a> <a href="https://www.airbnb.com/"><img src="https://images.opencollective.com/airbnb/d327d66/logo.png" alt="Airbnb" height="96"></a> <a href="https://coinbase.com"><img src="https://avatars.githubusercontent.com/u/1885080?v=4" alt="Coinbase" height="96"></a> <a href="https://substack.com/"><img src="https://avatars.githubusercontent.com/u/53023767?v=4" alt="Substack" height="96"></a></p><h3>Silver Sponsors</h3> <p><a href="https://retool.com/"><img src="https://images.opencollective.com/retool/98ea68e/logo.png" alt="Retool" height="64"></a> <a href="https://liftoff.io/"><img src="https://images.opencollective.com/liftoff/5c4fa84/logo.png" alt="Liftoff" height="64"></a></p><h3>Bronze Sponsors</h3> <p><a href="https://www.crosswordsolver.org/anagram-solver/"><img src="https://images.opencollective.com/anagram-solver/2666271/logo.png" alt="Anagram Solver" height="32"></a> <a href="null"><img src="https://images.opencollective.com/bugsnag-stability-monitoring/c2cef36/logo.png" alt="Bugsnag Stability Monitoring" height="32"></a> <a href="https://mixpanel.com"><img src="https://images.opencollective.com/mixpanel/cd682f7/logo.png" alt="Mixpanel" height="32"></a> <a href="https://www.vpsserver.com"><img src="https://images.opencollective.com/vpsservercom/logo.png" alt="VPS Server" height="32"></a> <a href="https://icons8.com"><img src="https://images.opencollective.com/icons8/7fa1641/logo.png" alt="Icons8: free icons, photos, illustrations, and music" height="32"></a> <a href="https://discord.com"><img src="https://images.opencollective.com/discordapp/f9645d9/logo.png" alt="Discord" height="32"></a> <a href="https://themeisle.com"><img src="https://images.opencollective.com/themeisle/d5592fe/logo.png" alt="ThemeIsle" height="32"></a> <a href="https://www.firesticktricks.com"><img src="https://images.opencollective.com/fire-stick-tricks/b8fbe2c/logo.png" alt="Fire Stick Tricks" height="32"></a> <a href="https://www.practiceignition.com"><img src="https://avatars.githubusercontent.com/u/5753491?v=4" alt="Practice Ignition" height="32"></a></p> <!--sponsorsend--> ## <a name="technology-sponsors"></a>Technology Sponsors * Site search ([eslint.org](https://eslint.org)) is sponsored by [Algolia](https://www.algolia.com) * Hosting for ([eslint.org](https://eslint.org)) is sponsored by [Netlify](https://www.netlify.com) * Password management is sponsored by [1Password](https://www.1password.com) # once Only call a function once. 
## usage

```javascript
var once = require('once')

function load (file, cb) {
  cb = once(cb)
  loader.load(file)
  loader.once('load', cb)
  loader.once('error', cb)
}
```

Or add to the Function.prototype in a responsible way:

```javascript
// only has to be done once
require('once').proto()

function load (file, cb) {
  cb = cb.once()
  loader.load(file)
  loader.once('load', cb)
  loader.once('error', cb)
}
```

Ironically, the prototype feature makes this module twice as complicated as necessary.

To check whether your function has been called, use `fn.called`. Once the function is called for the first time, the return value of the original function is saved in `fn.value`, and subsequent calls will continue to return this value.

```javascript
var once = require('once')

function load (cb) {
  cb = once(cb)
  var stream = createStream()
  stream.once('data', cb)
  stream.once('end', function () {
    if (!cb.called) cb(new Error('not found'))
  })
}
```

## `once.strict(func)`

Throw an error if the function is called twice.

Some functions are expected to be called only once. Using `once` for them would potentially hide logical errors.

In the example below, the `greet` function has to call the callback only once:

```javascript
function greet (name, cb) {
  // return is missing from the if statement
  // when no name is passed, the callback is called twice
  if (!name) cb('Hello anonymous')
  cb('Hello ' + name)
}

function log (msg) {
  console.log(msg)
}

// this will print 'Hello anonymous' but the logical error will be missed
greet(null, once(log))

// once.strict will print 'Hello anonymous' and throw an error when the callback is called a second time
greet(null, once.strict(log))
```

# eslint-visitor-keys

[![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys)

Constants and utilities for visitor keys used to traverse an AST.

## 💿 Installation

Use [npm] to install.

```bash
$ npm install eslint-visitor-keys
```

### Requirements

- [Node.js] 4.0.0 or later.

## 📖 Usage

```js
const evk = require("eslint-visitor-keys")
```

### evk.KEYS

> type: `{ [type: string]: string[] | undefined }`

Visitor keys. These keys are frozen.

This is an object. Keys are the types of [ESTree] nodes. Their values are arrays of property names which have child nodes.

For example:

```
console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"]
```

### evk.getKeys(node)

> type: `(node: object) => string[]`

Get the visitor keys of a given AST node.

This is similar to `Object.keys(node)` of ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`.

This will be used to traverse unknown nodes.

For example:

```
const node = {
    type: "AssignmentExpression",
    left: { type: "Identifier", name: "foo" },
    right: { type: "Literal", value: 0 }
}
console.log(evk.getKeys(node)) // → ["type", "left", "right"]
```

### evk.unionWith(additionalKeys)

> type: `(additionalKeys: object) => { [type: string]: string[] | undefined }`

Make the union of `evk.KEYS` and the given keys.
- The order of keys: `additionalKeys` comes first, then `evk.KEYS` is concatenated after it.
- Duplicated keys are removed, keeping the first occurrence.

For example:

```
console.log(evk.unionWith({
    MethodDefinition: ["decorators"]
})) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... }
```

## 📰 Change log

See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases).

## 🍻 Contributing

Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/).

### Development commands

- `npm test` runs tests and measures code coverage.
- `npm run lint` checks source code with ESLint.
- `npm run coverage` opens the code coverage report of the previous test with your default browser.
- `npm run release` publishes this package to the [npm] registry.

[npm]: https://www.npmjs.com/
[Node.js]: https://nodejs.org/en/
[ESTree]: https://github.com/estree/estree

# eslint-visitor-keys

[![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys)

Constants and utilities for visitor keys used to traverse an AST.

## 💿 Installation

Use [npm] to install.

```bash
$ npm install eslint-visitor-keys
```

### Requirements

- [Node.js] 10.0.0 or later.

## 📖 Usage

```js
const evk = require("eslint-visitor-keys")
```

### evk.KEYS

> type: `{ [type: string]: string[] | undefined }`

Visitor keys. These keys are frozen.

This is an object. Keys are the types of [ESTree] nodes. Their values are arrays of property names which have child nodes.

For example:

```
console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"]
```

### evk.getKeys(node)

> type: `(node: object) => string[]`

Get the visitor keys of a given AST node.

This is similar to `Object.keys(node)` of ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`.

This will be used to traverse unknown nodes.

For example:

```
const node = {
    type: "AssignmentExpression",
    left: { type: "Identifier", name: "foo" },
    right: { type: "Literal", value: 0 }
}
console.log(evk.getKeys(node)) // → ["type", "left", "right"]
```

### evk.unionWith(additionalKeys)

> type: `(additionalKeys: object) => { [type: string]: string[] | undefined }`

Make the union of `evk.KEYS` and the given keys.

- The order of keys: `additionalKeys` comes first, then `evk.KEYS` is concatenated after it.
- Duplicated keys are removed, keeping the first occurrence.

For example:

```
console.log(evk.unionWith({
    MethodDefinition: ["decorators"]
})) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... }
```

## 📰 Change log

See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases).

## 🍻 Contributing

Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/).

### Development commands

- `npm test` runs tests and measures code coverage.
- `npm run lint` checks source code with ESLint.
- `npm run coverage` opens the code coverage report of the previous test with your default browser.
- `npm run release` publishes this package to the [npm] registry.
[npm]: https://www.npmjs.com/
[Node.js]: https://nodejs.org/en/
[ESTree]: https://github.com/estree/estree

# prelude.ls

[![Build Status](https://travis-ci.org/gkz/prelude-ls.png?branch=master)](https://travis-ci.org/gkz/prelude-ls)

prelude.ls is a functionally oriented utility library. It is powerful and flexible. Almost all of its functions are curried. It is written in, and is the recommended base library for, <a href="http://livescript.net">LiveScript</a>.

See **[the prelude.ls site](http://preludels.com)** for examples, a reference, and more.

You can install via npm: `npm install prelude-ls`

### Development

`make test` to test

`make build` to build `lib` from `src`

`make build-browser` to build browser versions

# color-name

A JSON with color names and their values. Based on http://dev.w3.org/csswg/css-color/#named-colors.

[![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/)

```js
var colors = require('color-name');
colors.red //[255,0,0]
```

<a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a>

# minipass

A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough)

[It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers.

Supports `pipe()`ing (including multi-`pipe()` and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea.

There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory.

There is also no `unpipe()` method. Once you start piping, there is no stopping it!

If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called.

`objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set.

This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. Once you're done transforming the data however you want, call `super.write()` with the transform output.
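For instance, a minimal sketch of such a subclass (illustrative only; the `Upcase` name is made up here):

```js
const Minipass = require('minipass')

// Upper-case every string chunk before passing it along.
class Upcase extends Minipass {
  write (chunk, encoding, callback) {
    return super.write(String(chunk).toUpperCase(), encoding, callback)
  }
}

const up = new Upcase({ encoding: 'utf8' })
up.on('data', chunk => console.log(chunk)) // logs 'HELLO'
up.write('hello')
up.end()
```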
For some examples of streams that extend Minipass in various ways, check out: - [minizlib](http://npm.im/minizlib) - [fs-minipass](http://npm.im/fs-minipass) - [tar](http://npm.im/tar) - [minipass-collect](http://npm.im/minipass-collect) - [minipass-flush](http://npm.im/minipass-flush) - [minipass-pipeline](http://npm.im/minipass-pipeline) - [tap](http://npm.im/tap) - [tap-parser](http://npm.im/tap-parser) - [treport](http://npm.im/treport) - [minipass-fetch](http://npm.im/minipass-fetch) - [pacote](http://npm.im/pacote) - [make-fetch-happen](http://npm.im/make-fetch-happen) - [cacache](http://npm.im/cacache) - [ssri](http://npm.im/ssri) - [npm-registry-fetch](http://npm.im/npm-registry-fetch) - [minipass-json-stream](http://npm.im/minipass-json-stream) - [minipass-sized](http://npm.im/minipass-sized) ## Differences from Node.js Streams There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams. Please read these caveats if you are familiar with node-core streams and intend to use Minipass streams in your programs. ### Timing Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer. Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them. This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately. However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time. This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms. ### No High/Low Water Marks Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value. Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns). If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it. ### Hazards of Buffering (or: Why Minipass Is So Fast) Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller. This minimizes buffering. 
Consider this case: ```js const {PassThrough} = require('stream') const p1 = new PassThrough({ highWaterMark: 1024 }) const p2 = new PassThrough({ highWaterMark: 1024 }) const p3 = new PassThrough({ highWaterMark: 1024 }) const p4 = new PassThrough({ highWaterMark: 1024 }) p1.pipe(p2).pipe(p3).pipe(p4) p4.on('data', () => console.log('made it through')) // this returns false and buffers, then writes to p2 on next tick (1) // p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2) // p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3) // p4 returns false and buffers, pausing p3, then emits 'data' and 'drain' // on next tick (4) // p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and // 'drain' on next tick (5) // p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6) // p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next // tick (7) p1.write(Buffer.alloc(2048)) // returns false ``` Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through! Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket. Consider the Minipass case: ```js const m1 = new Minipass() const m2 = new Minipass() const m3 = new Minipass() const m4 = new Minipass() m1.pipe(m2).pipe(m3).pipe(m4) m4.on('data', () => console.log('made it through')) // m1 is flowing, so it writes the data to m2 immediately // m2 is flowing, so it writes the data to m3 immediately // m3 is flowing, so it writes the data to m4 immediately // m4 is flowing, so it fires the 'data' event immediately, returns true // m4's write returned true, so m3 is still flowing, returns true // m3's write returned true, so m2 is still flowing, returns true // m2's write returned true, so m1 is still flowing, returns true // No event deferrals or buffering along the way! m1.write(Buffer.alloc(2048)) // returns true ``` It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing. As always, the faster implementation is the one that does less stuff and waits less time to do it. ### Immediately emit `end` for empty streams (when not paused) If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately. If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event. ### Emit `end` When Asked One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted. Ie, if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. 
(You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.)

To prevent calling handlers multiple times when they would not expect multiple ends to occur, all listeners are removed from the `'end'` event whenever it is emitted.

### Impact of "immediate flow" on Tee-streams

A "tee stream" is a stream piping to multiple destinations:

```js
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
tee.write('foo') // goes to both destinations
```

Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer.

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone
src.pipe(dest2) // gets nothing!
```

The solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead.

```js
// Safe example: tee to both places
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
src.pipe(tee) // tee gets 'foo', pipes to both locations
```

The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second:

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.on('data', handler1) // receives 'foo' right away
src.on('data', handler2) // nothing to see here!
```

A dedicated tee-stream can be used in this case as well:

```js
// Safe example: tee to both data handlers
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.on('data', handler1)
tee.on('data', handler2)
src.pipe(tee)
```

## USAGE

It's a stream! Use it like a stream and it'll most likely do what you want.

```js
const Minipass = require('minipass')
const mp = new Minipass(options) // optional: { encoding, objectMode }
mp.write('foo')
mp.pipe(someOtherStream)
mp.end('bar')
```

### OPTIONS

* `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`.
* `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value.

### API

Implements the user-facing portions of Node.js's `Readable` and `Writable` streams.

### Methods

* `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or `true` if it's still in "flowing" mode.
* `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed.
* `setEncoding(encoding)` - Set the encoding for data coming out of the stream. This can only be done once.
* `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed.
* `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted.
* `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations.
* `on(ev, fn)`, `emit(ev, fn)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".)
* `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`.
* `collect()` - Returns a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data.
* `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data.
* `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying.
* `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered.

### Properties

* `bufferLength` Read-only. Total number of bytes buffered, or in the case of objectMode, the total number of objects.
* `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.)
* `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted.
* `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediately re-emit it if it has already been emitted.
* `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` is called.
* `readable` Whether the stream is readable. Default `true`.
* `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.)
* `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.)
* `destroyed` A getter that indicates whether the stream was destroyed.
* `paused` True if the stream has been explicitly paused, otherwise false.
* `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`.

### Events

* `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream.
* `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted.
* `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`.
* `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`.
* `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise.
* `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream.
* `readable` Emitted when data is buffered and ready to be read by a consumer.
* `resume` Emitted when the stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.)

### Static Methods

* `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and `false` otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.)

## EXAMPLES

Here are some examples of things you can do with Minipass streams.

### simple "are you done yet" promise

```js
mp.promise().then(() => {
  // stream is finished
}, er => {
  // stream emitted an error
})
```

### collecting

```js
mp.collect().then(all => {
  // all is an array of all the data emitted
  // encoding is supported in this case, so
  // the result will be a collection of strings if
  // an encoding is specified, or buffers/objects if not.
  //
  // In an async function, you may do
  // const data = await stream.collect()
})
```

### collecting into a single blob

This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way:

```js
mp.concat().then(onebigchunk => {
  // onebigchunk is a string if the stream
  // had an encoding set, or a buffer otherwise.
})
```

### iteration

You can iterate over streams synchronously or asynchronously in platforms that support it.

Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration.

To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option.

```js
const mp = new Minipass({ objectMode: true })
mp.write('a')
mp.write('b')
for (let letter of mp) {
  console.log(letter) // a, b
}
mp.write('c')
mp.write('d')
for (let letter of mp) {
  console.log(letter) // c, d
}
mp.write('e')
mp.end()
for (let letter of mp) {
  console.log(letter) // e
}
for (let letter of mp) {
  console.log(letter) // nothing
}
```

Asynchronous iteration will continue until the end event is reached, consuming all of the data.
```js
const mp = new Minipass({ encoding: 'utf8' })

// some source of some data
let i = 5
const inter = setInterval(() => {
  if (i-- > 0)
    mp.write(Buffer.from('foo\n', 'utf8'))
  else {
    mp.end()
    clearInterval(inter)
  }
}, 100)

// consume the data with asynchronous iteration
async function consume () {
  for await (let chunk of mp) {
    console.log(chunk)
  }
  return 'ok'
}

consume().then(res => console.log(res))
// logs `foo\n` 5 times, and then `ok`
```

### subclass that `console.log()`s everything written into it

```js
class Logger extends Minipass {
  write (chunk, encoding, callback) {
    console.log('WRITE', chunk, encoding)
    return super.write(chunk, encoding, callback)
  }
  end (chunk, encoding, callback) {
    console.log('END', chunk, encoding)
    return super.end(chunk, encoding, callback)
  }
}

someSource.pipe(new Logger()).pipe(someDest)
```

### same thing, but using an inline anonymous class

```js
// js classes are fun
someSource
  .pipe(new (class extends Minipass {
    emit (ev, ...data) {
      // let's also log events, because debugging some weird thing
      console.log('EMIT', ev)
      return super.emit(ev, ...data)
    }
    write (chunk, encoding, callback) {
      console.log('WRITE', chunk, encoding)
      return super.write(chunk, encoding, callback)
    }
    end (chunk, encoding, callback) {
      console.log('END', chunk, encoding)
      return super.end(chunk, encoding, callback)
    }
  }))
  .pipe(someDest)
```

### subclass that defers 'end' for some reason

```js
class SlowEnd extends Minipass {
  emit (ev, ...args) {
    if (ev === 'end') {
      console.log('going to end, hold on a sec')
      setTimeout(() => {
        console.log('ok, ready to end now')
        super.emit('end', ...args)
      }, 100)
    } else {
      return super.emit(ev, ...args)
    }
  }
}
```

### transform that creates newline-delimited JSON

```js
class NDJSONEncode extends Minipass {
  write (obj, cb) {
    try {
      // JSON.stringify can throw, emit an error on that
      return super.write(JSON.stringify(obj) + '\n', 'utf8', cb)
    } catch (er) {
      this.emit('error', er)
    }
  }
  end (obj, cb) {
    if (typeof obj === 'function') {
      cb = obj
      obj = undefined
    }
    if (obj !== undefined) {
      this.write(obj)
    }
    return super.end(cb)
  }
}
```

### transform that parses newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor (options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write (chunk, encoding, cb) {
    if (typeof chunk === 'string' &&
        typeof encoding === 'string' &&
        encoding !== 'utf8') {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      try {
        // JSON.parse can throw, emit an error on that
        super.write(JSON.parse(jsonData[i]))
      } catch (er) {
        this.emit('error', er)
        continue
      }
    }
    if (cb) cb()
  }
}
```

[![NPM registry](https://img.shields.io/npm/v/as-bignum.svg?style=for-the-badge)](https://www.npmjs.com/package/as-bignum)[![Build Status](https://img.shields.io/travis/com/MaxGraey/as-bignum/master?style=for-the-badge)](https://travis-ci.com/MaxGraey/as-bignum)[![NPM license](https://img.shields.io/badge/license-Apache%202.0-ba68c8.svg?style=for-the-badge)](LICENSE.md)

## WebAssembly fixed length big numbers written in [AssemblyScript](https://github.com/AssemblyScript/assemblyscript)

### Status: Work in progress

Provides wide numeric types such as `u128`, `u256`, `i128`, `i256`, and fixed-point types, along with their arithmetic operations.
The `safe` namespace contains equivalents with overflow/underflow traps. All of these types are useful for economic and cryptographic use cases and provide deterministic behavior.

### Install

> yarn add as-bignum

or

> npm i as-bignum

### Usage via AssemblyScript

```ts
import { u128 } from "as-bignum";

declare function logF64(value: f64): void;
declare function logU128(hi: u64, lo: u64): void;

var a = u128.One;
var b = u128.from(-32); // same as u128.from<i32>(-32)
var c = new u128(0x1, -0xF);
var d = u128.from(0x0123456789ABCDEF); // same as u128.from<i64>(0x0123456789ABCDEF)
var e = u128.from('0x0123456789ABCDEF01234567');
var f = u128.fromString('11100010101100101', 2); // same as u128.from('0b11100010101100101')

var r = d / c + (b << 5) + e;
logF64(r.as<f64>());
logU128(r.hi, r.lo);
```

### Usage via JavaScript/Typescript

```ts
TODO
```

### List of types

- [x] [`u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u128.ts) unsigned type (tested)
- [ ] [`u256`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u256.ts) unsigned type (very basic)
- [ ] `i128` signed type
- [ ] `i256` signed type

---

- [x] [`safe.u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/safe/u128.ts) unsigned type (tested)
- [ ] `safe.u256` unsigned type
- [ ] `safe.i128` signed type
- [ ] `safe.i256` signed type

---

- [ ] [`fp128<Q>`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/fixed/fp128.ts) generic fixed point signed type٭ (very basic for now)
- [ ] `fp256<Q>` generic fixed point signed type٭

---

- [ ] `safe.fp128<Q>` generic fixed point signed type٭
- [ ] `safe.fp256<Q>` generic fixed point signed type٭

٭ _typename_ `Q` _is a type representing count of fractional bits_

# Visitor utilities for AssemblyScript Compiler transformers

## Example

### List Fields

The transformer:

```ts
import {
  ClassDeclaration,
  FieldDeclaration,
  MethodDeclaration,
} from "../../as";
import { ClassDecorator, registerDecorator } from "../decorator";
import { toString } from "../utils";

class ListMembers extends ClassDecorator {
  visitFieldDeclaration(node: FieldDeclaration): void {
    if (!node.name) console.log(toString(node) + "\n");
    const name = toString(node.name);
    const _type = toString(node.type!);
    this.stdout.write(name + ": " + _type + "\n");
  }

  visitMethodDeclaration(node: MethodDeclaration): void {
    const name = toString(node.name);
    if (name == "constructor") {
      return;
    }
    const sig = toString(node.signature);
    this.stdout.write(name + ": " + sig + "\n");
  }

  visitClassDeclaration(node: ClassDeclaration): void {
    this.visit(node.members);
  }

  get name(): string {
    return "list";
  }
}

export = registerDecorator(new ListMembers());
```

assembly/foo.ts:

```ts
@list
class Foo {
  a: u8;
  b: bool;
  i: i32;
}
```

And then compile with `--transform` flag:

```
asc assembly/foo.ts --transform ./dist/examples/list --noEmit
```

Which prints the following to the console:

```
a: u8
b: bool
i: i32
```

# Web IDL Type Conversions on JavaScript Values

This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [Web IDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types).
The goal is that you should be able to write code like ```js "use strict"; const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a Web IDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different Web IDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the Web IDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the Web IDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). Each method also accepts a second, optional, parameter for miscellaneous options. For conversion methods that throw errors, a string option `{ context }` may be provided to provide more information in the error message. (For example, `conversions["float"](NaN, { context: "Argument 1 of Interface's operation" })` will throw an error with message `"Argument 1 of Interface's operation is not a finite floating-point value."`) Specific conversions may also accept other options, the details of which can be found below. ## Conversions implemented Conversions for all of the basic types from the Web IDL specification are implemented: - [`any`](https://heycam.github.io/webidl/#es-any) - [`void`](https://heycam.github.io/webidl/#es-void) - [`boolean`](https://heycam.github.io/webidl/#es-boolean) - [Integer types](https://heycam.github.io/webidl/#es-integer-types), which can additionally be provided the boolean options `{ clamp, enforceRange }` as a second parameter - [`float`](https://heycam.github.io/webidl/#es-float), [`unrestricted float`](https://heycam.github.io/webidl/#es-unrestricted-float) - [`double`](https://heycam.github.io/webidl/#es-double), [`unrestricted double`](https://heycam.github.io/webidl/#es-unrestricted-double) - [`DOMString`](https://heycam.github.io/webidl/#es-DOMString), which can additionally be provided the boolean option `{ treatNullAsEmptyString }` as a second parameter - [`ByteString`](https://heycam.github.io/webidl/#es-ByteString), [`USVString`](https://heycam.github.io/webidl/#es-USVString) - [`object`](https://heycam.github.io/webidl/#es-object) - [`Error`](https://heycam.github.io/webidl/#es-Error) - [Buffer source types](https://heycam.github.io/webidl/#es-buffer-source-types) Additionally, for convenience, the following derived type definitions are implemented: - [`ArrayBufferView`](https://heycam.github.io/webidl/#ArrayBufferView) - [`BufferSource`](https://heycam.github.io/webidl/#BufferSource) - [`DOMTimeStamp`](https://heycam.github.io/webidl/#DOMTimeStamp) - [`Function`](https://heycam.github.io/webidl/#Function) - [`VoidFunction`](https://heycam.github.io/webidl/#VoidFunction) (although it will not censor the return type) Derived types, such as nullable types, promise types, sequences, records, etc. are not handled by this library. You may wish to investigate the [webidl2js](https://github.com/jsdom/webidl2js) project. ### A note on the `long long` types The `long long` and `unsigned long long` Web IDL types can hold values that cannot be stored in JavaScript numbers, so the conversion is imperfect. 
For example, converting the JavaScript number `18446744073709552000` to a Web IDL `long long` is supposed to produce the Web IDL value `-18446744073709551232`. Since we are representing our Web IDL values in JavaScript, we can't represent `-18446744073709551232`, so instead the best we can do is `-18446744073709552000` as the output.

This library actually doesn't even get that far. Producing those results would require doing accurate modular arithmetic on 64-bit intermediate values, but JavaScript does not make this easy. We could pull in a big-integer library as a dependency, but in lieu of that, we for now have decided to just produce inaccurate results if you pass in numbers that are not strictly between `Number.MIN_SAFE_INTEGER` and `Number.MAX_SAFE_INTEGER`.

## Background

What's actually going on here, conceptually, is pretty weird. Let's try to explain.

Web IDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on Web IDL values, i.e. instances of Web IDL types. For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a Web IDL value of [Web IDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules.

Separately from its type system, Web IDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given Web IDL operation, how does that get converted into a Web IDL value? For example, a JavaScript `true` passed in the position of a Web IDL `boolean` argument becomes a Web IDL `true`. But, a JavaScript `true` passed in the position of a [Web IDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a Web IDL `1`. And so on.

Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the Web IDL algorithms, they don't actually use Web IDL values, since those aren't "real" outside of specs. Instead, implementations apply the Web IDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`.

The upside of all this is that implementations can abstract all the conversion logic away, letting Web IDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is the payoff of Web IDL, in a nutshell.

And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given Web IDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ Web IDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ Web IDL values ↦ JavaScript values.
And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a Web IDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't use this Seriously, why would you ever use this? You really shouldn't. Web IDL is … strange, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from Web IDL. In general, your JavaScript should not be trying to become more like Web IDL; if anything, we should fix Web IDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in Web IDL. Its main consumer is the [jsdom](https://github.com/tmpvar/jsdom) project. # assemblyscript-json ![npm version](https://img.shields.io/npm/v/assemblyscript-json) ![npm downloads per month](https://img.shields.io/npm/dm/assemblyscript-json) JSON encoder / decoder for AssemblyScript. Special thanks to https://github.com/MaxGraey/bignum.wasm for basic unit testing infra for AssemblyScript. ## Installation `assemblyscript-json` is available as a [npm package](https://www.npmjs.com/package/assemblyscript-json). You can install `assemblyscript-json` in your AssemblyScript project by running: `npm install --save assemblyscript-json` ## Usage ### Parsing JSON ```typescript import { JSON } from "assemblyscript-json"; // Parse an object using the JSON object let jsonObj: JSON.Obj = <JSON.Obj>(JSON.parse('{"hello": "world", "value": 24}')); // We can then use the .getX functions to read from the object if you know it's type // This will return the appropriate JSON.X value if the key exists, or null if the key does not exist let worldOrNull: JSON.Str | null = jsonObj.getString("hello"); // This will return a JSON.Str or null if (worldOrNull != null) { // use .valueOf() to turn the high level JSON.Str type into a string let world: string = worldOrNull.valueOf(); } let numOrNull: JSON.Num | null = jsonObj.getNum("value"); if (numOrNull != null) { // use .valueOf() to turn the high level JSON.Num type into a f64 let value: f64 = numOrNull.valueOf(); } // If you don't know the value type, get the parent JSON.Value let valueOrNull: JSON.Value | null = jsonObj.getValue("hello"); if (valueOrNull != null) { let value = <JSON.Value>valueOrNull; // Next we could figure out what type we are if(value.isString) { // value.isString would be true, so we can cast to a string let innerString = (<JSON.Str>value).valueOf(); let jsonString = (<JSON.Str>value).stringify(); // Do something with string value } } ``` ### Encoding JSON ```typescript import { JSONEncoder } from "assemblyscript-json"; // Create encoder let encoder = new JSONEncoder(); // Construct necessary object encoder.pushObject("obj"); encoder.setInteger("int", 10); encoder.setString("str", ""); encoder.popObject(); // Get serialized data let json: Uint8Array = encoder.serialize(); // Or get serialized data as string let jsonString: string = encoder.stringify(); assert(jsonString, '"obj": {"int": 10, "str": ""}'); // True! ``` ### Custom JSON Deserializers ```typescript import { JSONDecoder, JSONHandler } from "assemblyscript-json"; // Events need to be received by custom object extending JSONHandler. // NOTE: All methods are optional to implement. 
class MyJSONEventsHandler extends JSONHandler {
  setString(name: string, value: string): void {
    // Handle field
  }

  setBoolean(name: string, value: bool): void {
    // Handle field
  }

  setNull(name: string): void {
    // Handle field
  }

  setInteger(name: string, value: i64): void {
    // Handle field
  }

  setFloat(name: string, value: f64): void {
    // Handle field
  }

  pushArray(name: string): bool {
    // Handle array start
    // true means that nested object needs to be traversed, false otherwise
    // Note that returning false means JSONDecoder.startIndex need to be updated by handler
    return true;
  }

  popArray(): void {
    // Handle array end
  }

  pushObject(name: string): bool {
    // Handle object start
    // true means that nested object needs to be traversed, false otherwise
    // Note that returning false means JSONDecoder.startIndex need to be updated by handler
    return true;
  }

  popObject(): void {
    // Handle object end
  }
}

// Create decoder
let decoder = new JSONDecoder<MyJSONEventsHandler>(new MyJSONEventsHandler());

// Create a byte buffer of our JSON. NOTE: Deserializers work on UTF8 string buffers.
let jsonString = '{"hello": "world"}';
let jsonBuffer = Uint8Array.wrap(String.UTF8.encode(jsonString));

// Parse JSON
decoder.deserialize(jsonBuffer); // This will send events to MyJSONEventsHandler
```

Feel free to look through the [tests](https://github.com/nearprotocol/assemblyscript-json/tree/master/assembly/__tests__) for more usage examples.

## Reference Documentation

Reference API Documentation can be found in the [docs directory](./docs).

## License

[MIT](./LICENSE)

[![npm version](https://img.shields.io/npm/v/espree.svg)](https://www.npmjs.com/package/espree) [![Build Status](https://travis-ci.org/eslint/espree.svg?branch=master)](https://travis-ci.org/eslint/espree) [![npm downloads](https://img.shields.io/npm/dm/espree.svg)](https://www.npmjs.com/package/espree) [![Bountysource](https://www.bountysource.com/badge/tracker?tracker_id=9348450)](https://www.bountysource.com/trackers/9348450-eslint?utm_source=9348450&utm_medium=shield&utm_campaign=TRACKER_BADGE)

# Espree

Espree started out as a fork of [Esprima](http://esprima.org) v1.2.2, the last stable published release of Esprima before work on ECMAScript 6 began. Espree is now built on top of [Acorn](https://github.com/ternjs/acorn), which has a modular architecture that allows extension of core functionality. The goal of Espree is to produce output that is similar to Esprima with a similar API so that it can be used in place of Esprima.

## Usage

Install:

```
npm i espree
```

And in your Node.js code:

```javascript
const espree = require("espree");

const ast = espree.parse(code);
```

## API

### `parse()`

`parse` parses the given code and returns an abstract syntax tree (AST). It takes two parameters.

- `code` [string]() - the code which needs to be parsed.
- `options (Optional)` [Object]() - read more about this [here](#options).

```javascript
const espree = require("espree");

const ast = espree.parse(code, options);
```

**Example:**

```js
const ast = espree.parse('let foo = "bar"', { ecmaVersion: 6 });
console.log(ast);
```

<details><summary>Output</summary>
<p>

```
Node {
  type: 'Program',
  start: 0,
  end: 15,
  body: [
    Node {
      type: 'VariableDeclaration',
      start: 0,
      end: 15,
      declarations: [Array],
      kind: 'let'
    }
  ],
  sourceType: 'script'
}
```

</p>
</details>

### `tokenize()`

`tokenize` returns the tokens of a given code. It takes two parameters.

- `code` [string]() - the code which needs to be parsed.
- `options (Optional)` [Object]() - read more about this [here](#options).
Even if `options` is empty or undefined or `options.tokens` is `false`, it assigns it to `true` in order to get the `tokens` array **Example :** ```js const tokens = espree.tokenize('let foo = "bar"', { ecmaVersion: 6 }); console.log(tokens); ``` <details><summary>Output</summary> <p> ``` Token { type: 'Keyword', value: 'let', start: 0, end: 3 }, Token { type: 'Identifier', value: 'foo', start: 4, end: 7 }, Token { type: 'Punctuator', value: '=', start: 8, end: 9 }, Token { type: 'String', value: '"bar"', start: 10, end: 15 } ``` </p> </details> ### `version` Returns the current `espree` version ### `VisitorKeys` Returns all visitor keys for traversing the AST from [eslint-visitor-keys](https://github.com/eslint/eslint-visitor-keys) ### `latestEcmaVersion` Returns the latest ECMAScript supported by `espree` ### `supportedEcmaVersions` Returns an array of all supported ECMAScript versions ## Options ```js const options = { // attach range information to each node range: false, // attach line/column location information to each node loc: false, // create a top-level comments array containing all comments comment: false, // create a top-level tokens array containing all tokens tokens: false, // Set to 3, 5 (default), 6, 7, 8, 9, 10, 11, or 12 to specify the version of ECMAScript syntax you want to use. // You can also set to 2015 (same as 6), 2016 (same as 7), 2017 (same as 8), 2018 (same as 9), 2019 (same as 10), 2020 (same as 11), or 2021 (same as 12) to use the year-based naming. ecmaVersion: 5, // specify which type of script you're parsing ("script" or "module") sourceType: "script", // specify additional language features ecmaFeatures: { // enable JSX parsing jsx: false, // enable return in global scope globalReturn: false, // enable implied strict mode (if ecmaVersion >= 5) impliedStrict: false } } ``` ## Esprima Compatibility Going Forward The primary goal is to produce the exact same AST structure and tokens as Esprima, and that takes precedence over anything else. (The AST structure being the [ESTree](https://github.com/estree/estree) API with JSX extensions.) Separate from that, Espree may deviate from what Esprima outputs in terms of where and how comments are attached, as well as what additional information is available on AST nodes. That is to say, Espree may add more things to the AST nodes than Esprima does but the overall AST structure produced will be the same. Espree may also deviate from Esprima in the interface it exposes. ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/espree/issues). Espree is licensed under a permissive BSD 2-clause license. ## Security Policy We work hard to ensure that Espree is safe for everyone and that security issues are addressed quickly and responsibly. Read the full [security policy](https://github.com/eslint/.github/blob/master/SECURITY.md). ## Build Commands * `npm test` - run all linting and tests * `npm run lint` - run all linting * `npm run browserify` - creates a version of Espree that is usable in a browser ## Differences from Espree 2.x * The `tokenize()` method does not use `ecmaFeatures`. Any string will be tokenized completely based on ECMAScript 6 semantics. * Trailing whitespace no longer is counted as part of a node. 
* `let` and `const` declarations are no longer parsed by default. You must opt-in by using an `ecmaVersion` newer than `5` or setting `sourceType` to `module`. * The `esparse` and `esvalidate` binary scripts have been removed. * There is no `tolerant` option. We will investigate adding this back in the future. ## Known Incompatibilities In an effort to help those wanting to transition from other parsers to Espree, the following is a list of noteworthy incompatibilities with other parsers. These are known differences that we do not intend to change. ### Esprima 1.2.2 * Esprima counts trailing whitespace as part of each AST node while Espree does not. In Espree, the end of a node is where the last token occurs. * Espree does not parse `let` and `const` declarations by default. * Error messages returned for parsing errors are different. * There are two addition properties on every node and token: `start` and `end`. These represent the same data as `range` and are used internally by Acorn. ### Esprima 2.x * Esprima 2.x uses a different comment attachment algorithm that results in some comments being added in different places than Espree. The algorithm Espree uses is the same one used in Esprima 1.2.2. ## Frequently Asked Questions ### Why another parser [ESLint](http://eslint.org) had been relying on Esprima as its parser from the beginning. While that was fine when the JavaScript language was evolving slowly, the pace of development increased dramatically and Esprima had fallen behind. ESLint, like many other tools reliant on Esprima, has been stuck in using new JavaScript language features until Esprima updates, and that caused our users frustration. We decided the only way for us to move forward was to create our own parser, bringing us inline with JSHint and JSLint, and allowing us to keep implementing new features as we need them. We chose to fork Esprima instead of starting from scratch in order to move as quickly as possible with a compatible API. With Espree 2.0.0, we are no longer a fork of Esprima but rather a translation layer between Acorn and Esprima syntax. This allows us to put work back into a community-supported parser (Acorn) that is continuing to grow and evolve while maintaining an Esprima-compatible parser for those utilities still built on Esprima. ### Have you tried working with Esprima? Yes. Since the start of ESLint, we've regularly filed bugs and feature requests with Esprima and will continue to do so. However, there are some different philosophies around how the projects work that need to be worked through. The initial goal was to have Espree track Esprima and eventually merge the two back together, but we ultimately decided that building on top of Acorn was a better choice due to Acorn's plugin support. ### Why don't you just use Acorn? Acorn is a great JavaScript parser that produces an AST that is compatible with Esprima. Unfortunately, ESLint relies on more than just the AST to do its job. It relies on Esprima's tokens and comment attachment features to get a complete picture of the source code. We investigated switching to Acorn, but the inconsistencies between Esprima and Acorn created too much work for a project like ESLint. We are building on top of Acorn, however, so that we can contribute back and help make Acorn even better. ### What ECMAScript features do you support? Espree supports all ECMAScript 2020 features and partially supports ECMAScript 2021 features. 
Because ECMAScript 2021 is still under development, we are implementing features as they are finalized. Currently, Espree supports: * [Logical Assignment Operators](https://github.com/tc39/proposal-logical-assignment) * [Numeric Separators](https://github.com/tc39/proposal-numeric-separator) See [finished-proposals.md](https://github.com/tc39/proposals/blob/master/finished-proposals.md) to know what features are finalized. ### How do you determine which experimental features to support? In general, we do not support experimental JavaScript features. We may make exceptions from time to time depending on the maturity of the features. # which-module > Find the module object for something that was require()d [![Build Status](https://travis-ci.org/nexdrew/which-module.svg?branch=master)](https://travis-ci.org/nexdrew/which-module) [![Coverage Status](https://coveralls.io/repos/github/nexdrew/which-module/badge.svg?branch=master)](https://coveralls.io/github/nexdrew/which-module?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) Find the `module` object in `require.cache` for something that was `require()`d or `import`ed - essentially a reverse `require()` lookup. Useful for libs that want to e.g. lookup a filename for a module or submodule that it did not `require()` itself. ## Install and Usage ``` npm install --save which-module ``` ```js const whichModule = require('which-module') console.log(whichModule(require('something'))) // Module { // id: '/path/to/project/node_modules/something/index.js', // exports: [Function], // parent: ..., // filename: '/path/to/project/node_modules/something/index.js', // loaded: true, // children: [], // paths: [ '/path/to/project/node_modules/something/node_modules', // '/path/to/project/node_modules', // '/path/to/node_modules', // '/path/node_modules', // '/node_modules' ] } ``` ## API ### `whichModule(exported)` Return the [`module` object](https://nodejs.org/api/modules.html#modules_the_module_object), if any, that represents the given argument in the `require.cache`. `exported` can be anything that was previously `require()`d or `import`ed as a module, submodule, or dependency - which means `exported` is identical to the `module.exports` returned by this method. If `exported` did not come from the `exports` of a `module` in `require.cache`, then this method returns `null`. ## License ISC © Contributors # ansi-colors [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/ansi-colors.svg?style=flat)](https://www.npmjs.com/package/ansi-colors) [![NPM monthly downloads](https://img.shields.io/npm/dm/ansi-colors.svg?style=flat)](https://npmjs.org/package/ansi-colors) [![NPM total downloads](https://img.shields.io/npm/dt/ansi-colors.svg?style=flat)](https://npmjs.org/package/ansi-colors) [![Linux Build Status](https://img.shields.io/travis/doowb/ansi-colors.svg?style=flat&label=Travis)](https://travis-ci.org/doowb/ansi-colors) > Easily add ANSI colors to your text and symbols in the terminal. A faster drop-in replacement for chalk, kleur and turbocolor (without the dependencies and rendering bugs). Please consider following this project's author, [Brian Woodward](https://github.com/doowb), and consider starring the project to show your :heart: and support. 
## Install

Install with [npm](https://www.npmjs.com/):

```sh
$ npm install --save ansi-colors
```

![image](https://user-images.githubusercontent.com/383994/39635445-8a98a3a6-4f8b-11e8-89c1-068c45d4fff8.png)

## Why use this?

ansi-colors is _the fastest Node.js library for terminal styling_. A more performant drop-in replacement for chalk, with no dependencies.

* _Blazing fast_ - Fastest terminal styling library in node.js, 10-20x faster than chalk!
* _Drop-in replacement_ for [chalk](https://github.com/chalk/chalk).
* _No dependencies_ (Chalk has 7 dependencies in its tree!)
* _Safe_ - Does not modify the `String.prototype` like [colors](https://github.com/Marak/colors.js).
* Supports [nested colors](#nested-colors), **and does not have the [nested styling bug](#nested-styling-bug) that is present in [colorette](https://github.com/jorgebucaran/colorette), [chalk](https://github.com/chalk/chalk), and [kleur](https://github.com/lukeed/kleur)**.
* Supports [chained colors](#chained-colors).
* [Toggle color support](#toggle-color-support) on or off.

## Usage

```js
const c = require('ansi-colors');

console.log(c.red('This is a red string!'));
console.log(c.green('This is a green string!'));
console.log(c.cyan('This is a cyan string!'));
console.log(c.yellow('This is a yellow string!'));
```

![image](https://user-images.githubusercontent.com/383994/39653848-a38e67da-4fc0-11e8-89ae-98c65ebe9dcf.png)

## Chained colors

```js
console.log(c.bold.red('this is a bold red message'));
console.log(c.bold.yellow.italic('this is a bold yellow italicized message'));
console.log(c.green.bold.underline('this is a bold green underlined message'));
```

![image](https://user-images.githubusercontent.com/383994/39635780-7617246a-4f8c-11e8-89e9-05216cc54e38.png)

## Nested colors

```js
console.log(c.yellow(`foo ${c.red.bold('red')} bar ${c.cyan('cyan')} baz`));
```

![image](https://user-images.githubusercontent.com/383994/39635817-8ed93d44-4f8c-11e8-8afd-8c3ea35f5fbe.png)

### Nested styling bug

`ansi-colors` does not have the nested styling bug found in [colorette](https://github.com/jorgebucaran/colorette), [chalk](https://github.com/chalk/chalk), and [kleur](https://github.com/lukeed/kleur).

```js
const { bold, red } = require('ansi-colors');
console.log(bold(`foo ${red.dim('bar')} baz`));

const colorette = require('colorette');
console.log(colorette.bold(`foo ${colorette.red(colorette.dim('bar'))} baz`));

const kleur = require('kleur');
console.log(kleur.bold(`foo ${kleur.red.dim('bar')} baz`));

const chalk = require('chalk');
console.log(chalk.bold(`foo ${chalk.red.dim('bar')} baz`));
```

**Results in the following** (sans icons and labels)

![image](https://user-images.githubusercontent.com/383994/47280326-d2ee0580-d5a3-11e8-9611-ea6010f0a253.png)

## Toggle color support

Easily enable/disable colors.

```js
const c = require('ansi-colors');

// disable colors manually
c.enabled = false;

// or use a library to automatically detect support
c.enabled = require('color-support').hasBasic;

console.log(c.red('I will only be colored red if the terminal supports colors'));
```

## Strip ANSI codes

Use the `.unstyle` method to strip ANSI codes from a string.

```js
console.log(c.unstyle(c.blue.bold('foo bar baz')));
//=> 'foo bar baz'
```

## Available styles

**Note** that bright and bright-background colors are not always supported.

| Colors  | Background Colors | Bright Colors | Bright Background Colors |
| ------- | ----------------- | ------------- | ------------------------ |
| black   | bgBlack           | blackBright   | bgBlackBright            |
| red     | bgRed             | redBright     | bgRedBright              |
| green   | bgGreen           | greenBright   | bgGreenBright            |
| yellow  | bgYellow          | yellowBright  | bgYellowBright           |
| blue    | bgBlue            | blueBright    | bgBlueBright             |
| magenta | bgMagenta         | magentaBright | bgMagentaBright          |
| cyan    | bgCyan            | cyanBright    | bgCyanBright             |
| white   | bgWhite           | whiteBright   | bgWhiteBright            |
| gray    |                   |               |                          |
| grey    |                   |               |                          |

_(`gray` is the U.S. spelling, `grey` is more commonly used in Canada and the U.K.)_

### Style modifiers

* dim
* **bold**
* hidden
* _italic_
* underline
* inverse
* ~~strikethrough~~
* reset

## Aliases

Create custom aliases for styles.

```js
const colors = require('ansi-colors');

colors.alias('primary', colors.yellow);
colors.alias('secondary', colors.bold);

console.log(colors.primary.secondary('Foo'));
```

## Themes

A theme is an object of custom aliases.

```js
const colors = require('ansi-colors');

colors.theme({
  danger: colors.red,
  dark: colors.dim.gray,
  disabled: colors.gray,
  em: colors.italic,
  heading: colors.bold.underline,
  info: colors.cyan,
  muted: colors.dim,
  primary: colors.blue,
  strong: colors.bold,
  success: colors.green,
  underline: colors.underline,
  warning: colors.yellow
});

// Now, we can use our custom styles alongside the built-in styles!
console.log(colors.danger.strong.em('Error!'));
console.log(colors.warning('Heads up!'));
console.log(colors.info('Did you know...'));
console.log(colors.success.bold('It worked!'));
```

## Performance

**Libraries tested**

* ansi-colors v3.0.4
* chalk v2.4.1

### Mac

> MacBook Pro, Intel Core i7, 2.3 GHz, 16 GB.

**Load time**

Time it takes to load the first time `require()` is called:

* ansi-colors - `1.915ms`
* chalk - `12.437ms`

**Benchmarks**

```
# All Colors
ansi-colors x 173,851 ops/sec ±0.42% (91 runs sampled)
chalk x 9,944 ops/sec ±2.53% (81 runs sampled)

# Chained colors
ansi-colors x 20,791 ops/sec ±0.60% (88 runs sampled)
chalk x 2,111 ops/sec ±2.34% (83 runs sampled)

# Nested colors
ansi-colors x 59,304 ops/sec ±0.98% (92 runs sampled)
chalk x 4,590 ops/sec ±2.08% (82 runs sampled)
```

### Windows

> Windows 10, Intel Core i7-7700k CPU @ 4.2 GHz, 32 GB

**Load time**

Time it takes to load the first time `require()` is called:

* ansi-colors - `1.494ms`
* chalk - `11.523ms`

**Benchmarks**

```
# All Colors
ansi-colors x 193,088 ops/sec ±0.51% (95 runs sampled)
chalk x 9,612 ops/sec ±3.31% (77 runs sampled)

# Chained colors
ansi-colors x 26,093 ops/sec ±1.13% (94 runs sampled)
chalk x 2,267 ops/sec ±2.88% (80 runs sampled)

# Nested colors
ansi-colors x 67,747 ops/sec ±0.49% (93 runs sampled)
chalk x 4,446 ops/sec ±3.01% (82 runs sampled)
```

## About

<details>
<summary><strong>Contributing</strong></summary>

Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new).

</details>

<details>
<summary><strong>Running Tests</strong></summary>

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

```sh
$ npm install && npm test
```

</details>

<details>
<summary><strong>Building docs</strong></summary>

_(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly.
Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [ansi-wrap](https://www.npmjs.com/package/ansi-wrap): Create ansi colors by passing the open and close codes. | [homepage](https://github.com/jonschlinkert/ansi-wrap "Create ansi colors by passing the open and close codes.") * [strip-color](https://www.npmjs.com/package/strip-color): Strip ANSI color codes from a string. No dependencies. | [homepage](https://github.com/jonschlinkert/strip-color "Strip ANSI color codes from a string. No dependencies.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 48 | [jonschlinkert](https://github.com/jonschlinkert) | | 42 | [doowb](https://github.com/doowb) | | 6 | [lukeed](https://github.com/lukeed) | | 2 | [Silic0nS0ldier](https://github.com/Silic0nS0ldier) | | 1 | [dwieeb](https://github.com/dwieeb) | | 1 | [jorgebucaran](https://github.com/jorgebucaran) | | 1 | [madhavarshney](https://github.com/madhavarshney) | | 1 | [chapterjason](https://github.com/chapterjason) | ### Author **Brian Woodward** * [GitHub Profile](https://github.com/doowb) * [Twitter Profile](https://twitter.com/doowb) * [LinkedIn Profile](https://linkedin.com/in/woodwardbrian) ### License Copyright © 2019, [Brian Woodward](https://github.com/doowb). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on July 01, 2019._ # Optionator <a name="optionator" /> Optionator is a JavaScript/Node.js option parsing and help generation library used by [eslint](http://eslint.org), [Grasp](http://graspjs.com), [LiveScript](http://livescript.net), [esmangle](https://github.com/estools/esmangle), [escodegen](https://github.com/estools/escodegen), and [many more](https://www.npmjs.com/browse/depended/optionator). For an online demo, check out the [Grasp online demo](http://www.graspjs.com/#demo). [About](#about) &middot; [Usage](#usage) &middot; [Settings Format](#settings-format) &middot; [Argument Format](#argument-format) ## Why? The problem with other option parsers, such as `yargs` or `minimist`, is they just accept all input, valid or not. With Optionator, if you mistype an option, it will give you an error (with a suggestion for what you meant). If you give the wrong type of argument for an option, it will give you an error rather than supplying the wrong input to your application. $ cmd --halp Invalid option '--halp' - perhaps you meant '--help'? $ cmd --count str Invalid value for option 'count' - expected type Int, received value: str. Other helpful features include reformatting the help text based on the size of the console, so that it fits even if the console is narrow, and accepting not just an array (eg. process.argv), but a string or object as well, making things like testing much easier. ## About Optionator uses [type-check](https://github.com/gkz/type-check) and [levn](https://github.com/gkz/levn) behind the scenes to cast and verify input according the specified types. MIT license. Version 0.9.1 npm install optionator For updates on Optionator, [follow me on twitter](https://twitter.com/gkzahariev). Optionator is a Node.js module, but can be used in the browser as well if packed with webpack/browserify. ## Usage `require('optionator');` returns a function. 
It has one property, `VERSION`, the current version of the library as a string. This function is called with an object specifying your options and other information, see the [settings format section](#settings-format). This in turn returns an object with four properties, `parse`, `parseArgv`, `generateHelp`, and `generateHelpForOption`, which are all functions.

```js
var optionator = require('optionator')({
    prepend: 'Usage: cmd [options]',
    append: 'Version 1.0.0',
    options: [{
        option: 'help',
        alias: 'h',
        type: 'Boolean',
        description: 'displays help'
    }, {
        option: 'count',
        alias: 'c',
        type: 'Int',
        description: 'number of things',
        example: 'cmd --count 2'
    }]
});

var options = optionator.parseArgv(process.argv);
if (options.help) {
    console.log(optionator.generateHelp());
}
...
```

### parse(input, parseOptions)

`parse` processes the `input` according to your settings, and returns an object with the results.

##### arguments

* input - `[String] | Object | String` - the input you wish to parse
* parseOptions - `{slice: Int}` - all options optional
  - `slice` specifies how much to slice away from the beginning if the input is an array or string - by default `0` for string, `2` for array (works with `process.argv`)

##### returns

`Object` - the parsed options, each key is a camelCase version of the option name (specified in dash-case), and each value is the processed value for that option. Positional values are in an array under the `_` key.

##### example

```js
parse(['node', 't.js', '--count', '2', 'positional']); // {count: 2, _: ['positional']}
parse('--count 2 positional'); // {count: 2, _: ['positional']}
parse({count: 2, _:['positional']}); // {count: 2, _: ['positional']}
```

### parseArgv(input)

`parseArgv` works exactly like `parse`, but only for array input and it slices off the first two elements.

##### arguments

* input - `[String]` - the input you wish to parse

##### returns

See "returns" section in "parse"

##### example

```js
parseArgv(process.argv);
```

### generateHelp(helpOptions)

`generateHelp` produces help text based on your settings.

##### arguments

* helpOptions - `{showHidden: Boolean, interpolate: Object}` - all options optional
  - `showHidden` specifies whether to show options with `hidden: true` specified, by default it is `false`
  - `interpolate` specify data to be interpolated in `prepend` and `append` text, `{{key}}` is the format - eg. `generateHelp({interpolate:{version: '0.4.2'}})`, will change this `append` text: `Version {{version}}` to `Version 0.4.2`

##### returns

`String` - the generated help text

##### example

```js
generateHelp();
/*
"Usage: cmd [options] positional

-h, --help       displays help
-c, --count Int  number of things

Version 1.0.0
"*/
```

### generateHelpForOption(optionName)

`generateHelpForOption` produces expanded help text for the option specified by `optionName`. If an `example` was specified for the option, it will be displayed, and if a `longDescription` was specified, it will display that instead of the `description`.

##### arguments

* optionName - `String` - the name of the option to display

##### returns

`String` - the generated help text for the option

##### example

```js
generateHelpForOption('count');
/*
"-c, --count Int
description: number of things
example: cmd --count 2
"*/
```

## Settings Format

When you `require('optionator')`, you get a function that takes in a settings object.
This object has the type: { prepend: String, append: String, options: [{heading: String} | { option: String, alias: [String] | String, type: String, enum: [String], default: String, restPositional: Boolean, required: Boolean, overrideRequired: Boolean, dependsOn: [String] | String, concatRepeatedArrays: Boolean | (Boolean, Object), mergeRepeatedObjects: Boolean, description: String, longDescription: String, example: [String] | String }], helpStyle: { aliasSeparator: String, typeSeparator: String, descriptionSeparator: String, initialIndent: Int, secondaryIndent: Int, maxPadFactor: Number }, mutuallyExclusive: [[String | [String]]], concatRepeatedArrays: Boolean | (Boolean, Object), // deprecated, set in defaults object mergeRepeatedObjects: Boolean, // deprecated, set in defaults object positionalAnywhere: Boolean, typeAliases: Object, defaults: Object } All of the properties are optional (the `Maybe` has been excluded for brevities sake), except for having either `heading: String` or `option: String` in each object in the `options` array. ### Top Level Properties * `prepend` is an optional string to be placed before the options in the help text * `append` is an optional string to be placed after the options in the help text * `options` is a required array specifying your options and headings, the options and headings will be displayed in the order specified * `helpStyle` is an optional object which enables you to change the default appearance of some aspects of the help text * `mutuallyExclusive` is an optional array of arrays of either strings or arrays of strings. The top level array is a list of rules, each rule is a list of elements - each element can be either a string (the name of an option), or a list of strings (a group of option names) - there will be an error if more than one element is present * `concatRepeatedArrays` see description under the "Option Properties" heading - use at the top level is deprecated, if you want to set this for all options, use the `defaults` property * `mergeRepeatedObjects` see description under the "Option Properties" heading - use at the top level is deprecated, if you want to set this for all options, use the `defaults` property * `positionalAnywhere` is an optional boolean (defaults to `true`) - when `true` it allows positional arguments anywhere, when `false`, all arguments after the first positional one are taken to be positional as well, even if they look like a flag. For example, with `positionalAnywhere: false`, the arguments `--flag --boom 12 --crack` would have two positional arguments: `12` and `--crack` * `typeAliases` is an optional object, it allows you to set aliases for types, eg. `{Path: 'String'}` would allow you to use the type `Path` as an alias for the type `String` * `defaults` is an optional object following the option properties format, which specifies default values for all options. A default will be overridden if manually set. 
For example, you can do `default: { type: "String" }` to set the default type of all options to `String`, and then override that default in an individual option by setting the `type` property #### Heading Properties * `heading` a required string, the name of the heading #### Option Properties * `option` the required name of the option - use dash-case, without the leading dashes * `alias` is an optional string or array of strings which specify any aliases for the option * `type` is a required string in the [type check](https://github.com/gkz/type-check) [format](https://github.com/gkz/type-check#type-format), this will be used to cast the inputted value and validate it * `enum` is an optional array of strings, each string will be parsed by [levn](https://github.com/gkz/levn) - the argument value must be one of the resulting values - each potential value must validate against the specified `type` * `default` is a optional string, which will be parsed by [levn](https://github.com/gkz/levn) and used as the default value if none is set - the value must validate against the specified `type` * `restPositional` is an optional boolean - if set to `true`, everything after the option will be taken to be a positional argument, even if it looks like a named argument * `required` is an optional boolean - if set to `true`, the option parsing will fail if the option is not defined * `overrideRequired` is a optional boolean - if set to `true` and the option is used, and there is another option which is required but not set, it will override the need for the required option and there will be no error - this is useful if you have required options and want to use `--help` or `--version` flags * `concatRepeatedArrays` is an optional boolean or tuple with boolean and options object (defaults to `false`) - when set to `true` and an option contains an array value and is repeated, the subsequent values for the flag will be appended rather than overwriting the original value - eg. option `g` of type `[String]`: `-g a -g b -g c,d` will result in `['a','b','c','d']` You can supply an options object by giving the following value: `[true, options]`. The one currently supported option is `oneValuePerFlag`, this only allows one array value per flag. This is useful if your potential values contain a comma. * `mergeRepeatedObjects` is an optional boolean (defaults to `false`) - when set to `true` and an option contains an object value and is repeated, the subsequent values for the flag will be merged rather than overwriting the original value - eg. 
option `g` of type `Object`: `-g a:1 -g b:2 -g c:3,d:4` will result in `{a: 1, b: 2, c: 3, d: 4}`
* `dependsOn` is an optional string or array of strings - if simply a string (the name of another option), it will make sure that that other option is set, if an array of strings, depending on whether `'and'` or `'or'` is first, it will either check whether all (`['and', 'option-a', 'option-b']`), or at least one (`['or', 'option-a', 'option-b']`) other options are set
* `description` is an optional string, which will be displayed next to the option in the help text
* `longDescription` is an optional string, it will be displayed instead of the `description` when `generateHelpForOption` is used
* `example` is an optional string or array of strings with example(s) for the option - these will be displayed when `generateHelpForOption` is used

#### Help Style Properties

* `aliasSeparator` is an optional string, separates multiple names from each other - default: ' ,'
* `typeSeparator` is an optional string, separates the type from the names - default: ' '
* `descriptionSeparator` is an optional string, separates the description from the padded name and type - default: ' '
* `initialIndent` is an optional int - the amount of indent for options - default: 2
* `secondaryIndent` is an optional int - the amount of indent if wrapped fully (in addition to the initial indent) - default: 4
* `maxPadFactor` is an optional number - affects the default level of padding for the names/type, it is multiplied by the average of the length of the names/type - default: 1.5

## Argument Format

At the highest level there are two types of arguments: named, and positional.

Named arguments of any length are prefixed with `--` (eg. `--go`), and those of one character may be prefixed with either `--` or `-` (eg. `-g`).

There are two types of named arguments: boolean flags (eg. `--problemo`, `-p`) which take no value and result in a `true` if they are present, the falsey `undefined` if they are not present, or `false` if present and explicitly prefixed with `no` (eg. `--no-problemo`). Named arguments with values (eg. `--tseries 800`, `-t 800`) are the other type. If the option has a type `Boolean` it will automatically be made into a boolean flag. Any other type results in a named argument that takes a value.

For more information about how to properly set types to get the value you want, take a look at the [type check](https://github.com/gkz/type-check) and [levn](https://github.com/gkz/levn) pages.

You can group single character arguments that use a single `-`, however all except the last must be boolean flags (which take no value). The last may be a boolean flag, or an argument which takes a value - eg. `-ba 2` is equivalent to `-b -a 2`.

Positional arguments are all those values which do not fall under the above - they can be anywhere, not just at the end. For example, in `cmd -b one -a 2 two` where `b` is a boolean flag, and `a` has the type `Number`, there are two positional arguments, `one` and `two`.

Everything after an `--` is positional, even if it looks like a named argument.

You may optionally use `=` to separate option names from values, for example: `--count=2`.

If you specify the option `NUM`, then any argument using a single `-` followed by a number will be valid and will set the value of `NUM`. Eg. `-2` will be parsed into `NUM: 2`.

If duplicate named arguments are present, the last one will be taken.
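To make those rules concrete, here is a minimal, illustrative sketch. The `verbose`, `count`, and `tag` options are invented for this example, and the commented results show the rough shape of the parsed output described above rather than verbatim program output:

```js
var optionator = require('optionator')({
    options: [
        { option: 'verbose', alias: 'v', type: 'Boolean', description: 'verbose output' },
        { option: 'count', alias: 'c', type: 'Int', description: 'number of things' },
        { option: 'tag', alias: 'g', type: '[String]', concatRepeatedArrays: true, description: 'tags' }
    ]
});

// boolean flags: present => true, `no-` prefix => false; the last duplicate wins
optionator.parse('--verbose --no-verbose');   // roughly {verbose: false, _: []}

// `=` may separate a name from its value, and grouped short flags are allowed
// (every flag in the group except the last must be a boolean flag)
optionator.parse('--count=2 -vc 3 file.txt'); // roughly {count: 3, verbose: true, _: ['file.txt']}

// a repeated array-typed option is concatenated when concatRepeatedArrays is set
optionator.parse('-g a -g b,c');              // roughly {tag: ['a', 'b', 'c'], _: []}

// everything after `--` is positional, even if it looks like a named argument
optionator.parse('-- --count');               // roughly {_: ['--count']}
```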
## Technical About `optionator` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It uses [levn](https://github.com/gkz/levn) to cast arguments to their specified type, and uses [type-check](https://github.com/gkz/type-check) to validate values. It also uses the [prelude.ls](http://preludels.com/) library. discontinuous-range =================== ``` DiscontinuousRange(1, 10).subtract(4, 6); // [ 1-3, 7-10 ] ``` [![Build Status](https://travis-ci.org/dtudury/discontinuous-range.png)](https://travis-ci.org/dtudury/discontinuous-range) this is a pretty simple module, but it exists to service another project so this'll be pretty lacking documentation. reading the test to see how this works may help. otherwise, here's an example that I think pretty much sums it up ###Example ``` var all_numbers = new DiscontinuousRange(1, 100); var bad_numbers = DiscontinuousRange(13).add(8).add(60,80); var good_numbers = all_numbers.clone().subtract(bad_numbers); console.log(good_numbers.toString()); //[ 1-7, 9-12, 14-59, 81-100 ] var random_good_number = good_numbers.index(Math.floor(Math.random() * good_numbers.length)); ``` # type-check [![Build Status](https://travis-ci.org/gkz/type-check.png?branch=master)](https://travis-ci.org/gkz/type-check) <a name="type-check" /> `type-check` is a library which allows you to check the types of JavaScript values at runtime with a Haskell like type syntax. It is great for checking external input, for testing, or even for adding a bit of safety to your internal code. It is a major component of [levn](https://github.com/gkz/levn). MIT license. Version 0.4.0. Check out the [demo](http://gkz.github.io/type-check/). For updates on `type-check`, [follow me on twitter](https://twitter.com/gkzahariev). npm install type-check ## Quick Examples ```js // Basic types: var typeCheck = require('type-check').typeCheck; typeCheck('Number', 1); // true typeCheck('Number', 'str'); // false typeCheck('Error', new Error); // true typeCheck('Undefined', undefined); // true // Comment typeCheck('count::Number', 1); // true // One type OR another type: typeCheck('Number | String', 2); // true typeCheck('Number | String', 'str'); // true // Wildcard, matches all types: typeCheck('*', 2) // true // Array, all elements of a single type: typeCheck('[Number]', [1, 2, 3]); // true typeCheck('[Number]', [1, 'str', 3]); // false // Tuples, or fixed length arrays with elements of different types: typeCheck('(String, Number)', ['str', 2]); // true typeCheck('(String, Number)', ['str']); // false typeCheck('(String, Number)', ['str', 2, 5]); // false // Object properties: typeCheck('{x: Number, y: Boolean}', {x: 2, y: false}); // true typeCheck('{x: Number, y: Boolean}', {x: 2}); // false typeCheck('{x: Number, y: Maybe Boolean}', {x: 2}); // true typeCheck('{x: Number, y: Boolean}', {x: 2, y: false, z: 3}); // false typeCheck('{x: Number, y: Boolean, ...}', {x: 2, y: false, z: 3}); // true // A particular type AND object properties: typeCheck('RegExp{source: String, ...}', /re/i); // true typeCheck('RegExp{source: String, ...}', {source: 're'}); // false // Custom types: var opt = {customTypes: {Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; }}}}; typeCheck('Even', 2, opt); // true // Nested: var type = '{a: (String, [Number], {y: Array, ...}), b: Error{message: String, ...}}' typeCheck(type, {a: ['hi', [1, 2, 3], {y: [1, 'ms']}], b: new Error('oh no')}); // true ``` Check out the [type syntax format](#syntax) and [guide](#guide). 
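As a small, hedged illustration of the kind of runtime guard these checks enable (the `createUser` function and its expected object shape are invented for this example and are not part of the library):

```js
var typeCheck = require('type-check').typeCheck;

// Hypothetical input-validation helper: reject malformed objects up front.
function createUser(user) {
  var shape = '{name: String, age: Maybe Number, tags: [String], ...}';
  if (!typeCheck(shape, user)) {
    throw new TypeError('createUser: expected ' + shape);
  }
  // safe to use `user` from here on
  return user;
}

createUser({name: 'Ada', tags: ['admin']});            // ok - `age` is Maybe, extra props allowed by `...`
createUser({name: 'Ada', age: '36', tags: ['admin']}); // throws TypeError - `age` is not a Number
```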
## Usage `require('type-check');` returns an object that exposes four properties. `VERSION` is the current version of the library as a string. `typeCheck`, `parseType`, and `parsedTypeCheck` are functions. ```js // typeCheck(type, input, options); typeCheck('Number', 2); // true // parseType(type); var parsedType = parseType('Number'); // object // parsedTypeCheck(parsedType, input, options); parsedTypeCheck(parsedType, 2); // true ``` ### typeCheck(type, input, options) `typeCheck` checks a JavaScript value `input` against `type` written in the [type format](#type-format) (taking into account the optional `options`) and returns whether the `input` matches the `type`. ##### arguments * type - `String` - the type, written in the [type format](#type-format), to check against * input - `*` - any JavaScript value, which is to be checked against the type * options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types) ##### returns `Boolean` - whether the input matches the type ##### example ```js typeCheck('Number', 2); // true ``` ### parseType(type) `parseType` parses string `type` written in the [type format](#type-format) into an object representing the parsed type. ##### arguments * type - `String` - the type, written in the [type format](#type-format), to parse ##### returns `Object` - an object in the parsed type format representing the parsed type ##### example ```js parseType('Number'); // [{type: 'Number'}] ``` ### parsedTypeCheck(parsedType, input, options) `parsedTypeCheck` checks a JavaScript value `input` against a parsed `type` in the parsed type format (taking into account the optional `options`) and returns whether the `input` matches the `type`. Use this in conjunction with `parseType` if you are going to use a type more than once. ##### arguments * type - `Object` - the type, in the parsed type format, to check against * input - `*` - any JavaScript value, which is to be checked against the type * options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types) ##### returns `Boolean` - whether the input matches the type ##### example ```js parsedTypeCheck([{type: 'Number'}], 2); // true var parsedType = parseType('String'); parsedTypeCheck(parsedType, 'str'); // true ``` <a name="type-format" /> ## Type Format ### Syntax White space is ignored. The root node is a __Types__. * __Identifier__ = `[\$\w]+` - a group of any lower or upper case letters, numbers, underscores, or dollar signs - eg. `String` * __Type__ = an `Identifier`, an `Identifier` followed by a `Structure`, just a `Structure`, or a wildcard `*` - eg. `String`, `Object{x: Number}`, `{x: Number}`, `Array{0: String, 1: Boolean, length: Number}`, `*` * __Types__ = optionally a comment (an `Identifier` followed by a `::`), optionally the identifier `Maybe`, one or more `Type`, separated by `|` - eg. `Number`, `String | Date`, `Maybe Number`, `Maybe Boolean | String` * __Structure__ = `Fields`, or a `Tuple`, or an `Array` - eg. `{x: Number}`, `(String, Number)`, `[Date]` * __Fields__ = a `{`, followed by one or more `Field` separated by a comma `,` (trailing comma `,` is permitted), optionally an `...` (always preceded by a comma `,`), followed by a `}` - eg. `{x: Number, y: String}`, `{k: Function, ...}` * __Field__ = an `Identifier`, followed by a colon `:`, followed by `Types` - eg.
`x: Date | String`, `y: Boolean` * __Tuple__ = a `(`, followed by one or more `Types` separated by a comma `,` (trailing comma `,` is permitted), followed by a `)` - eg `(Date)`, `(Number, Date)` * __Array__ = a `[` followed by exactly one `Types` followed by a `]` - eg. `[Boolean]`, `[Boolean | Null]` ### Guide `type-check` uses `Object.toString` to find out the basic type of a value. Specifically, ```js {}.toString.call(VALUE).slice(8, -1) {}.toString.call(true).slice(8, -1) // 'Boolean' ``` A basic type, eg. `Number`, uses this check. This is much more versatile than using `typeof` - for example, with `document`, `typeof` produces `'object'` which isn't that useful, and our technique produces `'HTMLDocument'`. You may check for multiple types by separating types with a `|`. The checker proceeds from left to right, and passes if the value is any of the types - eg. `String | Boolean` first checks if the value is a string, and then if it is a boolean. If it is none of those, then it returns false. Adding a `Maybe` in front of a list of multiple types is the same as also checking for `Null` and `Undefined` - eg. `Maybe String` is equivalent to `Undefined | Null | String`. You may add a comment to remind you of what the type is for by following an identifier with a `::` before a type (or multiple types). The comment is simply thrown out. The wildcard `*` matches all types. There are three types of structures for checking the contents of a value: 'fields', 'tuple', and 'array'. If used by itself, a 'fields' structure will pass with any type of object as long as it is an instance of `Object` and the properties pass - this allows for duck typing - eg. `{x: Boolean}`. To check if the properties pass, and the value is of a certain type, you can specify the type - eg. `Error{message: String}`. If you want to make a field optional, you can simply use `Maybe` - eg. `{x: Boolean, y: Maybe String}` will still pass if `y` is undefined (or null). If you don't care if the value has properties beyond what you have specified, you can use the 'etc' operator `...` - eg. `{x: Boolean, ...}` will match an object with an `x` property that is a boolean, and with zero or more other properties. For an array, you must specify one or more types (separated by `|`) - it will pass for something of any length as long as each element passes the types provided - eg. `[Number]`, `[Number | String]`. A tuple checks for a fixed number of elements, each of a potentially different type. Each element is separated by a comma - eg. `(String, Number)`. An array and tuple structure check that the value is of type `Array` by default, but if another type is specified, they will check for that instead - eg. `Int32Array[Number]`. You can use the wildcard `*` to search for any type at all. Check out the [type precedence](https://github.com/zaboco/type-precedence) library for type-check. ## Options Options is an object. It is an optional parameter to the `typeCheck` and `parsedTypeCheck` functions. The only current option is `customTypes`. <a name="custom-types" /> ### Custom Types __Example:__ ```js var options = { customTypes: { Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; } } } }; typeCheck('Even', 2, options); // true typeCheck('Even', 3, options); // false ``` `customTypes` allows you to set up custom types for validation. The value of this is an object. The keys of the object are the types you will be matching. 
Each value of the object will be an object having a `typeOf` property - a string, and `validate` property - a function. The `typeOf` property is the type the value should be (optional - if not set only `validate` will be used), and `validate` is a function which should return true if the value is of that type. `validate` receives one parameter, which is the value that we are checking. ## Technical About `type-check` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It also uses the [prelude.ls](http://preludels.com/) library. # json-schema-traverse Traverse JSON Schema passing each schema object to callback [![build](https://github.com/epoberezkin/json-schema-traverse/workflows/build/badge.svg)](https://github.com/epoberezkin/json-schema-traverse/actions?query=workflow%3Abuild) [![npm](https://img.shields.io/npm/v/json-schema-traverse)](https://www.npmjs.com/package/json-schema-traverse) [![coverage](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master) ## Install ``` npm install json-schema-traverse ``` ## Usage ```javascript const traverse = require('json-schema-traverse'); const schema = { properties: { foo: {type: 'string'}, bar: {type: 'integer'} } }; traverse(schema, {cb}); // cb is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // Or: traverse(schema, {cb: {pre, post}}); // pre is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // // post is called 3 times with: // 1. {type: 'string'} // 2. {type: 'integer'} // 3. root schema ``` Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved, they are passed as is. Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed. Callback is passed these parameters: - _schema_: the current schema object - _JSON pointer_: from the root schema to the current schema object - _root schema_: the schema passed to `traverse` object - _parent JSON pointer_: from the root schema to the parent schema object (see below) - _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.) - _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema - _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'` ## Traverse objects in all unknown keywords ```javascript const traverse = require('json-schema-traverse'); const schema = { mySchema: { minimum: 1, maximum: 2 } }; traverse(schema, {allKeys: true, cb}); // cb is called 2 times with: // 1. root schema // 2. mySchema ``` Without option `allKeys: true` callback will be called only with root schema. ## Enterprise support json-schema-traverse package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-json-schema-traverse?utm_source=npm-json-schema-traverse&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. 
## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. ## License [MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE) # `asbuild` [![Stars](https://img.shields.io/github/stars/AssemblyScript/asbuild.svg?style=social&maxAge=3600&label=Star)](https://github.com/AssemblyScript/asbuild/stargazers) *A simple build tool for [AssemblyScript](https://assemblyscript.org) projects, similar to `cargo`, etc.* ## 🚩 Table of Contents - [Installing](#-installing) - [Usage](#-usage) - [`asb init`](#asb-init---create-an-empty-project) - [`asb test`](#asb-test---run-as-pect-tests) - [`asb fmt`](#asb-fmt---format-as-files-using-eslint) - [`asb run`](#asb-run---run-a-wasi-binary) - [`asb build`](#asb-build---compile-the-project-using-asc) - [Background](#-background) ## 🔧 Installing Install it globally ``` npm install -g asbuild ``` Or, locally as dev dependencies ``` npm install --save-dev asbuild ``` ## 💡 Usage ``` Build tool for AssemblyScript projects. Usage: asb [command] [options] Commands: asb Alias of build command, to maintain back-ward compatibility [default] asb build Compile a local package and all of its dependencies [aliases: compile, make] asb init [baseDir] Create a new AS package in an given directory asb test Run as-pect tests asb fmt [paths..] This utility formats current module using eslint. [aliases: format, lint] Options: --version Show version number [boolean] --help Show help [boolean] ``` ### `asb init` - Create an empty project ``` asb init [baseDir] Create a new AS package in an given directory Positionals: baseDir Create a sample AS project in this directory [string] [default: "."] Options: --version Show version number [boolean] --help Show help [boolean] --yes Skip the interactive prompt [boolean] [default: false] ``` ### `asb test` - Run as-pect tests ``` asb test Run as-pect tests USAGE: asb test [options] -- [aspect_options] Options: --version Show version number [boolean] --help Show help [boolean] --verbose, --vv Print out arguments passed to as-pect [boolean] [default: false] ``` ### `asb fmt` - Format AS files using ESlint ``` asb fmt [paths..] This utility formats current module using eslint. Positionals: paths Paths to format [array] [default: ["."]] Initialisation: --init Generates recommended eslint config for AS Projects [boolean] Miscellaneous --lint, --dry-run Tries to fix problems without saving the changes to the file system [boolean] [default: false] Options: --version Show version number [boolean] --help Show help ``` ### `asb run` - Run a WASI binary ``` asb run Run a WASI binary USAGE: asb run [options] [binary path] -- [binary options] Positionals: binary path to Wasm binary [string] [required] Options: --version Show version number [boolean] --help Show help [boolean] --preopen, -p comma separated list of directories to open. [default: "."] ``` ### `asb build` - Compile the project using asc ``` asb build Compile a local package and all of its dependencies USAGE: asb build [entry_file] [options] -- [asc_options] Options: --version Show version number [boolean] --help Show help [boolean] --baseDir, -d Base directory of project. [string] [default: "."] --config, -c Path to asconfig file [string] [default: "./asconfig.json"] --wat Output wat file to outDir [boolean] [default: false] --outDir Directory to place built binaries. 
Default "./build/<target>/" [string] --target Target for compilation [string] [default: "release"] --verbose Print out arguments passed to asc [boolean] [default: false] Examples: asb build Build release of 'assembly/index.ts to build/release/packageName.wasm asb build --target release Build a release binary asb build -- --measure Pass argument to 'asc' ``` #### Defaults ##### Project structure ``` project/ package.json asconfig.json assembly/ index.ts build/ release/ project.wasm debug/ project.wasm ``` - If no entry file passed and no `entry` field is in `asconfig.json`, `project/assembly/index.ts` is assumed. - `asconfig.json` allows for options for different compile targets, e.g. release, debug, etc. `asc` defaults to the release target. - The default build directory is `./build`, and artifacts are placed at `./build/<target>/packageName.wasm`. ##### Workspaces If a `workspace` field is added to a top level `asconfig.json` file, then each path in the array is built and placed into the top level `outDir`. For example, `asconfig.json`: ```json { "workspaces": ["a", "b"] } ``` Running `asb` in the directory below will use the top level build directory to place all the binaries. ``` project/ package.json asconfig.json a/ asconfig.json assembly/ index.ts b/ asconfig.json assembly/ index.ts build/ release/ a.wasm b.wasm debug/ a.wasm b.wasm ``` To see an example in action check out the [test workspace](./tests/build_test) ## 📖 Background Asbuild started as wrapper around `asc` to provide an easier CLI interface and now has been extened to support other commands like `init`, `test` and `fmt` just like `cargo` to become a one stop build tool for AS Projects. ## 📜 License This library is provided under the open-source [MIT license](https://choosealicense.com/licenses/mit/). # word-wrap [![NPM version](https://img.shields.io/npm/v/word-wrap.svg?style=flat)](https://www.npmjs.com/package/word-wrap) [![NPM monthly downloads](https://img.shields.io/npm/dm/word-wrap.svg?style=flat)](https://npmjs.org/package/word-wrap) [![NPM total downloads](https://img.shields.io/npm/dt/word-wrap.svg?style=flat)](https://npmjs.org/package/word-wrap) [![Linux Build Status](https://img.shields.io/travis/jonschlinkert/word-wrap.svg?style=flat&label=Travis)](https://travis-ci.org/jonschlinkert/word-wrap) > Wrap words to a specified length. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save word-wrap ``` ## Usage ```js var wrap = require('word-wrap'); wrap('Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.'); ``` Results in: ``` Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. ``` ## Options ![image](https://cloud.githubusercontent.com/assets/383994/6543728/7a381c08-c4f6-11e4-8b7d-b6ba197569c9.png) ### options.width Type: `Number` Default: `50` The width of the text before wrapping to a new line. **Example:** ```js wrap(str, {width: 60}); ``` ### options.indent Type: `String` Default: `` (two spaces) The string to use at the beginning of each line. **Example:** ```js wrap(str, {indent: ' '}); ``` ### options.newline Type: `String` Default: `\n` The string to use at the end of each line. 
**Example:** ```js wrap(str, {newline: '\n\n'}); ``` ### options.escape Type: `function` Default: `function(str){return str;}` An escape function to run on each line after splitting them. **Example:** ```js var xmlescape = require('xml-escape'); wrap(str, { escape: function(string){ return xmlescape(string); } }); ``` ### options.trim Type: `Boolean` Default: `false` Trim trailing whitespace from the returned string. This option is included since `.trim()` would also strip the leading indentation from the first line. **Example:** ```js wrap(str, {trim: true}); ``` ### options.cut Type: `Boolean` Default: `false` Break a word between any two letters when the word is longer than the specified width. **Example:** ```js wrap(str, {cut: true}); ``` ## About ### Related projects * [common-words](https://www.npmjs.com/package/common-words): Updated list (JSON) of the 100 most common words in the English language. Useful for… [more](https://github.com/jonschlinkert/common-words) | [homepage](https://github.com/jonschlinkert/common-words "Updated list (JSON) of the 100 most common words in the English language. Useful for excluding these words from arrays.") * [shuffle-words](https://www.npmjs.com/package/shuffle-words): Shuffle the words in a string and optionally the letters in each word using the… [more](https://github.com/jonschlinkert/shuffle-words) | [homepage](https://github.com/jonschlinkert/shuffle-words "Shuffle the words in a string and optionally the letters in each word using the Fisher-Yates algorithm. Useful for creating test fixtures, benchmarking samples, etc.") * [unique-words](https://www.npmjs.com/package/unique-words): Return the unique words in a string or array. | [homepage](https://github.com/jonschlinkert/unique-words "Return the unique words in a string or array.") * [wordcount](https://www.npmjs.com/package/wordcount): Count the words in a string. Support for english, CJK and Cyrillic. | [homepage](https://github.com/jonschlinkert/wordcount "Count the words in a string. Support for english, CJK and Cyrillic.") ### Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Contributors | **Commits** | **Contributor** | | --- | --- | | 43 | [jonschlinkert](https://github.com/jonschlinkert) | | 2 | [lordvlad](https://github.com/lordvlad) | | 2 | [hildjj](https://github.com/hildjj) | | 1 | [danilosampaio](https://github.com/danilosampaio) | | 1 | [2fd](https://github.com/2fd) | | 1 | [toddself](https://github.com/toddself) | | 1 | [wolfgang42](https://github.com/wolfgang42) | | 1 | [zachhale](https://github.com/zachhale) | ### Building docs _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` ### Running tests Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` ### Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](https://twitter.com/jonschlinkert) ### License Copyright © 2017, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). 
*** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.6.0, on June 02, 2017._ iMurmurHash.js ============== An incremental implementation of the MurmurHash3 (32-bit) hashing algorithm for JavaScript based on [Gary Court's implementation](https://github.com/garycourt/murmurhash-js) with [kazuyukitanimura's modifications](https://github.com/kazuyukitanimura/murmurhash-js). This version works significantly faster than the non-incremental version if you need to hash many small strings into a single hash, since string concatenation (to build the single string to pass the non-incremental version) is fairly costly. In one case tested, using the incremental version was about 50% faster than concatenating 5-10 strings and then hashing. Installation ------------ To use iMurmurHash in the browser, [download the latest version](https://raw.github.com/jensyt/imurmurhash-js/master/imurmurhash.min.js) and include it as a script on your site. ```html <script type="text/javascript" src="/scripts/imurmurhash.min.js"></script> <script> // Your code here, access iMurmurHash using the global object MurmurHash3 </script> ``` --- To use iMurmurHash in Node.js, install the module using NPM: ```bash npm install imurmurhash ``` Then simply include it in your scripts: ```javascript MurmurHash3 = require('imurmurhash'); ``` Quick Example ------------- ```javascript // Create the initial hash var hashState = MurmurHash3('string'); // Incrementally add text hashState.hash('more strings'); hashState.hash('even more strings'); // All calls can be chained if desired hashState.hash('and').hash('some').hash('more'); // Get a result hashState.result(); // returns 0xe4ccfe6b ``` Functions --------- ### MurmurHash3 ([string], [seed]) Get a hash state object, optionally initialized with the given _string_ and _seed_. _Seed_ must be a positive integer if provided. Calling this function without the `new` keyword will return a cached state object that has been reset. This is safe to use as long as the object is only used from a single thread and no other hashes are created while operating on this one. If this constraint cannot be met, you can use `new` to create a new state object. For example: ```javascript // Use the cached object, calling the function again will return the same // object (but reset, so the current state would be lost) hashState = MurmurHash3(); ... // Create a new object that can be safely used however you wish. Calling the // function again will simply return a new state object, and no state loss // will occur, at the cost of creating more objects. hashState = new MurmurHash3(); ``` Both methods can be mixed however you like if you have different use cases. --- ### MurmurHash3.prototype.hash (string) Incrementally add _string_ to the hash. This can be called as many times as you want for the hash state object, including after a call to `result()`. Returns `this` so calls can be chained. --- ### MurmurHash3.prototype.result () Get the result of the hash as a 32-bit positive integer. This performs the tail and finalizer portions of the algorithm, but does not store the result in the state object. This means that it is perfectly safe to get results and then continue adding strings via `hash`. 
```javascript // Do the whole string at once MurmurHash3('this is a test string').result(); // 0x70529328 // Do part of the string, get a result, then the other part var m = MurmurHash3('this is a'); m.result(); // 0xbfc4f834 m.hash(' test string').result(); // 0x70529328 (same as above) ``` --- ### MurmurHash3.prototype.reset ([seed]) Reset the state object for reuse, optionally using the given _seed_ (defaults to 0 like the constructor). Returns `this` so calls can be chained. --- License (MIT) ------------- Copyright (c) 2013 Gary Court, Jens Taylor Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # Regular Expression Tokenizer Tokenizes strings that represent regular expressions. [![Build Status](https://secure.travis-ci.org/fent/ret.js.svg)](http://travis-ci.org/fent/ret.js) [![Dependency Status](https://david-dm.org/fent/ret.js.svg)](https://david-dm.org/fent/ret.js) [![codecov](https://codecov.io/gh/fent/ret.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/ret.js) # Usage ```js var ret = require('ret'); var tokens = ret(/foo|bar/.source); ``` `tokens` will contain the following object: ```js { "type": ret.types.ROOT, "options": [ [ { "type": ret.types.CHAR, "value": 102 }, { "type": ret.types.CHAR, "value": 111 }, { "type": ret.types.CHAR, "value": 111 } ], [ { "type": ret.types.CHAR, "value": 98 }, { "type": ret.types.CHAR, "value": 97 }, { "type": ret.types.CHAR, "value": 114 } ] ] } ``` # Token Types `ret.types` is a collection of the various token types exported by ret. ### ROOT Only used in the root of the regexp. This is needed due to the possibility of the root containing a pipe `|` character. In that case, the token will have an `options` key that will be an array of arrays of tokens. If not, it will contain a `stack` key that is an array of tokens. ```js { "type": ret.types.ROOT, "stack": [token1, token2...], } ``` ```js { "type": ret.types.ROOT, "options": [ [token1, token2...], [othertoken1, othertoken2...] ... ], } ``` ### GROUP Groups contain tokens that are inside of parentheses. If the group begins with `?` followed by another character, it's a special type of group. A ':' tells the group not to be remembered when `exec` is used. '=' means the previous token matches only if followed by this group, and '!' means the previous token matches only if NOT followed. Like root, it can contain an `options` key instead of `stack` if there is a pipe.
```js { "type": ret.types.GROUP, "remember" true, "followedBy": false, "notFollowedBy": false, "stack": [token1, token2...], } ``` ```js { "type": ret.types.GROUP, "remember" true, "followedBy": false, "notFollowedBy": false, "options" [ [token1, token2...], [othertoken1, othertoken2...] ... ], } ``` ### POSITION `\b`, `\B`, `^`, and `$` specify positions in the regexp. ```js { "type": ret.types.POSITION, "value": "^", } ``` ### SET Contains a key `set` specifying what tokens are allowed and a key `not` specifying if the set should be negated. A set can contain other sets, ranges, and characters. ```js { "type": ret.types.SET, "set": [token1, token2...], "not": false, } ``` ### RANGE Used in set tokens to specify a character range. `from` and `to` are character codes. ```js { "type": ret.types.RANGE, "from": 97, "to": 122, } ``` ### REPETITION ```js { "type": ret.types.REPETITION, "min": 0, "max": Infinity, "value": token, } ``` ### REFERENCE References a group token. `value` is 1-9. ```js { "type": ret.types.REFERENCE, "value": 1, } ``` ### CHAR Represents a single character token. `value` is the character code. This might seem a bit cluttering instead of concatenating characters together. But since repetition tokens only repeat the last token and not the last clause like the pipe, it's simpler to do it this way. ```js { "type": ret.types.CHAR, "value": 123, } ``` ## Errors ret.js will throw errors if given a string with an invalid regular expression. All possible errors are * Invalid group. When a group with an immediate `?` character is followed by an invalid character. It can only be followed by `!`, `=`, or `:`. Example: `/(?_abc)/` * Nothing to repeat. Thrown when a repetitional token is used as the first token in the current clause, as in right in the beginning of the regexp or group, or right after a pipe. Example: `/foo|?bar/`, `/{1,3}foo|bar/`, `/foo(+bar)/` * Unmatched ). A group was not opened, but was closed. Example: `/hello)2u/` * Unterminated group. A group was not closed. Example: `/(1(23)4/` * Unterminated character class. A custom character set was not closed. Example: `/[abc/` # Install npm install ret # Tests Tests are written with [vows](http://vowsjs.org/) ```bash npm test ``` # License MIT # Acorn-JSX [![Build Status](https://travis-ci.org/acornjs/acorn-jsx.svg?branch=master)](https://travis-ci.org/acornjs/acorn-jsx) [![NPM version](https://img.shields.io/npm/v/acorn-jsx.svg)](https://www.npmjs.org/package/acorn-jsx) This is plugin for [Acorn](http://marijnhaverbeke.nl/acorn/) - a tiny, fast JavaScript parser, written completely in JavaScript. It was created as an experimental alternative, faster [React.js JSX](http://facebook.github.io/react/docs/jsx-in-depth.html) parser. Later, it replaced the [official parser](https://github.com/facebookarchive/esprima) and these days is used by many prominent development tools. ## Transpiler Please note that this tool only parses source code to JSX AST, which is useful for various language tools and services. If you want to transpile your code to regular ES5-compliant JavaScript with source map, check out [Babel](https://babeljs.io/) and [Buble](https://buble.surge.sh/) transpilers which use `acorn-jsx` under the hood. 
## Usage Requiring this module provides you with an Acorn plugin that you can use like this: ```javascript var acorn = require("acorn"); var jsx = require("acorn-jsx"); acorn.Parser.extend(jsx()).parse("my(<jsx/>, 'code');"); ``` Note that official spec doesn't support mix of XML namespaces and object-style access in tag names (#27) like in `<namespace:Object.Property />`, so it was deprecated in `acorn-jsx@3.0`. If you still want to opt-in to support of such constructions, you can pass the following option: ```javascript acorn.Parser.extend(jsx({ allowNamespacedObjects: true })) ``` Also, since most apps use pure React transformer, a new option was introduced that allows to prohibit namespaces completely: ```javascript acorn.Parser.extend(jsx({ allowNamespaces: false })) ``` Note that by default `allowNamespaces` is enabled for spec compliancy. ## License This plugin is issued under the [MIT license](./LICENSE). # has > Object.prototype.hasOwnProperty.call shortcut ## Installation ```sh npm install --save has ``` ## Usage ```js var has = require('has'); has({}, 'hasOwnProperty'); // false has(Object.prototype, 'hasOwnProperty'); // true ``` # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. 
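For instance, assuming the `app.js` and `worker.js` example files above, you could enable both namespaces from the shell with an invocation along these lines (an illustrative sketch, not output from the debug docs):

```bash
# enable the 'http' debugger and every 'worker:*' debugger for this run
DEBUG=http,worker:* node app.js
```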
Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows command prompt notes ##### CMD On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Example: ```cmd set DEBUG=* & node app.js ``` ##### PowerShell (VS Code default) PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Example: ```cmd $env:DEBUG='app';node app.js ``` Then, run the program to be debugged as usual. npm script example: ```js "windowsDebug": "@powershell -Command $env:DEBUG='*';node app.js", ``` ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see the time spent between one `debug()` call and the next. Suppose, for example, you invoke `debug()` before requesting a resource and again afterwards; the "+NNNms" will show you how much time was spent between calls. <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debugger you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard.
Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. | | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! 
log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Extend You can simply extend a debugger: ```js const log = require('debug')('auth'); // creates a new debug instance with an extended namespace const logSign = log.extend('sign'); const logLogin = log.extend('login'); log('hello'); // auth hello logSign('hello'); // auth:sign hello logLogin('hello'); // auth:login hello ``` ## Set dynamically You can also enable debug dynamically by calling the `enable()` method: ```js let debug = require('debug'); console.log(1, debug.enabled('test')); debug.enable('test'); console.log(2, debug.enabled('test')); debug.disable(); console.log(3, debug.enabled('test')); ``` This prints: ``` 1 false 2 true 3 false ``` Usage: `enable(namespaces)` `namespaces` can include modes separated by a colon and wildcards. Note that calling `enable()` completely overrides the previously set DEBUG variable: ``` $ DEBUG=foo node -e 'var dbg = require("debug"); dbg.enable("bar"); console.log(dbg.enabled("foo"))' => false ``` `disable()` will disable all namespaces. The function returns the namespaces currently enabled (and skipped). This can be useful if you want to disable debugging temporarily without knowing what was enabled to begin with. For example: ```js let debug = require('debug'); debug.enable('foo:*,-foo:bar'); let namespaces = debug.disable(); debug.enable(namespaces); ``` Note: There is no guarantee that the string will be identical to the initial enable string, but semantically they will be identical. ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities.
[[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img 
src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Shims used when bundling asc for browser usage. 
# get-caller-file [![Build Status](https://travis-ci.org/stefanpenner/get-caller-file.svg?branch=master)](https://travis-ci.org/stefanpenner/get-caller-file) [![Build status](https://ci.appveyor.com/api/projects/status/ol2q94g1932cy14a/branch/master?svg=true)](https://ci.appveyor.com/project/embercli/get-caller-file/branch/master) This is a utility that allows a function to figure out from which file it was invoked. It does so by inspecting v8's stack trace at the time it is invoked. Inspired by http://stackoverflow.com/questions/13227489 *Note: this relies on Node/V8-specific APIs; other runtimes may not work.* ## Installation ```bash yarn add get-caller-file ``` ## Usage Given: ```js // ./foo.js const getCallerFile = require('get-caller-file'); module.exports = function() { return getCallerFile(); // figures out who called it }; ``` ```js // index.js const foo = require('./foo'); foo() // => /full/path/to/this/file/index.js ``` ## Options: * `getCallerFile(position = 2)`: where `position` is the stack frame whose fileName we want. ## Test Strategy - tests are copied from the [polyfill implementation](https://github.com/tc39/proposal-temporal/tree/main/polyfill/test) - tests should be removed if they relate to features that do not make sense for TS/AS, i.e. tests that validate the shape of an object do not make sense in a language with compile-time type checking - tests that fail because a feature has not been implemented yet should be left as failures. # fs.realpath A backwards-compatible fs.realpath for Node v6 and above. In Node v6, the JavaScript implementation of fs.realpath was replaced with a faster (but less resilient) native implementation. That raises new and platform-specific errors and cannot handle long or excessively symlink-looping paths. This module handles those cases by detecting the new errors and falling back to the JavaScript implementation. On versions of Node prior to v6, it has no effect. ## USAGE ```js var rp = require('fs.realpath') // async version rp.realpath(someLongAndLoopingPath, function (er, real) { // the ELOOP was handled, but it was a bit slower }) // sync version var real = rp.realpathSync(someLongAndLoopingPath) // monkeypatch at your own risk! // This replaces the fs.realpath/fs.realpathSync builtins rp.monkeypatch() // un-do the monkeypatching rp.unmonkeypatch() ``` # require-main-filename [![Build Status](https://travis-ci.org/yargs/require-main-filename.png)](https://travis-ci.org/yargs/require-main-filename) [![Coverage Status](https://coveralls.io/repos/yargs/require-main-filename/badge.svg?branch=master)](https://coveralls.io/r/yargs/require-main-filename?branch=master) [![NPM version](https://img.shields.io/npm/v/require-main-filename.svg)](https://www.npmjs.com/package/require-main-filename) `require.main.filename` is great for figuring out the entry point for the current application. This can be combined with a module like [pkg-conf](https://www.npmjs.com/package/pkg-conf) to, _as if by magic_, load top-level configuration. Unfortunately, `require.main.filename` sometimes fails when an application is executed with an alternative process manager, e.g., [iisnode](https://github.com/tjanczuk/iisnode). `require-main-filename` is a shim that addresses this problem. ## Usage ```js var main = require('require-main-filename')() // use main as an alternative to require.main.filename. ``` ## License ISC # lodash.sortby v4.7.0 The [lodash](https://lodash.com/) method `_.sortBy` exported as a [Node.js](https://nodejs.org/) module.
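For a quick sense of what the exported function does, here is a small illustrative example (the data and expected ordering are made up for this sketch; see the lodash documentation linked below for the full API):

```js
var sortBy = require('lodash.sortby');

var users = [
  { user: 'fred',   age: 48 },
  { user: 'barney', age: 36 },
  { user: 'fred',   age: 40 }
];

// sort ascending by each iteratee in turn (property names in this case)
sortBy(users, ['user', 'age']);
// => [{user: 'barney', age: 36}, {user: 'fred', age: 40}, {user: 'fred', age: 48}]
```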
## Installation

Using npm:

```bash
$ {sudo -H} npm i -g npm
$ npm i --save lodash.sortby
```

In Node.js:

```js
var sortBy = require('lodash.sortby');
```

See the [documentation](https://lodash.com/docs#sortBy) or [package source](https://github.com/lodash/lodash/blob/4.7.0-npm-packages/lodash.sortby) for more details.

[![build status](https://app.travis-ci.com/dankogai/js-base64.svg)](https://app.travis-ci.com/github/dankogai/js-base64)

# base64.js

Yet another [Base64] transcoder.

[Base64]: http://en.wikipedia.org/wiki/Base64

## Install

```shell
$ npm install --save js-base64
```

## Usage

### In Browser

Locally…

```html
<script src="base64.js"></script>
```

… or directly from CDN, in which case you don't even need to install.

```html
<script src="https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.min.js"></script>
```

This good old way loads `Base64` in the global context (`window`). Though `Base64.noConflict()` is made available, you should consider using ES6 Modules to avoid tainting `window`.

### As an ES6 Module

locally…

```javascript
import { Base64 } from 'js-base64';
```

```javascript
// or if you prefer no Base64 namespace
import { encode, decode } from 'js-base64';
```

or even remotely.

```html
<script type="module">
// note jsdelivr.net does not automatically minify .mjs
import { Base64 } from 'https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.mjs';
</script>
```

```html
<script type="module">
// or if you prefer no Base64 namespace
import { encode, decode } from 'https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.mjs';
</script>
```

### node.js (commonjs)

```javascript
const {Base64} = require('js-base64');
```

Unlike the case above, the global context is no longer modified.

You can also use [esm] to `import` instead of `require`.

[esm]: https://github.com/standard-things/esm

```javascript
require=require('esm')(module);
import {Base64} from 'js-base64';
```

## SYNOPSIS

```javascript
let latin = 'dankogai';
let utf8  = '小飼弾';
let u8s   = new Uint8Array([100,97,110,107,111,103,97,105]);
Base64.encode(latin);             // ZGFua29nYWk=
Base64.encode(latin, true);       // ZGFua29nYWk skips padding
Base64.encodeURI(latin);          // ZGFua29nYWk
Base64.btoa(latin);               // ZGFua29nYWk=
Base64.btoa(utf8);                // raises exception
Base64.fromUint8Array(u8s);       // ZGFua29nYWk=
Base64.fromUint8Array(u8s, true); // ZGFua29nYWk which is URI safe
Base64.encode(utf8);              // 5bCP6aO85by+
Base64.encode(utf8, true);        // 5bCP6aO85by-
Base64.encodeURI(utf8);           // 5bCP6aO85by-
```

```javascript
Base64.decode(      'ZGFua29nYWk=');  // dankogai
Base64.decode(      'ZGFua29nYWk');   // dankogai
Base64.atob(        'ZGFua29nYWk=');  // dankogai
Base64.atob(        '5bCP6aO85by+');  // mojibake of '小飼弾', which is nonsense
Base64.toUint8Array('ZGFua29nYWk=');  // u8s above
Base64.decode(      '5bCP6aO85by+');  // 小飼弾
// note .decodeURI() is unnecessary since it accepts both flavors
Base64.decode(      '5bCP6aO85by-');  // 小飼弾
```

```javascript
Base64.isValid(0);      // false: 0 is not string
Base64.isValid('');     // true: a valid Base64-encoded empty byte
Base64.isValid('ZA=='); // true: a valid Base64-encoded 'd'
Base64.isValid('Z A='); // true: whitespaces are okay
Base64.isValid('ZA');   // true: padding ='s can be omitted
Base64.isValid('++');   // true: can be non URL-safe
Base64.isValid('--');   // true: or URL-safe
Base64.isValid('+-');   // false: can't mix both
```

### Built-in Extensions

By default `Base64` leaves built-in prototypes untouched. But you can extend them as below.
```javascript
// you have to explicitly extend String.prototype
Base64.extendString();
// once extended, you can do the following
'dankogai'.toBase64();        // ZGFua29nYWk=
'小飼弾'.toBase64();           // 5bCP6aO85by+
'小飼弾'.toBase64(true);       // 5bCP6aO85by-
'小飼弾'.toBase64URI();        // 5bCP6aO85by- an alias of .toBase64(true)
'小飼弾'.toBase64URL();        // 5bCP6aO85by- an alias of .toBase64URI()
'ZGFua29nYWk='.fromBase64();  // dankogai
'5bCP6aO85by+'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.toUint8Array();// u8s above
```

```javascript
// you have to explicitly extend Uint8Array.prototype
Base64.extendUint8Array();
// once extended, you can do the following
u8s.toBase64();    // 'ZGFua29nYWk='
u8s.toBase64URI(); // 'ZGFua29nYWk'
u8s.toBase64URL(); // 'ZGFua29nYWk' an alias of .toBase64URI()
```

```javascript
// extend all at once
Base64.extendBuiltins()
```

## `.decode()` vs `.atob` (and `.encode()` vs `btoa()`)

Suppose you have:

```
var pngBase64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=";
```

Which is a Base64-encoded 1x1 transparent PNG. **DO NOT USE** `Base64.decode(pngBase64)`.  Use `Base64.atob(pngBase64)` instead.  `Base64.decode()` decodes to a UTF-8 string while `Base64.atob()` decodes to bytes, which is compatible with the browser built-in `atob()` (which is absent in node.js).  The same rule applies to the opposite direction. Or even better, use `Base64.toUint8Array(pngBase64)`.

### If you really, really need an ES5 version

You can transpile to an ES5 version that runs on IEs before 11. Do the following in your shell.

```shell
$ make base64.es5.js
```

## Brief History

* Since version 3.3 it is written in TypeScript. Now `base64.mjs` is compiled from `base64.ts`, then `base64.js` is generated from `base64.mjs`.
* Since version 3.7 `base64.js` is ES5-compatible again (hence IE11-compatible).
* Since 3.0 `js-base64` switched to ES2015 modules, so it is no longer compatible with legacy browsers like IE (see above)

# isarray

`Array#isArray` for older browsers.

[![build status](https://secure.travis-ci.org/juliangruber/isarray.svg)](http://travis-ci.org/juliangruber/isarray) [![downloads](https://img.shields.io/npm/dm/isarray.svg)](https://www.npmjs.org/package/isarray) [![browser support](https://ci.testling.com/juliangruber/isarray.png)](https://ci.testling.com/juliangruber/isarray)

## Usage

```js
var isArray = require('isarray');

console.log(isArray([])); // => true
console.log(isArray({})); // => false
```

## Installation

With [npm](http://npmjs.org) do

```bash
$ npm install isarray
```

Then bundle for the browser with [browserify](https://github.com/substack/browserify).

With [component](http://component.io) do

```bash
$ component install juliangruber/isarray
```

## License

(MIT)

Copyright (c) 2013 Julian Gruber <julian@juliangruber.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
ketyung_tm_ticket_mints_contract
Cargo.toml LICENSE.md build.sh readme.md src lib.rs mints_manage.rs mints_view.rs models.rs tests.rs term_tests1.txt
metaguild_nearbox
README.md contract Cargo.toml build.js src lib.rs index.html jsconfig.json package.json postcss.config.js public font inter.css logo-ceramic.svg near-logo-bug.svg near-logo.svg near_token_icon.svg src main.js routes.js store actions.js getters.js index.js mutations.js state.js tailwind.css tailwind.config.js vite.config.js
<br /> <br /> <p> <img src="https://near.org/wp-content/themes/near-19/assets/img/neue/logo.svg?t=1600963474" width="200"> </p> <p> <img src="https://ceramic.network/images/ceramic-logo-p-500.png" width="200"> </p> <br /> <br /> # Near Ceramic Vue A quick start setup for a dApp in NEAR, Ceramic, Vue 3, Tailwind CSS 2 This starter template also includes: - [Vue-Near](https://www.npmjs.com/package/vue-near) Easy NEAR Blockchain methods using `$near` - [Vue Router 4.x](https://github.com/vuejs/vue-router-next) - [Inter var font](https://github.com/rsms/inter) (self hosted, woff2, version 3.15) - First-party plugins needed for Tailwind UI. Uncomment them in `tailwind.config.js` to enable. * [@tailwindcss/forms](https://github.com/tailwindlabs/tailwindcss-forms) * [@tailwindcss/typography](https://github.com/tailwindlabs/tailwindcss-typography) * [@tailwindcss/aspect-ratio](https://github.com/tailwindlabs/tailwindcss-aspect-ratio) ### Example UI: <img src="./public/docs/demo.png" width="400"> ---- ### Getting Started 🚀 npm: ```sh npm install npm run dev npm run build npm run contract:build npm run contract:dev:deploy npm run contract:deploy npm run contract:test ``` yarn: ```sh yarn yarn dev yarn build yarn contract:build yarn contract:dev:deploy yarn contract:deploy yarn contract:test ``` ### Bonus - [vue-tailwindcss-cdn](https://github.com/web2033/vue-tailwindcss-cdn) (a single HTML file with CDN links) - [CodePen Template](https://codepen.io/web2033/pen/QWNbwxY) with a similar stack (Vue 3.x + Tailwind 2.x + Inter var font) ## Showcase Did you use this template? If so, send a PR so it can be showcased here! 🎉 ## Tests I would love help writing tests. ❤️ ## License [MIT](LICENSE.txt) License ---- ### Refill My ☕️? If you feel this helped you in some way, you can tip `tjtc.near`
Mhezron_blood-donation-dApp-on-NEAR-protocol
.gitpod.yml README.md contract Cargo.toml README.md neardev dev-account.env src lib.rs target .rustc_info.json debug .fingerprint Inflector-4a8c8a7e9339386c lib-inflector.json ahash-45d0474d3a50e669 lib-ahash.json ahash-72508542b05bb4d5 build-script-build-script-build.json ahash-a44a5ee76b1fc569 run-build-script-build-script-build.json ahash-edda4797fae2a9f7 lib-ahash.json arrayref-83b5762287b368f2 lib-arrayref.json arrayref-ac2ab478ab84d752 lib-arrayref.json arrayvec-46e0d6b4216b4a42 lib-arrayvec.json arrayvec-6118444670fa2fc7 lib-arrayvec.json arrayvec-7a569ef838c53ea3 lib-arrayvec.json arrayvec-d2f5dc538f55e3d4 lib-arrayvec.json autocfg-3aefa914eeb411d0 lib-autocfg.json base64-7720e877595822cd lib-base64.json base64-8bcb81a96746c276 lib-base64.json base64-8f213d801c8b1621 lib-base64.json base64-b63434a4515c738b lib-base64.json bitvec-eb8932db86452ff4 lib-bitvec.json bitvec-f4e7076da8b78e10 lib-bitvec.json blake2-114dc8ca8e18058e lib-blake2.json blake2-7efa928110d247f0 lib-blake2.json block-buffer-209158feac067af1 lib-block-buffer.json block-buffer-410ba5980f6fbb82 lib-block-buffer.json block-buffer-b17260e0e77714da lib-block-buffer.json block-buffer-ba8de063c85affcd lib-block-buffer.json blood-6876aaf2b38d610b test-lib-blood.json blood-d40916a6a49b2603 lib-blood.json blood_donation-5b70350872f3d77e lib-blood_donation.json blood_donation-8b64c3f260e32816 lib-blood_donation.json blood_donation-ae7aad16eed61b19 test-lib-blood_donation.json blood_donation-fc5737d58c80f2af test-lib-blood_donation.json borsh-9bb65bb46a3fa398 lib-borsh.json borsh-derive-b6c81d569395d7cc lib-borsh-derive.json borsh-derive-internal-aaa5dadce4e63f26 lib-borsh-derive-internal.json borsh-e4d4d55dddc2f2ad lib-borsh.json borsh-schema-derive-internal-7059e633ec4d2e58 lib-borsh-schema-derive-internal.json bs58-470bfdec2460c771 lib-bs58.json bs58-74ebdf3302c8b177 lib-bs58.json byte-slice-cast-6a9bf03441b6f423 lib-byte-slice-cast.json byte-slice-cast-cbace4e31e23c17c lib-byte-slice-cast.json byteorder-3b2815624401495d lib-byteorder.json byteorder-b0e216414bfe58b6 lib-byteorder.json bytesize-23f214f2db4f1cdb lib-bytesize.json bytesize-c018ac408f1093e5 lib-bytesize.json c2-chacha-a481db35d06d382f lib-c2-chacha.json c2-chacha-b147d504f492a8f5 lib-c2-chacha.json cc-b3a34eae63a32363 lib-cc.json cfg-if-3a11b9a876968294 lib-cfg-if.json cfg-if-87828489ba99165a lib-cfg-if.json cfg-if-9e5685c44d76ca0a lib-cfg-if.json cfg-if-dcce77079dd8fe1e lib-cfg-if.json chrono-e0f12f95fd8b6191 lib-chrono.json chrono-f81b1a10427dca64 lib-chrono.json cipher-4a76cdc846189bdd lib-cipher.json cipher-c0d612c848390324 lib-cipher.json convert_case-eafa7e0ee60e7899 lib-convert_case.json cpufeatures-8aacd22bd207db59 lib-cpufeatures.json cpufeatures-98c9452ebeaba894 lib-cpufeatures.json crunchy-64d7c01442c6fd25 build-script-build-script-build.json crunchy-79a03fbc3917996b lib-crunchy.json crunchy-8bbe7e8e70917188 run-build-script-build-script-build.json crunchy-a68159a88f4bb0f4 lib-crunchy.json crypto-common-4f4b71aa97092312 lib-crypto-common.json crypto-common-984d23fb648201fe lib-crypto-common.json crypto-mac-4946cad3301bf4d8 lib-crypto-mac.json crypto-mac-7de313f6a2985db9 lib-crypto-mac.json curve25519-dalek-887f06ea94a98546 lib-curve25519-dalek.json curve25519-dalek-fb2d75d036f7759c lib-curve25519-dalek.json derive_more-d7bca12c5398a3fa lib-derive_more.json digest-4b4005105cfd72e3 lib-digest.json digest-8bc15e03ae79c9b0 lib-digest.json digest-8df07ecfa9ce5e71 lib-digest.json digest-f3dd665a04a6dd57 lib-digest.json easy-ext-150bd4d63cc9ab3d 
lib-easy-ext.json ed25519-4b21e8c6b52d0494 lib-ed25519.json ed25519-dalek-6cd0433029ad29fc lib-ed25519-dalek.json ed25519-dalek-96aaa98710ca6c46 lib-ed25519-dalek.json ed25519-fdaf0f0c1a6e25a6 lib-ed25519.json fixed-hash-0c8397e5ce0715a9 lib-fixed-hash.json fixed-hash-dbf7fe3e0e990417 lib-fixed-hash.json funty-69a60b18dba28c55 lib-funty.json funty-e8b4a735dede79d3 lib-funty.json generic-array-3461042ac24cc513 lib-generic_array.json generic-array-5e8a976bf2a9c545 build-script-build-script-build.json generic-array-784c2fecc4efa3b0 run-build-script-build-script-build.json generic-array-cf9fb17626b617b3 lib-generic_array.json getrandom-10bca283e3fc7ad7 build-script-build-script-build.json getrandom-17e259cc072f6edf lib-getrandom.json getrandom-666d5cf62fd1a94f lib-getrandom.json getrandom-9a229dff48be8e3d lib-getrandom.json getrandom-ac9655b2098e3bfd lib-getrandom.json getrandom-c39ec226ad3c31b7 run-build-script-build-script-build.json greeter-3913729e91c92d05 test-lib-greeter.json greeter-6013c1c62c238566 lib-greeter.json hashbrown-820e8c84e140a2c3 lib-hashbrown.json hashbrown-a6c50994543d4f1a lib-hashbrown.json heck-101bfbbdccd014e6 lib-heck.json hex-ca898f32910be3e9 lib-hex.json hex-e6e12260d7fb4e79 lib-hex.json impl-codec-a3f26c937d193bea lib-impl-codec.json impl-codec-de4154d6eb297fc9 lib-impl-codec.json impl-trait-for-tuples-0b1e457ece6a4c8b lib-impl-trait-for-tuples.json itoa-52ce8f5645ecf1ec lib-itoa.json itoa-9980eb29dd4d9bf3 lib-itoa.json keccak-9bc3a354d3c6bcbe lib-keccak.json keccak-a50e869ed66f3d46 lib-keccak.json libc-3c942367bfcc27e1 run-build-script-build-script-build.json libc-4e90763dd2291129 lib-libc.json libc-676f6e5c87aa1f0c build-script-build-script-build.json libc-6797570d26064b09 lib-libc.json memory_units-0ac8b3294301aa9d lib-memory_units.json memory_units-0f1aebfab2078f57 lib-memory_units.json near-account-id-eaf8c2ccfc8bea26 lib-near-account-id.json near-account-id-ebdadc6eaa6e0d28 lib-near-account-id.json near-crypto-47b4141367133156 lib-near-crypto.json near-crypto-dc10e22748fb1f52 lib-near-crypto.json near-primitives-43e47602cf3642d6 lib-near-primitives.json near-primitives-515030fd7a576d37 lib-near-primitives.json near-primitives-core-2a2d4df9d5d81626 lib-near-primitives-core.json near-primitives-core-f3197f93929a33e5 lib-near-primitives-core.json near-rpc-error-core-74e547108c828615 lib-near-rpc-error-core.json near-rpc-error-macro-9ea00aa388a062d7 lib-near-rpc-error-macro.json near-sdk-aa47b4150a0db20c lib-near-sdk.json near-sdk-e28ae476205387b2 lib-near-sdk.json near-sdk-macros-32dd40cef0598203 lib-near-sdk-macros.json near-sys-670f99a1d0b460ce lib-near-sys.json near-sys-f1e9d6ff35a98073 lib-near-sys.json near-vm-errors-2cb6bc1622689247 lib-near-vm-errors.json near-vm-errors-65f4ae23844bf673 lib-near-vm-errors.json near-vm-logic-66403a756b9c2f98 lib-near-vm-logic.json near-vm-logic-9f911ddf1b597e75 lib-near-vm-logic.json num-bigint-117c17448bfd7d93 lib-num-bigint.json num-bigint-913468ce16779663 build-script-build-script-build.json num-bigint-94d01626aa947045 run-build-script-build-script-build.json num-bigint-eb3093c240d52994 lib-num-bigint.json num-integer-27f78d9f45764006 lib-num-integer.json num-integer-83c9ee775e4407a1 lib-num-integer.json num-integer-874fca12ad84f60b run-build-script-build-script-build.json num-integer-cd3511630de519f7 build-script-build-script-build.json num-rational-262cbfee7803970d run-build-script-build-script-build.json num-rational-3cb00400d63c5b4e build-script-build-script-build.json num-rational-45da2fb72ab6b071 
lib-num-rational.json num-rational-e0f98574bf8a52c7 lib-num-rational.json num-traits-66c47baf3b53b996 build-script-build-script-build.json num-traits-7f3bd35b4f32e55d lib-num-traits.json num-traits-89fcd74c92840390 run-build-script-build-script-build.json num-traits-d58f98b6c605014a lib-num-traits.json once_cell-58ce2c539dbc1068 lib-once_cell.json once_cell-f09fca34c233de76 lib-once_cell.json opaque-debug-a33df30b36d2534f lib-opaque-debug.json opaque-debug-f97acab98ac29741 lib-opaque-debug.json parity-scale-codec-379e3d56c9d02457 lib-parity-scale-codec.json parity-scale-codec-541eae735c658947 lib-parity-scale-codec.json parity-scale-codec-derive-abee192d9a73bda6 lib-parity-scale-codec-derive.json parity-secp256k1-16f9b075eda4dd81 lib-secp256k1.json parity-secp256k1-91cc18d5dbf56192 build-script-build-script-build.json parity-secp256k1-b0b7d07851c10bb4 run-build-script-build-script-build.json parity-secp256k1-f7c0304659be21e3 lib-secp256k1.json ppv-lite86-3b7ca02da013c4a7 lib-ppv-lite86.json ppv-lite86-4d5a436bddd9e3c2 lib-ppv-lite86.json primitive-types-258ab971a4518c2c lib-primitive-types.json primitive-types-9bafb1a92629a0e8 lib-primitive-types.json proc-macro-crate-d46ecc8076c72daa lib-proc-macro-crate.json proc-macro-crate-e9231e11a84d4881 lib-proc-macro-crate.json proc-macro2-07c58a00e0604d94 run-build-script-build-script-build.json proc-macro2-6350eb4a30e0bd21 build-script-build-script-build.json proc-macro2-9a37b41b1fcfff72 lib-proc-macro2.json quote-dc3adb69f55702af lib-quote.json radium-580b99a292f53c73 run-build-script-build-script-build.json radium-6b491e8d20c65e47 lib-radium.json radium-8b5f2718ddffaf3f lib-radium.json radium-cf16a20f7432d409 build-script-build-script-build.json rand-4fc2fa8c03c5d379 lib-rand.json rand-9c41ed148fcaac6a lib-rand.json rand-a5c1623852f65e66 lib-rand.json rand-c33b34120f36eef2 lib-rand.json rand_chacha-0ac569f0e51243f3 lib-rand_chacha.json rand_chacha-35cac9f5df1c559e lib-rand_chacha.json rand_chacha-582b7598e2441724 lib-rand_chacha.json rand_chacha-d5f2fc30a4e21f54 lib-rand_chacha.json rand_core-5699ef64d4ada154 lib-rand_core.json rand_core-877fee5d0cb8c2e7 lib-rand_core.json rand_core-addc3d3167a64c80 lib-rand_core.json rand_core-cabe612401a30d9a lib-rand_core.json reed-solomon-erasure-575200bcd1efb4dd lib-reed-solomon-erasure.json reed-solomon-erasure-777de3c72a9d9ac2 build-script-build-script-build.json reed-solomon-erasure-955fef9cb9b42215 run-build-script-build-script-build.json reed-solomon-erasure-c6daa00ff609d8d8 lib-reed-solomon-erasure.json ripemd-3731bef2bdd3f0f9 lib-ripemd.json ripemd-a1e732c59e20890b lib-ripemd.json rustc-hex-d1dc429c6ecb8162 lib-rustc-hex.json rustc-hex-e88220b9b0d28cca lib-rustc-hex.json rustversion-6b4eb35d700939c0 lib-rustversion.json rustversion-9bd986959c2f4e58 build-script-build-script-build.json rustversion-c8d62a3e51d75e70 run-build-script-build-script-build.json ryu-fb8d6b70596f26ca lib-ryu.json ryu-fc7235f674042769 lib-ryu.json serde-121de75e5e47f32f build-script-build-script-build.json serde-3dfb24a81c007c35 lib-serde.json serde-a556ac02361daacf build-script-build-script-build.json serde-b67da5e38157686a lib-serde.json serde-ec562b21e2ab03cd run-build-script-build-script-build.json serde-f038be8eb36902bf run-build-script-build-script-build.json serde-f0fb6d1b805624f3 lib-serde.json serde_derive-0dbd5e4f0564a9e3 build-script-build-script-build.json serde_derive-165f2a45a2206b07 run-build-script-build-script-build.json serde_derive-f246e39be46e314c lib-serde_derive.json serde_json-1a73b2c3614dc119 
build-script-build-script-build.json serde_json-402282015c947d63 lib-serde_json.json serde_json-7156fdd183557d48 lib-serde_json.json serde_json-aac164a28adfa0af run-build-script-build-script-build.json sha2-118c0a9627c3ba4a lib-sha2.json sha2-58f04f8b6aac7864 lib-sha2.json sha2-9e0f5c8a51d23dc1 lib-sha2.json sha2-f6db679707793e28 lib-sha2.json sha3-27b8af0b4e8bf4a9 lib-sha3.json sha3-73360e6392316b68 lib-sha3.json signature-8f22e743bcee6c38 lib-signature.json signature-bd512b397e95cd5e lib-signature.json smallvec-1d5297fdf9e8ed94 lib-smallvec.json smallvec-2ae57e853cbc92a7 lib-smallvec.json smart-default-cfdc9693b52f502d lib-smart-default.json static_assertions-6d7a84aae4b08dfb lib-static_assertions.json static_assertions-df62a6794b74038e lib-static_assertions.json strum-71204c1abc984173 lib-strum.json strum-d6da55ad2fc37f89 lib-strum.json strum_macros-30ac8b86653b273d lib-strum_macros.json subtle-3f8675275b046251 lib-subtle.json subtle-999679ce737cde54 lib-subtle.json syn-666ac124717af69d build-script-build-script-build.json syn-a45e76ce8956c2d1 run-build-script-build-script-build.json syn-e4701eb1a5a22f1b lib-syn.json synstructure-bb07c7d0ed20bfb5 lib-synstructure.json tap-a647024b64f0a495 lib-tap.json tap-f49e81b6ea3702a5 lib-tap.json thiserror-1a9fc17850f231c7 lib-thiserror.json thiserror-a5b57bd3d3b221be lib-thiserror.json thiserror-impl-8b47db78c30aade4 lib-thiserror-impl.json time-285394d25e220858 lib-time.json time-bff62dbca259aa11 lib-time.json toml-c7516a0df0780132 lib-toml.json typenum-06990ca3c69e0bc0 lib-typenum.json typenum-14c211d159c81055 run-build-script-build-script-main.json typenum-d5d797e647ec61d1 lib-typenum.json typenum-ea108c0ae8f4577a build-script-build-script-main.json uint-6da175de8e21fbb0 lib-uint.json uint-ecc4a9ec26ae024d lib-uint.json unicode-ident-53e66da7b391b858 lib-unicode-ident.json unicode-xid-eaabcfe028a1e324 lib-unicode-xid.json version_check-afed7610b43b8d3f lib-version_check.json wee_alloc-6f0efb2b688bef7d lib-wee_alloc.json wee_alloc-97a4eaa49c6e99bc run-build-script-build-script-build.json wee_alloc-9a481a6809867666 lib-wee_alloc.json wee_alloc-b712e2093fdffcfb build-script-build-script-build.json wyz-5e69b8adcb2388cd lib-wyz.json wyz-ab2ef4bc617a0cf2 lib-wyz.json zeroize-a36259fa914faecf lib-zeroize.json zeroize-e4e30353195a5f4a lib-zeroize.json zeroize_derive-b098b8849e61c0ec lib-zeroize_derive.json build crunchy-8bbe7e8e70917188 out lib.rs num-bigint-94d01626aa947045 out radix_bases.rs parity-secp256k1-b0b7d07851c10bb4 out flag_check.c reed-solomon-erasure-955fef9cb9b42215 out table.rs rustversion-c8d62a3e51d75e70 out version.rs typenum-14c211d159c81055 out consts.rs op.rs tests.rs wee_alloc-97a4eaa49c6e99bc out wee_alloc_static_array_backend_size_bytes.txt release .fingerprint Inflector-ab51d28d9bb8a1e1 lib-inflector.json ahash-c94c70c676929396 build-script-build-script-build.json borsh-derive-a0bc776946c227ef lib-borsh-derive.json borsh-derive-internal-6f5806bdf4d7ca88 lib-borsh-derive-internal.json borsh-schema-derive-internal-f7eebe6bfdfcdacd lib-borsh-schema-derive-internal.json crunchy-a59d1a552bd01b4f build-script-build-script-build.json near-sdk-macros-6f300c8a02755eb5 lib-near-sdk-macros.json proc-macro-crate-a3dcd4c9f7604487 lib-proc-macro-crate.json proc-macro2-11f5edd1a31293a2 build-script-build-script-build.json proc-macro2-62bf330c92f1fda2 lib-proc-macro2.json proc-macro2-b486352647e7aab4 run-build-script-build-script-build.json quote-d9b0296627c057ed lib-quote.json serde-0c1c0482f884d120 run-build-script-build-script-build.json 
serde-3aba2866a71f946f build-script-build-script-build.json serde-4ebe6c6573716196 build-script-build-script-build.json serde-a74605ce5f264056 lib-serde.json serde_derive-6b5d416a4d3b5408 lib-serde_derive.json serde_derive-9d36459c5bf2fe2d build-script-build-script-build.json serde_derive-bd4cd45cacd07b1a run-build-script-build-script-build.json serde_json-fdf97b18e9b6f7b2 build-script-build-script-build.json syn-ad8acc3ffa0e68bb build-script-build-script-build.json syn-ef2bf8fb7b53c4d5 run-build-script-build-script-build.json syn-f2e84af0d0d0534e lib-syn.json toml-8fdd18d74102b7ae lib-toml.json unicode-ident-5246cc5de32b1aa2 lib-unicode-ident.json version_check-f46325ceee7e6047 lib-version_check.json wee_alloc-b109b8206de3730d build-script-build-script-build.json wasm32-unknown-unknown release .fingerprint ahash-053ee982ea576fa2 run-build-script-build-script-build.json ahash-f6b2b59a01c60ac5 lib-ahash.json base64-6998116a7dcb83c1 lib-base64.json blood_donation-5b70350872f3d77e lib-blood_donation.json borsh-24d749ad0b35b699 lib-borsh.json bs58-6604fbdc96c06d27 lib-bs58.json byteorder-99f5b66d6a1db1d4 lib-byteorder.json cfg-if-4ca2b295e9614f97 lib-cfg-if.json crunchy-d7f980d381089223 run-build-script-build-script-build.json crunchy-e4cd843bd799bdcb lib-crunchy.json hashbrown-46f975476978fe1f lib-hashbrown.json hex-f27d7c3e22e85d28 lib-hex.json itoa-e49bd9614fc2d291 lib-itoa.json memory_units-cfdb43266b7e98fa lib-memory_units.json near-sdk-0627d142f5e52482 lib-near-sdk.json near-sys-ca566f8da38677d6 lib-near-sys.json once_cell-70d507a516c24ac6 lib-once_cell.json ryu-0c2bf78aac0be487 lib-ryu.json serde-09ba79d65b1b9867 lib-serde.json serde-62e0ca16bb24cb31 run-build-script-build-script-build.json serde_json-0667668a7bf36809 run-build-script-build-script-build.json serde_json-39c3fb74fbb58ea5 lib-serde_json.json static_assertions-2af248c1ff3208a9 lib-static_assertions.json uint-7a7cee271030acd4 lib-uint.json wee_alloc-ad344306902c0435 run-build-script-build-script-build.json wee_alloc-e2a34779dd8e3e36 lib-wee_alloc.json build crunchy-d7f980d381089223 out lib.rs wee_alloc-ad344306902c0435 out wee_alloc_static_array_backend_size_bytes.txt integration-tests rs Cargo.toml src tests.rs ts package.json src main.ava.ts package-lock.json package.json
NEAR BLOOD DONATION APP
==================

The smart contract was written in Rust.

About
===========

The blood donation application makes it easy for patients who are in need of blood donations to make a blood appeal. The application ensures the process is easy, fast, and secure.

A logged-in user can create a blood donation appeal; once the blood drive is created, it is listed on the main platform where other users can see a list of all the available appeals. A willing user can decide to donate to a certain patient; the donor's details are sent to the recipient, who will ensure contact is made. A user who does not have the required blood type can also vote for a blood drive to give it a higher rating, which pushes the blood drive to the top of the list and thereby attracts more donors.

Exploring The Code
==================

1. The main smart contract code lives in `src/lib.rs`.
2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console.

[smart contract]: https://docs.near.org/docs/develop/contracts/overview
[Rust]: https://www.rust-lang.org/
[create-near-app]: https://github.com/near/create-near-app
[correct target]: https://github.com/near/near-sdk-rs#pre-requisites
[cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html

# Blood Donation dApp on NEAR Protocol

The Blood Donation dApp is designed to streamline the blood donation process by connecting blood donors with blood recipients in a decentralized manner. It utilizes the NEAR Protocol, a blockchain platform, to ensure transparency, security, and immutability.

Key features of the Blood Donation dApp include:

- User registration for both blood donors and recipients.
- Ability for blood donors to record their donation history and available blood types.
- Blood recipients can search for donors based on specific blood types and location.
- A trust system to build credibility and encourage active participation.

## Prerequisites

Before running the Blood Donation dApp locally, make sure you have the following prerequisites:

- Node.js: [https://nodejs.org](https://nodejs.org) (version 14 or above)
- NEAR CLI: Installation instructions can be found at [https://docs.near.org/docs/tools/near-cli](https://docs.near.org/docs/tools/near-cli)

## Getting Started

Follow these steps to set up and run the Blood Donation dApp locally:

1. Clone the repository or download the source code as a ZIP file.
2. Open a terminal and navigate to the project directory.
3. Install the project dependencies by running the following command:
   ```
   npm install
   ```
4. Once the installation is complete, start the local development server using the command:
   ```
   npm run dev
   ```
5. The dApp will be accessible in your web browser at `http://localhost:1234`.

## Project Structure

The project structure is organized as follows:

- `src/`: This directory contains the application source code.
- `components/`: This directory contains the React components used in the dApp.
- `pages/`: This directory contains the different pages of the dApp.
- `contracts/`: This directory contains the smart contracts written in Rust.
- `public/`: This directory contains the public assets for the dApp.
- `neardev/`: This directory contains the compiled smart contracts and configuration files.

## Usage

To use the Blood Donation dApp, follow these steps:

1. Access the dApp through your web browser at `http://localhost:1234` or the deployed URL.
2. Register as a blood donor or recipient by providing the required information.
3. Blood donors can record their donation history and specify their available blood types.
4. Blood recipients can search for donors based on specific blood types and location.
5. The trust system helps establish credibility and encourages active participation.

## Deployment

To deploy the Blood Donation dApp to a live network, follow the NEAR Protocol deployment process. The detailed deployment instructions can be found at [https://docs.near.org/docs/develop/deploy/js/deploy-contract](https://docs.near.org/docs/develop/deploy/js/deploy-contract).

## Contributing

Contributions to the Blood Donation dApp are welcome! If you would like to contribute, please follow these guidelines:

1. Fork the repository and create a new branch for your contribution.
2. Make your changes and ensure that the code adheres to the project's existing style.
near_dev-calendar
.github ISSUE_TEMPLATE BOUNTY.yml README.md index.html package.json public vite.svg src App.css assets react.svg index.css vite.config.js
# React + Vite This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules. Currently, two official plugins are available: - [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react/README.md) uses [Babel](https://babeljs.io/) for Fast Refresh - [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react-swc) uses [SWC](https://swc.rs/) for Fast Refresh
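For reference, wiring up one of these plugins happens in `vite.config.js`. The sketch below is an illustrative assumption (not necessarily the exact config shipped in this repository) showing the Babel-based plugin; the SWC variant is a drop-in swap of the import.

```js
// vite.config.js — minimal sketch (hypothetical, not this repo's exact file)
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react'; // or '@vitejs/plugin-react-swc'

export default defineConfig({
  plugins: [react()], // enables Fast Refresh (HMR) for React components
});
```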
mattlockyer_create-account-example
.dependabot config.yml .gitpod.yml .theia settings.json .travis.yml README-Gitpod.md README.md contract Cargo.toml src lib.rs neardev dev-account.env shared-test-staging test.near.json shared-test test.near.json package.json src config.js index.html loader.html main.js test-setup.js test.js
Counter example in Rust
=================================
[![Open in Gitpod!](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/near-examples/rust-counter)

<!-- MAGIC COMMENT: DO NOT DELETE! Everything above this line is hidden on NEAR Examples page -->

# For Create Account Example

Flow steps (a minimal code sketch of this flow follows at the end of this README):

1. generate seed phrase
2. create account with new accountId and new publicKey
3. redirects to wallet
4. on return check the account was created
5. use the secretKey from the credentials var created in step 1 to create a new account instance
6. deploy contract to the new account
7. user is in complete control and has a seed phrase as a recovery method
8. (optional) keep the new secretKey in localStorage so you can recreate the account instance, or prompt user for seed phrase when they want to access their contract

## Description

This contract implements a simple counter backed by storage on the blockchain. The contract in `contract/src/lib.rs` provides methods to increment / decrement the counter, get its current value, or reset it.

Plus and minus buttons increase and decrease the value correspondingly. When button L is toggled, a little light turns on, just for fun. The RS button is for reset. The LE and RE buttons let the robot wink at you.

## To Run

Open in the Gitpod link above or clone the repository.

```
git clone https://github.com/near-examples/rust-counter
```

## Setup

[Or skip to Login if in Gitpod](#login)

Install dependencies:

```
yarn
```

If you don't have `Rust` installed, complete the following 3 steps:

1) Install Rustup by running:

```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

([Taken from official installation guide](https://www.rust-lang.org/tools/install))

2) Configure your current shell by running:

```
source $HOME/.cargo/env
```

3) Add wasm target to your toolchain by running:

```
rustup target add wasm32-unknown-unknown
```

Next, make sure you have `near-cli` by running:

```
near --version
```

If you need to install `near-cli`:

```
npm install near-cli -g
```

## Login

If you do not have a NEAR account, please create one with [NEAR Wallet](https://wallet.testnet.near.org).

In the project root, login with `near-cli` by following the instructions after this command:

```
near login
```

Modify the top of `src/config.js`, changing the `CONTRACT_NAME` to be the NEAR account that was just used to log in.

```javascript
…
const CONTRACT_NAME = 'YOUR_ACCOUNT_NAME_HERE'; /* TODO: fill this in! */
…
```

Start the example!

```
yarn start
```

## To Test

```
cd contract
cargo test -- --nocapture
```

## To Explore

- `contract/src/lib.rs` for the contract code
- `src/index.html` for the front-end HTML
- `src/main.js` for the JavaScript front-end code and how to integrate contracts
- `src/test.js` for the JS tests for the contract

## To Build the Documentation

```
cd contract
cargo doc --no-deps --open
```
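The flow steps at the top of this README are described only in prose, so here is a minimal, hypothetical sketch of steps 1-2 and 5-6 using `near-seed-phrase` and `near-api-js`. Note the assumptions: the repository's own flow goes through the NEAR Wallet redirect for steps 3-4, whereas this sketch creates the account programmatically from a funding (creator) account; the network settings, the `CREATOR_KEY` environment variable, and the wasm path are placeholders, not code from this repository.

```js
// Hypothetical sketch of the create-account flow (not this repo's actual code).
const fs = require('fs');
const { connect, keyStores, KeyPair, utils } = require('near-api-js');
const { generateSeedPhrase } = require('near-seed-phrase');

async function createAndDeploy(creatorId, newAccountId) {
  // Step 1: generate a seed phrase and the key pair derived from it
  const { seedPhrase, publicKey, secretKey } = generateSeedPhrase();

  // Sign as the creator account (secret key supplied via an env var in this sketch)
  const keyStore = new keyStores.InMemoryKeyStore();
  await keyStore.setKey('testnet', creatorId, KeyPair.fromString(process.env.CREATOR_KEY));
  const near = await connect({
    networkId: 'testnet',
    nodeUrl: 'https://rpc.testnet.near.org',
    keyStore,
  });
  const creator = await near.account(creatorId);

  // Step 2: create the account with the new accountId and new publicKey,
  // funding it with an initial balance (in yoctoNEAR)
  await creator.createAccount(newAccountId, publicKey, utils.format.parseNearAmount('1'));

  // Steps 5-6: use the new secretKey to control the account and deploy a contract to it
  await keyStore.setKey('testnet', newAccountId, KeyPair.fromString(secretKey));
  const newAccount = await near.account(newAccountId);
  await newAccount.deployContract(fs.readFileSync('./out/main.wasm')); // placeholder path

  // Step 7: the seed phrase doubles as the user's recovery method
  return seedPhrase;
}
```

In the browser flow this README describes, the wallet redirect (steps 3-4) replaces the env-var key used above, and the new `secretKey` can be kept in `localStorage` as noted in step 8.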
nearprotocol_assemblyscript-bson
.travis.yml README.md assembly decoder.ts encoder.ts index.ts tsconfig.json index.js package-lock.json package.json tests assembly decoder.spec.as.ts encoder.spec.as.ts tsconfig.json decoder.spec.ts encoder.spec.ts types webassembly index.d.ts package.json utils helpers.ts spec.ts
# assemblyscript-bson

BSON encoder / decoder for AssemblyScript, somewhat based on https://github.com/mpaland/bsonfy.

Special thanks to https://github.com/MaxGraey/bignum.wasm for basic unit testing infra for AssemblyScript.

# Limitations

This is developed for use in smart contracts written in AssemblyScript for https://github.com/nearprotocol/nearcore. This imposes the following limitations:

- Only limited data types are supported:
  - arrays
  - objects
  - 32-bit integers
  - strings
  - booleans
  - null
  - `Uint8Array`
- We assume that memory never needs to be deallocated (because these contracts are short-lived).

Note that this mostly just reflects the way it's currently implemented. Contributors are welcome to fix limitations.

# Usage

## Encoding BSON

```ts
// Make sure memory allocator is available
import "allocator/arena";
// Import encoder
import { BSONEncoder } from "path/to/module";

// Create encoder
let encoder = new BSONEncoder();

// Construct necessary object
encoder.pushObject("obj");
encoder.setInteger("int", 10);
encoder.setString("str", "");
encoder.popObject();

// Get serialized data
let bson: Uint8Array = encoder.serialize();
```

## Parsing BSON

```ts
// Make sure memory allocator is available
import "allocator/arena";
// Import decoder
import { BSONDecoder, BSONHandler } from "path/to/module";

// Events need to be received by custom object extending BSONHandler.
// NOTE: All methods are optional to implement.
class MyBSONEventsHandler extends BSONHandler {
  setString(name: string, value: string): void {
    // Handle field
  }

  setBoolean(name: string, value: bool): void {
    // Handle field
  }

  setNull(name: string): void {
    // Handle field
  }

  setInteger(name: string, value: i32): void {
    // Handle field
  }

  setUint8Array(name: string, value: Uint8Array): void {
    // Handle field
  }

  pushArray(name: string): bool {
    // Handle array start
    return true; // true means that nested object needs to be traversed, false otherwise
  }

  popArray(): void {
    // Handle array end
  }

  pushObject(name: string): bool {
    // Handle object start
    return true; // true means that nested object needs to be traversed, false otherwise
  }

  popObject(): void {
    // Handle object end
  }
}

// Create decoder
let decoder = new BSONDecoder<MyBSONEventsHandler>(new MyBSONEventsHandler());

// Let's assume BSON data is available in this variable
let bson: Uint8Array = ...;

// Parse BSON
decoder.deserialize(bson); // This will send events to MyBSONEventsHandler
```
nearprotocol_react-template
.travis.yml README.md assembly main.ts model.ts tsconfig.json dist index.html public gray_near_logo.svg gulpfile.js neardev devkey.json out main.near.ts model.near.ts package-lock.json package.json src config.js frontend.css main.js test.js webpack.config.js
<br /> <br /> <p align="center"> <img src="https://nearprotocol.com/wp-content/themes/near-19/assets/img/logo.svg?t=1553011311" width="240"> </p> <br /> <br /> ## Template for NEAR dapps * Create NEAR dapps with a React frontend 🐲 * We got Webpack! 💥 * We got Gulp! 💦 * We got Corgis? [🐶](https://near.ai/corgis) ## To Run on local node Step 1: Create account for the contract and deploy the contract. ``` npm install near create_account --account_id id near deploy --account_id id ``` Step 2: modify src/config.js line that sets the contractName. Set it to id from step 1. ``` const contractName = "contractId"; /* TODO: fill this in! */ ``` Step 3: Finally, run the command in your terminal. ``` npm start ``` The server that starts is for static assets and by default serves them to localhost:5000. Navigate there in your browser to see the app running! ## To Test ``` npm install npm run-script build npm test ``` ## To Explore - `assembly/main.ts` for the contract code - `src/index.html` for the front-end HTML - `src/main.js` for the JavaScript front-end code and how to integrate contracts - `src/test.js` for the JS tests for the contract - `src/frontend.jsx` for the first react component - `src/frontend.css` for styles Note: that these files can all be moved around and customized. The point of this template is to get you up to speed as quickly as possible without needing to hastle with all the config.
jester91_SnapCash
.github workflows node.js.yml rust.yml .gitpod.yml README.md actions-runner bin Runner.Common.deps.json Runner.Listener.deps.json Runner.Listener.runtimeconfig.json Runner.PluginHost.deps.json Runner.PluginHost.runtimeconfig.json Runner.Plugins.deps.json Runner.Sdk.deps.json Runner.Worker.deps.json Runner.Worker.runtimeconfig.json Sdk.deps.json checkScripts downloadCert.js makeWebRequest.js hashFiles index.js installdependencies.sh macos-run-invoker.js runsvc.sh config.cmd run.cmd snapcash _PipelineMapping jester91 SnapCash PipelineFolder.json _tool node 14.19.0 x64 CHANGELOG.md README.md corepack.cmd install_tools.bat node_modules corepack LICENSE.md README.md dist corepack.js npm.js npx.js pnpm.js pnpx.js vcc.js yarn.js yarnpkg.js package.json shims corepack.cmd corepack.ps1 nodewin corepack.cmd corepack.ps1 npm.cmd npm.ps1 npx.cmd npx.ps1 pnpm.cmd pnpm.ps1 pnpx.cmd pnpx.ps1 vcc.cmd vcc.ps1 yarn.cmd yarn.ps1 yarnpkg.cmd yarnpkg.ps1 npm.cmd npm.ps1 npx.cmd npx.ps1 pnpm.cmd pnpm.ps1 pnpx.cmd pnpx.ps1 yarn.cmd yarn.ps1 yarnpkg.cmd yarnpkg.ps1 npm .licensee.json .travis.yml CHANGELOG.md CONTRIBUTING.md README.md bin node-gyp-bin node-gyp.cmd npm-cli.js npm.cmd npx-cli.js npx.cmd changelogs CHANGELOG-1.md CHANGELOG-2.md CHANGELOG-3.md CHANGELOG-4.md CHANGELOG-5.md docs content cli-commands npm-access.md npm-adduser.md npm-audit.md npm-bin.md npm-bugs.md npm-build.md npm-bundle.md npm-cache.md npm-ci.md npm-completion.md npm-config.md npm-dedupe.md npm-deprecate.md npm-dist-tag.md npm-docs.md npm-doctor.md npm-edit.md npm-explore.md npm-fund.md npm-help-search.md npm-help.md npm-hook.md npm-init.md npm-install-ci-test.md npm-install-test.md npm-install.md npm-link.md npm-logout.md npm-ls.md npm-org.md npm-outdated.md npm-owner.md npm-pack.md npm-ping.md npm-prefix.md npm-profile.md npm-prune.md npm-publish.md npm-rebuild.md npm-repo.md npm-restart.md npm-root.md npm-run-script.md npm-search.md npm-shrinkwrap.md npm-star.md npm-stars.md npm-start.md npm-stop.md npm-team.md npm-test.md npm-token.md npm-uninstall.md npm-unpublish.md npm-update.md npm-version.md npm-view.md npm-whoami.md npm.md configuring-npm folders.md install.md npmrc.md package-json.md package-lock-json.md package-locks.md shrinkwrap-json.md using-npm config.md developers.md disputes.md orgs.md registry.md removal.md scope.md scripts.md semver.md gatsby-browser.js gatsby-config.js gatsby-node.js gatsby-ssr.js package-lock.json package.json public cli-commands npm-access index.html npm-adduser index.html npm-audit index.html npm-bin index.html npm-bugs index.html npm-build index.html npm-bundle index.html npm-cache index.html npm-ci index.html npm-completion index.html npm-config index.html npm-dedupe index.html npm-deprecate index.html npm-dist-tag index.html npm-docs index.html npm-doctor index.html npm-edit index.html npm-explore index.html npm-fund index.html npm-help-search index.html npm-help index.html npm-hook index.html npm-init index.html npm-install-ci-test index.html npm-install-test index.html npm-install index.html npm-link index.html npm-logout index.html npm-ls index.html npm-org index.html npm-outdated index.html npm-owner index.html npm-pack index.html npm-ping index.html npm-prefix index.html npm-profile index.html npm-prune index.html npm-publish index.html npm-rebuild index.html npm-repo index.html npm-restart index.html npm-root index.html npm-run-script index.html npm-search index.html npm-shrinkwrap index.html npm-star index.html npm-stars index.html npm-start index.html npm-stop index.html npm-team 
index.html npm-test index.html npm-token index.html npm-uninstall index.html npm-unpublish index.html npm-update index.html npm-version index.html npm-view index.html npm-whoami index.html npm index.html configuring-npm folders index.html install index.html npmrc index.html package-json index.html package-lock-json index.html package-locks index.html shrinkwrap-json index.html index.html static d 2215187023.json 2417117884.json network-icon-f659855f70bb0e12addd96250807c241.svg styles.e93b5499b63484750fba.css using-npm config index.html developers index.html disputes index.html orgs index.html registry index.html removal index.html scope index.html scripts index.html semver index.html src components Accordion.js Button.js DocLinks.js FoundTypo.js MobileSidebar.js Sidebar.js home DarkBlock.js FeatureCard.js Features.js Footer.js Terminal.js Windows.js cubes.js hero.js layout.js links.js navbar.js scripts.js seo.js images background-boxes.svg background-cubes.svg background-rectangles.svg bracket.svg cli-logo.svg down-carrot.svg hamburger-close.svg hamburger.svg manager-icon.svg network-icon.svg orange-cube.svg pink-gradient-cube.svg purple-cube.svg purple-gradient-cube.svg red-cube.svg right-shadow-box.svg terminal-icon.svg test-icon.svg up-carrot.svg x.svg main.css pages 404.js index.js templates Page.js theme.js lib access.js adduser.js audit.js auth legacy.js oauth.js saml.js sso.js bin.js bugs.js build.js cache.js ci.js completion.js config.js config bin-links.js clear-credentials-by-uri.js cmd-list.js core.js defaults.js figgy-config.js gentle-fs.js get-credentials-by-uri.js lifecycle.js load-cafile.js load-prefix.js nerf-dart.js set-credentials-by-uri.js set-user.js dedupe.js deprecate.js dist-tag.js docs.js doctor.js doctor check-files-permission.js check-ping.js get-git-path.js get-latest-nodejs-version.js get-latest-npm-version.js verify-cached-files.js edit.js explore.js fetch-package-metadata.js fetch-package-metadata.md fund.js get.js help-search.js help.js hook.js init.js install-ci-test.js install-test.js install.js install access-error.js action build.js extract-worker.js extract.js fetch.js finalize.js global-install.js global-link.js install.js move.js postinstall.js preinstall.js prepare.js refresh-package-json.js remove.js unbuild.js actions.js and-add-parent-to-errors.js and-finish-tracker.js and-ignore-errors.js audit.js check-permissions.js copy-tree.js decompose-actions.js deps.js diff-trees.js exists.js flatten-tree.js fund.js get-requested.js has-modern-meta.js inflate-bundled.js inflate-shrinkwrap.js is-dev-dep.js is-extraneous.js is-fs-access-available.js is-only-dev.js is-only-optional.js is-opt-dep.js is-prod-dep.js module-staging-path.js mutate-into-logical-tree.js node.js read-shrinkwrap.js realize-shrinkwrap-specifier.js report-optional-failure.js save.js update-package-json.js validate-args.js validate-tree.js writable.js link.js logout.js ls.js npm.js org.js outdated.js owner.js pack.js ping.js prefix.js profile.js prune.js publish.js rebuild.js repo.js restart.js root.js run-script.js search.js search all-package-metadata.js all-package-search.js format-package-stream.js package-filter.js set.js shrinkwrap.js star.js stars.js start.js stop.js substack.js team.js test.js token.js unbuild.js uninstall.js unpublish.js update.js utils ansi-trim.js cache-file.js child-path.js completion.sh completion file-completion.js installed-deep.js installed-shallow.js correct-mkdir.js deep-sort-object.js depr-check.js did-you-mean.js error-handler.js error-message.js 
escape-arg.js escape-exec-path.js funding.js gently-rm.js git.js gunzip-maybe.js is-registry.js is-windows-bash.js is-windows-shell.js is-windows.js lifecycle-cmd.js lifecycle.js link.js locker.js metrics-launch.js metrics.js module-name.js move.js no-progress-while-running.js open-url.js otplease.js output.js package-id.js parse-json.js perf.js pick-manifest-from-registry-metadata.js pulse-till-done.js read-local-package.js read-user-info.js replace-info.js save-stack.js spawn.js temp-filename.js umask.js unix-format-path.js unsupported.js usage.js warn-deprecated.js version.js view.js visnup.js whoami.js xmas.js make.bat node_modules JSONStream .travis.yml bin.js examples all_docs.js index.js package.json abbrev README.md abbrev.js package.json agent-base .travis.yml History.md README.md index.d.ts index.js package.json patch-core.js agentkeepalive History.md README.md browser.js index.d.ts index.js lib _http_agent.js agent.js https_agent.js package.json ansi-align CHANGELOG.md README.md index.js package.json ansi-regex index.js package.json readme.md ansi-styles index.js package.json readme.md ansicolors README.md ansicolors.js package.json ansistyles README.md ansistyles.js package.json aproba CHANGELOG.md README.md index.js package.json archy .travis.yml examples beep.js multi_line.js index.js package.json are-we-there-yet CHANGES.md README.md index.js node_modules readable-stream .travis.yml GOVERNANCE.md README.md doc wg-meetings 2015-01-30.md duplex-browser.js duplex.js lib _stream_duplex.js _stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js internal streams BufferList.js destroy.js stream-browser.js stream.js package.json passthrough.js readable-browser.js readable.js transform.js writable-browser.js writable.js string_decoder .travis.yml README.md lib string_decoder.js package.json package.json tracker-base.js tracker-group.js tracker-stream.js tracker.js asap CHANGES.md LICENSE.md README.md asap.js browser-asap.js browser-raw.js package.json raw.js asn1 README.md lib ber errors.js index.js reader.js types.js writer.js index.js package.json assert-plus CHANGES.md README.md assert.js package.json asynckit README.md bench.js index.js lib abort.js async.js defer.js iterate.js readable_asynckit.js readable_parallel.js readable_serial.js readable_serial_ordered.js state.js streamify.js terminator.js package.json parallel.js serial.js serialOrdered.js stream.js aws-sign2 README.md index.js package.json aws4 .travis.yml README.md aws4.js lru.js package.json balanced-match LICENSE.md README.md index.js package.json bcrypt-pbkdf README.md index.js package.json bin-links CHANGELOG.md README.md index.js package.json bluebird README.md changelog.md js browser bluebird.core.js bluebird.core.min.js bluebird.js bluebird.min.js release any.js assert.js async.js bind.js bluebird.js call_get.js cancel.js catch_filter.js context.js debuggability.js direct_resolve.js each.js errors.js es5.js filter.js finally.js generators.js join.js map.js method.js nodeback.js nodeify.js promise.js promise_array.js promisify.js props.js queue.js race.js reduce.js schedule.js settle.js some.js synchronous_inspection.js thenables.js timers.js using.js util.js package.json boxen index.js package.json readme.md brace-expansion README.md index.js package.json buffer-from index.js package.json readme.md test.js builtins .travis.yml History.md Readme.md builtins.json package.json test.js byline README.md lib byline.js package.json byte-size README.md dist index.js package.json cacache 
CHANGELOG.md LICENSE.md README.es.md README.md en.js es.js get.js index.js lib content path.js read.js rm.js write.js entry-index.js memoization.js util fix-owner.js hash-to-segments.js move-file.js tmp.js y.js verify.js locales en.js en.json es.js es.json ls.js package.json put.js rm.js verify.js call-limit CHANGELOG.md README.md call-limit.js package.json camelcase index.js package.json readme.md capture-stack-trace index.js package.json readme.md caseless README.md index.js package.json test.js chalk index.js package.json readme.md templates.js types index.d.ts chownr README.md chownr.js package.json ci-info CHANGELOG.md README.md index.js package.json vendors.json cidr-regex README.md index.js package.json cli-boxes boxes.json index.js package.json readme.md cli-columns README.md color.js index.js package.json test.js cli-table3 CHANGELOG.md README.md index.d.ts index.js package.json src cell.js layout-manager.js table.js utils.js cliui CHANGELOG.md LICENSE.txt README.md index.js node_modules ansi-regex index.js package.json readme.md is-fullwidth-code-point index.js package.json readme.md string-width index.js package.json readme.md strip-ansi index.d.ts index.js package.json readme.md package.json clone README.md clone.js package.json cmd-shim README.md index.js lib to-batch-syntax.js package.json code-point-at index.js package.json readme.md color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name .eslintrc.json README.md index.js package.json test.js colors README.md examples normal-usage.js safe-string.js index.d.ts lib colors.js custom trap.js zalgo.js extendStringPrototype.js index.js maps america.js rainbow.js random.js zebra.js styles.js system has-flag.js supports-colors.js package.json safe.d.ts safe.js themes generic-logging.js columnify Readme.md columnify.js index.js package.json utils.js width.js combined-stream Readme.md lib combined_stream.js defer.js package.json concat-map .travis.yml example map.js index.js package.json concat-stream index.js node_modules readable-stream .travis.yml GOVERNANCE.md README.md doc wg-meetings 2015-01-30.md duplex-browser.js duplex.js lib _stream_duplex.js _stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js internal streams BufferList.js destroy.js stream-browser.js stream.js package.json passthrough.js readable-browser.js readable.js transform.js writable-browser.js writable.js string_decoder .travis.yml README.md lib string_decoder.js package.json package.json readme.md config-chain index.js package.json configstore index.js package.json readme.md console-control-strings README.md index.js package.json copy-concurrently README.md copy.js is-windows.js node_modules aproba README.md index.js package.json iferr README.md index.coffee index.js package.json package.json core-util-is README.md lib util.js package.json test.js create-error-class index.js package.json readme.md cross-spawn CHANGELOG.md README.md index.js lib enoent.js parse.js util escapeArgument.js escapeCommand.js hasEmptyArgumentBug.js readShebang.js resolveCommand.js node_modules lru-cache README.md index.js package.json yallist README.md iterator.js package.json yallist.js package.json crypto-random-string index.js package.json readme.md cyclist README.md index.js package.json dashdash CHANGES.md LICENSE.txt README.md lib dashdash.js package.json debug .coveralls.yml .travis.yml CHANGELOG.md README.md karma.conf.js node.js node_modules ms index.js license.md package.json readme.md package.json src 
browser.js debug.js index.js node.js debuglog README.md debuglog.js package.json decamelize index.js package.json readme.md decode-uri-component index.js package.json readme.md deep-extend CHANGELOG.md README.md index.js lib deep-extend.js package.json defaults README.md index.js package.json test.js define-properties .jscs.json .travis.yml CHANGELOG.md README.md index.js package.json delayed-stream Readme.md lib delayed_stream.js package.json delegates History.md Readme.md index.js package.json detect-indent index.js package.json readme.md detect-newline index.js package.json readme.md dezalgo .travis.yml README.md dezalgo.js package.json dot-prop index.js package.json readme.md dotenv CHANGELOG.md README.md appveyor.yml config.js lib main.js package.json duplexer3 LICENSE.md README.md index.js package.json duplexify .travis.yml README.md example.js index.js node_modules readable-stream .travis.yml GOVERNANCE.md README.md doc wg-meetings 2015-01-30.md duplex-browser.js duplex.js lib _stream_duplex.js _stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js internal streams BufferList.js destroy.js stream-browser.js stream.js package.json passthrough.js readable-browser.js readable.js transform.js writable-browser.js writable.js string_decoder .travis.yml README.md lib string_decoder.js package.json package.json test.js ecc-jsbn README.md index.js lib ec.js sec.js package.json test.js editor example beep.json edit.js index.js package.json emoji-regex LICENSE-MIT.txt README.md es2015 index.js text.js index.d.ts index.js package.json text.js encoding .travis.yml README.md lib encoding.js iconv-loader.js package.json end-of-stream README.md index.js package.json env-paths index.d.ts index.js package.json readme.md err-code .eslintrc.json .travis.yml README.md bower.json index.js index.umd.js package.json errno .travis.yml README.md build.js cli.js custom.js errno.js package.json test.js es-abstract .jscs.json .travis.yml CHANGELOG.md GetIntrinsic.js README.md es2015.js es2016.js es2017.js es5.js es6.js es7.js helpers assign.js isFinite.js isNaN.js isPrimitive.js mod.js sign.js index.js operations 2015.js 2016.js 2017.js es5.js package.json es-to-primitive .jscs.json .travis.yml CHANGELOG.md README.md es2015.js es5.js es6.js helpers isPrimitive.js index.js package.json es6-promise CHANGELOG.md README.md auto.js dist es6-promise.auto.js es6-promise.auto.min.js es6-promise.js es6-promise.min.js es6-promise.d.ts lib es6-promise.auto.js es6-promise.js es6-promise -internal.js asap.js enumerator.js polyfill.js promise.js promise all.js race.js reject.js resolve.js then.js utils.js package.json es6-promisify README.md dist promise.js promisify.js package.json escape-string-regexp index.js package.json readme.md execa index.js lib errname.js stdio.js node_modules get-stream buffer-stream.js index.js package.json readme.md package.json readme.md extend .jscs.json .travis.yml CHANGELOG.md README.md component.json index.js package.json extsprintf README.md lib extsprintf.js package.json fast-json-stable-stringify .eslintrc.yml .travis.yml README.md benchmark index.js test.json example key_cmp.js nested.js str.js value_cmp.js index.js package.json figgy-pudding CHANGELOG.md LICENSE.md README.md index.js package.json find-npm-prefix README.md find-prefix.js package.json flush-write-stream .travis.yml README.md example.js index.js node_modules readable-stream .travis.yml GOVERNANCE.md README.md doc wg-meetings 2015-01-30.md duplex-browser.js duplex.js lib _stream_duplex.js 
_stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js internal streams BufferList.js destroy.js stream-browser.js stream.js package.json passthrough.js readable-browser.js readable.js transform.js writable-browser.js writable.js string_decoder .travis.yml README.md lib string_decoder.js package.json package.json test.js forever-agent README.md index.js package.json form-data README.md lib browser.js form_data.js populate.js package.json from2 .travis.yml LICENSE.md README.md index.js node_modules readable-stream .travis.yml GOVERNANCE.md README.md doc wg-meetings 2015-01-30.md duplex-browser.js duplex.js lib _stream_duplex.js _stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js internal streams BufferList.js destroy.js stream-browser.js stream.js package.json passthrough.js readable-browser.js readable.js transform.js writable-browser.js writable.js string_decoder .travis.yml README.md lib string_decoder.js package.json package.json test.js fs-minipass README.md index.js node_modules minipass README.md index.js package.json package.json fs-vacuum .travis.yml README.md package.json vacuum.js fs-write-stream-atomic .travis.yml README.md index.js node_modules iferr README.md index.coffee index.js package.json readable-stream .travis.yml GOVERNANCE.md README.md doc wg-meetings 2015-01-30.md duplex-browser.js duplex.js lib _stream_duplex.js _stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js internal streams BufferList.js destroy.js stream-browser.js stream.js package.json passthrough.js readable-browser.js readable.js transform.js writable-browser.js writable.js string_decoder .travis.yml README.md lib string_decoder.js package.json package.json fs.realpath README.md index.js old.js package.json function-bind .jscs.json .travis.yml README.md implementation.js index.js package.json gauge CHANGELOG.md README.md base-theme.js error.js has-color.js index.js node_modules aproba README.md index.js package.json string-width index.js package.json readme.md package.json plumbing.js process.js progress-bar.js render-template.js set-immediate.js set-interval.js spin.js template-item.js theme-set.js themes.js wide-truncate.js genfun CHANGELOG.md README.md lib genfun.js method.js role.js util.js package.json gentle-fs CHANGELOG.md README.md index.js lib bin-link.js chown.js link.js mkdir.js rm.js node_modules aproba README.md index.js package.json iferr README.md index.coffee index.js package.json package.json get-caller-file LICENSE.md README.md index.d.ts index.js package.json get-stream buffer-stream.js index.js package.json readme.md getpass .travis.yml README.md lib index.js package.json glob README.md changelog.md common.js glob.js package.json sync.js global-dirs index.js package.json readme.md got index.js node_modules get-stream buffer-stream.js index.js package.json readme.md package.json readme.md graceful-fs README.md clone.js graceful-fs.js legacy-streams.js package.json polyfills.js har-schema README.md lib afterRequest.json beforeRequest.json browser.json cache.json content.json cookie.json creator.json entry.json har.json header.json index.js log.json page.json pageTimings.json postData.json query.json request.json response.json timings.json package.json har-validator README.md lib async.js error.js promise.js node_modules ajv .tonic_example.js README.md dist ajv.bundle.js ajv.min.js lib ajv.d.ts ajv.js cache.js compile async.js equal.js error_classes.js formats.js index.js resolve.js rules.js 
schema_obj.js ucs2length.js util.js data.js definition_schema.js dotjs README.md _limit.js _limitItems.js _limitLength.js _limitProperties.js allOf.js anyOf.js comment.js const.js contains.js custom.js dependencies.js enum.js format.js if.js index.js items.js multipleOf.js not.js oneOf.js pattern.js properties.js propertyNames.js ref.js required.js uniqueItems.js validate.js keyword.js refs data.json json-schema-draft-04.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json package.json scripts .eslintrc.yml bundle.js compile-dots.js fast-deep-equal README.md es6 index.d.ts index.js react.d.ts react.js index.d.ts index.js package.json react.d.ts react.js json-schema-traverse .eslintrc.yml .travis.yml README.md index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js package.json has-flag index.js package.json readme.md has-symbols .travis.yml CHANGELOG.md README.md index.js package.json shams.js has-unicode README.md index.js package.json has README.md package.json src index.js hosted-git-info CHANGELOG.md README.md git-host-info.js git-host.js index.js package.json http-cache-semantics README.md node4 index.js package.json http-proxy-agent .travis.yml History.md README.md index.js package.json http-signature CHANGES.md README.md http_signing.md lib index.js parser.js signer.js utils.js verify.js package.json https-proxy-agent .eslintrc.js History.md README.md index.d.ts index.js package.json humanize-ms History.md README.md index.js package.json iconv-lite .travis.yml Changelog.md README.md encodings dbcs-codec.js dbcs-data.js index.js internal.js sbcs-codec.js sbcs-data-generated.js sbcs-data.js tables big5-added.json cp936.json cp949.json cp950.json eucjp.json gb18030-ranges.json gbk-added.json shiftjis.json utf16.js utf7.js lib bom-handling.js extend-node.js index.d.ts index.js streams.js package.json iferr README.md iferr.js package.json ignore-walk README.md index.js package.json import-lazy index.js package.json readme.md imurmurhash README.md imurmurhash.js imurmurhash.min.js package.json infer-owner README.md index.js package.json inflight README.md inflight.js package.json inherits README.md inherits.js inherits_browser.js package.json ini README.md ini.js package.json init-package-json CHANGELOG.md README.md default-input.js init-package-json.js package.json ip-regex index.js package.json readme.md ip .travis.yml README.md lib ip.js package.json is-callable .istanbul.yml .jscs.json .travis.yml CHANGELOG.md README.md index.js package.json test.js is-ci README.md bin.js index.js node_modules ci-info CHANGELOG.md README.md index.js package.json vendors.json package.json is-cidr README.md index.js package.json is-date-object .jscs.json .travis.yml CHANGELOG.md README.md index.js package.json test.js is-fullwidth-code-point index.js package.json readme.md is-installed-globally index.js package.json readme.md is-npm index.js package.json readme.md is-obj index.js package.json readme.md is-path-inside index.js package.json readme.md is-redirect index.js package.json readme.md is-regex .jscs.json .travis.yml CHANGELOG.md README.md index.js package.json test.js is-retry-allowed index.js package.json readme.md is-stream index.js package.json readme.md is-symbol .jscs.json .travis.yml CHANGELOG.md README.md index.js package.json is-typedarray LICENSE.md README.md index.js package.json test.js isarray .travis.yml README.md component.json index.js package.json test.js isexe README.md index.js mode.js package.json windows.js isstream .travis.yml 
LICENSE.md README.md isstream.js package.json test.js jsbn README.md example.html example.js index.js package.json json-parse-better-errors CHANGELOG.md LICENSE.md README.md index.js package.json json-schema README.md lib links.js validate.js package.json json-stringify-safe CHANGELOG.md README.md package.json stringify.js jsonparse bench.js examples twitterfeed.js jsonparse.js package.json samplejson basic.json basic2.json jsprim CHANGES.md CONTRIBUTING.md README.md lib jsprim.js package.json latest-version index.js package.json readme.md lazy-property README.md component.json lazyProperty.js package.json libcipm CHANGELOG.md LICENSE.md README.md index.js lib config npm-config.js extract.js silentlog.js worker.js package.json libnpm CHANGELOG.md LICENSE.md README.md access.js adduser.js config.js extract.js fetch.js get-prefix.js hook.js index.js link-bin.js log.js logical-tree.js login.js manifest.js org.js package.json packument.js parse-arg.js profile.js publish.js read-json.js run-script.js search.js stringify-package.js tarball.js team.js unpublish.js verify-lock.js libnpmaccess .travis.yml CHANGELOG.md README.md appveyor.yml index.js package.json libnpmconfig CHANGELOG.md README.md index.js node_modules find-up index.js package.json readme.md locate-path index.js package.json readme.md p-limit index.d.ts index.js package.json readme.md p-locate index.js package.json readme.md p-try index.d.ts index.js package.json readme.md package.json libnpmhook CHANGELOG.md LICENSE.md README.md index.js package.json libnpmorg .travis.yml CHANGELOG.md README.md appveyor.yml index.js package.json libnpmpublish .travis.yml CHANGELOG.md README.md appveyor.yml index.js package.json publish.js unpublish.js libnpmsearch .travis.yml CHANGELOG.md README.md appveyor.yml index.js package.json libnpmteam .travis.yml CHANGELOG.md README.md appveyor.yml index.js package.json libnpx CHANGELOG.md LICENSE.md README.md auto-fallback.js child.js get-prefix.js index.js locales ca.json cs.json de.json en.json es.json fr.json id.json it.json ja.json ko.json nb.json nl.json nn.json no.json pl.json pt_BR.json ro.json ru.json sr.json tr.json uk.json zh_CN.json zh_TW.json package.json parse-args.js util.js y.js lock-verify README.md index.js package.json lockfile .travis.yml CHANGELOG.md README.md gen-changelog.sh lockfile.js package.json sockets.md speedtest.js lodash._baseindexof LICENSE.txt README.md index.js package.json lodash._baseuniq README.md index.js package.json lodash._bindcallback LICENSE.txt README.md index.js package.json lodash._cacheindexof LICENSE.txt README.md index.js package.json lodash._createcache README.md index.js package.json lodash._createset README.md index.js package.json lodash._getnative README.md index.js package.json lodash._root README.md index.js package.json lodash.clonedeep README.md index.js package.json lodash.restparam LICENSE.txt README.md index.js package.json lodash.union README.md index.js package.json lodash.uniq README.md index.js package.json lodash.without README.md index.js package.json lowercase-keys index.js package.json readme.md lru-cache README.md index.js package.json make-dir index.js package.json readme.md make-fetch-happen CHANGELOG.md README.md agent.js cache.js index.js package.json warning.js meant .github workflows ci.yml CHANGELOG.md README.md index.js package.json test.js mime-db HISTORY.md README.md db.json index.js package.json mime-types HISTORY.md README.md index.js package.json minimatch README.md minimatch.js package.json minimist .travis.yml example 
parse.js index.js package.json minizlib README.md constants.js index.js node_modules minipass README.md index.js package.json package.json mississippi changelog.md index.js package.json readme.md mkdirp bin cmd.js usage.txt index.js node_modules minimist .travis.yml example parse.js index.js package.json package.json move-concurrently README.md move.js node_modules aproba README.md index.js package.json package.json ms index.js license.md package.json readme.md mute-stream .travis.yml README.md coverage lcov-report __root__ index.html mute.js.html base.css index.html prettify.css prettify.js sorter.js mute.js package.json node-fetch-npm CHANGELOG.md LICENSE.md README.md package.json src blob.js body.js common.js fetch-error.js headers.js index.js request.js response.js node-gyp .github ISSUE_TEMPLATE.md PULL_REQUEST_TEMPLATE.md workflows Python_tests.yml .travis.yml CHANGELOG.md README.md bin node-gyp.js gyp PRESUBMIT.py gyp.bat gyp_main.py pylib gyp MSVSNew.py MSVSProject.py MSVSSettings.py MSVSSettings_test.py MSVSToolFile.py MSVSUserFile.py MSVSUtil.py MSVSVersion.py __init__.py common.py common_test.py easy_xml.py easy_xml_test.py flock_tool.py generator __init__.py analyzer.py android.py cmake.py compile_commands_json.py dump_dependency_json.py eclipse.py gypd.py gypsh.py make.py msvs.py msvs_test.py ninja.py ninja_test.py xcode.py xcode_test.py input.py input_test.py mac_tool.py msvs_emulation.py ninja_syntax.py simple_copy.py win_tool.py xcode_emulation.py xcode_ninja.py xcodeproj_file.py xml_fix.py samples samples.bat setup.py tools emacs run-unit-tests.sh graphviz.py pretty_gyp.py pretty_sln.py pretty_vcproj.py lib Find-VisualStudio.cs build.js clean.js configure.js find-node-directory.js find-python.js find-visualstudio.js install.js list.js node-gyp.js process-release.js proxy.js rebuild.js remove.js util.js macOS_Catalina.md package.json nopt CHANGELOG.md README.md bin nopt.js lib nopt.js package.json normalize-package-data README.md lib extract_description.js fixer.js make_warning.js normalize.js safe_format.js typos.json warning_messages.json node_modules resolve .travis.yml CHANGELOG.md appveyor.yml example async.js sync.js index.js lib async.js caller.js core.js core.json node-modules-paths.js normalize-options.js sync.js package.json package.json npm-audit-report CHANGELOG.md README.md index.js lib utils.js package.json reporters detail.js install.js json.js parseable.js quiet.js npm-bundled README.md index.js package.json npm-cache-filename README.md index.js package.json test.js npm-install-checks CHANGELOG.md README.md index.js package.json npm-lifecycle CHANGELOG.md README.md index.js lib spawn.js node-gyp-bin node-gyp.cmd package.json npm-logical-tree CHANGELOG.md LICENSE.md README.md index.js package.json npm-normalize-package-bin .github settings.yml README.md index.js package-lock.json package.json npm-package-arg CHANGELOG.md README.md npa.js package.json npm-packlist README.md index.js package.json npm-pick-manifest CHANGELOG.md LICENSE.md README.md index.js package.json npm-profile CHANGELOG.md README.md index.js package.json npm-registry-fetch CHANGELOG.md LICENSE.md README.md auth.js check-response.js config.js errors.js index.js node_modules safe-buffer README.md index.d.ts index.js package.json package.json silentlog.js npm-run-path index.js package.json readme.md npm-user-validate README.md npm-user-validate.js package.json npmlog CHANGELOG.md README.md log.js package.json number-is-nan index.js package.json readme.md oauth-sign README.md index.js 
package.json object-assign index.js package.json readme.md object-keys .jscs.json .travis.yml CHANGELOG.md README.md index.js isArguments.js package.json object.getownpropertydescriptors .jscs.json .travis.yml CHANGELOG.md README.md implementation.js index.js package.json polyfill.js shim.js once README.md once.js package.json opener LICENSE.txt README.md bin opener-bin.js lib opener.js package.json os-homedir index.js package.json readme.md os-tmpdir index.js package.json readme.md osenv README.md osenv.js package.json p-finally index.js package.json readme.md package-json index.js package.json readme.md pacote CHANGELOG.md README.md extract.js index.js lib extract-stream.js fetch.js fetchers alias.js directory.js file.js git.js hosted.js range.js registry index.js manifest.js packument.js tarball.js remote.js tag.js version.js finalize-manifest.js util cache-key.js finished.js git.js opt-check.js pack-dir.js proclog.js read-json.js with-tarball-stream.js manifest.js node_modules minipass README.md index.js package.json package.json packument.js prefetch.js tarball.js parallel-transform README.md index.js node_modules readable-stream .travis.yml GOVERNANCE.md README.md doc wg-meetings 2015-01-30.md duplex-browser.js duplex.js lib _stream_duplex.js _stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js internal streams BufferList.js destroy.js stream-browser.js stream.js package.json passthrough.js readable-browser.js readable.js transform.js writable-browser.js writable.js string_decoder .travis.yml README.md lib string_decoder.js package.json package.json path-exists index.js package.json readme.md path-is-absolute index.js package.json readme.md path-is-inside LICENSE.txt lib path-is-inside.js package.json path-key index.js package.json readme.md path-parse README.md index.js package.json performance-now .travis.yml README.md lib performance-now.js license.txt package.json src index.d.ts performance-now.coffee pify index.js package.json readme.md prepend-http index.js package.json readme.md process-nextick-args index.js license.md package.json readme.md promise-inflight README.md inflight.js package.json promise-retry .travis.yml README.md index.js node_modules retry README.md example dns.js stop.js index.js lib retry.js retry_operation.js package.json package.json promzard README.md example buffer.js index.js npm-init README.md init-input.js init.js package.json substack-input.js package.json promzard.js proto-list README.md package.json proto-list.js protoduck CHANGELOG.md README.md index.js package.json prr .travis.yml LICENSE.md README.md package.json prr.js test.js pseudomap README.md map.js package.json pseudomap.js psl .travis.yml README.md data rules.json dist psl.js psl.min.js index.js karma.conf.js package.json pump .travis.yml README.md index.js package.json test-browser.js test-node.js pumpify .travis.yml README.md index.js node_modules pump .travis.yml README.md index.js package.json test-browser.js test-node.js package.json test.js punycode LICENSE-MIT.txt README.md package.json punycode.js qrcode-terminal .travis.yml README.md bin qrcode-terminal.js example basic.js callback.js small-qrcode.js lib main.js package.json vendor QRCode QR8bitByte.js QRBitBuffer.js QRErrorCorrectLevel.js QRMaskPattern.js QRMath.js QRMode.js QRPolynomial.js QRRSBlock.js QRUtil.js index.js qs CHANGELOG.md README.md dist qs.js lib formats.js index.js parse.js stringify.js utils.js package.json query-string index.d.ts index.js package.json readme.md qw README.md 
package.json qw.js rc README.md browser.js cli.js index.js lib utils.js package.json read-cmd-shim README.md index.js package.json read-installed .travis.yml README.md package.json read-installed.js read-package-json CHANGELOG.md README.md package.json read-json.js read-package-tree README.md package.json realpath.js rpt.js read README.md lib read.js package.json readable-stream GOVERNANCE.md README.md errors-browser.js errors.js experimentalWarning.js lib _stream_duplex.js _stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js internal streams async_iterator.js buffer_list.js destroy.js end-of-stream.js from-browser.js from.js pipeline.js state.js stream-browser.js stream.js package.json readable-browser.js readable.js readdir-scoped-modules README.md package.json readdir.js registry-auth-token CHANGELOG.md README.md base64.js index.js package.json registry-url.js registry-url index.js package.json readme.md request CHANGELOG.md README.md index.js lib auth.js cookies.js getProxyFromURI.js har.js hawk.js helpers.js multipart.js oauth.js querystring.js redirect.js tunnel.js package.json request.js require-directory .travis.yml index.js package.json require-main-filename CHANGELOG.md LICENSE.txt README.md index.js package.json resolve-from index.js package.json readme.md retry .travis.yml Readme.md example dns.js stop.js index.js lib retry.js retry_operation.js package.json rimraf README.md bin.js package.json rimraf.js run-queue README.md node_modules aproba README.md index.js package.json package.json queue.js safe-buffer README.md index.d.ts index.js package.json safer-buffer Porting-Buffer.md Readme.md dangerous.js package.json safer.js tests.js semver-diff index.js package.json readme.md semver CHANGELOG.md README.md package.json semver.js set-blocking CHANGELOG.md LICENSE.txt README.md index.js package.json sha README.md index.js package.json shebang-command index.js package.json readme.md shebang-regex index.js package.json readme.md signal-exit CHANGELOG.md LICENSE.txt README.md index.js package.json signals.js slide README.md index.js lib async-map-ordered.js async-map.js bind-actor.js chain.js slide.js package.json smart-buffer .travis.yml README.md build smartbuffer.js utils.js docs CHANGELOG.md README_v3.md ROADMAP.md package.json typings smartbuffer.d.ts utils.d.ts socks-proxy-agent .travis.yml History.md README.md index.js node_modules agent-base .travis.yml History.md README.md index.js package.json patch-core.js package.json socks .travis.yml README.md build client socksclient.js common constants.js helpers.js receivebuffer.js util.js index.js docs examples index.md javascript associateExample.md bindExample.md connectExample.md typescript associateExample.md bindExample.md connectExample.md index.md migratingFromV1.md package.json typings client socksclient.d.ts common constants.d.ts helpers.d.ts receiveBuffer.d.ts util.d.ts index.d.ts sorted-object LICENSE.txt lib sorted-object.js package.json sorted-union-stream .travis.yml README.md example.js index.js node_modules from2 LICENSE.md README.md index.js package.json test.js isarray README.md build build.js component.json index.js package.json readable-stream README.md duplex.js lib _stream_duplex.js _stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js package.json passthrough.js readable.js transform.js writable.js string_decoder README.md index.js package.json package.json test.js spdx-correct README.md index.js package.json spdx-exceptions README.md index.json 
package.json spdx-expression-parse README.md index.js package.json parse.js scan.js spdx-license-ids README.md deprecated.json index.json package.json split-on-first index.d.ts index.js package.json readme.md sshpk .travis.yml README.md lib algs.js certificate.js dhe.js ed-compat.js errors.js fingerprint.js formats auto.js dnssec.js openssh-cert.js pem.js pkcs1.js pkcs8.js rfc4253.js ssh-private.js ssh.js x509-pem.js x509.js identity.js index.js key.js private-key.js signature.js ssh-buffer.js utils.js package.json ssri CHANGELOG.md LICENSE.md README.md index.js package.json stream-each .travis.yml README.md collaborators.md index.js package.json test.js stream-iterate .travis.yml README.md index.js node_modules readable-stream .travis.yml GOVERNANCE.md README.md doc wg-meetings 2015-01-30.md duplex-browser.js duplex.js lib _stream_duplex.js _stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js internal streams BufferList.js destroy.js stream-browser.js stream.js package.json passthrough.js readable-browser.js readable.js transform.js writable-browser.js writable.js string_decoder .travis.yml README.md lib string_decoder.js package.json package.json test.js stream-shift .travis.yml README.md index.js package.json test.js strict-uri-encode index.js package.json readme.md string-width index.js node_modules ansi-regex index.js package.json readme.md is-fullwidth-code-point index.js package.json readme.md strip-ansi index.js package.json readme.md package.json readme.md string_decoder README.md lib string_decoder.js node_modules safe-buffer README.md index.d.ts index.js package.json package.json stringify-package CHANGELOG.md README.md index.js package.json strip-ansi index.js package.json readme.md strip-eof index.js package.json readme.md strip-json-comments index.js package.json readme.md supports-color browser.js index.js package.json readme.md tar README.md index.js lib buffer.js create.js extract.js header.js high-level-opt.js large-numbers.js list.js mkdir.js mode-fix.js normalize-windows-path.js pack.js parse.js path-reservations.js pax.js read-entry.js replace.js strip-absolute-path.js strip-trailing-slashes.js types.js unpack.js update.js warn-mixin.js winchars.js write-entry.js node_modules minipass README.md index.js package.json safe-buffer README.md index.d.ts index.js package.json yallist README.md iterator.js package.json yallist.js package.json term-size index.js package.json readme.md text-table .travis.yml example align.js center.js dotalign.js doubledot.js table.js index.js package.json through .travis.yml index.js package.json through2 LICENSE.html LICENSE.md README.md node_modules readable-stream .travis.yml GOVERNANCE.md README.md doc wg-meetings 2015-01-30.md duplex-browser.js duplex.js lib _stream_duplex.js _stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js internal streams BufferList.js destroy.js stream-browser.js stream.js package.json passthrough.js readable-browser.js readable.js transform.js writable-browser.js writable.js string_decoder .travis.yml README.md lib string_decoder.js package.json package.json through2.js timed-out index.js package.json readme.md tiny-relative-date LICENSE.md README.md lib factory.js index.js package.json src factory.js index.js translations da.js de.js en-short.js en.js es.js tough-cookie README.md lib cookie.js memstore.js pathMatch.js permuteDomain.js pubsuffix-psl.js store.js package.json tunnel-agent README.md index.js package.json tweetnacl AUTHORS.md CHANGELOG.md 
PULL_REQUEST_TEMPLATE.md README.md nacl-fast.js nacl-fast.min.js nacl.d.ts nacl.js nacl.min.js package.json typedarray .travis.yml example tarray.js index.js package.json uid-number README.md get-uid-gid.js package.json uid-number.js umask README.md index.js package.json unique-filename README.md coverage __root__ index.html index.js.html base.css index.html prettify.css prettify.js sorter.js index.js package.json unique-slug .travis.yml README.md index.js package.json unique-string index.js package.json readme.md unpipe HISTORY.md README.md index.js package.json unzip-response index.js package.json readme.md update-notifier check.js index.js package.json readme.md uri-js README.md dist es5 uri.all.d.ts uri.all.js uri.all.min.d.ts uri.all.min.js esnext index.d.ts index.js regexps-iri.d.ts regexps-iri.js regexps-uri.d.ts regexps-uri.js schemes http.d.ts http.js https.d.ts https.js mailto.d.ts mailto.js urn-uuid.d.ts urn-uuid.js urn.d.ts urn.js ws.d.ts ws.js wss.d.ts wss.js uri.d.ts uri.js util.d.ts util.js node_modules punycode LICENSE-MIT.txt README.md package.json punycode.es6.js punycode.js package.json url-parse-lax index.js package.json readme.md util-deprecate History.md README.md browser.js node.js package.json util-extend README.md extend.js package.json test.js util-promisify .travis.yml README.md index.js package.json uuid CHANGELOG.md LICENSE.md README.md index.js lib bytesToUuid.js md5-browser.js md5.js rng-browser.js rng.js sha1-browser.js sha1.js v35.js package.json v1.js v3.js v4.js v5.js validate-npm-package-license README.md index.js package.json validate-npm-package-name .travis.yml README.md index.js package.json verror CHANGES.md README.md lib verror.js package.json wcwidth Readme.md combining.js docs index.md index.js package.json which-module CHANGELOG.md README.md index.js package.json which CHANGELOG.md README.md package.json which.js wide-align README.md align.js node_modules string-width index.js package.json readme.md package.json widest-line index.js package.json readme.md worker-farm .travis.yml LICENSE.md README.md examples basic child.js index.js pi calc.js index.js index.d.ts lib child index.js farm.js fork.js index.js package.json tests child.js debug.js index.js wrap-ansi index.js node_modules ansi-regex index.js package.json readme.md is-fullwidth-code-point index.js package.json readme.md string-width index.js package.json readme.md strip-ansi index.d.ts index.js package.json readme.md package.json readme.md wrappy README.md package.json wrappy.js write-file-atomic CHANGELOG.md README.md index.js package.json xdg-basedir index.js package.json readme.md xtend README.md immutable.js mutable.js package.json test.js y18n CHANGELOG.md README.md index.js package.json yallist README.md iterator.js package.json yallist.js yargs-parser CHANGELOG.md LICENSE.txt README.md index.js lib tokenize-arg-string.js node_modules camelcase index.d.ts index.js package.json readme.md package.json yargs CHANGELOG.md README.md index.js lib apply-extends.js argsert.js command.js completion-templates.js completion.js is-promise.js levenshtein.js middleware.js obj-filter.js usage.js validation.js yerror.js locales be.json de.json en.json es.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json node_modules ansi-regex index.js package.json readme.md find-up index.js package.json readme.md is-fullwidth-code-point index.js package.json readme.md locate-path index.js 
package.json readme.md p-limit index.d.ts index.js package.json readme.md p-locate index.js package.json readme.md p-try index.d.ts index.js package.json readme.md string-width index.js package.json readme.md strip-ansi index.d.ts index.js package.json readme.md package.json yargs.js package.json scripts changelog.js clean-old.sh docs-build.js gen-dev-ignores.js install.sh publish-tag.js release.sh relocate.sh update-authors.sh update-dist-tags.js tap-snapshots test-tap-fund.js-TAP.test.js test-tap-repo.js-TAP.test.js nodevars.bat npm.cmd npx.cmd setup.ps1 | babel.config.js contract Cargo.toml README.md compile.js src lib.rs package.json src App.js Components MoneyMemo.js Transaction.js __mocks__ fileMock.js assets jester_logo.svg logo-black.svg logo-white.svg config.js global.css index.html index.js jest.init.js main.test.js utils.js wallet login index.html
# path-parse [![Build Status](https://travis-ci.org/jbgutierrez/path-parse.svg?branch=master)](https://travis-ci.org/jbgutierrez/path-parse)

> Node.js [`path.parse(pathString)`](https://nodejs.org/api/path.html#path_path_parse_pathstring) [ponyfill](https://ponyfill.com).

## Install

```
$ npm install --save path-parse
```

## Usage

```js
var pathParse = require('path-parse');

pathParse('/home/user/dir/file.txt');
//=> {
//     root : "/",
//     dir : "/home/user/dir",
//     base : "file.txt",
//     ext : ".txt",
//     name : "file"
// }
```

## API

See [`path.parse(pathString)`](https://nodejs.org/api/path.html#path_path_parse_pathstring) docs.

### pathParse(path)

### pathParse.posix(path)

The Posix-specific version.

### pathParse.win32(path)

The Windows-specific version.

## License

MIT © [Javier Blanco](http://jbgutierrez.info)

<img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/images/ajv_logo.png">

# Ajv: Another JSON Schema Validator

The fastest JSON Schema validator for Node.js and browsers. Supports draft-04/06/07.

[![Build Status](https://travis-ci.org/ajv-validator/ajv.svg?branch=master)](https://travis-ci.org/ajv-validator/ajv)
[![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv)
[![npm (beta)](https://img.shields.io/npm/v/ajv/beta)](https://www.npmjs.com/package/ajv/v/7.0.0-beta.0)
[![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv)
[![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master)
[![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv)
[![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin)

## Ajv v7 beta is released

[Ajv version 7.0.0-beta.0](https://github.com/ajv-validator/ajv/tree/v7-beta) is released with these changes:

- to reduce mistakes in JSON schemas and unexpected validation results, [strict mode](./docs/strict-mode.md) is added - it prohibits ignored or ambiguous JSON Schema elements.
- to make code injection from untrusted schemas impossible, [code generation](./docs/codegen.md) is fully re-written to be safe.
- to simplify Ajv extensions, the new keyword API that is used by pre-defined keywords is available to user-defined keywords - it is much easier to define any keywords now, especially with subschemas.
- schemas are compiled to ES6 code (ES5 code generation is supported with an option).
- to improve reliability and maintainability, the code has been migrated to TypeScript.

**Please note**:

- support for JSON Schema draft-04 is removed - if you have schemas using "id" attributes you have to replace them with "\$id" (or continue using version 6, which will be supported until 02/28/2021).
- all formats have been moved to the ajv-formats package - they have to be explicitly added if you use them.

See [release notes](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0) for details.

To install the new version:

```bash
npm install ajv@beta
```

See [Getting started with v7](https://github.com/ajv-validator/ajv/tree/v7-beta#usage) for a code example.
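For orientation, a minimal sketch of compiling and running a schema with the v7 beta, assuming the core `compile`/`validate` flow is unchanged from v6 (the schema and data below are illustrative and not taken from the release notes):

```javascript
// Hypothetical example; see the v7 "Getting started" guide above for canonical usage.
import Ajv from "ajv";

const ajv = new Ajv(); // options can be passed, e.g. {allErrors: true}
const schema = {
  type: "object",
  properties: {count: {type: "integer"}},
  required: ["count"],
  additionalProperties: false,
};
const validate = ajv.compile(schema);

console.log(validate({count: 1}));   // true
console.log(validate({count: "1"})); // false; details are in validate.errors
```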
## Mozilla MOSS grant and OpenJS Foundation

[<img src="https://www.poberezkin.com/images/mozilla.png" width="240" height="68">](https://www.mozilla.org/en-US/moss/) &nbsp;&nbsp;&nbsp; [<img src="https://www.poberezkin.com/images/openjs.png" width="220" height="68">](https://openjsf.org/blog/2020/08/14/ajv-joins-openjs-foundation-as-an-incubation-project/)

Ajv has been awarded a grant from Mozilla's [Open Source Support (MOSS) program](https://www.mozilla.org/en-US/moss/) in the "Foundational Technology" track! It will sponsor the development of Ajv's support for [JSON Schema version 2019-09](https://tools.ietf.org/html/draft-handrews-json-schema-02) and [JSON Type Definition](https://tools.ietf.org/html/draft-ucarion-json-type-definition-04).

Ajv has also joined the [OpenJS Foundation](https://openjsf.org/); this support will help ensure the longevity and stability of Ajv for all its users.

This [blog post](https://www.poberezkin.com/posts/2020-08-14-ajv-json-validator-mozilla-open-source-grant-openjs-foundation.html) has more details.

I am looking for long-term maintainers of Ajv and am working with [ReadySet](https://www.thereadyset.co/), also sponsored by Mozilla, to establish clear guidelines for the role of a "maintainer" and the contribution standards, and to encourage wider, more inclusive contribution from the community.

## Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin)

Since I asked for support of Ajv development, 40 people and 6 organizations have contributed via GitHub and Open Collective - this support helped in receiving the MOSS grant!

Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released.

Please sponsor Ajv via:

- [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it)
- [Ajv Open Collective](https://opencollective.com/ajv)

Thank you.

#### Open Collective sponsors

<a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a>

<a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a>
<a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a>
<a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a>
<a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a>
<a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a>
<a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a>
<a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a>
<a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a>
<a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a>
<a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a>

## Using version 6

[JSON Schema draft-07](http://json-schema.org/latest/json-schema-validation.html) is published.
[Ajv version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0) that supports draft-07 is released. It may require either migrating your schemas or updating your code (to continue using draft-04 and v5 schemas, draft-06 schemas will be supported without changes). __Please note__: To use Ajv with draft-06 schemas you need to explicitly add the meta-schema to the validator instance: ```javascript ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-06.json')); ``` To use Ajv with draft-04 schemas in addition to explicitly adding meta-schema you also need to use option schemaId: ```javascript var ajv = new Ajv({schemaId: 'id'}); // If you want to use both draft-04 and draft-06/07 schemas: // var ajv = new Ajv({schemaId: 'auto'}); ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-04.json')); ``` ## Contents - [Performance](#performance) - [Features](#features) - [Getting started](#getting-started) - [Frequently Asked Questions](https://github.com/ajv-validator/ajv/blob/master/FAQ.md) - [Using in browser](#using-in-browser) - [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp) - [Command line interface](#command-line-interface) - Validation - [Keywords](#validation-keywords) - [Annotation keywords](#annotation-keywords) - [Formats](#formats) - [Combining schemas with $ref](#ref) - [$data reference](#data-reference) - NEW: [$merge and $patch keywords](#merge-and-patch-keywords) - [Defining custom keywords](#defining-custom-keywords) - [Asynchronous schema compilation](#asynchronous-schema-compilation) - [Asynchronous validation](#asynchronous-validation) - [Security considerations](#security-considerations) - [Security contact](#security-contact) - [Untrusted schemas](#untrusted-schemas) - [Circular references in objects](#circular-references-in-javascript-objects) - [Trusted schemas](#security-risks-of-trusted-schemas) - [ReDoS attack](#redos-attack) - Modifying data during validation - [Filtering data](#filtering-data) - [Assigning defaults](#assigning-defaults) - [Coercing data types](#coercing-data-types) - API - [Methods](#api) - [Options](#options) - [Validation errors](#validation-errors) - [Plugins](#plugins) - [Related packages](#related-packages) - [Some packages using Ajv](#some-packages-using-ajv) - [Tests, Contributing, Changes history](#tests) - [Support, Code of conduct, License](#open-source-software-support) ## Performance Ajv generates code using [doT templates](https://github.com/olado/doT) to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization. 
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=32,4,1&chs=600x416&chxl=-1:|djv|ajv|json-schema-validator-generator|jsen|is-my-json-valid|themis|z-schema|jsck|skeemas|json-schema-library|tv4&chd=t:100,98,72.1,66.8,50.1,15.1,6.1,3.8,1.2,0.7,0.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements full JSON Schema [draft-06/07](http://json-schema.org/) and draft-04 standards: - all validation keywords (see [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md)) - full support of remote refs (remote schemas have to be added with `addSchema` or compiled to be available) - support of circular references between schemas - correct string lengths for strings with unicode pairs (can be turned off) - [formats](#formats) defined by JSON Schema draft-07 standard and custom formats (can be turned off) - [validates schemas against meta-schema](#api-validateschema) - supports [browsers](#using-in-browser) and Node.js 0.10-14.x - [asynchronous loading](#asynchronous-schema-compilation) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](#options) - [error messages with parameters](#validation-errors) describing error reasons to allow creating custom error messages - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [filtering data](#filtering-data) from additional properties - [assigning defaults](#assigning-defaults) to missing properties and items - [coercing data](#coercing-data-types) to the types specified in `type` keywords - [custom keywords](#defining-custom-keywords) - draft-06/07 keywords `const`, `contains`, `propertyNames` and `if/then/else` - draft-06 boolean schemas (`true`/`false` as a schema to always pass/fail). - keywords `switch`, `patternRequired`, `formatMaximum` / `formatMinimum` and `formatExclusiveMaximum` / `formatExclusiveMinimum` from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [$data reference](#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](#asynchronous-validation) of custom formats and keywords ## Install ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://tonicdev.com/npm/ajv The fastest validation call: ```javascript // Node.js require: var Ajv = require('ajv'); // or ESM/TypeScript import import Ajv from 'ajv'; var ajv = new Ajv(); // options can be passed, e.g. {allErrors: true} var validate = ajv.compile(schema); var valid = validate(data); if (!valid) console.log(validate.errors); ``` or with less code ```javascript // ... var valid = ajv.validate(schema, data); if (!valid) console.log(ajv.errors); // ... 
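// For a self-contained run, `schema` and `data` above could be, for example
// (hypothetical values, not part of the original snippet):
//   var schema = {type: 'object', properties: {foo: {type: 'integer'}}, required: ['foo']};
//   var data = {foo: 1}; // {foo: '1'} would make `valid` false and populate ajv.errors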
```

or

```javascript
// ...
var valid = ajv.addSchema(schema, 'mySchema')
                .validate('mySchema', data);
if (!valid) console.log(ajv.errorsText());
// ...
```

See [API](#api) and [Options](#options) for more details.

Ajv compiles schemas to functions and caches them in all cases (using the schema serialized with [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) or a custom function as the key), so that the next time the same schema is used (not necessarily the same object instance) it won't be compiled again.

The best performance is achieved when using compiled functions returned by the `compile` or `getSchema` methods (there is no additional function call).

__Please note__: every time a validation function or `ajv.validate` is called, the `errors` property is overwritten. You need to copy the `errors` array reference to another variable if you want to use it later (e.g., in a callback). See [Validation errors](#validation-errors).

__Note for TypeScript users__: `ajv` provides its own TypeScript declarations out of the box, so you don't need to install the deprecated `@types/ajv` module.

## Using in browser

You can require Ajv directly from the code you browserify - in this case Ajv will be a part of your bundle.

If you need to use Ajv in several bundles you can create a separate UMD bundle using the `npm run bundle` script (thanks to [siddo420](https://github.com/siddo420)).

Then you need to load Ajv in the browser:

```html
<script src="ajv.min.js"></script>
```

This bundle can be used with different module systems; it creates global `Ajv` if no module system is found.

The browser bundle is available on [cdnjs](https://cdnjs.com/libraries/ajv).

Ajv is tested with these browsers:

[![Sauce Test Status](https://saucelabs.com/browser-matrix/epoberezkin.svg)](https://saucelabs.com/u/epoberezkin)

__Please note__: some frameworks, e.g. Dojo, may redefine global `require` in such a way that it is not compatible with the CommonJS module format. In that case the Ajv bundle has to be loaded before the framework, and then you can use global `Ajv` (see issue [#234](https://github.com/ajv-validator/ajv/issues/234)).

### Ajv and Content Security Policies (CSP)

If you're using Ajv to compile a schema (the typical use) in a browser document that is loaded with a Content Security Policy (CSP), that policy will require a `script-src` directive that includes the value `'unsafe-eval'`. :warning: NOTE, however, that `unsafe-eval` is NOT recommended in a secure CSP[[1]](https://developer.chrome.com/extensions/contentSecurityPolicy#relaxing-eval), as it has the potential to open the document to cross-site scripting (XSS) attacks.

In order to make use of Ajv without relaxing your CSP, you can [pre-compile a schema using the CLI](https://github.com/ajv-validator/ajv-cli#compile-schemas). This will transpile the schema JSON into a JavaScript file that exports a `validate` function that works similarly to a schema compiled at runtime.

Note that pre-compilation of schemas is performed using [ajv-pack](https://github.com/ajv-validator/ajv-pack) and there are [some limitations to the schema features it can compile](https://github.com/ajv-validator/ajv-pack#limitations). A successfully pre-compiled schema is equivalent to the same schema compiled at runtime.

## Command line interface

The CLI is available as a separate npm package [ajv-cli](https://github.com/ajv-validator/ajv-cli).
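For a quick illustration, typical invocations look roughly like this (the command and flag names are assumptions based on the ajv-cli README; consult that package for the exact interface):

```bash
# assumed ajv-cli invocations: validate data against a schema, or pre-compile a schema
ajv validate -s schema.json -d data.json
ajv compile -s schema.json
```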
It supports: - compiling JSON Schemas to test their validity - BETA: generating standalone module exporting a validation function to be used without Ajv (using [ajv-pack](https://github.com/ajv-validator/ajv-pack)) - migrate schemas to draft-07 (using [json-schema-migrate](https://github.com/epoberezkin/json-schema-migrate)) - validating data file(s) against JSON Schema - testing expected validity of data against JSON Schema - referenced schemas - custom meta-schemas - files in JSON, JSON5, YAML, and JavaScript format - all Ajv options - reporting changes in data after validation in [JSON-patch](https://tools.ietf.org/html/rfc6902) format ## Validation keywords Ajv supports all validation keywords from draft-07 of JSON Schema standard: - [type](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#type) - [for numbers](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-numbers) - maximum, minimum, exclusiveMaximum, exclusiveMinimum, multipleOf - [for strings](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-strings) - maxLength, minLength, pattern, format - [for arrays](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-arrays) - maxItems, minItems, uniqueItems, items, additionalItems, [contains](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#contains) - [for objects](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-objects) - maxProperties, minProperties, required, properties, patternProperties, additionalProperties, dependencies, [propertyNames](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#propertynames) - [for all types](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-all-types) - enum, [const](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#const) - [compound keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#compound-keywords) - not, oneOf, anyOf, allOf, [if/then/else](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#ifthenelse) With [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package Ajv also supports validation keywords from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) for JSON Schema standard: - [patternRequired](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#patternrequired-proposed) - like `required` but with patterns that some property should match. - [formatMaximum, formatMinimum, formatExclusiveMaximum, formatExclusiveMinimum](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#formatmaximum--formatminimum-and-exclusiveformatmaximum--exclusiveformatminimum-proposed) - setting limits for date, time, etc. See [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md) for more details. ## Annotation keywords JSON Schema specification defines several annotation keywords that describe schema itself but do not perform any validation. - `title` and `description`: information about the data represented by that schema - `$comment` (NEW in draft-07): information for developers. With option `$comment` Ajv logs or passes the comment string to the user-supplied function. See [Options](#options). - `default`: a default value of the data instance, see [Assigning defaults](#assigning-defaults). - `examples` (NEW in draft-06): an array of data instances. Ajv does not check the validity of these instances against the schema. 
- `readOnly` and `writeOnly` (NEW in draft-07): marks data-instance as read-only or write-only in relation to the source of the data (database, api, etc.). - `contentEncoding`: [RFC 2045](https://tools.ietf.org/html/rfc2045#section-6.1 ), e.g., "base64". - `contentMediaType`: [RFC 2046](https://tools.ietf.org/html/rfc2046), e.g., "image/png". __Please note__: Ajv does not implement validation of the keywords `examples`, `contentEncoding` and `contentMediaType` but it reserves them. If you want to create a plugin that implements some of them, it should remove these keywords from the instance. ## Formats Ajv implements formats defined by JSON Schema specification and several other formats. It is recommended NOT to use "format" keyword implementations with untrusted data, as they use potentially unsafe regular expressions - see [ReDoS attack](#redos-attack). __Please note__: if you need to use "format" keyword to validate untrusted data, you MUST assess their suitability and safety for your validation scenarios. The following formats are implemented for string validation with "format" keyword: - _date_: full-date according to [RFC3339](http://tools.ietf.org/html/rfc3339#section-5.6). - _time_: time with optional time-zone. - _date-time_: date-time from the same source (time-zone is mandatory). `date`, `time` and `date-time` validate ranges in `full` mode and only regexp in `fast` mode (see [options](#options)). - _uri_: full URI. - _uri-reference_: URI reference, including full and relative URIs. - _uri-template_: URI template according to [RFC6570](https://tools.ietf.org/html/rfc6570) - _url_ (deprecated): [URL record](https://url.spec.whatwg.org/#concept-url). - _email_: email address. - _hostname_: host name according to [RFC1034](http://tools.ietf.org/html/rfc1034#section-3.5). - _ipv4_: IP address v4. - _ipv6_: IP address v6. - _regex_: tests whether a string is a valid regular expression by passing it to RegExp constructor. - _uuid_: Universally Unique IDentifier according to [RFC4122](http://tools.ietf.org/html/rfc4122). - _json-pointer_: JSON-pointer according to [RFC6901](https://tools.ietf.org/html/rfc6901). - _relative-json-pointer_: relative JSON-pointer according to [this draft](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00). __Please note__: JSON Schema draft-07 also defines formats `iri`, `iri-reference`, `idn-hostname` and `idn-email` for URLs, hostnames and emails with international characters. Ajv does not implement these formats. If you create Ajv plugin that implements them please make a PR to mention this plugin here. There are two modes of format validation: `fast` and `full`. This mode affects formats `date`, `time`, `date-time`, `uri`, `uri-reference`, and `email`. See [Options](#options) for details. You can add additional formats and replace any of the formats above using [addFormat](#api-addformat) method. The option `unknownFormats` allows changing the default behaviour when an unknown format is encountered. In this case Ajv can either fail schema compilation (default) or ignore it (default in versions before 5.0.0). You also can allow specific format(s) that will be ignored. See [Options](#options) for details. You can find regular expressions used for format validation and the sources that were used in [formats.js](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js). 
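To make this concrete, here is a minimal sketch of string validation with the "format" keyword (the schema and sample values are illustrative):

```javascript
var Ajv = require('ajv');
// 'fast' is the default mode; pass {format: 'full'} for the stricter range checks described above
var ajv = new Ajv();

var validateEmail = ajv.compile({type: 'string', format: 'email'});
console.log(validateEmail('joe.bloggs@example.com')); // true
console.log(validateEmail('not-an-email'));           // false, see validateEmail.errors
```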
## <a name="ref"></a>Combining schemas with $ref You can structure your validation logic across multiple schema files and have schemas reference each other using `$ref` keyword. Example: ```javascript var schema = { "$id": "http://example.com/schemas/schema.json", "type": "object", "properties": { "foo": { "$ref": "defs.json#/definitions/int" }, "bar": { "$ref": "defs.json#/definitions/str" } } }; var defsSchema = { "$id": "http://example.com/schemas/defs.json", "definitions": { "int": { "type": "integer" }, "str": { "type": "string" } } }; ``` Now to compile your schema you can either pass all schemas to Ajv instance: ```javascript var ajv = new Ajv({schemas: [schema, defsSchema]}); var validate = ajv.getSchema('http://example.com/schemas/schema.json'); ``` or use `addSchema` method: ```javascript var ajv = new Ajv; var validate = ajv.addSchema(defsSchema) .compile(schema); ``` See [Options](#options) and [addSchema](#api) method. __Please note__: - `$ref` is resolved as the uri-reference using schema $id as the base URI (see the example). - References can be recursive (and mutually recursive) to implement the schemas for different data structures (such as linked lists, trees, graphs, etc.). - You don't have to host your schema files at the URIs that you use as schema $id. These URIs are only used to identify the schemas, and according to JSON Schema specification validators should not expect to be able to download the schemas from these URIs. - The actual location of the schema file in the file system is not used. - You can pass the identifier of the schema as the second parameter of `addSchema` method or as a property name in `schemas` option. This identifier can be used instead of (or in addition to) schema $id. - You cannot have the same $id (or the schema identifier) used for more than one schema - the exception will be thrown. - You can implement dynamic resolution of the referenced schemas using `compileAsync` method. In this way you can store schemas in any system (files, web, database, etc.) and reference them without explicitly adding to Ajv instance. See [Asynchronous schema compilation](#asynchronous-schema-compilation). ## $data reference With `$data` option you can use values from the validated data as the values for the schema keywords. See [proposal](https://github.com/json-schema-org/json-schema-spec/issues/51) for more information about how it works. `$data` reference is supported in the keywords: const, enum, format, maximum/minimum, exclusiveMaximum / exclusiveMinimum, maxLength / minLength, maxItems / minItems, maxProperties / minProperties, formatMaximum / formatMinimum, formatExclusiveMaximum / formatExclusiveMinimum, multipleOf, pattern, required, uniqueItems. The value of "$data" should be a [JSON-pointer](https://tools.ietf.org/html/rfc6901) to the data (the root is always the top level data object, even if the $data reference is inside a referenced subschema) or a [relative JSON-pointer](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00) (it is relative to the current point in data; if the $data reference is inside a referenced subschema it cannot point to the data outside of the root level for this subschema). Examples. 
This schema requires that the value in the property `smaller` is less than or equal to the value in the property `larger`:

```javascript
var ajv = new Ajv({$data: true});

var schema = {
  "properties": {
    "smaller": {
      "type": "number",
      "maximum": { "$data": "1/larger" }
    },
    "larger": { "type": "number" }
  }
};

var validData = {
  smaller: 5,
  larger: 7
};

ajv.validate(schema, validData); // true
```

This schema requires that the properties have the same format as their field names:

```javascript
var schema = {
  "additionalProperties": {
    "type": "string",
    "format": { "$data": "0#" }
  }
};

var validData = {
  'date-time': '1963-06-19T08:30:06.283185Z',
  email: 'joe.bloggs@example.com'
}
```

`$data` reference is resolved safely - it won't throw even if some property is undefined. If `$data` resolves to `undefined` the validation succeeds (with the exception of the `const` keyword). If `$data` resolves to an incorrect type (e.g. not "number" for the `maximum` keyword) the validation fails.

## $merge and $patch keywords

With the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) you can use the keywords `$merge` and `$patch` that allow extending JSON Schemas with patches using the formats [JSON Merge Patch (RFC 7396)](https://tools.ietf.org/html/rfc7396) and [JSON Patch (RFC 6902)](https://tools.ietf.org/html/rfc6902).

To add the keywords `$merge` and `$patch` to the Ajv instance use this code:

```javascript
require('ajv-merge-patch')(ajv);
```

Examples.

Using `$merge`:

```json
{
  "$merge": {
    "source": {
      "type": "object",
      "properties": { "p": { "type": "string" } },
      "additionalProperties": false
    },
    "with": {
      "properties": { "q": { "type": "number" } }
    }
  }
}
```

Using `$patch`:

```json
{
  "$patch": {
    "source": {
      "type": "object",
      "properties": { "p": { "type": "string" } },
      "additionalProperties": false
    },
    "with": [
      { "op": "add", "path": "/properties/q", "value": { "type": "number" } }
    ]
  }
}
```

The schemas above are equivalent to this schema:

```json
{
  "type": "object",
  "properties": {
    "p": { "type": "string" },
    "q": { "type": "number" }
  },
  "additionalProperties": false
}
```

The properties `source` and `with` in the keywords `$merge` and `$patch` can use absolute or relative `$ref` to point to other schemas previously added to the Ajv instance or to fragments of the current schema.

See the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) for more information.

## Defining custom keywords

The advantages of using custom keywords are:

- allow creating validation scenarios that cannot be expressed using JSON Schema
- simplify your schemas
- help bring a bigger part of the validation logic to your schemas
- make your schemas more expressive, less verbose and closer to your application domain
- implement custom data processors that modify your data (the `modifying` option MUST be used in the keyword definition) and/or create side effects while the data is being validated

If a keyword is used only for side-effects and its validation result is pre-defined, use the option `valid: true/false` in the keyword definition to simplify both the generated code (no error handling in the case of `valid: true`) and your keyword functions (no need to return any validation result).

The concerns you have to be aware of when extending the JSON Schema standard with custom keywords are the portability and understanding of your schemas. You will have to support these custom keywords on other platforms and properly document them so that everybody can understand them in your schemas.
You can define custom keywords with [addKeyword](#api-addkeyword) method. Keywords are defined on the `ajv` instance level - new instances will not have previously defined keywords. Ajv allows defining keywords with: - validation function - compilation function - macro function - inline compilation function that should return code (as string) that will be inlined in the currently compiled schema. Example. `range` and `exclusiveRange` keywords using compiled schema: ```javascript ajv.addKeyword('range', { type: 'number', compile: function (sch, parentSchema) { var min = sch[0]; var max = sch[1]; return parentSchema.exclusiveRange === true ? function (data) { return data > min && data < max; } : function (data) { return data >= min && data <= max; } } }); var schema = { "range": [2, 4], "exclusiveRange": true }; var validate = ajv.compile(schema); console.log(validate(2.01)); // true console.log(validate(3.99)); // true console.log(validate(2)); // false console.log(validate(4)); // false ``` Several custom keywords (typeof, instanceof, range and propertyNames) are defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - they can be used for your schemas and as a starting point for your own custom keywords. See [Defining custom keywords](https://github.com/ajv-validator/ajv/blob/master/CUSTOM.md) for more details. ## Asynchronous schema compilation During asynchronous compilation remote references are loaded using supplied function. See `compileAsync` [method](#api-compileAsync) and `loadSchema` [option](#options). Example: ```javascript var ajv = new Ajv({ loadSchema: loadSchema }); ajv.compileAsync(schema).then(function (validate) { var valid = validate(data); // ... }); function loadSchema(uri) { return request.json(uri).then(function (res) { if (res.statusCode >= 400) throw new Error('Loading error: ' + res.statusCode); return res.body; }); } ``` __Please note__: [Option](#options) `missingRefs` should NOT be set to `"ignore"` or `"fail"` for asynchronous compilation to work. ## Asynchronous validation Example in Node.js REPL: https://tonicdev.com/esp/ajv-asynchronous-validation You can define custom formats and keywords that perform validation asynchronously by accessing database or some other service. You should add `async: true` in the keyword or format definition (see [addFormat](#api-addformat), [addKeyword](#api-addkeyword) and [Defining custom keywords](#defining-custom-keywords)). If your schema uses asynchronous formats/keywords or refers to some schema that contains them it should have `"$async": true` keyword so that Ajv can compile it correctly. If asynchronous format/keyword or reference to asynchronous schema is used in the schema without `$async` keyword Ajv will throw an exception during schema compilation. __Please note__: all asynchronous subschemas that are referenced from the current or other schemas should have `"$async": true` keyword as well, otherwise the schema compilation will fail. Validation function for an asynchronous custom format/keyword should return a promise that resolves with `true` or `false` (or rejects with `new Ajv.ValidationError(errors)` if you want to return custom errors from the keyword function). Ajv compiles asynchronous schemas to [es7 async functions](http://tc39.github.io/ecmascript-asyncawait/) that can optionally be transpiled with [nodent](https://github.com/MatAtBread/nodent). Async functions are supported in Node.js 7+ and all modern browsers. 
You can also supply any other transpiler as a function via the `processCode` option. See [Options](#options).

The compiled validation function has the `$async: true` property (if the schema is asynchronous), so you can differentiate these functions if you are using both synchronous and asynchronous schemas.

Validation result will be a promise that resolves with validated data or rejects with an exception `Ajv.ValidationError` that contains the array of validation errors in the `errors` property.

Example:

```javascript
var ajv = new Ajv;
// require('ajv-async')(ajv);

ajv.addKeyword('idExists', {
  async: true,
  type: 'number',
  validate: checkIdExists
});

function checkIdExists(schema, data) {
  return knex(schema.table)
  .select('id')
  .where('id', data)
  .then(function (rows) {
    return !!rows.length; // true if record is found
  });
}

var schema = {
  "$async": true,
  "properties": {
    "userId": {
      "type": "integer",
      "idExists": { "table": "users" }
    },
    "postId": {
      "type": "integer",
      "idExists": { "table": "posts" }
    }
  }
};

var validate = ajv.compile(schema);

validate({ userId: 1, postId: 19 })
.then(function (data) {
  console.log('Data is valid', data); // { userId: 1, postId: 19 }
})
.catch(function (err) {
  if (!(err instanceof Ajv.ValidationError)) throw err;
  // data is invalid
  console.log('Validation errors:', err.errors);
});
```

### Using transpilers with asynchronous validation functions

[ajv-async](https://github.com/ajv-validator/ajv-async) uses [nodent](https://github.com/MatAtBread/nodent) to transpile async functions. To use another transpiler you should separately install it (or load its bundle in the browser).

#### Using nodent

```javascript
var ajv = new Ajv;
require('ajv-async')(ajv);
// in the browser if you want to load ajv-async bundle separately you can:
// window.ajvAsync(ajv);
var validate = ajv.compile(schema); // transpiled es7 async function
validate(data).then(successFunc).catch(errorFunc);
```

#### Using other transpilers

```javascript
var ajv = new Ajv({ processCode: transpileFunc });
var validate = ajv.compile(schema); // transpiled es7 async function
validate(data).then(successFunc).catch(errorFunc);
```

See [Options](#options).

## Security considerations

JSON Schema, if properly used, can replace data sanitisation. It doesn't replace other API security considerations. It also introduces additional security aspects to consider.

##### Security contact

To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues.

##### Untrusted schemas

Ajv treats JSON schemas as trusted as your application code. This security model is based on the most common use case, when the schemas are static and bundled together with the application.

If your schemas are received from untrusted sources (or generated from untrusted data) there are several scenarios you need to prevent:
- compiling schemas can cause stack overflow (if they are too deep)
- compiling schemas can be slow (e.g. [#557](https://github.com/ajv-validator/ajv/issues/557))
- validating certain data can be slow

It is difficult to predict all the scenarios, but at the very least it may help to limit the size of untrusted schemas (e.g. limit JSON string length) and also the maximum schema object depth (which can be high for relatively small JSON strings). You also may want to mitigate slow regular expressions in `pattern` and `patternProperties` keywords.
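The sketch below is one possible way to apply such limits before compiling a schema received from an untrusted source. It is illustrative only and not part of Ajv's API: the `MAX_SCHEMA_LENGTH` and `MAX_SCHEMA_DEPTH` thresholds and the `compileUntrusted` helper are assumptions that should be tuned to your own use case.

```javascript
var MAX_SCHEMA_LENGTH = 10000; // hypothetical limit on the JSON string length
var MAX_SCHEMA_DEPTH = 20;     // hypothetical limit on object nesting depth

// recursively measure the nesting depth of plain objects/arrays
function objectDepth(value) {
  if (value === null || typeof value !== 'object') return 0;
  var depths = Object.keys(value).map(function (key) {
    return objectDepth(value[key]);
  });
  return 1 + (depths.length ? Math.max.apply(null, depths) : 0);
}

function compileUntrusted(ajv, schemaJson) {
  if (schemaJson.length > MAX_SCHEMA_LENGTH)
    throw new Error('untrusted schema is too large');
  var schema = JSON.parse(schemaJson);
  if (objectDepth(schema) > MAX_SCHEMA_DEPTH)
    throw new Error('untrusted schema is too deep');
  return ajv.compile(schema);
}
```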
Regardless of the measures you take, using untrusted schemas increases security risks.

##### Circular references in JavaScript objects

Ajv does not support schemas and validated data that have circular references in objects. See [issue #802](https://github.com/ajv-validator/ajv/issues/802).

An attempt to compile such schemas or validate such data would cause stack overflow (or will not complete in case of asynchronous validation). Depending on the parser you use, untrusted data can lead to circular references.

##### Security risks of trusted schemas

Some keywords in JSON Schemas can lead to very slow validation for certain data. These keywords include (but may not be limited to):

- `pattern` and `format` for large strings - in some cases using `maxLength` can help mitigate it, but certain regular expressions can lead to exponential validation time even with relatively short strings (see [ReDoS attack](#redos-attack)).
- `patternProperties` for large property names - use `propertyNames` to mitigate, but some regular expressions can have exponential evaluation time as well.
- `uniqueItems` for large non-scalar arrays - use `maxItems` to mitigate

__Please note__: The suggestions above to prevent slow validation would only work if you do NOT use `allErrors: true` in production code (using it would continue validation after validation errors).

You can validate your JSON schemas against [this meta-schema](https://github.com/ajv-validator/ajv/blob/master/lib/refs/json-schema-secure.json) to check that these recommendations are followed:

```javascript
const isSchemaSecure = ajv.compile(require('ajv/lib/refs/json-schema-secure.json'));

const schema1 = {format: 'email'};
isSchemaSecure(schema1); // false

const schema2 = {format: 'email', maxLength: MAX_LENGTH};
isSchemaSecure(schema2); // true
```

__Please note__: following all these recommendations is not a guarantee that validation of untrusted data is safe - it can still lead to some undesirable results.

##### Content Security Policies (CSP)

See [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp)

## ReDoS attack

Certain regular expressions can lead to exponential evaluation time even with relatively short strings.

Please assess the regular expressions you use in the schemas on their vulnerability to this attack - see [safe-regex](https://github.com/substack/safe-regex), for example (a usage sketch is shown at the end of this section).

__Please note__: some formats that Ajv implements use [regular expressions](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js) that can be vulnerable to ReDoS attack, so if you use Ajv to validate data from untrusted sources __it is strongly recommended__ to consider the following:

- making an assessment of "format" implementations in Ajv.
- using the `format: 'fast'` option that simplifies some of the regular expressions (although it does not guarantee that they are safe).
- replacing format implementations provided by Ajv with your own implementations of the "format" keyword that either use different regular expressions or another approach to format validation. Please see the [addFormat](#api-addformat) method.
- disabling format validation by ignoring the "format" keyword with option `format: false`

Whatever mitigation you choose, please assume all formats provided by Ajv as potentially unsafe and make your own assessment of their suitability for your validation scenarios.
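For illustration, one way such an assessment could look, using the safe-regex package mentioned above; the schema walking and the `assertSafePatterns` helper are assumptions for this sketch, not part of Ajv:

```javascript
var safeRegex = require('safe-regex');

// Recursively check every `pattern` / `patternProperties` regex in a schema.
// This is a best-effort heuristic: safe-regex can give false results,
// so treat a failure as a prompt for manual review rather than a guarantee.
function assertSafePatterns(schema) {
  if (schema === null || typeof schema !== 'object') return;
  if (typeof schema.pattern === 'string' && !safeRegex(schema.pattern))
    throw new Error('potentially unsafe regex: ' + schema.pattern);
  if (schema.patternProperties) {
    Object.keys(schema.patternProperties).forEach(function (pattern) {
      if (!safeRegex(pattern))
        throw new Error('potentially unsafe regex: ' + pattern);
    });
  }
  Object.keys(schema).forEach(function (key) {
    assertSafePatterns(schema[key]);
  });
}
```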
## Filtering data

With [option `removeAdditional`](#options) (added by [andyscott](https://github.com/andyscott)) you can filter data during the validation.

This option modifies original data.

Example:

```javascript
var ajv = new Ajv({ removeAdditional: true });
var schema = {
  "additionalProperties": false,
  "properties": {
    "foo": { "type": "number" },
    "bar": {
      "additionalProperties": { "type": "number" },
      "properties": {
        "baz": { "type": "string" }
      }
    }
  }
}

var data = {
  "foo": 0,
  "additional1": 1, // will be removed; `additionalProperties` == false
  "bar": {
    "baz": "abc",
    "additional2": 2 // will NOT be removed; `additionalProperties` != false
  },
}

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // { "foo": 0, "bar": { "baz": "abc", "additional2": 2 } }
```

If `removeAdditional` option in the example above were `"all"` then both `additional1` and `additional2` properties would have been removed.

If the option were `"failing"` then property `additional1` would have been removed regardless of its value and property `additional2` would have been removed only if its value were failing the schema in the inner `additionalProperties` (so in the example above it would have stayed because it passes the schema, but any non-number would have been removed).

__Please note__: If you use `removeAdditional` option with `additionalProperties` keyword inside `anyOf`/`oneOf` keywords your validation can fail with this schema, for example:

```json
{
  "type": "object",
  "oneOf": [
    {
      "properties": {
        "foo": { "type": "string" }
      },
      "required": [ "foo" ],
      "additionalProperties": false
    },
    {
      "properties": {
        "bar": { "type": "integer" }
      },
      "required": [ "bar" ],
      "additionalProperties": false
    }
  ]
}
```

The intention of the schema above is to allow objects with either the string property "foo" or the integer property "bar", but not with both and not with any other properties.

With the option `removeAdditional: true` the validation will pass for the object `{ "foo": "abc"}` but will fail for the object `{"bar": 1}`. It happens because while the first subschema in `oneOf` is validated, the property `bar` is removed because it is an additional property according to the standard (because it is not included in the `properties` keyword in the same schema).

While this behaviour is unexpected (issues [#129](https://github.com/ajv-validator/ajv/issues/129), [#134](https://github.com/ajv-validator/ajv/issues/134)), it is correct. To have the expected behaviour (both objects are allowed and additional properties are removed) the schema has to be refactored in this way:

```json
{
  "type": "object",
  "properties": {
    "foo": { "type": "string" },
    "bar": { "type": "integer" }
  },
  "additionalProperties": false,
  "oneOf": [
    { "required": [ "foo" ] },
    { "required": [ "bar" ] }
  ]
}
```

The schema above is also more efficient - it will compile into a faster function.

## Assigning defaults

With [option `useDefaults`](#options) Ajv will assign values from the `default` keyword in the schemas of `properties` and `items` (when it is the array of schemas) to the missing properties and items.

With the option value `"empty"` properties and items equal to `null` or `""` (empty string) will be considered missing and assigned defaults.

This option modifies original data.

__Please note__: the default value is inserted in the generated validation code as a literal, so the value inserted in the data will be the deep clone of the default in the schema.
Example 1 (`default` in `properties`):

```javascript
var ajv = new Ajv({ useDefaults: true });
var schema = {
  "type": "object",
  "properties": {
    "foo": { "type": "number" },
    "bar": { "type": "string", "default": "baz" }
  },
  "required": [ "foo", "bar" ]
};

var data = { "foo": 1 };

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // { "foo": 1, "bar": "baz" }
```

Example 2 (`default` in `items`):

```javascript
var schema = {
  "type": "array",
  "items": [
    { "type": "number" },
    { "type": "string", "default": "foo" }
  ]
}

var data = [ 1 ];

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // [ 1, "foo" ]
```

`default` keywords in other cases are ignored:

- not in `properties` or `items` subschemas
- in schemas inside `anyOf`, `oneOf` and `not` (see [#42](https://github.com/ajv-validator/ajv/issues/42))
- in `if` subschema of `switch` keyword
- in schemas generated by custom macro keywords

The [`strictDefaults` option](#options) customizes Ajv's behavior for the defaults that Ajv ignores (`true` raises an error, and `"log"` outputs a warning).

## Coercing data types

When you are validating user inputs all your data properties are usually strings. The option `coerceTypes` allows you to have your data types coerced to the types specified in your schema `type` keywords, both to pass the validation and to use the correctly typed data afterwards.

This option modifies original data.

__Please note__: if you pass a scalar value to the validating function its type will be coerced and it will pass the validation, but the value of the variable you pass won't be updated because scalars are passed by value.

Example 1:

```javascript
var ajv = new Ajv({ coerceTypes: true });
var schema = {
  "type": "object",
  "properties": {
    "foo": { "type": "number" },
    "bar": { "type": "boolean" }
  },
  "required": [ "foo", "bar" ]
};

var data = { "foo": "1", "bar": "false" };

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // { "foo": 1, "bar": false }
```

Example 2 (array coercions):

```javascript
var ajv = new Ajv({ coerceTypes: 'array' });
var schema = {
  "properties": {
    "foo": { "type": "array", "items": { "type": "number" } },
    "bar": { "type": "boolean" }
  }
};

var data = { "foo": "1", "bar": ["false"] };

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // { "foo": [1], "bar": false }
```

The coercion rules, as you can see from the example, are different from JavaScript both to validate user input as expected and to have the coercion reversible (to correctly validate cases where different types are defined in subschemas of "anyOf" and other compound keywords).

See [Coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md) for details.

## API

##### new Ajv(Object options) -&gt; Object

Create Ajv instance.

##### .compile(Object schema) -&gt; Function&lt;Object data&gt;

Generate validating function and cache the compiled schema for future use.

Validating function returns a boolean value. This function has properties `errors` and `schema`. Errors encountered during the last validation are assigned to the `errors` property (it is assigned `null` if there were no errors). The `schema` property contains the reference to the original schema.

The schema passed to this method will be validated against meta-schema unless the `validateSchema` option is false. If the schema is invalid, an error will be thrown. See [options](#options).
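For example, a compiled function can be reused for many validations, with `validate.errors` inspected after each failing call (the schema and data below are illustrative):

```javascript
var ajv = new Ajv();
var validate = ajv.compile({
  "type": "object",
  "properties": { "foo": { "type": "integer" } },
  "required": [ "foo" ]
});

console.log(validate({ foo: 1 }));   // true
console.log(validate.errors);        // null

console.log(validate({ foo: 'x' })); // false
console.log(validate.errors);        // array with one error for "foo"
```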
##### <a name="api-compileAsync"></a>.compileAsync(Object schema [, Boolean meta] [, Function callback]) -&gt; Promise Asynchronous version of `compile` method that loads missing remote schemas using asynchronous function in `options.loadSchema`. This function returns a Promise that resolves to a validation function. An optional callback passed to `compileAsync` will be called with 2 parameters: error (or null) and validating function. The returned promise will reject (and the callback will be called with an error) when: - missing schema can't be loaded (`loadSchema` returns a Promise that rejects). - a schema containing a missing reference is loaded, but the reference cannot be resolved. - schema (or some loaded/referenced schema) is invalid. The function compiles schema and loads the first missing schema (or meta-schema) until all missing schemas are loaded. You can asynchronously compile meta-schema by passing `true` as the second parameter. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### .validate(Object schema|String key|String ref, data) -&gt; Boolean Validate data using passed schema (it will be compiled and cached). Instead of the schema you can use the key that was previously passed to `addSchema`, the schema id if it was present in the schema or any previously resolved reference. Validation errors will be available in the `errors` property of Ajv instance (`null` if there were no errors). __Please note__: every time this method is called the errors are overwritten so you need to copy them to another variable if you want to use them later. If the schema is asynchronous (has `$async` keyword on the top level) this method returns a Promise. See [Asynchronous validation](#asynchronous-validation). ##### .addSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Add schema(s) to validator instance. This method does not compile schemas (but it still validates them). Because of that dependencies can be added in any order and circular dependencies are supported. It also prevents unnecessary compilation of schemas that are containers for other schemas but not used as a whole. Array of schemas can be passed (schemas should have ids), the second parameter will be ignored. Key can be passed that can be used to reference the schema and will be used as the schema id if there is no id inside the schema. If the key is not passed, the schema id will be used as the key. Once the schema is added, it (and all the references inside it) can be referenced in other schemas and used to validate data. Although `addSchema` does not compile schemas, explicit compilation is not required - the schema will be compiled when it is used first time. By default the schema is validated against meta-schema before it is added, and if the schema does not pass validation the exception is thrown. This behaviour is controlled by `validateSchema` option. __Please note__: Ajv uses the [method chaining syntax](https://en.wikipedia.org/wiki/Method_chaining) for all methods with the prefix `add*` and `remove*`. This allows you to do nice things like the following. ```javascript var validate = new Ajv().addSchema(schema).addFormat(name, regex).getSchema(uri); ``` ##### .addMetaSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Adds meta schema(s) that can be used to validate other schemas. 
##### .addMetaSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv

Adds meta schema(s) that can be used to validate other schemas. This function should be used instead of `addSchema` because there may be instance options that would compile a meta schema incorrectly (at the moment it is the `removeAdditional` option).

There is no need to explicitly add the draft-07 meta schema (http://json-schema.org/draft-07/schema) - it is added by default, unless option `meta` is set to `false`. You only need to use it if you have a changed meta-schema that you want to use to validate your schemas. See `validateSchema`.

##### <a name="api-validateschema"></a>.validateSchema(Object schema) -&gt; Boolean

Validates schema. This method should be used to validate schemas rather than `validate` due to the inconsistency of `uri` format in JSON Schema standard.

By default this method is called automatically when the schema is added, so you rarely need to use it directly.

If schema doesn't have `$schema` property, it is validated against draft 6 meta-schema (option `meta` should not be false).

If schema has `$schema` property, then the schema with this id (that should be previously added) is used to validate passed schema.

Errors will be available at `ajv.errors`.

##### .getSchema(String key) -&gt; Function&lt;Object data&gt;

Retrieve compiled schema previously added with `addSchema` by the key passed to `addSchema` or by its full reference (id). The returned validating function has the `schema` property with the reference to the original schema.

##### .removeSchema([Object schema|String key|String ref|RegExp pattern]) -&gt; Ajv

Remove added/cached schema. Even if schema is referenced by other schemas it can be safely removed as dependent schemas have local references.

Schema can be removed using:
- key passed to `addSchema`
- its full reference (id)
- RegExp that should match schema id or key (meta-schemas won't be removed)
- actual schema object that will be stable-stringified to remove schema from cache

If no parameter is passed all schemas but meta-schemas will be removed and the cache will be cleared.

##### <a name="api-addformat"></a>.addFormat(String name, String|RegExp|Function|Object format) -&gt; Ajv

Add custom format to validate strings or numbers. It can also be used to replace pre-defined formats for the Ajv instance.

Strings are converted to RegExp.

Function should return validation result as `true` or `false`.

If an object is passed it should have properties `validate`, `compare` and `async`:

- _validate_: a string, RegExp or a function as described above.
- _compare_: an optional comparison function that accepts two strings and compares them according to the format meaning. This function is used with keywords `formatMaximum`/`formatMinimum` (defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package). It should return `1` if the first value is bigger than the second value, `-1` if it is smaller and `0` if it is equal.
- _async_: an optional `true` value if `validate` is an asynchronous function; in this case it should return a promise that resolves with a value `true` or `false`.
- _type_: an optional type of data that the format applies to. It can be `"string"` (default) or `"number"` (see https://github.com/ajv-validator/ajv/issues/291#issuecomment-259923858). If the type of data is different, the validation will pass.

Custom formats can be also added via the `formats` option.
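For example, a format can be defined with a RegExp or with a function; the `identifier` format name below is illustrative, not one of Ajv's pre-defined formats:

```javascript
var ajv = new Ajv();

// RegExp-based format
ajv.addFormat('identifier', /^[a-z$_][a-z$_0-9]*$/i);

// equivalent function-based format (replaces the previous definition)
ajv.addFormat('identifier', function (str) {
  return /^[a-z$_][a-z$_0-9]*$/i.test(str);
});

var validate = ajv.compile({ "type": "string", "format": "identifier" });
console.log(validate('fooBar')); // true
console.log(validate('1foo'));   // false
```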
##### <a name="api-addkeyword"></a>.addKeyword(String keyword, Object definition) -&gt; Ajv

Add custom validation keyword to Ajv instance.

Keyword should be different from all standard JSON Schema keywords and different from previously defined keywords. There is no way to redefine keywords or to remove keyword definition from the instance.

Keyword must start with a letter, `_` or `$`, and may continue with letters, numbers, `_`, `$`, or `-`. It is recommended to use an application-specific prefix for keywords to avoid current and future name collisions.

Example keywords:
- `"xyz-example"`: valid, and uses prefix for the xyz project to avoid name collisions.
- `"example"`: valid, but not recommended as it could collide with future versions of JSON Schema etc.
- `"3-example"`: invalid as numbers are not allowed to be the first character in a keyword

Keyword definition is an object with the following properties:

- _type_: optional string or array of strings with data type(s) that the keyword applies to. If not present, the keyword will apply to all types.
- _validate_: validating function
- _compile_: compiling function
- _macro_: macro function
- _inline_: compiling function that returns code (as string)
- _schema_: an optional `false` value used with the "validate" keyword to not pass schema
- _metaSchema_: an optional meta-schema for keyword schema
- _dependencies_: an optional list of properties that must be present in the parent schema - it will be checked during schema compilation
- _modifying_: `true` MUST be passed if keyword modifies data
- _statements_: `true` can be passed in case inline keyword generates statements (as opposed to an expression)
- _valid_: pass `true`/`false` to pre-define validation result, the result returned from validation function will be ignored. This option cannot be used with macro keywords.
- _$data_: an optional `true` value to support [$data reference](#data-reference) as the value of the custom keyword. The reference will be resolved at validation time. If the keyword has a meta-schema it would be extended to allow $data and it will be used to validate the resolved value. Supporting $data reference requires that the keyword has a validating function (as the only option or in addition to compile, macro or inline function).
- _async_: an optional `true` value if the validation function is asynchronous (whether it is compiled or passed in the _validate_ property); in this case it should return a promise that resolves with a value `true` or `false`. This option is ignored in case of "macro" and "inline" keywords.
- _errors_: an optional boolean or string `"full"` indicating whether keyword returns errors. If this property is not set Ajv will determine if the errors were set in case of failed validation.

_compile_, _macro_ and _inline_ are mutually exclusive, only one should be used at a time. _validate_ can be used separately or in addition to them to support $data reference.

__Please note__: If the keyword is validating a data type that is different from the type(s) in its definition, the validation function will not be called (and the expanded macro will not be used), so there is no need to check for data type inside the validation function or inside the schema returned by the macro function (unless you want to enforce a specific type and for some reason do not want to use a separate `type` keyword for that). In the same way as standard keywords work, if the keyword does not apply to the data type being validated, the validation of this keyword will succeed.

See [Defining custom keywords](#defining-custom-keywords) for more details.
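In addition to the compiled `range` keyword shown earlier, a keyword can be defined as a macro that expands into another schema; the `even` keyword below is an illustrative example, not a keyword shipped with Ajv:

```javascript
ajv.addKeyword('even', {
  type: 'number',
  macro: function (schema) {
    // "even": true expands to { "multipleOf": 2 },
    // "even": false expands to its negation
    return schema ? { "multipleOf": 2 } : { "not": { "multipleOf": 2 } };
  }
});

var validate = ajv.compile({ "type": "number", "even": true });
console.log(validate(4)); // true
console.log(validate(5)); // false
```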
##### .getKeyword(String keyword) -&gt; Object|Boolean

Returns custom keyword definition, `true` for pre-defined keywords and `false` if the keyword is unknown.

##### .removeKeyword(String keyword) -&gt; Ajv

Removes custom or pre-defined keyword so you can redefine them.

While this method can be used to extend pre-defined keywords, it can also be used to completely change their meaning - it may lead to unexpected results.

__Please note__: schemas compiled before the keyword is removed will continue to work without changes. To recompile schemas use the `removeSchema` method and compile them again.

##### .errorsText([Array&lt;Object&gt; errors [, Object options]]) -&gt; String

Returns the text with all errors in a String.

Options can have properties `separator` (string used to separate errors, ", " by default) and `dataVar` (the variable name that dataPaths are prefixed with, "data" by default).

## Options

Defaults:

```javascript
{
  // validation and reporting options:
  $data: false,
  allErrors: false,
  verbose: false,
  $comment: false, // NEW in Ajv version 6.0
  jsonPointers: false,
  uniqueItems: true,
  unicode: true,
  nullable: false,
  format: 'fast',
  formats: {},
  unknownFormats: true,
  schemas: {},
  logger: undefined,
  // referenced schema options:
  schemaId: '$id',
  missingRefs: true,
  extendRefs: 'ignore', // recommended 'fail'
  loadSchema: undefined, // function(uri: string): Promise {}
  // options to modify validated data:
  removeAdditional: false,
  useDefaults: false,
  coerceTypes: false,
  // strict mode options
  strictDefaults: false,
  strictKeywords: false,
  strictNumbers: false,
  // asynchronous validation options:
  transpile: undefined, // requires ajv-async package
  // advanced options:
  meta: true,
  validateSchema: true,
  addUsedSchema: true,
  inlineRefs: true,
  passContext: false,
  loopRequired: Infinity,
  ownProperties: false,
  multipleOfPrecision: false,
  errorDataPath: 'object', // deprecated
  messages: true,
  sourceCode: false,
  processCode: undefined, // function (str: string, schema: object): string {}
  cache: new Cache,
  serialize: undefined
}
```

##### Validation and reporting options

- _$data_: support [$data references](#data-reference). Draft 6 meta-schema that is added by default will be extended to allow them. If you want to use another meta-schema you need to use $dataMetaSchema method to add support for $data reference. See [API](#api).
- _allErrors_: check all rules collecting all errors. Default is to return after the first error.
- _verbose_: include the reference to the part of the schema (`schema` and `parentSchema`) and validated data in errors (false by default).
- _$comment_ (NEW in Ajv version 6.0): log or pass the value of `$comment` keyword to a function. Option values:
  - `false` (default): ignore $comment keyword.
  - `true`: log the keyword value to console.
  - function: pass the keyword value, its schema path and root schema to the specified function
- _jsonPointers_: set `dataPath` property of errors using [JSON Pointers](https://tools.ietf.org/html/rfc6901) instead of JavaScript property access notation.
- _uniqueItems_: validate `uniqueItems` keyword (true by default).
- _unicode_: calculate correct length of strings with unicode pairs (true by default). Pass `false` to use `.length` of strings that is faster, but gives "incorrect" lengths of strings with unicode pairs - each unicode pair is counted as two characters.
- _nullable_: support keyword "nullable" from [Open API 3 specification](https://swagger.io/docs/specification/data-models/data-types/).
- _format_: formats validation mode.
Option values: - `"fast"` (default) - simplified and fast validation (see [Formats](#formats) for details of which formats are available and affected by this option). - `"full"` - more restrictive and slow validation. E.g., 25:00:00 and 2015/14/33 will be invalid time and date in 'full' mode but it will be valid in 'fast' mode. - `false` - ignore all format keywords. - _formats_: an object with custom formats. Keys and values will be passed to `addFormat` method. - _keywords_: an object with custom keywords. Keys and values will be passed to `addKeyword` method. - _unknownFormats_: handling of unknown formats. Option values: - `true` (default) - if an unknown format is encountered the exception is thrown during schema compilation. If `format` keyword value is [$data reference](#data-reference) and it is unknown the validation will fail. - `[String]` - an array of unknown format names that will be ignored. This option can be used to allow usage of third party schemas with format(s) for which you don't have definitions, but still fail if another unknown format is used. If `format` keyword value is [$data reference](#data-reference) and it is not in this array the validation will fail. - `"ignore"` - to log warning during schema compilation and always pass validation (the default behaviour in versions before 5.0.0). This option is not recommended, as it allows to mistype format name and it won't be validated without any error message. This behaviour is required by JSON Schema specification. - _schemas_: an array or object of schemas that will be added to the instance. In case you pass the array the schemas must have IDs in them. When the object is passed the method `addSchema(value, key)` will be called for each schema in this object. - _logger_: sets the logging method. Default is the global `console` object that should have methods `log`, `warn` and `error`. See [Error logging](#error-logging). Option values: - custom logger - it should have methods `log`, `warn` and `error`. If any of these methods is missing an exception will be thrown. - `false` - logging is disabled. ##### Referenced schema options - _schemaId_: this option defines which keywords are used as schema URI. Option value: - `"$id"` (default) - only use `$id` keyword as schema URI (as specified in JSON Schema draft-06/07), ignore `id` keyword (if it is present a warning will be logged). - `"id"` - only use `id` keyword as schema URI (as specified in JSON Schema draft-04), ignore `$id` keyword (if it is present a warning will be logged). - `"auto"` - use both `$id` and `id` keywords as schema URI. If both are present (in the same schema object) and different the exception will be thrown during schema compilation. - _missingRefs_: handling of missing referenced schemas. Option values: - `true` (default) - if the reference cannot be resolved during compilation the exception is thrown. The thrown error has properties `missingRef` (with hash fragment) and `missingSchema` (without it). Both properties are resolved relative to the current base id (usually schema id, unless it was substituted). - `"ignore"` - to log error during compilation and always pass validation. - `"fail"` - to log error and successfully compile schema but fail validation if this rule is checked. - _extendRefs_: validation of other keywords when `$ref` is present in the schema. 
Option values: - `"ignore"` (default) - when `$ref` is used other keywords are ignored (as per [JSON Reference](https://tools.ietf.org/html/draft-pbryan-zyp-json-ref-03#section-3) standard). A warning will be logged during the schema compilation. - `"fail"` (recommended) - if other validation keywords are used together with `$ref` the exception will be thrown when the schema is compiled. This option is recommended to make sure schema has no keywords that are ignored, which can be confusing. - `true` - validate all keywords in the schemas with `$ref` (the default behaviour in versions before 5.0.0). - _loadSchema_: asynchronous function that will be used to load remote schemas when `compileAsync` [method](#api-compileAsync) is used and some reference is missing (option `missingRefs` should NOT be 'fail' or 'ignore'). This function should accept remote schema uri as a parameter and return a Promise that resolves to a schema. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### Options to modify validated data - _removeAdditional_: remove additional properties - see example in [Filtering data](#filtering-data). This option is not used if schema is added with `addMetaSchema` method. Option values: - `false` (default) - not to remove additional properties - `"all"` - all additional properties are removed, regardless of `additionalProperties` keyword in schema (and no validation is made for them). - `true` - only additional properties with `additionalProperties` keyword equal to `false` are removed. - `"failing"` - additional properties that fail schema validation will be removed (where `additionalProperties` keyword is `false` or schema). - _useDefaults_: replace missing or undefined properties and items with the values from corresponding `default` keywords. Default behaviour is to ignore `default` keywords. This option is not used if schema is added with `addMetaSchema` method. See examples in [Assigning defaults](#assigning-defaults). Option values: - `false` (default) - do not use defaults - `true` - insert defaults by value (object literal is used). - `"empty"` - in addition to missing or undefined, use defaults for properties and items that are equal to `null` or `""` (an empty string). - `"shared"` (deprecated) - insert defaults by reference. If the default is an object, it will be shared by all instances of validated data. If you modify the inserted default in the validated data, it will be modified in the schema as well. - _coerceTypes_: change data type of data to match `type` keyword. See the example in [Coercing data types](#coercing-data-types) and [coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md). Option values: - `false` (default) - no type coercion. - `true` - coerce scalar data types. - `"array"` - in addition to coercions between scalar types, coerce scalar data to an array with one element and vice versa (as required by the schema). ##### Strict mode options - _strictDefaults_: report ignored `default` keywords in schemas. Option values: - `false` (default) - ignored defaults are not reported - `true` - if an ignored default is present, throw an error - `"log"` - if an ignored default is present, log warning - _strictKeywords_: report unknown keywords in schemas. 
  Option values:
  - `false` (default) - unknown keywords are not reported
  - `true` - if an unknown keyword is present, throw an error
  - `"log"` - if an unknown keyword is present, log warning
- _strictNumbers_: validate numbers strictly, failing validation for NaN and Infinity. Option values:
  - `false` (default) - NaN or Infinity will pass validation for numeric types
  - `true` - NaN or Infinity will not pass validation for numeric types

##### Asynchronous validation options

- _transpile_: Requires [ajv-async](https://github.com/ajv-validator/ajv-async) package. It determines whether Ajv transpiles compiled asynchronous validation function. Option values:
  - `undefined` (default) - transpile with [nodent](https://github.com/MatAtBread/nodent) if async functions are not supported.
  - `true` - always transpile with nodent.
  - `false` - do not transpile; if async functions are not supported an exception will be thrown.

##### Advanced options

- _meta_: add [meta-schema](http://json-schema.org/documentation.html) so it can be used by other schemas (true by default). If an object is passed, it will be used as the default meta-schema for schemas that have no `$schema` keyword. This default meta-schema MUST have `$schema` keyword.
- _validateSchema_: validate added/compiled schemas against meta-schema (true by default). `$schema` property in the schema can be http://json-schema.org/draft-07/schema or absent (draft-07 meta-schema will be used) or can be a reference to the schema previously added with `addMetaSchema` method. Option values:
  - `true` (default) - if the validation fails, throw the exception.
  - `"log"` - if the validation fails, log error.
  - `false` - skip schema validation.
- _addUsedSchema_: by default methods `compile` and `validate` add schemas to the instance if they have `$id` (or `id`) property that doesn't start with "#". If `$id` is present and it is not unique the exception will be thrown. Set this option to `false` to skip adding schemas to the instance and the `$id` uniqueness check when these methods are used. This option does not affect `addSchema` method.
- _inlineRefs_: Affects compilation of referenced schemas. Option values:
  - `true` (default) - the referenced schemas that don't have refs in them are inlined, regardless of their size - that substantially improves performance at the cost of the bigger size of compiled schema functions.
  - `false` - to not inline referenced schemas (they will be compiled as separate functions).
  - integer number - to limit the maximum number of keywords of the schema that will be inlined.
- _passContext_: pass validation context to custom keyword functions. If this option is `true` and you pass some context to the compiled validation function with `validate.call(context, data)`, the `context` will be available as `this` in your custom keywords. By default `this` is Ajv instance.
- _loopRequired_: by default `required` keyword is compiled into a single expression (or a sequence of statements in `allErrors` mode). In case of a very large number of properties in this keyword it may result in a very big validation function. Pass integer to set the number of properties above which `required` keyword will be validated in a loop - smaller validation function size but also worse performance.
- _ownProperties_: by default Ajv iterates over all enumerable object properties; when this option is `true` only own enumerable object properties (i.e. found directly on the object rather than on its prototype) are iterated. Contributed by @mbroadst.
- _multipleOfPrecision_: by default `multipleOf` keyword is validated by comparing the result of division with parseInt() of that result. It works for dividers that are bigger than 1. For small dividers such as 0.01 the result of the division is usually not integer (even when it should be integer, see issue [#84](https://github.com/ajv-validator/ajv/issues/84)). If you need to use fractional dividers set this option to some positive integer N to have `multipleOf` validated using this formula: `Math.abs(Math.round(division) - division) < 1e-N` (it is slower but allows for floating-point arithmetic deviations).
- _errorDataPath_ (deprecated): set `dataPath` to point to 'object' (default) or to 'property' when validating keywords `required`, `additionalProperties` and `dependencies`.
- _messages_: Include human-readable messages in errors. `true` by default. `false` can be passed when custom messages are used (e.g. with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n)).
- _sourceCode_: add `sourceCode` property to validating function (for debugging; this code can be different from the result of toString call).
- _processCode_: an optional function to process generated code before it is passed to Function constructor. It can be used to either beautify (the validating function is generated without line-breaks) or to transpile code. Starting from version 5.0.0 this option replaced options:
  - `beautify` that formatted the generated function using [js-beautify](https://github.com/beautify-web/js-beautify). If you want to beautify the generated code pass a function calling `require('js-beautify').js_beautify` as `processCode: code => js_beautify(code)`.
  - `transpile` that transpiled asynchronous validation function. You can still use `transpile` option with [ajv-async](https://github.com/ajv-validator/ajv-async) package. See [Asynchronous validation](#asynchronous-validation) for more information.
- _cache_: an optional instance of cache to store compiled schemas using stable-stringified schema as a key. For example, set-associative cache [sacjs](https://github.com/epoberezkin/sacjs) can be used. If not passed then a simple hash is used which is good enough for the common use case (a limited number of statically defined schemas). Cache should have methods `put(key, value)`, `get(key)`, `del(key)` and `clear()`.
- _serialize_: an optional function to serialize schema to cache key. Pass `false` to use schema itself as a key (e.g., if WeakMap used as a cache). By default [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) is used.

## Validation errors

In case of validation failure, Ajv assigns the array of errors to the `errors` property of the validation function (or to the `errors` property of the Ajv instance when `validate` or `validateSchema` methods were called). In case of [asynchronous validation](#asynchronous-validation), the returned promise is rejected with exception `Ajv.ValidationError` that has `errors` property.

### Error objects

Each error is an object with the following properties:

- _keyword_: validation keyword.
- _dataPath_: the path to the part of the data that was validated. By default `dataPath` uses JavaScript property access notation (e.g., `".prop[1].subProp"`). When the option `jsonPointers` is true (see [Options](#options)) `dataPath` will be set using JSON pointer standard (e.g., `"/prop/1/subProp"`).
- _schemaPath_: the path (JSON-pointer as a URI fragment) to the schema of the keyword that failed validation.
- _params_: the object with the additional information about error that can be used to create custom error messages (e.g., using [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package). See below for parameters set by all keywords.
- _message_: the standard error message (can be excluded with option `messages` set to false).
- _schema_: the schema of the keyword (added with `verbose` option).
- _parentSchema_: the schema containing the keyword (added with `verbose` option)
- _data_: the data validated by the keyword (added with `verbose` option).

__Please note__: `propertyNames` keyword schema validation errors have an additional property `propertyName`, and `dataPath` points to the object. After schema validation for each property name, if it is invalid an additional error is added with the property `keyword` equal to `"propertyNames"`.

### Error parameters

Properties of `params` object in errors depend on the keyword that failed validation.

- `maxItems`, `minItems`, `maxLength`, `minLength`, `maxProperties`, `minProperties` - property `limit` (number, the schema of the keyword).
- `additionalItems` - property `limit` (the maximum number of allowed items in case when `items` keyword is an array of schemas and `additionalItems` is false).
- `additionalProperties` - property `additionalProperty` (the property not used in `properties` and `patternProperties` keywords).
- `dependencies` - properties:
  - `property` (dependent property),
  - `missingProperty` (required missing dependency - only the first one is reported currently)
  - `deps` (required dependencies, comma separated list as a string),
  - `depsCount` (the number of required dependencies).
- `format` - property `format` (the schema of the keyword).
- `maximum`, `minimum` - properties:
  - `limit` (number, the schema of the keyword),
  - `exclusive` (boolean, the schema of `exclusiveMaximum` or `exclusiveMinimum`),
  - `comparison` (string, comparison operation to compare the data to the limit, with the data on the left and the limit on the right; can be "<", "<=", ">", ">=")
- `multipleOf` - property `multipleOf` (the schema of the keyword)
- `pattern` - property `pattern` (the schema of the keyword)
- `required` - property `missingProperty` (required property that is missing).
- `propertyNames` - property `propertyName` (an invalid property name).
- `patternRequired` (in ajv-keywords) - property `missingPattern` (required pattern that did not match any property).
- `type` - property `type` (required type(s), a string, can be a comma-separated list)
- `uniqueItems` - properties `i` and `j` (indices of duplicate items).
- `const` - property `allowedValue` pointing to the value (the schema of the keyword).
- `enum` - property `allowedValues` pointing to the array of values (the schema of the keyword).
- `$ref` - property `ref` with the referenced schema URI.
- `oneOf` - property `passingSchemas` (array of indices of passing schemas, null if no schema passes).
- custom keywords (in case keyword definition doesn't create errors) - property `keyword` (the keyword name).

### Error logging

Using the `logger` option when initializing Ajv will allow you to define custom logging. Here you can build upon the existing logging. The use of other logging packages is supported as long as the package or its associated wrapper exposes the required methods. If any of the required methods are missing an exception will be thrown.
- **Required Methods**: `log`, `warn`, `error`

```javascript
var otherLogger = new OtherLogger();
var ajv = new Ajv({
  logger: {
    log: console.log.bind(console),
    warn: function warn() {
      otherLogger.logWarn.apply(otherLogger, arguments);
    },
    error: function error() {
      otherLogger.logError.apply(otherLogger, arguments);
      console.error.apply(console, arguments);
    }
  }
});
```

## Plugins

Ajv can be extended with plugins that add custom keywords, formats or functions to process generated code. When such a plugin is published as an npm package it is recommended that it follows these conventions:

- it exports a function
- this function accepts ajv instance as the first parameter and returns the same instance to allow chaining
- this function can accept an optional configuration as the second parameter

If you have published a useful plugin please submit a PR to add it to the next section.

## Related packages

- [ajv-async](https://github.com/ajv-validator/ajv-async) - plugin to configure async validation mode
- [ajv-bsontype](https://github.com/BoLaMN/ajv-bsontype) - plugin to validate mongodb's bsonType formats
- [ajv-cli](https://github.com/jessedc/ajv-cli) - command line interface
- [ajv-errors](https://github.com/ajv-validator/ajv-errors) - plugin for custom error messages
- [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) - internationalised error messages
- [ajv-istanbul](https://github.com/ajv-validator/ajv-istanbul) - plugin to instrument generated validation code to measure test coverage of your schemas
- [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) - plugin with custom validation keywords (select, typeof, etc.)
- [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) - plugin with keywords $merge and $patch
- [ajv-pack](https://github.com/ajv-validator/ajv-pack) - produces a compact module exporting validation functions
- [ajv-formats-draft2019](https://github.com/luzlab/ajv-formats-draft2019) - format validators for draft2019 that aren't already included in ajv (i.e. `idn-hostname`, `idn-email`, `iri`, `iri-reference` and `duration`).

## Some packages using Ajv

- [webpack](https://github.com/webpack/webpack) - a module bundler.
  Its main purpose is to bundle JavaScript files for usage in a browser
- [jsonscript-js](https://github.com/JSONScript/jsonscript-js) - the interpreter for [JSONScript](http://www.jsonscript.org) - scripted processing of existing endpoints and services
- [osprey-method-handler](https://github.com/mulesoft-labs/osprey-method-handler) - Express middleware for validating requests and responses based on a RAML method object, used in [osprey](https://github.com/mulesoft/osprey) - validating API proxy generated from a RAML definition
- [har-validator](https://github.com/ahmadnassri/har-validator) - HTTP Archive (HAR) validator
- [jsoneditor](https://github.com/josdejong/jsoneditor) - a web-based tool to view, edit, format, and validate JSON http://jsoneditoronline.org
- [JSON Schema Lint](https://github.com/nickcmaynard/jsonschemalint) - a web tool to validate JSON/YAML document against a single JSON Schema http://jsonschemalint.com
- [objection](https://github.com/vincit/objection.js) - SQL-friendly ORM for Node.js
- [table](https://github.com/gajus/table) - formats data into a string table
- [ripple-lib](https://github.com/ripple/ripple-lib) - a JavaScript API for interacting with [Ripple](https://ripple.com) in Node.js and the browser
- [restbase](https://github.com/wikimedia/restbase) - distributed storage with REST API & dispatcher for backend services built to provide a low-latency & high-throughput API for Wikipedia / Wikimedia content
- [hippie-swagger](https://github.com/CacheControl/hippie-swagger) - [Hippie](https://github.com/vesln/hippie) wrapper that provides end to end API testing with swagger validation
- [react-form-controlled](https://github.com/seeden/react-form-controlled) - React controlled form components with validation
- [rabbitmq-schema](https://github.com/tjmehta/rabbitmq-schema) - a schema definition module for RabbitMQ graphs and messages
- [@query/schema](https://www.npmjs.com/package/@query/schema) - stream filtering with a URI-safe query syntax parsing to JSON Schema
- [chai-ajv-json-schema](https://github.com/peon374/chai-ajv-json-schema) - chai plugin to use JSON Schema with expect in mocha tests
- [grunt-jsonschema-ajv](https://github.com/SignpostMarv/grunt-jsonschema-ajv) - Grunt plugin for validating files against JSON Schema
- [extract-text-webpack-plugin](https://github.com/webpack-contrib/extract-text-webpack-plugin) - extract text from bundle into a file
- [electron-builder](https://github.com/electron-userland/electron-builder) - a solution to package and build a ready for distribution Electron app
- [addons-linter](https://github.com/mozilla/addons-linter) - Mozilla Add-ons Linter
- [gh-pages-generator](https://github.com/epoberezkin/gh-pages-generator) - multi-page site generator converting markdown files to GitHub pages
- [ESLint](https://github.com/eslint/eslint) - the pluggable linting utility for JavaScript and JSX

## Tests

```
npm install
git submodule update --init
npm test
```

## Contributing

All validation functions are generated using doT templates in the [dot](https://github.com/ajv-validator/ajv/tree/master/lib/dot) folder. Templates are precompiled so doT is not a run-time dependency.

`npm run build` - compiles templates to [dotjs](https://github.com/ajv-validator/ajv/tree/master/lib/dotjs) folder.
`npm run watch` - automatically compiles templates when files in dot folder change

Please see [Contributing guidelines](https://github.com/ajv-validator/ajv/blob/master/CONTRIBUTING.md)

## Changes history

See https://github.com/ajv-validator/ajv/releases

__Please note__: [Changes in version 7.0.0-beta](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0)

[Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0).

## Code of conduct

Please review and follow the [Code of conduct](https://github.com/ajv-validator/ajv/blob/master/CODE_OF_CONDUCT.md).

Please report any unacceptable behaviour to ajv.validator@gmail.com - it will be reviewed by the project team.

## Open-source software support

Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides centralised support to open-source software users, in addition to the support provided by software maintainers.

## License

[MIT](https://github.com/ajv-validator/ajv/blob/master/LICENSE)

<!--lint disable no-literal-urls-->
<p align="center">
  <a href="https://nodejs.org/">
    <img alt="Node.js" src="https://nodejs.org/static/images/logo-light.svg" width="400" />
  </a>
</p>

Node.js is an open-source, cross-platform, JavaScript runtime environment. It executes JavaScript code outside of a browser. For more information on using Node.js, see the [Node.js Website][].

The Node.js project uses an [open governance model](./GOVERNANCE.md). The [OpenJS Foundation][] provides support for the project.

**This project is bound by a [Code of Conduct][].**

# Table of contents

* [Support](#support)
* [Release types](#release-types)
* [Download](#download)
* [Current and LTS releases](#current-and-lts-releases)
* [Nightly releases](#nightly-releases)
* [API documentation](#api-documentation)
* [Verifying binaries](#verifying-binaries)
* [Building Node.js](#building-nodejs)
* [Security](#security)
* [Contributing to Node.js](#contributing-to-nodejs)
* [Current project team members](#current-project-team-members)
* [TSC (Technical Steering Committee)](#tsc-technical-steering-committee)
* [Collaborators](#collaborators)
* [Release keys](#release-keys)
* [License](#license)

## Support

Looking for help? Check out the [instructions for getting support](.github/SUPPORT.md).

## Release types

* **Current**: Under active development. Code for the Current release is in the branch for its major version number (for example, [v10.x](https://github.com/nodejs/node/tree/v10.x)). Node.js releases a new major version every 6 months, allowing for breaking changes. This happens in April and October every year. Releases appearing each October have a support life of 8 months. Releases appearing each April convert to LTS (see below) each October.
* **LTS**: Releases that receive Long Term Support, with a focus on stability and security. Every even-numbered major version will become an LTS release. LTS releases receive 12 months of _Active LTS_ support and a further 18 months of _Maintenance_. LTS release lines have alphabetically-ordered code names, beginning with v4 Argon. There are no breaking changes or feature additions, except in some special circumstances.
* **Nightly**: Code from the Current branch built every 24 hours when there are changes. Use with caution.

Current and LTS releases follow [Semantic Versioning](https://semver.org). A member of the Release Team [signs](#release-keys) each Current and LTS release.
For more information, see the [Release README](https://github.com/nodejs/Release#readme).

### Download

Binaries, installers, and source tarballs are available at <https://nodejs.org/en/download/>.

#### Current and LTS releases

<https://nodejs.org/download/release/>

The [latest](https://nodejs.org/download/release/latest/) directory is an alias for the latest Current release. The latest-_codename_ directory is an alias for the latest release from an LTS line. For example, the [latest-carbon](https://nodejs.org/download/release/latest-carbon/) directory contains the latest Carbon (Node.js 8) release.

#### Nightly releases

<https://nodejs.org/download/nightly/>

Each directory name and filename contains a date (in UTC) and the commit SHA at the HEAD of the release.

#### API documentation

Documentation for the latest Current release is at <https://nodejs.org/api/>. Version-specific documentation is available in each release directory in the _docs_ subdirectory. Version-specific documentation is also at <https://nodejs.org/download/docs/>.

### Verifying binaries

Download directories contain a `SHASUMS256.txt` file with SHA checksums for the files.

To download `SHASUMS256.txt` using `curl`:

```console
$ curl -O https://nodejs.org/dist/vx.y.z/SHASUMS256.txt
```

To check that a downloaded file matches the checksum, run it through `sha256sum` with a command such as:

```console
$ grep node-vx.y.z.tar.gz SHASUMS256.txt | sha256sum -c -
```

For Current and LTS, the GPG detached signature of `SHASUMS256.txt` is in `SHASUMS256.txt.sig`. You can use it with `gpg` to verify the integrity of `SHASUMS256.txt`. You will first need to import [the GPG keys of individuals authorized to create releases](#release-keys). To import the keys:

```console
$ gpg --keyserver pool.sks-keyservers.net --recv-keys DD8F2338BAE7501E3DD5AC78C273792F7D83545D
```

See the bottom of this README for a full script to import active release keys.

Next, download the `SHASUMS256.txt.sig` for the release:

```console
$ curl -O https://nodejs.org/dist/vx.y.z/SHASUMS256.txt.sig
```

Then use `gpg --verify SHASUMS256.txt.sig SHASUMS256.txt` to verify the file's signature.

## Building Node.js

See [BUILDING.md](BUILDING.md) for instructions on how to build Node.js from source and a list of supported platforms.

## Security

For information on reporting security vulnerabilities in Node.js, see [SECURITY.md](./SECURITY.md).

## Contributing to Node.js

* [Contributing to the project][]
* [Working Groups][]
* [Strategic initiatives][]
* [Technical values and prioritization][]

## Current project team members

For information about the governance of the Node.js project, see [GOVERNANCE.md](./GOVERNANCE.md).

<!-- node-core-utils depends on the format of the TSC list.
     If the format changes, those utilities need to be tested and updated. -->
### TSC (Technical Steering Committee)

<!--lint disable prohibited-strings-->

* [aduh95](https://github.com/aduh95) - **Antoine du Hamel** &lt;duhamelantoine1995@gmail.com&gt; (he/him)
* [apapirovski](https://github.com/apapirovski) - **Anatoli Papirovski** &lt;apapirovski@mac.com&gt; (he/him)
* [BethGriggs](https://github.com/BethGriggs) - **Beth Griggs** &lt;bgriggs@redhat.com&gt; (she/her)
* [BridgeAR](https://github.com/BridgeAR) - **Ruben Bridgewater** &lt;ruben@bridgewater.de&gt; (he/him)
* [ChALkeR](https://github.com/ChALkeR) - **Сковорода Никита Андреевич** &lt;chalkerx@gmail.com&gt; (he/him)
* [cjihrig](https://github.com/cjihrig) - **Colin Ihrig** &lt;cjihrig@gmail.com&gt; (he/him)
* [codebytere](https://github.com/codebytere) - **Shelley Vohr** &lt;shelley.vohr@gmail.com&gt; (she/her)
* [danbev](https://github.com/danbev) - **Daniel Bevenius** &lt;daniel.bevenius@gmail.com&gt; (he/him)
* [danielleadams](https://github.com/danielleadams) - **Danielle Adams** &lt;adamzdanielle@gmail.com&gt; (she/her)
* [fhinkel](https://github.com/fhinkel) - **Franziska Hinkelmann** &lt;franziska.hinkelmann@gmail.com&gt; (she/her)
* [gabrielschulhof](https://github.com/gabrielschulhof) - **Gabriel Schulhof** &lt;gabrielschulhof@gmail.com&gt;
* [gireeshpunathil](https://github.com/gireeshpunathil) - **Gireesh Punathil** &lt;gpunathi@in.ibm.com&gt; (he/him)
* [jasnell](https://github.com/jasnell) - **James M Snell** &lt;jasnell@gmail.com&gt; (he/him)
* [joyeecheung](https://github.com/joyeecheung) - **Joyee Cheung** &lt;joyeec9h3@gmail.com&gt; (she/her)
* [mcollina](https://github.com/mcollina) - **Matteo Collina** &lt;matteo.collina@gmail.com&gt; (he/him)
* [mhdawson](https://github.com/mhdawson) - **Michael Dawson** &lt;midawson@redhat.com&gt; (he/him)
* [mmarchini](https://github.com/mmarchini) - **Mary Marchini** &lt;oss@mmarchini.me&gt; (she/her)
* [MylesBorins](https://github.com/MylesBorins) - **Myles Borins** &lt;myles.borins@gmail.com&gt; (he/him)
* [ronag](https://github.com/ronag) - **Robert Nagy** &lt;ronagy@icloud.com&gt;
* [targos](https://github.com/targos) - **Michaël Zasso** &lt;targos@protonmail.com&gt; (he/him)
* [tniessen](https://github.com/tniessen) - **Tobias Nießen** &lt;tniessen@tnie.de&gt;
* [Trott](https://github.com/Trott) - **Rich Trott** &lt;rtrott@gmail.com&gt; (he/him)

<details>
<summary>Emeriti</summary>

### TSC emeriti

* [addaleax](https://github.com/addaleax) - **Anna Henningsen** &lt;anna@addaleax.net&gt; (she/her)
* [bnoordhuis](https://github.com/bnoordhuis) - **Ben Noordhuis** &lt;info@bnoordhuis.nl&gt;
* [chrisdickinson](https://github.com/chrisdickinson) - **Chris Dickinson** &lt;christopher.s.dickinson@gmail.com&gt;
* [evanlucas](https://github.com/evanlucas) - **Evan Lucas** &lt;evanlucas@me.com&gt; (he/him)
* [Fishrock123](https://github.com/Fishrock123) - **Jeremiah Senkpiel** &lt;fishrock123@rocketmail.com&gt; (he/they)
* [gibfahn](https://github.com/gibfahn) - **Gibson Fahnestock** &lt;gibfahn@gmail.com&gt; (he/him)
* [indutny](https://github.com/indutny) - **Fedor Indutny** &lt;fedor@indutny.com&gt;
* [isaacs](https://github.com/isaacs) - **Isaac Z. Schlueter** &lt;i@izs.me&gt;
Schlueter** &lt;i@izs.me&gt; * [joshgav](https://github.com/joshgav) - **Josh Gavant** &lt;josh.gavant@outlook.com&gt; * [mscdex](https://github.com/mscdex) - **Brian White** &lt;mscdex@mscdex.net&gt; * [nebrius](https://github.com/nebrius) - **Bryan Hughes** &lt;bryan@nebri.us&gt; * [ofrobots](https://github.com/ofrobots) - **Ali Ijaz Sheikh** &lt;ofrobots@google.com&gt; (he/him) * [orangemocha](https://github.com/orangemocha) - **Alexis Campailla** &lt;orangemocha@nodejs.org&gt; * [piscisaureus](https://github.com/piscisaureus) - **Bert Belder** &lt;bertbelder@gmail.com&gt; * [rvagg](https://github.com/rvagg) - **Rod Vagg** &lt;r@va.gg&gt; * [sam-github](https://github.com/sam-github) - **Sam Roberts** &lt;vieuxtech@gmail.com&gt; * [shigeki](https://github.com/shigeki) - **Shigeki Ohtsu** &lt;ohtsu@ohtsu.org&gt; (he/him) * [thefourtheye](https://github.com/thefourtheye) - **Sakthipriyan Vairamani** &lt;thechargingvolcano@gmail.com&gt; (he/him) * [TimothyGu](https://github.com/TimothyGu) - **Tiancheng "Timothy" Gu** &lt;timothygu99@gmail.com&gt; (he/him) * [trevnorris](https://github.com/trevnorris) - **Trevor Norris** &lt;trev.norris@gmail.com&gt; </details> <!-- node-core-utils and find-inactive-collaborators.mjs depend on the format of the collaborator list. If the format changes, those utilities need to be tested and updated. --> ### Collaborators * [addaleax](https://github.com/addaleax) - **Anna Henningsen** &lt;anna@addaleax.net&gt; (she/her) * [aduh95](https://github.com/aduh95) - **Antoine du Hamel** &lt;duhamelantoine1995@gmail.com&gt; (he/him) * [ak239](https://github.com/ak239) - **Aleksei Koziatinskii** &lt;ak239spb@gmail.com&gt; * [antsmartian](https://github.com/antsmartian) - **Anto Aravinth** &lt;anto.aravinth.cse@gmail.com&gt; (he/him) * [apapirovski](https://github.com/apapirovski) - **Anatoli Papirovski** &lt;apapirovski@mac.com&gt; (he/him) * [AshCripps](https://github.com/AshCripps) - **Ash Cripps** &lt;acripps@redhat.com&gt; * [bcoe](https://github.com/bcoe) - **Ben Coe** &lt;bencoe@gmail.com&gt; (he/him) * [bengl](https://github.com/bengl) - **Bryan English** &lt;bryan@bryanenglish.com&gt; (he/him) * [benjamingr](https://github.com/benjamingr) - **Benjamin Gruenbaum** &lt;benjamingr@gmail.com&gt; * [BethGriggs](https://github.com/BethGriggs) - **Beth Griggs** &lt;bgriggs@redhat.com&gt; (she/her) * [bmeck](https://github.com/bmeck) - **Bradley Farias** &lt;bradley.meck@gmail.com&gt; * [bmeurer](https://github.com/bmeurer) - **Benedikt Meurer** &lt;benedikt.meurer@gmail.com&gt; * [boneskull](https://github.com/boneskull) - **Christopher Hiller** &lt;boneskull@boneskull.com&gt; (he/him) * [BridgeAR](https://github.com/BridgeAR) - **Ruben Bridgewater** &lt;ruben@bridgewater.de&gt; (he/him) * [bzoz](https://github.com/bzoz) - **Bartosz Sosnowski** &lt;bartosz@janeasystems.com&gt; * [cclauss](https://github.com/cclauss) - **Christian Clauss** &lt;cclauss@me.com&gt; (he/him) * [ChALkeR](https://github.com/ChALkeR) - **Сковорода Никита Андреевич** &lt;chalkerx@gmail.com&gt; (he/him) * [cjihrig](https://github.com/cjihrig) - **Colin Ihrig** &lt;cjihrig@gmail.com&gt; (he/him) * [codebytere](https://github.com/codebytere) - **Shelley Vohr** &lt;shelley.vohr@gmail.com&gt; (she/her) * [danbev](https://github.com/danbev) - **Daniel Bevenius** &lt;daniel.bevenius@gmail.com&gt; (he/him) * [danielleadams](https://github.com/danielleadams) - **Danielle Adams** &lt;adamzdanielle@gmail.com&gt; (she/her) * [davisjam](https://github.com/davisjam) - **Jamie Davis** 
&lt;davisjam@vt.edu&gt; (he/him) * [DerekNonGeneric](https://github.com/DerekNonGeneric) - **Derek Lewis** &lt;DerekNonGeneric@inf.is&gt; (he/him) * [devnexen](https://github.com/devnexen) - **David Carlier** &lt;devnexen@gmail.com&gt; * [devsnek](https://github.com/devsnek) - **Gus Caplan** &lt;me@gus.host&gt; (they/them) * [dmabupt](https://github.com/dmabupt) - **Xu Meng** &lt;dmabupt@gmail.com&gt; (he/him) * [dnlup](https://github.com/dnlup) **Daniele Belardi** &lt;dwon.dnl@gmail.com&gt; (he/him) * [edsadr](https://github.com/edsadr) - **Adrian Estrada** &lt;edsadr@gmail.com&gt; (he/him) * [eugeneo](https://github.com/eugeneo) - **Eugene Ostroukhov** &lt;eostroukhov@google.com&gt; * [evanlucas](https://github.com/evanlucas) - **Evan Lucas** &lt;evanlucas@me.com&gt; (he/him) * [fhinkel](https://github.com/fhinkel) - **Franziska Hinkelmann** &lt;franziska.hinkelmann@gmail.com&gt; (she/her) * [Fishrock123](https://github.com/Fishrock123) - **Jeremiah Senkpiel** &lt;fishrock123@rocketmail.com&gt; (he/they) * [Flarna](https://github.com/Flarna) - **Gerhard Stöbich** &lt;deb2001-github@yahoo.de&gt; (he/they) * [gabrielschulhof](https://github.com/gabrielschulhof) - **Gabriel Schulhof** &lt;gabrielschulhof@gmail.com&gt; * [geek](https://github.com/geek) - **Wyatt Preul** &lt;wpreul@gmail.com&gt; * [gengjiawen](https://github.com/gengjiawen) - **Jiawen Geng** &lt;technicalcute@gmail.com&gt; * [GeoffreyBooth](https://github.com/geoffreybooth) - **Geoffrey Booth** &lt;webmaster@geoffreybooth.com&gt; (he/him) * [gireeshpunathil](https://github.com/gireeshpunathil) - **Gireesh Punathil** &lt;gpunathi@in.ibm.com&gt; (he/him) * [guybedford](https://github.com/guybedford) - **Guy Bedford** &lt;guybedford@gmail.com&gt; (he/him) * [HarshithaKP](https://github.com/HarshithaKP) - **Harshitha K P** &lt;harshitha014@gmail.com&gt; (she/her) * [hashseed](https://github.com/hashseed) - **Yang Guo** &lt;yangguo@chromium.org&gt; (he/him) * [himself65](https://github.com/himself65) - **Zeyu Yang** &lt;himself65@outlook.com&gt; (he/him) * [hiroppy](https://github.com/hiroppy) - **Yuta Hiroto** &lt;hello@hiroppy.me&gt; (he/him) * [iansu](https://github.com/iansu) - **Ian Sutherland** &lt;ian@iansutherland.ca&gt; * [indutny](https://github.com/indutny) - **Fedor Indutny** &lt;fedor@indutny.com&gt; * [JacksonTian](https://github.com/JacksonTian) - **Jackson Tian** &lt;shyvo1987@gmail.com&gt; * [jasnell](https://github.com/jasnell) - **James M Snell** &lt;jasnell@gmail.com&gt; (he/him) * [jkrems](https://github.com/jkrems) - **Jan Krems** &lt;jan.krems@gmail.com&gt; (he/him) * [joaocgreis](https://github.com/joaocgreis) - **João Reis** &lt;reis@janeasystems.com&gt; * [joyeecheung](https://github.com/joyeecheung) - **Joyee Cheung** &lt;joyeec9h3@gmail.com&gt; (she/her) * [juanarbol](https://github.com/juanarbol) - **Juan José Arboleda** &lt;soyjuanarbol@gmail.com&gt; (he/him) * [JungMinu](https://github.com/JungMinu) - **Minwoo Jung** &lt;nodecorelab@gmail.com&gt; (he/him) * [legendecas](https://github.com/legendecas) - **Chengzhong Wu** &lt;legendecas@gmail.com&gt; (he/him) * [Leko](https://github.com/Leko) - **Shingo Inoue** &lt;leko.noor@gmail.com&gt; (he/him) * [linkgoron](https://github.com/linkgoron) - **Nitzan Uziely** &lt;linkgoron@gmail.com&gt; * [lpinca](https://github.com/lpinca) - **Luigi Pinca** &lt;luigipinca@gmail.com&gt; (he/him) * [lundibundi](https://github.com/lundibundi) - **Denys Otrishko** &lt;shishugi@gmail.com&gt; (he/him) * [Lxxyx](https://github.com/Lxxyx) - **Zijian Liu** 
&lt;lxxyxzj@gmail.com&gt; (he/him) * [mafintosh](https://github.com/mafintosh) - **Mathias Buus** &lt;mathiasbuus@gmail.com&gt; (he/him) * [mcollina](https://github.com/mcollina) - **Matteo Collina** &lt;matteo.collina@gmail.com&gt; (he/him) * [mhdawson](https://github.com/mhdawson) - **Michael Dawson** &lt;midawson@redhat.com&gt; (he/him) * [miladfarca](https://github.com/miladfarca) - **Milad Fa** &lt;mfarazma@redhat.com&gt; (he/him) * [mildsunrise](https://github.com/mildsunrise) - **Alba Mendez** &lt;me@alba.sh&gt; (she/her) * [misterdjules](https://github.com/misterdjules) - **Julien Gilli** &lt;jgilli@netflix.com&gt; * [mmarchini](https://github.com/mmarchini) - **Mary Marchini** &lt;oss@mmarchini.me&gt; (she/her) * [mscdex](https://github.com/mscdex) - **Brian White** &lt;mscdex@mscdex.net&gt; * [MylesBorins](https://github.com/MylesBorins) - **Myles Borins** &lt;myles.borins@gmail.com&gt; (he/him) * [oyyd](https://github.com/oyyd) - **Ouyang Yadong** &lt;oyydoibh@gmail.com&gt; (he/him) * [panva](https://github.com/panva) - **Filip Skokan** &lt;panva.ip@gmail.com&gt; * [PoojaDurgad](https://github.com/PoojaDurgad) - **Pooja D P** &lt;Pooja.D.P@ibm.com&gt; (she/her) * [puzpuzpuz](https://github.com/puzpuzpuz) - **Andrey Pechkurov** &lt;apechkurov@gmail.com&gt; (he/him) * [Qard](https://github.com/Qard) - **Stephen Belanger** &lt;admin@stephenbelanger.com&gt; (he/him) * [RaisinTen](https://github.com/RaisinTen) - **Darshan Sen** &lt;raisinten@gmail.com&gt; (he/him) * [refack](https://github.com/refack) - **Refael Ackermann (רפאל פלחי)** &lt;refack@gmail.com&gt; (he/him/הוא/אתה) * [rexagod](https://github.com/rexagod) - **Pranshu Srivastava** &lt;rexagod@gmail.com&gt; (he/him) * [richardlau](https://github.com/richardlau) - **Richard Lau** &lt;rlau@redhat.com&gt; * [rickyes](https://github.com/rickyes) - **Ricky Zhou** &lt;0x19951125@gmail.com&gt; (he/him) * [ronag](https://github.com/ronag) - **Robert Nagy** &lt;ronagy@icloud.com&gt; * [ruyadorno](https://github.com/ruyadorno) - **Ruy Adorno** &lt;ruyadorno@github.com&gt; (he/him) * [rvagg](https://github.com/rvagg) - **Rod Vagg** &lt;rod@vagg.org&gt; * [ryzokuken](https://github.com/ryzokuken) - **Ujjwal Sharma** &lt;ryzokuken@disroot.org&gt; (he/him) * [saghul](https://github.com/saghul) - **Saúl Ibarra Corretgé** &lt;s@saghul.net&gt; * [santigimeno](https://github.com/santigimeno) - **Santiago Gimeno** &lt;santiago.gimeno@gmail.com&gt; * [seishun](https://github.com/seishun) - **Nikolai Vavilov** &lt;vvnicholas@gmail.com&gt; * [shisama](https://github.com/shisama) - **Masashi Hirano** &lt;shisama07@gmail.com&gt; (he/him) * [silverwind](https://github.com/silverwind) - **Roman Reiss** &lt;me@silverwind.io&gt; * [srl295](https://github.com/srl295) - **Steven R Loomis** &lt;srloomis@us.ibm.com&gt; * [starkwang](https://github.com/starkwang) - **Weijia Wang** &lt;starkwang@126.com&gt; * [sxa](https://github.com/sxa) - **Stewart X Addison** &lt;sxa@redhat.com&gt; (he/him) * [targos](https://github.com/targos) - **Michaël Zasso** &lt;targos@protonmail.com&gt; (he/him) * [TimothyGu](https://github.com/TimothyGu) - **Tiancheng "Timothy" Gu** &lt;timothygu99@gmail.com&gt; (he/him) * [tniessen](https://github.com/tniessen) - **Tobias Nießen** &lt;tniessen@tnie.de&gt; * [trivikr](https://github.com/trivikr) - **Trivikram Kamat** &lt;trivikr.dev@gmail.com&gt; * [Trott](https://github.com/Trott) - **Rich Trott** &lt;rtrott@gmail.com&gt; (he/him) * [vdeturckheim](https://github.com/vdeturckheim) - **Vladimir de Turckheim** 
&lt;vlad2t@hotmail.com&gt; (he/him) * [watilde](https://github.com/watilde) - **Daijiro Wachi** &lt;daijiro.wachi@gmail.com&gt; (he/him) * [watson](https://github.com/watson) - **Thomas Watson** &lt;w@tson.dk&gt; * [XadillaX](https://github.com/XadillaX) - **Khaidi Chu** &lt;i@2333.moe&gt; (he/him) * [yashLadha](https://github.com/yashLadha) - **Yash Ladha** &lt;yash@yashladha.in&gt; (he/him) * [yhwang](https://github.com/yhwang) - **Yihong Wang** &lt;yh.wang@ibm.com&gt; * [yorkie](https://github.com/yorkie) - **Yorkie Liu** &lt;yorkiefixer@gmail.com&gt; * [yosuke-furukawa](https://github.com/yosuke-furukawa) - **Yosuke Furukawa** &lt;yosuke.furukawa@gmail.com&gt; * [ZYSzys](https://github.com/ZYSzys) - **Yongsheng Zhang** &lt;zyszys98@gmail.com&gt; (he/him) <details> <summary>Emeriti</summary> <!-- find-inactive-collaborators.mjs depends on the format of the emeriti list. If the format changes, those utilities need to be tested and updated. --> ### Collaborator emeriti * [andrasq](https://github.com/andrasq) - **Andras** &lt;andras@kinvey.com&gt; * [AnnaMag](https://github.com/AnnaMag) - **Anna M. Kedzierska** &lt;anna.m.kedzierska@gmail.com&gt; * [AndreasMadsen](https://github.com/AndreasMadsen) - **Andreas Madsen** &lt;amwebdk@gmail.com&gt; (he/him) * [aqrln](https://github.com/aqrln) - **Alexey Orlenko** &lt;eaglexrlnk@gmail.com&gt; (he/him) * [bnoordhuis](https://github.com/bnoordhuis) - **Ben Noordhuis** &lt;info@bnoordhuis.nl&gt; * [brendanashworth](https://github.com/brendanashworth) - **Brendan Ashworth** &lt;brendan.ashworth@me.com&gt; * [calvinmetcalf](https://github.com/calvinmetcalf) - **Calvin Metcalf** &lt;calvin.metcalf@gmail.com&gt; * [chrisdickinson](https://github.com/chrisdickinson) - **Chris Dickinson** &lt;christopher.s.dickinson@gmail.com&gt; * [claudiorodriguez](https://github.com/claudiorodriguez) - **Claudio Rodriguez** &lt;cjrodr@yahoo.com&gt; * [DavidCai1993](https://github.com/DavidCai1993) - **David Cai** &lt;davidcai1993@yahoo.com&gt; (he/him) * [digitalinfinity](https://github.com/digitalinfinity) - **Hitesh Kanwathirtha** &lt;digitalinfinity@gmail.com&gt; (he/him) * [eljefedelrodeodeljefe](https://github.com/eljefedelrodeodeljefe) - **Robert Jefe Lindstaedt** &lt;robert.lindstaedt@gmail.com&gt; * [estliberitas](https://github.com/estliberitas) - **Alexander Makarenko** &lt;estliberitas@gmail.com&gt; * [firedfox](https://github.com/firedfox) - **Daniel Wang** &lt;wangyang0123@gmail.com&gt; * [gdams](https://github.com/gdams) - **George Adams** &lt;george.adams@microsoft.com&gt; (he/him) * [gibfahn](https://github.com/gibfahn) - **Gibson Fahnestock** &lt;gibfahn@gmail.com&gt; (he/him) * [glentiki](https://github.com/glentiki) - **Glen Keane** &lt;glenkeane.94@gmail.com&gt; (he/him) * [iarna](https://github.com/iarna) - **Rebecca Turner** &lt;me@re-becca.org&gt; * [imran-iq](https://github.com/imran-iq) - **Imran Iqbal** &lt;imran@imraniqbal.org&gt; * [imyller](https://github.com/imyller) - **Ilkka Myller** &lt;ilkka.myller@nodefield.com&gt; * [isaacs](https://github.com/isaacs) - **Isaac Z. Schlueter** &lt;i@izs.me&gt; * [italoacasas](https://github.com/italoacasas) - **Italo A. 
Casas** &lt;me@italoacasas.com&gt; (he/him) * [jasongin](https://github.com/jasongin) - **Jason Ginchereau** &lt;jasongin@microsoft.com&gt; * [jbergstroem](https://github.com/jbergstroem) - **Johan Bergström** &lt;bugs@bergstroem.nu&gt; * [jdalton](https://github.com/jdalton) - **John-David Dalton** &lt;john.david.dalton@gmail.com&gt; * [jhamhader](https://github.com/jhamhader) - **Yuval Brik** &lt;yuval@brik.org.il&gt; * [joshgav](https://github.com/joshgav) - **Josh Gavant** &lt;josh.gavant@outlook.com&gt; * [julianduque](https://github.com/julianduque) - **Julian Duque** &lt;julianduquej@gmail.com&gt; (he/him) * [kfarnung](https://github.com/kfarnung) - **Kyle Farnung** &lt;kfarnung@microsoft.com&gt; (he/him) * [kunalspathak](https://github.com/kunalspathak) - **Kunal Pathak** &lt;kunal.pathak@microsoft.com&gt; * [lance](https://github.com/lance) - **Lance Ball** &lt;lball@redhat.com&gt; (he/him) * [lucamaraschi](https://github.com/lucamaraschi) - **Luca Maraschi** &lt;luca.maraschi@gmail.com&gt; (he/him) * [lxe](https://github.com/lxe) - **Aleksey Smolenchuk** &lt;lxe@lxe.co&gt; * [maclover7](https://github.com/maclover7) - **Jon Moss** &lt;me@jonathanmoss.me&gt; (he/him) * [matthewloring](https://github.com/matthewloring) - **Matthew Loring** &lt;mattloring@google.com&gt; * [micnic](https://github.com/micnic) - **Nicu Micleușanu** &lt;micnic90@gmail.com&gt; (he/him) * [mikeal](https://github.com/mikeal) - **Mikeal Rogers** &lt;mikeal.rogers@gmail.com&gt; * [monsanto](https://github.com/monsanto) - **Christopher Monsanto** &lt;chris@monsan.to&gt; * [MoonBall](https://github.com/MoonBall) - **Chen Gang** &lt;gangc.cxy@foxmail.com&gt; * [not-an-aardvark](https://github.com/not-an-aardvark) - **Teddy Katz** &lt;teddy.katz@gmail.com&gt; (he/him) * [ofrobots](https://github.com/ofrobots) - **Ali Ijaz Sheikh** &lt;ofrobots@google.com&gt; (he/him) * [Olegas](https://github.com/Olegas) - **Oleg Elifantiev** &lt;oleg@elifantiev.ru&gt; * [orangemocha](https://github.com/orangemocha) - **Alexis Campailla** &lt;orangemocha@nodejs.org&gt; * [othiym23](https://github.com/othiym23) - **Forrest L Norvell** &lt;ogd@aoaioxxysz.net&gt; (he/him) * [petkaantonov](https://github.com/petkaantonov) - **Petka Antonov** &lt;petka_antonov@hotmail.com&gt; * [phillipj](https://github.com/phillipj) - **Phillip Johnsen** &lt;johphi@gmail.com&gt; * [piscisaureus](https://github.com/piscisaureus) - **Bert Belder** &lt;bertbelder@gmail.com&gt; * [pmq20](https://github.com/pmq20) - **Minqi Pan** &lt;pmq2001@gmail.com&gt; * [princejwesley](https://github.com/princejwesley) - **Prince John Wesley** &lt;princejohnwesley@gmail.com&gt; * [psmarshall](https://github.com/psmarshall) - **Peter Marshall** &lt;petermarshall@chromium.org&gt; (he/him) * [rlidwka](https://github.com/rlidwka) - **Alex Kocharin** &lt;alex@kocharin.ru&gt; * [rmg](https://github.com/rmg) - **Ryan Graham** &lt;r.m.graham@gmail.com&gt; * [robertkowalski](https://github.com/robertkowalski) - **Robert Kowalski** &lt;rok@kowalski.gd&gt; * [romankl](https://github.com/romankl) - **Roman Klauke** &lt;romaaan.git@gmail.com&gt; * [ronkorving](https://github.com/ronkorving) - **Ron Korving** &lt;ron@ronkorving.nl&gt; * [RReverser](https://github.com/RReverser) - **Ingvar Stepanyan** &lt;me@rreverser.com&gt; * [rubys](https://github.com/rubys) - **Sam Ruby** &lt;rubys@intertwingly.net&gt; * [sam-github](https://github.com/sam-github) - **Sam Roberts** &lt;vieuxtech@gmail.com&gt; * [sebdeckers](https://github.com/sebdeckers) - **Sebastiaan Deckers** 
&lt;sebdeckers83@gmail.com&gt; * [shigeki](https://github.com/shigeki) - **Shigeki Ohtsu** &lt;ohtsu@ohtsu.org&gt; (he/him) * [stefanmb](https://github.com/stefanmb) - **Stefan Budeanu** &lt;stefan@budeanu.com&gt; * [tellnes](https://github.com/tellnes) - **Christian Tellnes** &lt;christian@tellnes.no&gt; * [thefourtheye](https://github.com/thefourtheye) - **Sakthipriyan Vairamani** &lt;thechargingvolcano@gmail.com&gt; (he/him) * [thlorenz](https://github.com/thlorenz) - **Thorsten Lorenz** &lt;thlorenz@gmx.de&gt; * [trevnorris](https://github.com/trevnorris) - **Trevor Norris** &lt;trev.norris@gmail.com&gt; * [tunniclm](https://github.com/tunniclm) - **Mike Tunnicliffe** &lt;m.j.tunnicliffe@gmail.com&gt; * [vkurchatkin](https://github.com/vkurchatkin) - **Vladimir Kurchatkin** &lt;vladimir.kurchatkin@gmail.com&gt; * [vsemozhetbyt](https://github.com/vsemozhetbyt) - **Vse Mozhet Byt** &lt;vsemozhetbyt@gmail.com&gt; (he/him) * [whitlockjc](https://github.com/whitlockjc) - **Jeremy Whitlock** &lt;jwhitlock@apache.org&gt; </details> <!--lint enable prohibited-strings--> Collaborators follow the [Collaborator Guide](./doc/guides/collaborator-guide.md) in maintaining the Node.js project. ### Triagers * [Ayase-252](https://github.com/Ayase-252) - **Qingyu Deng** &lt;i@ayase-lab.com&gt; * [himadriganguly](https://github.com/himadriganguly) - **Himadri Ganguly** &lt;himadri.tech@gmail.com&gt; (he/him) * [marsonya](https://github.com/marsonya) - **Akhil Marsonya** &lt;akhil.marsonya27@gmail.com&gt; (he/him) * [PoojaDurgad](https://github.com/PoojaDurgad) - **Pooja Durgad** &lt;Pooja.D.P@ibm.com&gt; * [RaisinTen](https://github.com/RaisinTen) - **Darshan Sen** &lt;raisinten@gmail.com&gt; ### Release keys Primary GPG keys for Node.js Releasers (some Releasers sign with subkeys): * **Beth Griggs** &lt;bgriggs@redhat.com&gt; `4ED778F539E3634C779C87C6D7062848A1AB005C` * **Colin Ihrig** &lt;cjihrig@gmail.com&gt; `94AE36675C464D64BAFA68DD7434390BDBE9B9C5` * **Danielle Adams** &lt;adamzdanielle@gmail.com&gt; `74F12602B6F1C4E913FAA37AD3A89613643B6201` * **James M Snell** &lt;jasnell@keybase.io&gt; `71DCFD284A79C3B38668286BC97EC7A07EDE3FC1` * **Michaël Zasso** &lt;targos@protonmail.com&gt; `8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600` * **Myles Borins** &lt;myles.borins@gmail.com&gt; `C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8` * **Richard Lau** &lt;rlau@redhat.com&gt; `C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C` * **Rod Vagg** &lt;rod@vagg.org&gt; `DD8F2338BAE7501E3DD5AC78C273792F7D83545D` * **Ruben Bridgewater** &lt;ruben@bridgewater.de&gt; `A48C2BEE680E841632CD4E44F07496B3EB3C1762` * **Ruy Adorno** &lt;ruyadorno@hotmail.com&gt; `108F52B48DB57BB0CC439B2997B01419BD92F80A` * **Shelley Vohr** &lt;shelley.vohr@gmail.com&gt; `B9E2F5981AA6E0CD28160D9FF13993A75599653C` To import the full set of trusted release keys (including subkeys possibly used to sign releases): ```bash gpg --keyserver pool.sks-keyservers.net --recv-keys 4ED778F539E3634C779C87C6D7062848A1AB005C gpg --keyserver pool.sks-keyservers.net --recv-keys 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 gpg --keyserver pool.sks-keyservers.net --recv-keys 74F12602B6F1C4E913FAA37AD3A89613643B6201 gpg --keyserver pool.sks-keyservers.net --recv-keys 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 gpg --keyserver pool.sks-keyservers.net --recv-keys 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 gpg --keyserver pool.sks-keyservers.net --recv-keys C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 gpg --keyserver pool.sks-keyservers.net --recv-keys C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C gpg 
--keyserver pool.sks-keyservers.net --recv-keys DD8F2338BAE7501E3DD5AC78C273792F7D83545D gpg --keyserver pool.sks-keyservers.net --recv-keys A48C2BEE680E841632CD4E44F07496B3EB3C1762 gpg --keyserver pool.sks-keyservers.net --recv-keys 108F52B48DB57BB0CC439B2997B01419BD92F80A gpg --keyserver pool.sks-keyservers.net --recv-keys B9E2F5981AA6E0CD28160D9FF13993A75599653C ``` See the section above on [Verifying Binaries](#verifying-binaries) for how to use these keys to verify a downloaded file. <details> <summary>Other keys used to sign some previous releases</summary> * **Chris Dickinson** &lt;christopher.s.dickinson@gmail.com&gt; `9554F04D7259F04124DE6B476D5A82AC7E37093B` * **Danielle Adams** &lt;adamzdanielle@gmail.com&gt; `1C050899334244A8AF75E53792EF661D867B9DFA` * **Evan Lucas** &lt;evanlucas@me.com&gt; `B9AE9905FFD7803F25714661B63B535A4C206CA9` * **Gibson Fahnestock** &lt;gibfahn@gmail.com&gt; `77984A986EBC2AA786BC0F66B01FBB92821C587A` * **Isaac Z. Schlueter** &lt;i@izs.me&gt; `93C7E9E91B49E432C2F75674B0A78B0A6C481CF6` * **Italo A. Casas** &lt;me@italoacasas.com&gt; `56730D5401028683275BD23C23EFEFE93C4CFFFE` * **Jeremiah Senkpiel** &lt;fishrock@keybase.io&gt; `FD3A5288F042B6850C66B31F09FE44734EB7990E` * **Julien Gilli** &lt;jgilli@fastmail.fm&gt; `114F43EE0176B71C7BC219DD50A3051F888C628D` * **Timothy J Fontaine** &lt;tjfontaine@gmail.com&gt; `7937DFD2AB06298B2293C3187D33FF9D0246406D` </details> ## License Node.js is available under the [MIT license](https://opensource.org/licenses/MIT). Node.js also includes external libraries that are available under a variety of licenses. See [LICENSE](https://github.com/nodejs/node/blob/HEAD/LICENSE) for the full license text. [Code of Conduct]: https://github.com/nodejs/admin/blob/HEAD/CODE_OF_CONDUCT.md [Contributing to the project]: CONTRIBUTING.md [Node.js Website]: https://nodejs.org/ [OpenJS Foundation]: https://openjsf.org/ [Strategic initiatives]: doc/guides/strategic-initiatives.md [Technical values and prioritization]: doc/guides/technical-values.md [Working Groups]: https://github.com/nodejs/TSC/blob/HEAD/WORKING_GROUPS.md # json-schema-traverse Traverse JSON Schema passing each schema object to callback [![Build Status](https://travis-ci.org/epoberezkin/json-schema-traverse.svg?branch=master)](https://travis-ci.org/epoberezkin/json-schema-traverse) [![npm version](https://badge.fury.io/js/json-schema-traverse.svg)](https://www.npmjs.com/package/json-schema-traverse) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master) ## Install ``` npm install json-schema-traverse ``` ## Usage ```javascript const traverse = require('json-schema-traverse'); const schema = { properties: { foo: {type: 'string'}, bar: {type: 'integer'} } }; traverse(schema, {cb}); // cb is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // Or: traverse(schema, {cb: {pre, post}}); // pre is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // // post is called 3 times with: // 1. {type: 'string'} // 2. {type: 'integer'} // 3. root schema ``` Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved, they are passed as is. 
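For example, a minimal sketch of such a callback (only the first two of the parameters documented below are used; the names `subschema` and `jsonPtr` are illustrative) could log each visited subschema together with its JSON pointer:

```javascript
const traverse = require('json-schema-traverse');

const schema = {
  properties: {
    foo: {type: 'string'},
    bar: {type: 'integer'}
  }
};

// Log every visited subschema with its JSON pointer (pre-order traversal)
const cb = (subschema, jsonPtr) => {
  console.log(jsonPtr || '<root>', JSON.stringify(subschema));
};

traverse(schema, {cb});
// Prints something like:
// <root> {"properties":{"foo":{"type":"string"},"bar":{"type":"integer"}}}
// /properties/foo {"type":"string"}
// /properties/bar {"type":"integer"}
```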
Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed. Callback is passed these parameters: - _schema_: the current schema object - _JSON pointer_: from the root schema to the current schema object - _root schema_: the schema passed to `traverse` object - _parent JSON pointer_: from the root schema to the parent schema object (see below) - _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.) - _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema - _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'` ## Traverse objects in all unknown keywords ```javascript const traverse = require('json-schema-traverse'); const schema = { mySchema: { minimum: 1, maximum: 2 } }; traverse(schema, {allKeys: true, cb}); // cb is called 2 times with: // 1. root schema // 2. mySchema ``` Without option `allKeys: true` callback will be called only with root schema. ## License [MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE) # ES6-Promise (subset of [rsvp.js](https://github.com/tildeio/rsvp.js)) [![Build Status](https://travis-ci.org/stefanpenner/es6-promise.svg?branch=master)](https://travis-ci.org/stefanpenner/es6-promise) This is a polyfill of the [ES6 Promise](http://www.ecma-international.org/ecma-262/6.0/#sec-promise-constructor). The implementation is a subset of [rsvp.js](https://github.com/tildeio/rsvp.js) extracted by @jakearchibald, if you're wanting extra features and more debugging options, check out the [full library](https://github.com/tildeio/rsvp.js). For API details and how to use promises, see the <a href="http://www.html5rocks.com/en/tutorials/es6/promises/">JavaScript Promises HTML5Rocks article</a>. ## Downloads * [es6-promise 27.86 KB (7.33 KB gzipped)](https://cdn.jsdelivr.net/npm/es6-promise/dist/es6-promise.js) * [es6-promise-auto 27.78 KB (7.3 KB gzipped)](https://cdn.jsdelivr.net/npm/es6-promise/dist/es6-promise.auto.js) - Automatically provides/replaces `Promise` if missing or broken. * [es6-promise-min 6.17 KB (2.4 KB gzipped)](https://cdn.jsdelivr.net/npm/es6-promise/dist/es6-promise.min.js) * [es6-promise-auto-min 6.19 KB (2.4 KB gzipped)](https://cdn.jsdelivr.net/npm/es6-promise/dist/es6-promise.auto.min.js) - Minified version of `es6-promise-auto` above. ## CDN To use via a CDN include this in your html: ```html <!-- Automatically provides/replaces `Promise` if missing or broken. --> <script src="https://cdn.jsdelivr.net/npm/es6-promise@4/dist/es6-promise.js"></script> <script src="https://cdn.jsdelivr.net/npm/es6-promise@4/dist/es6-promise.auto.js"></script> <!-- Minified version of `es6-promise-auto` below. --> <script src="https://cdn.jsdelivr.net/npm/es6-promise@4/dist/es6-promise.min.js"></script> <script src="https://cdn.jsdelivr.net/npm/es6-promise@4/dist/es6-promise.auto.min.js"></script> ``` ## Node.js To install: ```sh yarn add es6-promise ``` or ```sh npm install es6-promise ``` To use: ```js var Promise = require('es6-promise').Promise; ``` ## Usage in IE<9 `catch` and `finally` are reserved keywords in IE<9, meaning `promise.catch(func)` or `promise.finally(func)` throw a syntax error. 
To work around this, you can use a string to access the property as shown in the following example. However most minifiers will automatically fix this for you, making the resulting code safe for old browsers and production: ```js promise['catch'](function(err) { // ... }); ``` ```js promise['finally'](function() { // ... }); ``` ## Auto-polyfill To polyfill the global environment (either in Node or in the browser via CommonJS) use the following code snippet: ```js require('es6-promise').polyfill(); ``` Alternatively ```js require('es6-promise/auto'); ``` Notice that we don't assign the result of `polyfill()` to any variable. The `polyfill()` method will patch the global environment (in this case to the `Promise` name) when called. ## Building & Testing You will need to have PhantomJS installed globally in order to run the tests. `npm install -g phantomjs` * `npm run build` to build * `npm test` to run tests * `npm start` to run a build watcher, and webserver to test * `npm run test:server` for a testem test runner and watching builder lazy-property ============= Adds a lazily initialized property to an object. ## Example ```javascript var addLazyProperty = require("lazy-property") var obj = {} addLazyProperty(obj, "foo", function() { console.log("initialized!") return "bar" }) //Access the property console.log(obj.foo) console.log(obj.foo) //Prints out: // // initialized! // bar // bar // ``` ## Install npm install lazy-property ## API ### `require("lazy-property")(obj, name, init[, enumerable])` Adds a lazily initialized property to the object. * `obj` is the object to add the property to * `name` is the name of the property * `init` is a function that computes the value of the property * `enumerable` if the property is enumerable (default `false`) ## Credits (c) 2013 Mikola Lysenko. MIT License <!-- -- This file is auto-generated from README_js.md. Changes should be made there. --> # uuid [![Build Status](https://secure.travis-ci.org/kelektiv/node-uuid.svg?branch=master)](http://travis-ci.org/kelektiv/node-uuid) # Simple, fast generation of [RFC4122](http://www.ietf.org/rfc/rfc4122.txt) UUIDS. Features: * Support for version 1, 3, 4 and 5 UUIDs * Cross-platform * Uses cryptographically-strong random number APIs (when available) * Zero-dependency, small footprint (... but not [this small](https://gist.github.com/982883)) [**Deprecation warning**: The use of `require('uuid')` is deprecated and will not be supported after version 3.x of this module. Instead, use `require('uuid/[v1|v3|v4|v5]')` as shown in the examples below.] ## Quickstart - CommonJS (Recommended) ```shell npm install uuid ``` Then generate your uuid version of choice ... Version 1 (timestamp): ```javascript const uuidv1 = require('uuid/v1'); uuidv1(); // ⇨ '2c5ea4c0-4067-11e9-8bad-9b1deb4d3b7d' ``` Version 3 (namespace): ```javascript const uuidv3 = require('uuid/v3'); // ... using predefined DNS namespace (for domain names) uuidv3('hello.example.com', uuidv3.DNS); // ⇨ '9125a8dc-52ee-365b-a5aa-81b0b3681cf6' // ... using predefined URL namespace (for, well, URLs) uuidv3('http://example.com/hello', uuidv3.URL); // ⇨ 'c6235813-3ba4-3801-ae84-e0a6ebb7d138' // ... using a custom namespace // // Note: Custom namespaces should be a UUID string specific to your application! // E.g. the one here was generated using this modules `uuid` CLI. 
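// Because v3 UUIDs are name-based, hashing the same name and namespace
// always yields the same UUID, which is why the example output below is fixed.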
const MY_NAMESPACE = '1b671a64-40d5-491e-99b0-da01ff1f3341'; uuidv3('Hello, World!', MY_NAMESPACE); // ⇨ 'e8b5a51d-11c8-3310-a6ab-367563f20686' ``` Version 4 (random): ```javascript const uuidv4 = require('uuid/v4'); uuidv4(); // ⇨ '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed' ``` Version 5 (namespace): ```javascript const uuidv5 = require('uuid/v5'); // ... using predefined DNS namespace (for domain names) uuidv5('hello.example.com', uuidv5.DNS); // ⇨ 'fdda765f-fc57-5604-a269-52a7df8164ec' // ... using predefined URL namespace (for, well, URLs) uuidv5('http://example.com/hello', uuidv5.URL); // ⇨ '3bbcee75-cecc-5b56-8031-b6641c1ed1f1' // ... using a custom namespace // // Note: Custom namespaces should be a UUID string specific to your application! // E.g. the one here was generated using this modules `uuid` CLI. const MY_NAMESPACE = '1b671a64-40d5-491e-99b0-da01ff1f3341'; uuidv5('Hello, World!', MY_NAMESPACE); // ⇨ '630eb68f-e0fa-5ecc-887a-7c7a62614681' ``` ## Quickstart - Browser-ready Versions Browser-ready versions of this module are available via [wzrd.in](https://github.com/jfhbrook/wzrd.in). For version 1 uuids: ```html <script src="http://wzrd.in/standalone/uuid%2Fv1@latest"></script> <script> uuidv1(); // -> v1 UUID </script> ``` For version 3 uuids: ```html <script src="http://wzrd.in/standalone/uuid%2Fv3@latest"></script> <script> uuidv3('http://example.com/hello', uuidv3.URL); // -> v3 UUID </script> ``` For version 4 uuids: ```html <script src="http://wzrd.in/standalone/uuid%2Fv4@latest"></script> <script> uuidv4(); // -> v4 UUID </script> ``` For version 5 uuids: ```html <script src="http://wzrd.in/standalone/uuid%2Fv5@latest"></script> <script> uuidv5('http://example.com/hello', uuidv5.URL); // -> v5 UUID </script> ``` ## API ### Version 1 ```javascript const uuidv1 = require('uuid/v1'); // Incantations uuidv1(); uuidv1(options); uuidv1(options, buffer, offset); ``` Generate and return a RFC4122 v1 (timestamp-based) UUID. * `options` - (Object) Optional uuid state to apply. Properties may include: * `node` - (Array) Node id as Array of 6 bytes (per 4.1.6). Default: Randomly generated ID. See note 1. * `clockseq` - (Number between 0 - 0x3fff) RFC clock sequence. Default: An internally maintained clockseq is used. * `msecs` - (Number) Time in milliseconds since unix Epoch. Default: The current time is used. * `nsecs` - (Number between 0-9999) additional time, in 100-nanosecond units. Ignored if `msecs` is unspecified. Default: internal uuid counter is used, as per 4.2.1.2. * `buffer` - (Array | Buffer) Array or buffer where UUID bytes are to be written. * `offset` - (Number) Starting index in `buffer` at which to begin writing. Returns `buffer`, if specified, otherwise the string form of the UUID Note: The default [node id](https://tools.ietf.org/html/rfc4122#section-4.1.6) (the last 12 digits in the UUID) is generated once, randomly, on process startup, and then remains unchanged for the duration of the process. 
Example: Generate string UUID with fully-specified options ```javascript const v1options = { node: [0x01, 0x23, 0x45, 0x67, 0x89, 0xab], clockseq: 0x1234, msecs: new Date('2011-11-01').getTime(), nsecs: 5678 }; uuidv1(v1options); // ⇨ '710b962e-041c-11e1-9234-0123456789ab' ``` Example: In-place generation of two binary IDs ```javascript // Generate two ids in an array const arr = new Array(); uuidv1(null, arr, 0); // ⇨ [ 44, 94, 164, 192, 64, 103, 17, 233, 146, 52, 155, 29, 235, 77, 59, 125 ] uuidv1(null, arr, 16); // ⇨ [ 44, 94, 164, 192, 64, 103, 17, 233, 146, 52, 155, 29, 235, 77, 59, 125, 44, 94, 164, 193, 64, 103, 17, 233, 146, 52, 155, 29, 235, 77, 59, 125 ] ``` ### Version 3 ```javascript const uuidv3 = require('uuid/v3'); // Incantations uuidv3(name, namespace); uuidv3(name, namespace, buffer); uuidv3(name, namespace, buffer, offset); ``` Generate and return a RFC4122 v3 UUID. * `name` - (String | Array[]) "name" to create UUID with * `namespace` - (String | Array[]) "namespace" UUID either as a String or Array[16] of byte values * `buffer` - (Array | Buffer) Array or buffer where UUID bytes are to be written. * `offset` - (Number) Starting index in `buffer` at which to begin writing. Default = 0 Returns `buffer`, if specified, otherwise the string form of the UUID Example: ```javascript uuidv3('hello world', MY_NAMESPACE); // ⇨ '042ffd34-d989-321c-ad06-f60826172424' ``` ### Version 4 ```javascript const uuidv4 = require('uuid/v4') // Incantations uuidv4(); uuidv4(options); uuidv4(options, buffer, offset); ``` Generate and return a RFC4122 v4 UUID. * `options` - (Object) Optional uuid state to apply. Properties may include: * `random` - (Number[16]) Array of 16 numbers (0-255) to use in place of randomly generated values * `rng` - (Function) Random # generator function that returns an Array[16] of byte values (0-255) * `buffer` - (Array | Buffer) Array or buffer where UUID bytes are to be written. * `offset` - (Number) Starting index in `buffer` at which to begin writing. Returns `buffer`, if specified, otherwise the string form of the UUID Example: Generate string UUID with predefined `random` values ```javascript const v4options = { random: [ 0x10, 0x91, 0x56, 0xbe, 0xc4, 0xfb, 0xc1, 0xea, 0x71, 0xb4, 0xef, 0xe1, 0x67, 0x1c, 0x58, 0x36 ] }; uuidv4(v4options); // ⇨ '109156be-c4fb-41ea-b1b4-efe1671c5836' ``` Example: Generate two IDs in a single buffer ```javascript const buffer = new Array(); uuidv4(null, buffer, 0); // ⇨ [ 155, 29, 235, 77, 59, 125, 75, 173, 155, 221, 43, 13, 123, 61, 203, 109 ] uuidv4(null, buffer, 16); // ⇨ [ 155, 29, 235, 77, 59, 125, 75, 173, 155, 221, 43, 13, 123, 61, 203, 109, 27, 157, 107, 205, 187, 253, 75, 45, 155, 93, 171, 141, 251, 189, 75, 237 ] ``` ### Version 5 ```javascript const uuidv5 = require('uuid/v5'); // Incantations uuidv5(name, namespace); uuidv5(name, namespace, buffer); uuidv5(name, namespace, buffer, offset); ``` Generate and return a RFC4122 v5 UUID. * `name` - (String | Array[]) "name" to create UUID with * `namespace` - (String | Array[]) "namespace" UUID either as a String or Array[16] of byte values * `buffer` - (Array | Buffer) Array or buffer where UUID bytes are to be written. * `offset` - (Number) Starting index in `buffer` at which to begin writing. Default = 0 Returns `buffer`, if specified, otherwise the string form of the UUID Example: ```javascript uuidv5('hello world', MY_NAMESPACE); // ⇨ '9f282611-e0fd-5650-8953-89c8e342da0b' ``` ## Command Line UUIDs can be generated from the command line with the `uuid` command. 
```shell $ uuid ddeb27fb-d9a0-4624-be4d-4615062daed4 $ uuid v1 02d37060-d446-11e7-a9fa-7bdae751ebe1 ``` Type `uuid --help` for usage details. ## Testing ```shell npm test ``` ---- Markdown generated from [README_js.md](README_js.md) by [![RunMD Logo](http://i.imgur.com/h0FVyzU.png)](https://github.com/broofa/runmd) # find-npm-prefix Find the npm project directory associated with a given directory ## USAGE ``` const findPrefix = require('find-npm-prefix') findPrefix(process.cwd()).then(prefix => { … }) ``` ## findPrefix(dir) → Promise(prefix) This computes the npm prefix, that is, the directory that npm adds and removes modules from, for a given path. It takes a directory as an argument and returns a promise of the associated prefix directory. ## Algorithm 1. If the directory is a `node_modules` folder, scan up the tree until you find a non-`node_modules` directory and return that. 2. Else, look for the first parent directory that contains a `node_modules` or a `package.json`: 1. If one is found, that's the prefix. 2. If none are found, return the original directory we were given. # safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`.
### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. 
A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. 
```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. 
Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. 
This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). 
Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) # pump pump is a small node module that pipes streams together and destroys all of them if one of them closes. ``` npm install pump ``` [![build status](http://img.shields.io/travis/mafintosh/pump.svg?style=flat)](http://travis-ci.org/mafintosh/pump) ## What problem does it solve? When using standard `source.pipe(dest)`, source will _not_ be destroyed if dest emits close or an error. You are also not able to provide a callback to tell when the pipe has finished. pump does these two things for you. ## Usage Simply pass the streams you want to pipe together to pump and add an optional callback. ``` js var pump = require('pump') var fs = require('fs') var source = fs.createReadStream('/dev/random') var dest = fs.createWriteStream('/dev/null') pump(source, dest, function(err) { console.log('pipe finished', err) }) setTimeout(function() { dest.destroy() // when dest is closed pump will destroy source }, 1000) ``` You can use pump to pipe more than two streams together as well. ``` js var transform = someTransformStream() pump(source, transform, anotherTransform, dest, function(err) { console.log('pipe finished', err) }) ``` If `source`, `transform`, `anotherTransform` or `dest` closes, all of them will be destroyed. Similarly to `stream.pipe()`, `pump()` returns the last stream passed in, so you can do: ``` return pump(s1, s2) // returns s2 ``` If you want to return a stream that combines *both* s1 and s2 into a single stream, use [pumpify](https://github.com/mafintosh/pumpify) instead. ## License MIT ## Related `pump` is part of the [mississippi stream utility collection](https://github.com/maxogden/mississippi) which includes more useful stream modules similar to this one. # duplexify Turn a writable and readable stream into a single streams2 duplex stream. Similar to [duplexer2](https://github.com/deoxxa/duplexer2) except it supports both streams2 and streams1 as input and it allows you to set the readable and writable part asynchronously using `setReadable(stream)` and `setWritable(stream)`. ``` npm install duplexify ``` [![build status](http://img.shields.io/travis/mafintosh/duplexify.svg?style=flat)](http://travis-ci.org/mafintosh/duplexify) ## Usage Use `duplexify(writable, readable, streamOptions)` (or `duplexify.obj(writable, readable)` to create an object stream). ``` js var duplexify = require('duplexify') // turn writableStream and readableStream into a single duplex stream var dup = duplexify(writableStream, readableStream) dup.write('hello world') // will write to writableStream dup.on('data', function(data) { // will read from readableStream }) ``` You can also set the readable and writable parts asynchronously. ``` js var dup = duplexify() dup.write('hello world') // write will buffer until the writable // part has been set // wait a bit ... dup.setReadable(readableStream) // maybe wait some more? dup.setWritable(writableStream) ``` If you call `setReadable` or `setWritable` multiple times it will unregister the previous readable/writable stream. To disable the readable or writable part call `setReadable` or `setWritable` with `null`. If the readable or writable stream emits an error or close, it will destroy both streams and bubble up the event. 
You can also explicitly destroy the streams by calling `dup.destroy()`. The `destroy` method optionally takes an error object as argument, in which case the error is emitted as part of the `error` event. ``` js dup.on('error', function(err) { console.log('readable or writable emitted an error - close will follow') }) dup.on('close', function() { console.log('the duplex stream is destroyed') }) dup.destroy() // calls destroy on the readable and writable part (if present) ``` ## HTTP request example Turn a node core http request into a duplex stream is as easy as ``` js var duplexify = require('duplexify') var http = require('http') var request = function(opts) { var req = http.request(opts) var dup = duplexify(req) req.on('response', function(res) { dup.setReadable(res) }) return dup } var req = request({ method: 'GET', host: 'www.google.com', port: 80 }) req.end() req.pipe(process.stdout) ``` ## License MIT ## Related `duplexify` is part of the [mississippi stream utility collection](https://github.com/maxogden/mississippi) which includes more useful stream modules similar to this one. [![Build Status](https://travis-ci.org/isaacs/rimraf.svg?branch=master)](https://travis-ci.org/isaacs/rimraf) [![Dependency Status](https://david-dm.org/isaacs/rimraf.svg)](https://david-dm.org/isaacs/rimraf) [![devDependency Status](https://david-dm.org/isaacs/rimraf/dev-status.svg)](https://david-dm.org/isaacs/rimraf#info=devDependencies) The [UNIX command](http://en.wikipedia.org/wiki/Rm_(Unix)) `rm -rf` for node. Install with `npm install rimraf`, or just drop rimraf.js somewhere. ## API `rimraf(f, [opts], callback)` The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with `opts.disableGlob` (defaults to `false`). This might be handy, for instance, if you have filenames that contain globbing wildcard characters. The callback will be called with an error if there is one. Certain errors are handled for you: * Windows: `EBUSY` and `ENOTEMPTY` - rimraf will back off a maximum of `opts.maxBusyTries` times before giving up, adding 100ms of wait between each attempt. The default `maxBusyTries` is 3. * `ENOENT` - If the file doesn't exist, rimraf will return successfully, since your desired outcome is already the case. * `EMFILE` - Since `readdir` requires opening a file descriptor, it's possible to hit `EMFILE` if too many file descriptors are in use. In the sync case, there's nothing to be done for this. But in the async case, rimraf will gradually back off with timeouts up to `opts.emfileWait` ms, which defaults to 1000. ## options * unlink, chmod, stat, lstat, rmdir, readdir, unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync In order to use a custom file system library, you can override specific fs functions on the options object. If any of these functions are present on the options object, then the supplied function will be used instead of the default fs method. Sync methods are only relevant for `rimraf.sync()`, of course. For example: ```javascript var myCustomFS = require('some-custom-fs') rimraf('some-thing', myCustomFS, callback) ``` * maxBusyTries If an `EBUSY`, `ENOTEMPTY`, or `EPERM` error code is encountered on Windows systems, then rimraf will retry with a linear backoff wait of 100ms longer on each try. The default maxBusyTries is 3. Only relevant for async usage. 
* emfileWait If an `EMFILE` error is encountered, then rimraf will retry repeatedly with a linear backoff of 1ms longer on each try, until the timeout counter hits this max. The default limit is 1000. If you repeatedly encounter `EMFILE` errors, then consider using [graceful-fs](http://npm.im/graceful-fs) in your program. Only relevant for async usage. * glob Set to `false` to disable [glob](http://npm.im/glob) pattern matching. Set to an object to pass options to the glob module. The default glob options are `{ nosort: true, silent: true }`. Glob version 6 is used in this module. Relevant for both sync and async usage. * disableGlob Set to any non-falsey value to disable globbing entirely. (Equivalent to setting `glob: false`.) ## rimraf.sync It can remove stuff synchronously, too. But that's not so good. Use the async API. It's better. ## CLI If installed with `npm install rimraf -g` it can be used as a global command `rimraf <path> [<path> ...]` which is useful for cross platform support. ## mkdirp If you need to create a directory recursively, check out [mkdirp](https://github.com/substack/node-mkdirp). socks-proxy-agent ================ ### A SOCKS proxy `http.Agent` implementation for HTTP and HTTPS [![Build Status](https://travis-ci.org/TooTallNate/node-socks-proxy-agent.svg?branch=master)](https://travis-ci.org/TooTallNate/node-socks-proxy-agent) This module provides an `http.Agent` implementation that connects to a specified SOCKS proxy server, and can be used with the built-in `http` or `https` modules. It can also be used in conjunction with the `ws` module to establish a WebSocket connection over a SOCKS proxy. See the "Examples" section below. Installation ------------ Install with `npm`: ``` bash $ npm install socks-proxy-agent ``` Examples -------- #### `http` module example ``` js var url = require('url'); var http = require('http'); var SocksProxyAgent = require('socks-proxy-agent'); // SOCKS proxy to connect to var proxy = process.env.socks_proxy || 'socks://127.0.0.1:9050'; console.log('using proxy server %j', proxy); // HTTP endpoint for the proxy to connect to var endpoint = process.argv[2] || 'http://nodejs.org/api/'; console.log('attempting to GET %j', endpoint); var opts = url.parse(endpoint); // create an instance of the `SocksProxyAgent` class with the proxy server information var agent = new SocksProxyAgent(proxy); opts.agent = agent; http.get(opts, function (res) { console.log('"response" event!', res.headers); res.pipe(process.stdout); }); ``` #### `https` module example ``` js var url = require('url'); var https = require('https'); var SocksProxyAgent = require('socks-proxy-agent'); // SOCKS proxy to connect to var proxy = process.env.socks_proxy || 'socks://127.0.0.1:9050'; console.log('using proxy server %j', proxy); // HTTP endpoint for the proxy to connect to var endpoint = process.argv[2] || 'https://encrypted.google.com/'; console.log('attempting to GET %j', endpoint); var opts = url.parse(endpoint); // create an instance of the `SocksProxyAgent` class with the proxy server information var agent = new SocksProxyAgent(proxy); opts.agent = agent; https.get(opts, function (res) { console.log('"response" event!', res.headers); res.pipe(process.stdout); }); ``` #### `ws` WebSocket connection example ``` js var WebSocket = require('ws'); var SocksProxyAgent = require('socks-proxy-agent'); // SOCKS proxy to connect to var proxy = process.env.socks_proxy || 'socks://127.0.0.1:9050'; console.log('using proxy server %j', proxy); // WebSocket endpoint for the proxy 
to connect to var endpoint = process.argv[2] || 'ws://echo.websocket.org'; console.log('attempting to connect to WebSocket %j', endpoint); // create an instance of the `SocksProxyAgent` class with the proxy server information var agent = new SocksProxyAgent(proxy); // initiate the WebSocket connection var socket = new WebSocket(endpoint, { agent: agent }); socket.on('open', function () { console.log('"open" event!'); socket.send('hello world'); }); socket.on('message', function (data, flags) { console.log('"message" event! %j %j', data, flags); socket.close(); }); ``` License ------- (The MIT License) Copyright (c) 2013 Nathan Rajlich &lt;nathan@tootallnate.net&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # npm-bundled Run this in a node package, and it'll tell you which things in node_modules are bundledDependencies, or transitive dependencies of bundled dependencies. [![Build Status](https://travis-ci.org/npm/npm-bundled.svg?branch=master)](https://travis-ci.org/npm/npm-bundled) ## USAGE To get the list of deps at the top level that are bundled (or transitive deps of a bundled dep) run this: ```js const bundled = require('npm-bundled') // async version bundled({ path: '/path/to/pkg/defaults/to/cwd'}, (er, list) => { // er means it had an error, which is _hella_ weird // list is a list of package names, like `fooblz` or `@corp/blerg` // the might not all be deps of the top level, because transitives }) // async promise version bundled({ path: '/path/to/pkg/defaults/to/cwd'}).then(list => { // so promisey! // actually the callback version returns a promise, too, it just // attaches the supplied callback to the promise }) // sync version, throws if there's an error const list = bundled({ path: '/path/to/pkg/defaults/to/cwd'}) ``` That's basically all you need to know. If you care to dig into it, you can also use the `bundled.Walker` and `bundled.WalkerSync` classes to get fancy. This library does not write anything to the filesystem, but it _may_ have undefined behavior if the structure of `node_modules` changes while it's reading deps. All symlinks are followed. This means that it can lead to surprising results if a symlinked bundled dependency has a missing dependency that is satisfied at the top level. Since package creation resolves symlinks as well, this is an edge case where package creation and development environment are not going to be aligned, and is best avoided. 
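To make the output concrete, here is a minimal sketch under an assumed package layout (the layout is illustrative and not taken from the npm-bundled docs; the package names come from the usage notes above):

```js
// Assumed layout (illustrative):
//   package.json          -> { "bundledDependencies": ["fooblz"] }
//   node_modules/fooblz   -> a bundled dependency that itself depends on "blerg"
//   node_modules/blerg    -> hoisted to the top level by the installer
const bundled = require('npm-bundled')

bundled({ path: '/path/to/pkg' }, (er, list) => {
  if (er) throw er
  // Both the bundled dep and its transitive dep are reported (order not guaranteed):
  //   [ 'fooblz', 'blerg' ]
  console.log(list)
})
```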
# mime-types [![NPM Version][npm-image]][npm-url] [![NPM Downloads][downloads-image]][downloads-url] [![Node.js Version][node-version-image]][node-version-url] [![Build Status][travis-image]][travis-url] [![Test Coverage][coveralls-image]][coveralls-url] The ultimate javascript content-type utility. Similar to [the `mime@1.x` module](https://www.npmjs.com/package/mime), except: - __No fallbacks.__ Instead of naively returning the first available type, `mime-types` simply returns `false`, so do `var type = mime.lookup('unrecognized') || 'application/octet-stream'`. - No `new Mime()` business, so you could do `var lookup = require('mime-types').lookup`. - No `.define()` functionality - Bug fixes for `.lookup(path)` Otherwise, the API is compatible with `mime` 1.x. ## Install This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): ```sh $ npm install mime-types ``` ## Adding Types All mime types are based on [mime-db](https://www.npmjs.com/package/mime-db), so open a PR there if you'd like to add mime types. ## API ```js var mime = require('mime-types') ``` All functions return `false` if input is invalid or not found. ### mime.lookup(path) Lookup the content-type associated with a file. ```js mime.lookup('json') // 'application/json' mime.lookup('.md') // 'text/markdown' mime.lookup('file.html') // 'text/html' mime.lookup('folder/file.js') // 'application/javascript' mime.lookup('folder/.htaccess') // false mime.lookup('cats') // false ``` ### mime.contentType(type) Create a full content-type header given a content-type or extension. ```js mime.contentType('markdown') // 'text/x-markdown; charset=utf-8' mime.contentType('file.json') // 'application/json; charset=utf-8' // from a full path mime.contentType(path.extname('/path/to/file.json')) // 'application/json; charset=utf-8' ``` ### mime.extension(type) Get the default extension for a content-type. ```js mime.extension('application/octet-stream') // 'bin' ``` ### mime.charset(type) Lookup the implied default charset of a content-type. ```js mime.charset('text/markdown') // 'UTF-8' ``` ### var type = mime.types[extension] A map of content-types by extension. ### [extensions...] = mime.extensions[type] A map of extensions by content-type. ## License [MIT](LICENSE) [npm-image]: https://img.shields.io/npm/v/mime-types.svg [npm-url]: https://npmjs.org/package/mime-types [node-version-image]: https://img.shields.io/node/v/mime-types.svg [node-version-url]: https://nodejs.org/en/download/ [travis-image]: https://img.shields.io/travis/jshttp/mime-types/master.svg [travis-url]: https://travis-ci.org/jshttp/mime-types [coveralls-image]: https://img.shields.io/coveralls/jshttp/mime-types/master.svg [coveralls-url]: https://coveralls.io/r/jshttp/mime-types [downloads-image]: https://img.shields.io/npm/dm/mime-types.svg [downloads-url]: https://npmjs.org/package/mime-types # fs-vacuum Remove the empty branches of a directory tree, optionally up to (but not including) a specified base directory. Optionally nukes the leaf directory. ## Usage ```javascript var logger = require("npmlog"); var vacuum = require("fs-vacuum"); var options = { base : "/path/to/my/tree/root", purge : true, log : logger.silly.bind(logger, "myCleanup") }; /* Assuming there are no other files or directories in "out", "to", or "my", * the final path will just be "/path/to/my/tree/root". 
*/ vacuum("/path/to/my/tree/root/out/to/my/files", options, function (error) { if (error) console.error("Unable to cleanly vacuum:", error.message); }); ``` # vacuum(directory, options, callback) * `directory` {String} Leaf node to remove. **Must be a directory, symlink, or file.** * `options` {Object} * `base` {String} No directories at or above this level of the filesystem will be removed. * `purge` {Boolean} If set, nuke the whole leaf directory, including its contents. * `log` {Function} A logging function that takes `npmlog`-compatible argument lists. * `callback` {Function} Function to call once vacuuming is complete. * `error` {Error} What went wrong along the way, if anything. TweetNaCl.js ============ Port of [TweetNaCl](http://tweetnacl.cr.yp.to) / [NaCl](http://nacl.cr.yp.to/) to JavaScript for modern browsers and Node.js. Public domain. [![Build Status](https://travis-ci.org/dchest/tweetnacl-js.svg?branch=master) ](https://travis-ci.org/dchest/tweetnacl-js) Demo: <https://tweetnacl.js.org> **:warning: The library is stable and API is frozen, however it has not been independently reviewed. If you can help reviewing it, please [contact me](mailto:dmitry@codingrobots.com).** Documentation ============= * [Overview](#overview) * [Installation](#installation) * [Usage](#usage) * [Public-key authenticated encryption (box)](#public-key-authenticated-encryption-box) * [Secret-key authenticated encryption (secretbox)](#secret-key-authenticated-encryption-secretbox) * [Scalar multiplication](#scalar-multiplication) * [Signatures](#signatures) * [Hashing](#hashing) * [Random bytes generation](#random-bytes-generation) * [Constant-time comparison](#constant-time-comparison) * [System requirements](#system-requirements) * [Development and testing](#development-and-testing) * [Benchmarks](#benchmarks) * [Contributors](#contributors) * [Who uses it](#who-uses-it) Overview -------- The primary goal of this project is to produce a translation of TweetNaCl to JavaScript which is as close as possible to the original C implementation, plus a thin layer of idiomatic high-level API on top of it. There are two versions, you can use either of them: * `nacl.js` is the port of TweetNaCl with minimum differences from the original + high-level API. * `nacl-fast.js` is like `nacl.js`, but with some functions replaced with faster versions. Installation ------------ You can install TweetNaCl.js via a package manager: [Bower](http://bower.io): $ bower install tweetnacl [NPM](https://www.npmjs.org/): $ npm install tweetnacl or [download source code](https://github.com/dchest/tweetnacl-js/releases). Usage ----- All API functions accept and return bytes as `Uint8Array`s. If you need to encode or decode strings, use functions from <https://github.com/dchest/tweetnacl-util-js> or one of the more robust codec packages. In Node.js v4 and later `Buffer` objects are backed by `Uint8Array`s, so you can freely pass them to TweetNaCl.js functions as arguments. The returned objects are still `Uint8Array`s, so if you need `Buffer`s, you'll have to convert them manually; make sure to convert using copying: `new Buffer(array)`, instead of sharing: `new Buffer(array.buffer)`, because some functions return subarrays of their buffers. ### Public-key authenticated encryption (box) Implements *curve25519-xsalsa20-poly1305*. #### nacl.box.keyPair() Generates a new random key pair for box and returns it as an object with `publicKey` and `secretKey` members: { publicKey: ..., // Uint8Array with 32-byte public key secretKey: ... 
// Uint8Array with 32-byte secret key } #### nacl.box.keyPair.fromSecretKey(secretKey) Returns a key pair for box with public key corresponding to the given secret key. #### nacl.box(message, nonce, theirPublicKey, mySecretKey) Encrypt and authenticates message using peer's public key, our secret key, and the given nonce, which must be unique for each distinct message for a key pair. Returns an encrypted and authenticated message, which is `nacl.box.overheadLength` longer than the original message. #### nacl.box.open(box, nonce, theirPublicKey, mySecretKey) Authenticates and decrypts the given box with peer's public key, our secret key, and the given nonce. Returns the original message, or `false` if authentication fails. #### nacl.box.before(theirPublicKey, mySecretKey) Returns a precomputed shared key which can be used in `nacl.box.after` and `nacl.box.open.after`. #### nacl.box.after(message, nonce, sharedKey) Same as `nacl.box`, but uses a shared key precomputed with `nacl.box.before`. #### nacl.box.open.after(box, nonce, sharedKey) Same as `nacl.box.open`, but uses a shared key precomputed with `nacl.box.before`. #### nacl.box.publicKeyLength = 32 Length of public key in bytes. #### nacl.box.secretKeyLength = 32 Length of secret key in bytes. #### nacl.box.sharedKeyLength = 32 Length of precomputed shared key in bytes. #### nacl.box.nonceLength = 24 Length of nonce in bytes. #### nacl.box.overheadLength = 16 Length of overhead added to box compared to original message. ### Secret-key authenticated encryption (secretbox) Implements *xsalsa20-poly1305*. #### nacl.secretbox(message, nonce, key) Encrypt and authenticates message using the key and the nonce. The nonce must be unique for each distinct message for this key. Returns an encrypted and authenticated message, which is `nacl.secretbox.overheadLength` longer than the original message. #### nacl.secretbox.open(box, nonce, key) Authenticates and decrypts the given secret box using the key and the nonce. Returns the original message, or `false` if authentication fails. #### nacl.secretbox.keyLength = 32 Length of key in bytes. #### nacl.secretbox.nonceLength = 24 Length of nonce in bytes. #### nacl.secretbox.overheadLength = 16 Length of overhead added to secret box compared to original message. ### Scalar multiplication Implements *curve25519*. #### nacl.scalarMult(n, p) Multiplies an integer `n` by a group element `p` and returns the resulting group element. #### nacl.scalarMult.base(n) Multiplies an integer `n` by a standard group element and returns the resulting group element. #### nacl.scalarMult.scalarLength = 32 Length of scalar in bytes. #### nacl.scalarMult.groupElementLength = 32 Length of group element in bytes. ### Signatures Implements [ed25519](http://ed25519.cr.yp.to). #### nacl.sign.keyPair() Generates new random key pair for signing and returns it as an object with `publicKey` and `secretKey` members: { publicKey: ..., // Uint8Array with 32-byte public key secretKey: ... // Uint8Array with 64-byte secret key } #### nacl.sign.keyPair.fromSecretKey(secretKey) Returns a signing key pair with public key corresponding to the given 64-byte secret key. The secret key must have been generated by `nacl.sign.keyPair` or `nacl.sign.keyPair.fromSeed`. #### nacl.sign.keyPair.fromSeed(seed) Returns a new signing key pair generated deterministically from a 32-byte seed. The seed must contain enough entropy to be secure. 
This method is not recommended for general use: instead, use `nacl.sign.keyPair` to generate a new key pair from a random seed. #### nacl.sign(message, secretKey) Signs the message using the secret key and returns a signed message. #### nacl.sign.open(signedMessage, publicKey) Verifies the signed message and returns the message without signature. Returns `null` if verification failed. #### nacl.sign.detached(message, secretKey) Signs the message using the secret key and returns a signature. #### nacl.sign.detached.verify(message, signature, publicKey) Verifies the signature for the message and returns `true` if verification succeeded or `false` if it failed. #### nacl.sign.publicKeyLength = 32 Length of signing public key in bytes. #### nacl.sign.secretKeyLength = 64 Length of signing secret key in bytes. #### nacl.sign.seedLength = 32 Length of seed for `nacl.sign.keyPair.fromSeed` in bytes. #### nacl.sign.signatureLength = 64 Length of signature in bytes. ### Hashing Implements *SHA-512*. #### nacl.hash(message) Returns SHA-512 hash of the message. #### nacl.hash.hashLength = 64 Length of hash in bytes. ### Random bytes generation #### nacl.randomBytes(length) Returns a `Uint8Array` of the given length containing random bytes of cryptographic quality. **Implementation note** TweetNaCl.js uses the following methods to generate random bytes, depending on the platform it runs on: * `window.crypto.getRandomValues` (WebCrypto standard) * `window.msCrypto.getRandomValues` (Internet Explorer 11) * `crypto.randomBytes` (Node.js) If the platform doesn't provide a suitable PRNG, the following functions, which require random numbers, will throw exception: * `nacl.randomBytes` * `nacl.box.keyPair` * `nacl.sign.keyPair` Other functions are deterministic and will continue working. If a platform you are targeting doesn't implement secure random number generator, but you somehow have a cryptographically-strong source of entropy (not `Math.random`!), and you know what you are doing, you can plug it into TweetNaCl.js like this: nacl.setPRNG(function(x, n) { // ... copy n random bytes into x ... }); Note that `nacl.setPRNG` *completely replaces* internal random byte generator with the one provided. ### Constant-time comparison #### nacl.verify(x, y) Compares `x` and `y` in constant time and returns `true` if their lengths are non-zero and equal, and their contents are equal. Returns `false` if either of the arguments has zero length, or arguments have different lengths, or their contents differ. System requirements ------------------- TweetNaCl.js supports modern browsers that have a cryptographically secure pseudorandom number generator and typed arrays, including the latest versions of: * Chrome * Firefox * Safari (Mac, iOS) * Internet Explorer 11 Other systems: * Node.js Development and testing ------------------------ Install NPM modules needed for development: $ npm install To build minified versions: $ npm run build Tests use minified version, so make sure to rebuild it every time you change `nacl.js` or `nacl-fast.js`. ### Testing To run tests in Node.js: $ npm run test-node By default all tests described here work on `nacl.min.js`. To test other versions, set environment variable `NACL_SRC` to the file name you want to test. 
For example, the following command will test fast minified version: $ NACL_SRC=nacl-fast.min.js npm run test-node To run full suite of tests in Node.js, including comparing outputs of JavaScript port to outputs of the original C version: $ npm run test-node-all To prepare tests for browsers: $ npm run build-test-browser and then open `test/browser/test.html` (or `test/browser/test-fast.html`) to run them. To run headless browser tests with `tape-run` (powered by Electron): $ npm run test-browser (If you get `Error: spawn ENOENT`, install *xvfb*: `sudo apt-get install xvfb`.) To run tests in both Node and Electron: $ npm test ### Benchmarking To run benchmarks in Node.js: $ npm run bench $ NACL_SRC=nacl-fast.min.js npm run bench To run benchmarks in a browser, open `test/benchmark/bench.html` (or `test/benchmark/bench-fast.html`). Benchmarks ---------- For reference, here are benchmarks from MacBook Pro (Retina, 13-inch, Mid 2014) laptop with 2.6 GHz Intel Core i5 CPU (Intel) in Chrome 53/OS X and Xiaomi Redmi Note 3 smartphone with 1.8 GHz Qualcomm Snapdragon 650 64-bit CPU (ARM) in Chrome 52/Android: | | nacl.js Intel | nacl-fast.js Intel | nacl.js ARM | nacl-fast.js ARM | | ------------- |:-------------:|:-------------------:|:-------------:|:-----------------:| | salsa20 | 1.3 MB/s | 128 MB/s | 0.4 MB/s | 43 MB/s | | poly1305 | 13 MB/s | 171 MB/s | 4 MB/s | 52 MB/s | | hash | 4 MB/s | 34 MB/s | 0.9 MB/s | 12 MB/s | | secretbox 1K | 1113 op/s | 57583 op/s | 334 op/s | 14227 op/s | | box 1K | 145 op/s | 718 op/s | 37 op/s | 368 op/s | | scalarMult | 171 op/s | 733 op/s | 56 op/s | 380 op/s | | sign | 77 op/s | 200 op/s | 20 op/s | 61 op/s | | sign.open | 39 op/s | 102 op/s | 11 op/s | 31 op/s | (You can run benchmarks on your devices by clicking on the links at the bottom of the [home page](https://tweetnacl.js.org)). In short, with *nacl-fast.js* and 1024-byte messages you can expect to encrypt and authenticate more than 57000 messages per second on a typical laptop or more than 14000 messages per second on a $170 smartphone, sign about 200 and verify 100 messages per second on a laptop or 60 and 30 messages per second on a smartphone, per CPU core (with Web Workers you can do these operations in parallel), which is good enough for most applications. Contributors ------------ See AUTHORS.md file. Third-party libraries based on TweetNaCl.js ------------------------------------------- * [forward-secrecy](https://github.com/alax/forward-secrecy) — Axolotl ratchet implementation * [nacl-stream](https://github.com/dchest/nacl-stream-js) - streaming encryption * [tweetnacl-auth-js](https://github.com/dchest/tweetnacl-auth-js) — implementation of [`crypto_auth`](http://nacl.cr.yp.to/auth.html) * [chloride](https://github.com/dominictarr/chloride) - unified API for various NaCl modules Who uses it ----------- Some notable users of TweetNaCl.js: * [miniLock](http://minilock.io/) * [Stellar](https://www.stellar.org/) has-unicode =========== Try to guess if your terminal supports unicode ```javascript var hasUnicode = require("has-unicode") if (hasUnicode()) { // the terminal probably has unicode support } ``` ```javascript var hasUnicode = require("has-unicode").tryHarder hasUnicode(function(unicodeSupported) { if (unicodeSupported) { // the terminal probably has unicode support } }) ``` ## Detecting Unicode What we actually detect is UTF-8 support, as that's what Node itself supports. If you have a UTF-16 locale then you won't be detected as unicode capable. 
### Windows Since at least Windows 7, `cmd` and `powershell` have been unicode capable, but unfortunately even then it's not guaranteed. In many localizations it still uses legacy code pages and there's no facility short of running programs or linking C++ that will let us detect this. As such, we report any Windows installation as NOT unicode capable, and recommend that you encourage your users to override this via config. ### Unix Like Operating Systems We look at the environment variables `LC_ALL`, `LC_CTYPE`, and `LANG` in that order. For `LC_ALL` and `LANG`, it looks for `.UTF-8` in the value. For `LC_CTYPE` it looks to see if the value is `UTF-8`. This is sufficient for most POSIX systems. While locale data can be put in `/etc/locale.conf` as well, AFAIK it's always copied into the environment. # dotenv <img src="https://raw.githubusercontent.com/motdotla/dotenv/master/dotenv.png" alt="dotenv" align="right" /> Dotenv is a zero-dependency module that loads environment variables from a `.env` file into [`process.env`](https://nodejs.org/docs/latest/api/process.html#process_process_env). Storing configuration in the environment separate from code is based on [The Twelve-Factor App](http://12factor.net/config) methodology. [![BuildStatus](https://img.shields.io/travis/motdotla/dotenv/master.svg?style=flat-square)](https://travis-ci.org/motdotla/dotenv) [![Build status](https://ci.appveyor.com/api/projects/status/rnba2pyi87hgc8xw/branch/master?svg=true)](https://ci.appveyor.com/project/maxbeatty/dotenv/branch/master) [![NPM version](https://img.shields.io/npm/v/dotenv.svg?style=flat-square)](https://www.npmjs.com/package/dotenv) [![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg?style=flat-square)](https://github.com/feross/standard) [![Coverage Status](https://img.shields.io/coveralls/motdotla/dotenv/master.svg?style=flat-square)](https://coveralls.io/github/motdotla/dotenv?branch=coverall-intergration) ## Install ```bash npm install dotenv --save ``` ## Usage As early as possible in your application, require and configure dotenv. ```javascript require('dotenv').config() ``` Create a `.env` file in the root directory of your project. Add environment-specific variables on new lines in the form of `NAME=VALUE`. For example: ```dosini DB_HOST=localhost DB_USER=root DB_PASS=s1mpl3 ``` That's it. `process.env` now has the keys and values you defined in your `.env` file. ```javascript const db = require('db') db.connect({ host: process.env.DB_HOST, username: process.env.DB_USER, password: process.env.DB_PASS }) ``` ### Preload You can use the `--require` (`-r`) command line option to preload dotenv. By doing this, you do not need to require and load dotenv in your application code. This is the preferred approach when using `import` instead of `require`. ```bash $ node -r dotenv/config your_script.js ``` The configuration options below are supported as command line arguments in the format `dotenv_config_<option>=value` ```bash $ node -r dotenv/config your_script.js dotenv_config_path=/custom/path/to/your/env/vars ``` ## Config _Alias: `load`_ `config` will read your .env file, parse the contents, assign it to [`process.env`](https://nodejs.org/docs/latest/api/process.html#process_process_env), and return an Object with a `parsed` key containing the loaded content or an `error` key if it failed. ```js const result = dotenv.config() if (result.error) { throw result.error } console.log(result.parsed) ``` You can additionally, pass options to `config`. 
### Options #### Path Default: `path.resolve(process.cwd(), '.env')` You can specify a custom path if your file containing environment variables is named or located differently. ```js require('dotenv').config({path: '/full/custom/path/to/your/env/vars'}) ``` #### Encoding Default: `utf8` You may specify the encoding of your file containing environment variables using this option. ```js require('dotenv').config({encoding: 'base64'}) ``` ## Parse The engine which parses the contents of your file containing environment variables is available to use. It accepts a String or Buffer and will return an Object with the parsed keys and values. ```js const dotenv = require('dotenv') const buf = new Buffer('BASIC=basic') const config = dotenv.parse(buf) // will return an object console.log(typeof config, config) // object { BASIC : 'basic' } ``` ### Rules The parsing engine currently supports the following rules: - `BASIC=basic` becomes `{BASIC: 'basic'}` - empty lines are skipped - lines beginning with `#` are treated as comments - empty values become empty strings (`EMPTY=` becomes `{EMPTY: ''}`) - single and double quoted values are escaped (`SINGLE_QUOTE='quoted'` becomes `{SINGLE_QUOTE: "quoted"}`) - new lines are expanded if in double quotes (`MULTILINE="new\nline"` becomes ``` {MULTILINE: 'new line'} ``` - inner quotes are maintained (think JSON) (`JSON={"foo": "bar"}` becomes `{JSON:"{\"foo\": \"bar\"}"`) - whitespace is removed from both ends of the value (see more on [`trim`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/Trim)) (`FOO=" some value "` becomes `{FOO: 'some value'}`) ## FAQ ### Should I commit my `.env` file? No. We **strongly** recommend against committing your `.env` file to version control. It should only include environment-specific values such as database passwords or API keys. Your production database should have a different password than your development database. ### Should I have multiple `.env` files? No. We **strongly** recommend against having a "main" `.env` file and an "environment" `.env` file like `.env.test`. Your config should vary between deploys, and you should not be sharing values between environments. > In a twelve-factor app, env vars are granular controls, each fully orthogonal to other env vars. They are never grouped together as “environments”, but instead are independently managed for each deploy. This is a model that scales up smoothly as the app naturally expands into more deploys over its lifetime. > > – [The Twelve-Factor App](http://12factor.net/config) ### What happens to environment variables that were already set? We will never modify any environment variables that have already been set. In particular, if there is a variable in your `.env` file which collides with one that already exists in your environment, then that variable will be skipped. This behavior allows you to override all `.env` configurations with a machine-specific environment, although it is not recommended. If you want to override `process.env` you can do something like this: ```javascript const fs = require('fs') const dotenv = require('dotenv') const envConfig = dotenv.parse(fs.readFileSync('.env.override')) for (var k in envConfig) { process.env[k] = envConfig[k] } ``` ### Can I customize/write plugins for dotenv? For `dotenv@2.x.x`: Yes. `dotenv.config()` now returns an object representing the parsed `.env` file. This gives you everything you need to continue setting values on `process.env`. 
For example: ```js var dotenv = require('dotenv') var variableExpansion = require('dotenv-expand') const myEnv = dotenv.config() variableExpansion(myEnv) ``` ### What about variable expansion? For `dotenv@2.x.x`: Use [dotenv-expand](https://github.com/motdotla/dotenv-expand). For `dotenv@1.x.x`: We haven't been presented with a compelling use case for expanding variables and believe it leads to env vars that are not "fully orthogonal" as [The Twelve-Factor App](http://12factor.net/config) outlines.<sup>[[1](https://github.com/motdotla/dotenv/issues/39)][[2](https://github.com/motdotla/dotenv/pull/97)]</sup> Please open an issue if you have a compelling use case. ### How do I use dotenv with `import`? ES2015 and beyond offers modules that allow you to `export` any top-level `function`, `class`, `var`, `let`, or `const`. > When you run a module containing an `import` declaration, the modules it imports are loaded first, then each module body is executed in a depth-first traversal of the dependency graph, avoiding cycles by skipping anything already executed. > > – [ES6 In Depth: Modules](https://hacks.mozilla.org/2015/08/es6-in-depth-modules/) You must run `dotenv.config()` before referencing any environment variables. Here's an example of problematic code: `errorReporter.js`: ```js import { Client } from 'best-error-reporting-service' export const client = new Client(process.env.BEST_API_KEY) ``` `index.js`: ```js import dotenv from 'dotenv' dotenv.config() import errorReporter from './errorReporter' errorReporter.client.report(new Error('faq example')) ``` `client` will not be configured correctly because it was constructed before `dotenv.config()` was executed. There are (at least) 3 ways to make this work. 1. Preload dotenv: `node --require dotenv/config index.js` (_Note: you do not need to `import` dotenv with this approach_) 2. Import `dotenv/config` instead of `dotenv` (_Note: you do not need to call `dotenv.config()` and must pass options via the command line with this approach_) 3. Create a separate file that will execute `config` first as outlined in [this comment on #133](https://github.com/motdotla/dotenv/issues/133#issuecomment-255298822) ## Contributing Guide See [CONTRIBUTING.md](CONTRIBUTING.md) ## Change Log See [CHANGELOG.md](CHANGELOG.md) ## License See [LICENSE](LICENSE) ## Who's using dotenv Here's just a few of many repositories using dotenv: * [jaws](https://github.com/jaws-framework/jaws-core-js) * [node-lambda](https://github.com/motdotla/node-lambda) * [resume-cli](https://www.npmjs.com/package/resume-cli) * [phant](https://www.npmjs.com/package/phant) * [adafruit-io-node](https://github.com/adafruit/adafruit-io-node) * [mockbin](https://www.npmjs.com/package/mockbin) * [and many more...](https://www.npmjs.com/browse/depended/dotenv) ## Go well with dotenv Here's some projects that expand on dotenv. Check them out. 
* [require-environment-variables](https://github.com/bjoshuanoah/require-environment-variables) * [dotenv-safe](https://github.com/rolodato/dotenv-safe) * [envalid](https://github.com/af/envalid) * [lookenv](https://github.com/RodrigoEspinosa/lookenv) * [run.env](https://www.npmjs.com/package/run.env) * [dotenv-webpack](https://github.com/mrsteele/dotenv-webpack) https-proxy-agent ================ ### An HTTP(s) proxy `http.Agent` implementation for HTTPS [![Build Status](https://travis-ci.org/TooTallNate/node-https-proxy-agent.svg?branch=master)](https://travis-ci.org/TooTallNate/node-https-proxy-agent) This module provides an `http.Agent` implementation that connects to a specified HTTP or HTTPS proxy server, and can be used with the built-in `https` module. Specifically, this `Agent` implementation connects to an intermediary "proxy" server and issues the [CONNECT HTTP method][CONNECT], which tells the proxy to open a direct TCP connection to the destination server. Since this agent implements the CONNECT HTTP method, it also works with other protocols that use this method when connecting over proxies (i.e. WebSockets). See the "Examples" section below for more. Installation ------------ Install with `npm`: ``` bash $ npm install https-proxy-agent ``` Examples -------- #### `https` module example ``` js var url = require('url'); var https = require('https'); var HttpsProxyAgent = require('https-proxy-agent'); // HTTP/HTTPS proxy to connect to var proxy = process.env.http_proxy || 'http://168.63.76.32:3128'; console.log('using proxy server %j', proxy); // HTTPS endpoint for the proxy to connect to var endpoint = process.argv[2] || 'https://graph.facebook.com/tootallnate'; console.log('attempting to GET %j', endpoint); var options = url.parse(endpoint); // create an instance of the `HttpsProxyAgent` class with the proxy server information var agent = new HttpsProxyAgent(proxy); options.agent = agent; https.get(options, function (res) { console.log('"response" event!', res.headers); res.pipe(process.stdout); }); ``` #### `ws` WebSocket connection example ``` js var url = require('url'); var WebSocket = require('ws'); var HttpsProxyAgent = require('https-proxy-agent'); // HTTP/HTTPS proxy to connect to var proxy = process.env.http_proxy || 'http://168.63.76.32:3128'; console.log('using proxy server %j', proxy); // WebSocket endpoint for the proxy to connect to var endpoint = process.argv[2] || 'ws://echo.websocket.org'; var parsed = url.parse(endpoint); console.log('attempting to connect to WebSocket %j', endpoint); // create an instance of the `HttpsProxyAgent` class with the proxy server information var options = url.parse(proxy); var agent = new HttpsProxyAgent(options); // finally, initiate the WebSocket connection var socket = new WebSocket(endpoint, { agent: agent }); socket.on('open', function () { console.log('"open" event!'); socket.send('hello world'); }); socket.on('message', function (data, flags) { console.log('"message" event! %j %j', data, flags); socket.close(); }); ``` API --- ### new HttpsProxyAgent(Object options) The `HttpsProxyAgent` class implements an `http.Agent` subclass that connects to the specified "HTTP(s) proxy server" in order to proxy HTTPS and/or WebSocket requests. This is achieved by using the [HTTP `CONNECT` method][CONNECT]. The `options` argument may either be a string URI of the proxy server to use, or an "options" object with more specific properties: * `host` - String - Proxy host to connect to (may use `hostname` as well). Required. 
* `port` - Number - Proxy port to connect to. Required. * `protocol` - String - If `https:`, then use TLS to connect to the proxy. * `headers` - Object - Additional HTTP headers to be sent on the HTTP CONNECT method. * Any other options given are passed to the `net.connect()`/`tls.connect()` functions. License ------- (The MIT License) Copyright (c) 2013 Nathan Rajlich &lt;nathan@tootallnate.net&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. [CONNECT]: http://en.wikipedia.org/wiki/HTTP_tunnel#HTTP_CONNECT_Tunneling # fast-deep-equal The fastest deep equal with ES6 Map, Set and Typed arrays support. [![Build Status](https://travis-ci.org/epoberezkin/fast-deep-equal.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-deep-equal) [![npm](https://img.shields.io/npm/v/fast-deep-equal.svg)](https://www.npmjs.com/package/fast-deep-equal) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-deep-equal/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-deep-equal?branch=master) ## Install ```bash npm install fast-deep-equal ``` ## Features - ES5 compatible - works in node.js (8+) and browsers (IE9+) - checks equality of Date and RegExp objects by value. 
ES6 equal (`require('fast-deep-equal/es6')`) also supports: - Maps - Sets - Typed arrays ## Usage ```javascript var equal = require('fast-deep-equal'); console.log(equal({foo: 'bar'}, {foo: 'bar'})); // true ``` To support ES6 Maps, Sets and Typed arrays equality use: ```javascript var equal = require('fast-deep-equal/es6'); console.log(equal(new Int16Array([1, 2]), new Int16Array([1, 2]))); // true ``` To use with React (avoiding the traversal of React elements' _owner property, which contains circular references and is not needed when comparing the elements - borrowed from [react-fast-compare](https://github.com/FormidableLabs/react-fast-compare)): ```javascript var equal = require('fast-deep-equal/react'); var equal = require('fast-deep-equal/es6/react'); ``` ## Performance benchmark Node.js v12.6.0: ``` fast-deep-equal x 261,950 ops/sec ±0.52% (89 runs sampled) fast-deep-equal/es6 x 212,991 ops/sec ±0.34% (92 runs sampled) fast-equals x 230,957 ops/sec ±0.83% (85 runs sampled) nano-equal x 187,995 ops/sec ±0.53% (88 runs sampled) shallow-equal-fuzzy x 138,302 ops/sec ±0.49% (90 runs sampled) underscore.isEqual x 74,423 ops/sec ±0.38% (89 runs sampled) lodash.isEqual x 36,637 ops/sec ±0.72% (90 runs sampled) deep-equal x 2,310 ops/sec ±0.37% (90 runs sampled) deep-eql x 35,312 ops/sec ±0.67% (91 runs sampled) ramda.equals x 12,054 ops/sec ±0.40% (91 runs sampled) util.isDeepStrictEqual x 46,440 ops/sec ±0.43% (90 runs sampled) assert.deepStrictEqual x 456 ops/sec ±0.71% (88 runs sampled) The fastest is fast-deep-equal ``` To run the benchmark (requires node.js 6+): ```bash npm run benchmark ``` __Please note__: this benchmark runs against the available test cases. To choose the most performant library for your application, it is recommended to benchmark against your data and to NOT expect this benchmark to reflect the performance difference in your application. ## Enterprise support The fast-deep-equal package is part of the [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-deep-equal?utm_source=npm-fast-deep-equal&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ## License [MIT](https://github.com/epoberezkin/fast-deep-equal/blob/master/LICENSE) # minizlib A fast zlib stream built on [minipass](http://npm.im/minipass) and Node.js's zlib binding. This module was created to serve the needs of [node-tar](http://npm.im/tar) and [minipass-fetch](http://npm.im/minipass-fetch). Brotli is supported in versions of node with a Brotli binding. ## How does this differ from the streams in `require('zlib')`? First, there are no convenience methods to compress or decompress a buffer. If you want those, use the built-in `zlib` module. This is only streams. That being said, Minipass streams make it fairly easy to use as one-liners: `new zlib.Deflate().end(data).read()` will return the deflate compressed result. This module compresses and decompresses the data as fast as you feed it in. It is synchronous, and runs on the main process thread. Zlib and Brotli operations can be high CPU, but they're very fast, and doing it this way means much less bookkeeping and artificial deferral. 
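For instance, a minimal sketch of the one-liner style mentioned above (assuming `Inflate` round-trips the same way the documented `Deflate` one-liner does):

```js
const zlib = require('minizlib')

// Deflate a buffer synchronously in a single expression, as described above
const compressed = new zlib.Deflate().end(Buffer.from('hello world')).read()

// Round-trip it back through Inflate the same way
const roundTripped = new zlib.Inflate().end(compressed).read()
console.log(roundTripped.toString()) // 'hello world'
```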
Node's built-in zlib streams are built on top of `stream.Transform`. They do the maximally safe thing with respect to consistent asynchrony, buffering, and backpressure. See [Minipass](http://npm.im/minipass) for more on the differences between Node.js core streams and Minipass streams, and the convenience methods provided by that class. ## Classes - Deflate - Inflate - Gzip - Gunzip - DeflateRaw - InflateRaw - Unzip - BrotliCompress (Node v10 and higher) - BrotliDecompress (Node v10 and higher) ## USAGE ```js const zlib = require('minizlib') const input = sourceOfCompressedData() const decode = new zlib.BrotliDecompress() const output = whereToWriteTheDecodedData() input.pipe(decode).pipe(output) ``` are-we-there-yet ---------------- Track complex hierarchies of asynchronous task completion statuses. This is intended to give you a way of recording and reporting the progress of the big recursive fan-out and gather type workflows that are so common in async. What you do with this completion data is up to you, but the most common use case is to feed it to one of the many progress bar modules. Most progress bar modules include a rudimentary version of this, but my needs were more complex. Usage ===== ```javascript var fs = require("fs") var TrackerGroup = require("are-we-there-yet").TrackerGroup var top = new TrackerGroup("program") var single = top.newItem("one thing", 100) single.completeWork(20) console.log(top.completed()) // 0.2 fs.stat("file", function(er, stat) { if (er) throw er var stream = top.newStream("file", stat.size) console.log(top.completed()) // now 0.1 as single is 50% of the job and is 20% complete // and 50% * 20% == 10% fs.createReadStream("file").pipe(stream).on("data", function (chunk) { // do stuff with chunk }) top.on("change", function (name) { // called each time a chunk is read from "file" // top.completed() will start at 0.1 and fill up to 0.6 as the file is read }) }) ``` Shared Methods ============== * var completed = tracker.completed() Implemented in: `Tracker`, `TrackerGroup`, `TrackerStream` Returns the ratio of completed work to work to be done. Range of 0 to 1. * tracker.finish() Implemented in: `Tracker`, `TrackerGroup` Marks the tracker as completed. With a TrackerGroup this marks all of its components as completed, which in turn means that `tracker.completed()` for this will now be 1. This will result in one or more `change` events being emitted. Events ====== All tracker objects emit `change` events with the following arguments: ``` function (name, completed, tracker) ``` `name` is the name of the tracker that originally emitted the event, or if it didn't have one, the first containing tracker group that had one. `completed` is the percent complete (as returned by the `tracker.completed()` method). `tracker` is the tracker object that you are listening for events on. TrackerGroup ============ * var tracker = new TrackerGroup(**name**) * **name** *(optional)* - The name of this tracker group, used in change notifications if the component updating didn't have a name. Defaults to undefined. Creates a new empty tracker aggregation group. These are trackers whose completion status is determined by the completion status of other trackers. * tracker.addUnit(**otherTracker**, **weight**) * **otherTracker** - Any of the other are-we-there-yet tracker objects * **weight** *(optional)* - The weight to give the tracker, defaults to 1. Adds the **otherTracker** to this aggregation group. 
The weight determines how long you expect this tracker to take to complete in proportion to other units. So for instance, if you add one tracker with a weight of 1 and another with a weight of 2, you're saying the second will take twice as long to complete as the first. As such, the first will account for 33% of the completion of this tracker and the second will account for the other 67%. Returns **otherTracker**. * var subGroup = tracker.newGroup(**name**, **weight**) The above is exactly equivalent to: ```javascript var subGroup = tracker.addUnit(new TrackerGroup(name), weight) ``` * var subItem = tracker.newItem(**name**, **todo**, **weight**) The above is exactly equivalent to: ```javascript var subItem = tracker.addUnit(new Tracker(name, todo), weight) ``` * var subStream = tracker.newStream(**name**, **todo**, **weight**) The above is exactly equivalent to: ```javascript var subStream = tracker.addUnit(new TrackerStream(name, todo), weight) ``` * console.log( tracker.debug() ) Returns a tree showing the completion of this tracker group and all of its children, including recursively entering all of the children. Tracker ======= * var tracker = new Tracker(**name**, **todo**) * **name** *(optional)* The name of this counter to report in change events. Defaults to undefined. * **todo** *(optional)* The amount of work todo (a number). Defaults to 0. Ordinarily these are constructed as a part of a tracker group (via `newItem`). * var completed = tracker.completed() Returns the ratio of completed work to work to be done. Range of 0 to 1. If total work to be done is 0 then it will return 0. * tracker.addWork(**todo**) * **todo** A number to add to the amount of work to be done. Increases the amount of work to be done, thus decreasing the completion percentage. Triggers a `change` event. * tracker.completeWork(**completed**) * **completed** A number to add to the work complete Increase the amount of work complete, thus increasing the completion percentage. Will never increase the work completed past the amount of work todo. That is, percentages > 100% are not allowed. Triggers a `change` event. * tracker.finish() Marks this tracker as finished, tracker.completed() will now be 1. Triggers a `change` event. TrackerStream ============= * var tracker = new TrackerStream(**name**, **size**, **options**) * **name** *(optional)* The name of this counter to report in change events. Defaults to undefined. * **size** *(optional)* The number of bytes being sent through this stream. * **options** *(optional)* A hash of stream options The tracker stream object is a pass through stream that updates an internal tracker object each time a block passes through. It's intended to track downloads, file extraction and other related activities. You use it by piping your data source into it and then using it as your data source. If your data has a length attribute then that's used as the amount of work completed when the chunk is passed through. If it does not (eg, object streams) then each chunk counts as completing 1 unit of work, so your size should be the total number of objects being streamed. * tracker.addWork(**todo**) * **todo** Increase the expected overall size by **todo** bytes. Increases the amount of work to be done, thus decreasing the completion percentage. Triggers a `change` event. 
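As a closing recap of the `Tracker` API documented above, here is a minimal standalone sketch (the name and work amounts are illustrative):

```javascript
var Tracker = require("are-we-there-yet").Tracker

// A standalone tracker with 10 units of work to do
var copy = new Tracker("copy files", 10)

copy.on("change", function (name, completed) {
  // name is "copy files"; completed is the ratio from tracker.completed()
  console.log(name, Math.round(completed * 100) + "%")
})

copy.completeWork(5)  // 5 of 10 done   -> completed() === 0.5
copy.addWork(10)      // 10 more to do  -> completed() === 0.25
copy.finish()         // marks it done  -> completed() === 1
```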
# npm-packlist

[![Build Status](https://travis-ci.com/npm/npm-packlist.svg?token=hHeDp9pQmz9kvsgRNVHy&branch=master)](https://travis-ci.com/npm/npm-packlist)

Get a list of the files to add from a folder into an npm package.

These can be handed to [tar](http://npm.im/tar) like so to make an npm package tarball:

```js
const packlist = require('npm-packlist')
const tar = require('tar')
const packageDir = '/path/to/package'
const packageTarball = '/path/to/package.tgz'

packlist({ path: packageDir })
  .then(files => tar.create({
    prefix: 'package/',
    cwd: packageDir,
    file: packageTarball,
    gzip: true
  }, files))
  .then(_ => {
    // tarball has been created, continue with your day
  })
```

This uses the following rules:

1. If a `package.json` file is found, and it has a `files` list, then ignore everything that isn't in `files`. Always include the readme, license, notice, changes, changelog, and history files, if they exist, and the `package.json` file itself.
2. If there's no `package.json` file (or it has no `files` list), and there is a `.npmignore` file, then ignore all the files in the `.npmignore` file.
3. If there's no `package.json` with a `files` list, and there's no `.npmignore` file, but there is a `.gitignore` file, then ignore all the files in the `.gitignore` file.
4. Everything in the root `node_modules` is ignored, unless it's a bundled dependency. If it IS a bundled dependency, and it's a symbolic link, then the target of the link is included, not the symlink itself.
5. Unless they're explicitly included (by being in a `files` list, or a `!negated` rule in a relevant `.npmignore` or `.gitignore`), always ignore certain common cruft files:
    1. `.npmignore` and `.gitignore` files (their effect is in the package already, there's no need to include them in the package)
    2. editor junk like `.*.swp`, `._*` and `.*.orig` files
    3. `.npmrc` files (these may contain private configs)
    4. The `node_modules/.bin` folder
    5. Waf and gyp cruft like `/build/config.gypi` and `.lock-wscript`
    6. Darwin's `.DS_Store` files because wtf are those even
    7. `npm-debug.log` files at the root of a project

You can explicitly re-include any of these with a `files` list in `package.json` or a negated ignore file rule.

## API

Same API as [ignore-walk](http://npm.im/ignore-walk), just with a hard-coded file list and rule sets.

The `Walker` and `WalkerSync` classes take a `bundled` argument, which is a list of package names to include from node_modules. When calling the top-level `packlist()` and `packlist.sync()` functions, this module calls into `npm-bundled` directly.
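If a promise-based flow isn't needed, the same list can be produced synchronously via the `packlist.sync()` function mentioned above. A minimal sketch (the path is a placeholder):

```js
const packlist = require('npm-packlist')

// Synchronous variant of the tarball example above
const files = packlist.sync({ path: '/path/to/package' })
console.log(files) // e.g. [ 'package.json', 'README.md', 'index.js', ... ]
```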
# safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` [Get supported safe-buffer with the Tidelift Subscription](https://tidelift.com/subscription/pkg/npm-safe-buffer?utm_source=npm-safe-buffer&utm_medium=referral&utm_campaign=readme) ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. 
```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. 
Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. 
Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. 
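The linked commit contains the authoritative fix, but as a rough sketch, the general defense against this class of bug is to validate the argument type before it ever reaches the overloaded `Buffer` constructor (or to avoid that constructor entirely):

```js
// A minimal sketch of defensive input handling; the actual fix in the
// commit linked above differs in its details.
function toHexSafe (str) {
  if (typeof str !== 'string') {
    throw new TypeError('expected a string')
  }
  // Buffer.from never hands back uninitialized memory for a string input
  return Buffer.from(str, 'utf8').toString('hex')
}
```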
#### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. 
We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) # ASAP [![Build Status](https://travis-ci.org/kriskowal/asap.png?branch=master)](https://travis-ci.org/kriskowal/asap) Promise and asynchronous observer libraries, as well as hand-rolled callback programs and libraries, often need a mechanism to postpone the execution of a callback until the next available event. (See [Designing API’s for Asynchrony][Zalgo].) The `asap` function executes a task **as soon as possible** but not before it returns, waiting only for the completion of the current event and previously scheduled tasks. ```javascript asap(function () { // ... }); ``` [Zalgo]: http://blog.izs.me/post/59142742143/designing-apis-for-asynchrony This CommonJS package provides an `asap` module that exports a function that executes a task function *as soon as possible*. ASAP strives to schedule events to occur before yielding for IO, reflow, or redrawing. Each event receives an independent stack, with only platform code in parent frames and the events run in the order they are scheduled. ASAP provides a fast event queue that will execute tasks until it is empty before yielding to the JavaScript engine's underlying event-loop. 
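As a quick illustration of the ordering described above, a task passed to `asap` runs after the current event returns, but before any timers, IO, or rendering get a chance to run:

```javascript
var asap = require('asap')

asap(function () {
  console.log('two') // runs once the current event has returned
})

console.log('one')
// prints "one", then "two", with no timer delay in between
```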
When a task gets added to a previously empty event queue, ASAP schedules a flush event, preferring for that event to occur before the JavaScript engine has an opportunity to perform IO tasks or rendering, thus making the first task and subsequent tasks semantically indistinguishable. ASAP uses a variety of techniques to preserve this invariant on different versions of browsers and Node.js. By design, ASAP prevents input events from being handled until the task queue is empty. If the process is busy enough, this may cause incoming connection requests to be dropped, and may cause existing connections to inform the sender to reduce the transmission rate or stall. ASAP allows this on the theory that, if there is enough work to do, there is no sense in looking for trouble. As a consequence, ASAP can interfere with smooth animation. If your task should be tied to the rendering loop, consider using `requestAnimationFrame` instead. A long sequence of tasks can also effect the long running script dialog. If this is a problem, you may be able to use ASAP’s cousin `setImmediate` to break long processes into shorter intervals and periodically allow the browser to breathe. `setImmediate` will yield for IO, reflow, and repaint events. It also returns a handler and can be canceled. For a `setImmediate` shim, consider [YuzuJS setImmediate][setImmediate]. [setImmediate]: https://github.com/YuzuJS/setImmediate Take care. ASAP can sustain infinite recursive calls without warning. It will not halt from a stack overflow, and it will not consume unbounded memory. This is behaviorally equivalent to an infinite loop. Just as with infinite loops, you can monitor a Node.js process for this behavior with a heart-beat signal. As with infinite loops, a very small amount of caution goes a long way to avoiding problems. ```javascript function loop() { asap(loop); } loop(); ``` In browsers, if a task throws an exception, it will not interrupt the flushing of high-priority tasks. The exception will be postponed to a later, low-priority event to avoid slow-downs. In Node.js, if a task throws an exception, ASAP will resume flushing only if—and only after—the error is handled by `domain.on("error")` or `process.on("uncaughtException")`. ## Raw ASAP Checking for exceptions comes at a cost. The package also provides an `asap/raw` module that exports the underlying implementation which is faster but stalls if a task throws an exception. This internal version of the ASAP function does not check for errors. If a task does throw an error, it will stall the event queue unless you manually call `rawAsap.requestFlush()` before throwing the error, or any time after. In Node.js, `asap/raw` also runs all tasks outside any domain. If you need a task to be bound to your domain, you will have to do it manually. ```js if (process.domain) { task = process.domain.bind(task); } rawAsap(task); ``` ## Tasks A task may be any object that implements `call()`. A function will suffice, but closures tend not to be reusable and can cause garbage collector churn. Both `asap` and `rawAsap` accept task objects to give you the option of recycling task objects or using higher callable object abstractions. See the `asap` source for an illustration. ## Compatibility ASAP is tested on Node.js v0.10 and in a broad spectrum of web browsers. The following charts capture the browser test results for the most recent release. The first chart shows test results for ASAP running in the main window context. 
The second chart shows test results for ASAP running in a web worker context. Test results are inconclusive (grey) on browsers that do not support web workers. These data are captured automatically by [Continuous Integration][].

[Continuous Integration]: https://github.com/kriskowal/asap/blob/master/CONTRIBUTING.md

![Browser Compatibility](http://kriskowal-asap.s3-website-us-west-2.amazonaws.com/train/integration-2/saucelabs-results-matrix.svg)

![Compatibility in Web Workers](http://kriskowal-asap.s3-website-us-west-2.amazonaws.com/train/integration-2/saucelabs-worker-results-matrix.svg)

## Caveats

When a task is added to an empty event queue, it is not always possible to guarantee that the task queue will begin flushing immediately after the current event. However, once the task queue begins flushing, it will not yield until the queue is empty, even if the queue grows while executing tasks.

The following browsers allow the use of [DOM mutation observers][] to access the HTML [microtask queue][], and thus begin flushing ASAP's task queue immediately at the end of the current event loop turn, before any rendering or IO:

[microtask queue]: http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#microtask-queue
[DOM mutation observers]: http://dom.spec.whatwg.org/#mutation-observers

- Android 4–4.3
- Chrome 26–34
- Firefox 14–29
- Internet Explorer 11
- iPad Safari 6–7.1
- iPhone Safari 7–7.1
- Safari 6–7

In the absence of mutation observers, there are a few browsers, and situations like web workers in some of the above browsers, where [message channels][] would be a useful way to avoid falling back to timers. Message channels give direct access to the HTML [task queue][], so the ASAP task queue would flush after any already queued rendering and IO tasks, but without having the minimum delay imposed by timers. However, among these browsers, Internet Explorer 10 and Safari do not reliably dispatch messages, so they are not worth the trouble to implement.

[message channels]: http://www.whatwg.org/specs/web-apps/current-work/multipage/web-messaging.html#message-channels
[task queue]: http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#concept-task

- Internet Explorer 10
- Safari 5.0–1
- Opera 11–12

In the absence of mutation observers, these browsers and the following browsers all fall back to using `setTimeout` and `setInterval` to ensure that a `flush` occurs. The implementation uses both and cancels whatever handler loses the race, since `setTimeout` tends to occasionally skip tasks in unisolated circumstances. Timers generally delay the flushing of ASAP's task queue for four milliseconds.

- Firefox 3–13
- Internet Explorer 6–10
- iPad Safari 4.3
- Lynx 2.8.7

## Heritage

ASAP has been factored out of the [Q][] asynchronous promise library. It originally had a naïve implementation in terms of `setTimeout`, but [Malte Ubl][NonBlocking] provided an insight that `postMessage` might be useful for creating a high-priority, no-delay event dispatch hack. Since then, Internet Explorer proposed and implemented `setImmediate`.

Robert Katić began contributing to Q by measuring the performance of the internal implementation of `asap`, paying particular attention to error recovery. Domenic, Robert, and Kris Kowal collectively settled on the current strategy of unrolling the high-priority event queue internally regardless of what strategy we used to dispatch the potentially lower-priority flush event. Domenic went on to make ASAP cooperate with Node.js domains.
[Q]: https://github.com/kriskowal/q
[NonBlocking]: http://www.nonblocking.io/2011/06/windownexttick.html

For further reading, Nicholas Zakas provided a thorough article on [The Case for setImmediate][NCZ].

[NCZ]: http://www.nczonline.net/blog/2013/07/09/the-case-for-setimmediate/

Ember’s RSVP promise implementation later [adopted][RSVP ASAP] the name ASAP but further developed the implementation. Particularly, the `MessagePort` implementation was abandoned due to interaction [problems with Mobile Internet Explorer][IE Problems] in favor of an implementation backed by the newer and more reliable DOM `MutationObserver` interface. These changes were back-ported into this library.

[IE Problems]: https://github.com/cujojs/when/issues/197
[RSVP ASAP]: https://github.com/tildeio/rsvp.js/blob/cddf7232546a9cf858524b75cde6f9edf72620a7/lib/rsvp/asap.js

In addition, ASAP was factored into `asap` and `asap/raw`, such that `asap` remained exception-safe, but `asap/raw` provided a tight kernel that could be used for tasks guaranteed not to throw exceptions. This core is useful for promise implementations that capture thrown errors in rejected promises and do not need a second safety net. At the same time, the exception handling in `asap` was factored into separate implementations for Node.js and browsers, using the [Browserify][Browser Config] `browser` property in `package.json` to instruct browser module loaders and bundlers, including [Browserify][], [Mr][], and [Mop][], to use the browser-only implementation.

[Browser Config]: https://gist.github.com/defunctzombie/4339901
[Browserify]: https://github.com/substack/node-browserify
[Mr]: https://github.com/montagejs/mr
[Mop]: https://github.com/montagejs/mop

## License

Copyright 2009-2014 by Contributors
MIT License (enclosed)

humanize-ms
---------------

[![NPM version][npm-image]][npm-url]
[![build status][travis-image]][travis-url]
[![Test coverage][coveralls-image]][coveralls-url]
[![Gittip][gittip-image]][gittip-url]
[![David deps][david-image]][david-url]

[npm-image]: https://img.shields.io/npm/v/humanize-ms.svg?style=flat
[npm-url]: https://npmjs.org/package/humanize-ms
[travis-image]: https://img.shields.io/travis/node-modules/humanize-ms.svg?style=flat
[travis-url]: https://travis-ci.org/node-modules/humanize-ms
[coveralls-image]: https://img.shields.io/coveralls/node-modules/humanize-ms.svg?style=flat
[coveralls-url]: https://coveralls.io/r/node-modules/humanize-ms?branch=master
[gittip-image]: https://img.shields.io/gittip/dead-horse.svg?style=flat
[gittip-url]: https://www.gittip.com/dead-horse/
[david-image]: https://img.shields.io/david/node-modules/humanize-ms.svg?style=flat
[david-url]: https://david-dm.org/node-modules/humanize-ms

Transform a humanized time string (or a number of milliseconds) into milliseconds.

## Installation

```bash
$ npm install humanize-ms
```

## Examples

```js
var ms = require('humanize-ms');

ms('1s') // 1000
ms(1000) // 1000
```

### License

MIT

# is-date-object <sup>[![Version Badge][2]][1]</sup>

[![Build Status][3]][4] [![dependency status][5]][6] [![dev dependency status][7]][8] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url]

[![npm badge][11]][1]

[![browser support][9]][10]

Is this value a JS Date object? This module works cross-realm/iframe, and despite ES6 @@toStringTag.
## Example

```js
var isDate = require('is-date-object');
var assert = require('assert');

assert.notOk(isDate(undefined));
assert.notOk(isDate(null));
assert.notOk(isDate(false));
assert.notOk(isDate(true));
assert.notOk(isDate(42));
assert.notOk(isDate('foo'));
assert.notOk(isDate(function () {}));
assert.notOk(isDate([]));
assert.notOk(isDate({}));
assert.notOk(isDate(/a/g));
assert.notOk(isDate(new RegExp('a', 'g')));
assert.ok(isDate(new Date()));
```

## Tests

Simply clone the repo, `npm install`, and run `npm test`

[1]: https://npmjs.org/package/is-date-object
[2]: http://versionbadg.es/ljharb/is-date-object.svg
[3]: https://travis-ci.org/ljharb/is-date-object.svg
[4]: https://travis-ci.org/ljharb/is-date-object
[5]: https://david-dm.org/ljharb/is-date-object.svg
[6]: https://david-dm.org/ljharb/is-date-object
[7]: https://david-dm.org/ljharb/is-date-object/dev-status.svg
[8]: https://david-dm.org/ljharb/is-date-object#info=devDependencies
[9]: https://ci.testling.com/ljharb/is-date-object.png
[10]: https://ci.testling.com/ljharb/is-date-object
[11]: https://nodei.co/npm/is-date-object.png?downloads=true&stars=true
[license-image]: http://img.shields.io/npm/l/is-date-object.svg
[license-url]: LICENSE
[downloads-image]: http://img.shields.io/npm/dm/is-date-object.svg
[downloads-url]: http://npm-stat.com/charts.html?package=is-date-object

# wrappy

Callback wrapping utility

## USAGE

```javascript
var assert = require('assert')
var wrappy = require("wrappy")

// var wrapper = wrappy(wrapperFunction)

// make sure a cb is called only once
// See also: http://npm.im/once for this specific use case
var once = wrappy(function (cb) {
  var called = false
  return function () {
    if (called) return
    called = true
    return cb.apply(this, arguments)
  }
})

function printBoo () {
  console.log('boo')
}
// has some rando property
printBoo.iAmBooPrinter = true

var onlyPrintOnce = once(printBoo)

onlyPrintOnce() // prints 'boo'
onlyPrintOnce() // does nothing

// random property is retained!
assert.equal(onlyPrintOnce.iAmBooPrinter, true)
```

# ansicolors

[![build status](https://secure.travis-ci.org/thlorenz/ansicolors.png)](http://next.travis-ci.org/thlorenz/ansicolors)

Functions that surround a string with ansicolor codes so it prints in color.

In case you need styles, like `bold`, have a look at [ansistyles](https://github.com/thlorenz/ansistyles).

## Installation

    npm install ansicolors

## Usage

```js
var colors = require('ansicolors');

// foreground colors
var redHerring = colors.red('herring');
var blueMoon = colors.blue('moon');
var brightBlueMoon = colors.brightBlue('moon');

console.log(redHerring);      // this will print 'herring' in red
console.log(blueMoon);        // this will print 'moon' in blue
console.log(brightBlueMoon);  // I think you got the idea

// background colors
console.log(colors.bgYellow('printed on yellow background'));
console.log(colors.bgBrightBlue('printed on bright blue background'));

// mixing background and foreground colors
// below two lines have the same result (order in which bg and fg are combined doesn't matter)
console.log(colors.bgYellow(colors.blue('printed on yellow background in blue')));
console.log(colors.blue(colors.bgYellow('printed on yellow background in blue')));
```

## Advanced API

**ansicolors** allows you to access opening and closing escape sequences separately.
```js var colors = require('ansicolors'); function inspect(obj, depth) { return require('util').inspect(obj, false, depth || 5, true); } console.log('open blue', inspect(colors.open.blue)); console.log('close bgBlack', inspect(colors.close.bgBlack)); // => open blue '\u001b[34m' // close bgBlack '\u001b[49m' ``` ## Tests Look at the [tests](https://github.com/thlorenz/ansicolors/blob/master/test/ansicolors.js) to see more examples and/or run them via: npm explore ansicolors && npm test ## Alternatives **ansicolors** tries to meet simple use cases with a very simple API. However, if you need a more powerful ansi formatting tool, I'd suggest to look at the [features](https://github.com/TooTallNate/ansi.js#features) of the [ansi module](https://github.com/TooTallNate/ansi.js). # readable-stream ***Node-core v8.11.1 streams for userland*** [![Build Status](https://travis-ci.org/nodejs/readable-stream.svg?branch=master)](https://travis-ci.org/nodejs/readable-stream) [![NPM](https://nodei.co/npm/readable-stream.png?downloads=true&downloadRank=true)](https://nodei.co/npm/readable-stream/) [![NPM](https://nodei.co/npm-dl/readable-stream.png?&months=6&height=3)](https://nodei.co/npm/readable-stream/) [![Sauce Test Status](https://saucelabs.com/browser-matrix/readable-stream.svg)](https://saucelabs.com/u/readable-stream) ```bash npm install --save readable-stream ``` ***Node-core streams for userland*** This package is a mirror of the Streams2 and Streams3 implementations in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.11.1/docs/api/stream.html). If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use **readable-stream** *only* and avoid the *"stream"* module in Node-core, for background see [this blogpost](http://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html). As of version 2.0.0 **readable-stream** uses semantic versioning. # Streams Working Group `readable-stream` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. 
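To illustrate the recommendation above to depend on this package rather than the core `stream` module, here is a minimal sketch (assuming nothing beyond the mirrored `stream` exports) of a userland Transform built on readable-stream:

```js
// Same surface as require('stream'), but pinned to the readable-stream version
const { Transform } = require('readable-stream')

const upperCase = new Transform({
  transform (chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase())
  }
})

process.stdin.pipe(upperCase).pipe(process.stdout)
```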
<a name="members"></a> ## Team Members * **Chris Dickinson** ([@chrisdickinson](https://github.com/chrisdickinson)) &lt;christopher.s.dickinson@gmail.com&gt; - Release GPG key: 9554F04D7259F04124DE6B476D5A82AC7E37093B * **Calvin Metcalf** ([@calvinmetcalf](https://github.com/calvinmetcalf)) &lt;calvin.metcalf@gmail.com&gt; - Release GPG key: F3EF5F62A87FC27A22E643F714CE4FF5015AA242 * **Rod Vagg** ([@rvagg](https://github.com/rvagg)) &lt;rod@vagg.org&gt; - Release GPG key: DD8F2338BAE7501E3DD5AC78C273792F7D83545D * **Sam Newman** ([@sonewman](https://github.com/sonewman)) &lt;newmansam@outlook.com&gt; * **Mathias Buus** ([@mafintosh](https://github.com/mafintosh)) &lt;mathiasbuus@gmail.com&gt; * **Domenic Denicola** ([@domenic](https://github.com/domenic)) &lt;d@domenic.me&gt; * **Matteo Collina** ([@mcollina](https://github.com/mcollina)) &lt;matteo.collina@gmail.com&gt; - Release GPG key: 3ABC01543F22DD2239285CDD818674489FBC127E * **Irina Shestak** ([@lrlna](https://github.com/lrlna)) &lt;shestak.irina@gmail.com&gt; semver(1) -- The semantic versioner for npm =========================================== ## Install ```bash npm install --save semver ```` ## Usage As a node module: ```js const semver = require('semver') semver.valid('1.2.3') // '1.2.3' semver.valid('a.b.c') // null semver.clean(' =v1.2.3 ') // '1.2.3' semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true semver.gt('1.2.3', '9.8.7') // false semver.lt('1.2.3', '9.8.7') // true semver.minVersion('>=1.0.0') // '1.0.0' semver.valid(semver.coerce('v2')) // '2.0.0' semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7' ``` As a command-line utility: ``` $ semver -h A JavaScript implementation of the https://semver.org/ specification Copyright Isaac Z. Schlueter Usage: semver [options] <version> [<version> [...]] Prints valid versions sorted by SemVer precedence Options: -r --range <range> Print versions that match the specified range. -i --increment [<level>] Increment a version by the specified level. Level can be one of: major, minor, patch, premajor, preminor, prepatch, or prerelease. Default level is 'patch'. Only one version may be specified. --preid <identifier> Identifier to be used to prefix premajor, preminor, prepatch or prerelease version increments. -l --loose Interpret versions and ranges loosely -p --include-prerelease Always include prerelease versions in range matching -c --coerce Coerce a string into SemVer if possible (does not imply --loose) Program exits successfully if any valid version satisfies all supplied ranges, and prints all satisfying versions. If no satisfying versions are found, then exits failure. Versions are printed in ascending order, so supplying multiple versions to the utility will just sort them. ``` ## Versions A "version" is described by the `v2.0.0` specification found at <https://semver.org/>. A leading `"="` or `"v"` character is stripped off and ignored. ## Ranges A `version range` is a set of `comparators` which specify versions that satisfy the range. A `comparator` is composed of an `operator` and a `version`. The set of primitive `operators` is: * `<` Less than * `<=` Less than or equal to * `>` Greater than * `>=` Greater than or equal to * `=` Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included. For example, the comparator `>=1.2.7` would match the versions `1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` or `1.1.0`. 
Comparators can be joined by whitespace to form a `comparator set`, which is satisfied by the **intersection** of all of the comparators it includes. A range is composed of one or more comparator sets, joined by `||`. A version matches a range if and only if every comparator in at least one of the `||`-separated comparator sets is satisfied by the version. For example, the range `>=1.2.7 <1.3.0` would match the versions `1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, or `1.1.0`. The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, `1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. ### Prerelease Tags If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then it will only be allowed to satisfy comparator sets if at least one comparator with the same `[major, minor, patch]` tuple also has a prerelease tag. For example, the range `>1.2.3-alpha.3` would be allowed to match the version `1.2.3-alpha.7`, but it would *not* be satisfied by `3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater than" `1.2.3-alpha.3` according to the SemVer sort rules. The version range only accepts prerelease tags on the `1.2.3` version. The version `3.4.5` *would* satisfy the range, because it does not have a prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author's design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics. Second, a user who has opted into using a prerelease version has clearly indicated the intent to use *that specific* set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the *next* set of prerelease versions. Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the `includePrerelease` flag on the options object to any [functions](https://github.com/npm/node-semver#functions) that do range matching. #### Prerelease Identifiers The method `.inc` takes an additional `identifier` string argument that will append the value of the string as a prerelease identifier: ```javascript semver.inc('1.2.3', 'prerelease', 'beta') // '1.2.4-beta.0' ``` command-line example: ```bash $ semver 1.2.3 -i prerelease --preid beta 1.2.4-beta.0 ``` Which then can be used to increment further: ```bash $ semver 1.2.4-beta.0 -i prerelease 1.2.4-beta.1 ``` ### Advanced Range Syntax Advanced range syntax desugars to primitive comparators in deterministic ways. Advanced ranges may be combined in the same way as primitive comparators using white space or `||`. #### Hyphen Ranges `X.Y.Z - A.B.C` Specifies an inclusive set. * `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes. * `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts. 
* `1.2.3 - 2.3` := `>=1.2.3 <2.4.0` * `1.2.3 - 2` := `>=1.2.3 <3.0.0` #### X-Ranges `1.2.x` `1.X` `1.2.*` `*` Any of `X`, `x`, or `*` may be used to "stand in" for one of the numeric values in the `[major, minor, patch]` tuple. * `*` := `>=0.0.0` (Any version satisfies) * `1.x` := `>=1.0.0 <2.0.0` (Matching major version) * `1.2.x` := `>=1.2.0 <1.3.0` (Matching major and minor versions) A partial version range is treated as an X-Range, so the special character is in fact optional. * `""` (empty string) := `*` := `>=0.0.0` * `1` := `1.x.x` := `>=1.0.0 <2.0.0` * `1.2` := `1.2.x` := `>=1.2.0 <1.3.0` #### Tilde Ranges `~1.2.3` `~1.2` `~1` Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not. * `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0` * `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0` (Same as `1.2.x`) * `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0` (Same as `1.x`) * `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0` * `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0` (Same as `0.2.x`) * `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0` (Same as `0.x`) * `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. #### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` Allows changes that do not modify the left-most non-zero digit in the `[major, minor, patch]` tuple. In other words, this allows patch and minor updates for versions `1.0.0` and above, patch updates for versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. Many authors treat a `0.x` version as if the `x` were the major "breaking-change" indicator. Caret ranges are ideal when an author may make breaking changes between `0.2.4` and `0.3.0` releases, which is a common practice. However, it presumes that there will *not* be breaking changes between `0.2.4` and `0.2.5`. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices. * `^1.2.3` := `>=1.2.3 <2.0.0` * `^0.2.3` := `>=0.2.3 <0.3.0` * `^0.0.3` := `>=0.0.3 <0.0.4` * `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. * `^0.0.3-beta` := `>=0.0.3-beta <0.0.4` Note that prereleases in the `0.0.3` version *only* will be allowed, if they are greater than or equal to `beta`. So, `0.0.3-pr.2` would be allowed. When parsing caret ranges, a missing `patch` value desugars to the number `0`, but will allow flexibility within that value, even if the major and minor versions are both `0`. * `^1.2.x` := `>=1.2.0 <2.0.0` * `^0.0.x` := `>=0.0.0 <0.1.0` * `^0.0` := `>=0.0.0 <0.1.0` A missing `minor` and `patch` values will desugar to zero, but also allow flexibility within those values, even if the major version is zero. 
* `^1.x` := `>=1.0.0 <2.0.0` * `^0.x` := `>=0.0.0 <1.0.0` ### Range Grammar Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors: ```bnf range-set ::= range ( logical-or range ) * logical-or ::= ( ' ' ) * '||' ( ' ' ) * range ::= hyphen | simple ( ' ' simple ) * | '' hyphen ::= partial ' - ' partial simple ::= primitive | partial | tilde | caret primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )? xr ::= 'x' | 'X' | '*' | nr nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) * tilde ::= '~' partial caret ::= '^' partial qualifier ::= ( '-' pre )? ( '+' build )? pre ::= parts build ::= parts parts ::= part ( '.' part ) * part ::= nr | [-0-9A-Za-z]+ ``` ## Functions All methods and classes take a final `options` object argument. All options in this object are `false` by default. The options supported are: - `loose` Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the `options` argument is a boolean value instead of an object, it is interpreted to be the `loose` param. - `includePrerelease` Set to suppress the [default behavior](https://github.com/npm/node-semver#prerelease-tags) of excluding prerelease tagged versions from ranges unless they are explicitly opted into. Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse. * `valid(v)`: Return the parsed version, or null if it's not valid. * `inc(v, release)`: Return the version incremented by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if it's not valid * `premajor` in one call will bump the version up to the next major version and down to a prerelease of that major version. `preminor`, and `prepatch` work the same way. * If called from a non-prerelease version, the `prerelease` will work the same as `prepatch`. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it. * `prerelease(v)`: Returns an array of prerelease components, or null if none exist. Example: `prerelease('1.2.3-alpha.1') -> ['alpha', 1]` * `major(v)`: Return the major version number. * `minor(v)`: Return the minor version number. * `patch(v)`: Return the patch version number. * `intersects(r1, r2, loose)`: Return true if the two supplied ranges or comparators intersect. * `parse(v)`: Attempt to parse a string as a semantic version, returning either a `SemVer` object or `null`. ### Comparison * `gt(v1, v2)`: `v1 > v2` * `gte(v1, v2)`: `v1 >= v2` * `lt(v1, v2)`: `v1 < v2` * `lte(v1, v2)`: `v1 <= v2` * `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent, even if they're not the exact same string. You already know how to compare strings. * `neq(v1, v2)`: `v1 != v2` The opposite of `eq`. * `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call the corresponding function above. `"==="` and `"!=="` do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided. * `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if `v2` is greater. Sorts in ascending order if passed to `Array.sort()`. * `rcompare(v1, v2)`: The reverse of compare. Sorts an array of versions in descending order when passed to `Array.sort()`. 
* `diff(v1, v2)`: Returns difference between two versions by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if the versions are the same. ### Comparators * `intersects(comparator)`: Return true if the comparators intersect ### Ranges * `validRange(range)`: Return the valid range or null if it's not valid * `satisfies(version, range)`: Return true if the version satisfies the range. * `maxSatisfying(versions, range)`: Return the highest version in the list that satisfies the range, or `null` if none of them do. * `minSatisfying(versions, range)`: Return the lowest version in the list that satisfies the range, or `null` if none of them do. * `minVersion(range)`: Return the lowest version that can possibly match the given range. * `gtr(version, range)`: Return `true` if version is greater than all the versions possible in the range. * `ltr(version, range)`: Return `true` if version is less than all the versions possible in the range. * `outside(version, range, hilo)`: Return true if the version is outside the bounds of the range in either the high or low direction. The `hilo` argument must be either the string `'>'` or `'<'`. (This is the function called by `gtr` and `ltr`.) * `intersects(range)`: Return true if any of the ranges comparators intersect Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, *or* satisfy a range! For example, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9` until `2.0.0`, so the version `1.2.10` would not be greater than the range (because `2.0.1` satisfies, which is higher), nor less than the range (since `1.2.8` satisfies, which is lower), and it also does not satisfy the range. If you want to know if a version satisfies or does not satisfy a range, use the `satisfies(version, range)` function. ### Coercion * `coerce(version)`: Coerces a string to semver if possible This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., `1`, `1.2`, `1.2.3`) up to the max permitted length (256 characters). Longer versions are simply truncated (`4.6.3.9.2-alpha2` becomes `4.6.3`). All surrounding text is simply ignored (`v3.4 replaces v3.3.1` becomes `3.4.0`). Only text which lacks digits will fail coercion (`version one` is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (`10000000000000000.4.7.4` becomes `4.7.4`). The maximum value for any semver component is `Number.MAX_SAFE_INTEGER || (2**53 - 1)`; higher value components are invalid (`9999999999999999.4.7.4` is likely invalid). # lru cache A cache object that deletes the least-recently-used items. 
[![Build Status](https://travis-ci.org/isaacs/node-lru-cache.svg?branch=master)](https://travis-ci.org/isaacs/node-lru-cache) [![Coverage Status](https://coveralls.io/repos/isaacs/node-lru-cache/badge.svg?service=github)](https://coveralls.io/github/isaacs/node-lru-cache) ## Installation: ```javascript npm install lru-cache --save ``` ## Usage: ```javascript var LRU = require("lru-cache") , options = { max: 500 , length: function (n, key) { return n * 2 + key.length } , dispose: function (key, n) { n.close() } , maxAge: 1000 * 60 * 60 } , cache = LRU(options) , otherCache = LRU(50) // sets just the max size cache.set("key", "value") cache.get("key") // "value" // non-string keys ARE fully supported // but note that it must be THE SAME object, not // just a JSON-equivalent object. var someObject = { a: 1 } cache.set(someObject, 'a value') // Object keys are not toString()-ed cache.set('[object Object]', 'a different value') assert.equal(cache.get(someObject), 'a value') // A similar object with same keys/values won't work, // because it's a different object identity assert.equal(cache.get({ a: 1 }), undefined) cache.reset() // empty the cache ``` If you put more stuff in it, then items will fall out. If you try to put an oversized thing in it, then it'll fall out right away. ## Options * `max` The maximum size of the cache, checked by applying the length function to all values in the cache. Not setting this is kind of silly, since that's the whole purpose of this lib, but it defaults to `Infinity`. * `maxAge` Maximum age in ms. Items are not pro-actively pruned out as they age, but if you try to get an item that is too old, it'll drop it and return undefined instead of giving it to you. * `length` Function that is used to calculate the length of stored items. If you're storing strings or buffers, then you probably want to do something like `function(n, key){return n.length}`. The default is `function(){return 1}`, which is fine if you want to store `max` like-sized things. The item is passed as the first argument, and the key is passed as the second argumnet. * `dispose` Function that is called on items when they are dropped from the cache. This can be handy if you want to close file descriptors or do other cleanup tasks when items are no longer accessible. Called with `key, value`. It's called *before* actually removing the item from the internal cache, so if you want to immediately put it back in, you'll have to do that in a `nextTick` or `setTimeout` callback or it won't do anything. * `stale` By default, if you set a `maxAge`, it'll only actually pull stale items out of the cache when you `get(key)`. (That is, it's not pre-emptively doing a `setTimeout` or anything.) If you set `stale:true`, it'll return the stale value before deleting it. If you don't set this, then it'll return `undefined` when you try to get a stale entry, as if it had already been deleted. * `noDisposeOnSet` By default, if you set a `dispose()` method, then it'll be called whenever a `set()` operation overwrites an existing key. If you set this option, `dispose()` will only be called when a key falls out of the cache, not when it is overwritten. ## API * `set(key, value, maxAge)` * `get(key) => value` Both of these will update the "recently used"-ness of the key. They do what you think. `maxAge` is optional and overrides the cache `maxAge` option if provided. If the key is not found, `get()` will return `undefined`. The key and val can be any value. 
* `peek(key)` Returns the key value (or `undefined` if not found) without updating the "recently used"-ness of the key. (If you find yourself using this a lot, you *might* be using the wrong sort of data structure, but there are some use cases where it's handy.)
* `del(key)` Deletes a key out of the cache.
* `reset()` Clear the cache entirely, throwing away all values.
* `has(key)` Check if a key is in the cache, without updating the recent-ness or deleting it for being stale.
* `forEach(function(value,key,cache), [thisp])` Just like `Array.prototype.forEach`. Iterates over all the keys in the cache, in order of recent-ness. (Ie, more recently used items are iterated over first.)
* `rforEach(function(value,key,cache), [thisp])` The same as `cache.forEach(...)` but items are iterated over in reverse order. (ie, less recently used items are iterated over first.)
* `keys()` Return an array of the keys in the cache.
* `values()` Return an array of the values in the cache.
* `length` Return total length of objects in cache taking into account `length` options function.
* `itemCount` Return total quantity of objects currently in cache. Note that `stale` (see options) items are returned as part of this item count.
* `dump()` Return an array of the cache entries ready for serialization and usage with `destinationCache.load(arr)`.
* `load(cacheEntriesArray)` Loads another cache entries array, obtained with `sourceCache.dump()`, into the cache. The destination cache is reset before loading new entries.
* `prune()` Manually iterates over the entire cache, proactively pruning old entries.

# string_decoder

***Node-core v8.9.4 string_decoder for userland***

[![NPM](https://nodei.co/npm/string_decoder.png?downloads=true&downloadRank=true)](https://nodei.co/npm/string_decoder/) [![NPM](https://nodei.co/npm-dl/string_decoder.png?&months=6&height=3)](https://nodei.co/npm/string_decoder/)

```bash
npm install --save string_decoder
```

***Node-core string_decoder for userland***

This package is a mirror of the string_decoder implementation in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.9.4/docs/api/).

As of version 1.0.0 **string_decoder** uses semantic versioning.

## Previous versions

Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10.

## Update

The *build/* directory contains a build script that will scrape the source from the [nodejs/node](https://github.com/nodejs/node) repo given a specific Node version.

## Streams Working Group

`string_decoder` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include:

* Addressing stream issues on the Node.js issue tracker.
* Authoring and editing stream documentation within the Node.js project.
* Reviewing changes to stream subclasses within the Node.js project.
* Redirecting changes to streams from the Node.js project to this project.
* Assisting in the implementation of stream providers within Node.js.
* Recommending versions of `readable-stream` to be included in Node.js.
* Messaging about the future of streams to give the community advance notice of changes.

See [readable-stream](https://github.com/nodejs/readable-stream) for more details.

# inherits

Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor).
This package exports the standard `inherits` from the node.js `util` module in a node environment, but also provides an alternative browser-friendly implementation through the [browser field](https://gist.github.com/shtylman/4339901). The alternative implementation is a literal copy of the standard one, located in a standalone module to avoid requiring `util`. It also has a shim for old browsers with no `Object.create` support.

While ensuring that you are using the standard `inherits` implementation in a node.js environment, it allows bundlers such as [browserify](https://github.com/substack/node-browserify) to not include the full `util` package in your client code if all you need is just the `inherits` function. This is worthwhile, because the browser shim for the `util` package is large and `inherits` is often the single function you need from it.

It's recommended to use this package instead of `require('util').inherits` for any code that has a chance of being used not only in node.js but in the browser too.

## usage

```js
var inherits = require('inherits');
// then use exactly as the standard one
```

## note on version ~1.0

Version ~1.0 had a completely different motivation and is compatible neither with 2.0 nor with standard node.js `inherits`.

If you are using version ~1.0 and planning to switch to ~2.0, be careful:

* new version uses `super_` instead of `super` for referencing superclass
* new version overwrites current prototype while old one preserves any existing fields on it

# graceful-fs

graceful-fs functions as a drop-in replacement for the fs module, making various improvements.

The improvements are meant to normalize behavior across different platforms and environments, and to make filesystem access more resilient to errors.

## Improvements over [fs module](https://nodejs.org/api/fs.html)

* Queues up `open` and `readdir` calls, and retries them once something closes if there is an EMFILE error from too many file descriptors.
* fixes `lchmod` for Node versions prior to 0.6.2.
* implements `fs.lutimes` if possible. Otherwise it becomes a noop.
* ignores `EINVAL` and `EPERM` errors in `chown`, `fchown` or `lchown` if the user isn't root.
* makes `lchmod` and `lchown` become noops, if not available.
* retries reading a file if `read` results in EAGAIN error.
* On Windows, it retries renaming a file for up to one second if `EACCESS` or `EPERM` error occurs, likely because antivirus software has locked the directory.

## USAGE

```javascript
// use just like fs
var fs = require('graceful-fs')

// now go and do stuff with it...
fs.readFileSync('some-file-or-whatever')
```

## Global Patching

If you want to patch the global fs module (or any other fs-like module) you can do this:

```javascript
// Make sure to read the caveat below.
var realFs = require('fs')
var gracefulFs = require('graceful-fs')
gracefulFs.gracefulify(realFs)
```

This should only ever be done at the top-level application layer, in order to delay on EMFILE errors from any fs-using dependencies. You should **not** do this in a library, because it can cause unexpected delays in other parts of the program.

## Changes

This module is fairly stable at this point, and used by a lot of things. That being said, because it implements a subtle behavior change in a core part of the node API, even modest changes can be extremely breaking, and the versioning is thus biased towards bumping the major when in doubt.
The main change between major versions has been switching between providing a fully-patched `fs` module vs monkey-patching the node core builtin, and the approach by which a non-monkey-patched `fs` was created. The goal is to trade `EMFILE` errors for slower fs operations. So, if you try to open a zillion files, rather than crashing, `open` operations will be queued up and wait for something else to `close`. There are advantages to each approach. Monkey-patching the fs means that no `EMFILE` errors can possibly occur anywhere in your application, because everything is using the same core `fs` module, which is patched. However, it can also obviously cause undesirable side-effects, especially if the module is loaded multiple times. Implementing a separate-but-identical patched `fs` module is more surgical (and doesn't run the risk of patching multiple times), but also imposes the challenge of keeping in sync with the core module. The current approach loads the `fs` module, and then creates a lookalike object that has all the same methods, except a few that are patched. It is safe to use in all versions of Node from 0.8 through 7.0. ### v4 * Do not monkey-patch the fs module. This module may now be used as a drop-in dep, and users can opt into monkey-patching the fs builtin if their app requires it. ### v3 * Monkey-patch fs, because the eval approach no longer works on recent node. * fixed possible type-error throw if rename fails on windows * verify that we *never* get EMFILE errors * Ignore ENOSYS from chmod/chown * clarify that graceful-fs must be used as a drop-in ### v2.1.0 * Use eval rather than monkey-patching fs. * readdir: Always sort the results * win32: requeue a file if error has an OK status ### v2.0 * A return to monkey patching * wrap process.cwd ### v1.1 * wrap readFile * Wrap fs.writeFile. * readdir protection * Don't clobber the fs builtin * Handle fs.read EAGAIN errors by trying again * Expose the curOpen counter * No-op lchown/lchmod if not implemented * fs.rename patch only for win32 * Patch fs.rename to handle AV software on Windows * Close #4 Chown should not fail on einval or eperm if non-root * Fix isaacs/fstream#1 Only wrap fs one time * Fix #3 Start at 1024 max files, then back off on EMFILE * lutimes that doens't blow up on Linux * A full on-rewrite using a queue instead of just swallowing the EMFILE error * Wrap Read/Write streams as well ### 1.0 * Update engines for node 0.6 * Be lstat-graceful on Windows * first # minipass A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough) [It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers. Supports pipe()ing (including multi-pipe() and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea. There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory. There is also no `unpipe()` method. Once you start piping, there is no stopping it! If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called. 
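As a minimal sketch of the behavior described above (written against the minipass constructor shown later in the USAGE section, and not part of the original README), writes are buffered until a consumer shows up, and `objectMode` passes values through untouched:

```js
const Minipass = require('minipass')

// objectMode: whatever is written is emitted exactly as it was written
const mp = new Minipass({ objectMode: true })
mp.write({ hello: 'world' })            // buffered: no consumer yet, so nothing is lost
mp.on('data', obj => console.log(obj))  // attaching a listener resumes the stream,
                                        // so { hello: 'world' } is emitted here
```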
`objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set.

This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. Once you're done transforming the data however you want, call `super.write()` with the transform output.

For some examples of streams that extend Minipass in various ways, check out:

- [minizlib](http://npm.im/minizlib)
- [fs-minipass](http://npm.im/fs-minipass)
- [tar](http://npm.im/tar)
- [minipass-collect](http://npm.im/minipass-collect)
- [minipass-flush](http://npm.im/minipass-flush)
- [minipass-pipeline](http://npm.im/minipass-pipeline)
- [tap](http://npm.im/tap)
- [tap-parser](http://npm.im/tap-parser)
- [treport](http://npm.im/treport)

## Differences from Node.js Streams

There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams. Please read these caveats if you are familiar with node-core streams and intend to use Minipass streams in your programs.

### Timing

Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer. Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them.

This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately.

However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time.

This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms.

### No High/Low Water Marks

Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value.

Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns). If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it.

### Hazards of Buffering (or: Why Minipass Is So Fast)

Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller. This minimizes buffering.
Consider this case: ```js const {PassThrough} = require('stream') const p1 = new PassThrough({ highWaterMark: 1024 }) const p2 = new PassThrough({ highWaterMark: 1024 }) const p3 = new PassThrough({ highWaterMark: 1024 }) const p4 = new PassThrough({ highWaterMark: 1024 }) p1.pipe(p2).pipe(p3).pipe(p4) p4.on('data', () => console.log('made it through')) // this returns false and buffers, then writes to p2 on next tick (1) // p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2) // p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3) // p4 returns false and buffers, pausing p3, then emits 'data' and 'drain' // on next tick (4) // p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and // 'drain' on next tick (5) // p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6) // p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next // tick (7) p1.write(Buffer.alloc(2048)) // returns false ``` Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through! Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket. Consider the Minipass case: ```js const m1 = new Minipass() const m2 = new Minipass() const m3 = new Minipass() const m4 = new Minipass() m1.pipe(m2).pipe(m3).pipe(m4) m4.on('data', () => console.log('made it through')) // m1 is flowing, so it writes the data to m2 immediately // m2 is flowing, so it writes the data to m3 immediately // m3 is flowing, so it writes the data to m4 immediately // m4 is flowing, so it fires the 'data' event immediately, returns true // m4's write returned true, so m3 is still flowing, returns true // m3's write returned true, so m2 is still flowing, returns true // m2's write returned true, so m1 is still flowing, returns true // No event deferrals or buffering along the way! m1.write(Buffer.alloc(2048)) // returns true ``` It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing. As always, the faster implementation is the one that does less stuff and waits less time to do it. ### Immediately emit `end` for empty streams (when not paused) If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately. If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event. ### Emit `end` When Asked One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted. Ie, if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. 
(You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.)

To prevent handlers that would not expect multiple `end` events from being called multiple times, all listeners are removed from the `'end'` event whenever it is emitted.

### Impact of "immediate flow" on Tee-streams

A "tee stream" is a stream piping to multiple destinations:

```js
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
tee.write('foo') // goes to both destinations
```

Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer.

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone
src.pipe(dest2) // gets nothing!
```

The solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead.

```js
// Safe example: tee to both places
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
src.pipe(tee) // tee gets 'foo', pipes to both locations
```

The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second:

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.on('data', handler1) // receives 'foo' right away
src.on('data', handler2) // nothing to see here!
```

A dedicated tee-stream can be used in this case as well:

```js
// Safe example: tee to both data handlers
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.on('data', handler1)
tee.on('data', handler2)
src.pipe(tee)
```

## USAGE

It's a stream! Use it like a stream and it'll most likely do what you want.

```js
const Minipass = require('minipass')
const mp = new Minipass(options) // optional: { encoding, objectMode }
mp.write('foo')
mp.pipe(someOtherStream)
mp.end('bar')
```

### OPTIONS

* `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`.
* `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value.

### API

Implements the user-facing portions of Node.js's `Readable` and `Writable` streams.

### Methods

* `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or true if it's still in "flowing" mode.
* `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed.
* `setEncoding(encoding)` - Set the encoding for data coming out of the stream. This can only be done once.
* `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed.
* `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted.
* `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations.
* `on(ev, fn)`, `emit(ev, data)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".)
* `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`.
* `collect()` - Return a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data.
* `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data.
* `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying.
* `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered.

### Properties

* `bufferLength` Read-only. Total number of bytes buffered, or in the case of objectMode, the total number of objects.
* `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.)
* `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted.
* `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediately re-emit it if it has already been emitted.
* `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` is called.
* `readable` Whether the stream is readable. Default `true`.
* `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.)
* `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.)
* `destroyed` A getter that indicates whether the stream was destroyed.
* `paused` True if the stream has been explicitly paused, otherwise false.
* `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`.

### Events

* `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream.
* `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted.
* `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`.
* `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`.
* `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise.
* `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream.
* `readable` Emitted when data is buffered and ready to be read by a consumer.
* `resume` Emitted when stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.)

### Static Methods

* `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and false otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.)

## EXAMPLES

Here are some examples of things you can do with Minipass streams.

### simple "are you done yet" promise

```js
mp.promise().then(() => {
  // stream is finished
}, er => {
  // stream emitted an error
})
```

### collecting

```js
mp.collect().then(all => {
  // all is an array of all the data emitted
  // encoding is supported in this case, so
  // the result will be a collection of strings if
  // an encoding is specified, or buffers/objects if not.
  //
  // In an async function, you may do
  // const data = await stream.collect()
})
```

### collecting into a single blob

This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way:

```js
mp.concat().then(onebigchunk => {
  // onebigchunk is a string if the stream
  // had an encoding set, or a buffer otherwise.
})
```

### iteration

You can iterate over streams synchronously or asynchronously in platforms that support it.

Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration.

To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option.

```js
const mp = new Minipass({ objectMode: true })
mp.write('a')
mp.write('b')
for (let letter of mp) {
  console.log(letter) // a, b
}
mp.write('c')
mp.write('d')
for (let letter of mp) {
  console.log(letter) // c, d
}
mp.write('e')
mp.end()
for (let letter of mp) {
  console.log(letter) // e
}
for (let letter of mp) {
  console.log(letter) // nothing
}
```

Asynchronous iteration will continue until the end event is reached, consuming all of the data.
```js
const mp = new Minipass({ encoding: 'utf8' })

// some source of some data
let i = 5
const inter = setInterval(() => {
  if (i --> 0)
    mp.write(Buffer.from('foo\n', 'utf8'))
  else {
    mp.end()
    clearInterval(inter)
  }
}, 100)

// consume the data with asynchronous iteration
async function consume () {
  for await (let chunk of mp) {
    console.log(chunk)
  }
  return 'ok'
}

consume().then(res => console.log(res))
// logs `foo\n` 5 times, and then `ok`
```

### subclass that `console.log()`s everything written into it

```js
class Logger extends Minipass {
  write (chunk, encoding, callback) {
    console.log('WRITE', chunk, encoding)
    return super.write(chunk, encoding, callback)
  }
  end (chunk, encoding, callback) {
    console.log('END', chunk, encoding)
    return super.end(chunk, encoding, callback)
  }
}

someSource.pipe(new Logger()).pipe(someDest)
```

### same thing, but using an inline anonymous class

```js
// js classes are fun
someSource
  .pipe(new (class extends Minipass {
    emit (ev, ...data) {
      // let's also log events, because debugging some weird thing
      console.log('EMIT', ev)
      return super.emit(ev, ...data)
    }
    write (chunk, encoding, callback) {
      console.log('WRITE', chunk, encoding)
      return super.write(chunk, encoding, callback)
    }
    end (chunk, encoding, callback) {
      console.log('END', chunk, encoding)
      return super.end(chunk, encoding, callback)
    }
  }))
  .pipe(someDest)
```

### subclass that defers 'end' for some reason

```js
class SlowEnd extends Minipass {
  emit (ev, ...args) {
    if (ev === 'end') {
      console.log('going to end, hold on a sec')
      setTimeout(() => {
        console.log('ok, ready to end now')
        super.emit('end', ...args)
      }, 100)
    } else {
      return super.emit(ev, ...args)
    }
  }
}
```

### transform that creates newline-delimited JSON

```js
class NDJSONEncode extends Minipass {
  write (obj, cb) {
    try {
      // JSON.stringify can throw, emit an error on that
      return super.write(JSON.stringify(obj) + '\n', 'utf8', cb)
    } catch (er) {
      this.emit('error', er)
    }
  }
  end (obj, cb) {
    if (typeof obj === 'function') {
      cb = obj
      obj = undefined
    }
    if (obj !== undefined) {
      this.write(obj)
    }
    return super.end(cb)
  }
}
```

### transform that parses newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor (options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write (chunk, encoding, cb) {
    if (typeof chunk === 'string' &&
        typeof encoding === 'string' &&
        encoding !== 'utf8') {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      let parsed
      try {
        // JSON.parse can throw, emit an error on that
        parsed = JSON.parse(jsonData[i])
      } catch (er) {
        this.emit('error', er)
        continue
      }
      super.write(parsed)
    }
    if (cb) cb()
  }
}
```

# require-main-filename

[![Build Status](https://travis-ci.org/yargs/require-main-filename.png)](https://travis-ci.org/yargs/require-main-filename) [![Coverage Status](https://coveralls.io/repos/yargs/require-main-filename/badge.svg?branch=master)](https://coveralls.io/r/yargs/require-main-filename?branch=master) [![NPM version](https://img.shields.io/npm/v/require-main-filename.svg)](https://www.npmjs.com/package/require-main-filename)

`require.main.filename` is great for figuring out the entry point for the current application. This can be combined with a module like [pkg-conf](https://www.npmjs.com/package/pkg-conf) to, _as if by magic_, load top-level configuration.
Unfortunately, `require.main.filename` sometimes fails when an application is executed with an alternative process manager, e.g., [iisnode](https://github.com/tjanczuk/iisnode).

`require-main-filename` is a shim that addresses this problem.

## Usage

```js
var main = require('require-main-filename')()
// use main as an alternative to require.main.filename.
```

## License

ISC

# read

For reading user input from stdin. Similar to the `readline` builtin's `question()` method, but with a few more features.

## USAGE

```javascript
var read = require("read")
read(options, callback)
```

The callback gets called with either the user input, or the default specified, or an error, as `callback(error, result, isDefault)` node style.

## OPTIONS

Every option is optional.

* `prompt` What to write to stdout before reading input.
* `silent` Don't echo the output as the user types it.
* `replace` Replace silenced characters with the supplied character value.
* `timeout` Number of ms to wait for user input before giving up.
* `default` The default value if the user enters nothing.
* `edit` Allow the user to edit the default value.
* `terminal` Treat the output as a TTY, whether it is or not.
* `input` Readable stream to get input data from. (default `process.stdin`)
* `output` Writable stream to write prompts to. (default: `process.stdout`)

If silent is true, and the input is a TTY, then read will set raw mode, and read character by character.

## COMPATIBILITY

This module works sort of with node 0.6. It does not work with node versions less than 0.6. It is best on node 0.8.

On node version 0.6, it will remove all listeners on the input stream's `data` and `keypress` events, because the readline module did not fully clean up after itself in that version of node, and did not make it possible to clean up after it in a way that has no potential for side effects.

Additionally, some of the readline options (like `terminal`) will not function in versions of node before 0.8, because they were not implemented in the builtin readline module.

## CONTRIBUTING

Patches welcome.

# URI.js

URI.js is an [RFC 3986](http://www.ietf.org/rfc/rfc3986.txt) compliant, scheme extendable URI parsing/validating/resolving library for all JavaScript environments (browsers, Node.js, etc). It is also compliant with the IRI ([RFC 3987](http://www.ietf.org/rfc/rfc3987.txt)), IDNA ([RFC 5890](http://www.ietf.org/rfc/rfc5890.txt)), IPv6 Address ([RFC 5952](http://www.ietf.org/rfc/rfc5952.txt)), IPv6 Zone Identifier ([RFC 6874](http://www.ietf.org/rfc/rfc6874.txt)) specifications.

URI.js has an extensive test suite, and works in all (Node.js, web) environments. It weighs in at 6.4kb (gzipped, 17kb deflated).
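As a quick, runnable sketch of typical Node.js usage (installation and loading are covered in the Usage section further below; the parse and serialize calls mirror the API examples that follow):

```js
// Minimal sketch: parse and re-serialize a URI with uri-js.
const URI = require("uri-js");

const parts = URI.parse("uri://user:pass@example.com:123/one/two.three?q1=a1&q2=a2#body");
console.log(parts.host); // "example.com"
console.log(parts.port); // 123

// Serialize a components object back into a URI string.
console.log(URI.serialize({ scheme: "http", host: "example.com", fragment: "footer" }));
// "http://example.com/#footer"
```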
## API ### Parsing URI.parse("uri://user:pass@example.com:123/one/two.three?q1=a1&q2=a2#body"); //returns: //{ // scheme : "uri", // userinfo : "user:pass", // host : "example.com", // port : 123, // path : "/one/two.three", // query : "q1=a1&q2=a2", // fragment : "body" //} ### Serializing URI.serialize({scheme : "http", host : "example.com", fragment : "footer"}) === "http://example.com/#footer" ### Resolving URI.resolve("uri://a/b/c/d?q", "../../g") === "uri://a/g" ### Normalizing URI.normalize("HTTP://ABC.com:80/%7Esmith/home.html") === "http://abc.com/~smith/home.html" ### Comparison URI.equal("example://a/b/c/%7Bfoo%7D", "eXAMPLE://a/./b/../b/%63/%7bfoo%7d") === true ### IP Support //IPv4 normalization URI.normalize("//192.068.001.000") === "//192.68.1.0" //IPv6 normalization URI.normalize("//[2001:0:0DB8::0:0001]") === "//[2001:0:db8::1]" //IPv6 zone identifier support URI.parse("//[2001:db8::7%25en1]"); //returns: //{ // host : "2001:db8::7%en1" //} ### IRI Support //convert IRI to URI URI.serialize(URI.parse("http://examplé.org/rosé")) === "http://xn--exampl-gva.org/ros%C3%A9" //convert URI to IRI URI.serialize(URI.parse("http://xn--exampl-gva.org/ros%C3%A9"), {iri:true}) === "http://examplé.org/rosé" ### Options All of the above functions can accept an additional options argument that is an object that can contain one or more of the following properties: * `scheme` (string) Indicates the scheme that the URI should be treated as, overriding the URI's normal scheme parsing behavior. * `reference` (string) If set to `"suffix"`, it indicates that the URI is in the suffix format, and the validator will use the option's `scheme` property to determine the URI's scheme. * `tolerant` (boolean, false) If set to `true`, the parser will relax URI resolving rules. * `absolutePath` (boolean, false) If set to `true`, the serializer will not resolve a relative `path` component. * `iri` (boolean, false) If set to `true`, the serializer will unescape non-ASCII characters as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `unicodeSupport` (boolean, false) If set to `true`, the parser will unescape non-ASCII characters in the parsed output as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `domainHost` (boolean, false) If set to `true`, the library will treat the `host` component as a domain name, and convert IDNs (International Domain Names) as per [RFC 5891](http://www.ietf.org/rfc/rfc5891.txt). ## Scheme Extendable URI.js supports inserting custom [scheme](http://en.wikipedia.org/wiki/URI_scheme) dependent processing rules. 
Currently, URI.js has built in support for the following schemes:

* http \[[RFC 2616](http://www.ietf.org/rfc/rfc2616.txt)\]
* https \[[RFC 2818](http://www.ietf.org/rfc/rfc2818.txt)\]
* ws \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\]
* wss \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\]
* mailto \[[RFC 6068](http://www.ietf.org/rfc/rfc6068.txt)\]
* urn \[[RFC 2141](http://www.ietf.org/rfc/rfc2141.txt)\]
* urn:uuid \[[RFC 4122](http://www.ietf.org/rfc/rfc4122.txt)\]

### HTTP/HTTPS Support

    URI.equal("HTTP://ABC.COM:80", "http://abc.com/") === true
    URI.equal("https://abc.com", "HTTPS://ABC.COM:443/") === true

### WS/WSS Support

    URI.parse("wss://example.com/foo?bar=baz");
    //returns:
    //{
    //  scheme : "wss",
    //  host: "example.com",
    //  resourceName: "/foo?bar=baz",
    //  secure: true,
    //}

    URI.equal("WS://ABC.COM:80/chat#one", "ws://abc.com/chat") === true

### Mailto Support

    URI.parse("mailto:alpha@example.com,bravo@example.com?subject=SUBSCRIBE&body=Sign%20me%20up!");
    //returns:
    //{
    //  scheme : "mailto",
    //  to : ["alpha@example.com", "bravo@example.com"],
    //  subject : "SUBSCRIBE",
    //  body : "Sign me up!"
    //}

    URI.serialize({
      scheme : "mailto",
      to : ["alpha@example.com"],
      subject : "REMOVE",
      body : "Please remove me",
      headers : {
        cc : "charlie@example.com"
      }
    }) === "mailto:alpha@example.com?cc=charlie@example.com&subject=REMOVE&body=Please%20remove%20me"

### URN Support

    URI.parse("urn:example:foo");
    //returns:
    //{
    //  scheme : "urn",
    //  nid : "example",
    //  nss : "foo",
    //}

#### URN UUID Support

    URI.parse("urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6");
    //returns:
    //{
    //  scheme : "urn",
    //  nid : "uuid",
    //  uuid : "f81d4fae-7dec-11d0-a765-00a0c91e6bf6",
    //}

## Usage

To load in a browser, use the following tag:

    <script type="text/javascript" src="uri-js/dist/es5/uri.all.min.js"></script>

To load in a CommonJS/Module environment, first install with npm/yarn by running on the command line:

    npm install uri-js
    # OR
    yarn add uri-js

Then, in your code, load it using:

    const URI = require("uri-js");

If you are writing your code in ES6+ (ESNEXT) or TypeScript, you would load it using:

    import * as URI from "uri-js";

Or you can load just what you need using named exports:

    import { parse, serialize, resolve, resolveComponents, normalize, equal, removeDotSegments, pctEncChar, pctDecChars, escapeComponent, unescapeComponent } from "uri-js";

## Breaking changes

### Breaking changes from 3.x

URN parsing has been completely changed to better align with the specification. Scheme is now always `urn`, but has two new properties: `nid` which contains the Namespace Identifier, and `nss` which contains the Namespace Specific String. The `nss` property will be removed by higher order scheme handlers, such as the UUID URN scheme handler.

The UUID of a URN can now be found in the `uuid` property.

### Breaking changes from 2.x

URI validation has been removed as it was slow, exposed a vulnerability, and was generally not useful.

### Breaking changes from 1.x

The `errors` array on parsed components is now an `error` string.
# cross-spawn [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Build status][appveyor-image]][appveyor-url] [![Dependency status][david-dm-image]][david-dm-url] [![Dev Dependency status][david-dm-dev-image]][david-dm-dev-url] [npm-url]:https://npmjs.org/package/cross-spawn [downloads-image]:http://img.shields.io/npm/dm/cross-spawn.svg [npm-image]:http://img.shields.io/npm/v/cross-spawn.svg [travis-url]:https://travis-ci.org/IndigoUnited/node-cross-spawn [travis-image]:http://img.shields.io/travis/IndigoUnited/node-cross-spawn/master.svg [appveyor-url]:https://ci.appveyor.com/project/satazor/node-cross-spawn [appveyor-image]:https://img.shields.io/appveyor/ci/satazor/node-cross-spawn/master.svg [david-dm-url]:https://david-dm.org/IndigoUnited/node-cross-spawn [david-dm-image]:https://img.shields.io/david/IndigoUnited/node-cross-spawn.svg [david-dm-dev-url]:https://david-dm.org/IndigoUnited/node-cross-spawn#info=devDependencies [david-dm-dev-image]:https://img.shields.io/david/dev/IndigoUnited/node-cross-spawn.svg A cross platform solution to node's spawn and spawnSync. ## Installation `$ npm install cross-spawn` If you are using `spawnSync` on node 0.10 or older, you will also need to install `spawn-sync`: `$ npm install spawn-sync` ## Why Node has issues when using spawn on Windows: - It ignores [PATHEXT](https://github.com/joyent/node/issues/2318) - It does not support [shebangs](http://pt.wikipedia.org/wiki/Shebang) - No `options.shell` support on node < v6 - It does not allow you to run `del` or `dir` All these issues are handled correctly by `cross-spawn`. There are some known modules, such as [win-spawn](https://github.com/ForbesLindesay/win-spawn), that try to solve this but they are either broken or provide faulty escaping of shell arguments. ## Usage Exactly the same way as node's [`spawn`](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options) or [`spawnSync`](https://nodejs.org/api/child_process.html#child_process_child_process_spawnsync_command_args_options), so it's a drop in replacement. ```js var spawn = require('cross-spawn'); // Spawn NPM asynchronously var child = spawn('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' }); // Spawn NPM synchronously var results = spawn.sync('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' }); ``` ## Caveats #### `options.shell` as an alternative to `cross-spawn` Starting from node v6, `spawn` has a `shell` option that allows you run commands from within a shell. This new option solves most of the problems that `cross-spawn` attempts to solve, but: - It's not supported in node < v6 - It has no support for shebangs on Windows - You must manually escape the command and arguments which is very error prone, specially when passing user input If you are using the `shell` option to spawn a command in a cross platform way, consider using `cross-spawn` instead. You have been warned. #### Shebangs While `cross-spawn` handles shebangs on Windows, its support is limited: e.g.: it doesn't handle arguments after the path, e.g.: `#!/bin/bash -e`. Remember to always test your code on Windows! ## Tests `$ npm test` ## License Released under the [MIT License](http://www.opensource.org/licenses/mit-license.php). 
# `cli-columns` [![NPM version][npm-img]][npm-url] [![Downloads][downloads-img]][npm-url] [![Build Status][travis-img]][travis-url] [![Coverage Status][coveralls-img]][coveralls-url] [![Chat][gitter-img]][gitter-url] [![Tip][amazon-img]][amazon-url] Columnated lists for the CLI. Unicode and ANSI safe. ## Install $ npm install --save cli-columns ## Usage ```js const chalk = require('chalk'); const columns = require('.'); const values = [ 'blue' + chalk.bgBlue('berry'), '笔菠萝' + chalk.yellow('苹果笔'), chalk.red('apple'), 'pomegranate', 'durian', chalk.green('star fruit'), 'パイナップル', 'apricot', 'banana', 'pineapple', chalk.bgRed.yellow('orange') ]; console.log(columns(values)); ``` <img alt="screenshot" src="https://user-images.githubusercontent.com/155164/28672800-bd415c86-72ae-11e7-855c-6f6aa108921b.png"> ## API ### columns(values [, options]): String - `values` `{Array<String>}` Array of strings to display. - `options` `{Object}` - `character` `{String}` (default: `' '`) Padding character. - `newline` `{String}` (default: `'\n'`) Newline character. - `padding` `{Number}` (default: `2`) Space between columns. - `sort` `{Boolean}` (default: `true`) Whether to sort results. - `width` `{Number}` (default: `process.stdout.columns`) Max width of list. Sorts and formats a list of values into columns suitable to display in a given width. ## Contribute Standards for this project, including tests, code coverage, and semantics are enforced with a build tool. Pull requests must include passing tests with 100% code coverage and no linting errors. ### Test $ npm test ---- © Shannon Moeller <me@shannonmoeller.com> (shannonmoeller.com) Licensed under [MIT](http://shannonmoeller.com/mit.txt) [amazon-img]: https://img.shields.io/badge/amazon-tip_jar-yellow.svg?style=flat-square [amazon-url]: https://www.amazon.com/gp/registry/wishlist/1VQM9ID04YPC5?sort=universal-price [coveralls-img]: http://img.shields.io/coveralls/shannonmoeller/cli-columns/master.svg?style=flat-square [coveralls-url]: https://coveralls.io/r/shannonmoeller/cli-columns [downloads-img]: http://img.shields.io/npm/dm/cli-columns.svg?style=flat-square [gitter-img]: http://img.shields.io/badge/gitter-join_chat-1dce73.svg?style=flat-square [gitter-url]: https://gitter.im/shannonmoeller/shannonmoeller [npm-img]: http://img.shields.io/npm/v/cli-columns.svg?style=flat-square [npm-url]: https://npmjs.org/package/cli-columns [travis-img]: http://img.shields.io/travis/shannonmoeller/cli-columns.svg?style=flat-square [travis-url]: https://travis-ci.org/shannonmoeller/cli-columns <a href="http://promisesaplus.com/"> <img src="http://promisesaplus.com/assets/logo-small.png" alt="Promises/A+ logo" title="Promises/A+ 1.1 compliant" align="right" /> </a> [![Build Status](https://travis-ci.org/petkaantonov/bluebird.svg?branch=master)](https://travis-ci.org/petkaantonov/bluebird) [![coverage-98%](https://img.shields.io/badge/coverage-98%25-brightgreen.svg?style=flat)](http://petkaantonov.github.io/bluebird/coverage/debug/index.html) **Got a question?** Join us on [stackoverflow](http://stackoverflow.com/questions/tagged/bluebird), the [mailing list](https://groups.google.com/forum/#!forum/bluebird-js) or chat on [IRC](https://webchat.freenode.net/?channels=#promises) # Introduction Bluebird is a fully featured promise library with focus on innovative features and performance See the [**bluebird website**](http://bluebirdjs.com/docs/getting-started.html) for further documentation, references and instructions. 
See the [**API reference**](http://bluebirdjs.com/docs/api-reference.html) here. For bluebird 2.x documentation and files, see the [2.x tree](https://github.com/petkaantonov/bluebird/tree/2.x). ### Note Promises in Node.js 10 are significantly faster than before. Bluebird still includes a lot of features like cancellation, iteration methods and warnings that native promises don't. If you are using Bluebird for performance rather than for those - please consider giving native promises a shot and running the benchmarks yourself. # Questions and issues The [github issue tracker](https://github.com/petkaantonov/bluebird/issues) is **_only_** for bug reports and feature requests. Anything else, such as questions for help in using the library, should be posted in [StackOverflow](http://stackoverflow.com/questions/tagged/bluebird) under tags `promise` and `bluebird`. ## Thanks Thanks to BrowserStack for providing us with a free account which lets us support old browsers like IE8. # License The MIT License (MIT) Copyright (c) 2013-2017 Petka Antonov Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # get-caller-file [![Build Status](https://travis-ci.org/stefanpenner/get-caller-file.svg?branch=master)](https://travis-ci.org/stefanpenner/get-caller-file) [![Build status](https://ci.appveyor.com/api/projects/status/ol2q94g1932cy14a/branch/master?svg=true)](https://ci.appveyor.com/project/embercli/get-caller-file/branch/master) This is a utility, which allows a function to figure out from which file it was invoked. It does so by inspecting v8's stack trace at the time it is invoked. Inspired by http://stackoverflow.com/questions/13227489 *note: this relies on Node/V8 specific APIs, as such other runtimes may not work* ## Installation ```bash yarn add get-caller-file ``` ## Usage Given: ```js // ./foo.js const getCallerFile = require('get-caller-file'); module.exports = function() { return getCallerFile(); // figures out who called it }; ``` ```js // index.js const foo = require('./foo'); foo() // => /full/path/to/this/file/index.js ``` ## Options: * `getCallerFile(position = 2)`: where position is stack frame whos fileName we want. # npm-profile Provides functions for fetching and updating an npmjs.com profile. ```js const profile = require('npm-profile') const result = await profile.get(registry, {token}) //... 
```

The API that this implements is documented here:

* [authentication](https://github.com/npm/registry/blob/master/docs/user/authentication.md)
* [profile editing](https://github.com/npm/registry/blob/master/docs/user/profile.md) (and two-factor authentication)

## Table of Contents

* [API](#api)
  * Login and Account Creation
    * [`adduser()`](#adduser)
    * [`login()`](#login)
    * [`adduserWeb()`](#adduser-web)
    * [`loginWeb()`](#login-web)
    * [`adduserCouch()`](#adduser-couch)
    * [`loginCouch()`](#login-couch)
  * Profile Data Management
    * [`get()`](#get)
    * [`set()`](#set)
  * Token Management
    * [`listTokens()`](#list-tokens)
    * [`removeToken()`](#remove-token)
    * [`createToken()`](#create-token)

## API

### <a name="adduser"></a> `> profile.adduser(opener, prompter, [opts]) → Promise`

Tries to create a new user using the web-based login flow; if that fails, it falls back to using the legacy CouchDB APIs.

* `opener` Function (url) → Promise, returns a promise that resolves after a browser has been opened for the user at `url`.
* `prompter` Function (creds) → Promise, returns a promise that resolves to an object with `username`, `email` and `password` properties.
* [`opts`](#opts) Object (optional) plus extra keys:
  * `creds` Object, passed through to prompter, common values are:
    * `username` String, default value for username
    * `email` String, default value for email

#### **Promise Value**

An object with the following properties:

* `token` String, to be used to authenticate further API calls
* `username` String, the username the user authenticated as

#### **Promise Rejection**

An error object indicating what went wrong.

The `headers` property will contain the HTTP headers of the response.

If the action was denied because it came from an IP address that this action on this account isn't allowed from then the `code` will be set to `EAUTHIP`.

Otherwise the code will be `'E'` followed by the HTTP response code, for example a Forbidden response would be `E403`.

### <a name="login"></a> `> profile.login(opener, prompter, [opts]) → Promise`

Tries to log in using the new web-based login flow; if that fails, it falls back to using the legacy CouchDB APIs.

* `opener` Function (url) → Promise, returns a promise that resolves after a browser has been opened for the user at `url`.
* `prompter` Function (creds) → Promise, returns a promise that resolves to an object with `username`, and `password` properties.
* [`opts`](#opts) Object (optional) plus extra keys:
  * `creds` Object, passed through to prompter, common values are:
    * `name` String, default value for username

#### **Promise Value**

An object with the following properties:

* `token` String, to be used to authenticate further API calls
* `username` String, the username the user authenticated as

#### **Promise Rejection**

An error object indicating what went wrong.

The `headers` property will contain the HTTP headers of the response.

If the action was denied because an OTP is required then `code` will be set to `EOTP`. This error code can only come from a legacy CouchDB login and so this should be retried with loginCouch.

If the action was denied because it came from an IP address that this action on this account isn't allowed from then the `code` will be set to `EAUTHIP`.

Otherwise the code will be `'E'` followed by the HTTP response code, for example a Forbidden response would be `E403`.

### <a name="adduser-web"></a> `> profile.adduserWeb(opener, [opts]) → Promise`

Tries to create a new user using the web-based login flow; if that fails, it falls back to using the legacy CouchDB APIs.
* `opener` Function (url) → Promise, returns a promise that resolves after a browser has been opened for the user at `url`.
* [`opts`](#opts) Object

#### **Promise Value**

An object with the following properties:

* `token` String, to be used to authenticate further API calls
* `username` String, the username the user authenticated as

#### **Promise Rejection**

An error object indicating what went wrong.

The `headers` property will contain the HTTP headers of the response.

If the registry does not support web-login then an error will be thrown with its `code` property set to `ENYI`. You should retry with `adduserCouch`. If you use `adduser` then this fallback will be done automatically.

If the action was denied because it came from an IP address that this action on this account isn't allowed from then the `code` will be set to `EAUTHIP`.

Otherwise the code will be `'E'` followed by the HTTP response code, for example a Forbidden response would be `E403`.

### <a name="login-web"></a> `> profile.loginWeb(opener, [opts]) → Promise`

Tries to log in using the new web-based login flow; if that fails, it falls back to using the legacy CouchDB APIs.

* `opener` Function (url) → Promise, returns a promise that resolves after a browser has been opened for the user at `url`.
* [`opts`](#opts) Object (optional)

#### **Promise Value**

An object with the following properties:

* `token` String, to be used to authenticate further API calls
* `username` String, the username the user authenticated as

#### **Promise Rejection**

An error object indicating what went wrong.

The `headers` property will contain the HTTP headers of the response.

If the registry does not support web-login then an error will be thrown with its `code` property set to `ENYI`. You should retry with `loginCouch`. If you use `login` then this fallback will be done automatically.

If the action was denied because it came from an IP address that this action on this account isn't allowed from then the `code` will be set to `EAUTHIP`.

Otherwise the code will be `'E'` followed by the HTTP response code, for example a Forbidden response would be `E403`.

### <a name="adduser-couch"></a> `> profile.adduserCouch(username, email, password, [opts]) → Promise`

```js
const {token} = await profile.adduserCouch(username, email, password, {registry})
// `token` can be passed in through `opts` for authentication.
```

Creates a new user on the server along with a fresh bearer token for future authentication as this user. This is what you see as an `authToken` in an `.npmrc`.

If the user already exists then the npm registry will return an error, but this is registry specific and not guaranteed.

* `username` String
* `email` String
* `password` String
* [`opts`](#opts) Object (optional)

#### **Promise Value**

An object with the following properties:

* `token` String, to be used to authenticate further API calls
* `username` String, the username the user authenticated as

#### **Promise Rejection**

An error object indicating what went wrong.

The `headers` property will contain the HTTP headers of the response.

If the action was denied because an OTP is required then `code` will be set to `EOTP`.

If the action was denied because it came from an IP address that this action on this account isn't allowed from then the `code` will be set to `EAUTHIP`.

Otherwise the code will be `'E'` followed by the HTTP response code, for example a Forbidden response would be `E403`.
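For illustration, a hedged sketch of branching on the rejection codes described above. The surrounding `username`, `email`, `password`, and `registry` values are assumed to exist in your own code, and this is not part of the original library documentation:

```js
try {
  const { token } = await profile.adduserCouch(username, email, password, { registry })
  console.log('account created, token:', token)
} catch (err) {
  if (err.code === 'EOTP') {
    // a one-time password is required; retry with an `otp` value added to opts
  } else if (err.code === 'EAUTHIP') {
    // this IP address is not allowed to perform the action on this account
  } else {
    // otherwise err.code is 'E' plus the HTTP status, e.g. E403 for Forbidden
    throw err
  }
}
```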
### <a name="login-couch"></a> `> profile.loginCouch(username, password, [opts]) → Promise`

```js
let token
try {
  ({token} = await profile.loginCouch(username, password, {registry}))
} catch (err) {
  if (err.code === 'EOTP') {
    const otp = await getOTPFromSomewhere()
    ;({token} = await profile.loginCouch(username, password, {registry, otp}))
  }
}
// `token` can now be passed in through `opts` for authentication.
```

Logs you into an existing user. Does not create the user if they do not already exist. Logging in means generating a new bearer token for use in future authentication. This is what you use as an `authToken` in an `.npmrc`.

* `username` String
* `password` String
* [`opts`](#opts) Object (optional)

#### **Promise Value**

An object with the following properties:

* `token` String, to be used to authenticate further API calls
* `username` String, the username the user authenticated as

#### **Promise Rejection**

An error object indicating what went wrong.

If the object has a `code` property set to `EOTP` then that indicates that this account must use two-factor authentication to login. Try again with a one-time password.

If the object has a `code` property set to `EAUTHIP` then that indicates that this account is only allowed to login from certain networks and this ip is not on one of those networks.

If the error was neither of these then the error object will have a `code` property set to the HTTP response code and a `headers` property with the HTTP headers in the response.

### <a name="get"></a> `> profile.get([opts]) → Promise`

```js
const {name, email} = await profile.get({token})
console.log(`${token} belongs to https://npm.im/~${name}, (mailto:${email})`)
```

Fetch profile information for the authenticated user.

* [`opts`](#opts) Object

#### **Promise Value**

An object that looks like this:

```js
// "*" indicates a field that may not always appear
{
  tfa: null |
       false |
       {"mode": "auth-only", pending: Boolean} |
       ["recovery", "codes"] |
       "otpauth://...",
  name: String,
  email: String,
  email_verified: Boolean,
  created: Date,
  updated: Date,
  cidr_whitelist: null | ["192.168.1.1/32", ...],
  fullname: String, // *
  homepage: String, // *
  freenode: String, // *
  twitter: String, // *
  github: String // *
}
```

#### **Promise Rejection**

An error object indicating what went wrong.

The `headers` property will contain the HTTP headers of the response.

If the action was denied because an OTP is required then `code` will be set to `EOTP`.

If the action was denied because it came from an IP address that this action on this account isn't allowed from then the `code` will be set to `EAUTHIP`.

Otherwise the code will be the HTTP response code.

### <a name="set"></a> `> profile.set(profileData, [opts]) → Promise`

```js
await profile.set({github: 'great-github-account-name'}, {token})
```

Update profile information for the authenticated user.

* `profileData` An object, like that returned from `profile.get`, but see below for caveats relating to `password`, `tfa` and `cidr_whitelist`.
* [`opts`](#opts) Object (optional)

#### **SETTING `password`**

This is used to change your password and is not visible (for obvious reasons) through the `get()` API. The value should be an object with `old` and `new` properties, where the former has the user's current password and the latter has the desired new password. For example:

```js
await profile.set({
  password: {
    old: 'abc123',
    new: 'my new (more secure) password'
  }
}, {token})
```

#### **SETTING `cidr_whitelist`**

The value for this is an Array.
Only valid CIDR ranges are allowed in it. Be very careful as it's possible to lock yourself out of your account with this. This is not currently exposed in `npm` itself.

```js
await profile.set({cidr_whitelist: ['8.8.8.8/32']}, {token})
// ↑ only one of google's dns servers can now access this account.
```

#### **SETTING `tfa`**

Enabling two-factor authentication is a multi-step process.

1. Call `profile.get` and check the status of `tfa`. If `pending` is true then you'll need to disable it with `profile.set({tfa: {password, mode: 'disable'}}, …)`.
2. `profile.set({tfa: {password, mode}}, {registry, token})`
   * Note that the user's `password` is required here in the `tfa` object, regardless of how you're authenticating.
   * `mode` is either `auth-only` which requires an `otp` when calling `login` or `createToken`, or `mode` is `auth-and-writes` and an `otp` will be required on login, publishing or when granting others access to your modules.
   * Be aware that this set call may require otp as part of the auth object. If otp is needed it will be indicated through a rejection in the usual way.
3. If tfa was already enabled then you're just switching modes and a successful response means that you're done. If the tfa property is empty and tfa _wasn't_ enabled then it means they were in a pending state.
4. The response will have a `tfa` property set to an `otpauth` URL, as [used by Google Authenticator](https://github.com/google/google-authenticator/wiki/Key-Uri-Format). You will need to show this to the user for them to add to their authenticator application. This is typically done as a QR code, but you can also show the value of the `secret` key in the `otpauth` query string and they can type or copy-paste that in.
5. To complete setting up two-factor auth you need to make a second call to `profile.set` with `tfa` set to an array of TWO codes from the user's authenticator, e.g. `profile.set({tfa: [otp1, otp2]}, {registry, token})`
6. On success you'll get a result object with a `tfa` property that has an array of one-time-use recovery codes. These are used to authenticate later if the second factor is lost and generally should be printed and put somewhere safe.

Disabling two-factor authentication is more straightforward: set the `tfa` attribute to an object with a `password` property and a `mode` of `disable`.

```js
await profile.set({tfa: {password, mode: 'disable'}}, {token})
```

#### **Promise Value**

An object reflecting the changes you made, see description for `profile.get`.

#### **Promise Rejection**

An error object indicating what went wrong.

The `headers` property will contain the HTTP headers of the response.

If the action was denied because an OTP is required then `code` will be set to `EOTP`.

If the action was denied because it came from an IP address that this action on this account isn't allowed from then the `code` will be set to `EAUTHIP`.

Otherwise the code will be the HTTP response code.

### <a name="list-tokens"></a> `> profile.listTokens([opts]) → Promise`

```js
const tokens = await profile.listTokens({registry, token})
console.log(`Number of tokens in your accounts: ${tokens.length}`)
```

Fetch a list of all of the authentication tokens the authenticated user has.

* [`opts`](#opts) Object (optional)

#### **Promise Value**

An array of token objects. Each token object has the following properties:

* key — A sha512 that can be used to remove this token.
* token — The first six characters of the token UUID. This should be used by the user to identify which token this is.
* created — The date and time the token was created
* readonly — If true, this token can only be used to download private modules. Critically, it CAN NOT be used to publish.
* cidr_whitelist — An array of CIDR ranges that this token is allowed to be used from.

#### **Promise Rejection**

An error object indicating what went wrong.

The `headers` property will contain the HTTP headers of the response.

If the action was denied because an OTP is required then `code` will be set to `EOTP`.

If the action was denied because it came from an IP address that this action on this account isn't allowed from then the `code` will be set to `EAUTHIP`.

Otherwise the code will be the HTTP response code.

### <a name="remove-token"></a> `> profile.removeToken(token|key, opts) → Promise`

```js
await profile.removeToken(key, {token})
// token is gone!
```

Remove a specific authentication token.

* `token|key` String, either a complete authentication token or the key returned by `profile.listTokens`.
* [`opts`](#opts) Object (optional)

#### **Promise Value**

No value.

#### **Promise Rejection**

An error object indicating what went wrong.

The `headers` property will contain the HTTP headers of the response.

If the action was denied because an OTP is required then `code` will be set to `EOTP`.

If the action was denied because it came from an IP address that this action on this account isn't allowed from then the `code` will be set to `EAUTHIP`.

Otherwise the code will be the HTTP response code.

### <a name="create-token"></a> `> profile.createToken(password, readonly, cidr_whitelist, [opts]) → Promise`

```js
const newToken = await profile.createToken(
  password, readonly, cidr_whitelist, {token, otp}
)
// do something with the newToken
```

Create a new authentication token, possibly with restrictions.

* `password` String
* `readonly` Boolean
* `cidr_whitelist` Array
* [`opts`](#opts) Object (optional)

#### **Promise Value**

The promise will resolve with an object very much like the ones returned by `profile.listTokens`. The only difference is that `token` is not truncated.

```js
{
  token: String,
  key: String, // sha512 hash of the token UUID
  cidr_whitelist: [String],
  created: Date,
  readonly: Boolean
}
```

#### **Promise Rejection**

An error object indicating what went wrong.

The `headers` property will contain the HTTP headers of the response.

If the action was denied because an OTP is required then `code` will be set to `EOTP`.

If the action was denied because it came from an IP address that this action on this account isn't allowed from then the `code` will be set to `EAUTHIP`.

Otherwise the code will be the HTTP response code.

### <a name="opts"></a> options objects

The various API functions accept an optional `opts` object as a final argument. This opts object can either be a regular Object, or a [`figgy-pudding`](https://npm.im/figgy-pudding) options object instance.

Unless otherwise noted, the options accepted are the same as the [`npm-registry-fetch` options](https://www.npmjs.com/package/npm-registry-fetch#fetch-opts).

Of particular note are `opts.registry`, and the auth-related options:

* `opts.token` - used for Bearer auth
* `opts.username` and `opts.password` - used for Basic auth
* `opts.otp` - the 2fa OTP token

## <a name="logging"></a> Logging

This module logs by emitting `log` events on the global `process` object.
These events look like this:

```js
process.emit('log', 'loglevel', 'feature', 'message part 1', 'part 2', 'part 3', 'etc')
```

`loglevel` can be one of: `error`, `warn`, `notice`, `http`, `timing`, `info`, `verbose`, and `silly`.

`feature` is any brief string that describes the component doing the logging.

The remaining arguments are evaluated like `console.log` and joined together with spaces.

A real world example of this is:

```js
process.emit('log', 'http', 'request', '→', conf.method || 'GET', conf.target)
```

To handle the log events, you would do something like this:

```js
const log = require('npmlog')
process.on('log', function (level) {
  return log[level].apply(log, [].slice.call(arguments, 1))
})
```

# qw

Quoted word literals!

```js
const qw = require('qw')

const myword = qw` this is a long list of words`
// equiv of:
const myword = [ 'this', 'is', 'a', 'long', 'list', 'of', 'words' ]
```

You can embed vars in the usual way:

```js
const mywords = qw`product ${23 * 5} also ${'escaping a string'}`
// equiv of:
const mywords = [ 'product', 23 * 5, 'also', 'escaping a string' ]
```

You can also embed vars inside strings:

```js
const mywords = qw`product=${23 * 5} also "${'escaping a string'}"`
// equiv of:
const mywords = [ 'product=' + (23 * 5), 'also', '"escaping a string"' ]
```

## DESCRIPTION

This uses template strings to bring over this little common convenience from Perl-land.

# normalize-package-data

[![Build Status](https://travis-ci.org/npm/normalize-package-data.png?branch=master)](https://travis-ci.org/npm/normalize-package-data)

normalize-package-data exports a function that normalizes package metadata. This data is typically found in a package.json file, but in principle could come from any source - for example the npm registry.

normalize-package-data is used by [read-package-json](https://npmjs.org/package/read-package-json) to normalize the data it reads from a package.json file. In turn, read-package-json is used by [npm](https://npmjs.org/package/npm) and various npm-related tools.

## Installation

```
npm install normalize-package-data
```

## Usage

Basic usage is really simple. You call the function that normalize-package-data exports. Let's call it `normalizeData`.

```javascript
normalizeData = require('normalize-package-data')
packageData = require("./package.json")
normalizeData(packageData)
// packageData is now normalized
```

#### Strict mode

You may activate strict validation by passing true as the second argument.

```javascript
normalizeData = require('normalize-package-data')
packageData = require("./package.json")
normalizeData(packageData, true)
// packageData is now normalized
```

If strict mode is activated, only Semver 2.0 version strings are accepted. Otherwise, Semver 1.0 strings are accepted as well. Packages must have a name, and the name field must not contain leading or trailing whitespace.

#### Warnings

Optionally, you may pass a "warning" function. It gets called whenever the `normalizeData` function encounters something that doesn't look right. It indicates less than perfect input data.

```javascript
normalizeData = require('normalize-package-data')
packageData = require("./package.json")
warnFn = function(msg) { console.error(msg) }
normalizeData(packageData, warnFn)
// packageData is now normalized. Any number of warnings may have been logged.
```

You may combine strict validation with warnings by passing `true` as the second argument, and `warnFn` as third. When `private` field is set to `true`, warnings will be suppressed.
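As a concrete illustration of the combined form just described (strict flag as the second argument, warning function as the third), a minimal sketch could look like this:

```javascript
normalizeData = require('normalize-package-data')
packageData = require("./package.json")
warnFn = function (msg) { console.error(msg) }
// strict validation plus warnings: `true` second, `warnFn` third
normalizeData(packageData, true, warnFn)
// packageData is now normalized; any warnings were passed to warnFn
```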
### Potential exceptions

If the supplied data has an invalid name or version field, `normalizeData` will throw an error. Depending on where you call `normalizeData`, you may want to catch these errors so you can pass them to a callback.

## What normalization (currently) entails

* The value of `name` field gets trimmed (unless in strict mode).
* The value of the `version` field gets cleaned by `semver.clean`. See [documentation for the semver module](https://github.com/isaacs/node-semver).
* If `name` and/or `version` fields are missing, they are set to empty strings.
* If `files` field is not an array, it will be removed.
* If `bin` field is a string, then `bin` field will become an object with `name` set to the value of the `name` field, and `bin` set to the original string value.
* If `man` field is a string, it will become an array with the original string as its sole member.
* If `keywords` field is a string, it is considered to be a list of keywords separated by one or more white-space characters. It gets converted to an array by splitting on `\s+`.
* All people fields (`author`, `maintainers`, `contributors`) get converted into objects with name, email and url properties.
* If `bundledDependencies` field (a typo) exists and `bundleDependencies` field does not, `bundledDependencies` will get renamed to `bundleDependencies`.
* If the value of any of the dependencies fields (`dependencies`, `devDependencies`, `optionalDependencies`) is a string, it gets converted into an object with familiar `name=>value` pairs.
* The values in `optionalDependencies` get added to `dependencies`. The `optionalDependencies` array is left untouched.
* As of v2: Dependencies that point at known hosted git providers (currently: github, bitbucket, gitlab) will have their URLs canonicalized, but protocols will be preserved.
* As of v2: Dependencies that use shortcuts for hosted git providers (`org/proj`, `github:org/proj`, `bitbucket:org/proj`, `gitlab:org/proj`, `gist:docid`) will have the shortcut left in place. (In the case of github, the `org/proj` form will be expanded to `github:org/proj`.) THIS MARKS A BREAKING CHANGE FROM V1, where the shortcut was previously expanded to a URL.
* If `description` field does not exist, but `readme` field does, then (more or less) the first paragraph of text that's found in the readme is taken as value for `description`.
* If `repository` field is a string, it will become an object with `url` set to the original string value, and `type` set to `"git"`.
* If `repository.url` is not a valid url, but in the style of "[owner-name]/[repo-name]", `repository.url` will be set to git+https://github.com/[owner-name]/[repo-name].git
* If `bugs` field is a string, the value of `bugs` field is changed into an object with `url` set to the original string value.
* If `bugs` field does not exist, but `repository` field points to a repository hosted on GitHub, the value of the `bugs` field gets set to an url in the form of https://github.com/[owner-name]/[repo-name]/issues . If the repository field points to a GitHub Gist repo url, the associated http url is chosen.
* If `bugs` field is an object, the resulting value only has email and url properties. If email and url properties are not strings, they are ignored. If no valid values for either email or url is found, bugs field will be removed.
* If `homepage` field is not a string, it will be removed.
* If the url in the `homepage` field does not specify a protocol, then http is assumed.
For example, `myproject.org` will be changed to `http://myproject.org`.
* If `homepage` field does not exist, but `repository` field points to a repository hosted on GitHub, the value of the `homepage` field gets set to an url in the form of https://github.com/[owner-name]/[repo-name]#readme . If the repository field points to a GitHub Gist repo url, the associated http url is chosen.

### Rules for name field

If `name` field is given, the value of the name field must be a string. The string may not:

* start with a period.
* contain the following characters: `/@\s+%`
* contain any characters that would need to be encoded for use in urls.
* resemble the word `node_modules` or `favicon.ico` (case doesn't matter).

### Rules for version field

If `version` field is given, the value of the version field must be a valid *semver* string, as determined by the `semver.valid` method. See [documentation for the semver module](https://github.com/isaacs/node-semver).

### Rules for license field

The `license` field should be a valid *SPDX license expression* or one of the special values allowed by [validate-npm-package-license](https://npmjs.com/package/validate-npm-package-license). See [documentation for the license field in package.json](https://docs.npmjs.com/files/package.json#license).

## Credits

This package contains code based on read-package-json written by Isaac Z. Schlueter. Used with permission.

## License

normalize-package-data is released under the [BSD 2-Clause License](https://opensource.org/licenses/BSD-2-Clause).
Copyright (c) 2013 Meryn Stol

# stream-each

Iterate all the data in a stream

```
npm install stream-each
```

[![build status](http://img.shields.io/travis/mafintosh/stream-each.svg?style=flat)](http://travis-ci.org/mafintosh/stream-each)

## Usage

```js
var each = require('stream-each')

each(stream, function (data, next) {
  console.log('data from stream', data)
  // when ready to consume next chunk
  next()
}, function (err) {
  console.log('no more data')
})
```

## API

#### `each(stream, iterator, cb)`

Iterate the data in the stream by calling the iterator function with `(data, next)` where data is a data chunk and next is a callback. Call next when you are ready to consume the next chunk.

Optionally you can call next with an error to destroy the stream. When the stream ends/errors, the callback is called if provided.

## License

MIT

## Related

`stream-each` is part of the [mississippi stream utility collection](https://github.com/maxogden/mississippi) which includes more useful stream modules similar to this one.

# string_decoder

***Node-core v8.9.4 string_decoder for userland***

[![NPM](https://nodei.co/npm/string_decoder.png?downloads=true&downloadRank=true)](https://nodei.co/npm/string_decoder/)
[![NPM](https://nodei.co/npm-dl/string_decoder.png?&months=6&height=3)](https://nodei.co/npm/string_decoder/)

```bash
npm install --save string_decoder
```

***Node-core string_decoder for userland***

This package is a mirror of the string_decoder implementation in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.9.4/docs/api/).

As of version 1.0.0 **string_decoder** uses semantic versioning.

## Previous versions

Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10.

## Update

The *build/* directory contains a build script that will scrape the source from the [nodejs/node](https://github.com/nodejs/node) repo given a specific Node version.
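The readme above gives no usage example. Since this package mirrors Node core's `string_decoder`, a minimal sketch using the documented core API (on the assumption that the mirror matches core behavior) looks like this:

```js
const { StringDecoder } = require('string_decoder')
const decoder = new StringDecoder('utf8')

// '€' is three bytes in UTF-8 (0xE2 0x82 0xAC); decode it across two chunks
let out = decoder.write(Buffer.from([0xE2, 0x82]))
out += decoder.end(Buffer.from([0xAC]))
console.log(out) // prints: €
```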
## Streams Working Group `string_decoder` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. See [readable-stream](https://github.com/nodejs/readable-stream) for more details. If you want to write an option parser, and have it be good, there are two ways to do it. The Right Way, and the Wrong Way. The Wrong Way is to sit down and write an option parser. We've all done that. The Right Way is to write some complex configurable program with so many options that you hit the limit of your frustration just trying to manage them all, and defer it with duct-tape solutions until you see exactly to the core of the problem, and finally snap and write an awesome option parser. If you want to write an option parser, don't write an option parser. Write a package manager, or a source control system, or a service restarter, or an operating system. You probably won't end up with a good one of those, but if you don't give up, and you are relentless and diligent enough in your procrastination, you may just end up with a very nice option parser. ## USAGE ```javascript // my-program.js var nopt = require("nopt") , Stream = require("stream").Stream , path = require("path") , knownOpts = { "foo" : [String, null] , "bar" : [Stream, Number] , "baz" : path , "bloo" : [ "big", "medium", "small" ] , "flag" : Boolean , "pick" : Boolean , "many1" : [String, Array] , "many2" : [path, Array] } , shortHands = { "foofoo" : ["--foo", "Mr. Foo"] , "b7" : ["--bar", "7"] , "m" : ["--bloo", "medium"] , "p" : ["--pick"] , "f" : ["--flag"] } // everything is optional. // knownOpts and shorthands default to {} // arg list defaults to process.argv // slice defaults to 2 , parsed = nopt(knownOpts, shortHands, process.argv, 2) console.log(parsed) ``` This would give you support for any of the following: ```console $ node my-program.js --foo "blerp" --no-flag { "foo" : "blerp", "flag" : false } $ node my-program.js ---bar 7 --foo "Mr. Hand" --flag { bar: 7, foo: "Mr. Hand", flag: true } $ node my-program.js --foo "blerp" -f -----p { foo: "blerp", flag: true, pick: true } $ node my-program.js -fp --foofoo { foo: "Mr. Foo", flag: true, pick: true } $ node my-program.js --foofoo -- -fp # -- stops the flag parsing. { foo: "Mr. Foo", argv: { remain: ["-fp"] } } $ node my-program.js --blatzk -fp # unknown opts are ok. { blatzk: true, flag: true, pick: true } $ node my-program.js --blatzk=1000 -fp # but you need to use = if they have a value { blatzk: 1000, flag: true, pick: true } $ node my-program.js --no-blatzk -fp # unless they start with "no-" { blatzk: false, flag: true, pick: true } $ node my-program.js --baz b/a/z # known paths are resolved. { baz: "/Users/isaacs/b/a/z" } # if Array is one of the types, then it can take many # values, and will always be an array. The other types provided # specify what types are allowed in the list. 
$ node my-program.js --many1 5 --many1 null --many1 foo { many1: ["5", "null", "foo"] } $ node my-program.js --many2 foo --many2 bar { many2: ["/path/to/foo", "path/to/bar"] } ``` Read the tests at the bottom of `lib/nopt.js` for more examples of what this puppy can do. ## Types The following types are supported, and defined on `nopt.typeDefs` * String: A normal string. No parsing is done. * path: A file system path. Gets resolved against cwd if not absolute. * url: A url. If it doesn't parse, it isn't accepted. * Number: Must be numeric. * Date: Must parse as a date. If it does, and `Date` is one of the options, then it will return a Date object, not a string. * Boolean: Must be either `true` or `false`. If an option is a boolean, then it does not need a value, and its presence will imply `true` as the value. To negate boolean flags, do `--no-whatever` or `--whatever false` * NaN: Means that the option is strictly not allowed. Any value will fail. * Stream: An object matching the "Stream" class in node. Valuable for use when validating programmatically. (npm uses this to let you supply any WriteStream on the `outfd` and `logfd` config options.) * Array: If `Array` is specified as one of the types, then the value will be parsed as a list of options. This means that multiple values can be specified, and that the value will always be an array. If a type is an array of values not on this list, then those are considered valid values. For instance, in the example above, the `--bloo` option can only be one of `"big"`, `"medium"`, or `"small"`, and any other value will be rejected. When parsing unknown fields, `"true"`, `"false"`, and `"null"` will be interpreted as their JavaScript equivalents. You can also mix types and values, or multiple types, in a list. For instance `{ blah: [Number, null] }` would allow a value to be set to either a Number or null. When types are ordered, this implies a preference, and the first type that can be used to properly interpret the value will be used. To define a new type, add it to `nopt.typeDefs`. Each item in that hash is an object with a `type` member and a `validate` method. The `type` member is an object that matches what goes in the type list. The `validate` method is a function that gets called with `validate(data, key, val)`. Validate methods should assign `data[key]` to the valid value of `val` if it can be handled properly, or return boolean `false` if it cannot. You can also call `nopt.clean(data, types, typeDefs)` to clean up a config object and remove its invalid properties. ## Error Handling By default, nopt outputs a warning to standard error when invalid values for known options are found. You can change this behavior by assigning a method to `nopt.invalidHandler`. This method will be called with the offending `nopt.invalidHandler(key, val, types)`. If no `nopt.invalidHandler` is assigned, then it will console.error its whining. If it is assigned to boolean `false` then the warning is suppressed. ## Abbreviations Yes, they are supported. If you define options like this: ```javascript { "foolhardyelephants" : Boolean , "pileofmonkeys" : Boolean } ``` Then this will work: ```bash node program.js --foolhar --pil node program.js --no-f --pileofmon # etc. ``` ## Shorthands Shorthands are a hash of shorter option names to a snippet of args that they expand to. If multiple one-character shorthands are all combined, and the combination does not unambiguously match any other option or shorthand, then they will be broken up into their constituent parts. 
For example: ```json { "s" : ["--loglevel", "silent"] , "g" : "--global" , "f" : "--force" , "p" : "--parseable" , "l" : "--long" } ``` ```bash npm ls -sgflp # just like doing this: npm ls --loglevel silent --global --force --long --parseable ``` ## The Rest of the args The config object returned by nopt is given a special member called `argv`, which is an object with the following fields: * `remain`: The remaining args after all the parsing has occurred. * `original`: The args as they originally appeared. * `cooked`: The args after flags and shorthands are expanded. ## Slicing Node programs are called with more or less the exact argv as it appears in C land, after the v8 and node-specific options have been plucked off. As such, `argv[0]` is always `node` and `argv[1]` is always the JavaScript program being run. That's usually not very useful to you. So they're sliced off by default. If you want them, then you can pass in `0` as the last argument, or any other number that you'd like to slice off the start of the list. # It Opens Stuff That is, in your desktop environment. This will make *actual windows pop up*, with stuff in them: ```bash npm install opener -g opener http://google.com opener ./my-file.txt opener firefox opener npm run lint ``` Also if you want to use it programmatically you can do that too: ```js var opener = require("opener"); opener("http://google.com"); opener("./my-file.txt"); opener("firefox"); opener("npm run lint"); ``` Plus, it returns the child process created, so you can do things like let your script exit while the window stays open: ```js var editor = opener("documentation.odt"); editor.unref(); // These other unrefs may be necessary if your OS's opener process // exits before the process it started is complete. editor.stdin.unref(); editor.stdout.unref(); editor.stderr.unref(); ``` ## Use It for Good Like opening the user's browser with a test harness in your package's test script: ```json { "scripts": { "test": "opener ./test/runner.html" }, "devDependencies": { "opener": "*" } } ``` ## Why Because Windows has `start`, Macs have `open`, and *nix has `xdg-open`. At least [according to some person on StackOverflow](http://stackoverflow.com/q/1480971/3191). And I like things that work on all three. Like Node.js. And Opener. The package exports an array of strings. Each string is an identifier for a license exception under the [Software Package Data Exchange (SPDX)][SPDX] software license metadata standard. [SPDX]: https://spdx.org ## Copyright and Licensing ### SPDX "SPDX" is a federally registered United States trademark of The Linux Foundation Corporation. From version 2.0 of the [SPDX] specification: > Copyright © 2010-2015 Linux Foundation and its Contributors. Licensed > under the Creative Commons Attribution License 3.0 Unported. All other > rights are expressly reserved. The Linux Foundation and the SPDX working groups are good people. Only they decide what "SPDX" means, as a standard and otherwise. I respect their work and their rights. You should, too. ### This Package > I created this package by copying exception identifiers out of the > SPDX specification. That work was mechanical, routine, and required no > creativity whatsoever. - Kyle Mitchell, package author United States users concerned about intellectual property may wish to discuss the following Supreme Court decisions with their attorneys: - _Baker v. Selden_, 101 U.S. 99 (1879) - _Feist Publications, Inc., v. Rural Telephone Service Co._, 499 U.S. 
340 (1991) # yargs-parser [![Build Status](https://travis-ci.org/yargs/yargs-parser.svg)](https://travis-ci.org/yargs/yargs-parser) [![Coverage Status](https://coveralls.io/repos/yargs/yargs-parser/badge.svg?branch=)](https://coveralls.io/r/yargs/yargs-parser?branch=master) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. <img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js var argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```sh node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js var argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```sh { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js var parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## API ### require('yargs-parser')(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. 
* `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion. * `configuration`: the configuration loaded from the `yargs` stanza in package.json. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```sh node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```sh node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```sh node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```sh node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```sh node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```sh node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```sh node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```sh node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```sh node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```sh node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. ### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```sh node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```sh node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. 
```sh node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```sh node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```sh node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```sh node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```sh node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```sh node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. _If disabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC # npm-registry-fetch [![npm version](https://img.shields.io/npm/v/npm-registry-fetch.svg)](https://npm.im/npm-registry-fetch) [![license](https://img.shields.io/npm/l/npm-registry-fetch.svg)](https://npm.im/npm-registry-fetch) [![Travis](https://img.shields.io/travis/npm/npm-registry-fetch/latest.svg)](https://travis-ci.org/npm/npm-registry-fetch) [![AppVeyor](https://img.shields.io/appveyor/ci/zkat/npm-registry-fetch/latest.svg)](https://ci.appveyor.com/project/npm/npm-registry-fetch) [![Coverage Status](https://coveralls.io/repos/github/npm/npm-registry-fetch/badge.svg?branch=latest)](https://coveralls.io/github/npm/npm-registry-fetch?branch=latest) [`npm-registry-fetch`](https://github.com/npm/npm-registry-fetch) is a Node.js library that implements a `fetch`-like API for accessing npm registry APIs consistently. 
It's able to consume npm-style configuration values and has all the necessary logic for picking registries, handling scopes, and dealing with authentication details built-in. This package is meant to replace the older [`npm-registry-client`](https://npm.im/npm-registry-client). ## Example ```javascript const npmFetch = require('npm-registry-fetch') console.log( await npmFetch.json('/-/ping') ) ``` ## Table of Contents * [Installing](#install) * [Example](#example) * [Contributing](#contributing) * [API](#api) * [`fetch`](#fetch) * [`fetch.json`](#fetch-json) * [`fetch` options](#fetch-opts) ### Install `$ npm install npm-registry-fetch` ### Contributing The npm team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear. All participants and maintainers in this project are expected to follow [Code of Conduct](CODE_OF_CONDUCT.md), and just generally be excellent to each other. Please refer to the [Changelog](CHANGELOG.md) for project history details, too. Happy hacking! ### API #### Caching and `write=true` query strings Before performing any PUT or DELETE operation, npm clients first make a GET request to the registry resource being updated, which includes the query string `?write=true`. The semantics of this are, effectively, "I intend to write to this thing, and need to know the latest current value, so that my write can land cleanly". The public npm registry handles these `?write=true` requests by ensuring that the cache is re-validated before sending a response. In order to maintain the same behavior on the client, and not get tripped up by an overeager local cache when we intend to write data to the registry, any request that comes through `npm-registry-fetch` that contains `write=true` in the query string will forcibly set the `prefer-online` option to `true`, and set both `prefer-offline` and `offline` to false, so that any local cached value will be revalidated. #### <a name="fetch"></a> `> fetch(url, [opts]) -> Promise<Response>` Performs a request to a given URL. The URL can be either a full URL, or a path to one. The appropriate registry will be automatically picked if only a URL path is given. For available options, please see the section on [`fetch` options](#fetch-opts). ##### Example ```javascript const res = await fetch('/-/ping') console.log(res.headers) res.on('data', d => console.log(d.toString('utf8'))) ``` #### <a name="fetch-json"></a> `> fetch.json(url, [opts]) -> Promise<ResponseJSON>` Performs a request to a given registry URL, parses the body of the response as JSON, and returns it as its final value. This is a utility shorthand for `fetch(url).then(res => res.json())`. For available options, please see the section on [`fetch` options](#fetch-opts). ##### Example ```javascript const res = await fetch.json('/-/ping') console.log(res) // Body parsed as JSON ``` #### <a name="fetch-json-stream"></a> `> fetch.json.stream(url, jsonPath, [opts]) -> Stream` Performs a request to a given registry URL and parses the body of the response as JSON, with each entry being emitted through the stream. 
The `jsonPath` argument is a [`JSONStream.parse()` path](https://github.com/dominictarr/JSONStream#jsonstreamparsepath), and the returned stream (unlike default `JSONStream`s), has a valid `Symbol.asyncIterator` implementation. For available options, please see the section on [`fetch` options](#fetch-opts). ##### Example ```javascript console.log('https://npm.im/~zkat has access to the following packages:') for await (let {key, value} of fetch.json.stream('/-/user/zkat/package', '$*')) { console.log(`https://npm.im/${key} (perms: ${value})`) } ``` #### <a name="fetch-opts"></a> `fetch` Options Fetch options are optional, and can be passed in as either a Map-like object (one with a `.get()` method), a plain javascript object, or a [`figgy-pudding`](https://npm.im/figgy-pudding) instance. ##### <a name="opts-agent"></a> `opts.agent` * Type: http.Agent * Default: an appropriate agent based on URL protocol and proxy settings An [`Agent`](https://nodejs.org/api/http.html#http_class_http_agent) instance to be shared across requests. This allows multiple concurrent `fetch` requests to happen on the same socket. You do _not_ need to provide this option unless you want something particularly specialized, since proxy configurations and http/https agents are already automatically managed internally when this option is not passed through. ##### <a name="opts-body"></a> `opts.body` * Type: Buffer | Stream | Object * Default: null Request body to send through the outgoing request. Buffers and Streams will be passed through as-is, with a default `content-type` of `application/octet-stream`. Plain JavaScript objects will be `JSON.stringify`ed and the `content-type` will default to `application/json`. Use [`opts.headers`](#opts-headers) to set the content-type to something else. ##### <a name="opts-ca"></a> `opts.ca` * Type: String, Array, or null * Default: null The Certificate Authority signing certificate that is trusted for SSL connections to the registry. Values should be in PEM format (Windows calls it "Base-64 encoded X.509 (.CER)") with newlines replaced by the string `'\n'`. For example: ``` { ca: '-----BEGIN CERTIFICATE-----\nXXXX\nXXXX\n-----END CERTIFICATE-----' } ``` Set to `null` to only allow "known" registrars, or to a specific CA cert to trust only that specific signing authority. Multiple CAs can be trusted by specifying an array of certificates instead of a single string. See also [`opts.strict-ssl`](#opts-strict-ssl), [`opts.ca`](#opts-ca) and [`opts.key`](#opts-key) ##### <a name="opts-cache"></a> `opts.cache` * Type: path * Default: null The location of the http cache directory. If provided, certain cachable requests will be cached according to [IETF RFC 7234](https://tools.ietf.org/html/rfc7234) rules. This will speed up future requests, as well as make the cached data available offline if necessary/requested. See also [`offline`](#opts-offline), [`prefer-offline`](#opts-prefer-offline), and [`prefer-online`](#opts-prefer-online). ##### <a name="opts-cert"></a> `opts.cert` * Type: String * Default: null A client certificate to pass when accessing the registry. Values should be in PEM format (Windows calls it "Base-64 encoded X.509 (.CER)") with newlines replaced by the string `'\n'`. For example: ``` { cert: '-----BEGIN CERTIFICATE-----\nXXXX\nXXXX\n-----END CERTIFICATE-----' } ``` It is _not_ the path to a certificate file (and there is no "certfile" option). 
See also: [`opts.ca`](#opts-ca) and [`opts.key`](#opts-key)

##### <a name="opts-fetch-retries"></a> `opts.fetch-retries`

* Type: Number
* Default: 2

The "retries" config for [`retry`](https://npm.im/retry) to use when fetching packages from the registry.

See also [`opts.retry`](#opts-retry) to provide all retry options as a single object.

##### <a name="opts-fetch-retry-factor"></a> `opts.fetch-retry-factor`

* Type: Number
* Default: 10

The "factor" config for [`retry`](https://npm.im/retry) to use when fetching packages.

See also [`opts.retry`](#opts-retry) to provide all retry options as a single object.

##### <a name="opts-fetch-retry-mintimeout"></a> `opts.fetch-retry-mintimeout`

* Type: Number
* Default: 10000 (10 seconds)

The "minTimeout" config for [`retry`](https://npm.im/retry) to use when fetching packages.

See also [`opts.retry`](#opts-retry) to provide all retry options as a single object.

##### <a name="opts-fetch-retry-maxtimeout"></a> `opts.fetch-retry-maxtimeout`

* Type: Number
* Default: 60000 (1 minute)

The "maxTimeout" config for [`retry`](https://npm.im/retry) to use when fetching packages.

See also [`opts.retry`](#opts-retry) to provide all retry options as a single object.

##### <a name="opts-force-auth"></a> `opts.force-auth`

* Alias: `opts.forceAuth`
* Type: Object
* Default: null

If present, other auth-related values in `opts` will be completely ignored, including `alwaysAuth`, `email`, and `otp`, when calculating auth for a request, and the auth details in `opts.forceAuth` will be used instead.

##### <a name="opts-gzip"></a> `opts.gzip`

* Type: Boolean
* Default: false

If true, `npm-registry-fetch` will set the `Content-Encoding` header to `gzip` and use `zlib.gzip()` or `zlib.createGzip()` to gzip-encode [`opts.body`](#opts-body).

##### <a name="opts-headers"></a> `opts.headers`

* Type: Object
* Default: null

Additional headers for the outgoing request. This option can also be used to override headers automatically generated by `npm-registry-fetch`, such as `Content-Type`.

##### <a name="opts-ignore-body"></a> `opts.ignore-body`

* Alias: `opts.ignoreBody`
* Type: Boolean
* Default: false

If true, the **response body** will be thrown away and `res.body` set to `null`. This will prevent dangling response sockets for requests where you don't usually care what the response body is.

##### <a name="opts-integrity"></a> `opts.integrity`

* Type: String | [SRI object](https://npm.im/ssri)
* Default: null

If provided, the response body will be verified against this integrity string, using [`ssri`](https://npm.im/ssri). If verification succeeds, the response will complete as normal. If verification fails, the response body will error with an `EINTEGRITY` error.

Body integrity is only verified if the body is actually consumed to completion -- that is, if you use `res.json()`/`res.buffer()`, or if you consume the default `res` stream data to its end.

Cached data will have its integrity automatically verified using the previously-generated integrity hash for the saved request information, so `EINTEGRITY` errors can happen if [`opts.cache`](#opts-cache) is used, even if `opts.integrity` is not passed in.

##### <a name='opts-is-from-ci'></a> `opts.is-from-ci`

* Alias: `opts.isFromCI`
* Type: Boolean
* Default: Based on environment variables

This is used to populate the `npm-in-ci` request header sent to the registry.

##### <a name="opts-key"></a> `opts.key`

* Type: String
* Default: null

A client key to pass when accessing the registry.
Values should be in PEM format with newlines replaced by the string `'\n'`. For example: ``` { key: '-----BEGIN PRIVATE KEY-----\nXXXX\nXXXX\n-----END PRIVATE KEY-----' } ``` It is _not_ the path to a key file (and there is no "keyfile" option). See also: [`opts.ca`](#opts-ca) and [`opts.cert`](#opts-cert) ##### <a name="opts-local-address"></a> `opts.local-address` * Type: IP Address String * Default: null The IP address of the local interface to use when making connections to the registry. See also [`opts.proxy`](#opts-proxy) ##### <a name="opts-log"></a> `opts.log` * Type: [`npmlog`](https://npm.im/npmlog)-like * Default: null Logger object to use for logging operation details. Must have the same methods as `npmlog`. ##### <a name="opts-map-json"></a> `opts.map-json` * Alias: `mapJson`, `mapJSON` * Type: Function * Default: undefined When using `fetch.json.stream()` (NOT `fetch.json()`), this will be passed down to [`JSONStream`](https://npm.im/JSONStream) as the second argument to `JSONStream.parse`, and can be used to transform stream data before output. ##### <a name="opts-maxsockets"></a> `opts.maxsockets` * Alias: `opts.max-sockets` * Type: Integer * Default: 12 Maximum number of sockets to keep open during requests. Has no effect if [`opts.agent`](#opts-agent) is used. ##### <a name="opts-method"></a> `opts.method` * Type: String * Default: 'GET' HTTP method to use for the outgoing request. Case-insensitive. ##### <a name="opts-noproxy"></a> `opts.noproxy` * Type: Boolean * Default: process.env.NOPROXY If true, proxying will be disabled even if [`opts.proxy`](#opts-proxy) is used. ##### <a name="opts-npm-session"></a> `opts.npm-session` * Alias: `opts.npmSession` * Type: String * Default: null If provided, will be sent in the `npm-session` header. This header is used by the npm registry to identify individual user sessions (usually individual invocations of the CLI). ##### <a name="opts-offline"></a> `opts.offline` * Type: Boolean * Default: false Force offline mode: no network requests will be done during install. To allow `npm-registry-fetch` to fill in missing cache data, see [`opts.prefer-offline`](#opts-prefer-offline). This option is only really useful if you're also using [`opts.cache`](#opts-cache). This option is set to `true` when the request includes `write=true` in the query string. ##### <a name="opts-otp"></a> `opts.otp` * Type: Number | String * Default: null This is a one-time password from a two-factor authenticator. It is required for certain registry interactions when two-factor auth is enabled for a user account. ##### <a name="opts-password"></a> `opts.password` * Alias: `_password` * Type: String * Default: null Password used for basic authentication. For the more modern authentication method, please use the (more secure) [`opts.token`](#opts-token) Can optionally be scoped to a registry by using a "nerf dart" for that registry. That is: ``` { '//registry.npmjs.org/:password': 't0k3nH34r' } ``` See also [`opts.username`](#opts-username) ##### <a name="opts-prefer-offline"></a> `opts.prefer-offline` * Type: Boolean * Default: false If true, staleness checks for cached data will be bypassed, but missing data will be requested from the server. To force full offline mode, use [`opts.offline`](#opts-offline). This option is generally only useful if you're also using [`opts.cache`](#opts-cache). This option is set to `false` when the request includes `write=true` in the query string. 
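To tie the cache-related options above together (`cache`, `offline`, `prefer-offline`), here is a hedged sketch of a request that uses a local cache and prefers cached data when available; the cache path and package name are placeholders for this example only:

```javascript
const npmFetch = require('npm-registry-fetch')

// Placeholder cache directory and package name, for illustration only
const packument = await npmFetch.json('/example-package', {
  cache: '/tmp/my-npm-cache',  // see opts.cache above
  'prefer-offline': true       // skip staleness checks when cached data exists
})
console.log(Object.keys(packument.versions || {}))
```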
##### <a name="opts-prefer-online"></a> `opts.prefer-online` * Type: Boolean * Default: false If true, staleness checks for cached data will be forced, making the CLI look for updates immediately even for fresh package data. This option is generally only useful if you're also using [`opts.cache`](#opts-cache). This option is set to `true` when the request includes `write=true` in the query string. ##### <a name="opts-project-scope"></a> `opts.project-scope` * Alias: `opts.projectScope` * Type: String * Default: null If provided, will be sent in the `npm-scope` header. This header is used by the npm registry to identify the toplevel package scope that a particular project installation is using. ##### <a name="opts-proxy"></a> `opts.proxy` * Type: url * Default: null A proxy to use for outgoing http requests. If not passed in, the `HTTP(S)_PROXY` environment variable will be used. ##### <a name="opts-query"></a> `opts.query` * Type: String | Object * Default: null If provided, the request URI will have a query string appended to it using this query. If `opts.query` is an object, it will be converted to a query string using [`querystring.stringify()`](https://nodejs.org/api/querystring.html#querystring_querystring_stringify_obj_sep_eq_options). If the request URI already has a query string, it will be merged with `opts.query`, preferring `opts.query` values. ##### <a name="opts-refer"></a> `opts.refer` * Alias: `opts.referer` * Type: String * Default: null Value to use for the `Referer` header. The npm CLI itself uses this to serialize the npm command line using the given request. ##### <a name="opts-registry"></a> `opts.registry` * Type: URL * Default: `'https://registry.npmjs.org'` Registry configuration for a request. If a request URL only includes the URL path, this registry setting will be prepended. This configuration is also used to determine authentication details, so even if the request URL references a completely different host, `opts.registry` will be used to find the auth details for that request. See also [`opts.scope`](#opts-scope), [`opts.spec`](#opts-spec), and [`opts.<scope>:registry`](#opts-scope-registry) which can all affect the actual registry URL used by the outgoing request. ##### <a name="opts-retry"></a> `opts.retry` * Type: Object * Default: null Single-object configuration for request retry settings. If passed in, will override individually-passed `fetch-retry-*` settings. ##### <a name="opts-scope"></a> `opts.scope` * Type: String * Default: null Associate an operation with a scope for a scoped registry. This option can force lookup of scope-specific registries and authentication. See also [`opts.<scope>:registry`](#opts-scope-registry) and [`opts.spec`](#opts-spec) for interactions with this option. ##### <a name="opts-scope-registry"></a> `opts.<scope>:registry` * Type: String * Default: null This option type can be used to configure the registry used for requests involving a particular scope. For example, `opts['@myscope:registry'] = 'https://scope-specific.registry/'` will make it so requests go out to this registry instead of [`opts.registry`](#opts-registry) when [`opts.scope`](#opts-scope) is used, or when [`opts.spec`](#opts-spec) is a scoped package spec. The `@` before the scope name is optional, but recommended. ##### <a name="opts-spec"></a> `opts.spec` * Type: String | [`npm-registry-arg`](https://npm.im/npm-registry-arg) object. 
* Default: null If provided, can be used to automatically configure [`opts.scope`](#opts-scope) based on a specific package name. Non-registry package specs will throw an error. ##### <a name="opts-strict-ssl"></a> `opts.strict-ssl` * Type: Boolean * Default: true Whether or not to do SSL key validation when making requests to the registry via https. See also [`opts.ca`](#opts-ca). ##### <a name="opts-timeout"></a> `opts.timeout` * Type: Milliseconds * Default: 0 (no timeout) Time before a hanging request times out. ##### <a name="opts-token"></a> `opts.token` * Alias: `opts._authToken` * Type: String * Default: null Authentication token string. Can be scoped to a registry by using a "nerf dart" for that registry. That is: ``` { '//registry.npmjs.org/:token': 't0k3nH34r' } ``` ##### <a name="opts-user-agent"></a> `opts.user-agent` * Type: String * Default: `'npm-registry-fetch@<version>/node@<node-version>+<arch> (<platform>)'` User agent string to send in the `User-Agent` header. ##### <a name="opts-username"></a> `opts.username` * Type: String * Default: null Username used for basic authentication. For the more modern authentication method, please use the (more secure) [`opts.token`](#opts-token) Can optionally be scoped to a registry by using a "nerf dart" for that registry. That is: ``` { '//registry.npmjs.org/:username': 't0k3nH34r' } ``` See also [`opts.password`](#opts-password) ##### <a name="opts-auth"></a> `opts._auth` * Type: String * Default: null ** DEPRECATED ** This is a legacy authentication token supported only for compatibility. Please use [`opts.token`](#opts-token) instead. # verror: rich JavaScript errors This module provides several classes in support of Joyent's [Best Practices for Error Handling in Node.js](http://www.joyent.com/developers/node/design/errors). If you find any of the behavior here confusing or surprising, check out that document first. The error classes here support: * printf-style arguments for the message * chains of causes * properties to provide extra information about the error * creating your own subclasses that support all of these The classes here are: * **VError**, for chaining errors while preserving each one's error message. This is useful in servers and command-line utilities when you want to propagate an error up a call stack, but allow various levels to add their own context. See examples below. * **WError**, for wrapping errors while hiding the lower-level messages from the top-level error. This is useful for API endpoints where you don't want to expose internal error messages, but you still want to preserve the error chain for logging and debugging. * **SError**, which is just like VError but interprets printf-style arguments more strictly. * **MultiError**, which is just an Error that encapsulates one or more other errors. (This is used for parallel operations that return several errors.) 
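Before the quick start, a short sketch of the VError/WError distinction described above may help. This is not part of the original text, but follows the documented behavior: WError hides the cause's message from the top-level message while keeping the cause reachable.

```javascript
var VError = require('verror');
var WError = VError.WError;

var cause = new Error('ENOENT, stat \'/nonexistent\'');

var verr = new VError(cause, 'request failed');
console.log(verr.message);         // request failed: ENOENT, stat '/nonexistent'

var werr = new WError(cause, 'request failed');
console.log(werr.message);         // request failed
console.log(werr.cause().message); // ENOENT, stat '/nonexistent'
```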
# Quick start First, install the package: npm install verror If nothing else, you can use VError as a drop-in replacement for the built-in JavaScript Error class, with the addition of printf-style messages: ```javascript var err = new VError('missing file: "%s"', '/etc/passwd'); console.log(err.message); ``` This prints: missing file: "/etc/passwd" You can also pass a `cause` argument, which is any other Error object: ```javascript var fs = require('fs'); var filename = '/nonexistent'; fs.stat(filename, function (err1) { var err2 = new VError(err1, 'stat "%s"', filename); console.error(err2.message); }); ``` This prints out: stat "/nonexistent": ENOENT, stat '/nonexistent' which resembles how Unix programs typically report errors: $ sort /nonexistent sort: open failed: /nonexistent: No such file or directory To match the Unixy feel, when you print out the error, just prepend the program's name to the VError's `message`. Or just call [node-cmdutil.fail(your_verror)](https://github.com/joyent/node-cmdutil), which does this for you. You can get the next-level Error using `err.cause()`: ```javascript console.error(err2.cause().message); ``` prints: ENOENT, stat '/nonexistent' Of course, you can chain these as many times as you want, and it works with any kind of Error: ```javascript var err1 = new Error('No such file or directory'); var err2 = new VError(err1, 'failed to stat "%s"', '/junk'); var err3 = new VError(err2, 'request failed'); console.error(err3.message); ``` This prints: request failed: failed to stat "/junk": No such file or directory The idea is that each layer in the stack annotates the error with a description of what it was doing. The end result is a message that explains what happened at each level. You can also decorate Error objects with additional information so that callers can not only handle each kind of error differently, but also construct their own error messages (e.g., to localize them, format them, group them by type, and so on). See the example below. # Deeper dive The two main goals for VError are: * **Make it easy to construct clear, complete error messages intended for people.** Clear error messages greatly improve both user experience and debuggability, so we wanted to make it easy to build them. That's why the constructor takes printf-style arguments. * **Make it easy to construct objects with programmatically-accessible metadata** (which we call _informational properties_). Instead of just saying "connection refused while connecting to 192.168.1.2:80", you can add properties like `"ip": "192.168.1.2"` and `"tcpPort": 80`. This can be used for feeding into monitoring systems, analyzing large numbers of Errors (as from a log file), or localizing error messages. To really make this useful, it also needs to be easy to compose Errors: higher-level code should be able to augment the Errors reported by lower-level code to provide a more complete description of what happened. Instead of saying "connection refused", you can say "operation X failed: connection refused". That's why VError supports `causes`. In order for all this to work, programmers need to know that it's generally safe to wrap lower-level Errors with higher-level ones. If you have existing code that handles Errors produced by a library, you should be able to wrap those Errors with a VError to add information without breaking the error handling code. There are two obvious ways that this could break such consumers: * The error's name might change. 
People typically use `name` to determine what kind of Error they've got. To ensure compatibility, you can create VErrors with custom names, but this approach isn't great because it prevents you from representing complex failures. For this reason, VError provides `findCauseByName`, which essentially asks: does this Error _or any of its causes_ have this specific type? If error handling code uses `findCauseByName`, then subsystems can construct very specific causal chains for debuggability and still let people handle simple cases easily. There's an example below. * The error's properties might change. People often hang additional properties off of Error objects. If we wrap an existing Error in a new Error, those properties would be lost unless we copied them. But there are a variety of both standard and non-standard Error properties that should _not_ be copied in this way: most obviously `name`, `message`, and `stack`, but also `fileName`, `lineNumber`, and a few others. Plus, it's useful for some Error subclasses to have their own private properties -- and there'd be no way to know whether these should be copied. For these reasons, VError first-classes these information properties. You have to provide them in the constructor, you can only fetch them with the `info()` function, and VError takes care of making sure properties from causes wind up in the `info()` output. Let's put this all together with an example from the node-fast RPC library. node-fast implements a simple RPC protocol for Node programs. There's a server and client interface, and clients make RPC requests to servers. Let's say the server fails with an UnauthorizedError with message "user 'bob' is not authorized". The client wraps all server errors with a FastServerError. The client also wraps all request errors with a FastRequestError that includes the name of the RPC call being made. The result of this failed RPC might look like this: name: FastRequestError message: "request failed: server error: user 'bob' is not authorized" rpcMsgid: <unique identifier for this request> rpcMethod: GetObject cause: name: FastServerError message: "server error: user 'bob' is not authorized" cause: name: UnauthorizedError message: "user 'bob' is not authorized" rpcUser: "bob" When the caller uses `VError.info()`, the information properties are collapsed so that it looks like this: message: "request failed: server error: user 'bob' is not authorized" rpcMsgid: <unique identifier for this request> rpcMethod: GetObject rpcUser: "bob" Taking this apart: * The error's message is a complete description of the problem. The caller can report this directly to its caller, which can potentially make its way back to an end user (if appropriate). It can also be logged. * The caller can tell that the request failed on the server, rather than as a result of a client problem (e.g., failure to serialize the request), a transport problem (e.g., failure to connect to the server), or something else (e.g., a timeout). They do this using `findCauseByName('FastServerError')` rather than checking the `name` field directly. * If the caller logs this error, the logs can be analyzed to aggregate errors by cause, by RPC method name, by user, or whatever. Or the error can be correlated with other events for the same rpcMsgid. * It wasn't very hard for any part of the code to contribute to this Error. Each part of the stack has just a few lines to provide exactly what it knows, with very little boilerplate. 
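As a rough sketch of how a chain like the one above could be assembled (the `Fast*` error names and `rpc*` properties come from the example, not from real node-fast code), each layer wraps the cause and contributes its own `info`:

```javascript
var VError = require('verror');

// Lowest level: the server-side failure.
var unauthorized = new VError({
    name: 'UnauthorizedError',
    info: { rpcUser: 'bob' }
}, "user '%s' is not authorized", 'bob');

// The client wraps all server errors.
var serverErr = new VError({
    name: 'FastServerError',
    cause: unauthorized
}, 'server error');

// The client wraps all request errors with RPC details.
var requestErr = new VError({
    name: 'FastRequestError',
    cause: serverErr,
    info: { rpcMethod: 'GetObject' }
}, 'request failed');

console.log(requestErr.message);
// request failed: server error: user 'bob' is not authorized
console.log(VError.info(requestErr));
// { rpcUser: 'bob', rpcMethod: 'GetObject' }
console.log(VError.findCauseByName(requestErr, 'FastServerError') !== null);
// true
```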
It's not expected that you'd use these complex forms all the time. Despite supporting the complex case above, you can still just do: new VError("my service isn't working"); for the simple cases. # Reference: VError, WError, SError VError, WError, and SError are convenient drop-in replacements for `Error` that support printf-style arguments, first-class causes, informational properties, and other useful features. ## Constructors The VError constructor has several forms: ```javascript /* * This is the most general form. You can specify any supported options * (including "cause" and "info") this way. */ new VError(options, sprintf_args...) /* * This is a useful shorthand when the only option you need is "cause". */ new VError(cause, sprintf_args...) /* * This is a useful shorthand when you don't need any options at all. */ new VError(sprintf_args...) ``` All of these forms construct a new VError that behaves just like the built-in JavaScript `Error` class, with some additional methods described below. In the first form, `options` is a plain object with any of the following optional properties: Option name | Type | Meaning ---------------- | ---------------- | ------- `name` | string | Describes what kind of error this is. This is intended for programmatic use to distinguish between different kinds of errors. Note that in modern versions of Node.js, this name is ignored in the `stack` property value, but callers can still use the `name` property to get at it. `cause` | any Error object | Indicates that the new error was caused by `cause`. See `cause()` below. If unspecified, the cause will be `null`. `strict` | boolean | If true, then `null` and `undefined` values in `sprintf_args` are passed through to `sprintf()`. Otherwise, these are replaced with the strings `'null'` and `'undefined'`, respectively. `constructorOpt` | function | If specified, then the stack trace for this error ends at function `constructorOpt`. Functions called by `constructorOpt` will not show up in the stack. This is useful when this class is subclassed. `info` | object | Specifies arbitrary informational properties that are available through the `VError.info(err)` static class method. See that method for details. The second form is equivalent to using the first form with the specified `cause` as the error's cause. This form is distinguished from the first form because the first argument is an Error. The third form is equivalent to using the first form with all default option values. This form is distinguished from the other forms because the first argument is not an object or an Error. The `WError` constructor is used exactly the same way as the `VError` constructor. The `SError` constructor is also used the same way as the `VError` constructor except that in all cases, the `strict` property is overridden to `true`. ## Public properties `VError`, `WError`, and `SError` all provide the same public properties as JavaScript's built-in Error objects. Property name | Type | Meaning ------------- | ------ | ------- `name` | string | Programmatically-usable name of the error. `message` | string | Human-readable summary of the failure. Programmatically-accessible details are provided through the `VError.info(err)` class method. `stack` | string | Human-readable stack trace where the Error was constructed. For all of these classes, the printf-style arguments passed to the constructor are processed with `sprintf()` to form a message. For `WError`, this becomes the complete `message` property.
For `SError` and `VError`, this message is prepended to the message of the cause, if any (with a suitable separator), and the result becomes the `message` property. The `stack` property is managed entirely by the underlying JavaScript implementation. It's generally implemented using a getter function because constructing the human-readable stack trace is somewhat expensive. ## Class methods The following methods are defined on the `VError` class and as exported functions on the `verror` module. They're defined this way rather than using methods on VError instances so that they can be used on Errors not created with `VError`. ### `VError.cause(err)` The `cause()` function returns the next Error in the cause chain for `err`, or `null` if there is no next error. See the `cause` argument to the constructor. Errors can have arbitrarily long cause chains. You can walk the `cause` chain by invoking `VError.cause(err)` on each subsequent return value. If `err` is not a `VError`, the cause is `null`. ### `VError.info(err)` Returns an object with all of the extra error information that's been associated with this Error and all of its causes. These are the properties passed in using the `info` option to the constructor. Properties not specified in the constructor for this Error are implicitly inherited from this error's cause. These properties are intended to provide programmatically-accessible metadata about the error. For an error that indicates a failure to resolve a DNS name, informational properties might include the DNS name to be resolved, or even the list of resolvers used to resolve it. The values of these properties should generally be plain objects (i.e., consisting only of null, undefined, numbers, booleans, strings, and objects and arrays containing only other plain objects). ### `VError.fullStack(err)` Returns a string containing the full stack trace, with all nested errors recursively reported as `'caused by:' + err.stack`. ### `VError.findCauseByName(err, name)` The `findCauseByName()` function traverses the cause chain for `err`, looking for an error whose `name` property matches the passed in `name` value. If no match is found, `null` is returned. If all you want is to know _whether_ there's a cause (and you don't care what it is), you can use `VError.hasCauseWithName(err, name)`. If a vanilla error or a non-VError error is passed in, then there is no cause chain to traverse. In this scenario, the function will check the `name` property of only `err`. ### `VError.hasCauseWithName(err, name)` Returns true if and only if `VError.findCauseByName(err, name)` would return a non-null value. This essentially determines whether `err` has any cause in its cause chain that has name `name`. ### `VError.errorFromList(errors)` Given an array of Error objects (possibly empty), return a single error representing the whole collection of errors. If the list has: * 0 elements, returns `null` * 1 element, returns the sole error * more than 1 element, returns a MultiError referencing the whole list This is useful for cases where an operation may produce any number of errors, and you ultimately want to implement the usual `callback(err)` pattern. You can accumulate the errors in an array and then invoke `callback(VError.errorFromList(errors))` when the operation is complete. ### `VError.errorForEach(err, func)` Convenience function for iterating an error that may itself be a MultiError. In all cases, `err` must be an Error. 
If `err` is a MultiError, then `func` is invoked as `func(errorN)` for each of the underlying errors of the MultiError. If `err` is any other kind of error, `func` is invoked once as `func(err)`. In all cases, `func` is invoked synchronously. This is useful for cases where an operation may produce any number of warnings that may be encapsulated with a MultiError -- but may not be. This function does not iterate an error's cause chain. ## Examples The "Demo" section above covers several basic cases. Here's a more advanced case: ```javascript var err1 = new VError('something bad happened'); /* ... */ var err2 = new VError({ 'name': 'ConnectionError', 'cause': err1, 'info': { 'errno': 'ECONNREFUSED', 'remote_ip': '127.0.0.1', 'port': 215 } }, 'failed to connect to "%s:%d"', '127.0.0.1', 215); console.log(err2.message); console.log(err2.name); console.log(VError.info(err2)); console.log(err2.stack); ``` This outputs: failed to connect to "127.0.0.1:215": something bad happened ConnectionError { errno: 'ECONNREFUSED', remote_ip: '127.0.0.1', port: 215 } ConnectionError: failed to connect to "127.0.0.1:215": something bad happened at Object.<anonymous> (/home/dap/node-verror/examples/info.js:5:12) at Module._compile (module.js:456:26) at Object.Module._extensions..js (module.js:474:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Function.Module.runMain (module.js:497:10) at startup (node.js:119:16) at node.js:935:3 Information properties are inherited up the cause chain, with values at the top of the chain overriding same-named values lower in the chain. To continue that example: ```javascript var err3 = new VError({ 'name': 'RequestError', 'cause': err2, 'info': { 'errno': 'EBADREQUEST' } }, 'request failed'); console.log(err3.message); console.log(err3.name); console.log(VError.info(err3)); console.log(err3.stack); ``` This outputs: request failed: failed to connect to "127.0.0.1:215": something bad happened RequestError { errno: 'EBADREQUEST', remote_ip: '127.0.0.1', port: 215 } RequestError: request failed: failed to connect to "127.0.0.1:215": something bad happened at Object.<anonymous> (/home/dap/node-verror/examples/info.js:20:12) at Module._compile (module.js:456:26) at Object.Module._extensions..js (module.js:474:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Function.Module.runMain (module.js:497:10) at startup (node.js:119:16) at node.js:935:3 You can also print the complete stack trace of combined `Error`s by using `VError.fullStack(err).` ```javascript var err1 = new VError('something bad happened'); /* ... 
*/ var err2 = new VError(err1, 'something really bad happened here'); console.log(VError.fullStack(err2)); ``` This outputs: VError: something really bad happened here: something bad happened at Object.<anonymous> (/home/dap/node-verror/examples/fullStack.js:5:12) at Module._compile (module.js:409:26) at Object.Module._extensions..js (module.js:416:10) at Module.load (module.js:343:32) at Function.Module._load (module.js:300:12) at Function.Module.runMain (module.js:441:10) at startup (node.js:139:18) at node.js:968:3 caused by: VError: something bad happened at Object.<anonymous> (/home/dap/node-verror/examples/fullStack.js:3:12) at Module._compile (module.js:409:26) at Object.Module._extensions..js (module.js:416:10) at Module.load (module.js:343:32) at Function.Module._load (module.js:300:12) at Function.Module.runMain (module.js:441:10) at startup (node.js:139:18) at node.js:968:3 `VError.fullStack` is also safe to use on regular `Error`s, so feel free to use it whenever you need to extract the stack trace from an `Error`, regardless of whether it's a `VError` or not. # Reference: MultiError MultiError is an Error class that represents a group of Errors. This is used when you logically need to provide a single Error, but you want to preserve information about multiple underlying Errors. A common case is when you execute several operations in parallel and some of them fail. MultiErrors are constructed as: ```javascript new MultiError(error_list) ``` `error_list` is an array of at least one `Error` object. The cause of the MultiError is the first error provided. None of the other `VError` options are supported. The `message` for a MultiError consists of the `message` from the first error, prepended with a message indicating that there were other errors. For example: ```javascript err = new MultiError([ new Error('failed to resolve DNS name "abc.example.com"'), new Error('failed to resolve DNS name "def.example.com"'), ]); console.error(err.message); ``` outputs: first of 2 errors: failed to resolve DNS name "abc.example.com" See the convenience function `VError.errorFromList`, which is sometimes simpler to use than this constructor. ## Public methods ### `errors()` Returns an array of the errors used to construct this MultiError. # Contributing See separate [contribution guidelines](CONTRIBUTING.md). [![npm](https://img.shields.io/npm/v/npx.svg)](https://npm.im/npx) [![license](https://img.shields.io/npm/l/npx.svg)](https://npm.im/npx) [![Travis](https://img.shields.io/travis/npm/npx.svg)](https://travis-ci.org/npm/npx) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/npm/npx?svg=true)](https://ci.appveyor.com/project/npm/npx) [![Coverage Status](https://coveralls.io/repos/github/npm/npx/badge.svg?branch=latest)](https://coveralls.io/github/npm/npx?branch=latest) # npx(1) -- execute npm package binaries ## SYNOPSIS `npx [options] <command>[@version] [command-arg]...` `npx [options] [-p|--package <pkg>]... <command> [command-arg]...` `npx [options] -c '<command-string>'` `npx --shell-auto-fallback [shell]` ## INSTALL `npm install -g npx` ## DESCRIPTION Executes `<command>` either from a local `node_modules/.bin`, or from a central cache, installing any packages needed in order for `<command>` to run. By default, `npx` will check whether `<command>` exists in `$PATH`, or in the local project binaries, and execute that. If `<command>` is not found, it will be installed prior to execution.
Unless a `--package` option is specified, `npx` will try to guess the name of the binary to invoke depending on the specifier provided. All package specifiers understood by `npm` may be used with `npx`, including git specifiers, remote tarballs, local directories, or scoped packages. If a full specifier is included, or if `--package` is used, npx will always use a freshly-installed, temporary version of the package. This can also be forced with the `--ignore-existing` flag. * `-p, --package <package>` - define the package to be installed. This defaults to the value of `<command>`. This is only needed for packages with multiple binaries if you want to call one of the other executables, or where the binary name does not match the package name. If this option is provided `<command>` will be executed as-is, without interpreting `@version` if it's there. Multiple `--package` options may be provided, and all the packages specified will be installed. * `--no-install` - If passed to `npx`, it will only try to run `<command>` if it already exists in the current path or in `$prefix/node_modules/.bin`. It won't try to install missing commands. * `--cache <path>` - set the location of the npm cache. Defaults to npm's own cache settings. * `--userconfig <path>` - path to the user configuration file to pass to npm. Defaults to whatever npm's current default is. * `-c <string>` - Execute `<string>` inside an `npm run-script`-like shell environment, with all the usual environment variables available. Only the first item in `<string>` will be automatically used as `<command>`. Any others _must_ use `-p`. * `--shell <string>` - The shell to invoke the command with, if any. * `--shell-auto-fallback [<shell>]` - Generates shell code to override your shell's "command not found" handler with one that calls `npx`. Tries to figure out your shell, or you can pass its name (either `bash`, `fish`, or `zsh`) as an option. See below for how to install. * `--ignore-existing` - If this flag is set, npx will not look in `$PATH`, or in the current package's `node_modules/.bin` for an existing version before deciding whether to install. Binaries in those paths will still be available for execution, but will be shadowed by any packages requested by this install. * `-q, --quiet` - Suppresses any output from npx itself (progress bars, error messages, install reports). Subcommand output itself will not be silenced. * `-n, --node-arg` - Extra node argument to supply to node when binary is a node script. You can supply this option multiple times to add more arguments. * `-v, --version` - Show the current npx version. ## EXAMPLES ### Running a project-local bin ``` $ npm i -D webpack $ npx webpack ... ``` ### One-off invocation without local installation ``` $ npm rm webpack $ npx webpack -- ... $ cat package.json ...webpack not in "devDependencies"... ``` ### Invoking a command from a github repository ``` $ npx github:piuccio/cowsay ...or... $ npx git+ssh://my.hosted.git:cowsay.git#semver:^1 ...etc... ``` ### Execute a full shell command using one npx call w/ multiple packages ``` $ npx -p lolcatjs -p cowsay -c \ 'echo "$npm_package_name@$npm_package_version" | cowsay | lolcatjs' ... _____ < your-cool-package@1.2.3 > ----- \ ^__^ \ (oo)\_______ (__)\ )\/\ ||----w | || || ``` ### Run node binary with --inspect ``` $ npx --node-arg=--inspect cowsay Debugger listening on ws://127.0.0.1:9229/.... ``` ### Specify a node version to run npm scripts (or anything else!)
``` npx -p node@8 npm run build ``` ## SHELL AUTO FALLBACK You can configure `npx` to run as your default fallback command when you type something in the command line with an `@` but the command is not found. This includes installing packages that were not found in the local prefix either. For example: ``` $ npm@4 --version (stderr) npm@4 not found. Trying with npx... 4.6.1 $ asdfasdfasf zsh: command not found: asfdasdfasdf ``` Currently, `zsh`, `bash` (>= 4), and `fish` are supported. You can access these completion scripts using `npx --shell-auto-fallback <shell>`. To install permanently, add the relevant line below to your `~/.bashrc`, `~/.zshrc`, `~/.config/fish/config.fish`, or as needed. To install just for the shell session, simply run the line. You can optionally pass through `--no-install` when generating the fallback to prevent it from installing packages if the command is missing. ### For bash@>=4: ``` $ source <(npx --shell-auto-fallback bash) ``` ### For zsh: ``` $ source <(npx --shell-auto-fallback zsh) ``` ### For fish: ``` $ source (npx --shell-auto-fallback fish | psub) ``` ## ACKNOWLEDGEMENTS Huge thanks to [Kwyn Meagher](https://blog.kwyn.io) for generously donating the package name in the main npm registry. Previously `npx` was used for a Tessel board Neopixels library, which can now be found under [`npx-tessel`](https://npm.im/npx-tessel). ## AUTHOR Written by [Kat Marchan](https://github.com/zkat). ## REPORTING BUGS Please file any relevant issues [on GitHub](https://github.com/npm/npx). ## LICENSE This work is released by its authors into the public domain under CC0-1.0. See `LICENSE.md` for details. ## SEE ALSO * `npm(1)` * `npm-run-script(1)` * `npm-config(7)` # parallel-transform [Transform stream](http://nodejs.org/api/stream.html#stream_class_stream_transform_1) for Node.js that allows you to run your transforms in parallel without changing the order of the output. npm install parallel-transform It is easy to use: ``` js var transform = require('parallel-transform'); var stream = transform(10, function(data, callback) { // 10 is the parallelism level setTimeout(function() { callback(null, data); }, Math.random() * 1000); }); for (var i = 0; i < 10; i++) { stream.write(''+i); } stream.end(); stream.on('data', function(data) { console.log(data); // prints 0,1,2,... }); stream.on('end', function() { console.log('stream has ended'); }); ``` If you run the above example you'll notice that it runs in parallel (does not take ~1 second between each print) and that the order is preserved. ## Stream options All transforms are Node 0.10 streams. By default they are created with the options `{objectMode:true}`. If you want to use your own stream options, pass them as the second parameter: ``` js var stream = transform(10, {objectMode:false}, function(data, callback) { // data is now a buffer callback(null, data); }); fs.createReadStream('filename').pipe(stream).pipe(process.stdout); ``` ### Unordered Passing the option `{ordered:false}` will output the data as soon as it's processed by a transform, without waiting to respect the order.
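For completeness, a small sketch of the unordered mode described above (reusing the timeout-based transform from the earlier example); output order now reflects completion order rather than write order:

```js
var transform = require('parallel-transform');

// ordered:false emits each result as soon as its callback fires.
var stream = transform(10, { ordered: false }, function (data, callback) {
  setTimeout(function () {
    callback(null, data);
  }, Math.random() * 1000);
});

for (var i = 0; i < 10; i++) stream.write('' + i);
stream.end();

stream.on('data', function (data) {
  console.log(data); // e.g. 3, 0, 7, ... — whichever transform finishes first
});
```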
## License MIT smart-buffer [![Build Status](https://travis-ci.org/JoshGlazebrook/smart-buffer.svg?branch=master)](https://travis-ci.org/JoshGlazebrook/smart-buffer) [![Coverage Status](https://coveralls.io/repos/github/JoshGlazebrook/smart-buffer/badge.svg?branch=master)](https://coveralls.io/github/JoshGlazebrook/smart-buffer?branch=master) ============= smart-buffer is a Buffer wrapper that adds automatic read & write offset tracking, string operations, data insertions, and more. ![stats](https://nodei.co/npm/smart-buffer.png?downloads=true&downloadRank=true&stars=true "stats") **Key Features**: * Proxies all of the Buffer write and read functions * Keeps track of read and write offsets automatically * Grows the internal Buffer as needed * Useful string operations. (Null terminating strings) * Allows for inserting values at specific points in the Buffer * Built in TypeScript * Type Definitions Provided * Browser Support (using Webpack/Browserify) * Full test coverage **Requirements**: * Node v4.0+ is supported at this time. (Versions prior to 2.0 will work on node 0.10) ## Breaking Changes in v4.0 * Old constructor patterns have been completely removed. It's now required to use the SmartBuffer.fromXXX() factory constructors. * rewind(), skip(), moveTo() have been removed. (see [offsets](#offsets)) * Internal private properties are now prefixed with underscores (_) * **All** writeXXX() methods that are given an offset will now **overwrite data** instead of insert. (see [write vs insert](#write-vs-insert)) * insertXXX() methods have been added for when you want to insert data at a specific offset (this replaces the old behavior of writeXXX() when an offset was provided) ## Looking for v3 docs? Legacy documentation for version 3 and prior can be found [here](https://github.com/JoshGlazebrook/smart-buffer/blob/master/docs/README_v3.md). ## Installing: `yarn add smart-buffer` or `npm install smart-buffer` Note: The published NPM package includes the built javascript library. If you cloned this repo and wish to build the library manually use: `npm run build` ## Using smart-buffer ```javascript // Javascript const SmartBuffer = require('smart-buffer').SmartBuffer; // Typescript import { SmartBuffer, SmartBufferOptions} from 'smart-buffer'; ``` ### Simple Example Building a packet that uses the following protocol specification: `[PacketType:2][PacketLength:2][Data:XX]` To build this packet using the vanilla Buffer class, you would have to count up the length of the data payload beforehand. You would also need to keep track of the current "cursor" position in your Buffer so you write everything in the right places. With smart-buffer you don't have to do either of those things. ```javascript function createLoginPacket(username, password, age, country) { const packet = new SmartBuffer(); packet.writeUInt16LE(0x0060); // Some packet type packet.writeStringNT(username); packet.writeStringNT(password); packet.writeUInt8(age); packet.writeStringNT(country); packet.insertUInt16LE(packet.length - 2, 2); return packet.toBuffer(); } ``` With the above function, you now can do this: ```javascript const login = createLoginPacket("Josh", "secret123", 22, "United States"); // <Buffer 60 00 1e 00 4a 6f 73 68 00 73 65 63 72 65 74 31 32 33 00 16 55 6e 69 74 65 64 20 53 74 61 74 65 73 00> ``` Notice that the `[PacketLength:2]` value (1e 00) was inserted at position 2. 
Reading back the packet we created above is just as easy: ```javascript const reader = SmartBuffer.fromBuffer(login); const logininfo = { packetType: reader.readUInt16LE(), packetLength: reader.readUInt16LE(), username: reader.readStringNT(), password: reader.readStringNT(), age: reader.readUInt8(), country: reader.readStringNT() }; /* { packetType: 96, (0x0060) packetLength: 30, username: 'Josh', password: 'secret123', age: 22, country: 'United States' } */ ``` ## Write vs Insert In prior versions of SmartBuffer, .writeXXX(value, offset) calls would insert data when an offset was provided. In version 4, this will now overwrite the data at the offset position. To insert data there are now corresponding .insertXXX(value, offset) methods. **SmartBuffer v3**: ```javascript const buff = SmartBuffer.fromBuffer(new Buffer([1,2,3,4,5,6])); buff.writeInt8(7, 2); console.log(buff.toBuffer()) // <Buffer 01 02 07 03 04 05 06> ``` **SmartBuffer v4**: ```javascript const buff = SmartBuffer.fromBuffer(new Buffer([1,2,3,4,5,6])); buff.writeInt8(7, 2); console.log(buff.toBuffer()); // <Buffer 01 02 07 04 05 06> ``` To insert you instead should use: ```javascript const buff = SmartBuffer.fromBuffer(new Buffer([1,2,3,4,5,6])); buff.insertInt8(7, 2); console.log(buff.toBuffer()); // <Buffer 01 02 07 03 04 05 06> ``` **Note:** Insert/Writing to a position beyond the currently tracked internal Buffer will zero pad to your offset. ## Constructing a smart-buffer There are a few different ways to construct a SmartBuffer instance. ```javascript // Creating SmartBuffer from existing Buffer const buff = SmartBuffer.fromBuffer(buffer); // Creates instance from buffer. (Uses default utf8 encoding) const buff = SmartBuffer.fromBuffer(buffer, 'ascii'); // Creates instance from buffer with ascii encoding for strings. // Creating SmartBuffer with specified internal Buffer size. (Note: this is not a hard cap, the internal buffer will grow as needed). const buff = SmartBuffer.fromSize(1024); // Creates instance with internal Buffer size of 1024. const buff = SmartBuffer.fromSize(1024, 'utf8'); // Creates instance with internal Buffer size of 1024, and utf8 encoding for strings. // Creating SmartBuffer with options object. This one specifies size and encoding. const buff = SmartBuffer.fromOptions({ size: 1024, encoding: 'ascii' }); // Creating SmartBuffer with options object. This one specified an existing Buffer. const buff = SmartBuffer.fromOptions({ buff: buffer }); // Creating SmartBuffer from a string. const buff = SmartBuffer.fromBuffer(Buffer.from('some string', 'utf8')); // Just want a regular SmartBuffer with all default options? const buff = new SmartBuffer(); ``` # Api Reference: **Note:** SmartBuffer is fully documented with Typescript definitions as well as jsdocs so your favorite editor/IDE will have intellisense. **Table of Contents** 1. [Constructing](#constructing) 2. **Numbers** 1. [Integers](#integers) 2. [Floating Points](#floating-point-numbers) 3. **Strings** 1. [Strings](#strings) 2. [Null Terminated Strings](#null-terminated-strings) 4. [Buffers](#buffers) 5. [Offsets](#offsets) 6. [Other](#other) ## Constructing ### constructor() ### constructor([options]) - ```options``` *{SmartBufferOptions}* An optional options object to construct a SmartBuffer with. Examples: ```javascript const buff = new SmartBuffer(); const buff = new SmartBuffer({ size: 1024, encoding: 'ascii' }); ``` ### Class Method: fromBuffer(buffer[, encoding]) - ```buffer``` *{Buffer}* The Buffer instance to wrap. 
- ```encoding``` *{string}* The string encoding to use. ```Default: 'utf8'``` Examples: ```javascript const someBuffer = Buffer.from('some string'); const buff = SmartBuffer.fromBuffer(someBuffer); // Defaults to utf8 const buff = SmartBuffer.fromBuffer(someBuffer, 'ascii'); ``` ### Class Method: fromSize(size[, encoding]) - ```size``` *{number}* The size to initialize the internal Buffer. - ```encoding``` *{string}* The string encoding to use. ```Default: 'utf8'``` Examples: ```javascript const buff = SmartBuffer.fromSize(1024); // Defaults to utf8 const buff = SmartBuffer.fromSize(1024, 'ascii'); ``` ### Class Method: fromOptions(options) - ```options``` *{SmartBufferOptions}* The options object to construct a SmartBuffer with. ```typescript interface SmartBufferOptions { encoding?: BufferEncoding; // Defaults to utf8 size?: number; // Defaults to 4096 buff?: Buffer; } ``` Examples: ```javascript const buff = SmartBuffer.fromOptions({ size: 1024 }); const buff = SmartBuffer.fromOptions({ size: 1024, encoding: 'utf8' }); const buff = SmartBuffer.fromOptions({ encoding: 'utf8' }); const someBuff = Buffer.from('some string', 'utf8'); const buff = SmartBuffer.fromOptions({ buffer: someBuff, encoding: 'utf8' }); ``` ## Integers ### buff.readInt8([offset]) - ```offset``` *{number}* Optional position to start reading data from. **Default**: ```Auto managed offset``` - Returns *{number}* Read an Int8 value. ### buff.readInt16BE([offset]) ### buff.readInt16LE([offset]) ### buff.readUInt16BE([offset]) ### buff.readUInt16LE([offset]) - ```offset``` *{number}* Optional position to start reading data from. **Default**: ```Auto managed offset``` - Returns *{number}* Read a 16 bit integer value. ### buff.readInt32BE([offset]) ### buff.readInt32LE([offset]) ### buff.readUInt32BE([offset]) ### buff.readUInt32LE([offset]) - ```offset``` *{number}* Optional position to start reading data from. **Default**: ```Auto managed offset``` - Returns *{number}* Read a 32 bit integer value. ### buff.writeInt8(value[, offset]) ### buff.writeUInt8(value[, offset]) - ```value``` *{number}* The value to write. - ```offset``` *{number}* An optional offset to write this value to. **Default:** ```Auto managed offset``` - Returns *{this}* Write an Int8 value. ### buff.insertInt8(value, offset) ### buff.insertUInt8(value, offset) - ```value``` *{number}* The value to insert. - ```offset``` *{number}* The offset to insert this data at. - Returns *{this}* Insert an Int8 value. ### buff.writeInt16BE(value[, offset]) ### buff.writeInt16LE(value[, offset]) ### buff.writeUInt16BE(value[, offset]) ### buff.writeUInt16LE(value[, offset]) - ```value``` *{number}* The value to write. - ```offset``` *{number}* An optional offset to write this value to. **Default:** ```Auto managed offset``` - Returns *{this}* Write a 16 bit integer value. ### buff.insertInt16BE(value, offset) ### buff.insertInt16LE(value, offset) ### buff.insertUInt16BE(value, offset) ### buff.insertUInt16LE(value, offset) - ```value``` *{number}* The value to insert. - ```offset``` *{number}* The offset to insert this data at. - Returns *{this}* Insert a 16 bit integer value. ### buff.writeInt32BE(value[, offset]) ### buff.writeInt32LE(value[, offset]) ### buff.writeUInt32BE(value[, offset]) ### buff.writeUInt32LE(value[, offset]) - ```value``` *{number}* The value to write. - ```offset``` *{number}* An optional offset to write this value to. **Default:** ```Auto managed offset``` - Returns *{this}* Write a 32 bit integer value.
### buff.insertInt32BE(value, offset) ### buff.insertInt32LE(value, offset) ### buff.insertUInt32BE(value, offset) ### buff.insertUInt32LE(value, offset) - ```value``` *{number}* The value to insert. - ```offset``` *{number}* The offset to insert this data at. - Returns *{this}* Insert a 32 bit integer value. ## Floating Point Numbers ### buff.readFloatBE([offset]) ### buff.readFloatLE([offset]) - ```offset``` *{number}* Optional position to start reading data from. **Default**: ```Auto managed offset``` - Returns *{number}* Read a Float value. ### buff.readDoubleBE([offset]) ### buff.readDoubleLE([offset]) - ```offset``` *{number}* Optional position to start reading data from. **Default**: ```Auto managed offset``` - Returns *{number}* Read a Double value. ### buff.writeFloatBE(value[, offset]) ### buff.writeFloatLE(value[, offset]) - ```value``` *{number}* The value to write. - ```offset``` *{number}* An optional offset to write this value to. **Default:** ```Auto managed offset``` - Returns *{this}* Write a Float value. ### buff.insertFloatBE(value, offset) ### buff.insertFloatLE(value, offset) - ```value``` *{number}* The value to insert. - ```offset``` *{number}* The offset to insert this data at. - Returns *{this}* Insert a Float value. ### buff.writeDoubleBE(value[, offset]) ### buff.writeDoubleLE(value[, offset]) - ```value``` *{number}* The value to write. - ```offset``` *{number}* An optional offset to write this value to. **Default:** ```Auto managed offset``` - Returns *{this}* Write a Double value. ### buff.insertDoubleBE(value, offset) ### buff.insertDoubleLE(value, offset) - ```value``` *{number}* The value to insert. - ```offset``` *{number}* The offset to insert this data at. - Returns *{this}* Insert a Double value. ## Strings ### buff.readString() ### buff.readString(size[, encoding]) ### buff.readString(encoding) - ```size``` *{number}* The number of bytes to read. **Default:** ```Reads to the end of the Buffer.``` - ```encoding``` *{string}* The string encoding to use. **Default:** ```utf8```. Read a string value. Examples: ```javascript const buff = SmartBuffer.fromBuffer(Buffer.from('hello there', 'utf8')); buff.readString(); // 'hello there' buff.readString(2); // 'he' buff.readString(2, 'utf8'); // 'he' buff.readString('utf8'); // 'hello there' ``` ### buff.writeString(value) ### buff.writeString(value[, offset]) ### buff.writeString(value[, encoding]) ### buff.writeString(value[, offset[, encoding]]) - ```value``` *{string}* The string value to write. - ```offset``` *{number}* The offset to write this value to. **Default:** ```Auto managed offset``` - ```encoding``` *{string}* An optional string encoding to use. **Default:** ```utf8``` Write a string value. Examples: ```javascript buff.writeString('hello'); // Auto managed offset buff.writeString('hello', 2); buff.writeString('hello', 'utf8') // Auto managed offset buff.writeString('hello', 2, 'utf8'); ``` ### buff.insertString(value, offset[, encoding]) - ```value``` *{string}* The string value to write. - ```offset``` *{number}* The offset to write this value to. - ```encoding``` *{string}* An optional string encoding to use. **Default:** ```utf8``` Insert a string value. Examples: ```javascript buff.insertString('hello', 2); buff.insertString('hello', 2, 'utf8'); ``` ## Null Terminated Strings ### buff.readStringNT() ### buff.readStringNT(encoding) - ```encoding``` *{string}* The string encoding to use. **Default:** ```utf8```. Read a null terminated string value.
(If a null is not found, it will read to the end of the Buffer). Examples: ```javascript const buff = SmartBuffer.fromBuffer(Buffer.from('hello\0 there', 'utf8')); buff.readStringNT(); // 'hello' // If we called this again: buff.readStringNT(); // ' there' ``` ### buff.writeStringNT(value) ### buff.writeStringNT(value[, offset]) ### buff.writeStringNT(value[, encoding]) ### buff.writeStringNT(value[, offset[, encoding]]) - ```value``` *{string}* The string value to write. - ```offset``` *{number}* The offset to write this value to. **Default:** ```Auto managed offset``` - ```encoding``` *{string}* An optional string encoding to use. **Default:** ```utf8``` Write a null terminated string value. Examples: ```javascript buff.writeStringNT('hello'); // Auto managed offset <Buffer 68 65 6c 6c 6f 00> buff.writeStringNT('hello', 2); // <Buffer 00 00 68 65 6c 6c 6f 00> buff.writeStringNT('hello', 'utf8') // Auto managed offset buff.writeStringNT('hello', 2, 'utf8'); ``` ### buff.insertStringNT(value, offset[, encoding]) - ```value``` *{string}* The string value to write. - ```offset``` *{number}* The offset to write this value to. - ```encoding``` *{string}* An optional string encoding to use. **Default:** ```utf8``` Insert a null terminated string value. Examples: ```javascript buff.insertStringNT('hello', 2); buff.insertStringNT('hello', 2, 'utf8'); ``` ## Buffers ### buff.readBuffer([length]) - ```length``` *{number}* The number of bytes to read into a Buffer. **Default:** ```Reads to the end of the Buffer``` Read a Buffer of a specified size. ### buff.writeBuffer(value[, offset]) - ```value``` *{Buffer}* The buffer value to write. - ```offset``` *{number}* An optional offset to write the value to. **Default:** ```Auto managed offset``` ### buff.insertBuffer(value, offset) - ```value``` *{Buffer}* The buffer value to write. - ```offset``` *{number}* The offset to write the value to. ### buff.readBufferNT() Read a null terminated Buffer. ### buff.writeBufferNT(value[, offset]) - ```value``` *{Buffer}* The buffer value to write. - ```offset``` *{number}* An optional offset to write the value to. **Default:** ```Auto managed offset``` Write a null terminated Buffer. ### buff.insertBufferNT(value, offset) - ```value``` *{Buffer}* The buffer value to write. - ```offset``` *{number}* The offset to write the value to. Insert a null terminated Buffer. ## Offsets ### buff.readOffset ### buff.readOffset(offset) - ```offset``` *{number}* The new read offset value to set. - Returns: ```The current read offset``` Gets or sets the current read offset. Examples: ```javascript const currentOffset = buff.readOffset; // 5 buff.readOffset = 10; console.log(buff.readOffset) // 10 ``` ### buff.writeOffset ### buff.writeOffset(offset) - ```offset``` *{number}* The new write offset value to set. - Returns: ```The current write offset``` Gets or sets the current write offset. Examples: ```javascript const currentOffset = buff.writeOffset; // 5 buff.writeOffset = 10; console.log(buff.writeOffset) // 10 ``` ### buff.encoding ### buff.encoding(encoding) - ```encoding``` *{string}* The new string encoding to set. - Returns: ```The current string encoding``` Gets or sets the current string encoding. Examples: ```javascript const currentEncoding = buff.encoding; // 'utf8' buff.encoding = 'ascii'; console.log(buff.encoding) // 'ascii' ``` ## Other ### buff.clear() Clear and resets the SmartBuffer instance. ### buff.remaining() - Returns ```Remaining data left to be read``` Gets the number of remaining bytes to be read. 
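Tying the numeric, offset, and `remaining()` APIs above together, a small illustrative sketch (the values written are arbitrary):

```javascript
const { SmartBuffer } = require('smart-buffer');

const buff = new SmartBuffer();
buff.writeUInt16LE(0x0060);   // write offset advances automatically
buff.writeUInt32LE(123456);
buff.writeDoubleLE(3.14);

console.log(buff.writeOffset);    // 14 (2 + 4 + 8 bytes written)
console.log(buff.readUInt16LE()); // 96 (0x0060)
console.log(buff.readUInt32LE()); // 123456
console.log(buff.remaining());    // 8 (the double has not been read yet)
console.log(buff.readDoubleLE()); // 3.14
```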
### buff.internalBuffer - Returns: *{Buffer}* Gets the internally managed Buffer (Includes unmanaged data). Examples: ```javascript const buff = SmartBuffer.fromSize(16); buff.writeString('hello'); console.log(buff.internalBuffer); // <Buffer 68 65 6c 6c 6f 00 00 00 00 00 00 00 00 00 00 00> ``` ### buff.toBuffer() - Returns: *{Buffer}* Gets a sliced Buffer instance of the internally managed Buffer. (Only includes managed data) Examples: ```javascript const buff = SmartBuffer.fromSize(16); buff.writeString('hello'); console.log(buff.toBuffer()); // <Buffer 68 65 6c 6c 6f> ``` ### buff.toString([encoding]) - ```encoding``` *{string}* The string encoding to use when converting to a string. **Default:** ```utf8``` - Returns *{string}* Gets a string representation of all data in the SmartBuffer. ### buff.destroy() Destroys the SmartBuffer instance. ## License This work is licensed under the [MIT license](http://en.wikipedia.org/wiki/MIT_License). Deep Extend =========== Recursive object extending. [![Build Status](https://api.travis-ci.org/unclechu/node-deep-extend.svg?branch=master)](https://travis-ci.org/unclechu/node-deep-extend) [![NPM](https://nodei.co/npm/deep-extend.png?downloads=true&downloadRank=true&stars=true)](https://nodei.co/npm/deep-extend/) Install ------- ```bash $ npm install deep-extend ``` Usage ----- ```javascript var deepExtend = require('deep-extend'); var obj1 = { a: 1, b: 2, d: { a: 1, b: [], c: { test1: 123, test2: 321 } }, f: 5, g: 123, i: 321, j: [1, 2] }; var obj2 = { b: 3, c: 5, d: { b: { first: 'one', second: 'two' }, c: { test2: 222 } }, e: { one: 1, two: 2 }, f: [], g: (void 0), h: /abc/g, i: null, j: [3, 4] }; deepExtend(obj1, obj2); console.log(obj1); /* { a: 1, b: 3, d: { a: 1, b: { first: 'one', second: 'two' }, c: { test1: 123, test2: 222 } }, f: [], g: undefined, c: 5, e: { one: 1, two: 2 }, h: /abc/g, i: null, j: [3, 4] } */ ``` Unit testing ------------ ```bash $ npm test ``` Changelog --------- [CHANGELOG.md](./CHANGELOG.md) Any issues? ----------- Please report issues [here](https://github.com/unclechu/node-deep-extend/issues). License ------- [MIT](./LICENSE) # defaults A simple one level options merge utility ## install `npm install defaults` ## use ```javascript var defaults = require('defaults'); var handle = function(options, fn) { options = defaults(options, { timeout: 100 }); setTimeout(function() { fn(options); }, options.timeout); } handle({ timeout: 1000 }, function() { // we're here 1000 ms later }); handle({ timeout: 10000 }, function() { // we're here 10s later }); ``` ## summary this module exports a function that takes 2 arguments: `options` and `defaults`. When called, it overrides all `undefined` properties in `options` with clones of the properties defined in `defaults`. Side cases: if called with a falsy `options` value, options will be initialized to a new object before being merged onto. ## license [MIT](LICENSE) # npm-init An initter you init wit, innit? ## More stuff here Blerp derp herp lerg borgle pop munch efemerate baz foo a gandt synergy jorka chatt slurm. # emoji-regex [![Build status](https://travis-ci.org/mathiasbynens/emoji-regex.svg?branch=master)](https://travis-ci.org/mathiasbynens/emoji-regex) _emoji-regex_ offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard. This repository contains a script that generates this regular expression based on [the data from Unicode Technical Report #51](https://github.com/mathiasbynens/unicode-tr51).
Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard. ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install emoji-regex ``` In [Node.js](https://nodejs.org/): ```js const emojiRegex = require('emoji-regex'); // Note: because the regular expression has the global flag set, this module // exports a function that returns the regex rather than exporting the regular // expression itself, to make it impossible to (accidentally) mutate the // original regular expression. const text = ` \u{231A}: ⌚ default emoji presentation character (Emoji_Presentation) \u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji \u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base) \u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier `; const regex = emojiRegex(); let match; while (match = regex.exec(text)) { const emoji = match[0]; console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`); } ``` Console output: ``` Matched sequence ⌚ — code points: 1 Matched sequence ⌚ — code points: 1 Matched sequence ↔️ — code points: 2 Matched sequence ↔️ — code points: 2 Matched sequence 👩 — code points: 1 Matched sequence 👩 — code points: 1 Matched sequence 👩🏿 — code points: 2 Matched sequence 👩🏿 — code points: 2 ``` To match emoji in their textual representation as well (i.e. emoji that are not `Emoji_Presentation` symbols and that aren’t forced to render as emoji by a variation selector), `require` the other regex: ```js const emojiRegex = require('emoji-regex/text.js'); ``` Additionally, in environments which support ES2015 Unicode escapes, you may `require` ES2015-style versions of the regexes: ```js const emojiRegex = require('emoji-regex/es2015/index.js'); const emojiRegexText = require('emoji-regex/es2015/text.js'); ``` ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License _emoji-regex_ is available under the [MIT](https://mths.be/mit) license. # assert-plus This library is a super small wrapper over node's assert module that has two things: (1) the ability to disable assertions with the environment variable NODE\_NDEBUG, and (2) some API wrappers for argument testing. Like `assert.string(myArg, 'myArg')`. As a simple example, most of my code looks like this: ```javascript var assert = require('assert-plus'); function fooAccount(options, callback) { assert.object(options, 'options'); assert.number(options.id, 'options.id'); assert.bool(options.isManager, 'options.isManager'); assert.string(options.name, 'options.name'); assert.arrayOfString(options.email, 'options.email'); assert.func(callback, 'callback'); // Do stuff callback(null, {}); } ``` # API All methods that *aren't* part of node's core assert API are simply assumed to take an argument, and then a string 'name' that's not a message; `AssertionError` will be thrown if the assertion fails with a message like: AssertionError: foo (string) is required at test (/home/mark/work/foo/foo.js:3:9) at Object.<anonymous> (/home/mark/work/foo/foo.js:15:1) at Module._compile (module.js:446:26) at Object..js (module.js:464:10) at Module.load (module.js:353:31) at Function._load (module.js:311:12) at Array.0 (module.js:484:10) at EventEmitter._tickCallback (node.js:190:38) from: ```javascript function test(foo) { assert.string(foo, 'foo'); } ``` There you go. 
You can check that arrays are of a homogeneous type with `Arrayof$Type`: ```javascript function test(foo) { assert.arrayOfString(foo, 'foo'); } ``` You can assert IFF an argument is not `undefined` (i.e., an optional arg): ```javascript assert.optionalString(foo, 'foo'); ``` Lastly, you can opt-out of assertion checking altogether by setting the environment variable `NODE_NDEBUG=1`. This is pseudo-useful if you have lots of assertions, and don't want to pay `typeof ()` taxes to v8 in production. Be advised: The standard functions re-exported from `assert` are also disabled in assert-plus if NDEBUG is specified. Using them directly from the `assert` module avoids this behavior. The complete list of APIs is: * assert.array * assert.bool * assert.buffer * assert.func * assert.number * assert.finite * assert.object * assert.string * assert.stream * assert.date * assert.regexp * assert.uuid * assert.arrayOfArray * assert.arrayOfBool * assert.arrayOfBuffer * assert.arrayOfFunc * assert.arrayOfNumber * assert.arrayOfFinite * assert.arrayOfObject * assert.arrayOfString * assert.arrayOfStream * assert.arrayOfDate * assert.arrayOfRegexp * assert.arrayOfUuid * assert.optionalArray * assert.optionalBool * assert.optionalBuffer * assert.optionalFunc * assert.optionalNumber * assert.optionalFinite * assert.optionalObject * assert.optionalString * assert.optionalStream * assert.optionalDate * assert.optionalRegexp * assert.optionalUuid * assert.optionalArrayOfArray * assert.optionalArrayOfBool * assert.optionalArrayOfBuffer * assert.optionalArrayOfFunc * assert.optionalArrayOfNumber * assert.optionalArrayOfFinite * assert.optionalArrayOfObject * assert.optionalArrayOfString * assert.optionalArrayOfStream * assert.optionalArrayOfDate * assert.optionalArrayOfRegexp * assert.optionalArrayOfUuid * assert.AssertionError * assert.fail * assert.ok * assert.equal * assert.notEqual * assert.deepEqual * assert.notDeepEqual * assert.strictEqual * assert.notStrictEqual * assert.throws * assert.doesNotThrow * assert.ifError # Installation npm install assert-plus ## License The MIT License (MIT) Copyright (c) 2012 Mark Cavage Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ## Bugs See <https://github.com/mcavage/node-assert-plus/issues>. # iferr Higher-order functions for easier error handling. `if (err) return cb(err);` be gone! 
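Conceptually, `iferr(errorHandler, successHandler)` just returns a Node-style callback that forwards errors to the first function and results to the second; a rough sketch of the idea (not the module's actual source):

```js
// Roughly what iferr does: route the error to one handler and
// the successful results to the other.
function iferr(errorHandler, successHandler) {
  return function (err) {
    if (err) return errorHandler(err);
    successHandler.apply(null, Array.prototype.slice.call(arguments, 1));
  };
}
```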
## Install ```bash npm install iferr ``` ## Use ### JavaScript/ES6 example ```js var iferr = require('iferr'); function get_friends_count(id, cb) { User.load_user(id, iferr(cb, user => user.load_friends(iferr(cb, friends => cb(null, friends.length) )) )) } ``` ### JavaScript/ES5 example ```js var iferr = require('iferr'); function get_friends_count(id, cb) { User.load_user(id, iferr(cb, function(user) { user.load_friends(iferr(cb, function(friends) { cb(null, friends.length) })) })) } ``` ### CoffeeScript example ```coffee iferr = require 'iferr' get_friends_count = (id, cb) -> User.load_user id, iferr cb, (user) -> user.load_friends iferr cb, (friends) -> cb null, friends.length ``` (TODO: document tiferr, throwerr and printerr) ## License MIT sshpk ========= Parse, convert, fingerprint and use SSH keys (both public and private) in pure node -- no `ssh-keygen` or other external dependencies. Supports RSA, DSA, ECDSA (nistp-\*) and ED25519 key types, in PEM (PKCS#1, PKCS#8) and OpenSSH formats. This library has been extracted from [`node-http-signature`](https://github.com/joyent/node-http-signature) (work by [Mark Cavage](https://github.com/mcavage) and [Dave Eddy](https://github.com/bahamas10)) and [`node-ssh-fingerprint`](https://github.com/bahamas10/node-ssh-fingerprint) (work by Dave Eddy), with additions (including ECDSA support) by [Alex Wilson](https://github.com/arekinath). Install ------- ``` npm install sshpk ``` Examples -------- ```js var sshpk = require('sshpk'); var fs = require('fs'); /* Read in an OpenSSH-format public key */ var keyPub = fs.readFileSync('id_rsa.pub'); var key = sshpk.parseKey(keyPub, 'ssh'); /* Get metadata about the key */ console.log('type => %s', key.type); console.log('size => %d bits', key.size); console.log('comment => %s', key.comment); /* Compute key fingerprints, in new OpenSSH (>6.7) format, and old MD5 */ console.log('fingerprint => %s', key.fingerprint().toString()); console.log('old-style fingerprint => %s', key.fingerprint('md5').toString()); ``` Example output: ``` type => rsa size => 2048 bits comment => foo@foo.com fingerprint => SHA256:PYC9kPVC6J873CSIbfp0LwYeczP/W4ffObNCuDJ1u5w old-style fingerprint => a0:c8:ad:6c:32:9a:32:fa:59:cc:a9:8c:0a:0d:6e:bd ``` More examples: converting between formats: ```js /* Read in a PEM public key */ var keyPem = fs.readFileSync('id_rsa.pem'); var key = sshpk.parseKey(keyPem, 'pem'); /* Convert to PEM PKCS#8 public key format */ var pemBuf = key.toBuffer('pkcs8'); /* Convert to SSH public key format (and return as a string) */ var sshKey = key.toString('ssh'); ``` Signing and verifying: ```js /* Read in an OpenSSH/PEM *private* key */ var keyPriv = fs.readFileSync('id_ecdsa'); var key = sshpk.parsePrivateKey(keyPriv, 'pem'); var data = 'some data'; /* Sign some data with the key */ var s = key.createSign('sha1'); s.update(data); var signature = s.sign(); /* Now load the public key (could also use just key.toPublic()) */ var keyPub = fs.readFileSync('id_ecdsa.pub'); key = sshpk.parseKey(keyPub, 'ssh'); /* Make a crypto.Verifier with this key */ var v = key.createVerify('sha1'); v.update(data); var valid = v.verify(signature); /* => true! 
*/
```

Matching fingerprints with keys:

```js
var fp = sshpk.parseFingerprint('SHA256:PYC9kPVC6J873CSIbfp0LwYeczP/W4ffObNCuDJ1u5w');

var keys = [sshpk.parseKey(...), sshpk.parseKey(...), ...];

keys.forEach(function (key) {
  if (fp.matches(key))
    console.log('found it!');
});
```

Usage
-----

## Public keys

### `parseKey(data[, format = 'auto'[, options]])`

Parses a key from a given data format and returns a new `Key` object.

Parameters

- `data` -- Either a Buffer or String, containing the key
- `format` -- String name of format to use, valid options are:
  - `auto`: choose automatically from all below
  - `pem`: supports both PKCS#1 and PKCS#8
  - `ssh`: standard OpenSSH format
  - `pkcs1`, `pkcs8`: variants of `pem`
  - `rfc4253`: raw OpenSSH wire format
  - `openssh`: new post-OpenSSH 6.5 internal format, produced by `ssh-keygen -o`
- `options` -- Optional Object, extra options, with keys:
  - `filename` -- Optional String, name for the key being parsed (eg. the filename that was opened). Used to generate Error messages
  - `passphrase` -- Optional String, encryption passphrase used to decrypt an encrypted PEM file

### `Key.isKey(obj)`

Returns `true` if the given object is a valid `Key` object created by a version of `sshpk` compatible with this one.

Parameters

- `obj` -- Object to identify

### `Key#type`

String, the type of key. Valid options are `rsa`, `dsa`, `ecdsa`.

### `Key#size`

Integer, "size" of the key in bits. For RSA/DSA this is the size of the modulus; for ECDSA this is the bit size of the curve in use.

### `Key#comment`

Optional string, a key comment used by some formats (eg the `ssh` format).

### `Key#curve`

Only present if `this.type === 'ecdsa'`, string containing the name of the named curve used with this key. Possible values include `nistp256`, `nistp384` and `nistp521`.

### `Key#toBuffer([format = 'ssh'])`

Convert the key into a given data format and return the serialized key as a Buffer.

Parameters

- `format` -- String name of format to use, for valid options see `parseKey()`

### `Key#toString([format = 'ssh'])`

Same as `this.toBuffer(format).toString()`.

### `Key#fingerprint([algorithm = 'sha256'])`

Creates a new `Fingerprint` object representing this Key's fingerprint.

Parameters

- `algorithm` -- String name of hash algorithm to use, valid options are `md5`, `sha1`, `sha256`, `sha384`, `sha512`

### `Key#createVerify([hashAlgorithm])`

Creates a `crypto.Verifier` specialized to use this Key (and the correct public key algorithm to match it). The returned Verifier has the same API as a regular one, except that the `verify()` function takes only the target signature as an argument.

Parameters

- `hashAlgorithm` -- optional String name of hash algorithm to use, any supported by OpenSSL are valid, usually including `sha1`, `sha256`.

`v.verify(signature[, format])`

Parameters

- `signature` -- either a Signature object, or a Buffer or String
- `format` -- optional String, name of format to interpret given String with. Not valid if `signature` is a Signature or Buffer.

### `Key#createDiffieHellman()`
### `Key#createDH()`

Creates a Diffie-Hellman key exchange object initialized with this key and all necessary parameters. This has the same API as a `crypto.DiffieHellman` instance, except that functions take `Key` and `PrivateKey` objects as arguments, and return them where appropriate.

This is only valid for keys belonging to a cryptosystem that supports DHE or a close analogue (i.e. `dsa`, `ecdsa` and `curve25519` keys). An attempt to call this function on other keys will yield an `Error`.
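Putting the public-key pieces above together, here is a minimal verification sketch. It is illustrative only: the file name, the data, and the base64 signature string are placeholders you would supply yourself.

```js
var fs = require('fs');
var sshpk = require('sshpk');

/* Hypothetical inputs */
var key = sshpk.parseKey(fs.readFileSync('id_ecdsa.pub'), 'ssh');
var data = 'some data that was signed';
var sigBase64 = '...';   /* base64 signature in SSH wire format */

var v = key.createVerify('sha256');
v.update(data);
/* the second argument tells verify() how to interpret the given String */
console.log('valid => %s', v.verify(sigBase64, 'ssh'));
```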
## Private keys ### `parsePrivateKey(data[, format = 'auto'[, options]])` Parses a private key from a given data format and returns a new `PrivateKey` object. Parameters - `data` -- Either a Buffer or String, containing the key - `format` -- String name of format to use, valid options are: - `auto`: choose automatically from all below - `pem`: supports both PKCS#1 and PKCS#8 - `ssh`, `openssh`: new post-OpenSSH 6.5 internal format, produced by `ssh-keygen -o` - `pkcs1`, `pkcs8`: variants of `pem` - `rfc4253`: raw OpenSSH wire format - `options` -- Optional Object, extra options, with keys: - `filename` -- Optional String, name for the key being parsed (eg. the filename that was opened). Used to generate Error messages - `passphrase` -- Optional String, encryption passphrase used to decrypt an encrypted PEM file ### `generatePrivateKey(type[, options])` Generates a new private key of a certain key type, from random data. Parameters - `type` -- String, type of key to generate. Currently supported are `'ecdsa'` and `'ed25519'` - `options` -- optional Object, with keys: - `curve` -- optional String, for `'ecdsa'` keys, specifies the curve to use. If ECDSA is specified and this option is not given, defaults to using `'nistp256'`. ### `PrivateKey.isPrivateKey(obj)` Returns `true` if the given object is a valid `PrivateKey` object created by a version of `sshpk` compatible with this one. Parameters - `obj` -- Object to identify ### `PrivateKey#type` String, the type of key. Valid options are `rsa`, `dsa`, `ecdsa`. ### `PrivateKey#size` Integer, "size" of the key in bits. For RSA/DSA this is the size of the modulus; for ECDSA this is the bit size of the curve in use. ### `PrivateKey#curve` Only present if `this.type === 'ecdsa'`, string containing the name of the named curve used with this key. Possible values include `nistp256`, `nistp384` and `nistp521`. ### `PrivateKey#toBuffer([format = 'pkcs1'])` Convert the key into a given data format and return the serialized key as a Buffer. Parameters - `format` -- String name of format to use, valid options are listed under `parsePrivateKey`. Note that ED25519 keys default to `openssh` format instead (as they have no `pkcs1` representation). ### `PrivateKey#toString([format = 'pkcs1'])` Same as `this.toBuffer(format).toString()`. ### `PrivateKey#toPublic()` Extract just the public part of this private key, and return it as a `Key` object. ### `PrivateKey#fingerprint([algorithm = 'sha256'])` Same as `this.toPublic().fingerprint()`. ### `PrivateKey#createVerify([hashAlgorithm])` Same as `this.toPublic().createVerify()`. ### `PrivateKey#createSign([hashAlgorithm])` Creates a `crypto.Sign` specialized to use this PrivateKey (and the correct key algorithm to match it). The returned Signer has the same API as a regular one, except that the `sign()` function takes no arguments, and returns a `Signature` object. Parameters - `hashAlgorithm` -- optional String name of hash algorithm to use, any supported by OpenSSL are valid, usually including `sha1`, `sha256`. `v.sign()` Parameters - none ### `PrivateKey#derive(newType)` Derives a related key of type `newType` from this key. Currently this is only supported to change between `ed25519` and `curve25519` keys which are stored with the same private key (but usually distinct public keys in order to avoid degenerate keys that lead to a weak Diffie-Hellman exchange). 
Parameters - `newType` -- String, type of key to derive, either `ed25519` or `curve25519` ## Fingerprints ### `parseFingerprint(fingerprint[, algorithms])` Pre-parses a fingerprint, creating a `Fingerprint` object that can be used to quickly locate a key by using the `Fingerprint#matches` function. Parameters - `fingerprint` -- String, the fingerprint value, in any supported format - `algorithms` -- Optional list of strings, names of hash algorithms to limit support to. If `fingerprint` uses a hash algorithm not on this list, throws `InvalidAlgorithmError`. ### `Fingerprint.isFingerprint(obj)` Returns `true` if the given object is a valid `Fingerprint` object created by a version of `sshpk` compatible with this one. Parameters - `obj` -- Object to identify ### `Fingerprint#toString([format])` Returns a fingerprint as a string, in the given format. Parameters - `format` -- Optional String, format to use, valid options are `hex` and `base64`. If this `Fingerprint` uses the `md5` algorithm, the default format is `hex`. Otherwise, the default is `base64`. ### `Fingerprint#matches(key)` Verifies whether or not this `Fingerprint` matches a given `Key`. This function uses double-hashing to avoid leaking timing information. Returns a boolean. Parameters - `key` -- a `Key` object, the key to match this fingerprint against ## Signatures ### `parseSignature(signature, algorithm, format)` Parses a signature in a given format, creating a `Signature` object. Useful for converting between the SSH and ASN.1 (PKCS/OpenSSL) signature formats, and also returned as output from `PrivateKey#createSign().sign()`. A Signature object can also be passed to a verifier produced by `Key#createVerify()` and it will automatically be converted internally into the correct format for verification. Parameters - `signature` -- a Buffer (binary) or String (base64), data of the actual signature in the given format - `algorithm` -- a String, name of the algorithm to be used, possible values are `rsa`, `dsa`, `ecdsa` - `format` -- a String, either `asn1` or `ssh` ### `Signature.isSignature(obj)` Returns `true` if the given object is a valid `Signature` object created by a version of `sshpk` compatible with this one. Parameters - `obj` -- Object to identify ### `Signature#toBuffer([format = 'asn1'])` Converts a Signature to the given format and returns it as a Buffer. Parameters - `format` -- a String, either `asn1` or `ssh` ### `Signature#toString([format = 'asn1'])` Same as `this.toBuffer(format).toString('base64')`. ## Certificates `sshpk` includes basic support for parsing certificates in X.509 (PEM) format and the OpenSSH certificate format. This feature is intended to be used mainly to access basic metadata about certificates, extract public keys from them, and also to generate simple self-signed certificates from an existing key. Notably, there is no implementation of CA chain-of-trust verification, and only very minimal support for key usage restrictions. Please do the security world a favour, and DO NOT use this code for certificate verification in the traditional X.509 CA chain style. 
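For orientation, here is a small sketch of reading certificate metadata using the APIs documented in the sections that follow. The file name is a placeholder, and we assume an X.509 certificate in a PEM wrapper:

```js
var fs = require('fs');
var sshpk = require('sshpk');

/* Hypothetical file: an X.509 certificate in PEM format */
var cert = sshpk.parseCertificate(fs.readFileSync('server-cert.pem'), 'pem');

console.log('subject => %s', cert.subjects[0].toString());
console.log('issuer  => %s', cert.issuer.toString());
console.log('expired => %s', cert.isExpired());

/* The subject's public key is an ordinary sshpk Key object */
console.log('key fingerprint => %s', cert.subjectKey.fingerprint().toString());
```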
### `parseCertificate(data, format)` Parameters - `data` -- a Buffer or String - `format` -- a String, format to use, one of `'openssh'`, `'pem'` (X.509 in a PEM wrapper), or `'x509'` (raw DER encoded) ### `createSelfSignedCertificate(subject, privateKey[, options])` Parameters - `subject` -- an Identity, the subject of the certificate - `privateKey` -- a PrivateKey, the key of the subject: will be used both to be placed in the certificate and also to sign it (since this is a self-signed certificate) - `options` -- optional Object, with keys: - `lifetime` -- optional Number, lifetime of the certificate from now in seconds - `validFrom`, `validUntil` -- optional Dates, beginning and end of certificate validity period. If given `lifetime` will be ignored - `serial` -- optional Buffer, the serial number of the certificate - `purposes` -- optional Array of String, X.509 key usage restrictions ### `createCertificate(subject, key, issuer, issuerKey[, options])` Parameters - `subject` -- an Identity, the subject of the certificate - `key` -- a Key, the public key of the subject - `issuer` -- an Identity, the issuer of the certificate who will sign it - `issuerKey` -- a PrivateKey, the issuer's private key for signing - `options` -- optional Object, with keys: - `lifetime` -- optional Number, lifetime of the certificate from now in seconds - `validFrom`, `validUntil` -- optional Dates, beginning and end of certificate validity period. If given `lifetime` will be ignored - `serial` -- optional Buffer, the serial number of the certificate - `purposes` -- optional Array of String, X.509 key usage restrictions ### `Certificate#subjects` Array of `Identity` instances describing the subject of this certificate. ### `Certificate#issuer` The `Identity` of the Certificate's issuer (signer). ### `Certificate#subjectKey` The public key of the subject of the certificate, as a `Key` instance. ### `Certificate#issuerKey` The public key of the signing issuer of this certificate, as a `Key` instance. May be `undefined` if the issuer's key is unknown (e.g. on an X509 certificate). ### `Certificate#serial` The serial number of the certificate. As this is normally a 64-bit or wider integer, it is returned as a Buffer. ### `Certificate#purposes` Array of Strings indicating the X.509 key usage purposes that this certificate is valid for. The possible strings at the moment are: * `'signature'` -- key can be used for digital signatures * `'identity'` -- key can be used to attest about the identity of the signer (X.509 calls this `nonRepudiation`) * `'codeSigning'` -- key can be used to sign executable code * `'keyEncryption'` -- key can be used to encrypt other keys * `'encryption'` -- key can be used to encrypt data (only applies for RSA) * `'keyAgreement'` -- key can be used for key exchange protocols such as Diffie-Hellman * `'ca'` -- key can be used to sign other certificates (is a Certificate Authority) * `'crl'` -- key can be used to sign Certificate Revocation Lists (CRLs) ### `Certificate#isExpired([when])` Tests whether the Certificate is currently expired (i.e. the `validFrom` and `validUntil` dates specify a range of time that does not include the current time). Parameters - `when` -- optional Date, if specified, tests whether the Certificate was or will be expired at the specified time instead of now Returns a Boolean. ### `Certificate#isSignedByKey(key)` Tests whether the Certificate was validly signed by the given (public) Key. Parameters - `key` -- a Key instance Returns a Boolean. 
### `Certificate#isSignedBy(certificate)` Tests whether this Certificate was validly signed by the subject of the given certificate. Also tests that the issuer Identity of this Certificate and the subject Identity of the other Certificate are equivalent. Parameters - `certificate` -- another Certificate instance Returns a Boolean. ### `Certificate#fingerprint([hashAlgo])` Returns the X509-style fingerprint of the entire certificate (as a Fingerprint instance). This matches what a web-browser or similar would display as the certificate fingerprint and should not be confused with the fingerprint of the subject's public key. Parameters - `hashAlgo` -- an optional String, any hash function name ### `Certificate#toBuffer([format])` Serializes the Certificate to a Buffer and returns it. Parameters - `format` -- an optional String, output format, one of `'openssh'`, `'pem'` or `'x509'`. Defaults to `'x509'`. Returns a Buffer. ### `Certificate#toString([format])` - `format` -- an optional String, output format, one of `'openssh'`, `'pem'` or `'x509'`. Defaults to `'pem'`. Returns a String. ## Certificate identities ### `identityForHost(hostname)` Constructs a host-type Identity for a given hostname. Parameters - `hostname` -- the fully qualified DNS name of the host Returns an Identity instance. ### `identityForUser(uid)` Constructs a user-type Identity for a given UID. Parameters - `uid` -- a String, user identifier (login name) Returns an Identity instance. ### `identityForEmail(email)` Constructs an email-type Identity for a given email address. Parameters - `email` -- a String, email address Returns an Identity instance. ### `identityFromDN(dn)` Parses an LDAP-style DN string (e.g. `'CN=foo, C=US'`) and turns it into an Identity instance. Parameters - `dn` -- a String Returns an Identity instance. ### `Identity#toString()` Returns the identity as an LDAP-style DN string. e.g. `'CN=foo, O=bar corp, C=us'` ### `Identity#type` The type of identity. One of `'host'`, `'user'`, `'email'` or `'unknown'` ### `Identity#hostname` ### `Identity#uid` ### `Identity#email` Set when `type` is `'host'`, `'user'`, or `'email'`, respectively. Strings. ### `Identity#cn` The value of the first `CN=` in the DN, if any. Errors ------ ### `InvalidAlgorithmError` The specified algorithm is not valid, either because it is not supported, or because it was not included on a list of allowed algorithms. Thrown by `Fingerprint.parse`, `Key#fingerprint`. Properties - `algorithm` -- the algorithm that could not be validated ### `FingerprintFormatError` The fingerprint string given could not be parsed as a supported fingerprint format, or the specified fingerprint format is invalid. Thrown by `Fingerprint.parse`, `Fingerprint#toString`. Properties - `fingerprint` -- if caused by a fingerprint, the string value given - `format` -- if caused by an invalid format specification, the string value given ### `KeyParseError` The key data given could not be parsed as a valid key. Properties - `keyName` -- `filename` that was given to `parseKey` - `format` -- the `format` that was trying to parse the key (see `parseKey`) - `innerErr` -- the inner Error thrown by the format parser ### `KeyEncryptedError` The key is encrypted with a symmetric key (ie, it is password protected). The parsing operation would succeed if it was given the `passphrase` option. 
Properties - `keyName` -- `filename` that was given to `parseKey` - `format` -- the `format` that was trying to parse the key (currently can only be `"pem"`) ### `CertificateParseError` The certificate data given could not be parsed as a valid certificate. Properties - `certName` -- `filename` that was given to `parseCertificate` - `format` -- the `format` that was trying to parse the key (see `parseCertificate`) - `innerErr` -- the inner Error thrown by the format parser Friends of sshpk ---------------- * [`sshpk-agent`](https://github.com/arekinath/node-sshpk-agent) is a library for speaking the `ssh-agent` protocol from node.js, which uses `sshpk` util-deprecate ============== ### The Node.js `util.deprecate()` function with browser support In Node.js, this module simply re-exports the `util.deprecate()` function. In the web browser (i.e. via browserify), a browser-specific implementation of the `util.deprecate()` function is used. ## API A `deprecate()` function is the only thing exposed by this module. ``` javascript // setup: exports.foo = deprecate(foo, 'foo() is deprecated, use bar() instead'); // users see: foo(); // foo() is deprecated, use bar() instead foo(); foo(); ``` ## License (The MIT License) Copyright (c) 2014 Nathan Rajlich <nathan@tootallnate.net> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # sha Check and get file hashes (using any algorithm) [![Build Status](https://img.shields.io/travis/ForbesLindesay/sha/master.svg)](https://travis-ci.org/ForbesLindesay/sha) [![Dependency Status](https://img.shields.io/david/ForbesLindesay/sha.svg)](https://david-dm.org/ForbesLindesay/sha) [![NPM version](https://img.shields.io/npm/v/sha.svg)](https://www.npmjs.com/package/sha) ## Installation $ npm install sha ## API ### check(fileName, expected, [options,] cb) / checkSync(filename, expected, [options]) Asynchronously check that `fileName` has a "hash" of `expected`. The callback will be called with either `null` or an error (indicating that they did not match). Options: - algorithm: defaults to `sha1` and can be any of the algorithms supported by `crypto.createHash` ### get(fileName, [options,] cb) / getSync(filename, [options]) Asynchronously get the "hash" of `fileName`. The callback will be called with an optional `error` object and the (lower cased) hex digest of the hash. Options: - algorithm: defaults to `sha1` and can be any of the algorithms supported by `crypto.createHash` ### stream(expected, [options]) Check the hash of a stream without ever buffering it. 
This is a pass through stream so you can do things like:

```js
fs.createReadStream('src')
  .pipe(sha.stream('expected'))
  .pipe(fs.createWriteStream('dest'))
```

`dest` will be a complete copy of `src` and an error will be emitted if the hash did not match `'expected'`.

Options:

- algorithm: defaults to `sha1` and can be any of the algorithms supported by `crypto.createHash`

## License

You may use this software under the BSD or MIT license. Take your pick. If you want me to release it under another license, open a pull request.

# extsprintf: extended POSIX-style sprintf

Stripped down version of s[n]printf(3c). We make a best effort to throw an exception when given a format string we don't understand, rather than ignoring it, so that we won't break existing programs if/when we go implement the rest of this.

This implementation currently supports specifying

* field alignment ('-' flag)
* zero-pad ('0' flag)
* always show numeric sign ('+' flag)
* field width
* conversions for strings, decimal integers, and floats (numbers)
* argument size specifiers. These are all accepted but ignored, since JavaScript has no notion of the physical size of an argument.

Everything else is currently unsupported, most notably: precision, unsigned numbers, non-decimal numbers, and characters.

Besides the usual POSIX conversions, this implementation supports:

* `%j`: pretty-print a JSON object (using node's "inspect")
* `%r`: pretty-print an Error object

# Example

First, install it:

    # npm install extsprintf

Now, use it:

    var mod_extsprintf = require('extsprintf');
    console.log(mod_extsprintf.sprintf('hello %25s', 'world'));

outputs:

    hello                     world

# Also supported

**printf**: same args as sprintf, but prints the result to stdout

**fprintf**: same args as sprintf, preceded by a Node stream. Prints the result to the given stream.

aproba
======

A ridiculously light-weight function argument validator

```
var validate = require("aproba")

function myfunc(a, b, c) {
  // `a` must be a string, `b` a number, `c` a function
  validate('SNF', arguments) // [a,b,c] is also valid
}

myfunc('test', 23, function () {}) // ok
myfunc(123, 23, function () {}) // type error
myfunc('test', 23) // missing arg error
myfunc('test', 23, function () {}, true) // too many args error
```

Valid types are:

| type | description
| :--: | :----------
| *    | matches any type
| A    | `Array.isArray` OR an `arguments` object
| S    | typeof == string
| N    | typeof == number
| F    | typeof == function
| O    | typeof == object and not type A and not type E
| B    | typeof == boolean
| E    | `instanceof Error` OR `null` **(special: see below)**
| Z    | == `null`

Validation failures throw one of three exception types, distinguished by a `code` property of `EMISSINGARG`, `EINVALIDTYPE` or `ETOOMANYARGS`.

If you pass in an invalid type then it will throw with a code of `EUNKNOWNTYPE`.

If an **error** argument is found and is not null then the remaining arguments are optional. That is, if you say `ESO` then that's like using a non-magical `E` in: `E|ESO|ZSO`.

### But I have optional arguments?!

You can provide more than one signature by separating them with pipes `|`. If any signature matches the arguments then they'll be considered valid.

So for example, say you wanted to write a signature for `fs.createWriteStream`. The docs for it describe it thusly:

```
fs.createWriteStream(path[, options])
```

This would be a signature of `SO|S`. That is, a string and an object, or just a string.

Now, if you read the full `fs` docs, you'll see that actually path can ALSO be a buffer. And options can be a string, that is:

```
path <String> | <Buffer>
options <String> | <Object>
```

To reproduce this you have to fully enumerate all of the possible combinations and that implies a signature of `SO|SS|OO|OS|S|O`. The awkwardness is a feature: It reminds you of the complexity you're adding to your API when you do this sort of thing.

### Browser support

This has no dependencies and should work in browsers, though you'll have noisier stack traces.

### Why this exists

I wanted a very simple argument validator. It needed to do two things:

1. Be more concise and easier to use than assertions
2. Not encourage an infinite bikeshed of DSLs

This is why types are specified by a single character and there's no such thing as an optional argument.

This is not intended to validate user data. This is specifically about asserting the interface of your functions.

If you need greater validation, I encourage you to write them by hand or look elsewhere.

# copy-concurrently

Copy files, directories and symlinks

```
const copy = require('copy-concurrently')
copy('/path/to/thing', '/new/path/thing').then(() => {
  // this is now copied
}).catch(err => {
  // oh noooo
})
```

Copies files, directories and symlinks. Ownership is maintained when running as root, permissions are always maintained. On Windows, if symlinks are unavailable then junctions will be used.

## PUBLIC INTERFACE

### copy(from, to, [options]) → Promise

Recursively copies `from` to `to` and resolves its promise when finished. If `to` already exists then the promise will be rejected with an `EEXIST` error.

Options are:

* maxConcurrency – (Default: `1`) The maximum number of concurrent copies to do at once.
* recurseWith - (Default: `copy.item`) The function to call on each file after recursing into a directory.
* isWindows - (Default: `process.platform === 'win32'`) If true enables Windows symlink semantics. This requires an extra `stat` to determine if the destination of a symlink is a file or directory. If symlinking a directory fails then we'll try making a junction instead.

Options can also include dependency injection:

* Promise - (Default: `global.Promise`) The promise implementation to use, defaults to Node's.
* fs - (Default: `require('fs')`) The filesystem module to use. Can be used to use `graceful-fs` or to inject a mock.
* writeStreamAtomic - (Default: `require('fs-write-stream-atomic')`) The implementation of `writeStreamAtomic` to use. Used to inject a mock.
* getuid - (Default: `process.getuid`) A function that returns the current UID. Used to inject a mock.

## EXTENSION INTERFACE

Ordinarily you'd only call `copy` above. But it's possible to use its component functions directly. This is useful if, say, you're writing [move-concurrently](https://npmjs.com/package/move-concurrently).

### copy.file(from, to, options) → Promise

Copies an ordinary file `from` to destination `to`. Uses `fs-write-stream-atomic` to ensure that the file is either entirely copied or not at all.

Options are:

* uid, gid - (Optional) If `getuid()` is `0` then this and gid will be used to set the user and group of `to`. If uid is present then gid must be too.
* mode - (Optional) If set then `to` will have its perms set to `mode`.
* fs - (Default: `require('fs')`) The filesystem module to use. Can be used to use `graceful-fs` or to inject a mock.
* Promise - (Default: `global.Promise`) The promise implementation to use, defaults to Node's.
* writeStreamAtomic - (Default `require('fs-write-stream-atomic')`) The implementation of `writeStreamAtomic` to use. Used to inject a mock. ### copy.symlink(from, to, options) → Promise Copies a symlink `from` to destination `to`. If you're using Windows and symlinking fails and what you're linking is a directory then junctions will be tried instead. Options are: * top - The top level the copy is being run from. This is used to determine if the symlink destination is within the set of files we're copying or outside it. * fs - (Default: `require('fs')`) The filesystem module to use. Can be used to use `graceful-fs` or to inject a mock. * Promise - (Default: `global.Promise`) The promise implementation to use, defaults to Node's. * isWindows - (Default: `process.platform === 'win32'`) If true enables Windows symlink semantics. This requires an extra `stat` to determine if the destination of a symlink is a file or directory. If symlinking a directory fails then we'll try making a junction instead. ### copy.recurse(from, to, options) → Promise Reads all of the files in directory `from` and adds them to the `queue` using `recurseWith` (by default `copy.item`). Options are: * queue - A [`run-queue`](https://npmjs.com/package/run-queue) object to add files found inside `from` to. * recurseWith - (Default: `copy.item`) The function to call on each file after recursing into a directory. * uid, gid - (Optional) If `getuid()` is `0` then this and gid will be used to set the user and group of `to`. If uid is present then gid must be too. * mode - (Optional) If set then `to` will have its perms set to `mode`. * fs - (Default: `require('fs')`) The filesystem module to use. Can be used to use `graceful-fs` or to inject a mock. * getuid - (Default: `process.getuid`) A function that returns the current UID. Used to inject a mock. ### copy.item(from, to, options) → Promise Copies some kind of `from` to destination `to`. This looks at the filetype and calls `copy.file`, `copy.symlink` or `copy.recurse` as appropriate. Symlink copies are queued with a priority such that they happen after all file and directory copies as you can't create a junction on windows to a file that doesn't exist yet. Options are: * top - The top level the copy is being run from. This is used to determine if the symlink destination is within the set of files we're copying or outside it. * queue - The [`run-queue`](https://npmjs.com/package/run-queue) object to pass to `copy.recurse` if `from` is a directory. * recurseWith - (Default: `copy.item`) The function to call on each file after recursing into a directory. * uid, gid - (Optional) If `getuid()` is `0` then this and gid will be used to set the user and group of `to`. If uid is present then gid must be too. * mode - (Optional) If set then `to` will have its perms set to `mode`. * fs - (Default: `require('fs')`) The filesystem module to use. Can be used to use `graceful-fs` or to inject a mock. * getuid - (Default: `process.getuid`) A function that returns the current UID. Used to inject a mock. * isWindows - (Default: `process.platform === 'win32'`) If true enables Windows symlink semantics. This requires an extra `stat` to determine if the destination of a symlink is a file or directory. If symlinking a directory fails then we'll try making a junction instead. * Promise - (Default: `global.Promise`) The promise implementation to use, defaults to Node's. * writeStreamAtomic - (Default `require('fs-write-stream-atomic')`) The implementation of `writeStreamAtomic` to use. 
Used to inject a mock. # meant ![Build status](https://github.com/watilde/meant/workflows/Node.js%20CI/badge.svg) Like the `Did you mean?` in git for npm ## API ### meant(item, list) + item {String} A key for finding an approximate value + list {Array} A list for comparing with the item ```js const meant = require('meant') const result = meant('foa', ['foo', 'bar', 'baz']) // => [ 'foo' ] ``` ## Installation Download node at [nodejs.org](http://nodejs.org) and install it, if you haven't already. ```sh npm install meant --save ``` ## Tests ```sh npm install npm test ``` ``` > meant@1.0.0 test /Users/watilde/Development/meant > standard && tap test.js TAP version 13 # Subtest: test.js # Subtest: test vs ['tast', 'tbst', 'tcst', 'foo'] ok 1 - list has tast ok 2 - list has tbst ok 3 - list has tcst ok 4 - list doesn't have foo 1..4 ok 1 - test vs ['tast', 'tbst', 'tcst', 'foo'] # time=11.816ms 1..1 # time=44.006ms ok 1 - test.js # time=249.154ms 1..1 # time=267.371ms ``` ## Dependencies None ## Dev Dependencies - [standard](https://github.com/feross/standard): JavaScript Standard Style - [standard-version](https://github.com/conventional-changelog/standard-version): replacement for `npm version` with automatic CHANGELOG generation - [tap](https://github.com/tapjs/node-tap): A Test-Anything-Protocol library ## License MIT _Generated by [package-json-to-readme](https://github.com/zeke/package-json-to-readme)_ # node-tar [![Build Status](https://travis-ci.org/npm/node-tar.svg?branch=master)](https://travis-ci.org/npm/node-tar) [Fast](./benchmarks) and full-featured Tar for Node.js The API is designed to mimic the behavior of `tar(1)` on unix systems. If you are familiar with how tar works, most of this will hopefully be straightforward for you. If not, then hopefully this module can teach you useful unix skills that may come in handy someday :) ## Background A "tar file" or "tarball" is an archive of file system entries (directories, files, links, etc.) The name comes from "tape archive". If you run `man tar` on almost any Unix command line, you'll learn quite a bit about what it can do, and its history. Tar has 5 main top-level commands: * `c` Create an archive * `r` Replace entries within an archive * `u` Update entries within an archive (ie, replace if they're newer) * `t` List out the contents of an archive * `x` Extract an archive to disk The other flags and options modify how this top level function works. ## High-Level API These 5 functions are the high-level API. All of them have a single-character name (for unix nerds familiar with `tar(1)`) as well as a long name (for everyone else). All the high-level functions take the following arguments, all three of which are optional and may be omitted. 1. `options` - An optional object specifying various options 2. `paths` - An array of paths to add or extract 3. `callback` - Called when the command is completed, if async. (If sync or no file specified, providing a callback throws a `TypeError`.) If the command is sync (ie, if `options.sync=true`), then the callback is not allowed, since the action will be completed immediately. If a `file` argument is specified, and the command is async, then a `Promise` is returned. In this case, if async, a callback may be provided which is called when the command is completed. If a `file` option is not specified, then a stream is returned. For `create`, this is a readable stream of the generated archive. For `list` and `extract` this is a writable stream that an archive should be written into. 
If a file is not specified, then a callback is not allowed, because you're already getting a stream to work with. `replace` and `update` only work on existing archives, and so require a `file` argument. Sync commands without a file argument return a stream that acts on its input immediately in the same tick. For readable streams, this means that all of the data is immediately available by calling `stream.read()`. For writable streams, it will be acted upon as soon as it is provided, but this can be at any time. ### Warnings Some things cause tar to emit a warning, but should usually not cause the entire operation to fail. There are three ways to handle warnings: 1. **Ignore them** (default) Invalid entries won't be put in the archive, and invalid entries won't be unpacked. This is usually fine, but can hide failures that you might care about. 2. **Notice them** Add an `onwarn` function to the options, or listen to the `'warn'` event on any tar stream. The function will get called as `onwarn(message, data)`. Handle as appropriate. 3. **Explode them.** Set `strict: true` in the options object, and `warn` messages will be emitted as `'error'` events instead. If there's no `error` handler, this causes the program to crash. If used with a promise-returning/callback-taking method, then it'll send the error to the promise/callback. ### Examples The API mimics the `tar(1)` command line functionality, with aliases for more human-readable option and function names. The goal is that if you know how to use `tar(1)` in Unix, then you know how to use `require('tar')` in JavaScript. To replicate `tar czf my-tarball.tgz files and folders`, you'd do: ```js tar.c( { gzip: <true|gzip options>, file: 'my-tarball.tgz' }, ['some', 'files', 'and', 'folders'] ).then(_ => { .. tarball has been created .. }) ``` To replicate `tar cz files and folders > my-tarball.tgz`, you'd do: ```js tar.c( // or tar.create { gzip: <true|gzip options> }, ['some', 'files', 'and', 'folders'] ).pipe(fs.createWriteStream('my-tarball.tgz')) ``` To replicate `tar xf my-tarball.tgz` you'd do: ```js tar.x( // or tar.extract( { file: 'my-tarball.tgz' } ).then(_=> { .. tarball has been dumped in cwd .. }) ``` To replicate `cat my-tarball.tgz | tar x -C some-dir --strip=1`: ```js fs.createReadStream('my-tarball.tgz').pipe( tar.x({ strip: 1, C: 'some-dir' // alias for cwd:'some-dir', also ok }) ) ``` To replicate `tar tf my-tarball.tgz`, do this: ```js tar.t({ file: 'my-tarball.tgz', onentry: entry => { .. do whatever with it .. } }) ``` To replicate `cat my-tarball.tgz | tar t` do: ```js fs.createReadStream('my-tarball.tgz') .pipe(tar.t()) .on('entry', entry => { .. do whatever with it .. }) ``` To do anything synchronous, add `sync: true` to the options. Note that sync functions don't take a callback and don't return a promise. When the function returns, it's already done. Sync methods without a file argument return a sync stream, which flushes immediately. But, of course, it still won't be done until you `.end()` it. To filter entries, add `filter: <function>` to the options. Tar-creating methods call the filter with `filter(path, stat)`. Tar-reading methods (including extraction) call the filter with `filter(path, entry)`. The filter is called in the `this`-context of the `Pack` or `Unpack` stream object. The arguments list to `tar t` and `tar x` specify a list of filenames to extract or list, so they're equivalent to a filter that tests if the file is in the list. 
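The `filter` option composes with the examples above. As a small sketch (the archive name is hypothetical), extracting only the JavaScript files from a tarball might look like:

```js
tar.x({
  file: 'my-tarball.tgz',
  // called with (path, entry) for reading methods; return false to skip
  filter: (path, entry) => path.endsWith('.js')
}).then(_ => { /* .. only the .js entries were unpacked .. */ })
```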
For those who _aren't_ fans of tar's single-character command names:

```
tar.c === tar.create
tar.r === tar.replace (appends to archive, file is required)
tar.u === tar.update (appends if newer, file is required)
tar.x === tar.extract
tar.t === tar.list
```

Keep reading for all the command descriptions and options, as well as the low-level API that they are built on.

### tar.c(options, fileList, callback) [alias: tar.create]

Create a tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively.

An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`.

The following options are supported:

- `file` Write the tarball archive to the specified filename. If this is specified, then the callback will be fired when the file has been written, and a promise will be returned that resolves when the file is written. If a filename is not specified, then a Readable Stream will be returned which will emit the file data. [Alias: `f`]
- `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. If this is set, and a file is not provided, then the resulting stream will already have the data ready to `read` or `emit('data')` as soon as you request it.
- `onwarn` A function that will get called with `(message, data)` for any warnings encountered.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. [Alias: `C`]
- `prefix` A path portion to prefix onto the entries in the archive.
- `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`]
- `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`]
- `mode` The mode to set on the created file archive.
- `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`]
- `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`]
- `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`]
- `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`.

The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs.

- `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `readdirCache` A Map object that caches calls to `readdir`.
- `jobs` A number specifying how many concurrent jobs to run.
Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. ### tar.x(options, fileList, callback) [alias: tar.extract] Extract a tarball archive. The `fileList` is an array of paths to extract from the tarball. If no paths are provided, then all the entries are extracted. If the archive is gzipped, then tar will detect this and unzip it. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. Most extraction errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then the extraction will fail completely. The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. [Alias: `C`] - `file` The archive file to extract. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Create files and directories synchronously. - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. [Alias: `keep-newer`, `keep-newer-files`] - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. [Alias: `k`, `keep-existing`] - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. [Alias: `P`] - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. [Alias: `U`] - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. [Alias: `strip-components`, `stripComponents`] - `onwarn` A function that will get called with `(message, data)` for any warnings encountered. - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. [Alias: `p`] - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. 
Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. [Alias: `m`, `no-mtime`] - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync extractions. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### tar.t(options, fileList, callback) [alias: tar.list] List the contents of a tarball archive. The `fileList` is an array of paths to list from the tarball. If no paths are provided, then all the entries are listed. If the archive is gzipped, then tar will detect this and unzip it. Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. However, they don't emit `'data'` or `'end'` events. (If you want to get actual readable entries, use the `tar.Parse` class instead.) The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. [Alias: `C`] - `file` The archive file to list. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Read the specified file synchronously. (This has no effect when a file option isn't specified, because entries are emitted as fast as they are parsed from the stream anyway.) - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. This is important for when both `file` and `sync` are set, because it will be called synchronously. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noResume` By default, `entry` streams are resumed immediately after the call to `onentry`. Set `noResume: true` to suppress this behavior. Note that by opting into this, the stream will never complete until the entry data is consumed. ### tar.u(options, fileList, callback) [alias: tar.update] Add files to an archive if they are newer than the entry already in the tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. 
Write the tarball archive to the specified filename. [Alias: `f`]
- `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.u`.
- `onwarn` A function that will get called with `(message, data)` for any warnings encountered.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`]
- `prefix` A path portion to prefix onto the entries in the archive.
- `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`]
- `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`]
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB.
- `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`]
- `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`]
- `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`]
- `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`.

### tar.r(options, fileList, callback) [alias: tar.replace]

Add files to an existing archive. Because later entries override earlier entries, this effectively replaces any existing entries.

The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively.

An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`.

The following options are supported:

- `file` Required. Write the tarball archive to the specified filename. [Alias: `f`]
- `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.r`.
- `onwarn` A function that will get called with `(message, data)` for any warnings encountered.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`]
- `prefix` A path portion to prefix onto the entries in the archive.
- `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`]
- `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`]
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB.
- `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`]
- `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`]
- `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`]
- `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`.

## Low-Level API

### class tar.Pack

A readable tar stream.

Has all the standard readable stream interface stuff. `'data'` and `'end'` events, `read()` method, `pause()` and `resume()`, etc.

#### constructor(options)

The following options are supported:

- `onwarn` A function that will get called with `(message, data)` for any warnings encountered.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`.
- `prefix` A path portion to prefix onto the entries in the archive.
- `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()`
- `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `readdirCache` A Map object that caches calls to `readdir`.
- `jobs` A number specifying how many concurrent jobs to run. Defaults to 4.
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB.
- `noDirRecurse` Do not recursively archive the contents of directories.
- `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such.
- `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.
- `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`.

#### add(path)

Adds an entry to the archive. Returns the Pack stream.

#### write(path)

Adds an entry to the archive. Returns true if flushed.

#### end()

Finishes the archive.

### class tar.Pack.Sync

Synchronous version of `tar.Pack`.

### class tar.Unpack

A writable stream that unpacks a tar archive onto the file system.

All the normal writable stream stuff is supported.
`write()` and `end()` methods, `'drain'` events, etc. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. `'close'` is emitted when it's done writing stuff to the file system. Most unpack errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then an error will be emitted. #### constructor(options) - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. - `onwarn` A function that will get called with `(message, data)` for any warnings encountered. - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. - `win32` True if on a windows platform. Causes behavior where filenames containing `<|>?` chars are converted to windows-compatible values while being unpacked. - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. 
If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `strict` Treat warnings as crash-worthy errors. Default false. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(message, data)` for any warnings encountered. ### class tar.Unpack.Sync Synchronous version of `tar.Unpack`. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync unpack streams. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### class tar.Parse A writable stream that parses a tar archive stream. All the standard writable stream stuff is supported. If the archive is gzipped, then tar will detect this and unzip it. Emits `'entry'` events with `tar.ReadEntry` objects, which are themselves readable streams that you can pipe wherever. Each `entry` will not emit until the one before it is flushed through, so make sure to either consume the data (with `on('data', ...)` or `.pipe(...)`) or throw it away with `.resume()` to keep the stream flowing. #### constructor(options) Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. The following options are supported: - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(message, data)` for any warnings encountered. #### abort(message, error) Stop all parsing activities. This is called when there are zlib errors. It also emits a warning with the message and error provided. ### class tar.ReadEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being read out of a tar archive. It has the following fields: - `extended` The extended metadata object provided to the constructor. - `globalExtended` The global extended metadata object provided to the constructor. - `remain` The number of bytes remaining to be written into the stream. - `blockRemain` The number of 512-byte blocks remaining to be written into the stream. - `ignore` Whether this entry should be ignored. - `meta` True if this represents metadata about the next entry, false if it represents a filesystem object. - All the fields from the header, extended header, and global extended header are added to the ReadEntry object. So it has `path`, `type`, `size`, `mode`, and so on. #### constructor(header, extended, globalExtended) Create a new ReadEntry object with the specified header, extended header, and global extended header values. ### class tar.WriteEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being written from the file system into a tar archive. Emits data for the Header, and for the Pax Extended Header if one is required, as well as any body data. Creating a WriteEntry for a directory does not also create WriteEntry objects for all of the directory contents. It has the following fields: - `path` The path field that will be written to the archive. By default, this is also the path from the cwd to the file system object.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. - `myuid` If supported, the uid of the user running the current process. - `myuser` The `env.USER` string if set, or `''`. Set as the entry `uname` field if the file's `uid` matches `this.myuid`. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls to `lstat`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly. - `strict` Treat warnings as crash-worthy errors. Default false. - `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/` and filenames containing the windows-compatible forms of `<|>?:` characters are converted to actual `<|>?:` characters in the archive. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. #### constructor(path, options) `path` is the path of the entry as it is written in the archive. The following options are supported: - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls to `lstat`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly. - `strict` Treat warnings as crash-worthy errors. Default false. - `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/`. - `onwarn` A function that will get called with `(message, data)` for any warnings encountered. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `umask` Set to restrict the modes on the entries in the archive, somewhat like how umask works on file creation. Defaults to `process.umask()` on unix systems, or `0o22` on Windows. #### warn(message, data) If strict, emit an error with the provided message. Otherwise, emit a `'warn'` event with the provided message and data.
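To tie the low-level classes above together before moving on, here is a minimal sketch (not from the upstream docs) of driving `Pack` and `Parse` directly. The archive and directory names (`out.tgz`, `lib`) are placeholders, and in most cases the high-level `tar.c()` and `tar.t()` functions are the more convenient entry points.

```js
// A minimal sketch, assuming node-tar is installed; names are placeholders.
const tar = require('tar')
const fs = require('fs')

// Pack is a readable tar stream: add() queues entries, end() finishes the archive.
const pack = new tar.Pack({ cwd: process.cwd(), gzip: true, portable: true })
const out = fs.createWriteStream('out.tgz')
pack.pipe(out)
pack.add('lib')   // add(path) returns the Pack stream
pack.end()        // finish the archive

out.on('finish', () => {
  // Parse is a writable stream that emits 'entry' events with ReadEntry objects.
  const parser = new tar.Parse({
    onentry: entry => {
      console.log(entry.path, entry.size)
      entry.resume() // discard the body so the next entry can flow
    }
  })
  fs.createReadStream('out.tgz').pipe(parser)
})
```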
### class tar.WriteEntry.Sync Synchronous version of tar.WriteEntry ### class tar.WriteEntry.Tar A version of tar.WriteEntry that gets its data from a tar.ReadEntry instead of from the filesystem. #### constructor(readEntry, options) `readEntry` is the entry being read out of another archive. The following options are supported: - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `strict` Treat warnings as crash-worthy errors. Default false. - `onwarn` A function that will get called with `(message, data)` for any warnings encountered. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. ### class tar.Header A class for reading and writing header blocks. It has the following fields: - `nullBlock` True if decoding a block which is entirely composed of `0x00` null bytes. (Useful because tar files are terminated by at least 2 null blocks.) - `cksumValid` True if the checksum in the header is valid, false otherwise. - `needPax` True if the values, as encoded, will require a Pax extended header. - `path` The path of the entry. - `mode` The 4 lowest-order octal digits of the file mode. That is, read/write/execute permissions for world, group, and owner, and the setuid, setgid, and sticky bits. - `uid` Numeric user id of the file owner - `gid` Numeric group id of the file owner - `size` Size of the file in bytes - `mtime` Modified time of the file - `cksum` The checksum of the header. This is generated by adding all the bytes of the header block, treating the checksum field itself as all ascii space characters (that is, `0x20`). - `type` The human-readable name of the type of entry this represents, or the alphanumeric key if unknown. - `typeKey` The alphanumeric key for the type of entry this header represents. - `linkpath` The target of Link and SymbolicLink entries. - `uname` Human-readable user name of the file owner - `gname` Human-readable group name of the file owner - `devmaj` The major portion of the device number. Always `0` for files, directories, and links. - `devmin` The minor portion of the device number. Always `0` for files, directories, and links. - `atime` File access time. - `ctime` File change time. #### constructor(data, [offset=0]) `data` is optional. It is either a Buffer that should be interpreted as a tar Header starting at the specified offset and continuing for 512 bytes, or a data object of keys and values to set on the header object, and eventually encode as a tar Header. #### decode(block, offset) Decode the provided buffer starting at the specified offset. Buffer length must be greater than 512 bytes. #### set(data) Set the fields in the data object. #### encode(buffer, offset) Encode the header fields into the buffer at the specified offset. Returns `this.needPax` to indicate whether a Pax Extended Header is required to properly encode the specified data. ### class tar.Pax An object representing a set of key-value pairs in a Pax extended header entry. It has the following fields. Where the same name is used, they have the same semantics as the tar.Header field of the same name.
- `global` True if this represents a global extended header, or false if it is for a single entry. - `atime` - `charset` - `comment` - `ctime` - `gid` - `gname` - `linkpath` - `mtime` - `path` - `size` - `uid` - `uname` - `dev` - `ino` - `nlink` #### constructor(object, global) Set the fields set in the object. `global` is a boolean that defaults to false. #### encode() Return a Buffer containing the header and body for the Pax extended header entry, or `null` if there is nothing to encode. #### encodeBody() Return a string representing the body of the pax extended header entry. #### encodeField(fieldName) Return a string representing the key/value encoding for the specified fieldName, or `''` if the field is unset. ### tar.Pax.parse(string, extended, global) Return a new Pax object created by parsing the contents of the string provided. If the `extended` object is set, then also add the fields from that object. (This is necessary because multiple metadata entries can occur in sequence.) ### tar.types A translation table for the `type` field in tar headers. #### tar.types.name.get(code) Get the human-readable name for a given alphanumeric code. #### tar.types.code.get(name) Get the alphanumeric code for a given human-readable name. # flush-write-stream A write stream constructor that supports a flush function that is called before `finish` is emitted ``` npm install flush-write-stream ``` [![build status](http://img.shields.io/travis/mafintosh/flush-write-stream.svg?style=flat)](http://travis-ci.org/mafintosh/flush-write-stream) ## Usage ``` js var writer = require('flush-write-stream') var ws = writer(write, flush) ws.on('finish', function () { console.log('finished') }) ws.write('hello') ws.write('world') ws.end() function write (data, enc, cb) { // i am your normal ._write method console.log('writing', data.toString()) cb() } function flush (cb) { // i am called before finish is emitted setTimeout(cb, 1000) // wait 1 sec } ``` If you run the above it will produce the following output ``` writing hello writing world (nothing happens for 1 sec) finished ``` ## API #### `var ws = writer([options], write, [flush])` Create a new writable stream. Options are forwarded to the stream constructor. #### `var ws = writer.obj([options], write, [flush])` Same as the above except `objectMode` is set to `true` per default. ## License MIT # lodash.restparam v3.6.1 The [modern build](https://github.com/lodash/lodash/wiki/Build-Differences) of [lodash’s](https://lodash.com/) `_.restParam` exported as a [Node.js](http://nodejs.org/)/[io.js](https://iojs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.restparam ``` In Node.js/io.js: ```js var restParam = require('lodash.restparam'); ``` See the [documentation](https://lodash.com/docs#restParam) or [package source](https://github.com/lodash/lodash/blob/3.6.1-npm-packages/lodash.restparam) for more details. # color-convert [![Build Status](https://travis-ci.org/Qix-/color-convert.svg?branch=master)](https://travis-ci.org/Qix-/color-convert) Color-convert is a color conversion library for JavaScript and node. 
It converts all ways between `rgb`, `hsl`, `hsv`, `hwb`, `cmyk`, `ansi`, `ansi16`, `hex` strings, and CSS `keyword`s (will round to closest): ```js var convert = require('color-convert'); convert.rgb.hsl(140, 200, 100); // [96, 48, 59] convert.keyword.rgb('blue'); // [0, 0, 255] var rgbChannels = convert.rgb.channels; // 3 var cmykChannels = convert.cmyk.channels; // 4 var ansiChannels = convert.ansi16.channels; // 1 ``` # Install ```console $ npm install color-convert ``` # API Simply get the property of the _from_ and _to_ conversion that you're looking for. All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on `.raw` to the function. All 'from' functions have a hidden property called `.channels` that indicates the number of channels the function expects (not including alpha). ```js var convert = require('color-convert'); // Hex to LAB convert.hex.lab('DEADBF'); // [ 76, 21, -2 ] convert.hex.lab.raw('DEADBF'); // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ] // RGB to CMYK convert.rgb.cmyk(167, 255, 4); // [ 35, 0, 98, 0 ] convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ] ``` ### Arrays All functions that accept multiple arguments also support passing an array. Note that this does **not** apply to functions that convert from a color that only requires one value (e.g. `keyword`, `ansi256`, `hex`, etc.) ```js var convert = require('color-convert'); convert.rgb.hex(123, 45, 67); // '7B2D43' convert.rgb.hex([123, 45, 67]); // '7B2D43' ``` ## Routing Conversions that don't have an _explicitly_ defined conversion (in [conversions.js](conversions.js)), but can be converted by means of sub-conversions (e.g. XYZ -> **RGB** -> CMYK), are automatically routed together. This allows just about any color model supported by `color-convert` to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> **LAB** -> **XYZ** -> **RGB** -> Hex). Keep in mind that extensive conversions _may_ result in a loss of precision, and exist only to be complete. For a list of "direct" (single-step) conversions, see [conversions.js](conversions.js). # Contribute If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request. # License Copyright &copy; 2011-2016, Heather Arthur and Josh Junon. Licensed under the [MIT License](LICENSE). # libnpmconfig [![npm version](https://img.shields.io/npm/v/libnpmconfig.svg)](https://npm.im/libnpmconfig) [![license](https://img.shields.io/npm/l/libnpmconfig.svg)](https://npm.im/libnpmconfig) [![Travis](https://img.shields.io/travis/npm/libnpmconfig.svg)](https://travis-ci.org/npm/libnpmconfig) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/zkat/libnpmconfig?svg=true)](https://ci.appveyor.com/project/zkat/libnpmconfig) [![Coverage Status](https://coveralls.io/repos/github/npm/libnpmconfig/badge.svg?branch=latest)](https://coveralls.io/github/npm/libnpmconfig?branch=latest) [`libnpmconfig`](https://github.com/npm/libnpmconfig) is a Node.js library for programmatically managing npm's configuration files and data. 
## Example ```js const config = require('libnpmconfig') console.log('configured registry:', config.read({ registry: 'https://default.registry/' })) // => configured registry: https://registry.npmjs.org ``` ## Install `$ npm install libnpmconfig` ## Table of Contents * [Example](#example) * [Install](#install) * [API](#api) ### API ##### <a name="read"></a> `> read(cliOpts, builtinOpts)` Reads configurations from the filesystem and the env and returns a [`figgy-pudding`](https://npm.im/figgy-pudding) object with the configuration values. If `cliOpts` is provided, it will be merged with the returned config pudding, shadowing any read values. These are intended as CLI-provided options. Do your own `process.argv` parsing, though. If `builtinOpts.cwd` is provided, it will be used instead of `process.cwd()` as the starting point for config searching. # rc The non-configurable configuration loader for lazy people. ## Usage The only option is to pass rc the name of your app, and your default configuration. ```javascript var conf = require('rc')(appname, { //defaults go here. port: 2468, //defaults which are objects will be merged, not replaced views: { engine: 'jade' } }); ``` `rc` will return your configuration options merged with the defaults you specify. If you pass in a predefined defaults object, it will be mutated: ```javascript var conf = {}; require('rc')(appname, conf); ``` If `rc` finds any config files for your app, the returned config object will have a `configs` array containing their paths: ```javascript var appCfg = require('rc')(appname, conf); appCfg.configs[0] // /etc/appnamerc appCfg.configs[1] // /home/dominictarr/.config/appname appCfg.config // same as appCfg.configs[appCfg.configs.length - 1] ``` ## Standards Given your application name (`appname`), rc will look in all the obvious places for configuration. * command line arguments, parsed by minimist _(e.g. `--foo baz`, also nested: `--foo.bar=baz`)_ * environment variables prefixed with `${appname}_` * or use "\_\_" to indicate nested properties <br/> _(e.g. `appname_foo__bar__baz` => `foo.bar.baz`)_ * if you passed an option `--config file` then from that file * a local `.${appname}rc` or the first found looking in `./ ../ ../../ ../../../` etc. * `$HOME/.${appname}rc` * `$HOME/.${appname}/config` * `$HOME/.config/${appname}` * `$HOME/.config/${appname}/config` * `/etc/${appname}rc` * `/etc/${appname}/config` * the defaults object you passed in. All configuration sources that were found will be flattened into one object, so that sources **earlier** in this list override later ones. ## Configuration File Formats Configuration files (e.g. `.appnamerc`) may be in either [json](http://json.org/example) or [ini](http://en.wikipedia.org/wiki/INI_file) format. **No** file extension (`.json` or `.ini`) should be used. The example configurations below are equivalent: #### Formatted as `ini` ``` ; You can include comments in `ini` format if you want. dependsOn=0.10.0 ; `rc` has built-in support for ini sections, see? 
[commands] www = ./commands/www console = ./commands/repl ; You can even do nested sections [generators.options] engine = ejs [generators.modules] new = generate-new engine = generate-backend ``` #### Formatted as `json` ```javascript { // You can even comment your JSON, if you want "dependsOn": "0.10.0", "commands": { "www": "./commands/www", "console": "./commands/repl" }, "generators": { "options": { "engine": "ejs" }, "modules": { "new": "generate-new", "backend": "generate-backend" } } } ``` Comments are stripped from JSON config via [strip-json-comments](https://github.com/sindresorhus/strip-json-comments). > Since ini, and env variables do not have a standard for types, your application needs to be prepared for strings. To ensure that string representations of booleans and numbers are always converted into their proper types (especially useful if you intend to do strict `===` comparisons), consider using a module such as [parse-strings-in-object](https://github.com/anselanza/parse-strings-in-object) to wrap the config object returned from rc. ## Simple example demonstrating precedence Assume you have an application like this (notice the hard-coded defaults passed to rc): ``` const conf = require('rc')('myapp', { port: 12345, mode: 'test' }); console.log(JSON.stringify(conf, null, 2)); ``` You also have a file `config.json`, with these contents: ``` { "port": 9000, "foo": "from config json", "something": "else" } ``` And a file `.myapprc` in the same folder, with these contents: ``` { "port": "3001", "foo": "bar" } ``` Here is the expected output from various commands: `node .` ``` { "port": "3001", "mode": "test", "foo": "bar", "_": [], "configs": [ "/Users/stephen/repos/conftest/.myapprc" ], "config": "/Users/stephen/repos/conftest/.myapprc" } ``` *Default `mode` from hard-coded object is retained, but port is overridden by `.myapprc` file (automatically found based on appname match), and `foo` is added.* `node . --foo baz` ``` { "port": "3001", "mode": "test", "foo": "baz", "_": [], "configs": [ "/Users/stephen/repos/conftest/.myapprc" ], "config": "/Users/stephen/repos/conftest/.myapprc" } ``` *Same result as above but `foo` is overridden because command-line arguments take precedence over `.myapprc` file.* `node . --foo barbar --config config.json` ``` { "port": 9000, "mode": "test", "foo": "barbar", "something": "else", "_": [], "config": "config.json", "configs": [ "/Users/stephen/repos/conftest/.myapprc", "config.json" ] } ``` *Now the `port` comes from the `config.json` file specified (overriding the value from `.myapprc`), and the `foo` value is overridden by command-line despite also being specified in the `config.json` file.* ## Advanced Usage #### Pass in your own `argv` You may pass in your own `argv` as the third argument to `rc`. This is in case you want to [use your own command-line opts parser](https://github.com/dominictarr/rc/pull/12). ```javascript require('rc')(appname, defaults, customArgvParser); ``` ## Pass in your own parser If you have a special need to use a non-standard parser, you can do so by passing in the parser as the 4th argument. (leave the 3rd as null to get the default args parser) ```javascript require('rc')(appname, defaults, null, parser); ``` This may also be used to force a more strict format, such as strict, valid JSON only. ## Note on Performance `rc` is running `fs.statSync`-- so make sure you don't use it in a hot code path (e.g.
a request handler) ## License Multi-licensed under the two-clause BSD License, MIT License, or Apache License, version 2.0 # iferr Higher-order functions for easier error handling. `if (err) return cb(err);` be gone! ## Install ```bash npm install iferr ``` ## Use ### JavaScript example ```js var iferr = require('iferr'); function get_friends_count(id, cb) { User.load_user(id, iferr(cb, function(user) { user.load_friends(iferr(cb, function(friends) { cb(null, friends.length); })); })); } ``` ### CoffeeScript example ```coffee iferr = require 'iferr' get_friends_count = (id, cb) -> User.load_user id, iferr cb, (user) -> user.load_friends iferr cb, (friends) -> cb null, friends.length ``` (TODO: document tiferr, throwerr and printerr) ## License MIT [![view on npm](https://img.shields.io/npm/v/byte-size.svg)](https://www.npmjs.org/package/byte-size) [![npm module downloads](https://img.shields.io/npm/dt/byte-size.svg)](https://www.npmjs.org/package/byte-size) [![Build Status](https://travis-ci.org/75lb/byte-size.svg?branch=master)](https://travis-ci.org/75lb/byte-size) [![Coverage Status](https://coveralls.io/repos/github/75lb/byte-size/badge.svg?branch=master)](https://coveralls.io/github/75lb/byte-size?branch=master) [![Dependency Status](https://david-dm.org/75lb/byte-size.svg)](https://david-dm.org/75lb/byte-size) [![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg)](https://github.com/feross/standard) <a name="module_byte-size"></a> ## byte-size An isomorphic, load-anywhere function to convert a bytes value into a more human-readable format. Choose between [metric or IEC units](https://en.wikipedia.org/wiki/Gigabyte), summarised below. Value | Metric ----- | ------------- 1000 | kB kilobyte 1000^2 | MB megabyte 1000^3 | GB gigabyte 1000^4 | TB terabyte 1000^5 | PB petabyte 1000^6 | EB exabyte 1000^7 | ZB zettabyte 1000^8 | YB yottabyte Value | IEC ----- | ------------ 1024 | KiB kibibyte 1024^2 | MiB mebibyte 1024^3 | GiB gibibyte 1024^4 | TiB tebibyte 1024^5 | PiB pebibyte 1024^6 | EiB exbibyte 1024^7 | ZiB zebibyte 1024^8 | YiB yobibyte Value | Metric (octet) ----- | ------------- 1000 | ko kilooctet 1000^2 | Mo megaoctet 1000^3 | Go gigaoctet 1000^4 | To teraoctet 1000^5 | Po petaoctet 1000^6 | Eo exaoctet 1000^7 | Zo zettaoctet 1000^8 | Yo yottaoctet Value | IEC (octet) ----- | ------------ 1024 | Kio kilooctet 1024^2 | Mio mebioctet 1024^3 | Gio gibioctet 1024^4 | Tio tebioctet 1024^5 | Pio pebioctet 1024^6 | Eio exbioctet 1024^7 | Zio zebioctet 1024^8 | Yio yobioctet **Example** ```js const byteSize = require('byte-size') ``` <a name="exp_module_byte-size--byteSize"></a> ### byteSize(bytes, [options]) ⇒ <code>Object</code> ⏏ **Kind**: Exported function | Param | Type | Default | Description | | --- | --- | --- | --- | | bytes | <code>number</code> | | the bytes value to convert. | | [options] | <code>object</code> | | optional config. | | [options.precision] | <code>number</code> | <code>1</code> | number of decimal places. | | [options.units] | <code>string</code> | <code>&quot;metric&quot;</code> | select `'metric'`, `'iec'`, `'metric_octet'` or `'iec_octet'` units. 
| **Example** ```js > const byteSize = require('byte-size') > byteSize(1580) { value: '1.6', unit: 'kB' } > byteSize(1580, { units: 'iec' }) { value: '1.5', unit: 'KiB' } > byteSize(1580, { units: 'iec', precision: 3 }) { value: '1.543', unit: 'KiB' } > byteSize(1580, { units: 'iec', precision: 0 }) { value: '2', unit: 'KiB' } > byteSize(1580, { units: 'metric_octet' }) { value: '1.6', unit: 'ko' } > byteSize(1580, { units: 'iec_octet' }) { value: '1.5', unit: 'Kio' } > byteSize(1580, { units: 'iec_octet' }).toString() '1.5 Kio' > const { value, unit } = byteSize(1580, { units: 'iec_octet' }) > `${value} ${unit}` '1.5 Kio' ``` ### Load anywhere This library is compatible with Node.js, the Web and any style of module loader. It can be loaded anywhere, natively without transpilation. Node.js: ```js const byteSize = require('byte-size') ``` Within Node.js with ECMAScript Module support enabled: ```js import byteSize from 'byte-size' ``` Within a modern browser ECMAScript Module: ```js import byteSize from './node_modules/byte-size/index.mjs' ``` Old browser (adds `window.byteSize`): ```html <script nomodule src="./node_modules/byte-size/dist/index.js"></script> ``` * * * &copy; 2014-18 Lloyd Brookes \<75pound@gmail.com\>. Documented by [jsdoc-to-markdown](https://github.com/jsdoc2md/jsdoc-to-markdown). # stream-iterate Iterate through the values in a stream. ``` npm install stream-iterate ``` [![build status](http://img.shields.io/travis/mafintosh/stream-iterate.svg?style=flat)](http://travis-ci.org/mafintosh/stream-iterate) ## Usage ``` js var iterate = require('stream-iterate') var from = require('from2') var stream = from.obj(['a', 'b', 'c']) var read = iterate(stream) loop() // recursively iterates through each item in the stream function loop () { read(function (err, data, next) { console.log(err, data) next() loop() }) } ``` If you don't call `next` and call `read` again the same `(err, value)` pair will be returned. You can use this module to implement stuff like [a streaming merge sort](https://github.com/mafintosh/stream-iterate/blob/master/test.js#L5-L47). ## License [MIT](LICENSE) # has > Object.prototype.hasOwnProperty.call shortcut ## Installation ```sh npm install --save has ``` ## Usage ```js var has = require('has'); has({}, 'hasOwnProperty'); // false has(Object.prototype, 'hasOwnProperty'); // true ``` # lodash._bindcallback v3.0.1 The [modern build](https://github.com/lodash/lodash/wiki/Build-Differences) of [lodash’s](https://lodash.com/) internal `bindCallback` exported as a [Node.js](http://nodejs.org/)/[io.js](https://iojs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash._bindcallback ``` In Node.js/io.js: ```js var bindCallback = require('lodash._bindcallback'); ``` See the [package source](https://github.com/lodash/lodash/blob/3.0.1-npm-packages/lodash._bindcallback) for more details. # npm-package-arg [![Build Status](https://travis-ci.org/npm/npm-package-arg.svg?branch=master)](https://travis-ci.org/npm/npm-package-arg) Parses package name and specifier passed to commands like `npm install` or `npm cache add`, or as found in `package.json` dependency sections. 
## EXAMPLES ```javascript var assert = require("assert") var npa = require("npm-package-arg") // Pass in the descriptor, and it'll return an object try { var parsed = npa("@bar/foo@1.2") } catch (ex) { … } ``` ## USING `var npa = require('npm-package-arg')` ### var result = npa(*arg*[, *where*]) * *arg* - a string that you might pass to `npm install`, like: `foo@1.2`, `@bar/foo@1.2`, `foo@user/foo`, `http://x.com/foo.tgz`, `git+https://github.com/user/foo`, `bitbucket:user/foo`, `foo.tar.gz`, `../foo/bar/` or `bar`. If the *arg* you provide doesn't have a specifier part, eg `foo` then the specifier will default to `latest`. * *where* - Optionally the path to resolve file paths relative to. Defaults to `process.cwd()` **Throws** if the package name is invalid, a dist-tag is invalid or a URL's protocol is not supported. ### var result = npa.resolve(*name*, *spec*[, *where*]) * *name* - The name of the module you want to install. For example: `foo` or `@bar/foo`. * *spec* - The specifier indicating where and how you can get this module. Something like: `1.2`, `^1.7.17`, `http://x.com/foo.tgz`, `git+https://github.com/user/foo`, `bitbucket:user/foo`, `file:foo.tar.gz` or `file:../foo/bar/`. If not included then the default is `latest`. * *where* - Optionally the path to resolve file paths relative to. Defaults to `process.cwd()` **Throws** if the package name is invalid, a dist-tag is invalid or a URL's protocol is not supported. ## RESULT OBJECT The objects that are returned by npm-package-arg contain the following keys: * `type` - One of the following strings: * `git` - A git repo * `tag` - A tagged version, like `"foo@latest"` * `version` - A specific version number, like `"foo@1.2.3"` * `range` - A version range, like `"foo@2.x"` * `file` - A local `.tar.gz`, `.tar` or `.tgz` file. * `directory` - A local directory. * `remote` - An http url (presumably to a tgz) * `registry` - If true this specifier refers to a resource hosted on a registry. This is true for `tag`, `version` and `range` types. * `name` - If known, the `name` field expected in the resulting pkg. * `scope` - If a name is something like `@org/module` then the `scope` field will be set to `@org`. If it doesn't have a scoped name, then scope is `null`. * `escapedName` - A version of `name` escaped to match the npm scoped packages specification. Mostly used when making requests against a registry. When `name` is `null`, `escapedName` will also be `null`. * `rawSpec` - The specifier part that was parsed out in calls to `npa(arg)`, or the value of `spec` in calls to `npa.resolve(name, spec)`. * `saveSpec` - The normalized specifier, for saving to package.json files. `null` for registry dependencies. * `fetchSpec` - The version of the specifier to be used to fetch this resource. `null` for shortcuts to hosted git dependencies as there isn't just one URL to try with them. * `gitRange` - If set, this is a semver specifier to match against git tags with * `gitCommittish` - If set, this is the specific committish to use with a git dependency. * `hosted` - If `from === 'hosted'` then this will be a `hosted-git-info` object. This property is not included when serializing the object as JSON. * `raw` - The original un-modified string that was provided. If called as `npa.resolve(name, spec)` then this will be `name + '@' + spec`.
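As a rough illustration of the result shape (an assumption pieced together from the field descriptions above, not output copied from the library), a registry range specifier parses to something like this:

```js
var npa = require('npm-package-arg')

var result = npa('foo@^1.7.17')
// Roughly, following the RESULT OBJECT fields described above:
//   result.type        -> 'range'       (a semver range)
//   result.registry    -> true          (hosted on a registry)
//   result.name        -> 'foo'
//   result.scope       -> null          (not a scoped package)
//   result.escapedName -> 'foo'
//   result.rawSpec     -> '^1.7.17'
//   result.fetchSpec   -> '^1.7.17'
//   result.saveSpec    -> null          (null for registry dependencies)
//   result.raw         -> 'foo@^1.7.17'
```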
# unpipe [![NPM Version][npm-image]][npm-url] [![NPM Downloads][downloads-image]][downloads-url] [![Node.js Version][node-image]][node-url] [![Build Status][travis-image]][travis-url] [![Test Coverage][coveralls-image]][coveralls-url] Unpipe a stream from all destinations. ## Installation ```sh $ npm install unpipe ``` ## API ```js var unpipe = require('unpipe') ``` ### unpipe(stream) Unpipes all destinations from a given stream. With stream 2+, this is equivalent to `stream.unpipe()`. When used with streams 1 style streams (typically Node.js 0.8 and below), this module attempts to undo the actions done in `stream.pipe(dest)`. ## License [MIT](LICENSE) [npm-image]: https://img.shields.io/npm/v/unpipe.svg [npm-url]: https://npmjs.org/package/unpipe [node-image]: https://img.shields.io/node/v/unpipe.svg [node-url]: http://nodejs.org/download/ [travis-image]: https://img.shields.io/travis/stream-utils/unpipe.svg [travis-url]: https://travis-ci.org/stream-utils/unpipe [coveralls-image]: https://img.shields.io/coveralls/stream-utils/unpipe.svg [coveralls-url]: https://coveralls.io/r/stream-utils/unpipe?branch=master [downloads-image]: https://img.shields.io/npm/dm/unpipe.svg [downloads-url]: https://npmjs.org/package/unpipe # isarray `Array#isArray` for older browsers. ## Usage ```js var isArray = require('isarray'); console.log(isArray([])); // => true console.log(isArray({})); // => false ``` ## Installation With [npm](http://npmjs.org) do ```bash $ npm install isarray ``` Then bundle for the browser with [browserify](https://github.com/substack/browserify). With [component](http://component.io) do ```bash $ component install juliangruber/isarray ``` ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. http-proxy-agent ================ ### An HTTP(s) proxy `http.Agent` implementation for HTTP [![Build Status](https://travis-ci.org/TooTallNate/node-http-proxy-agent.svg?branch=master)](https://travis-ci.org/TooTallNate/node-http-proxy-agent) This module provides an `http.Agent` implementation that connects to a specified HTTP or HTTPS proxy server, and can be used with the built-in `http` module. __Note:__ For HTTP proxy usage with the `https` module, check out [`node-https-proxy-agent`](https://github.com/TooTallNate/node-https-proxy-agent). 
Installation ------------ Install with `npm`: ``` bash $ npm install http-proxy-agent ``` Example ------- ``` js var url = require('url'); var http = require('http'); var HttpProxyAgent = require('http-proxy-agent'); // HTTP/HTTPS proxy to connect to var proxy = process.env.http_proxy || 'http://168.63.76.32:3128'; console.log('using proxy server %j', proxy); // HTTP endpoint for the proxy to connect to var endpoint = process.argv[2] || 'http://nodejs.org/api/'; console.log('attempting to GET %j', endpoint); var opts = url.parse(endpoint); // create an instance of the `HttpProxyAgent` class with the proxy server information var agent = new HttpProxyAgent(proxy); opts.agent = agent; http.get(opts, function (res) { console.log('"response" event!', res.headers); res.pipe(process.stdout); }); ``` License ------- (The MIT License) Copyright (c) 2013 Nathan Rajlich &lt;nathan@tootallnate.net&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # promzard A prompting wizard for building files from specialized PromZard modules. Used by `npm init`. A reimplementation of @SubStack's [prompter](https://github.com/substack/node-prompter), which does not use AST traversal. From another point of view, it's a reimplementation of [@Marak](https://github.com/marak)'s [wizard](https://github.com/Marak/wizard) which doesn't use schemas. The goal is a nice drop-in enhancement for `npm init`. ## Usage ```javascript var promzard = require('promzard') promzard(inputFile, optionalContextAdditions, function (er, data) { // .. you know what you doing .. }) ``` In the `inputFile` you can have something like this: ```javascript var fs = require('fs') module.exports = { "greeting": prompt("Who shall you greet?", "world", function (who) { return "Hello, " + who }), "filename": __filename, "directory": function (cb) { fs.readdir(__dirname, cb) } } ``` When run, promzard will display the prompts and resolve the async functions in order, and then either give you an error, or the resolved data, ready to be dropped into a JSON file or some other place. ### promzard(inputFile, ctx, callback) The inputFile is just a node module. You can require() things, set module.exports, etc. Whatever that module exports is the result, and it is walked over to call any functions as described below. The only caveat is that you must give PromZard the full absolute path to the module (you can get this via Node's `require.resolve`.) Also, the `prompt` function is injected into the context object, so watch out. Whatever you put in that `ctx` will of course also be available in the module. 
You can get quite fancy with this, passing in existing configs and so on. ### Class: promzard.PromZard(file, ctx) Just like the `promzard` function, but the EventEmitter that makes it all happen. Emits either a `data` event with the data, or an `error` event if it blows up. If `error` is emitted, then `data` never will be. ### prompt(...) In the promzard input module, you can call the `prompt` function. This prompts the user to input some data. The arguments are interpreted based on type: 1. `string` The first string encountered is the prompt. The second is the default value. 2. `function` A transformer function which receives the data and returns something else. More than meets the eye. 3. `object` The `prompt` member is the prompt, the `default` member is the default value, and the `transform` is the transformer. Whatever the final value is, that's what will be put on the resulting object. ### Functions If there are any functions on the promzard input module's exports, then promzard will call each of them with a callback. This way, your module can do asynchronous actions if necessary to validate or ascertain whatever needs verification. The functions are called in the context of the ctx object, and are given a single argument, which is a callback that should be called with either an error, or the result to assign to that spot. In the async function, you can also call prompt() and return the result of the prompt in the callback. For example, this works fine in a promzard module: ``` exports.asyncPrompt = function (cb) { fs.stat(someFile, function (er, st) { // if there's an error, no prompt, just error // otherwise prompt and use the actual file size as the default cb(er, prompt('file size', st.size)) }) } ``` You can also return other async functions in the async function callback. Though that's a bit silly, it could be a handy way to reuse functionality in some cases. ### Sync vs Async The `prompt()` function is not synchronous, though it appears that way. It just returns a token that is swapped out when the data object is walked over asynchronously later. For that reason, prompt() calls whose results don't end up on the data object are never shown to the user. For example, this will only prompt once: ``` exports.promptThreeTimes = prompt('prompt me once', 'shame on you') exports.promptThreeTimes = prompt('prompt me twice', 'um....') exports.promptThreeTimes = prompt('you cant prompt me again') ``` ### Isn't this exactly the sort of 'looks sync' that you said was bad about other libraries? Yeah, sorta. I wouldn't use promzard for anything more complicated than a wizard that spits out prompts to set up a config file or something. Maybe there are other use cases I haven't considered. # libnpmorg [![npm version](https://img.shields.io/npm/v/libnpmorg.svg)](https://npm.im/libnpmorg) [![license](https://img.shields.io/npm/l/libnpmorg.svg)](https://npm.im/libnpmorg) [![Travis](https://img.shields.io/travis/npm/libnpmorg.svg)](https://travis-ci.org/npm/libnpmorg) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/zkat/libnpmorg?svg=true)](https://ci.appveyor.com/project/zkat/libnpmorg) [![Coverage Status](https://coveralls.io/repos/github/npm/libnpmorg/badge.svg?branch=latest)](https://coveralls.io/github/npm/libnpmorg?branch=latest) [`libnpmorg`](https://github.com/npm/libnpmorg) is a Node.js library for programmatically accessing the [npm Org membership API](https://github.com/npm/registry/blob/master/docs/orgs/memberships.md#membership-detail).
## Example ```js const org = require('libnpmorg') console.log(await org.ls('myorg', {token: 'deadbeef'})) => Roster { zkat: 'developer', iarna: 'admin', isaacs: 'owner' } ``` ## Install `$ npm install libnpmorg` ## Table of Contents * [Example](#example) * [Install](#install) * [API](#api) * [hook opts](#opts) * [`set()`](#set) * [`rm()`](#rm) * [`ls()`](#ls) * [`ls.stream()`](#ls-stream) ### API #### <a name="opts"></a> `opts` for `libnpmorg` commands `libnpmorg` uses [`npm-registry-fetch`](https://npm.im/npm-registry-fetch). All options are passed through directly to that library, so please refer to [its own `opts` documentation](https://www.npmjs.com/package/npm-registry-fetch#fetch-options) for options that can be passed in. A couple of options of note for those in a hurry: * `opts.token` - can be passed in and will be used as the authentication token for the registry. For other ways to pass in auth details, see the n-r-f docs. * `opts.otp` - certain operations will require an OTP token to be passed in. If a `libnpmorg` command fails with `err.code === EOTP`, please retry the request with `{otp: <2fa token>}` * `opts.Promise` - If you pass this in, the Promises returned by `libnpmorg` commands will use this Promise class instead. For example: `{Promise: require('bluebird')}` #### <a name="set"></a> `> org.set(org, user, [role], [opts]) -> Promise` The returned Promise resolves to a [Membership Detail](https://github.com/npm/registry/blob/master/docs/orgs/memberships.md#membership-detail) object. The `role` is optional and should be one of `admin`, `owner`, or `developer`. `developer` is the default if no `role` is provided. `org` and `user` must be scope names for the org name and user name respectively. They can optionally be prefixed with `@`. See also: [`PUT /-/org/:scope/user`](https://github.com/npm/registry/blob/master/docs/orgs/memberships.md#org-membership-replace) ##### Example ```javascript await org.set('@myorg', '@myuser', 'admin', {token: 'deadbeef'}) => MembershipDetail { org: { name: 'myorg', size: 15 }, user: 'myuser', role: 'admin' } ``` #### <a name="rm"></a> `> org.rm(org, user, [opts]) -> Promise` The Promise resolves to `null` on success. `org` and `user` must be scope names for the org name and user name respectively. They can optionally be prefixed with `@`. See also: [`DELETE /-/org/:scope/user`](https://github.com/npm/registry/blob/master/docs/orgs/memberships.md#org-membership-delete) ##### Example ```javascript await org.rm('myorg', 'myuser', {token: 'deadbeef'}) ``` #### <a name="ls"></a> `> org.ls(org, [opts]) -> Promise` The Promise resolves to a [Roster](https://github.com/npm/registry/blob/master/docs/orgs/memberships.md#roster) object. `org` must be a scope name for an org, and can be optionally prefixed with `@`. See also: [`GET /-/org/:scope/user`](https://github.com/npm/registry/blob/master/docs/orgs/memberships.md#org-roster) ##### Example ```javascript await org.ls('myorg', {token: 'deadbeef'}) => Roster { zkat: 'developer', iarna: 'admin', isaacs: 'owner' } ``` #### <a name="ls-stream"></a> `> org.ls.stream(org, [opts]) -> Stream` Returns a stream of entries for a [Roster](https://github.com/npm/registry/blob/master/docs/orgs/memberships.md#roster), with each emitted entry in `[key, value]` format. `org` must be a scope name for an org, and can be optionally prefixed with `@`. The returned stream is a valid `Symbol.asyncIterator`. 
See also: [`GET /-/org/:scope/user`](https://github.com/npm/registry/blob/master/docs/orgs/memberships.md#org-roster) ##### Example ```javascript for await (let [user, role] of org.ls.stream('myorg', {token: 'deadbeef'})) { console.log(`user: ${user} (${role})`) } => user: zkat (developer) user: iarna (admin) user: isaacs (owner) ``` # read-cmd-shim Figure out what a [`cmd-shim`](https://github.com/ForbesLindesay/cmd-shim) is pointing at. This acts as the equivalent of [`fs.readlink`](https://nodejs.org/api/fs.html#fs_fs_readlink_path_callback). ### Usage ``` var readCmdShim = require('read-cmd-shim') readCmdShim('/path/to/shim.cmd', function (er, destination) { … }) var destination = readCmdShim.sync('/path/to/shim.cmd') ``` ### readCmdShim(path, callback) Reads the `cmd-shim` located at `path` and calls back with the _relative_ path that the shim points at. Consider this as roughly the equivalent of `fs.readlink`. This can read both the `.cmd` style shims that are run by the Windows Command Prompt and Powershell, and the kind without any extension that are used by Cygwin. This can return errors that `fs.readFile` returns, except that they'll include a stack trace from where `readCmdShim` was called. Plus it can return a special `ENOTASHIM` exception, when it can't find a cmd-shim in the file referenced by `path`. This should only happen if you pass in a non-command shim. ### readCmdShim.sync(path) Same as above but synchronous. Errors are thrown. # isStream [![Build Status](https://secure.travis-ci.org/rvagg/isstream.png)](http://travis-ci.org/rvagg/isstream) **Test if an object is a `Stream`** [![NPM](https://nodei.co/npm/isstream.svg)](https://nodei.co/npm/isstream/) The missing `Stream.isStream(obj)`: determine if an object is a standard Node.js `Stream`. Works for Node-core `Stream` objects (for 0.8, 0.10, 0.11, and in theory, older and newer versions) and all versions of **[readable-stream](https://github.com/isaacs/readable-stream)**. ## Usage: ```js var isStream = require('isstream') var Stream = require('stream') isStream(new Stream()) // true isStream({}) // false isStream(new Stream.Readable()) // true isStream(new Stream.Writable()) // true isStream(new Stream.Duplex()) // true isStream(new Stream.Transform()) // true isStream(new Stream.PassThrough()) // true ``` ## But wait! There's more! You can also test for `isReadable(obj)`, `isWritable(obj)` and `isDuplex(obj)` to test for implementations of Streams2 (and Streams3) base classes.
```js var isReadable = require('isstream').isReadable var isWritable = require('isstream').isWritable var isDuplex = require('isstream').isDuplex var Stream = require('stream') isReadable(new Stream()) // false isWritable(new Stream()) // false isDuplex(new Stream()) // false isReadable(new Stream.Readable()) // true isReadable(new Stream.Writable()) // false isReadable(new Stream.Duplex()) // true isReadable(new Stream.Transform()) // true isReadable(new Stream.PassThrough()) // true isWritable(new Stream.Readable()) // false isWritable(new Stream.Writable()) // true isWritable(new Stream.Duplex()) // true isWritable(new Stream.Transform()) // true isWritable(new Stream.PassThrough()) // true isDuplex(new Stream.Readable()) // false isDuplex(new Stream.Writable()) // false isDuplex(new Stream.Duplex()) // true isDuplex(new Stream.Transform()) // true isDuplex(new Stream.PassThrough()) // true ``` *Reminder: when implementing your own streams, please [use **readable-stream** rather than core streams](http://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html).* ## License **isStream** is Copyright (c) 2015 Rod Vagg [@rvagg](https://twitter.com/rvagg) and licenced under the MIT licence. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE.md file for more details. # performance-now [![Build Status](https://travis-ci.org/braveg1rl/performance-now.png?branch=master)](https://travis-ci.org/braveg1rl/performance-now) [![Dependency Status](https://david-dm.org/braveg1rl/performance-now.png)](https://david-dm.org/braveg1rl/performance-now) Implements a function similar to `performance.now` (based on `process.hrtime`). Modern browsers have a `window.performance` object with - among others - a `now` method which gives time in milliseconds, but with sub-millisecond precision. This module offers the same function based on the Node.js native `process.hrtime` function. Using `process.hrtime` means that the reported time will be monotonically increasing, and not subject to clock-drift. According to the [High Resolution Time specification](http://www.w3.org/TR/hr-time/), the number of milliseconds reported by `performance.now` should be relative to the value of `performance.timing.navigationStart`. In the current version of the module (2.0) the reported time is relative to the time the current Node process has started (inferred from `process.uptime()`). Version 1.0 reported a different time. The reported time was relative to the time the module was loaded (i.e. the time it was first `require`d). If you need this functionality, version 1.0 is still available on NPM. ## Example usage ```javascript var now = require("performance-now") var start = now() var end = now() console.log(start.toFixed(3)) // the number of milliseconds the current node process is running console.log((start-end).toFixed(3)) // ~ 0.002 on my system ``` Running the now function two times right after each other yields a time difference of a few microseconds. Given this overhead, I think it's best to assume that the precision of intervals computed with this method is not higher than 10 microseconds, if you don't know the exact overhead on your own system. ## License performance-now is released under the [MIT License](http://opensource.org/licenses/MIT). 
Copyright (c) 2017 Braveg1rl # from2 [![Flattr this!](https://api.flattr.com/button/flattr-badge-large.png)](https://flattr.com/submit/auto?user_id=hughskennedy&url=http://github.com/hughsk/from2&title=from2&description=hughsk/from2%20on%20GitHub&language=en_GB&tags=flattr,github,javascript&category=software)[![experimental](http://hughsk.github.io/stability-badges/dist/experimental.svg)](http://github.com/hughsk/stability-badges) # `from2` is a high-level module for creating readable streams that properly handle backpressure. Convenience wrapper for [readable-stream](http://github.com/isaacs/readable-stream)'s `ReadableStream` base class, with an API lifted from [from](http://github.com/dominictarr/from) and [through2](http://github.com/rvagg/through2). ## Usage ## [![from2](https://nodei.co/npm/from2.png?mini=true)](https://nodei.co/npm/from2) ### `stream = from2([opts], read)` ### Where `opts` are the options to pass on to the `ReadableStream` constructor, and `read(size, next)` is called when data is requested from the stream. * `size` is the recommended amount of data (in bytes) to retrieve. * `next(err)` should be called when you're ready to emit more data. For example, here's a readable stream that emits the contents of a given string: ``` javascript var from = require('from2') module.exports = fromString function fromString(string) { return from(function(size, next) { // if there's no more content // left in the string, close the stream. if (string.length <= 0) return this.push(null) // Pull in a new chunk of text, // removing it from the string. var chunk = string.slice(0, size) string = string.slice(size) // Emit "chunk" from the stream. next(null, chunk) }) } // pipe "hello world" out // to stdout. fromString('hello world').pipe(process.stdout) ``` ### `stream = from2.obj([opts], read)` ### Shorthand for `from2({ objectMode: true }, read)`. ### `createStream = from2.ctor([opts], read)` ### If you're creating similar streams in quick succession you can improve performance by generating a stream **constructor** that you can reuse instead of creating one-off streams on each call. Takes the same options as `from2`, instead returning a constructor which you can use to create new streams. ## License ## MIT. See [LICENSE.md](http://github.com/hughsk/from2/blob/master/LICENSE.md) for details. # signal-exit [![Build Status](https://travis-ci.org/tapjs/signal-exit.png)](https://travis-ci.org/tapjs/signal-exit) [![Coverage](https://coveralls.io/repos/tapjs/signal-exit/badge.svg?branch=master)](https://coveralls.io/r/tapjs/signal-exit?branch=master) [![NPM version](https://img.shields.io/npm/v/signal-exit.svg)](https://www.npmjs.com/package/signal-exit) [![Windows Tests](https://img.shields.io/appveyor/ci/bcoe/signal-exit/master.svg?label=Windows%20Tests)](https://ci.appveyor.com/project/bcoe/signal-exit) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) When you want to fire an event no matter how a process exits: * reaching the end of execution. * explicitly having `process.exit(code)` called. * having `process.kill(pid, sig)` called. * receiving a fatal signal from outside the process Use `signal-exit`. ```js var onExit = require('signal-exit') onExit(function (code, signal) { console.log('process exited!') }) ``` ## API `var remove = onExit(function (code, signal) {}, options)` The return value of the function is a function that will remove the handler.
Note that the function *only* fires for signals if the signal would cause the process to exit. That is, there are no other listeners, and it is a fatal signal. ## Options * `alwaysLast`: Run this handler after any other signal or exit handlers. This causes `process.emit` to be monkeypatched. # colors.js [![Build Status](https://travis-ci.org/Marak/colors.js.svg?branch=master)](https://travis-ci.org/Marak/colors.js) [![version](https://img.shields.io/npm/v/colors.svg)](https://www.npmjs.org/package/colors) [![dependencies](https://david-dm.org/Marak/colors.js.svg)](https://david-dm.org/Marak/colors.js) [![devDependencies](https://david-dm.org/Marak/colors.js/dev-status.svg)](https://david-dm.org/Marak/colors.js#info=devDependencies) Please check out the [roadmap](ROADMAP.md) for upcoming features and releases. Please open Issues to provide feedback, and check the `develop` branch for the latest bleeding-edge updates. ## get color and style in your node.js console ![Demo](https://raw.githubusercontent.com/Marak/colors.js/master/screenshots/colors.png) ## Installation npm install colors ## colors and styles! ### text colors - black - red - green - yellow - blue - magenta - cyan - white - gray - grey ### background colors - bgBlack - bgRed - bgGreen - bgYellow - bgBlue - bgMagenta - bgCyan - bgWhite ### styles - reset - bold - dim - italic - underline - inverse - hidden - strikethrough ### extras - rainbow - zebra - america - trap - random ## Usage By popular demand, `colors` now ships with two types of usages! The super nifty way ```js var colors = require('colors'); console.log('hello'.green); // outputs green text console.log('i like cake and pies'.underline.red) // outputs red underlined text console.log('inverse the color'.inverse); // inverses the color console.log('OMG Rainbows!'.rainbow); // rainbow console.log('Run the trap'.trap); // Drops the bass ``` or a slightly less nifty way which doesn't extend `String.prototype` ```js var colors = require('colors/safe'); console.log(colors.green('hello')); // outputs green text console.log(colors.red.underline('i like cake and pies')) // outputs red underlined text console.log(colors.inverse('inverse the color')); // inverses the color console.log(colors.rainbow('OMG Rainbows!')); // rainbow console.log(colors.trap('Run the trap')); // Drops the bass ``` I prefer the first way. Some people seem to be afraid of extending `String.prototype` and prefer the second way. If you are writing good code you will never have an issue with the first approach. If you really don't want to touch `String.prototype`, the second usage will not touch the `String` native object.
## Disabling Colors To disable colors you can pass the following arguments in the command line to your application: ```bash node myapp.js --no-color ``` ## Console.log [string substitution](http://nodejs.org/docs/latest/api/console.html#console_console_log_data) ```js var name = 'Marak'; console.log(colors.green('Hello %s'), name); // outputs -> 'Hello Marak' ``` ## Custom themes ### Using standard API ```js var colors = require('colors'); colors.setTheme({ silly: 'rainbow', input: 'grey', verbose: 'cyan', prompt: 'grey', info: 'green', data: 'grey', help: 'cyan', warn: 'yellow', debug: 'blue', error: 'red' }); // outputs red text console.log("this is an error".error); // outputs yellow text console.log("this is a warning".warn); ``` ### Using string safe API ```js var colors = require('colors/safe'); // set single property var error = colors.red; error('this is red'); // set theme colors.setTheme({ silly: 'rainbow', input: 'grey', verbose: 'cyan', prompt: 'grey', info: 'green', data: 'grey', help: 'cyan', warn: 'yellow', debug: 'blue', error: 'red' }); // outputs red text console.log(colors.error("this is an error")); // outputs yellow text console.log(colors.warn("this is a warning")); ``` ### Combining Colors ```javascript var colors = require('colors'); colors.setTheme({ custom: ['red', 'underline'] }); console.log('test'.custom); ``` *Protip: There is a secret undocumented style in `colors`. If you find the style you can summon him.* gauge ===== A nearly stateless terminal based horizontal gauge / progress bar. ```javascript var Gauge = require("gauge") var gauge = new Gauge() gauge.show("test", 0.20) gauge.pulse("this") gauge.hide() ``` ![](gauge-demo.gif) ### CHANGES FROM 1.x Gauge 2.x is a breaking release, please see the [changelog] for details on what's changed if you were previously a user of this module. [changelog]: CHANGELOG.md ### THE GAUGE CLASS This is the typical interface to the module– it provides a pretty fire-and-forget interface to displaying your status information. ``` var Gauge = require("gauge") var gauge = new Gauge([stream], [options]) ``` * **stream** – *(optional, default STDERR)* A stream that progress bar updates are to be written to. Gauge honors backpressure and will pause most writing if it is indicated. * **options** – *(optional)* An option object. Constructs a new gauge. Gauges are drawn on a single line, and are not drawn if **stream** isn't a tty and a tty isn't explicitly provided. If **stream** is a terminal or if you pass in **tty** to **options** then we will detect terminal resizes and redraw to fit. We do this by watching for `resize` events on the tty. (To work around a bug in versions of Node prior to 2.5.0, we watch for them on stdout if the tty is stderr.) Resizes to larger window sizes will be clean, but shrinking the window will always result in some cruft. **IMPORTANT:** If you previously were passing in a non-tty stream but you still want output (for example, a stream wrapped by the `ansi` module) then you need to pass in the **tty** option below, as `gauge` needs access to the underlying tty in order to do things like terminal resizes and terminal width detection. The **options** object can have the following properties, all of which are optional: * **updateInterval**: How often gauge updates should be drawn, in milliseconds. * **fixedFramerate**: Defaults to false on node 0.8, true on everything else.
When this is true a timer is created to trigger once every `updateInterval` ms, when false, updates are printed as soon as they come in but updates more often than `updateInterval` are ignored. The reason 0.8 doesn't have this set to true is that it can't `unref` its timer and so it would stop your program from exiting– if you want to use this feature with 0.8 just make sure you call `gauge.disable()` before you expect your program to exit. * **themes**: A themeset to use when selecting the theme to use. Defaults to `gauge/themes`, see the [themes] documentation for details. * **theme**: Select a theme for use, it can be a: * Theme object, in which case the **themes** is not used. * The name of a theme, which will be looked up in the current *themes* object. * A configuration object with any of `hasUnicode`, `hasColor` or `platform` keys, which will be used to override our guesses when making a default theme selection. If no theme is selected then a default is picked using a combination of our best guesses at your OS, color support and unicode support. * **template**: Describes what you want your gauge to look like. The default is what npm uses. Detailed [documentation] is later in this document. * **hideCursor**: Defaults to true. If true, then the cursor will be hidden while the gauge is displayed. * **tty**: The tty that you're ultimately writing to. Defaults to the same as **stream**. This is used for detecting the width of the terminal and resizes. The width used is `tty.columns - 1`. If no tty is available then a width of `79` is assumed. * **enabled**: Defaults to true if `tty` is a TTY, false otherwise. If true the gauge starts enabled. If disabled then all update commands are ignored and no gauge will be printed until you call `.enable()`. * **Plumbing**: The class to use to actually generate the gauge for printing. This defaults to `require('gauge/plumbing')` and ordinarily you shouldn't need to override this. * **cleanupOnExit**: Defaults to true. Ordinarily we register an exit handler to make sure your cursor is turned back on and the progress bar erased when your process exits, even if you Ctrl-C out or otherwise exit unexpectedly. You can disable this and it won't register the exit handler. [has-unicode]: https://www.npmjs.com/package/has-unicode [themes]: #themes [documentation]: #templates #### `gauge.show(section | status, [completed])` The first argument is either the section, the name of the current thing contributing to progress, or an object with keys like **section**, **subsection** & **completed** (or any others you have types for in a custom template). If you don't want to update or set any of these you can pass `null` and it will be ignored. The second argument is the percent completed as a value between 0 and 1. Without it, completion is just not updated. You'll also note that completion can be passed in as part of a status object as the first argument. If both it and the completed argument are passed in, the completed argument wins. #### `gauge.hide([cb])` Removes the gauge from the terminal. Optionally, callback `cb` after IO has had an opportunity to happen (currently this just means after `setImmediate` has called back.) It turns out this is important when you're pausing the progress bar on one filehandle and printing to another– otherwise (with a big enough print) node can end up printing the "end progress bar" bits to the progress bar filehandle while other stuff is printing to another filehandle.
These getting interleaved can cause corruption in some terminals. #### `gauge.pulse([subsection])` * **subsection** – *(optional)* The specific thing that triggered this pulse Spins the spinner in the gauge to show output. If **subsection** is included then it will be combined with the last name passed to `gauge.show`. #### `gauge.disable()` Hides the gauge and ignores further calls to `show` or `pulse`. #### `gauge.enable()` Shows the gauge and resumes updating when `show` or `pulse` is called. #### `gauge.isEnabled()` Returns true if the gauge is enabled. #### `gauge.setThemeset(themes)` Change the themeset to select a theme from. The same as the `themes` option used in the constructor. The theme will be reselected from this themeset. #### `gauge.setTheme(theme)` Change the active theme, will be displayed with the next show or pulse. This can be: * Theme object, in which case the **themes** is not used. * The name of a theme, which will be looked up in the current *themes* object. * A configuration object with any of `hasUnicode`, `hasColor` or `platform` keys, which will be used to override our guesses when making a default theme selection. If no theme is selected then a default is picked using a combination of our best guesses at your OS, color support and unicode support. #### `gauge.setTemplate(template)` Change the active template, will be displayed with the next show or pulse. ### Tracking Completion If you have more than one thing going on that you want to track completion of, you may find the related [are-we-there-yet] helpful. Its `change` event can be wired up to the `show` method to get a more traditional progress bar interface. [are-we-there-yet]: https://www.npmjs.com/package/are-we-there-yet ### THEMES ``` var themes = require('gauge/themes') // fetch the default color unicode theme for this platform var ourTheme = themes({hasUnicode: true, hasColor: true}) // fetch the default non-color unicode theme for osx var ourTheme = themes({hasUnicode: true, hasColor: false, platform: 'darwin'}) // create a new theme based on the color ascii theme for this platform // that brackets the progress bar with arrows var ourTheme = themes.newTheme(themes({hasUnicode: false, hasColor: true}), { preProgressbar: '→', postProgressbar: '←' }) ``` The object returned by `gauge/themes` is an instance of the `ThemeSet` class. ``` var ThemeSet = require('gauge/theme-set') var themes = new ThemeSet() // or var themes = require('gauge/themes') var mythemes = themes.newThemeset() // creates a new themeset based on the default themes ``` #### themes(opts) #### themes.getDefault(opts) The themeset object is itself a function that fetches the default theme based on platform, unicode and color support. Options is an object with the following properties: * **hasUnicode** - If true, fetch a unicode theme, if no unicode theme is available then a non-unicode theme will be used. * **hasColor** - If true, fetch a color theme, if no color theme is available a non-color theme will be used. * **platform** (optional) - Defaults to `process.platform`. If no platform match is available then `fallback` is used instead. If no compatible theme can be found then an error will be thrown with a `code` of `EMISSINGTHEME`. #### themes.addTheme(themeName, themeObj) #### themes.addTheme(themeName, [parentTheme], newTheme) Adds a named theme to the themeset. You can pass in either a theme object, as returned by `themes.newTheme` or the arguments you'd pass to `themes.newTheme`.
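As a rough sketch (the theme name and property values below are hypothetical), registering a named theme derived from a platform default might look like:

```javascript
var themes = require('gauge/themes')

// start from the default non-unicode, non-color theme for this platform
var parent = themes({hasUnicode: false, hasColor: false})

// register a named variant that brackets the progress bar with ASCII arrows
themes.addTheme('ascii-arrows', parent, {
  preProgressbar: '>',
  postProgressbar: '<'
})
```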
#### themes.getThemeNames() Return a list of all of the names of the themes in this themeset. Suitable for use in `themes.getTheme(…)`. #### themes.getTheme(name) Returns the theme object from this theme set named `name`. If `name` does not exist in this themeset an error will be thrown with a `code` of `EMISSINGTHEME`. #### themes.setDefault([opts], themeName) `opts` is an object with the following properties. * **platform** - Defaults to `'fallback'`. If your theme is platform specific, specify that here with the platform from `process.platform`, eg, `win32`, `darwin`, etc. * **hasUnicode** - Defaults to `false`. If your theme uses unicode you should set this to true. * **hasColor** - Defaults to `false`. If your theme uses color you should set this to true. `themeName` is the name of the theme (as given to `addTheme`) to use for this set of `opts`. #### themes.newTheme([parentTheme,] newTheme) Create a new theme object based on `parentTheme`. If no `parentTheme` is provided then a minimal parentTheme that defines functions for rendering the activity indicator (spinner) and progress bar will be defined. (This fallback parent is defined in `gauge/base-theme`.) newTheme should be a bare object– we'll start by discussing the properties defined by the default themes: * **preProgressbar** - displayed prior to the progress bar, if the progress bar is displayed. * **postProgressbar** - displayed after the progress bar, if the progress bar is displayed. * **progressBarTheme** - The subtheme passed through to the progress bar renderer, it's an object with `complete` and `remaining` properties that are the strings you want repeated for those sections of the progress bar. * **activityIndicatorTheme** - The theme for the activity indicator (spinner), this can either be a string, in which case each character is a different step, or an array of strings. * **preSubsection** - Displayed as a separator between the `section` and `subsection` when the latter is printed. More generally, themes can have any value that would be a valid value when rendering templates. The properties in the theme are used when their name matches a type in the template. Their values can be: * **strings & numbers** - They'll be included as is * **function (values, theme, width)** - Should return what you want in your output. *values* is an object with values provided via `gauge.show`, *theme* is the theme specific to this item (see below) or this theme object, and *width* is the number of characters wide your result should be. There are a couple of special prefixes: * **pre** - Is shown prior to the property, if it's displayed. * **post** - Is shown after the property, if it's displayed. And one special suffix: * **Theme** - Its value is passed to a function-type item as the theme. #### themes.addToAllThemes(theme) This *mixes-in* `theme` into all themes currently defined. It also adds it to the default parent theme for this themeset, so future themes added to this themeset will get the values from `theme` by default. #### themes.newThemeset() Copy the current themeset into a new one. This allows you to easily inherit one themeset from another. ### TEMPLATES A template is an array of objects and strings that, after being evaluated, will be turned into the gauge line.
The default template is: ```javascript [ {type: 'progressbar', length: 20}, {type: 'activityIndicator', kerning: 1, length: 1}, {type: 'section', kerning: 1, default: ''}, {type: 'subsection', kerning: 1, default: ''} ] ``` The various template elements can either be **plain strings**, in which case they will be included verbatim in the output, or objects with the following properties: * *type* can be any of the following plus any keys you pass into `gauge.show` plus any keys you have on a custom theme. * `section` – What big thing you're working on now. * `subsection` – What component of that thing is currently working. * `activityIndicator` – Shows a spinner using the `activityIndicatorTheme` from your active theme. * `progressbar` – A progress bar representing your current `completed` using the `progressbarTheme` from your active theme. * *kerning* – Number of spaces that must be between this item and other items, if this item is displayed at all. * *maxLength* – The maximum length for this element. If its value is longer it will be truncated. * *minLength* – The minimum length for this element. If its value is shorter it will be padded according to the *align* value. * *align* – (Default: left) Possible values "left", "right" and "center". Works as you'd expect from word processors. * *length* – Provides a single value for both *minLength* and *maxLength*. If both *length* and *minLength* or *maxLength* are specified then the latter take precedence. * *value* – A literal value to use for this template item. * *default* – A default value to use for this template item if a value wasn't otherwise passed in. ### PLUMBING This is the super simple, assume nothing, do no magic internals used by gauge to implement its ordinary interface. ``` var Plumbing = require('gauge/plumbing') var gauge = new Plumbing(theme, template, width) ``` * **theme**: The theme to use. * **template**: The template to use. * **width**: How wide your gauge should be. #### `gauge.setTheme(theme)` Change the active theme. #### `gauge.setTemplate(template)` Change the active template. #### `gauge.setWidth(width)` Change the width to render at. #### `gauge.hide()` Return the string necessary to hide the progress bar. #### `gauge.hideCursor()` Return a string to hide the cursor. #### `gauge.showCursor()` Return a string to show the cursor. #### `gauge.show(status)` Using `status` for values, render the provided template with the theme and return a string that is suitable for printing to update the gauge. # readable-stream ***Node-core v8.11.1 streams for userland*** [![Build Status](https://travis-ci.org/nodejs/readable-stream.svg?branch=master)](https://travis-ci.org/nodejs/readable-stream) [![NPM](https://nodei.co/npm/readable-stream.png?downloads=true&downloadRank=true)](https://nodei.co/npm/readable-stream/) [![NPM](https://nodei.co/npm-dl/readable-stream.png?&months=6&height=3)](https://nodei.co/npm/readable-stream/) [![Sauce Test Status](https://saucelabs.com/browser-matrix/readable-stream.svg)](https://saucelabs.com/u/readable-stream) ```bash npm install --save readable-stream ``` ***Node-core streams for userland*** This package is a mirror of the Streams2 and Streams3 implementations in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.11.1/docs/api/stream.html).
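As a minimal sketch (assuming a CommonJS setup), the mirrored classes are used in place of the core module:

```js
// instead of require('stream'), pull the classes from the userland mirror
var Readable = require('readable-stream').Readable

var r = new Readable({
  read: function () {
    this.push('hello world\n')
    this.push(null) // end the stream
  }
})

r.pipe(process.stdout)
```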
If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use **readable-stream** *only* and avoid the *"stream"* module in Node-core, for background see [this blogpost](http://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html). As of version 2.0.0 **readable-stream** uses semantic versioning. # Streams Working Group `readable-stream` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. <a name="members"></a> ## Team Members * **Chris Dickinson** ([@chrisdickinson](https://github.com/chrisdickinson)) &lt;christopher.s.dickinson@gmail.com&gt; - Release GPG key: 9554F04D7259F04124DE6B476D5A82AC7E37093B * **Calvin Metcalf** ([@calvinmetcalf](https://github.com/calvinmetcalf)) &lt;calvin.metcalf@gmail.com&gt; - Release GPG key: F3EF5F62A87FC27A22E643F714CE4FF5015AA242 * **Rod Vagg** ([@rvagg](https://github.com/rvagg)) &lt;rod@vagg.org&gt; - Release GPG key: DD8F2338BAE7501E3DD5AC78C273792F7D83545D * **Sam Newman** ([@sonewman](https://github.com/sonewman)) &lt;newmansam@outlook.com&gt; * **Mathias Buus** ([@mafintosh](https://github.com/mafintosh)) &lt;mathiasbuus@gmail.com&gt; * **Domenic Denicola** ([@domenic](https://github.com/domenic)) &lt;d@domenic.me&gt; * **Matteo Collina** ([@mcollina](https://github.com/mcollina)) &lt;matteo.collina@gmail.com&gt; - Release GPG key: 3ABC01543F22DD2239285CDD818674489FBC127E * **Irina Shestak** ([@lrlna](https://github.com/lrlna)) &lt;shestak.irina@gmail.com&gt; # yallist Yet Another Linked List There are many doubly-linked list implementations like it, but this one is mine. For when an array would be too big, and a Map can't be iterated in reverse order. [![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist) ## basic usage ```javascript var yallist = require('yallist') var myList = yallist.create([1, 2, 3]) myList.push('foo') myList.unshift('bar') // of course pop() and shift() are there, too console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo'] myList.forEach(function (k) { // walk the list head to tail }) myList.forEachReverse(function (k, index, list) { // walk the list tail to head }) var myDoubledList = myList.map(function (k) { return k + k }) // now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo'] // mapReverse is also a thing var myDoubledListReverse = myList.mapReverse(function (k) { return k + k }) // ['foofoo', 6, 4, 2, 'barbar'] var reduced = myList.reduce(function (set, entry) { set += entry return set }, 'start') console.log(reduced) // 'startfoo123bar' ``` ## api The whole API is considered "public". Functions with the same name as an Array method work more or less the same way. 
There's reverse versions of most things because that's the point. ### Yallist Default export, the class that holds and manages a list. Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list. The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots. ### Yallist.create(..) Alias for Yallist function. Some people like factories. #### yallist.head The first node in the list #### yallist.tail The last node in the list #### yallist.length The number of nodes in the list. (Change this at your peril. It is not magic like Array length.) #### yallist.toArray() Convert the list to an array. #### yallist.forEach(fn, [thisp]) Call a function on each item in the list. #### yallist.forEachReverse(fn, [thisp]) Call a function on each item in the list, in reverse order. #### yallist.get(n) Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array. #### yallist.getReverse(n) Get the data at position `n`, counting from the tail. #### yallist.map(fn, thisp) Create a new Yallist with the result of calling the function on each item. #### yallist.mapReverse(fn, thisp) Same as `map`, but in reverse. #### yallist.pop() Get the data from the list tail, and remove the tail from the list. #### yallist.push(item, ...) Insert one or more items to the tail of the list. #### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. #### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. #### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) 
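A small sketch (values chosen arbitrarily) of moving nodes around directly:

```javascript
var Yallist = require('yallist')

var list = Yallist.create([1, 2, 3])

// grab the tail node and move it to the front of the list
var tail = list.tail
list.unshiftNode(tail)

console.log(list.toArray()) // [3, 1, 2]
```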
# `node-gyp` - Node.js native addon build tool [![Travis CI](https://travis-ci.com/nodejs/node-gyp.svg?branch=master)](https://travis-ci.com/nodejs/node-gyp) [![Build Status](https://github.com/nodejs/node-gyp/workflows/Python_tests/badge.svg)](https://github.com/nodejs/node-gyp/actions?workflow=Python_tests) `node-gyp` is a cross-platform command-line tool written in Node.js for compiling native addon modules for Node.js.
It contains a fork of the [gyp](https://gyp.gsrc.io) project that was previously used by the Chromium team, extended to support the development of Node.js native addons. Note that `node-gyp` is _not_ used to build Node.js itself. Multiple target versions of Node.js are supported (i.e. `0.8`, ..., `4`, `5`, `6`, etc.), regardless of what version of Node.js is actually installed on your system (`node-gyp` downloads the necessary development files or headers for the target version). ## Features * The same build commands work on any of the supported platforms * Supports the targeting of different versions of Node.js ## Installation You can install `node-gyp` using `npm`: ``` bash $ npm install -g node-gyp ``` Depending on your operating system, you will need to install: ### On Unix * Python v2.7, v3.5, v3.6, or v3.7 * `make` * A proper C/C++ compiler toolchain, like [GCC](https://gcc.gnu.org) ### On macOS * Python v2.7, v3.5, v3.6, or v3.7 * [Xcode](https://developer.apple.com/xcode/download/) * You also need to install the `XCode Command Line Tools` by running `xcode-select --install`. Alternatively, if you already have the full Xcode installed, you can find them under the menu `Xcode -> Open Developer Tool -> More Developer Tools...`. This step will install `clang`, `clang++`, and `make`. * If your Mac has been _upgraded_ to macOS Catalina (10.15), please read [macOS_Catalina.md](macOS_Catalina.md). ### On Windows Install the current version of Python from the [Microsoft Store package](https://docs.python.org/3/using/windows.html#the-microsoft-store-package). #### Option 1 Install all the required tools and configurations using Microsoft's [windows-build-tools](https://github.com/felixrieseberg/windows-build-tools) using `npm install --global --production windows-build-tools` from an elevated PowerShell or CMD.exe (run as Administrator). #### Option 2 Install tools and configuration manually: * Install Visual C++ Build Environment: [Visual Studio Build Tools](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) (using "Visual C++ build tools" workload) or [Visual Studio 2017 Community](https://visualstudio.microsoft.com/pl/thank-you-downloading-visual-studio/?sku=Community) (using the "Desktop development with C++" workload) * Launch cmd, `npm config set msvs_version 2017` If the above steps didn't work for you, please visit [Microsoft's Node.js Guidelines for Windows](https://github.com/Microsoft/nodejs-guidelines/blob/master/windows-environment.md#compiling-native-addon-modules) for additional tips. To target native ARM64 Node.js on Windows 10 on ARM, add the components "Visual C++ compilers and libraries for ARM64" and "Visual C++ ATL for ARM64". ### Configuring Python Dependency `node-gyp` requires that you have installed a compatible version of Python, one of: v2.7, v3.5, v3.6, or v3.7. If you have multiple Python versions installed, you can identify which Python version `node-gyp` should use in one of the following ways: 1. by setting the `--python` command-line option, e.g.: ``` bash $ node-gyp <command> --python /path/to/executable/python ``` 2. If `node-gyp` is called by way of `npm`, *and* you have multiple versions of Python installed, then you can set `npm`'s 'python' config key to the appropriate value: ``` bash $ npm config set python /path/to/executable/python ``` 3. If the `PYTHON` environment variable is set to the path of a Python executable, then that version will be used, if it is a compatible version. 4. 
If the `NODE_GYP_FORCE_PYTHON` environment variable is set to the path of a Python executable, it will be used instead of any of the other configured or builtin Python search paths. If it's not a compatible version, no further searching will be done. ## How to Use To compile your native addon, first go to its root directory: ``` bash $ cd my_node_addon ``` The next step is to generate the appropriate project build files for the current platform. Use `configure` for that: ``` bash $ node-gyp configure ``` Auto-detection fails for Visual C++ Build Tools 2015, so `--msvs_version=2015` needs to be added (not needed when run by npm as configured above): ``` bash $ node-gyp configure --msvs_version=2015 ``` __Note__: The `configure` step looks for a `binding.gyp` file in the current directory to process. See below for instructions on creating a `binding.gyp` file. Now you will have either a `Makefile` (on Unix platforms) or a `vcxproj` file (on Windows) in the `build/` directory. Next, invoke the `build` command: ``` bash $ node-gyp build ``` Now you have your compiled `.node` bindings file! The compiled bindings end up in `build/Debug/` or `build/Release/`, depending on the build mode. At this point, you can require the `.node` file with Node.js and run your tests! __Note:__ To create a _Debug_ build of the bindings file, pass the `--debug` (or `-d`) switch when running either the `configure`, `build` or `rebuild` commands. ## The `binding.gyp` file A `binding.gyp` file describes the configuration to build your module, in a JSON-like format. This file gets placed in the root of your package, alongside `package.json`. A barebones `gyp` file appropriate for building a Node.js addon could look like: ```python { "targets": [ { "target_name": "binding", "sources": [ "src/binding.cc" ] } ] } ``` ## Further reading Some additional resources for Node.js native addons and writing `gyp` configuration files: * ["Going Native" a nodeschool.io tutorial](http://nodeschool.io/#goingnative) * ["Hello World" node addon example](https://github.com/nodejs/node/tree/master/test/addons/hello-world) * [gyp user documentation](https://gyp.gsrc.io/docs/UserDocumentation.md) * [gyp input format reference](https://gyp.gsrc.io/docs/InputFormatReference.md) * [*"binding.gyp" files out in the wild* wiki page](https://github.com/nodejs/node-gyp/wiki/%22binding.gyp%22-files-out-in-the-wild) ## Commands `node-gyp` responds to the following commands: | **Command** | **Description** |:--------------|:--------------------------------------------------------------- | `help` | Shows the help dialog | `build` | Invokes `make`/`msbuild.exe` and builds the native addon | `clean` | Removes the `build` directory if it exists | `configure` | Generates project build files for the current platform | `rebuild` | Runs `clean`, `configure` and `build` all in a row | `install` | Installs Node.js header files for the given version | `list` | Lists the currently installed Node.js header versions | `remove` | Removes the Node.js header files for the given version ## Command Options `node-gyp` accepts the following command options: | **Command** | **Description** |:----------------------------------|:------------------------------------------ | `-j n`, `--jobs n` | Run `make` in parallel. 
The value `max` will use all available CPU cores | `--target=v6.2.1` | Node.js version to build for (default is `process.version`) | `--silly`, `--loglevel=silly` | Log all progress to console | `--verbose`, `--loglevel=verbose` | Log most progress to console | `--silent`, `--loglevel=silent` | Don't log anything to console | `debug`, `--debug` | Make Debug build (default is `Release`) | `--release`, `--no-debug` | Make Release build | `-C $dir`, `--directory=$dir` | Run command in different directory | `--make=$make` | Override `make` command (e.g. `gmake`) | `--thin=yes` | Enable thin static libraries | `--arch=$arch` | Set target architecture (e.g. ia32) | `--tarball=$path` | Get headers from a local tarball | `--devdir=$path` | SDK download directory (default is OS cache directory) | `--ensure` | Don't reinstall headers if already present | `--dist-url=$url` | Download header tarball from custom URL | `--proxy=$url` | Set HTTP(S) proxy for downloading header tarball | `--noproxy=$urls` | Set urls to ignore proxies when downloading header tarball | `--cafile=$cafile` | Override default CA chain (to download tarball) | `--nodedir=$path` | Set the path to the node source code | `--python=$path` | Set path to the Python binary | `--msvs_version=$version` | Set Visual Studio version (Windows only) | `--solution=$solution` | Set Visual Studio Solution version (Windows only) ## Configuration ### Environment variables Use the form `npm_config_OPTION_NAME` for any of the command options listed above (dashes in option names should be replaced by underscores). For example, to set `devdir` equal to `/tmp/.gyp`, you would: Run this on Unix: ```bash $ export npm_config_devdir=/tmp/.gyp ``` Or this on Windows: ```console > set npm_config_devdir=c:\temp\.gyp ``` ### `npm` configuration Use the form `OPTION_NAME` for any of the command options listed above. For example, to set `devdir` equal to `/tmp/.gyp`, you would run: ```bash $ npm config set [--global] devdir /tmp/.gyp ``` **Note:** Configuration set via `npm` will only be used when `node-gyp` is run via `npm`, not when `node-gyp` is run directly. ## License `node-gyp` is available under the MIT license. See the [LICENSE file](LICENSE) for details. # json-parse-better-errors [![npm version](https://img.shields.io/npm/v/json-parse-better-errors.svg)](https://npm.im/json-parse-better-errors) [![license](https://img.shields.io/npm/l/json-parse-better-errors.svg)](https://npm.im/json-parse-better-errors) [![Travis](https://img.shields.io/travis/zkat/json-parse-better-errors.svg)](https://travis-ci.org/zkat/json-parse-better-errors) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/zkat/json-parse-better-errors?svg=true)](https://ci.appveyor.com/project/zkat/json-parse-better-errors) [![Coverage Status](https://coveralls.io/repos/github/zkat/json-parse-better-errors/badge.svg?branch=latest)](https://coveralls.io/github/zkat/json-parse-better-errors?branch=latest) [`json-parse-better-errors`](https://github.com/zkat/json-parse-better-errors) is a Node.js library for getting nicer errors out of `JSON.parse()`, including context and position of the parse errors. 
## Install `$ npm install --save json-parse-better-errors` ## Table of Contents * [Example](#example) * [Features](#features) * [Contributing](#contributing) * [API](#api) * [`parse`](#parse) ### Example ```javascript const parseJson = require('json-parse-better-errors') parseJson('"foo"') parseJson('garbage') // more useful error message ``` ### Features * Like JSON.parse, but the errors are better. ### Contributing The npm team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear. All participants and maintainers in this project are expected to follow the [Code of Conduct](CODE_OF_CONDUCT.md), and just generally be excellent to each other. Please refer to the [Changelog](CHANGELOG.md) for project history details, too. Happy hacking! ### API #### <a name="parse"></a> `> parse(txt, ?reviver, ?context=20)` Works just like `JSON.parse`, but will include a bit more information when an error happens. # lodash.union v4.6.0 The [lodash](https://lodash.com/) method `_.union` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.union ``` In Node.js: ```js var union = require('lodash.union'); ``` See the [documentation](https://lodash.com/docs#union) or [package source](https://github.com/lodash/lodash/blob/4.6.0-npm-packages/lodash.union) for more details. aproba ====== A ridiculously light-weight function argument validator ``` var validate = require("aproba") function myfunc(a, b, c) { // `a` must be a string, `b` a number, `c` a function validate('SNF', arguments) // [a,b,c] is also valid } myfunc('test', 23, function () {}) // ok myfunc(123, 23, function () {}) // type error myfunc('test', 23) // missing arg error myfunc('test', 23, function () {}, true) // too many args error ``` Valid types are: | type | description | :--: | :---------- | * | matches any type | A | `Array.isArray` OR an `arguments` object | S | typeof == string | N | typeof == number | F | typeof == function | O | typeof == object and not type A and not type E | B | typeof == boolean | E | `instanceof Error` OR `null` **(special: see below)** | Z | == `null` Validation failures throw one of three exception types, distinguished by a `code` property of `EMISSINGARG`, `EINVALIDTYPE` or `ETOOMANYARGS`. If you pass in an invalid type then it will throw with a code of `EUNKNOWNTYPE`. If an **error** argument is found and is not null then the remaining arguments are optional. That is, if you say `ESO` then that's like using a non-magical `E` in: `E|ESO|ZSO`. ### But I have optional arguments?! You can provide more than one signature by separating them with pipes `|`. If any signature matches the arguments then they'll be considered valid. So for example, say you wanted to write a signature for `fs.createWriteStream`. The docs for it describe it thusly: ``` fs.createWriteStream(path[, options]) ``` This would be a signature of `SO|S`. That is, a string and an object, or just a string. Now, if you read the full `fs` docs, you'll see that actually path can ALSO be a buffer.
And options can be a string, that is: ``` path <String> | <Buffer> options <String> | <Object> ``` To reproduce this you have to fully enumerate all of the possible combinations and that implies a signature of `SO|SS|OO|OS|S|O`. The awkwardness is a feature: It reminds you of the complexity you're adding to your API when you do this sort of thing. ### Browser support This has no dependencies and should work in browsers, though you'll have noisier stack traces. ### Why this exists I wanted a very simple argument validator. It needed to do two things: 1. Be more concise and easier to use than assertions 2. Not encourage an infinite bikeshed of DSLs This is why types are specified by a single character and there's no such thing as an optional argument. This is not intended to validate user data. This is specifically about asserting the interface of your functions. If you need greater validation, I encourage you to write them by hand or look elsewhere. # libnpmsearch [![npm version](https://img.shields.io/npm/v/libnpmsearch.svg)](https://npm.im/libnpmsearch) [![license](https://img.shields.io/npm/l/libnpmsearch.svg)](https://npm.im/libnpmsearch) [![Travis](https://img.shields.io/travis/npm/libnpmsearch.svg)](https://travis-ci.org/npm/libnpmsearch) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/zkat/libnpmsearch?svg=true)](https://ci.appveyor.com/project/zkat/libnpmsearch) [![Coverage Status](https://coveralls.io/repos/github/npm/libnpmsearch/badge.svg?branch=latest)](https://coveralls.io/github/npm/libnpmsearch?branch=latest) [`libnpmsearch`](https://github.com/npm/libnpmsearch) is a Node.js library for programmatically accessing the npm search endpoint. It does **not** support legacy search through `/-/all`. ## Example ```js const search = require('libnpmsearch') console.log(await search('libnpm')) => [ { name: 'libnpm', description: 'programmatic npm API', ...etc }, { name: 'libnpmsearch', description: 'Programmatic API for searching in npm and compatible registries', ...etc }, ...more ] ``` ## Install `$ npm install libnpmsearch` ## Table of Contents * [Example](#example) * [Install](#install) * [API](#api) * [search opts](#opts) * [`search()`](#search) * [`search.stream()`](#search-stream) ### API #### <a name="opts"></a> `opts` for `libnpmsearch` commands The following opts are used directly by `libnpmsearch` itself: * `opts.limit` - Number of results to limit the query to. Default: 20 * `opts.from` - Offset number for results. Used with `opts.limit` for pagination. Default: 0 * `opts.detailed` - If true, returns an object with `package`, `score`, and `searchScore` fields, with `package` being what would usually be returned, and the other two containing details about how that package scored. Useful for UIs. Default: false * `opts.sortBy` - Used as a shorthand to set `opts.quality`, `opts.maintenance`, and `opts.popularity` with values that prioritize each one. Should be one of `'optimal'`, `'quality'`, `'maintenance'`, or `'popularity'`. Default: `'optimal'` * `opts.maintenance` - Decimal number between `0` and `1` that defines the weight of `maintenance` metrics when scoring and sorting packages. Default: `0.65` (same as `opts.sortBy: 'optimal'`) * `opts.popularity` - Decimal number between `0` and `1` that defines the weight of `popularity` metrics when scoring and sorting packages. Default: `0.98` (same as `opts.sortBy: 'optimal'`) * `opts.quality` - Decimal number between `0` and `1` that defines the weight of `quality` metrics when scoring and sorting packages. 
Default: `0.5` (same as `opts.sortBy: 'optimal'`) `libnpmsearch` uses [`npm-registry-fetch`](https://npm.im/npm-registry-fetch). Most options are passed through directly to that library, so please refer to [its own `opts` documentation](https://www.npmjs.com/package/npm-registry-fetch#fetch-options) for options that can be passed in. A couple of options of note for those in a hurry: * `opts.token` - can be passed in and will be used as the authentication token for the registry. For other ways to pass in auth details, see the n-r-f docs. * `opts.Promise` - If you pass this in, the Promises returned by `libnpmsearch` commands will use this Promise class instead. For example: `{Promise: require('bluebird')}` #### <a name="search"></a> `> search(query, [opts]) -> Promise` `query` must be either a String or an Array of search terms. If `opts.limit` is provided, it will be sent to the API to constrain the number of returned results. You may receive more, or fewer results, at the endpoint's discretion. The returned Promise resolves to an Array of search results with the following format: ```js { name: String, version: SemverString, description: String || null, maintainers: [ { username: String, email: String }, ...etc ] || null, keywords: [String] || null, date: Date || null } ``` For streamed results, see [`search.stream`](#search-stream). ##### Example ```javascript await search('libnpm') => [ { name: 'libnpm', description: 'programmatic npm API', ...etc }, { name: 'libnpmsearch', description: 'Programmatic API for searching in npm and compatible registries', ...etc }, ...more ] ``` #### <a name="search-stream"></a> `> search.stream(query, [opts]) -> Stream` `query` must be either a String or an Array of search terms. If `opts.limit` is provided, it will be sent to the API to constrain the number of returned results. You may receive more, or fewer results, at the endpoint's discretion. The returned Stream emits one entry per search result, with each entry having the following format: ```js { name: String, version: SemverString, description: String || null, maintainers: [ { username: String, email: String }, ...etc ] || null, keywords: [String] || null, date: Date || null } ``` For getting results in one chunk, see [`search`](#search). ##### Example ```javascript search.stream('libnpm').on('data', console.log) => // entry 1 { name: 'libnpm', description: 'programmatic npm API', ...etc } // entry 2 { name: 'libnpmsearch', description: 'Programmatic API for searching in npm and compatible registries', ...etc } // etc ``` snapCash ================== This [React] app was initialized with [create-near-app]. Preview of the DApp: a simple money and memo sending application based on NEAR. ![Screenshot](preview.png) ![Screenshot](preview2.png) Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `yarn install` 3. Run the local development server: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. Exploring The Code ================== 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2.
The frontend code lives in the `/src` folder. `/src/index.html` is a great place to start exploring. Note that it loads in `/src/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `yarn run test`. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contract gets deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `yarn install`, but for best ergonomics you may want to install it globally: yarn global add near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx`. Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `snapCash.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `snapCash.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account snapCash.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'snapCash.YOUR-NAME.testnet' Step 3: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. builds & deploys smart contract to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [React]: https://reactjs.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/docs/concepts/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages # iferr Higher-order functions for easier error handling. `if (err) return cb(err);` be gone!
## Install ```bash npm install iferr ``` ## Use ### JavaScript example ```js var iferr = require('iferr'); function get_friends_count(id, cb) { User.load_user(id, iferr(cb, function(user) { user.load_friends(iferr(cb, function(friends) { cb(null, friends.length); })); })); } ``` ### CoffeeScript example ```coffee iferr = require 'iferr' get_friends_count = (id, cb) -> User.load_user id, iferr cb, (user) -> user.load_friends iferr cb, (friends) -> cb null, friends.length ``` (TODO: document tiferr, throwerr and printerr) ## License MIT # core-util-is The `util.is*` functions introduced in Node v0.12. # promise-inflight One promise for multiple requests in flight to avoid async duplication ## USAGE ```javascript const inflight = require('promise-inflight') // some request that does some stuff function req(key) { // key is any random string. like a url or filename or whatever. return inflight(key, () => { // this is where you'd fetch the url or whatever return Promise.delay(100) }) } // only assigns a single setTimeout // when it dings, all thens get called with the same result. (There's only // one underlying promise.) req('foo').then(…) req('foo').then(…) req('foo').then(…) req('foo').then(…) ``` ## SEE ALSO * [inflight](https://npmjs.com/package/inflight) - For the callback based function on which this is based. ## STILL NEEDS Tests! # has-symbols <sup>[![Version Badge][2]][1]</sup> [![Build Status][3]][4] [![dependency status][5]][6] [![dev dependency status][7]][8] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][11]][1] Determine if the JS environment has Symbol support. Supports spec, or shams. ## Example ```js var hasSymbols = require('has-symbols'); hasSymbols() === true; // if the environment has native Symbol support. Not polyfillable, not forgeable. var hasSymbolsKinda = require('has-symbols/shams'); hasSymbolsKinda() === true; // if the environment has a Symbol sham that mostly follows the spec. ``` ## Supported Symbol shams - get-own-property-symbols [npm](https://www.npmjs.com/package/get-own-property-symbols) | [github](https://github.com/WebReflection/get-own-property-symbols) - core-js [npm](https://www.npmjs.com/package/core-js) | [github](https://github.com/zloirock/core-js) ## Tests Simply clone the repo, `npm install`, and run `npm test` [1]: https://npmjs.org/package/has-symbols [2]: http://versionbadg.es/ljharb/has-symbols.svg [3]: https://travis-ci.org/ljharb/has-symbols.svg [4]: https://travis-ci.org/ljharb/has-symbols [5]: https://david-dm.org/ljharb/has-symbols.svg [6]: https://david-dm.org/ljharb/has-symbols [7]: https://david-dm.org/ljharb/has-symbols/dev-status.svg [8]: https://david-dm.org/ljharb/has-symbols#info=devDependencies [9]: https://ci.testling.com/ljharb/has-symbols.png [10]: https://ci.testling.com/ljharb/has-symbols [11]: https://nodei.co/npm/has-symbols.png?downloads=true&stars=true [license-image]: http://img.shields.io/npm/l/has-symbols.svg [license-url]: LICENSE [downloads-image]: http://img.shields.io/npm/dm/has-symbols.svg [downloads-url]: http://npm-stat.com/charts.html?package=has-symbols # Controlling Flow: callbacks are easy ## What's actually hard? - Doing a bunch of things in a specific order. - Knowing when stuff is done. - Handling failures. - Breaking up functionality into parts (avoid nested inline callbacks) ## Common Mistakes - Abandoning convention and consistency. - Putting all callbacks inline. - Using libraries without grokking them. - Trying to make async code look sync. 
## Define Conventions - Two kinds of functions: *actors* take action, *callbacks* get results. - Essentially the continuation pattern. Resulting code *looks* similar to fibers, but is *much* simpler to implement. - Node works this way in the low-level APIs already, and it's very flexible. ## Callbacks - Simple responders - Must always be prepared to handle errors, that's why it's the first argument. - Often inline anonymous, but not always. - Can trap and call other callbacks with modified data, or pass errors upwards. ## Actors - Last argument is a callback. - If any error occurs, and can't be handled, pass it to the callback and return. - Must not throw. Return value ignored. - return x ==> return cb(null, x) - throw er ==> return cb(er) ```javascript // return true if a path is either // a symlink or a directory. function isLinkOrDir (path, cb) { fs.lstat(path, function (er, s) { if (er) return cb(er) return cb(null, s.isDirectory() || s.isSymbolicLink()) }) } ``` # asyncMap ## Usecases - I have a list of 10 files, and need to read all of them, and then continue when they're all done. - I have a dozen URLs, and need to fetch them all, and then continue when they're all done. - I have 4 connected users, and need to send a message to all of them, and then continue when that's done. - I have a list of n things, and I need to do something with all of them, in parallel, and get the results once they're all complete. ## Solution ```javascript var asyncMap = require("slide").asyncMap function writeFiles (files, what, cb) { asyncMap(files, function (f, cb) { fs.writeFile(f, what, cb) }, cb) } writeFiles([my, file, list], "foo", cb) ``` # chain ## Usecases - I have to do a bunch of things, in order. Get db credentials out of a file, read the data from the db, write that data to another file. - If anything fails, do not continue. - I still have to provide an array of functions, which is a lot of boilerplate, and a pita if your functions take args like ```javascript function (cb) { blah(a, b, c, cb) } ``` - Results are discarded, which is a bit lame. - No way to branch. ## Solution - reduces boilerplate by converting an array of [fn, args] to an actor that takes no arguments (except cb) - A bit like Function#bind, but tailored for our use-case. - bindActor(obj, "method", a, b, c) - bindActor(fn, a, b, c) - bindActor(obj, fn, a, b, c) - branching, skipping over falsey arguments ```javascript chain([ doThing && [thing, a, b, c] , isFoo && [doFoo, "foo"] , subChain && [chain, [one, two]] ], cb) ``` - tracking results: results are stored in an optional array passed as argument, last result is always in results[results.length - 1]. - treat chain.first and chain.last as placeholders for the first/last result up until that point. ## Non-trivial example - Read number files in a directory - Add the results together - Ping a web service with the result - Write the response to a file - Delete the number files ```javascript var chain = require("slide").chain function myProgram (cb) { var res = [], last = chain.last, first = chain.first chain([ [fs, "readdir", "the-directory"] , [readFiles, "the-directory", last] , [sum, last] , [ping, "POST", "example.com", 80, "/foo", last] , [fs, "writeFile", "result.txt", last] , [rmFiles, "./the-directory", first] ], res, cb) } ``` # Conclusion: Convention Profits - Consistent API from top to bottom. - Sneak in at any point to inject functionality. Testable, reusable, ... - When ruby and python users whine, you can smile condescendingly.
# readdir-scoped-modules Like `fs.readdir` but handling `@org/module` dirs as if they were a single entry. Used by npm. ## USAGE ```javascript var readdir = require('readdir-scoped-modules') readdir('node_modules', function (er, entries) { // entries will be something like // ['a', '@org/foo', '@org/bar'] }) ``` # Can I cache this? [![Build Status](https://travis-ci.org/pornel/http-cache-semantics.svg?branch=master)](https://travis-ci.org/pornel/http-cache-semantics) `CachePolicy` tells when responses can be reused from a cache, taking into account [HTTP RFC 7234](http://httpwg.org/specs/rfc7234.html) rules for user agents and shared caches. It's aware of many tricky details such as the `Vary` header, proxy revalidation, and authenticated responses. ## Usage Cacheability of an HTTP response depends on how it was requested, so both `request` and `response` are required to create the policy. ```js const policy = new CachePolicy(request, response, options); if (!policy.storable()) { // throw the response away, it's not usable at all return; } // Cache the data AND the policy object in your cache // (this is pseudocode, roll your own cache (lru-cache package works)) letsPretendThisIsSomeCache.set(request.url, {policy, response}, policy.timeToLive()); ``` ```js // And later, when you receive a new request: const {policy, response} = letsPretendThisIsSomeCache.get(newRequest.url); // It's not enough that it exists in the cache, it has to match the new request, too: if (policy && policy.satisfiesWithoutRevalidation(newRequest)) { // OK, the previous response can be used to respond to the `newRequest`. // Response headers have to be updated, e.g. to add Age and remove uncacheable headers. response.headers = policy.responseHeaders(); return response; } ``` It may be surprising, but it's not enough for an HTTP response to be [fresh](#yo-fresh) to satisfy a request. It may need to match request headers specified in `Vary`. Even a matching fresh response may still not be usable if the new request restricted cacheability, etc. The key method is `satisfiesWithoutRevalidation(newRequest)`, which checks whether the `newRequest` is compatible with the original request and whether all caching conditions are met. ### Constructor options Request and response must have a `headers` property with all header names in lower case. `url`, `status` and `method` are optional (defaults are any URL, status `200`, and `GET` method). ```js const request = { url: '/', method: 'GET', headers: { accept: '*/*', }, }; const response = { status: 200, headers: { 'cache-control': 'public, max-age=7234', }, }; const options = { shared: true, cacheHeuristic: 0.1, immutableMinTimeToLive: 24*3600*1000, // 24h ignoreCargoCult: false, }; ``` If `options.shared` is `true` (default), then the response is evaluated from a perspective of a shared cache (i.e. `private` is not cacheable and `s-maxage` is respected). If `options.shared` is `false`, then the response is evaluated from a perspective of a single-user cache (i.e. `private` is cacheable and `s-maxage` is ignored). `options.cacheHeuristic` is a fraction of response's age that is used as a fallback cache duration. The default is 0.1 (10%), e.g. if a file hasn't been modified for 100 days, it'll be cached for 100*0.1 = 10 days. `options.immutableMinTimeToLive` is a number of milliseconds to assume as the default time to cache responses with `Cache-Control: immutable`. 
Note that [per RFC](http://httpwg.org/http-extensions/immutable.html) these can become stale, so `max-age` still overrides the default. If `options.ignoreCargoCult` is true, common anti-cache directives will be completely ignored if the non-standard `pre-check` and `post-check` directives are present. These two useless directives are most commonly found in bad StackOverflow answers and PHP's "session limiter" defaults. ### `storable()` Returns `true` if the response can be stored in a cache. If it's `false` then you MUST NOT store either the request or the response. ### `satisfiesWithoutRevalidation(newRequest)` This is the most important method. Use this method to check whether the cached response is still fresh in the context of the new request. If it returns `true`, then the given `request` matches the original response this cache policy has been created with, and the response can be reused without contacting the server. Note that the old response can't be returned without being updated, see `responseHeaders()`. If it returns `false`, then the response may not be matching at all (e.g. it's for a different URL or method), or may require to be refreshed first (see `revalidationHeaders()`). ### `responseHeaders()` Returns updated, filtered set of response headers to return to clients receiving the cached response. This function is necessary, because proxies MUST always remove hop-by-hop headers (such as `TE` and `Connection`) and update response's `Age` to avoid doubling cache time. ```js cachedResponse.headers = cachePolicy.responseHeaders(cachedResponse); ``` ### `timeToLive()` Returns approximate time in *milliseconds* until the response becomes stale (i.e. not fresh). After that time (when `timeToLive() <= 0`) the response might not be usable without revalidation. However, there are exceptions, e.g. a client can explicitly allow stale responses, so always check with `satisfiesWithoutRevalidation()`. ### `toObject()`/`fromObject(json)` Chances are you'll want to store the `CachePolicy` object along with the cached response. `obj = policy.toObject()` gives a plain JSON-serializable object. `policy = CachePolicy.fromObject(obj)` creates an instance from it. ### Refreshing stale cache (revalidation) When a cached response has expired, it can be made fresh again by making a request to the origin server. The server may respond with status 304 (Not Modified) without sending the response body again, saving bandwidth. The following methods help perform the update efficiently and correctly. #### `revalidationHeaders(newRequest)` Returns updated, filtered set of request headers to send to the origin server to check if the cached response can be reused. These headers allow the origin server to return status 304 indicating the response is still fresh. All headers unrelated to caching are passed through as-is. Use this method when updating cache from the origin server. ```js updateRequest.headers = cachePolicy.revalidationHeaders(updateRequest); ``` #### `revalidatedPolicy(revalidationRequest, revalidationResponse)` Use this method to update the cache after receiving a new response from the origin server. It returns an object with two keys: * `policy` — A new `CachePolicy` with HTTP headers updated from `revalidationResponse`. You can always replace the old cached `CachePolicy` with the new one. * `modified` — Boolean indicating whether the response body has changed. * If `false`, then a valid 304 Not Modified response has been received, and you can reuse the old cached response body. 
* If `true`, you should use the new response's body (if present), or make another request to the origin server without any conditional headers (i.e. don't use `revalidationHeaders()` this time) to get the new resource.

```js
// When serving requests from cache:
const {policy: oldPolicy, response: oldResponse} = letsPretendThisIsSomeCache.get(newRequest.url);

if (!oldPolicy.satisfiesWithoutRevalidation(newRequest)) {
    // Change the request to ask the origin server if the cached response can be used
    newRequest.headers = oldPolicy.revalidationHeaders(newRequest);

    // Send request to the origin server. The server may respond with status 304
    const newResponse = await makeRequest(newRequest);

    // Create updated policy and combined response from the old and new data
    const {policy, modified} = oldPolicy.revalidatedPolicy(newRequest, newResponse);
    const response = modified ? newResponse : oldResponse;

    // Update the cache with the newer/fresher response
    letsPretendThisIsSomeCache.set(newRequest.url, {policy, response}, policy.timeToLive());

    // And proceed returning cached response as usual
    response.headers = policy.responseHeaders();
    return response;
}
```

# Yo, FRESH

![satisfiesWithoutRevalidation](fresh.jpg)

## Used by

* [ImageOptim API](https://imageoptim.com/api), [make-fetch-happen](https://github.com/zkat/make-fetch-happen), [cacheable-request](https://www.npmjs.com/package/cacheable-request), [npm/registry-fetch](https://github.com/npm/registry-fetch), [etc.](https://github.com/pornel/http-cache-semantics/network/dependents)

## Implemented

* `Cache-Control` response header with all the quirks.
* `Expires` with check for bad clocks.
* `Pragma` response header.
* `Age` response header.
* `Vary` response header.
* Default cacheability of statuses and methods.
* Requests for stale data.
* Filtering of hop-by-hop headers.
* Basic revalidation requests.

## Unimplemented

* Merging of range requests, If-Range (but correctly supports them as non-cacheable)
* Revalidation of multiple representations

# umask

Convert umask from string &lt;-> number.

## Installation & Use

```
$ npm install -S umask

var umask = require('umask');

console.log(umask.toString(18));      // 0022

console.log(umask.fromString('0777')) // 511
```

## API

### `toString( val )`

Converts `val` to a 0-padded octal string. `val` is assumed to be a Number in the correct range (0..511).

### `fromString( val, [cb] )`

Converts `val` to a Number that can be used as a umask. `val` can be of the following forms:

* String containing octal number (leading 0)
* String containing decimal number
* Number

In all cases above, the value obtained is then converted to an integer and checked against the legal `umask` range 0..511.

`fromString` can be used as a simple converter, with no error feedback, by omitting the optional callback argument `cb`:

```
var mask = umask.fromString(val);
// mask is now the umask described by val or
// the default, 0022 (18 dec)
```

The callback arguments are `(err, val)` where `err` is either `null` or an Error object and `val` is either the converted umask or the default umask, `0022`.

```
umask.fromString(val, function (err, val) {
    if (err) {
        console.error("invalid umask: " + err.message)
    }
    /* do something with val */
});
```

The callback, if provided, is always called **synchronously**.

### `validate( data, k, val )`

This is a validation function of the form expected by `nopt`. If `val` is a valid umask, the function returns true and sets `data[k]`. If `val` is not a valid umask, the function returns false.
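For illustration, a small sketch of calling `validate` directly, based on the description above (the `data` object and the `'umask'` key are arbitrary choices):

```
var umask = require('umask');

var data = {};
umask.validate(data, 'umask', '0022');   // true; data.umask now holds the numeric umask (18)
umask.validate(data, 'umask', 'bogus');  // false; data.umask is left as it was
```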
The `validate` function is stricter than `fromString`: it only accepts Number or octal String values, and the String value must begin with `0`. The `validate` function does **not** accept Strings containing decimal numbers. # Maintainer Sam Mikes <smikes@cubane.com> # License MIT # Relative Date [![Build Status](https://travis-ci.org/wildlyinaccurate/tiny-relative-date.png?branch=master)](https://travis-ci.org/wildlyinaccurate/tiny-relative-date) Tiny function that provides relative, human-readable dates. ## Installation ``` npm install tiny-relative-date ``` ## Usage The module returns a `relativeDate` function with English translations by default. ```js const relativeDate = require('tiny-relative-date') ``` The `relativeDate` function accepts date strings or `Date` objects. ```js relativeDate('2017-06-25 09:00') // '12 hours ago' relativeDate(new Date()) // 'just now' ``` The value of "now" can also be passed as a second parameter. ```js const now = new Date('2017-06-25 08:00:00') const date = new Date('2017-06-25 07:00:00') relativeDate(date, now) // 'an hour ago' ``` ### Using a non-English locale The tiny-relative-date module can be initialised with a locale. See the [translations directory]('./translations') for a list of available locales. ```js const relativeDateFactory = require('tiny-relative-date/lib/factory') const deTranslations = require('tiny-relative-date/translations/de') const relativeDate = relativeDateFactory(deTranslations) relativeDate(new Date()) // 'gerade eben' ``` ### Using a custom locale You can also use a completely custom locale by passing a translations object instead of a locale string. Translations can be plain strings with a `{{time}}` placeholder, or they can be functions. See the **Adding new locales** section below for a list of translation keys. ```js const relativeDateFactory = require('tiny-relative-date/lib/factory') const relativeDate = relativeDateFactory({ hoursAgo: '{{time}}h ago', daysAgo: (days) => `${days * 24}h ago` }) relativeDate('2017-06-25 07:00:00') // '2h ago' relativeDate('2017-06-24 06:00:00') // '27h ago' ``` ## Contributing Contributions are welcome! Running this project locally requires Git and Node.js. ``` git clone git@github.com:wildlyinaccurate/tiny-relative-date.git cd tiny-relative-date/ npm install ``` Once you are set up, you can make changes to files in the `src/`, `spec/` and `translations/` directories. 
Build any changes you make by running ``` npm run build ``` And run the tests with ``` npm run test ``` ### Adding new locales If you would like to add a new locale, please create a JSON file in the `translations` directory and ensure it has the following keys: | Key | Default value ("en" locale) | |------------------------|-----------------------------| | `justNow` | just now | | `secondsAgo` | {{time}} seconds ago | | `aMinuteAgo` | a minute ago | | `minutesAgo` | {{time}} minutes ago | | `anHourAgo` | an hour ago | | `hoursAgo` | {{time}} hours ago | | `aDayAgo` | yesterday | | `daysAgo` | {{time}} days ago | | `aWeekAgo` | a week ago | | `weeksAgo` | {{time}} weeks ago | | `aMonthAgo` | a month ago | | `monthsAgo` | {{time}} months ago | | `aYearAgo` | a year ago | | `yearsAgo` | {{time}} years ago | | `overAYearAgo` | over a year ago | | `secondsFromNow` | {{time}} seconds from now | | `aMinuteFromNow` | a minute from now | | `minutesFromNow` | {{time}} minutes from now | | `anHourFromNow` | an hour from now | | `hoursFromNow` | {{time}} hours from now | | `aDayFromNow` | tomorrow | | `daysFromNow` | {{time}} days from now | | `aWeekFromNow` | a week from now | | `weeksFromNow` | {{time}} weeks from now | | `aMonthFromNow` | a month from now | | `monthsFromNow` | {{time}} months from now | | `aYearFromNow` | a year from now | | `yearsFromNow` | {{time}} years from now | | `overAYearFromNow` | over a year from now | # fs-write-stream-atomic Like `fs.createWriteStream(...)`, but atomic. Writes to a tmp file and does an atomic `fs.rename` to move it into place when it's done. First rule of debugging: **It's always a race condition.** ## USAGE ```javascript var fsWriteStreamAtomic = require('fs-write-stream-atomic') // options are optional. var write = fsWriteStreamAtomic('output.txt', options) var read = fs.createReadStream('input.txt') read.pipe(write) // When the write stream emits a 'finish' or 'close' event, // you can be sure that it is moved into place, and contains // all the bytes that were written to it, even if something else // was writing to `output.txt` at the same time. ``` ### `fsWriteStreamAtomic(filename, [options])` * `filename` {String} The file we want to write to * `options` {Object} * `chown` {Object} User and group to set ownership after write * `uid` {Number} * `gid` {Number} * `encoding` {String} default = 'utf8' * `mode` {Number} default = `0666` * `flags` {String} default = `'w'` # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. 
This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows note On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Note that PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Then, run the program to be debugged as usual. ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see when the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well, the "+NNNms" will show you how much time was spent between calls. 
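As a minimal sketch (with a timer standing in for the real async work):

```js
const debug = require('debug')('http')

debug('requesting resource')
// Pretend this setTimeout is the actual request.
setTimeout(() => {
  // The "+NNNms" suffix on this line reports the time since the previous debug() call.
  debug('resource received')
}, 250)
```

Run it with the `DEBUG=http` environment variable set to see the diff in the output.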
<img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debuggers you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. | | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. 
For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. 
[[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img 
src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # <img src="./icon.svg" height="25" /> corepack Corepack is a zero-runtime-dependency Node.js script that acts as a bridge between Node.js projects and the package managers they are intended to be used with during development. In practical terms, **Corepack will let you use Yarn and pnpm without having to install them** - just like what currently happens with npm, which is shipped by Node.js by default. 
**Important:** At the moment, Corepack only covers Yarn and pnpm. Given that we have little control over the npm project, we prefer to focus on the Yarn and pnpm use cases. As a result, Corepack doesn't have any effect at all on the way you use npm.

## How to Install

### Default Installs

Corepack is distributed by default with Node.js 16.9, but is opt-in for the time being. Run `corepack enable` to install the required shims.

### Manual Installs

<details>
<summary>Click here to see how to install Corepack using npm</summary>

First uninstall your global Yarn and pnpm binaries (just leave npm). In general, you'd do this by running the following command:

```shell
npm uninstall -g yarn pnpm

# That should be enough, but if you installed Yarn without going through npm it might
# be more tedious - for example, you might need to run `brew uninstall yarn` as well.
```

Then install Corepack:

```shell
npm install -g corepack
```

We do acknowledge the irony and overhead of using npm to install Corepack, which is at least part of why the preferred option is to use the Corepack version that is distributed along with Node.js itself.

</details>

## Usage

Just use your package managers as you usually would. Run `yarn install` in Yarn projects, `pnpm install` in pnpm projects, and `npm install` in npm projects. Corepack will catch these calls, and depending on the situation:

- **If the local project is configured for the package manager you're using**, Corepack will silently download and cache the latest compatible version.

- **If the local project is configured for a different package manager**, Corepack will request you to run the command again using the right package manager - thus avoiding corruptions of your install artifacts.

- **If the local project isn't configured for any package manager**, Corepack will assume that you know what you're doing, and will use whatever package manager version has been pinned as the "known good release". Check the relevant section for more details.

## Known Good Releases

When running Yarn or pnpm within projects that don't list a supported package manager, Corepack will default to a set of Known Good Releases. In a way, you can compare this to Node.js, where each version ships with a specific version of npm.

The Known Good Releases can be updated system-wide using the `--activate` flag from the `corepack prepare` and `corepack hydrate` commands.

## Offline Workflow

The utility commands detailed in the next section support two offline workflows:

- Either you can use the network while building your container image, in which case you'll simply run `corepack prepare` to make sure that your image includes the Last Known Good release for the specified package manager.

  - If you want to have *all* Last Known Good releases for all package managers, just use the `--all` flag which will do just that.

- Or you're publishing your project to a system where the network is unavailable, in which case you'll preemptively generate a package manager archive from your local computer (using `corepack prepare -o`) before storing it somewhere your container will be able to access (for example within your repository). After that it'll just be a matter of running `corepack hydrate <path/to/corepack.tgz>` to set up the cache.

## Utility Commands

### `corepack <binary name>[@<version>] [... args]`

This meta-command runs the specified package manager in the local folder. You can use it to force an install to run with a given version, which can be useful when looking for regressions.
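For example (the version numbers below are purely illustrative), you could re-run the same install under two pinned Yarn releases while bisecting a regression:

```shell
corepack yarn@3.1.0 install
corepack yarn@3.2.0 install
```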
Note that those commands still check whether the local project is configured for the given package manager (ie you won't be able to run `corepack yarn install` on a project where the `packageManager` field references `pnpm`). ### `corepack enable [... name]` | Option | Description | | --- | --- | | `--install-directory` | Add the shims to the specified location | This command will detect where Node.js is installed and will create shims next to it for each of the specified package managers (or all of them if the command is called without parameters). Note that the npm shims will not be installed unless explicitly requested, as npm is currently distributed with Node.js through other means. ### `corepack disable [... name]` | Option | Description | | --- | --- | | `--install-directory` | Remove the shims to the specified location | This command will detect where Node.js is installed and will remove the shims from there. ### `corepack prepare [... name@version]` | Option | Description | | --- | --- | | `--all` | Prepare the "Last Known Good" version of all supported package managers | | `-o,--output` | Also generate an archive containing the package managers | | `--activate` | Also update the "Last Known Good" release | This command will download the given package managers (or the one configured for the local project if no argument is passed in parameter) and store it within the Corepack cache. If the `-o,--output` flag is set (optionally with a path as parameter), an archive will also be generated that can be used by the `corepack hydrate` command. ### `corepack hydrate <path/to/corepack.tgz>` | Option | Description | | --- | --- | | `--activate` | Also update the "Last Known Good" release | This command will retrieve the given package manager from the specified archive and will install it within the Corepack cache, ready to be used without further network interaction. ## Environment Variables - `COREPACK_ROOT` has no functional impact on Corepack itself; it's automatically being set in your environment by Corepack when it shells out to the underlying package managers, so that they can feature-detect its presence (useful for commands like `yarn init`). ## Contributing If you want to build corepack yourself, you can build the project like this: 1. Clone this repository 2. Run `yarn build` (no need for `yarn install`) 3. The `dist/` directory now contains the corepack build and the shims 4. Call `node ./dist/corepack --help` and behold You can also run the tests with `yarn jest` (still no install needed). ## Design Various tidbits about Corepack's design are explained in more details in [DESIGN.md](/DESIGN.md). ## License (MIT) > **Copyright © Corepack contributors** > > Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: > > The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. > > THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

# string_decoder

***Node-core v8.9.4 string_decoder for userland***

[![NPM](https://nodei.co/npm/string_decoder.png?downloads=true&downloadRank=true)](https://nodei.co/npm/string_decoder/)
[![NPM](https://nodei.co/npm-dl/string_decoder.png?&months=6&height=3)](https://nodei.co/npm/string_decoder/)

```bash
npm install --save string_decoder
```

***Node-core string_decoder for userland***

This package is a mirror of the string_decoder implementation in Node-core.

Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.9.4/docs/api/).

As of version 1.0.0 **string_decoder** uses semantic versioning.

## Previous versions

Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10.

## Update

The *build/* directory contains a build script that will scrape the source from the [nodejs/node](https://github.com/nodejs/node) repo given a specific Node version.

## Streams Working Group

`string_decoder` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js.

The responsibilities of the Streams Working Group include:

* Addressing stream issues on the Node.js issue tracker.
* Authoring and editing stream documentation within the Node.js project.
* Reviewing changes to stream subclasses within the Node.js project.
* Redirecting changes to streams from the Node.js project to this project.
* Assisting in the implementation of stream providers within Node.js.
* Recommending versions of `readable-stream` to be included in Node.js.
* Messaging about the future of streams to give the community advance notice of changes.

See [readable-stream](https://github.com/nodejs/readable-stream) for more details.

Use this module to convert a username/groupname to a uid/gid number.

Usage:

```
npm install uid-number
```

Then, in your node program:

```javascript
var uidNumber = require("uid-number")
uidNumber("isaacs", function (er, uid, gid) {
  // gid is null because we didn't ask for a group name
  // uid === 24561 because that's my number.
})
```
# QRCode Terminal Edition [![Build Status][travis-ci-img]][travis-ci-url]

> Going where no QRCode has gone before.

![Basic Example][basic-example-img]

# Node Library

## Install

Can be installed with:

    $ npm install qrcode-terminal

and used:

    var qrcode = require('qrcode-terminal');

## Usage

To display some data to the terminal just call:

    qrcode.generate('This will be a QRCode, eh!');

You can even specify the error level (default is 'L'):

    qrcode.setErrorLevel('Q');
    qrcode.generate('This will be a QRCode with error level Q!');

If you don't want to display to the terminal but just want the string, you can provide a callback:

    qrcode.generate('http://github.com', function (qrcode) {
        console.log(qrcode);
    });

If you want to display small output, provide `opts` with `small`:

    qrcode.generate('This will be a small QRCode, eh!', {small: true});

    qrcode.generate('This will be a small QRCode, eh!', {small: true}, function (qrcode) {
        console.log(qrcode)
    });

# Command-Line

## Install

    $ npm install -g qrcode-terminal

## Usage

    $ qrcode-terminal --help
    $ qrcode-terminal 'http://github.com'
    $ echo 'http://github.com' | qrcode-terminal

# Support

- OS X
- Linux
- Windows

# Server-side

[node-qrcode][node-qrcode-url] is a popular server-side QRCode generator that renders to a `canvas` object.

# Developing

To set up the development environment run `npm install`

To run tests run `npm test`

# Contributors

Gord Tanner <gtanner@gmail.com>
Micheal Brooks <michael@michaelbrooks.ca>

[travis-ci-img]: https://travis-ci.org/gtanner/qrcode-terminal.png
[travis-ci-url]: https://travis-ci.org/gtanner/qrcode-terminal
[basic-example-img]: https://raw.github.com/gtanner/qrcode-terminal/master/example/basic.png
[node-qrcode-url]: https://github.com/soldair/node-qrcode

# socks [![Build Status](https://travis-ci.org/JoshGlazebrook/socks.svg?branch=master)](https://travis-ci.org/JoshGlazebrook/socks) [![Coverage Status](https://coveralls.io/repos/github/JoshGlazebrook/socks/badge.svg?branch=master)](https://coveralls.io/github/JoshGlazebrook/socks?branch=v2)

Fully featured SOCKS proxy client supporting SOCKSv4, SOCKSv4a, and SOCKSv5. Includes Bind and Associate functionality.

### Features

* Supports SOCKS v4, v4a, and v5 protocols.
* Supports the CONNECT, BIND, and ASSOCIATE commands.
* Supports callbacks, promises, and events for proxy connection creation async flow control.
* Supports proxy chaining (CONNECT only).
* Supports user/pass authentication.
* Built in UDP frame creation & parse functions.
* Created with TypeScript, type definitions are provided.

### Requirements

* Node.js v6.0+ (Please use [v1](https://github.com/JoshGlazebrook/socks/tree/82d83923ad960693d8b774cafe17443ded7ed584) for older versions of Node.js)

### Looking for v1?
* Docs for v1 are available [here](https://github.com/JoshGlazebrook/socks/tree/82d83923ad960693d8b774cafe17443ded7ed584) ## Installation `yarn add socks` or `npm install --save socks` ## Usage ```typescript // TypeScript import { SocksClient, SocksClientOptions, SocksClientChainOptions } from 'socks'; // ES6 JavaScript import { SocksClient } from 'socks'; // Legacy JavaScript const SocksClient = require('socks').SocksClient; ``` ## Quick Start Example Connect to github.com (192.30.253.113) on port 80, using a SOCKS proxy. ```javascript const options = { proxy: { host: '159.203.75.200', // ipv4 or ipv6 or hostname port: 1080, type: 5 // Proxy version (4 or 5) }, command: 'connect', // SOCKS command (createConnection factory function only supports the connect command) destination: { host: '192.30.253.113', // github.com (hostname lookups are supported with SOCKS v4a and 5) port: 80 } }; // Async/Await try { const info = await SocksClient.createConnection(options); console.log(info.socket); // <Socket ...> (this is a raw net.Socket that is established to the destination host through the given proxy server) } catch (err) { // Handle errors } // Promises SocksClient.createConnection(options) .then(info => { console.log(info.socket); // <Socket ...> (this is a raw net.Socket that is established to the destination host through the given proxy server) }) .catch(err => { // Handle errors }); // Callbacks SocksClient.createConnection(options, (err, info) => { if (!err) { console.log(info.socket); // <Socket ...> (this is a raw net.Socket that is established to the destination host through the given proxy server) } else { // Handle errors } }); ``` ## Chaining Proxies **Note:** Chaining is only supported when using the SOCKS connect command, and chaining can only be done through the special factory chaining function. This example makes a proxy chain through two SOCKS proxies to ip-api.com. Once the connection to the destination is established it sends an HTTP request to get a JSON response that returns ip info for the requesting ip. ```javascript const options = { destination: { host: 'ip-api.com', // host names are supported with SOCKS v4a and SOCKS v5. port: 80 }, command: 'connect', // Only the connect command is supported when chaining proxies. proxies: [ // The chain order is the order in the proxies array, meaning the last proxy will establish a connection to the destination. { host: '159.203.75.235', // ipv4, ipv6, or hostname port: 1081, type: 5 }, { host: '104.131.124.203', // ipv4, ipv6, or hostname port: 1081, type: 5 } ] } // Async/Await try { const info = await SocksClient.createConnectionChain(options); console.log(info.socket); // <Socket ...> (this is a raw net.Socket that is established to the destination host through the given proxy servers) console.log(info.socket.remoteAddress) // The remote address of the returned socket is the first proxy in the chain. // 159.203.75.235 info.socket.write('GET /json HTTP/1.1\nHost: ip-api.com\n\n'); info.socket.on('data', (data) => { console.log(data.toString()); // ip-api.com sees that the last proxy in the chain (104.131.124.203) is connected to it. 
/* HTTP/1.1 200 OK Access-Control-Allow-Origin: * Content-Type: application/json; charset=utf-8 Date: Sun, 24 Dec 2017 03:47:51 GMT Content-Length: 300 { "as":"AS14061 Digital Ocean, Inc.", "city":"Clifton", "country":"United States", "countryCode":"US", "isp":"Digital Ocean", "lat":40.8326, "lon":-74.1307, "org":"Digital Ocean", "query":"104.131.124.203", "region":"NJ", "regionName":"New Jersey", "status":"success", "timezone":"America/New_York", "zip":"07014" } */ }); } catch (err) { // Handle errors } // Promises SocksClient.createConnectionChain(options) .then(info => { console.log(info.socket); // <Socket ...> (this is a raw net.Socket that is established to the destination host through the given proxy server) console.log(info.socket.remoteAddress) // The remote address of the returned socket is the first proxy in the chain. // 159.203.75.235 info.socket.write('GET /json HTTP/1.1\nHost: ip-api.com\n\n'); info.socket.on('data', (data) => { console.log(data.toString()); // ip-api.com sees that the last proxy in the chain (104.131.124.203) is connected to it. /* HTTP/1.1 200 OK Access-Control-Allow-Origin: * Content-Type: application/json; charset=utf-8 Date: Sun, 24 Dec 2017 03:47:51 GMT Content-Length: 300 { "as":"AS14061 Digital Ocean, Inc.", "city":"Clifton", "country":"United States", "countryCode":"US", "isp":"Digital Ocean", "lat":40.8326, "lon":-74.1307, "org":"Digital Ocean", "query":"104.131.124.203", "region":"NJ", "regionName":"New Jersey", "status":"success", "timezone":"America/New_York", "zip":"07014" } */ }); }) .catch(err => { // Handle errors }); // Callbacks SocksClient.createConnectionChain(options, (err, info) => { if (!err) { console.log(info.socket); // <Socket ...> (this is a raw net.Socket that is established to the destination host through the given proxy server) console.log(info.socket.remoteAddress) // The remote address of the returned socket is the first proxy in the chain. // 159.203.75.235 info.socket.write('GET /json HTTP/1.1\nHost: ip-api.com\n\n'); info.socket.on('data', (data) => { console.log(data.toString()); // ip-api.com sees that the last proxy in the chain (104.131.124.203) is connected to it. /* HTTP/1.1 200 OK Access-Control-Allow-Origin: * Content-Type: application/json; charset=utf-8 Date: Sun, 24 Dec 2017 03:47:51 GMT Content-Length: 300 { "as":"AS14061 Digital Ocean, Inc.", "city":"Clifton", "country":"United States", "countryCode":"US", "isp":"Digital Ocean", "lat":40.8326, "lon":-74.1307, "org":"Digital Ocean", "query":"104.131.124.203", "region":"NJ", "regionName":"New Jersey", "status":"success", "timezone":"America/New_York", "zip":"07014" } */ }); } else { // Handle errors } }); ``` ## Bind Example (TCP Relay) When the bind command is sent to a SOCKS v4/v5 proxy server, the proxy server starts listening on a new TCP port and the proxy relays then remote host information back to the client. When another remote client connects to the proxy server on this port the SOCKS proxy sends a notification that an incoming connection has been accepted to the initial client and a full duplex stream is now established to the initial client and the client that connected to that special port. ```javascript const options = { proxy: { host: '159.203.75.235', // ipv4, ipv6, or hostname port: 1081, type: 5 }, command: 'bind', // When using BIND, the destination should be the remote client that is expected to connect to the SOCKS proxy. Using 0.0.0.0 makes the Proxy accept any incoming connection on that port. 
destination: { host: '0.0.0.0', port: 0 } }; // Creates a new SocksClient instance. const client = new SocksClient(options); // When the SOCKS proxy has bound a new port and started listening, this event is fired. client.on('bound', info => { console.log(info.remoteHost); /* { host: "159.203.75.235", port: 57362 } */ }); // When a client connects to the newly bound port on the SOCKS proxy, this event is fired. client.on('established', info => { // info.remoteHost is the remote address of the client that connected to the SOCKS proxy. console.log(info.remoteHost); /* host: 67.171.34.23, port: 49823 */ console.log(info.socket); // <Socket ...> (This is a raw net.Socket that is a connection between the initial client and the remote client that connected to the proxy) // Handle received data... info.socket.on('data', data => { console.log('recv', data); }); }); // An error occurred trying to establish this SOCKS connection. client.on('error', err => { console.error(err); }); // Start connection to proxy client.connect(); ``` ## Associate Example (UDP Relay) When the associate command is sent to a SOCKS v5 proxy server, it sets up a UDP relay that allows the client to send UDP packets to a remote host through the proxy server, and also receive UDP packet responses back through the proxy server. ```javascript const options = { proxy: { host: '159.203.75.235', // ipv4, ipv6, or hostname port: 1081, type: 5 }, command: 'associate', // When using associate, the destination should be the remote client that is expected to send UDP packets to the proxy server to be forwarded. This should be your local ip, or optionally the wildcard address (0.0.0.0) UDP Client <-> Proxy <-> UDP Client destination: { host: '0.0.0.0', port: 0 } }; // Create a local UDP socket for sending packets to the proxy. const udpSocket = dgram.createSocket('udp4'); udpSocket.bind(); // Listen for incoming UDP packets from the proxy server. udpSocket.on('message', (message, rinfo) => { console.log(SocksClient.parseUDPFrame(message)); /* { frameNumber: 0, remoteHost: { host: '165.227.108.231', port: 4444 }, // The remote host that replied with a UDP packet data: <Buffer 74 65 73 74 0a> // The data } */ }); let client = new SocksClient(associateOptions); // When the UDP relay is established, this event is fired and includes the UDP relay port to send data to on the proxy server. client.on('established', info => { console.log(info.remoteHost); /* { host: '159.203.75.235', port: 44711 } */ // Send 'hello' to 165.227.108.231:4444 const packet = SocksClient.createUDPFrame({ remoteHost: { host: '165.227.108.231', port: 4444 }, data: Buffer.from(line) }); udpSocket.send(packet, info.remoteHost.port, info.remoteHost.host); }); // Start connection client.connect(); ``` **Note:** The associate TCP connection to the proxy must remain open for the UDP relay to work. ## Additional Examples [Documentation](docs/index.md) ## Migrating from v1 Looking for a guide to migrate from v1? Look [here](docs/migratingFromV1.md) ## Api Reference: **Note:** socks includes full TypeScript definitions. These can even be used without using TypeScript as most IDEs (such as VS Code) will use these type definition files for auto completion intellisense even in JavaScript files. 
* Class: SocksClient * [new SocksClient(options[, callback])](#new-socksclientoptions) * [Class Method: SocksClient.createConnection(options[, callback])](#class-method-socksclientcreateconnectionoptions-callback) * [Class Method: SocksClient.createConnectionChain(options[, callback])](#class-method-socksclientcreateconnectionchainoptions-callback) * [Class Method: SocksClient.createUDPFrame(options)](#class-method-socksclientcreateudpframedetails) * [Class Method: SocksClient.parseUDPFrame(data)](#class-method-socksclientparseudpframedata) * [Event: 'error'](#event-error) * [Event: 'bound'](#event-bound) * [Event: 'established'](#event-established) * [client.connect()](#clientconnect) * [client.socksClientOptions](#clientconnect) ### SocksClient SocksClient establishes SOCKS proxy connections to remote destination hosts. These proxy connections are fully transparent to the server and once established act as full duplex streams. SOCKS v4, v4a, and v5 are supported, as well as the connect, bind, and associate commands. SocksClient supports creating connections using callbacks, promises, and async/await flow control using two static factory functions createConnection and createConnectionChain. It also internally extends EventEmitter which results in allowing event handling based async flow control. **SOCKS Compatibility Table** | Socks Version | TCP | UDP | IPv4 | IPv6 | Hostname | | --- | :---: | :---: | :---: | :---: | :---: | | SOCKS v4 | ✅ | ❌ | ✅ | ❌ | ❌ | | SOCKS v4a | ✅ | ❌ | ✅ | ❌ | ✅ | | SOCKS v5 | ✅ | ✅ | ✅ | ✅ | ✅ | ### new SocksClient(options) * ```options``` {SocksClientOptions} - An object describing the SOCKS proxy to use, the command to send and establish, and the destination host to connect to. ### SocksClientOptions ```typescript { proxy: { host: '159.203.75.200', // ipv4, ipv6, or hostname port: 1080, type: 5 // Proxy version (4 or 5). For v4a, just use 4. // Optional fields userId: 'some username', // Used for SOCKS4 userId auth, and SOCKS5 user/pass auth in conjunction with password. password: 'some password' // Used in conjunction with userId for user/pass auth for SOCKS5 proxies. }, command: 'connect', // connect, bind, associate destination: { host: '192.30.253.113', // ipv4, ipv6, hostname. Hostnames work with v4a and v5. port: 80 }, // Optional fields timeout: 30000, // How long to wait to establish a proxy connection. (defaults to 30 seconds) set_tcp_nodelay: true // If true, will turn on the underlying sockets TCP_NODELAY option. } ``` ### Class Method: SocksClient.createConnection(options[, callback]) * ```options``` { SocksClientOptions } - An object describing the SOCKS proxy to use, the command to send and establish, and the destination host to connect to. * ```callback``` { Function } - Optional callback function that is called when the proxy connection is established, or an error occurs. * ```returns``` { Promise } - A Promise is returned that is resolved when the proxy connection is established, or rejected when an error occurs. Creates a new proxy connection through the given proxy to the given destination host. This factory function supports callbacks and promises for async flow control. **Note:** If a callback function is provided, the promise will always resolve regardless of an error occurring. Please be sure to exclusively use either promises or callbacks when using this factory function. 
```typescript
const options = {
  proxy: {
    host: '159.203.75.200', // ipv4, ipv6, or hostname
    port: 1080,
    type: 5 // Proxy version (4 or 5)
  },

  command: 'connect', // connect, bind, associate

  destination: {
    host: '192.30.253.113', // ipv4, ipv6, or hostname
    port: 80
  }
}

// Await/Async (uses a Promise)
try {
  const info = await SocksClient.createConnection(options);
  console.log(info.socket);
  // <Socket ...> (this is a raw net.Socket that is established to the destination host through the given proxy server)
} catch (err) {
  // Handle error...
}

// Promise
SocksClient.createConnection(options)
  .then(info => {
    console.log(info);
    /*
    {
      socket: <Socket ...>, // Raw net.Socket
    }
    */
  })
  .catch(err => {
    // Handle error...
  });

// Callback
SocksClient.createConnection(options, (err, info) => {
  if (!err) {
    console.log(info);
    /*
    {
      socket: <Socket ...>, // Raw net.Socket
    }
    */
  } else {
    // Handle error...
  }
});
```

### Class Method: SocksClient.createConnectionChain(options[, callback])

* ```options``` { SocksClientChainOptions } - An object describing a list of SOCKS proxies to use, the command to send and establish, and the destination host to connect to.
* ```callback``` { Function } - Optional callback function that is called when the proxy connection chain is established, or an error occurs.
* ```returns``` { Promise } - A Promise is returned that is resolved when the proxy connection chain is established, or rejected when an error occurs.

Creates a new proxy connection chain through a list of at least two SOCKS proxies to the given destination host. This factory method supports callbacks and promises for async flow control.

**Note:** If a callback function is provided, the promise will always resolve regardless of an error occurring. Please be sure to exclusively use either promises or callbacks when using this factory function.

**Note:** At least two proxies must be provided for the chain to be established.

```typescript
const options = {
  proxies: [ // The chain order is the order in the proxies array, meaning the last proxy will establish a connection to the destination.
    {
      host: '159.203.75.235', // ipv4, ipv6, or hostname
      port: 1081,
      type: 5
    },
    {
      host: '104.131.124.203', // ipv4, ipv6, or hostname
      port: 1081,
      type: 5
    }
  ],

  command: 'connect', // Only connect is supported in chaining mode.

  destination: {
    host: '192.30.253.113', // ipv4, ipv6, hostname
    port: 80
  }
}
```

### Class Method: SocksClient.createUDPFrame(details)

* ```details``` { SocksUDPFrameDetails } - An object containing the remote host, frame number, and frame data to use when creating a SOCKS UDP frame packet.
* ```returns``` { Buffer } - A Buffer containing all of the UDP frame data.

Creates a SOCKS UDP frame relay packet that is sent and received via a SOCKS proxy when using the associate command for UDP packet forwarding.

**SocksUDPFrameDetails**

```typescript
{
  frameNumber: 0, // The frame number (used for breaking up larger packets)

  remoteHost: { // The remote host to have the proxy send data to, or the remote host that sent this data.
    host: '1.2.3.4',
    port: 1234
  },

  data: <Buffer 01 02 03 04...> // A Buffer instance of data to include in the packet (actual data sent to the remote host)
}

interface SocksUDPFrameDetails {
  // The frame number of the packet.
  frameNumber?: number;

  // The remote host.
  remoteHost: SocksRemoteHost;

  // The packet data.
data: Buffer; } ``` ### Class Method: SocksClient.parseUDPFrame(data) * ```data``` { Buffer } - A Buffer instance containing SOCKS UDP frame data to parse. * ```returns``` { SocksUDPFrameDetails } - An object containing the remote host, frame number, and frame data of the SOCKS UDP frame. ```typescript const frame = SocksClient.parseUDPFrame(data); console.log(frame); /* { frameNumber: 0, remoteHost: { host: '1.2.3.4', port: 1234 }, data: <Buffer 01 02 03 04 ...> } */ ``` Parses a Buffer instance and returns the parsed SocksUDPFrameDetails object. ## Event: 'error' * ```err``` { SocksClientError } - An Error object containing an error message and the original SocksClientOptions. This event is emitted if an error occurs when trying to establish the proxy connection. ## Event: 'bound' * ```info``` { SocksClientBoundEvent } An object containing a Socket and SocksRemoteHost info. This event is emitted when using the BIND command on a remote SOCKS proxy server. This event indicates the proxy server is now listening for incoming connections on a specified port. **SocksClientBoundEvent** ```typescript { socket: net.Socket, // The underlying raw Socket remoteHost: { host: '1.2.3.4', // The remote host that is listening (usually the proxy itself) port: 4444 // The remote port the proxy is listening on for incoming connections (when using BIND). } } ``` ## Event: 'established' * ```info``` { SocksClientEstablishedEvent } An object containing a Socket and SocksRemoteHost info. This event is emitted when the following conditions are met: 1. When using the CONNECT command, and a proxy connection has been established to the remote host. 2. When using the BIND command, and an incoming connection has been accepted by the proxy and a TCP relay has been established. 3. When using the ASSOCIATE command, and a UDP relay has been established. When using BIND, 'bound' is first emitted to indicate the SOCKS server is waiting for an incoming connection, and provides the remote port the SOCKS server is listening on. When using ASSOCIATE, 'established' is emitted with the remote UDP port the SOCKS server is accepting UDP frame packets on. **SocksClientEstablishedEvent** ```typescript { socket: net.Socket, // The underlying raw Socket remoteHost: { host: '1.2.3.4', // The remote host that is listening (usually the proxy itself) port: 52738 // The remote port the proxy is listening on for incoming connections (when using BIND). } } ``` ## client.connect() Starts connecting to the remote SOCKS proxy server to establish a proxy connection to the destination host. ## client.socksClientOptions * ```returns``` { SocksClientOptions } The options that were passed to the SocksClient. Gets the options that were passed to the SocksClient when it was created. **SocksClientError** ```typescript { // Subclassed from Error. message: 'An error has occurred', options: { // SocksClientOptions } } ``` # Further Reading: Please read the SOCKS 5 specifications for more information on how to use BIND and Associate. http://www.ietf.org/rfc/rfc1928.txt # License This work is licensed under the [MIT license](http://en.wikipedia.org/wiki/MIT_License). # minimatch A minimal matching utility. [![Build Status](https://secure.travis-ci.org/isaacs/minimatch.svg)](http://travis-ci.org/isaacs/minimatch) This is the matching library used internally by npm. It works by converting glob expressions into JavaScript `RegExp` objects. ## Usage ```javascript var minimatch = require("minimatch") minimatch("bar.foo", "*.foo") // true! 
minimatch("bar.foo", "*.bar") // false! minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy! ``` ## Features Supports these glob features: * Brace Expansion * Extended glob matching * "Globstar" `**` matching See: * `man sh` * `man bash` * `man 3 fnmatch` * `man 5 gitignore` ## Minimatch Class Create a minimatch object by instantiating the `minimatch.Minimatch` class. ```javascript var Minimatch = require("minimatch").Minimatch var mm = new Minimatch(pattern, options) ``` ### Properties * `pattern` The original pattern the minimatch object represents. * `options` The options supplied to the constructor. * `set` A 2-dimensional array of regexp or string expressions. Each row in the array corresponds to a brace-expanded pattern. Each item in the row corresponds to a single path-part. For example, the pattern `{a,b/c}/d` would expand to a set of patterns like: [ [ a, d ] , [ b, c, d ] ] If a portion of the pattern doesn't have any "magic" in it (that is, it's something like `"foo"` rather than `fo*o?`), then it will be left as a string rather than converted to a regular expression. * `regexp` Created by the `makeRe` method. A single regular expression expressing the entire pattern. This is useful in cases where you wish to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled. * `negate` True if the pattern is negated. * `comment` True if the pattern is a comment. * `empty` True if the pattern is `""`. ### Methods * `makeRe` Generate the `regexp` member if necessary, and return it. Will return `false` if the pattern is invalid. * `match(fname)` Return true if the filename matches the pattern, or false otherwise. * `matchOne(fileArray, patternArray, partial)` Take a `/`-split filename, and match it against a single row in the `regExpSet`. This method is mainly for internal use, but is exposed so that it can be used by a glob-walker that needs to avoid excessive filesystem calls. All other methods are internal, and will be called as necessary. ### minimatch(path, pattern, options) Main export. Tests a path against the pattern using the options. ```javascript var isJS = minimatch(file, "*.js", { matchBase: true }) ``` ### minimatch.filter(pattern, options) Returns a function that tests its supplied argument, suitable for use with `Array.filter`. Example: ```javascript var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true})) ``` ### minimatch.match(list, pattern, options) Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself. ```javascript var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})) ``` ### minimatch.makeRe(pattern, options) Make a regular expression object from the pattern. ## Options All options are `false` by default. ### debug Dump a ton of stuff to stderr. ### nobrace Do not expand `{a,b}` and `{1..3}` brace sets. ### noglobstar Disable `**` matching against multiple folder names. ### dot Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot. Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot` is set. ### noext Disable "extglob" style patterns like `+(a|b)`. ### nocase Perform a case-insensitive match. ### nonull When a match is not found by `minimatch.match`, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches. 
### matchBase If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. ### nocomment Suppress the behavior of treating `#` at the start of a pattern as a comment. ### nonegate Suppress the behavior of treating a leading `!` character as negation. ### flipNegate Returns from negate expressions the same as if they were not negated. (Ie, true on a hit, false on a miss.) ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional. If the pattern starts with a `!` character, then it is negated. Set the `nonegate` flag to suppress this behavior, and treat leading `!` characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like `!(a|B)`. Multiple `!` characters at the start of a pattern will negate the pattern multiple times. If a pattern starts with `#`, then it is treated as a comment, and will not match anything. Use `\#` to match a literal `#` at the start of a line, or set the `nocomment` flag to suppress this behavior. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.1, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. If an escaped pattern has no matches, and the `nonull` flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, `minimatch.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. agent-base ========== ### Turn a function into an [`http.Agent`][http.Agent] instance [![Build Status](https://travis-ci.org/TooTallNate/node-agent-base.svg?branch=master)](https://travis-ci.org/TooTallNate/node-agent-base) This module provides an `http.Agent` generator. That is, you pass it an async callback function, and it returns a new `http.Agent` instance that will invoke the given callback function when sending outbound HTTP requests. #### Some subclasses: Here's some more interesting uses of `agent-base`. Send a pull request to list yours! * [`http-proxy-agent`][http-proxy-agent]: An HTTP(s) proxy `http.Agent` implementation for HTTP endpoints * [`https-proxy-agent`][https-proxy-agent]: An HTTP(s) proxy `http.Agent` implementation for HTTPS endpoints * [`pac-proxy-agent`][pac-proxy-agent]: A PAC file proxy `http.Agent` implementation for HTTP and HTTPS * [`socks-proxy-agent`][socks-proxy-agent]: A SOCKS (v4a) proxy `http.Agent` implementation for HTTP and HTTPS Installation ------------ Install with `npm`: ``` bash $ npm install agent-base ``` Example ------- Here's a minimal example that creates a new `net.Socket` connection to the server for every HTTP request (i.e. 
the equivalent of `agent: false` option): ```js var net = require('net'); var tls = require('tls'); var url = require('url'); var http = require('http'); var agent = require('agent-base'); var endpoint = 'http://nodejs.org/api/'; var parsed = url.parse(endpoint); // This is the important part! parsed.agent = agent(function (req, opts) { var socket; // `secureEndpoint` is true when using the https module if (opts.secureEndpoint) { socket = tls.connect(opts); } else { socket = net.connect(opts); } return socket; }); // Everything else works just like normal... http.get(parsed, function (res) { console.log('"response" event!', res.headers); res.pipe(process.stdout); }); ``` Returning a Promise or using an `async` function is also supported: ```js agent(async function (req, opts) { await sleep(1000); // etc… }); ``` Return another `http.Agent` instance to "pass through" the responsibility for that HTTP request to that agent: ```js agent(function (req, opts) { return opts.secureEndpoint ? https.globalAgent : http.globalAgent; }); ``` API --- ## Agent(Function callback[, Object options]) → [http.Agent][] Creates a base `http.Agent` that will execute the callback function `callback` for every HTTP request that it is used as the `agent` for. The callback function is responsible for creating a `stream.Duplex` instance of some kind that will be used as the underlying socket in the HTTP request. The `options` object accepts the following properties: * `timeout` - Number - Timeout for the `callback()` function in milliseconds. Defaults to Infinity (optional). The callback function should have the following signature: ### callback(http.ClientRequest req, Object options, Function cb) → undefined The ClientRequest `req` can be accessed to read request headers and and the path, etc. The `options` object contains the options passed to the `http.request()`/`https.request()` function call, and is formatted to be directly passed to `net.connect()`/`tls.connect()`, or however else you want a Socket to be created. Pass the created socket to the callback function `cb` once created, and the HTTP request will continue to proceed. If the `https` module is used to invoke the HTTP request, then the `secureEndpoint` property on `options` _will be set to `true`_. License ------- (The MIT License) Copyright (c) 2013 Nathan Rajlich &lt;nathan@tootallnate.net&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
[http-proxy-agent]: https://github.com/TooTallNate/node-http-proxy-agent
[https-proxy-agent]: https://github.com/TooTallNate/node-https-proxy-agent
[pac-proxy-agent]: https://github.com/TooTallNate/node-pac-proxy-agent
[socks-proxy-agent]: https://github.com/TooTallNate/node-socks-proxy-agent
[http.Agent]: https://nodejs.org/api/http.html#http_class_http_agent

# run-queue

A promise based, dynamic priority queue runner, with concurrency limiting.

```js
const RunQueue = require('run-queue')

const queue = new RunQueue({
  maxConcurrency: 1
})

queue.add(1, example, [-1])
for (let ii = 0; ii < 5; ++ii) {
  queue.add(0, example, [ii])
}
const finished = []
queue.run().then(() => {
  console.log(finished)
})

function example (num, next) {
  setTimeout(() => {
    finished.push(num)
    next()
  }, 5 - Math.abs(num))
}
```

would output

```
[ 0, 1, 2, 3, 4, -1 ]
```

If you bump concurrency to `2`, then you get:

```
[ 1, 0, 3, 2, 4, -1 ]
```

The concurrency means that they don't finish in order, because some take longer than others. Each priority level must finish entirely before the next priority level is run. See [PRIORITIES](https://github.com/iarna/run-queue#priorities) below. This is even true if concurrency is set high enough that all of the regular queue can execute at once, for instance, with `maxConcurrency: 10`:

```
[ 4, 3, 2, 1, 0, -1 ]
```

## API

### const queue = new RunQueue(options)

Create a new queue. Options may contain:

* maxConcurrency - (Default: `1`) The maximum number of jobs to execute at once.
* Promise - (Default: global.Promise) The promise implementation to use.

### queue.add (prio, fn, args)

Add a new job to the end of the queue at priority `prio` that will run `fn` with `args`. If `fn` is async then it should return a Promise.

### queue.run ()

Start running the job queue. Returns a Promise that resolves when either all the jobs are complete or a job ends in error (throws or returns a rejected promise). If a job ended in error then this Promise will be rejected with that error and no further queue running will be done.

## PRIORITIES

Priorities are any integer value >= 0. Lowest is executed first.

Priorities essentially represent distinct job queues. All jobs in a queue must complete before the next highest priority job queue is executed.

This means that if you have two queues, `0` and `1`, then ALL jobs in `0` must complete before ANY execute in `1`. If you add new `0` level jobs while `1` level jobs are running then it will switch back to processing the `0` queue and won't execute any more `1` jobs till all of the new `0` jobs complete.

# is-typedarray

[![locked](http://badges.github.io/stability-badges/dist/locked.svg)](http://github.com/badges/stability-badges)

Detect whether or not an object is a [Typed Array](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Typed_arrays).

## Usage

[![NPM](https://nodei.co/npm/is-typedarray.png)](https://nodei.co/npm/is-typedarray/)

### isTypedArray(array)

Returns `true` when array is a Typed Array, and `false` when it is not.

## License

MIT. See [LICENSE.md](http://github.com/hughsk/is-typedarray/blob/master/LICENSE.md) for details.
# psl (Public Suffix List) [![NPM](https://nodei.co/npm/psl.png?downloads=true&downloadRank=true)](https://nodei.co/npm/psl/) [![Greenkeeper badge](https://badges.greenkeeper.io/wrangr/psl.svg)](https://greenkeeper.io/) [![Build Status](https://travis-ci.org/wrangr/psl.svg?branch=master)](https://travis-ci.org/wrangr/psl) [![devDependency Status](https://david-dm.org/wrangr/psl/dev-status.png)](https://david-dm.org/wrangr/psl#info=devDependencies) `psl` is a `JavaScript` domain name parser based on the [Public Suffix List](https://publicsuffix.org/). This implementation is tested against the [test data hosted by Mozilla](http://mxr.mozilla.org/mozilla-central/source/netwerk/test/unit/data/test_psl.txt?raw=1) and kindly provided by [Comodo](https://www.comodo.com/). ## What is the Public Suffix List? The Public Suffix List is a cross-vendor initiative to provide an accurate list of domain name suffixes. The Public Suffix List is an initiative of the Mozilla Project, but is maintained as a community resource. It is available for use in any software, but was originally created to meet the needs of browser manufacturers. A "public suffix" is one under which Internet users can directly register names. Some examples of public suffixes are ".com", ".co.uk" and "pvt.k12.wy.us". The Public Suffix List is a list of all known public suffixes. Source: http://publicsuffix.org ## Installation ### Node.js ```sh npm install --save psl ``` ### Browser Download [psl.min.js](https://raw.githubusercontent.com/wrangr/psl/master/dist/psl.min.js) and include it in a script tag. ```html <script src="psl.min.js"></script> ``` This script is browserified and wrapped in a [umd](https://github.com/umdjs/umd) wrapper so you should be able to use it standalone or together with a module loader. ## API ### `psl.parse(domain)` Parse domain based on Public Suffix List. Returns an `Object` with the following properties: * `tld`: Top level domain (this is the _public suffix_). * `sld`: Second level domain (the first private part of the domain name). * `domain`: The domain name is the `sld` + `tld`. * `subdomain`: Optional parts left of the domain. #### Example: ```js var psl = require('psl'); // Parse domain without subdomain var parsed = psl.parse('google.com'); console.log(parsed.tld); // 'com' console.log(parsed.sld); // 'google' console.log(parsed.domain); // 'google.com' console.log(parsed.subdomain); // null // Parse domain with subdomain var parsed = psl.parse('www.google.com'); console.log(parsed.tld); // 'com' console.log(parsed.sld); // 'google' console.log(parsed.domain); // 'google.com' console.log(parsed.subdomain); // 'www' // Parse domain with nested subdomains var parsed = psl.parse('a.b.c.d.foo.com'); console.log(parsed.tld); // 'com' console.log(parsed.sld); // 'foo' console.log(parsed.domain); // 'foo.com' console.log(parsed.subdomain); // 'a.b.c.d' ``` ### `psl.get(domain)` Get domain name, `sld` + `tld`. Returns `null` if not valid. #### Example: ```js var psl = require('psl'); // null input. psl.get(null); // null // Mixed case. psl.get('COM'); // null psl.get('example.COM'); // 'example.com' psl.get('WwW.example.COM'); // 'example.com' // Unlisted TLD. psl.get('example'); // null psl.get('example.example'); // 'example.example' psl.get('b.example.example'); // 'example.example' psl.get('a.b.example.example'); // 'example.example' // TLD with only 1 rule. 
psl.get('biz'); // null psl.get('domain.biz'); // 'domain.biz' psl.get('b.domain.biz'); // 'domain.biz' psl.get('a.b.domain.biz'); // 'domain.biz' // TLD with some 2-level rules. psl.get('uk.com'); // null); psl.get('example.uk.com'); // 'example.uk.com'); psl.get('b.example.uk.com'); // 'example.uk.com'); // More complex TLD. psl.get('c.kobe.jp'); // null psl.get('b.c.kobe.jp'); // 'b.c.kobe.jp' psl.get('a.b.c.kobe.jp'); // 'b.c.kobe.jp' psl.get('city.kobe.jp'); // 'city.kobe.jp' psl.get('www.city.kobe.jp'); // 'city.kobe.jp' // IDN labels. psl.get('食狮.com.cn'); // '食狮.com.cn' psl.get('食狮.公司.cn'); // '食狮.公司.cn' psl.get('www.食狮.公司.cn'); // '食狮.公司.cn' // Same as above, but punycoded. psl.get('xn--85x722f.com.cn'); // 'xn--85x722f.com.cn' psl.get('xn--85x722f.xn--55qx5d.cn'); // 'xn--85x722f.xn--55qx5d.cn' psl.get('www.xn--85x722f.xn--55qx5d.cn'); // 'xn--85x722f.xn--55qx5d.cn' ``` ### `psl.isValid(domain)` Check whether a domain has a valid Public Suffix. Returns a `Boolean` indicating whether the domain has a valid Public Suffix. #### Example ```js var psl = require('psl'); psl.isValid('google.com'); // true psl.isValid('www.google.com'); // true psl.isValid('x.yz'); // false ``` ## Testing and Building Test are written using [`mocha`](https://mochajs.org/) and can be run in two different environments: `node` and `phantomjs`. ```sh # This will run `eslint`, `mocha` and `karma`. npm test # Individual test environments # Run tests in node only. ./node_modules/.bin/mocha test # Run tests in phantomjs only. ./node_modules/.bin/karma start ./karma.conf.js --single-run # Build data (parse raw list) and create dist files npm run build ``` Feel free to fork if you see possible improvements! ## Acknowledgements * Mozilla Foundation's [Public Suffix List](https://publicsuffix.org/) * Thanks to Rob Stradling of [Comodo](https://www.comodo.com/) for providing test data. * Inspired by [weppos/publicsuffix-ruby](https://github.com/weppos/publicsuffix-ruby) ## License The MIT License (MIT) Copyright (c) 2017 Lupo Montero <lupomontero@gmail.com> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
# ssri [![npm version](https://img.shields.io/npm/v/ssri.svg)](https://npm.im/ssri) [![license](https://img.shields.io/npm/l/ssri.svg)](https://npm.im/ssri) [![Travis](https://img.shields.io/travis/zkat/ssri.svg)](https://travis-ci.org/zkat/ssri) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/zkat/ssri?svg=true)](https://ci.appveyor.com/project/zkat/ssri) [![Coverage Status](https://coveralls.io/repos/github/zkat/ssri/badge.svg?branch=latest)](https://coveralls.io/github/zkat/ssri?branch=latest) [`ssri`](https://github.com/zkat/ssri), short for Standard Subresource Integrity, is a Node.js utility for parsing, manipulating, serializing, generating, and verifying [Subresource Integrity](https://w3c.github.io/webappsec/specs/subresourceintegrity/) hashes. ## Install `$ npm install --save ssri` ## Table of Contents * [Example](#example) * [Features](#features) * [Contributing](#contributing) * [API](#api) * Parsing & Serializing * [`parse`](#parse) * [`stringify`](#stringify) * [`Integrity#concat`](#integrity-concat) * [`Integrity#toString`](#integrity-to-string) * [`Integrity#toJSON`](#integrity-to-json) * [`Integrity#match`](#integrity-match) * [`Integrity#pickAlgorithm`](#integrity-pick-algorithm) * [`Integrity#hexDigest`](#integrity-hex-digest) * Integrity Generation * [`fromHex`](#from-hex) * [`fromData`](#from-data) * [`fromStream`](#from-stream) * [`create`](#create) * Integrity Verification * [`checkData`](#check-data) * [`checkStream`](#check-stream) * [`integrityStream`](#integrity-stream) ### Example ```javascript const ssri = require('ssri') const integrity = 'sha512-9KhgCRIx/AmzC8xqYJTZRrnO8OW2Pxyl2DIMZSBOr0oDvtEFyht3xpp71j/r/pAe1DM+JI/A+line3jUBgzQ7A==?foo' // Parsing and serializing const parsed = ssri.parse(integrity) ssri.stringify(parsed) // === integrity (works on non-Integrity objects) parsed.toString() // === integrity // Async stream functions ssri.checkStream(fs.createReadStream('./my-file'), integrity).then(...) ssri.fromStream(fs.createReadStream('./my-file')).then(sri => { sri.toString() === integrity }) fs.createReadStream('./my-file').pipe(ssri.createCheckerStream(sri)) // Sync data functions ssri.fromData(fs.readFileSync('./my-file')) // === parsed ssri.checkData(fs.readFileSync('./my-file'), integrity) // => 'sha512' ``` ### Features * Parses and stringifies SRI strings. * Generates SRI strings from raw data or Streams. * Strict standard compliance. * `?foo` metadata option support. * Multiple entries for the same algorithm. * Object-based integrity hash manipulation. * Small footprint: no dependencies, concise implementation. * Full test coverage. * Customizable algorithm picker. ### Contributing The ssri team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear. ### API #### <a name="parse"></a> `> ssri.parse(sri, [opts]) -> Integrity` Parses `sri` into an `Integrity` data structure. `sri` can be an integrity string, an `Hash`-like with `digest` and `algorithm` fields and an optional `options` field, or an `Integrity`-like object. 
The resulting object will be an `Integrity` instance that has this shape: ```javascript { 'sha1': [{algorithm: 'sha1', digest: 'deadbeef', options: []}], 'sha512': [ {algorithm: 'sha512', digest: 'c0ffee', options: []}, {algorithm: 'sha512', digest: 'bad1dea', options: ['foo']} ], } ``` If `opts.single` is truthy, a single `Hash` object will be returned. That is, a single object that looks like `{algorithm, digest, options}`, as opposed to a larger object with multiple of these. If `opts.strict` is truthy, the resulting object will be filtered such that it strictly follows the Subresource Integrity spec, throwing away any entries with any invalid components. This also means a restricted set of algorithms will be used -- the spec limits them to `sha256`, `sha384`, and `sha512`. Strict mode is recommended if the integrity strings are intended for use in browsers, or in other situations where strict adherence to the spec is needed. ##### Example ```javascript ssri.parse('sha512-9KhgCRIx/AmzC8xqYJTZRrnO8OW2Pxyl2DIMZSBOr0oDvtEFyht3xpp71j/r/pAe1DM+JI/A+line3jUBgzQ7A==?foo') // -> Integrity object ``` #### <a name="stringify"></a> `> ssri.stringify(sri, [opts]) -> String` This function is identical to [`Integrity#toString()`](#integrity-to-string), except it can be used on _any_ object that [`parse`](#parse) can handle -- that is, a string, an `Hash`-like, or an `Integrity`-like. The `opts.sep` option defines the string to use when joining multiple entries together. To be spec-compliant, this _must_ be whitespace. The default is a single space (`' '`). If `opts.strict` is true, the integrity string will be created using strict parsing rules. See [`ssri.parse`](#parse). ##### Example ```javascript // Useful for cleaning up input SRI strings: ssri.stringify('\n\rsha512-foo\n\t\tsha384-bar') // -> 'sha512-foo sha384-bar' // Hash-like: only a single entry. ssri.stringify({ algorithm: 'sha512', digest:'9KhgCRIx/AmzC8xqYJTZRrnO8OW2Pxyl2DIMZSBOr0oDvtEFyht3xpp71j/r/pAe1DM+JI/A+line3jUBgzQ7A==', options: ['foo'] }) // -> // 'sha512-9KhgCRIx/AmzC8xqYJTZRrnO8OW2Pxyl2DIMZSBOr0oDvtEFyht3xpp71j/r/pAe1DM+JI/A+line3jUBgzQ7A==?foo' // Integrity-like: full multi-entry syntax. Similar to output of `ssri.parse` ssri.stringify({ 'sha512': [ { algorithm: 'sha512', digest:'9KhgCRIx/AmzC8xqYJTZRrnO8OW2Pxyl2DIMZSBOr0oDvtEFyht3xpp71j/r/pAe1DM+JI/A+line3jUBgzQ7A==', options: ['foo'] } ] }) // -> // 'sha512-9KhgCRIx/AmzC8xqYJTZRrnO8OW2Pxyl2DIMZSBOr0oDvtEFyht3xpp71j/r/pAe1DM+JI/A+line3jUBgzQ7A==?foo' ``` #### <a name="integrity-concat"></a> `> Integrity#concat(otherIntegrity, [opts]) -> Integrity` Concatenates an `Integrity` object with another IntegrityLike, or an integrity string. This is functionally equivalent to concatenating the string format of both integrity arguments, and calling [`ssri.parse`](#ssri-parse) on the new string. If `opts.strict` is true, the new `Integrity` will be created using strict parsing rules. See [`ssri.parse`](#parse). ##### Example ```javascript // This will combine the integrity checks for two different versions of // your index.js file so you can use a single integrity string and serve // either of these to clients, from a single `<script>` tag. const desktopIntegrity = ssri.fromData(fs.readFileSync('./index.desktop.js')) const mobileIntegrity = ssri.fromData(fs.readFileSync('./index.mobile.js')) // Note that browsers (and ssri) will succeed as long as ONE of the entries // for the *prioritized* algorithm succeeds. 
That is, in order for this fallback // to work, both desktop and mobile *must* use the same `algorithm` values. desktopIntegrity.concat(mobileIntegrity) ``` #### <a name="integrity-to-string"></a> `> Integrity#toString([opts]) -> String` Returns the string representation of an `Integrity` object. All hash entries will be concatenated in the string by `opts.sep`, which defaults to `' '`. If you want to serialize an object that didn't come from an `ssri` function, use [`ssri.stringify()`](#stringify). If `opts.strict` is true, the integrity string will be created using strict parsing rules. See [`ssri.parse`](#parse). ##### Example ```javascript const integrity = 'sha512-9KhgCRIx/AmzC8xqYJTZRrnO8OW2Pxyl2DIMZSBOr0oDvtEFyht3xpp71j/r/pAe1DM+JI/A+line3jUBgzQ7A==?foo' ssri.parse(integrity).toString() === integrity ``` #### <a name="integrity-to-json"></a> `> Integrity#toJSON() -> String` Returns the string representation of an `Integrity` object. All hash entries will be concatenated in the string by `' '`. This is a convenience method so you can pass an `Integrity` object directly to `JSON.stringify`. For more info check out [toJSON() behavior on mdn](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#toJSON%28%29_behavior). ##### Example ```javascript const integrity = '"sha512-9KhgCRIx/AmzC8xqYJTZRrnO8OW2Pxyl2DIMZSBOr0oDvtEFyht3xpp71j/r/pAe1DM+JI/A+line3jUBgzQ7A==?foo"' JSON.stringify(ssri.parse(integrity)) === integrity ``` #### <a name="integrity-match"></a> `> Integrity#match(sri, [opts]) -> Hash | false` Returns the matching (truthy) hash if `Integrity` matches the argument passed as `sri`, which can be anything that [`parse`](#parse) will accept. `opts` will be passed through to `parse` and [`pickAlgorithm()`](#integrity-pick-algorithm). ##### Example ```javascript const integrity = 'sha512-9KhgCRIx/AmzC8xqYJTZRrnO8OW2Pxyl2DIMZSBOr0oDvtEFyht3xpp71j/r/pAe1DM+JI/A+line3jUBgzQ7A==' ssri.parse(integrity).match(integrity) // Hash { // digest: '9KhgCRIx/AmzC8xqYJTZRrnO8OW2Pxyl2DIMZSBOr0oDvtEFyht3xpp71j/r/pAe1DM+JI/A+line3jUBgzQ7A==' // algorithm: 'sha512' // } ssri.parse(integrity).match('sha1-deadbeef') // false ``` #### <a name="integrity-pick-algorithm"></a> `> Integrity#pickAlgorithm([opts]) -> String` Returns the "best" algorithm from those available in the integrity object. If `opts.pickAlgorithm` is provided, it will be passed two algorithms as arguments. ssri will prioritize whichever of the two algorithms is returned by this function. Note that the function may be called multiple times, and it **must** return one of the two algorithms provided. By default, ssri will make a best-effort to pick the strongest/most reliable of the given algorithms. It may intentionally deprioritize algorithms with known vulnerabilities. ##### Example ```javascript ssri.parse('sha1-WEakDigEST sha512-yzd8ELD1piyANiWnmdnpCL5F52f10UfUdEkHywVZeqTt0ymgrxR63Qz0GB7TKPoeeZQmWCaz7T1').pickAlgorithm() // sha512 ``` #### <a name="integrity-hex-digest"></a> `> Integrity#hexDigest() -> String` `Integrity` is assumed to be either a single-hash `Integrity` instance, or a `Hash` instance. Returns its `digest`, converted to a hex representation of the base64 data. ##### Example ```javascript ssri.parse('sha1-deadbeef').hexDigest() // '75e69d6de79f' ``` #### <a name="from-hex"></a> `> ssri.fromHex(hexDigest, algorithm, [opts]) -> Integrity` Creates an `Integrity` object with a single entry, based on a hex-formatted hash. 
This is a utility function to help convert existing shasums to the Integrity format, and is roughly equivalent to something like:

```javascript
algorithm + '-' + Buffer.from(hexDigest, 'hex').toString('base64')
```

`opts.options` may optionally be passed in: it must be an array of option strings that will be added to all integrity hashes generated by `fromHex`. This is a loosely-specified feature of SRIs, and currently has no specified semantics besides being `?`-separated. Use at your own risk, and probably avoid if your integrity strings are meant to be used with browsers.

If `opts.strict` is true, the integrity object will be created using strict parsing rules. See [`ssri.parse`](#parse).

If `opts.single` is true, a single `Hash` object will be returned.

##### Example

```javascript
ssri.fromHex('75e69d6de79f', 'sha1').toString() // 'sha1-deadbeef'
```

#### <a name="from-data"></a> `> ssri.fromData(data, [opts]) -> Integrity`

Creates an `Integrity` object from either string or `Buffer` data, calculating all the requested hashes and adding any specified options to the object.

`opts.algorithms` determines which algorithms to generate hashes for. All results will be included in a single `Integrity` object. The default value for `opts.algorithms` is `['sha512']`. All algorithm strings must be hashes listed in `crypto.getHashes()` for the host Node.js platform.

`opts.options` may optionally be passed in: it must be an array of option strings that will be added to all integrity hashes generated by `fromData`. This is a loosely-specified feature of SRIs, and currently has no specified semantics besides being `?`-separated. Use at your own risk, and probably avoid if your integrity strings are meant to be used with browsers.

If `opts.strict` is true, the integrity object will be created using strict parsing rules. See [`ssri.parse`](#parse).

##### Example

```javascript
const integrityObj = ssri.fromData('foobarbaz', {
  algorithms: ['sha256', 'sha384', 'sha512']
})
integrityObj.toString({sep: '\n'})
// ->
// sha256-l981iLWj8kurw4UbNy8Lpxqdzd7UOxS50Glhv8FwfZ0=
// sha384-irnCxQ0CfQhYGlVAUdwTPC9bF3+YWLxlaDGM4xbYminxpbXEq+D+2GCEBTxcjES9
// sha512-yzd8ELD1piyANiWnmdnpCL5F52f10UfUdEkHywVZeqTt0ymgrxR63Qz0GB7TKPoeeZQmWCaz7T1+9vBnypkYWg==
```

#### <a name="from-stream"></a> `> ssri.fromStream(stream, [opts]) -> Promise<Integrity>`

Returns a Promise of an Integrity object calculated by reading data from a given `stream`.

It accepts both `opts.algorithms` and `opts.options`, which are documented as part of [`ssri.fromData`](#from-data).

Additionally, `opts.Promise` may be passed in to inject a Promise library of choice. By default, ssri will use Node's built-in Promises.

If `opts.strict` is true, the integrity object will be created using strict parsing rules. See [`ssri.parse`](#parse).

##### Example

```javascript
ssri.fromStream(fs.createReadStream('index.js'), {
  algorithms: ['sha1', 'sha512']
}).then(integrity => {
  return ssri.checkStream(fs.createReadStream('index.js'), integrity)
}) // succeeds
```

#### <a name="create"></a> `> ssri.create([opts]) -> <Hash>`

Returns a Hash object with `update(<Buffer or string>[,enc])` and `digest()` methods.

The Hash object provides the same methods as [crypto class Hash](https://nodejs.org/dist/latest-v6.x/docs/api/crypto.html#crypto_class_hash). `digest()` accepts no arguments and returns an Integrity object calculated by reading data from calls to update.
It accepts both `opts.algorithms` and `opts.options`, which are documented as part of [`ssri.fromData`](#from-data). If `opts.strict` is true, the integrity object will be created using strict parsing rules. See [`ssri.parse`](#parse). ##### Example ```javascript const integrity = ssri.create().update('foobarbaz').digest() integrity.toString() // -> // sha512-yzd8ELD1piyANiWnmdnpCL5F52f10UfUdEkHywVZeqTt0ymgrxR63Qz0GB7TKPoeeZQmWCaz7T1+9vBnypkYWg== ``` #### <a name="check-data"></a> `> ssri.checkData(data, sri, [opts]) -> Hash|false` Verifies `data` integrity against an `sri` argument. `data` may be either a `String` or a `Buffer`, and `sri` can be any subresource integrity representation that [`ssri.parse`](#parse) can handle. If verification succeeds, `checkData` will return the name of the algorithm that was used for verification (a truthy value). Otherwise, it will return `false`. If `opts.pickAlgorithm` is provided, it will be used by [`Integrity#pickAlgorithm`](#integrity-pick-algorithm) when deciding which of the available digests to match against. If `opts.error` is true, and verification fails, `checkData` will throw either an `EBADSIZE` or an `EINTEGRITY` error, instead of just returning false. ##### Example ```javascript const data = fs.readFileSync('index.js') ssri.checkData(data, ssri.fromData(data)) // -> 'sha512' ssri.checkData(data, 'sha256-l981iLWj8kurw4UbNy8Lpxqdzd7UOxS50Glhv8FwfZ0') ssri.checkData(data, 'sha1-BaDDigEST') // -> false ssri.checkData(data, 'sha1-BaDDigEST', {error: true}) // -> Error! EINTEGRITY ``` #### <a name="check-stream"></a> `> ssri.checkStream(stream, sri, [opts]) -> Promise<Hash>` Verifies the contents of `stream` against an `sri` argument. `stream` will be consumed in its entirety by this process. `sri` can be any subresource integrity representation that [`ssri.parse`](#parse) can handle. `checkStream` will return a Promise that either resolves to the `Hash` that succeeded verification, or, if the verification fails or an error happens with `stream`, the Promise will be rejected. If the Promise is rejected because verification failed, the returned error will have `err.code` as `EINTEGRITY`. If `opts.size` is given, it will be matched against the stream size. An error with `err.code` `EBADSIZE` will be returned by a rejection if the expected size and actual size fail to match. If `opts.pickAlgorithm` is provided, it will be used by [`Integrity#pickAlgorithm`](#integrity-pick-algorithm) when deciding which of the available digests to match against. ##### Example ```javascript const integrity = ssri.fromData(fs.readFileSync('index.js')) ssri.checkStream( fs.createReadStream('index.js'), integrity ) // -> // Promise<{ // algorithm: 'sha512', // digest: 'sha512-yzd8ELD1piyANiWnmdnpCL5F52f10UfUdEkHywVZeqTt0ymgrxR63Qz0GB7TKPoeeZQmWCaz7T1' // }> ssri.checkStream( fs.createReadStream('index.js'), 'sha256-l981iLWj8kurw4UbNy8Lpxqdzd7UOxS50Glhv8FwfZ0' ) // -> Promise<Hash> ssri.checkStream( fs.createReadStream('index.js'), 'sha1-BaDDigEST' ) // -> Promise<Error<{code: 'EINTEGRITY'}>> ``` #### <a name="integrity-stream"></a> `> integrityStream([opts]) -> IntegrityStream` Returns a `Transform` stream that data can be piped through in order to generate and optionally check data integrity for piped data. When the stream completes successfully, it emits `size` and `integrity` events, containing the total number of bytes processed and a calculated `Integrity` instance based on stream data, respectively. 
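As a small, hedged sketch of those completion events (it assumes a readable `index.js` in the working directory, like the other examples in this README):

```javascript
const fs = require('fs')
const ssri = require('ssri')

fs.createReadStream('index.js')
  .pipe(ssri.integrityStream())
  .on('size', bytes => console.log('bytes processed:', bytes))
  .on('integrity', sri => console.log('integrity:', sri.toString()))
  .resume() // consume the pass-through data so the stream can finish and emit the events
```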
If `opts.algorithms` is passed in, the listed algorithms will be calculated when generating the final `Integrity` instance. The default is `['sha512']`. If `opts.single` is passed in, a single `Hash` instance will be returned. If `opts.integrity` is passed in, it should be an `integrity` value understood by [`parse`](#parse) that the stream will check the data against. If verification succeeds, the integrity stream will emit a `verified` event whose value is a single `Hash` object that is the one that succeeded verification. If verification fails, the stream will error with an `EINTEGRITY` error code. If `opts.size` is given, it will be matched against the stream size. An error with `err.code` `EBADSIZE` will be emitted by the stream if the expected size and actual size fail to match. If `opts.pickAlgorithm` is provided, it will be passed two algorithms as arguments. ssri will prioritize whichever of the two algorithms is returned by this function. Note that the function may be called multiple times, and it **must** return one of the two algorithms provided. By default, ssri will make a best-effort to pick the strongest/most reliable of the given algorithms. It may intentionally deprioritize algorithms with known vulnerabilities. ##### Example ```javascript const integrity = ssri.fromData(fs.readFileSync('index.js')) fs.createReadStream('index.js') .pipe(ssri.integrityStream({integrity})) ``` # err-code [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Dependency status][david-dm-image]][david-dm-url] [![Dev Dependency status][david-dm-dev-image]][david-dm-dev-url] [npm-url]:https://npmjs.org/package/err-code [downloads-image]:http://img.shields.io/npm/dm/err-code.svg [npm-image]:http://img.shields.io/npm/v/err-code.svg [travis-url]:https://travis-ci.org/IndigoUnited/js-err-code [travis-image]:http://img.shields.io/travis/IndigoUnited/js-err-code/master.svg [david-dm-url]:https://david-dm.org/IndigoUnited/js-err-code [david-dm-image]:https://img.shields.io/david/IndigoUnited/js-err-code.svg [david-dm-dev-url]:https://david-dm.org/IndigoUnited/js-err-code#info=devDependencies [david-dm-dev-image]:https://img.shields.io/david/dev/IndigoUnited/js-err-code.svg Create new error instances with a code and additional properties. ## Installation `$ npm install err-code` - `NPM` `$ bower install err-code` - `bower` The browser file is named index.umd.js which supports CommonJS, AMD and globals (errCode). ## Why I find myself doing this repeatedly: ```js var err = new Error('My message'); err.code = 'SOMECODE'; err.detail = 'Additional information about the error'; throw err; ``` ## Usage Simple usage. ```js var errcode = require('err-code'); // fill error with message + code throw errcode(new Error('My message'), 'ESOMECODE'); // fill error with message + code + props throw errcode(new Error('My message'), 'ESOMECODE', { detail: 'Additional information about the error' }); // fill error with message + props throw errcode(new Error('My message'), { detail: 'Additional information about the error' }); // You may also pass a string in the first argument and an error will be automatically created // for you, though the stack trace will contain err-code in it. 
// create error with message + code throw errcode('My message', 'ESOMECODE'); // create error with message + code + props throw errcode('My message', 'ESOMECODE', { detail: 'Additional information about the error' }); // create error with message + props throw errcode('My message', { detail: 'Additional information about the error' }); ``` ## Tests `$ npm test` ## License Released under the [MIT License](http://www.opensource.org/licenses/mit-license.php). oauth-sign ========== OAuth 1 signing. Formerly a vendor lib in mikeal/request, now a standalone module. ## Supported Method Signatures - HMAC-SHA1 - HMAC-SHA256 - RSA-SHA1 - PLAINTEXT validate-npm-package-license ============================ Give me a string and I'll tell you if it's a valid npm package license string. ```javascript var valid = require('validate-npm-package-license'); ``` SPDX license identifiers are valid license strings: ```javascript var assert = require('assert'); var validSPDXExpression = { validForNewPackages: true, validForOldPackages: true, spdx: true }; assert.deepEqual(valid('MIT'), validSPDXExpression); assert.deepEqual(valid('BSD-2-Clause'), validSPDXExpression); assert.deepEqual(valid('Apache-2.0'), validSPDXExpression); assert.deepEqual(valid('ISC'), validSPDXExpression); ``` The function will return a warning and suggestion for nearly-correct license identifiers: ```javascript assert.deepEqual( valid('Apache 2.0'), { validForOldPackages: false, validForNewPackages: false, warnings: [ 'license should be ' + 'a valid SPDX license expression (without "LicenseRef"), ' + '"UNLICENSED", or ' + '"SEE LICENSE IN <filename>"', 'license is similar to the valid expression "Apache-2.0"' ] } ); ``` SPDX expressions are valid, too ... ```javascript // Simple SPDX license expression for dual licensing assert.deepEqual( valid('(GPL-3.0-only OR BSD-2-Clause)'), validSPDXExpression ); ``` ... except if they contain `LicenseRef`: ```javascript var warningAboutLicenseRef = { validForOldPackages: false, validForNewPackages: false, spdx: true, warnings: [ 'license should be ' + 'a valid SPDX license expression (without "LicenseRef"), ' + '"UNLICENSED", or ' + '"SEE LICENSE IN <filename>"', ] }; assert.deepEqual( valid('LicenseRef-Made-Up'), warningAboutLicenseRef ); assert.deepEqual( valid('(MIT OR LicenseRef-Made-Up)'), warningAboutLicenseRef ); ``` If you can't describe your licensing terms with standardized SPDX identifiers, put the terms in a file in the package and point users there: ```javascript assert.deepEqual( valid('SEE LICENSE IN LICENSE.txt'), { validForNewPackages: true, validForOldPackages: true, inFile: 'LICENSE.txt' } ); assert.deepEqual( valid('SEE LICENSE IN license.md'), { validForNewPackages: true, validForOldPackages: true, inFile: 'license.md' } ); ``` If there aren't any licensing terms, use `UNLICENSED`: ```javascript var unlicensed = { validForNewPackages: true, validForOldPackages: true, unlicensed: true }; assert.deepEqual(valid('UNLICENSED'), unlicensed); assert.deepEqual(valid('UNLICENCED'), unlicensed); ``` # node-http-signature node-http-signature is a node.js library that has client and server components for Joyent's [HTTP Signature Scheme](http_signing.md). ## Usage Note the example below signs a request with the same key/cert used to start an HTTP server. This is almost certainly not what you actually want, but is just used to illustrate the API calls; you will need to provide your own key management in addition to this library. 
### Client ```js var fs = require('fs'); var https = require('https'); var httpSignature = require('http-signature'); var key = fs.readFileSync('./key.pem', 'ascii'); var options = { host: 'localhost', port: 8443, path: '/', method: 'GET', headers: {} }; // Adds a 'Date' header in, signs it, and adds the // 'Authorization' header in. var req = https.request(options, function(res) { console.log(res.statusCode); }); httpSignature.sign(req, { key: key, keyId: './cert.pem' }); req.end(); ``` ### Server ```js var fs = require('fs'); var https = require('https'); var httpSignature = require('http-signature'); var options = { key: fs.readFileSync('./key.pem'), cert: fs.readFileSync('./cert.pem') }; https.createServer(options, function (req, res) { var rc = 200; var parsed = httpSignature.parseRequest(req); var pub = fs.readFileSync(parsed.keyId, 'ascii'); if (!httpSignature.verifySignature(parsed, pub)) rc = 401; res.writeHead(rc); res.end(); }).listen(8443); ``` ## Installation npm install http-signature ## License MIT. ## Bugs See <https://github.com/joyent/node-http-signature/issues>. # readable-stream ***Node-core v8.11.1 streams for userland*** [![Build Status](https://travis-ci.org/nodejs/readable-stream.svg?branch=master)](https://travis-ci.org/nodejs/readable-stream) [![NPM](https://nodei.co/npm/readable-stream.png?downloads=true&downloadRank=true)](https://nodei.co/npm/readable-stream/) [![NPM](https://nodei.co/npm-dl/readable-stream.png?&months=6&height=3)](https://nodei.co/npm/readable-stream/) [![Sauce Test Status](https://saucelabs.com/browser-matrix/readable-stream.svg)](https://saucelabs.com/u/readable-stream) ```bash npm install --save readable-stream ``` ***Node-core streams for userland*** This package is a mirror of the Streams2 and Streams3 implementations in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.11.1/docs/api/stream.html). If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use **readable-stream** *only* and avoid the *"stream"* module in Node-core, for background see [this blogpost](http://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html). As of version 2.0.0 **readable-stream** uses semantic versioning. # Streams Working Group `readable-stream` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. 
<a name="members"></a> ## Team Members * **Chris Dickinson** ([@chrisdickinson](https://github.com/chrisdickinson)) &lt;christopher.s.dickinson@gmail.com&gt; - Release GPG key: 9554F04D7259F04124DE6B476D5A82AC7E37093B * **Calvin Metcalf** ([@calvinmetcalf](https://github.com/calvinmetcalf)) &lt;calvin.metcalf@gmail.com&gt; - Release GPG key: F3EF5F62A87FC27A22E643F714CE4FF5015AA242 * **Rod Vagg** ([@rvagg](https://github.com/rvagg)) &lt;rod@vagg.org&gt; - Release GPG key: DD8F2338BAE7501E3DD5AC78C273792F7D83545D * **Sam Newman** ([@sonewman](https://github.com/sonewman)) &lt;newmansam@outlook.com&gt; * **Mathias Buus** ([@mafintosh](https://github.com/mafintosh)) &lt;mathiasbuus@gmail.com&gt; * **Domenic Denicola** ([@domenic](https://github.com/domenic)) &lt;d@domenic.me&gt; * **Matteo Collina** ([@mcollina](https://github.com/mcollina)) &lt;matteo.collina@gmail.com&gt; - Release GPG key: 3ABC01543F22DD2239285CDD818674489FBC127E * **Irina Shestak** ([@lrlna](https://github.com/lrlna)) &lt;shestak.irina@gmail.com&gt; # npm-logical-tree [![npm version](https://img.shields.io/npm/v/npm-logical-tree.svg)](https://npm.im/npm-logical-tree) [![license](https://img.shields.io/npm/l/npm-logical-tree.svg)](https://npm.im/npm-logical-tree) [![Travis](https://img.shields.io/travis/npm/logical-tree.svg)](https://travis-ci.org/npm/logical-tree) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/npm/logical-tree?svg=true)](https://ci.appveyor.com/project/npm/logical-tree) [![Coverage Status](https://coveralls.io/repos/github/npm/logical-tree/badge.svg?branch=latest)](https://coveralls.io/github/npm/logical-tree?branch=latest) [`npm-logical-tree`](https://github.com/npm/npm-logical-tree) is a Node.js library that takes the contents of a `package.json` and `package-lock.json` (or `npm-shrinkwrap.json`) and returns a nested tree data structure representing the logical relationships between the different dependencies. ## Install `$ npm install npm-logical-tree` ## Table of Contents * [Example](#example) * [Contributing](#contributing) * [API](#api) * [`logicalTree`](#logical-tree) * [`logicalTree.node`](#make-node) * [`tree.isRoot`](#is-root) * [`tree.addDep`](#add-dep) * [`tree.delDep`](#del-dep) * [`tree.getDep`](#get-dep) * [`tree.path`](#path) * [`tree.hasCycle`](#has-cycle) * [`tree.forEach`](#for-each) * [`tree.forEachAsync`](#for-each-async) ### Example ```javascript const fs = require('fs') const logicalTree = require('npm-logical-tree') const pkg = require('./package.json') const pkgLock = require('./package-lock.json') logicalTree(pkg, pkgLock) // returns: LogicalTree { name: 'npm-logical-tree', version: '1.0.0', address: null, optional: false, dev: false, bundled: false, resolved: undefined, integrity: undefined, requiredBy: Set { }, dependencies: Map { 'foo' => LogicalTree { name: 'foo', version: '1.2.3', address: 'foo', optional: false, dev: true, bundled: false, resolved: 'https://registry.npmjs.org/foo/-/foo-1.2.3.tgz', integrity: 'sha1-rYUK/p261/SXByi0suR/7Rw4chw=', dependencies: Map { ... }, requiredBy: Set { ... }, }, ... } } ``` ### Contributing The npm team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear. 
All participants and maintainers in this project are expected to follow [Code of Conduct](CODE_OF_CONDUCT.md), and just generally be excellent to each other. Please refer to the [Changelog](CHANGELOG.md) for project history details, too. Happy hacking! ### API #### <a name="logical-tree"></a> `> logicalTree(pkg, lock) -> LogicalTree` Calculates a logical tree based on a matching `package.json` and `package-lock.json` pair. A "logical tree" is a fully-nested dependency graph for an npm package, as opposed to a physical tree which might be flattened. `logical-tree` will represent deduplicated/flattened nodes using the same object throughout the tree, so duplication can be checked by object identity. ##### Example ```javascript const pkg = require('./package.json') const pkgLock = require('./package-lock.json') logicalTree(pkg, pkgLock) // returns: LogicalTree { name: 'npm-logical-tree', version: '1.0.0', address: null, optional: false, dev: false, bundled: false, resolved: undefined, integrity: undefined, requiredBy: Set { }, dependencies: Map { 'foo' => LogicalTree { name: 'foo', version: '1.2.3', address: 'foo', optional: false, dev: true, bundled: false, resolved: 'https://registry.npmjs.org/foo/-/foo-1.2.3.tgz', integrity: 'sha1-rYUK/p261/SXByi0suR/7Rw4chw=', requiredBy: Set { ... }, dependencies: Map { ... } }, ... } } ``` #### <a name="make-node"></a> `> logicalTree.node(name, [address, [opts]]) -> LogicalTree` Manually creates a new LogicalTree node. ##### Options * `opts.version` - version of the node. * `opts.optional` - is this node an optionalDep? * `opts.dev` - is this node a devDep? * `opts.bundled` - is this bundled? * `opts.resolved` - resolved address. * `opts.integrity` - SRI string. ##### Example ```javascript logicalTree.node('hello', 'subpath:to:@foo/bar', {dev: true}) ``` # libnpmaccess [![npm version](https://img.shields.io/npm/v/libnpmaccess.svg)](https://npm.im/libnpmaccess) [![license](https://img.shields.io/npm/l/libnpmaccess.svg)](https://npm.im/libnpmaccess) [![Travis](https://img.shields.io/travis/npm/libnpmaccess/latest.svg)](https://travis-ci.org/npm/libnpmaccess) [![AppVeyor](https://img.shields.io/appveyor/ci/zkat/libnpmaccess/latest.svg)](https://ci.appveyor.com/project/zkat/libnpmaccess) [![Coverage Status](https://coveralls.io/repos/github/npm/libnpmaccess/badge.svg?branch=latest)](https://coveralls.io/github/npm/libnpmaccess?branch=latest) [`libnpmaccess`](https://github.com/npm/libnpmaccess) is a Node.js library that provides programmatic access to the guts of the npm CLI's `npm access` command and its various subcommands. This includes managing account 2FA, listing packages and permissions, looking at package collaborators, and defining package permissions for users, orgs, and teams. ## Example ```javascript const access = require('libnpmaccess') // List all packages @zkat has access to on the npm registry. 
console.log(Object.keys(await access.lsPackages('zkat'))) ``` ## Table of Contents * [Installing](#install) * [Example](#example) * [Contributing](#contributing) * [API](#api) * [access opts](#opts) * [`public()`](#public) * [`restricted()`](#restricted) * [`grant()`](#grant) * [`revoke()`](#revoke) * [`tfaRequired()`](#tfa-required) * [`tfaNotRequired()`](#tfa-not-required) * [`lsPackages()`](#ls-packages) * [`lsPackages.stream()`](#ls-packages-stream) * [`lsCollaborators()`](#ls-collaborators) * [`lsCollaborators.stream()`](#ls-collaborators-stream) ### Install `$ npm install libnpmaccess` ### Contributing The npm team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear. All participants and maintainers in this project are expected to follow [Code of Conduct](CODE_OF_CONDUCT.md), and just generally be excellent to each other. Please refer to the [Changelog](CHANGELOG.md) for project history details, too. Happy hacking! ### API #### <a name="opts"></a> `opts` for `libnpmaccess` commands `libnpmaccess` uses [`npm-registry-fetch`](https://npm.im/npm-registry-fetch). All options are passed through directly to that library, so please refer to [its own `opts` documentation](https://www.npmjs.com/package/npm-registry-fetch#fetch-options) for options that can be passed in. A couple of options of note for those in a hurry: * `opts.token` - can be passed in and will be used as the authentication token for the registry. For other ways to pass in auth details, see the n-r-f docs. * `opts.otp` - certain operations will require an OTP token to be passed in. If a `libnpmaccess` command fails with `err.code === EOTP`, please retry the request with `{otp: <2fa token>}` * `opts.Promise` - If you pass this in, the Promises returned by `libnpmaccess` commands will use this Promise class instead. For example: `{Promise: require('bluebird')}` #### <a name="public"></a> `> access.public(spec, [opts]) -> Promise` `spec` must be an [`npm-package-arg`](https://npm.im/npm-package-arg)-compatible registry spec. Makes package described by `spec` public. ##### Example ```javascript await access.public('@foo/bar', {token: 'myregistrytoken'}) // `@foo/bar` is now public ``` #### <a name="restricted"></a> `> access.restricted(spec, [opts]) -> Promise` `spec` must be an [`npm-package-arg`](https://npm.im/npm-package-arg)-compatible registry spec. Makes package described by `spec` private/restricted. ##### Example ```javascript await access.restricted('@foo/bar', {token: 'myregistrytoken'}) // `@foo/bar` is now private ``` #### <a name="grant"></a> `> access.grant(spec, team, permissions, [opts]) -> Promise` `spec` must be an [`npm-package-arg`](https://npm.im/npm-package-arg)-compatible registry spec. `team` must be a fully-qualified team name, in the `scope:team` format, with or without the `@` prefix, and the team must be a valid team within that scope. `permissions` must be one of `'read-only'` or `'read-write'`. Grants `read-only` or `read-write` permissions for a certain package to a team. ##### Example ```javascript await access.grant('@foo/bar', '@foo:myteam', 'read-write', { token: 'myregistrytoken' }) // `@foo/bar` is now read/write enabled for the @foo:myteam team. 
```

#### <a name="revoke"></a> `> access.revoke(spec, team, [opts]) -> Promise`

`spec` must be an [`npm-package-arg`](https://npm.im/npm-package-arg)-compatible registry spec.

`team` must be a fully-qualified team name, in the `scope:team` format, with or without the `@` prefix, and the team must be a valid team within that scope.

Removes access to a package from a certain team.

##### Example

```javascript
await access.revoke('@foo/bar', '@foo:myteam', {
  token: 'myregistrytoken'
})
// @foo:myteam can no longer access `@foo/bar`
```

#### <a name="tfa-required"></a> `> access.tfaRequired(spec, [opts]) -> Promise`

`spec` must be an [`npm-package-arg`](https://npm.im/npm-package-arg)-compatible registry spec.

Makes it so publishing or managing a package requires using 2FA tokens to complete operations.

##### Example

```javascript
await access.tfaRequired('lodash', {token: 'myregistrytoken'})
// Publishing or changing dist-tags on `lodash` now requires OTP to be enabled.
```

#### <a name="tfa-not-required"></a> `> access.tfaNotRequired(spec, [opts]) -> Promise`

`spec` must be an [`npm-package-arg`](https://npm.im/npm-package-arg)-compatible registry spec.

Disables the package-level 2FA requirement for `spec`. Note that you will need to pass in an `otp` token in `opts` in order to complete this operation.

##### Example

```javascript
await access.tfaNotRequired('lodash', {otp: '123654', token: 'myregistrytoken'})
// Publishing or editing dist-tags on `lodash` no longer requires OTP to be
// enabled.
```

#### <a name="ls-packages"></a> `> access.lsPackages(entity, [opts]) -> Promise`

`entity` must be either a valid org or user name, or a fully-qualified team name in the `scope:team` format, with or without the `@` prefix.

Lists out packages a user, org, or team has access to, with corresponding permissions. Packages that the access token does not have access to won't be listed.

In order to disambiguate between users and orgs, two requests may end up being made when listing orgs or users.

For a streamed version of these results, see [`access.lsPackages.stream()`](#ls-packages-stream).

##### Example

```javascript
await access.lsPackages('zkat', {
  token: 'myregistrytoken'
})
// Lists all packages `@zkat` has access to on the registry, and the
// corresponding permissions.
```

#### <a name="ls-packages-stream"></a> `> access.lsPackages.stream(scope, [team], [opts]) -> Stream`

`entity` must be either a valid org or user name, or a fully-qualified team name in the `scope:team` format, with or without the `@` prefix.

Streams out packages a user, org, or team has access to, with corresponding permissions, with each stream entry being formatted like `[packageName, permissions]`. Packages that the access token does not have access to won't be listed.

In order to disambiguate between users and orgs, two requests may end up being made when listing orgs or users.

The returned stream is a valid `asyncIterator`.

##### Example

```javascript
for await (let [pkg, perm] of access.lsPackages.stream('zkat')) {
  console.log('zkat has', perm, 'access to', pkg)
}
// zkat has read-write access to eggplant
// zkat has read-only access to @npmcorp/secret
```

#### <a name="ls-collaborators"></a> `> access.lsCollaborators(spec, [user], [opts]) -> Promise`

`spec` must be an [`npm-package-arg`](https://npm.im/npm-package-arg)-compatible registry spec. `user` must be a valid user name, with or without the `@` prefix.

Lists out access privileges for a certain package.
Will only show permissions for packages to which you have at least read access. If `user` is passed in, the list is filtered only to teams _that_ user happens to belong to. For a streamed version of these results, see [`access.lsCollaborators.stream()`](#ls-collaborators-stream). ##### Example ```javascript await access.lsCollaborators('@npm/foo', 'zkat', { token: 'myregistrytoken' }) // Lists all teams with access to @npm/foo that @zkat belongs to. ``` #### <a name="ls-collaborators-stream"></a> `> access.lsCollaborators.stream(spec, [user], [opts]) -> Stream` `spec` must be an [`npm-package-arg`](https://npm.im/npm-package-arg)-compatible registry spec. `user` must be a valid user name, with or without the `@` prefix. Stream out access privileges for a certain package, with each entry in `[user, permissions]` format. Will only show permissions for packages to which you have at least read access. If `user` is passed in, the list is filtered only to teams _that_ user happens to belong to. The returned stream is a valid `asyncIterator`. ##### Example ```javascript for await (let [usr, perm] of access.lsCollaborators.stream('npm')) { console.log(usr, 'has', perm, 'access to npm') } // zkat has read-write access to npm // iarna has read-write access to npm ``` # readable-stream ***Node-core streams for userland*** [![NPM](https://nodei.co/npm/readable-stream.png?downloads=true&downloadRank=true)](https://nodei.co/npm/readable-stream/) [![NPM](https://nodei.co/npm-dl/readable-stream.png&months=6&height=3)](https://nodei.co/npm/readable-stream/) This package is a mirror of the Streams2 and Streams3 implementations in Node-core. If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use **readable-stream** *only* and avoid the *"stream"* module in Node-core. **readable-stream** comes in two major versions, v1.0.x and v1.1.x. The former tracks the Streams2 implementation in Node 0.10, including bug-fixes and minor improvements as they are added. The latter tracks Streams3 as it develops in Node 0.11; we will likely see a v1.2.x branch for Node 0.12. **readable-stream** uses proper patch-level versioning so if you pin to `"~1.0.0"` you’ll get the latest Node 0.10 Streams2 implementation, including any fixes and minor non-breaking improvements. The patch-level versions of 1.0.x and 1.1.x should mirror the patch-level versions of Node-core releases. You should prefer the **1.0.x** releases for now and when you’re ready to start using Streams3, pin to `"~1.1.0"` # cidr-regex [![](https://img.shields.io/npm/v/cidr-regex.svg?style=flat)](https://www.npmjs.org/package/cidr-regex) [![](https://img.shields.io/npm/dm/cidr-regex.svg)](https://www.npmjs.org/package/cidr-regex) [![](https://api.travis-ci.org/silverwind/cidr-regex.svg?style=flat)](https://travis-ci.org/silverwind/cidr-regex) > Regular expression for matching IP addresses in CIDR notation ## Install ```sh $ npm install --save cidr-regex ``` ## Usage ```js const cidrRegex = require('cidr-regex'); // Contains a CIDR IP address? cidrRegex().test('foo 192.168.0.1/24'); //=> true // Is a CIDR IP address? cidrRegex({exact: true}).test('foo 192.168.0.1/24'); //=> false cidrRegex.v6({exact: true}).test('1:2:3:4:5:6:7:8/64'); //=> true 'foo 192.168.0.1/24 bar 1:2:3:4:5:6:7:8/64 baz'.match(cidrRegex()); //=> ['192.168.0.1/24', '1:2:3:4:5:6:7:8/64'] ``` ## API ### cidrRegex([options]) Returns a regex for matching both IPv4 and IPv6 CIDR IP addresses. 
### cidrRegex.v4([options]) Returns a regex for matching IPv4 CIDR IP addresses. ### cidrRegex.v6([options]) Returns a regex for matching IPv6 CIDR IP addresses. #### options.exact Type: `boolean`<br> Default: `false` *(Matches any CIDR IP address in a string)* Only match an exact string. Useful with `RegExp#test()` to check if a string is a CIDR IP address. ## Related - [is-cidr](https://github.com/silverwind/is-cidr) - Check if a string is an IP address in CIDR notation - [is-ip](https://github.com/sindresorhus/is-ip) - Check if a string is an IP address - [ip-regex](https://github.com/sindresorhus/ip-regex) - Regular expression for matching IP addresses ## License © [silverwind](https://github.com/silverwind), distributed under BSD licence Based on previous work by [Felipe Apostol](https://github.com/flipjs) # lru cache A cache object that deletes the least-recently-used items. [![Build Status](https://travis-ci.org/isaacs/node-lru-cache.svg?branch=master)](https://travis-ci.org/isaacs/node-lru-cache) [![Coverage Status](https://coveralls.io/repos/isaacs/node-lru-cache/badge.svg?service=github)](https://coveralls.io/github/isaacs/node-lru-cache) ## Installation: ```javascript npm install lru-cache --save ``` ## Usage: ```javascript var LRU = require("lru-cache") , options = { max: 500 , length: function (n, key) { return n * 2 + key.length } , dispose: function (key, n) { n.close() } , maxAge: 1000 * 60 * 60 } , cache = new LRU(options) , otherCache = new LRU(50) // sets just the max size cache.set("key", "value") cache.get("key") // "value" // non-string keys ARE fully supported // but note that it must be THE SAME object, not // just a JSON-equivalent object. var someObject = { a: 1 } cache.set(someObject, 'a value') // Object keys are not toString()-ed cache.set('[object Object]', 'a different value') assert.equal(cache.get(someObject), 'a value') // A similar object with same keys/values won't work, // because it's a different object identity assert.equal(cache.get({ a: 1 }), undefined) cache.reset() // empty the cache ``` If you put more stuff in it, then items will fall out. If you try to put an oversized thing in it, then it'll fall out right away. ## Options * `max` The maximum size of the cache, checked by applying the length function to all values in the cache. Not setting this is kind of silly, since that's the whole purpose of this lib, but it defaults to `Infinity`. Setting it to a non-number or negative number will throw a `TypeError`. Setting it to 0 makes it be `Infinity`. * `maxAge` Maximum age in ms. Items are not pro-actively pruned out as they age, but if you try to get an item that is too old, it'll drop it and return undefined instead of giving it to you. Setting this to a negative value will make everything seem old! Setting it to a non-number will throw a `TypeError`. * `length` Function that is used to calculate the length of stored items. If you're storing strings or buffers, then you probably want to do something like `function(n, key){return n.length}`. The default is `function(){return 1}`, which is fine if you want to store `max` like-sized things. The item is passed as the first argument, and the key is passed as the second argumnet. * `dispose` Function that is called on items when they are dropped from the cache. This can be handy if you want to close file descriptors or do other cleanup tasks when items are no longer accessible. Called with `key, value`. 
It's called *before* actually removing the item from the internal cache, so if you want to immediately put it back in, you'll have to do that in a `nextTick` or `setTimeout` callback or it won't do anything. * `stale` By default, if you set a `maxAge`, it'll only actually pull stale items out of the cache when you `get(key)`. (That is, it's not pre-emptively doing a `setTimeout` or anything.) If you set `stale:true`, it'll return the stale value before deleting it. If you don't set this, then it'll return `undefined` when you try to get a stale entry, as if it had already been deleted. * `noDisposeOnSet` By default, if you set a `dispose()` method, then it'll be called whenever a `set()` operation overwrites an existing key. If you set this option, `dispose()` will only be called when a key falls out of the cache, not when it is overwritten. * `updateAgeOnGet` When using time-expiring entries with `maxAge`, setting this to `true` will make each item's effective time update to the current time whenever it is retrieved from cache, causing it to not expire. (It can still fall out of cache based on recency of use, of course.) ## API * `set(key, value, maxAge)` * `get(key) => value` Both of these will update the "recently used"-ness of the key. They do what you think. `maxAge` is optional and overrides the cache `maxAge` option if provided. If the key is not found, `get()` will return `undefined`. The key and val can be any value. * `peek(key)` Returns the key value (or `undefined` if not found) without updating the "recently used"-ness of the key. (If you find yourself using this a lot, you *might* be using the wrong sort of data structure, but there are some use cases where it's handy.) * `del(key)` Deletes a key out of the cache. * `reset()` Clear the cache entirely, throwing away all values. * `has(key)` Check if a key is in the cache, without updating the recent-ness or deleting it for being stale. * `forEach(function(value,key,cache), [thisp])` Just like `Array.prototype.forEach`. Iterates over all the keys in the cache, in order of recent-ness. (Ie, more recently used items are iterated over first.) * `rforEach(function(value,key,cache), [thisp])` The same as `cache.forEach(...)` but items are iterated over in reverse order. (ie, less recently used items are iterated over first.) * `keys()` Return an array of the keys in the cache. * `values()` Return an array of the values in the cache. * `length` Return total length of objects in cache taking into account `length` options function. * `itemCount` Return total quantity of objects currently in cache. Note, that `stale` (see options) items are returned as part of this item count. * `dump()` Return an array of the cache entries ready for serialization and usage with 'destinationCache.load(arr)`. * `load(cacheEntriesArray)` Loads another cache entries array, obtained with `sourceCache.dump()`, into the cache. 
The destination cache is reset before loading new entries * `prune()` Manually iterates over the entire cache proactively pruning old entries # node-promise-retry [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Dependency status][david-dm-image]][david-dm-url] [![Dev Dependency status][david-dm-dev-image]][david-dm-dev-url] [npm-url]:https://npmjs.org/package/promise-retry [downloads-image]:http://img.shields.io/npm/dm/promise-retry.svg [npm-image]:http://img.shields.io/npm/v/promise-retry.svg [travis-url]:https://travis-ci.org/IndigoUnited/node-promise-retry [travis-image]:http://img.shields.io/travis/IndigoUnited/node-promise-retry/master.svg [david-dm-url]:https://david-dm.org/IndigoUnited/node-promise-retry [david-dm-image]:https://img.shields.io/david/IndigoUnited/node-promise-retry.svg [david-dm-dev-url]:https://david-dm.org/IndigoUnited/node-promise-retry#info=devDependencies [david-dm-dev-image]:https://img.shields.io/david/dev/IndigoUnited/node-promise-retry.svg Retries a function that returns a promise, leveraging the power of the [retry](https://github.com/tim-kos/node-retry) module to the promises world. There's already some modules that are able to retry functions that return promises but they were rather difficult to use or do not offer an easy way to do conditional retries. ## Installation `$ npm install promise-retry` ## Usage ### promiseRetry(fn, [options]) Calls `fn` until the returned promise ends up fulfilled or rejected with an error different than a `retry` error. The `options` argument is an object which maps to the [retry](https://github.com/tim-kos/node-retry) module options: - `retries`: The maximum amount of times to retry the operation. Default is `10`. - `factor`: The exponential factor to use. Default is `2`. - `minTimeout`: The number of milliseconds before starting the first retry. Default is `1000`. - `maxTimeout`: The maximum number of milliseconds between two retries. Default is `Infinity`. - `randomize`: Randomizes the timeouts by multiplying with a factor between `1` to `2`. Default is `false`. The `fn` function will receive a `retry` function as its first argument that should be called with an error whenever you want to retry `fn`. The `retry` function will always throw an error. If there's retries left, it will throw a special `retry` error that will be handled internally to call `fn` again. If there's no retries left, it will throw the actual error passed to it. If you prefer, you can pass the options first using the alternative function signature `promiseRetry([options], fn)`. ## Example ```js var promiseRetry = require('promise-retry'); // Simple example promiseRetry(function (retry, number) { console.log('attempt number', number); return doSomething() .catch(retry); }) .then(function (value) { // .. }, function (err) { // .. }); // Conditional example promiseRetry(function (retry, number) { console.log('attempt number', number); return doSomething() .catch(function (err) { if (err.code === 'ETIMEDOUT') { retry(err); } throw err; }); }) .then(function (value) { // .. }, function (err) { // .. }); ``` ## Tests `$ npm test` ## License Released under the [MIT License](http://www.opensource.org/licenses/mit-license.php). 
[![Build Status][travis-svg]][travis-url] [![dependency status][deps-svg]][deps-url] [![dev dependency status][dev-deps-svg]][dev-deps-url] # extend() for Node.js <sup>[![Version Badge][npm-version-png]][npm-url]</sup> `node-extend` is a port of the classic extend() method from jQuery. It behaves as you expect. It is simple, tried and true. Notes: * Since Node.js >= 4, [`Object.assign`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) now offers the same functionality natively (but without the "deep copy" option). See [ECMAScript 2015 (ES6) in Node.js](https://nodejs.org/en/docs/es6). * Some native implementations of `Object.assign` in both Node.js and many browsers (since NPM modules are for the browser too) may not be fully spec-compliant. Check [`object.assign`](https://www.npmjs.com/package/object.assign) module for a compliant candidate. ## Installation This package is available on [npm][npm-url] as: `extend` ``` sh npm install extend ``` ## Usage **Syntax:** extend **(** [`deep`], `target`, `object1`, [`objectN`] **)** *Extend one object with one or more others, returning the modified object.* **Example:** ``` js var extend = require('extend'); extend(targetObject, object1, object2); ``` Keep in mind that the target object will be modified, and will be returned from extend(). If a boolean true is specified as the first argument, extend performs a deep copy, recursively copying any objects it finds. Otherwise, the copy will share structure with the original object(s). Undefined properties are not copied. However, properties inherited from the object's prototype will be copied over. Warning: passing `false` as the first argument is not supported. ### Arguments * `deep` *Boolean* (optional) If set, the merge becomes recursive (i.e. deep copy). * `target` *Object* The object to extend. * `object1` *Object* The object that will be merged into the first. * `objectN` *Object* (Optional) More objects to merge into the first. ## License `node-extend` is licensed under the [MIT License][mit-license-url]. ## Acknowledgements All credit to the jQuery authors for perfecting this amazing utility. Ported to Node.js by [Stefan Thomas][github-justmoon] with contributions by [Jonathan Buchanan][github-insin] and [Jordan Harband][github-ljharb]. 
[travis-svg]: https://travis-ci.org/justmoon/node-extend.svg [travis-url]: https://travis-ci.org/justmoon/node-extend [npm-url]: https://npmjs.org/package/extend [mit-license-url]: http://opensource.org/licenses/MIT [github-justmoon]: https://github.com/justmoon [github-insin]: https://github.com/insin [github-ljharb]: https://github.com/ljharb [npm-version-png]: http://versionbadg.es/justmoon/node-extend.svg [deps-svg]: https://david-dm.org/justmoon/node-extend.svg [deps-url]: https://david-dm.org/justmoon/node-extend [dev-deps-svg]: https://david-dm.org/justmoon/node-extend/dev-status.svg [dev-deps-url]: https://david-dm.org/justmoon/node-extend#info=devDependencies # libnpmpublish [![npm version](https://img.shields.io/npm/v/libnpmpublish.svg)](https://npm.im/libnpmpublish) [![license](https://img.shields.io/npm/l/libnpmpublish.svg)](https://npm.im/libnpmpublish) [![Travis](https://img.shields.io/travis/npm/libnpmpublish.svg)](https://travis-ci.org/npm/libnpmpublish) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/zkat/libnpmpublish?svg=true)](https://ci.appveyor.com/project/zkat/libnpmpublish) [![Coverage Status](https://coveralls.io/repos/github/npm/libnpmpublish/badge.svg?branch=latest)](https://coveralls.io/github/npm/libnpmpublish?branch=latest) [`libnpmpublish`](https://github.com/npm/libnpmpublish) is a Node.js library for programmatically publishing and unpublishing npm packages. It does not take care of packing tarballs from source code, but once you have a tarball, it can take care of putting it up on a nice registry for you. ## Example ```js const { publish, unpublish } = require('libnpmpublish') ``` ## Install `$ npm install libnpmpublish` ## Table of Contents * [Example](#example) * [Install](#install) * [API](#api) * [publish/unpublish opts](#opts) * [`publish()`](#publish) * [`unpublish()`](#unpublish) ### API #### <a name="opts"></a> `opts` for `libnpmpublish` commands `libnpmpublish` uses [`npm-registry-fetch`](https://npm.im/npm-registry-fetch). Most options are passed through directly to that library, so please refer to [its own `opts` documentation](https://www.npmjs.com/package/npm-registry-fetch#fetch-options) for options that can be passed in. A couple of options of note for those in a hurry: * `opts.token` - can be passed in and will be used as the authentication token for the registry. For other ways to pass in auth details, see the n-r-f docs. * `opts.Promise` - If you pass this in, the Promises returned by `libnpmpublish` commands will use this Promise class instead. For example: `{Promise: require('bluebird')}` #### <a name="publish"></a> `> libpub.publish(pkgJson, tarData, [opts]) -> Promise` Publishes `tarData` to the appropriate configured registry. `pkgJson` should be the parsed `package.json` for the package that is being published. `tarData` can be a Buffer, a base64-encoded string, or a binary stream of data. Note that publishing itself can't be streamed, so the entire stream will be consumed into RAM before publishing (and are thus limited in how big they can be). Since `libnpmpublish` does not generate tarballs itself, one way to build your own tarball for publishing is to do `npm pack` in the directory you wish to pack. You can then `fs.createReadStream('my-proj-1.0.0.tgz')` and pass that to `libnpmpublish`, along with `require('./package.json')`. 
`publish()` does its best to emulate legacy publish logic in the standard npm client, and so should generally be compatible with any registry the npm CLI has been able to publish to in the past. If `opts.npmVersion` is passed in, it will be used as the `_npmVersion` field in the outgoing packument. It's recommended you add your own user agent string in there! If `opts.algorithms` is passed in, it should be an array of hashing algorithms to generate `integrity` hashes for. The default is `['sha512']`, which means you end up with `dist.integrity = 'sha512-deadbeefbadc0ffee'`. Any algorithm supported by your current node version is allowed -- npm clients that do not support those algorithms will simply ignore the unsupported hashes. If `opts.access` is passed in, it must be one of `public` or `restricted`. Unscoped packages cannot be `restricted`, and the registry may agree or disagree with whether you're allowed to publish a restricted package. ##### Example ```javascript const pkg = require('./dist/package.json') const tarball = fs.createReadStream('./dist/pkg-1.0.1.tgz') await libpub.publish(pkg, tarball, { npmVersion: 'my-pub-script@1.0.2', token: 'my-auth-token-here' }) // Package has been published to the npm registry. ``` #### <a name="unpublish"></a> `> libpub.unpublish(spec, [opts]) -> Promise` Unpublishes `spec` from the appropriate registry. The registry in question may have its own limitations on unpublishing. `spec` should be either a string, or a valid [`npm-package-arg`](https://npm.im/npm-package-arg) parsed spec object. For legacy compatibility reasons, only `tag` and `version` specs will work as expected. `range` specs will fail silently in most cases. ##### Example ```javascript await libpub.unpublish('lodash', { token: 'i-am-the-worst'}) // // `lodash` has now been unpublished, along with all its versions, and the world // devolves into utter chaos. // // That, or we all go home to our friends and/or family and have a nice time // doing nothing having to do with programming or JavaScript and realize our // lives are just so much happier now, and we just can't understand why we ever // got so into this JavaScript thing but damn did it pay well. I guess you'll // settle for gardening or something. ``` # HAR Schema [![version][npm-version]][npm-url] [![License][npm-license]][license-url] > JSON Schema for HTTP Archive ([HAR][spec]). [![Build Status][travis-image]][travis-url] [![Downloads][npm-downloads]][npm-url] [![Code Climate][codeclimate-quality]][codeclimate-url] [![Coverage Status][codeclimate-coverage]][codeclimate-url] [![Dependency Status][dependencyci-image]][dependencyci-url] [![Dependencies][david-image]][david-url] ## Install ```bash npm install --only=production --save har-schema ``` ## Usage Compatible with any [JSON Schema validation tool][validator]. 
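For a concrete starting point, here is a minimal sketch (not taken from the har-schema docs) of validating a HAR object with [ajv](https://npm.im/ajv). It assumes the package exports its individual JSON schemas as an object keyed by name, that they cross-reference one another by `$id` (with `har.json#` as the assumed top-level id), and that your ajv version supports the JSON Schema draft they use:

```js
// Hedged sketch: validate a HAR log with ajv. Assumes `require('har-schema')`
// returns an object of named schemas (har, log, entry, request, ...) that
// reference each other by their $id values.
const Ajv = require('ajv')
const schemas = require('har-schema')

// Register every schema so cross-references between them can be resolved.
const ajv = new Ajv({ allErrors: true })
Object.keys(schemas).forEach(name => ajv.addSchema(schemas[name]))

// 'har.json#' is the assumed $id of the top-level schema.
const validate = ajv.getSchema('har.json#')

const ok = validate({
  log: {
    version: '1.2',
    creator: { name: 'my-tool', version: '1.0.0' },
    entries: []
  }
})
console.log(ok ? 'valid HAR' : validate.errors)
```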
---- > :copyright: [ahmadnassri.com](https://www.ahmadnassri.com/) &nbsp;&middot;&nbsp; > License: [ISC][license-url] &nbsp;&middot;&nbsp; > Github: [@ahmadnassri](https://github.com/ahmadnassri) &nbsp;&middot;&nbsp; > Twitter: [@ahmadnassri](https://twitter.com/ahmadnassri) [license-url]: http://choosealicense.com/licenses/isc/ [travis-url]: https://travis-ci.org/ahmadnassri/har-schema [travis-image]: https://img.shields.io/travis/ahmadnassri/har-schema.svg?style=flat-square [npm-url]: https://www.npmjs.com/package/har-schema [npm-license]: https://img.shields.io/npm/l/har-schema.svg?style=flat-square [npm-version]: https://img.shields.io/npm/v/har-schema.svg?style=flat-square [npm-downloads]: https://img.shields.io/npm/dm/har-schema.svg?style=flat-square [codeclimate-url]: https://codeclimate.com/github/ahmadnassri/har-schema [codeclimate-quality]: https://img.shields.io/codeclimate/github/ahmadnassri/har-schema.svg?style=flat-square [codeclimate-coverage]: https://img.shields.io/codeclimate/coverage/github/ahmadnassri/har-schema.svg?style=flat-square [david-url]: https://david-dm.org/ahmadnassri/har-schema [david-image]: https://img.shields.io/david/ahmadnassri/har-schema.svg?style=flat-square [dependencyci-url]: https://dependencyci.com/github/ahmadnassri/har-schema [dependencyci-image]: https://dependencyci.com/github/ahmadnassri/har-schema/badge?style=flat-square [spec]: https://github.com/ahmadnassri/har-spec/blob/master/versions/1.2.md [validator]: https://github.com/ahmadnassri/har-validator # Encoding **encoding** is a simple wrapper around [node-iconv](https://github.com/bnoordhuis/node-iconv) and [iconv-lite](https://github.com/ashtuchkin/iconv-lite/) to convert strings from one encoding to another. If node-iconv is not available for some reason, iconv-lite will be used instead of it as a fallback. [![Build Status](https://secure.travis-ci.org/andris9/encoding.svg)](http://travis-ci.org/andris9/Nodemailer) [![npm version](https://badge.fury.io/js/encoding.svg)](http://badge.fury.io/js/encoding) ## Install Install through npm npm install encoding ## Usage Require the module var encoding = require("encoding"); Convert with encoding.convert() var resultBuffer = encoding.convert(text, toCharset, fromCharset); Where * **text** is either a Buffer or a String to be converted * **toCharset** is the characterset to convert the string * **fromCharset** (*optional*, defaults to UTF-8) is the source charset Output of the conversion is always a Buffer object. Example var result = encoding.convert("ÕÄÖÜ", "Latin_1"); console.log(result); //<Buffer d5 c4 d6 dc> ## iconv support By default only iconv-lite is bundled. If you need node-iconv support, you need to add it as an additional dependency for your project: ..., "dependencies":{ "encoding": "*", "iconv": "*" }, ... ## License **MIT** wide-align ---------- A wide-character aware text alignment function for use in terminals / on the console. ### Usage ``` var align = require('wide-align') // Note that if you view this on a unicode console, all of the slashes are // aligned. This is because on a console, all narrow characters are // an en wide and all wide characters are an em. In browsers, this isn't // held to and wide characters like "古" can be less than two narrow // characters even with a fixed width font. 
console.log(align.center('abc', 10)) // ' abc '
console.log(align.center('古古古', 10)) // ' 古古古 '
console.log(align.left('abc', 10)) // 'abc '
console.log(align.left('古古古', 10)) // '古古古 '
console.log(align.right('abc', 10)) // ' abc'
console.log(align.right('古古古', 10)) // ' 古古古'
```

### Functions

#### `align.center(str, length)` → `str`

Returns *str* with spaces added to both sides such that it is *length* chars long and centered in the spaces.

#### `align.left(str, length)` → `str`

Returns *str* with spaces to the right such that it is *length* chars long.

#### `align.right(str, length)` → `str`

Returns *str* with spaces to the left such that it is *length* chars long.

### Origins

These functions were originally taken from [cliui](https://npmjs.com/package/cliui). Changes include switching to the MUCH faster pad generation function from [lodash](https://npmjs.com/package/lodash), making center alignment pad both sides and adding left alignment.

# ansistyles [![build status](https://secure.travis-ci.org/thlorenz/ansistyles.png)](http://next.travis-ci.org/thlorenz/ansistyles)

Functions that surround a string with ansistyle codes so it prints in style.

In case you need colors, like `red`, have a look at [ansicolors](https://github.com/thlorenz/ansicolors).

## Installation

    npm install ansistyles

## Usage

```js
var styles = require('ansistyles');

console.log(styles.bright('hello world')); // prints hello world in 'bright' white
console.log(styles.underline('hello world')); // prints hello world underlined
console.log(styles.inverse('hello world')); // prints hello world black on white
```

## Combining with ansicolors

Get the ansicolors module:

    npm install ansicolors

```js
var styles = require('ansistyles')
  , colors = require('ansicolors');

console.log(
  // prints hello world underlined in blue on a green background
  colors.bgGreen(colors.blue(styles.underline('hello world')))
);
```

## Tests

Look at the [tests](https://github.com/thlorenz/ansistyles/blob/master/test/ansistyles.js) to see more examples and/or run them via:

    npm explore ansistyles && npm test

## More Styles

As you can see from [here](https://github.com/thlorenz/ansistyles/blob/master/ansistyles.js#L4-L15), more styles are available, but they didn't have any effect on the terminals that I tested on Mac Lion and Ubuntu Linux. I included them for completeness, but didn't show them in the examples because they seem to have no effect.

### reset

A style reset function is also included, please note however that this is not nestable. Therefore the example below underlines `hell` only, but not `world`.

```js
console.log(styles.underline('hell' + styles.reset('o') + ' world'));
```

It is essentially the same as:

```js
console.log(styles.underline('hell') + styles.reset('') + 'o world');
```

## Alternatives

**ansistyles** tries to meet simple use cases with a very simple API. However, if you need a more powerful ansi formatting tool, I'd suggest looking at the [features](https://github.com/TooTallNate/ansi.js#features) of the [ansi module](https://github.com/TooTallNate/ansi.js).
# iferr

Higher-order functions for easier error handling. `if (err) return cb(err);` be gone!
## Install ```bash npm install iferr ``` ## Use ### JavaScript example ```js var iferr = require('iferr'); function get_friends_count(id, cb) { User.load_user(id, iferr(cb, function(user) { user.load_friends(iferr(cb, function(friends) { cb(null, friends.length); })); })); } ``` ### CoffeeScript example ```coffee iferr = require 'iferr' get_friends_count = (id, cb) -> User.load_user id, iferr cb, (user) -> user.load_friends iferr cb, (friends) -> cb null, friends.length ``` (TODO: document tiferr, throwerr and printerr) ## License MIT # safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. 
```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. 
```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? 
Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. 
We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. 
Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) node-fetch-npm ============== [![npm version][npm-image]][npm-url] [![build status][travis-image]][travis-url] [![coverage status][codecov-image]][codecov-url] A light-weight module that brings `window.fetch` to Node.js `node-fetch-npm` is a fork of [`node-fetch`](https://npm.im/node-fetch) used in npm itself, through [`make-fetch-happen`](https://npm.im/make-fetch-happen). It has more regular releases and accepts some patches that would not fit with `node-fetch`'s own design goals (such as picking a specific cookie library, removing `babel` dependency altogether, etc). This library is *not a replacement* for `node-fetch`, nor does it intend to supplant it. It's purely a fork maintained for the sake of easier patching of specific needs that it wouldn't be fair to shove down the main project's throat. This project will still send patches for shared bugs over and hopefully help improve its "parent". ## Motivation Instead of implementing `XMLHttpRequest` in Node.js to run browser-specific [Fetch polyfill](https://github.com/github/fetch), why not go from native `http` to `fetch` API directly? 
Hence `node-fetch`, minimal code for a `window.fetch` compatible API on Node.js runtime. See Matt Andrews' [isomorphic-fetch](https://github.com/matthew-andrews/isomorphic-fetch) for isomorphic usage (exports `node-fetch` for server-side, `whatwg-fetch` for client-side). ## Features - Stay consistent with `window.fetch` API. - Make conscious trade-off when following [whatwg fetch spec][whatwg-fetch] and [stream spec](https://streams.spec.whatwg.org/) implementation details, document known difference. - Use native promise, but allow substituting it with [insert your favorite promise library]. - Use native stream for body, on both request and response. - Decode content encoding (gzip/deflate) properly, and convert string output (such as `res.text()` and `res.json()`) to UTF-8 automatically. - Useful extensions such as timeout, redirect limit, response size limit, [explicit errors][] for troubleshooting. ## Difference from client-side fetch - See [Known Differences](https://github.com/npm/node-fetch-npm/blob/master/LIMITS.md) for details. - If you happen to use a missing feature that `window.fetch` offers, feel free to open an issue. - Pull requests are welcomed too! ## Install ```sh $ npm install node-fetch-npm --save ``` ## Usage ```javascript import fetch from 'node-fetch'; // or // const fetch = require('node-fetch'); // if you are using your own Promise library, set it through fetch.Promise. Eg. // import Bluebird from 'bluebird'; // fetch.Promise = Bluebird; // plain text or html fetch('https://github.com/') .then(res => res.text()) .then(body => console.log(body)); // json fetch('https://api.github.com/users/github') .then(res => res.json()) .then(json => console.log(json)); // catching network error // 3xx-5xx responses are NOT network errors, and should be handled in then() // you only need one catch() at the end of your promise chain fetch('http://domain.invalid/') .catch(err => console.error(err)); // stream // the node.js way is to use stream when possible fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png') .then(res => { const dest = fs.createWriteStream('./octocat.png'); res.body.pipe(dest); }); // buffer // if you prefer to cache binary data in full, use buffer() // note that buffer() is a node-fetch only API import fileType from 'file-type'; fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png') .then(res => res.buffer()) .then(buffer => fileType(buffer)) .then(type => { /* ... 
*/ }); // meta fetch('https://github.com/') .then(res => { console.log(res.ok); console.log(res.status); console.log(res.statusText); console.log(res.headers.raw()); console.log(res.headers.get('content-type')); }); // post fetch('http://httpbin.org/post', { method: 'POST', body: 'a=1' }) .then(res => res.json()) .then(json => console.log(json)); // post with stream from file import { createReadStream } from 'fs'; const stream = createReadStream('input.txt'); fetch('http://httpbin.org/post', { method: 'POST', body: stream }) .then(res => res.json()) .then(json => console.log(json)); // post with JSON var body = { a: 1 }; fetch('http://httpbin.org/post', { method: 'POST', body: JSON.stringify(body), headers: { 'Content-Type': 'application/json' }, }) .then(res => res.json()) .then(json => console.log(json)); // post with form-data (detect multipart) import FormData from 'form-data'; const form = new FormData(); form.append('a', 1); fetch('http://httpbin.org/post', { method: 'POST', body: form }) .then(res => res.json()) .then(json => console.log(json)); // post with form-data (custom headers) // note that getHeaders() is non-standard API import FormData from 'form-data'; const form = new FormData(); form.append('a', 1); fetch('http://httpbin.org/post', { method: 'POST', body: form, headers: form.getHeaders() }) .then(res => res.json()) .then(json => console.log(json)); // node 7+ with async function (async function () { const res = await fetch('https://api.github.com/users/github'); const json = await res.json(); console.log(json); })(); ``` See [test cases](https://github.com/npm/node-fetch-npm/blob/master/test/test.js) for more examples. ## API ### fetch(url[, options]) - `url` A string representing the URL for fetching - `options` [Options](#fetch-options) for the HTTP(S) request - Returns: <code>Promise&lt;[Response](#class-response)&gt;</code> Perform an HTTP(S) fetch. `url` should be an absolute url, such as `http://example.com/`. A path-relative URL (`/file/under/root`) or protocol-relative URL (`//can-be-http-or-https.com/`) will result in a rejected promise. <a id="fetch-options"></a> #### Options The default values are shown after each option key. ```js { // These properties are part of the Fetch Standard method: 'GET', headers: {}, // request headers. format is the identical to that accepted by the Headers constructor (see below) body: null, // request body. can be null, a string, a Buffer, a Blob, or a Node.js Readable stream redirect: 'follow', // set to `manual` to extract redirect headers, `error` to reject redirect // The following properties are node-fetch-npm extensions follow: 20, // maximum redirect count. 0 to not follow redirect timeout: 0, // req/res timeout in ms, it resets on redirect. 0 to disable (OS limit applies) compress: true, // support gzip/deflate content encoding. false to disable size: 0, // maximum response body size in bytes. 0 to disable agent: null // http(s).Agent instance, allows custom proxy, certificate etc. 
} ``` ##### Default Headers If no values are set, the following request headers will be sent automatically: Header | Value ----------------- | -------------------------------------------------------- `Accept-Encoding` | `gzip,deflate` _(when `options.compress === true`)_ `Accept` | `*/*` `Connection` | `close` _(when no `options.agent` is present)_ `Content-Length` | _(automatically calculated, if possible)_ `User-Agent` | `node-fetch-npm/1.0 (+https://github.com/npm/node-fetch-npm)` <a id="class-request"></a> ### Class: Request An HTTP(S) request containing information about URL, method, headers, and the body. This class implements the [Body](#iface-body) interface. Due to the nature of Node.js, the following properties are not implemented at this moment: - `type` - `destination` - `referrer` - `referrerPolicy` - `mode` - `credentials` - `cache` - `integrity` - `keepalive` The following node-fetch-npm extension properties are provided: - `follow` - `compress` - `counter` - `agent` See [options](#fetch-options) for exact meaning of these extensions. #### new Request(input[, options]) <small>*(spec-compliant)*</small> - `input` A string representing a URL, or another `Request` (which will be cloned) - `options` [Options][#fetch-options] for the HTTP(S) request Constructs a new `Request` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Request/Request). In most cases, directly `fetch(url, options)` is simpler than creating a `Request` object. <a id="class-response"></a> ### Class: Response An HTTP(S) response. This class implements the [Body](#iface-body) interface. The following properties are not implemented in node-fetch-npm at this moment: - `Response.error()` - `Response.redirect()` - `type` - `redirected` - `trailer` #### new Response([body[, options]]) <small>*(spec-compliant)*</small> - `body` A string or [Readable stream][node-readable] - `options` A [`ResponseInit`][response-init] options dictionary Constructs a new `Response` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Response/Response). Because Node.js does not implement service workers (for which this class was designed), one rarely has to construct a `Response` directly. <a id="class-headers"></a> ### Class: Headers This class allows manipulating and iterating over a set of HTTP headers. All methods specified in the [Fetch Standard][whatwg-fetch] are implemented. #### new Headers([init]) <small>*(spec-compliant)*</small> - `init` Optional argument to pre-fill the `Headers` object Construct a new `Headers` object. `init` can be either `null`, a `Headers` object, an key-value map object, or any iterable object. ```js // Example adapted from https://fetch.spec.whatwg.org/#example-headers-class const meta = { 'Content-Type': 'text/xml', 'Breaking-Bad': '<3' }; const headers = new Headers(meta); // The above is equivalent to const meta = [ [ 'Content-Type', 'text/xml' ], [ 'Breaking-Bad', '<3' ] ]; const headers = new Headers(meta); // You can in fact use any iterable objects, like a Map or even another Headers const meta = new Map(); meta.set('Content-Type', 'text/xml'); meta.set('Breaking-Bad', '<3'); const headers = new Headers(meta); const copyOfHeaders = new Headers(headers); ``` <a id="iface-body"></a> ### Interface: Body `Body` is an abstract interface with methods that are applicable to both `Request` and `Response` classes. 
The following methods are not yet implemented in node-fetch-npm at this moment: - `formData()` #### body.body <small>*(deviation from spec)*</small> * Node.js [`Readable` stream][node-readable] The data encapsulated in the `Body` object. Note that while the [Fetch Standard][whatwg-fetch] requires the property to always be a WHATWG `ReadableStream`, in node-fetch-npm it is a Node.js [`Readable` stream][node-readable]. #### body.bodyUsed <small>*(spec-compliant)*</small> * `Boolean` A boolean property for if this body has been consumed. Per spec, a consumed body cannot be used again. #### body.arrayBuffer() #### body.blob() #### body.json() #### body.text() <small>*(spec-compliant)*</small> * Returns: <code>Promise</code> Consume the body and return a promise that will resolve to one of these formats. #### body.buffer() <small>*(node-fetch-npm extension)*</small> * Returns: <code>Promise&lt;Buffer&gt;</code> Consume the body and return a promise that will resolve to a Buffer. #### body.textConverted() <small>*(node-fetch-npm extension)*</small> * Returns: <code>Promise&lt;String&gt;</code> Identical to `body.text()`, except instead of always converting to UTF-8, encoding sniffing will be performed and text converted to UTF-8, if possible. <a id="class-fetcherror"></a> ### Class: FetchError <small>*(node-fetch-npm extension)*</small> An operational error in the fetching process. See [ERROR-HANDLING.md][] for more info. ## License MIT ## Acknowledgement Thanks to [github/fetch](https://github.com/github/fetch) for providing a solid implementation reference. [npm-image]: https://img.shields.io/npm/v/node-fetch-npm.svg?style=flat-square [npm-url]: https://www.npmjs.com/package/node-fetch-npm [travis-image]: https://img.shields.io/travis/npm/node-fetch-npm.svg?style=flat-square [travis-url]: https://travis-ci.org/npm/node-fetch-npm [codecov-image]: https://img.shields.io/codecov/c/github/npm/node-fetch-npm.svg?style=flat-square [codecov-url]: https://codecov.io/gh/npm/node-fetch-npm [ERROR-HANDLING.md]: https://github.com/npm/node-fetch-npm/blob/master/ERROR-HANDLING.md [whatwg-fetch]: https://fetch.spec.whatwg.org/ [response-init]: https://fetch.spec.whatwg.org/#responseinit [node-readable]: https://nodejs.org/api/stream.html#stream_readable_streams [mdn-headers]: https://developer.mozilla.org/en-US/docs/Web/API/Headers # fast-json-stable-stringify Deterministic `JSON.stringify()` - a faster version of [@substack](https://github.com/substack)'s json-stable-strigify without [jsonify](https://github.com/substack/jsonify). You can also pass in a custom comparison function. [![Build Status](https://travis-ci.org/epoberezkin/fast-json-stable-stringify.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-json-stable-stringify) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-json-stable-stringify/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-json-stable-stringify?branch=master) # example ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; console.log(stringify(obj)); ``` output: ``` {"a":3,"b":[{"x":4,"y":5,"z":6},7],"c":8} ``` # methods ``` js var stringify = require('fast-json-stable-stringify') ``` ## var str = stringify(obj, opts) Return a deterministic stringified string `str` from the object `obj`. ## options ### cmp If `opts` is given, you can supply an `opts.cmp` to have a custom comparison function for object keys. 
Your function `opts.cmp` is called with these parameters:

``` js
opts.cmp({ key: akey, value: avalue }, { key: bkey, value: bvalue })
```

For example, to sort on the object key names in reverse order you could write:

``` js
var stringify = require('fast-json-stable-stringify');

var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 };
var s = stringify(obj, function (a, b) {
  return a.key < b.key ? 1 : -1;
});
console.log(s);
```

which results in the output string:

```
{"c":8,"b":[{"z":6,"y":5,"x":4},7],"a":3}
```

Or if you wanted to sort on the object values in reverse order, you could write:

```
var stringify = require('fast-json-stable-stringify');

var obj = { d: 6, c: 5, b: [{z:3,y:2,x:1},9], a: 10 };
var s = stringify(obj, function (a, b) {
  return a.value < b.value ? 1 : -1;
});
console.log(s);
```

which outputs:

```
{"d":6,"c":5,"b":[{"z":3,"y":2,"x":1},9],"a":10}
```

### cycles

Pass `true` in `opts.cycles` to stringify circular properties as `__cycle__` - the result will not be a valid JSON string in this case. A `TypeError` will be thrown in case of a circular object without this option.

# install

With [npm](https://npmjs.org) do:

```
npm install fast-json-stable-stringify
```

# benchmark

To run the benchmark (requires Node.js 6+):

```
node benchmark
```

Results:

```
fast-json-stable-stringify x 17,189 ops/sec ±1.43% (83 runs sampled)
json-stable-stringify x 13,634 ops/sec ±1.39% (85 runs sampled)
fast-stable-stringify x 20,212 ops/sec ±1.20% (84 runs sampled)
faster-stable-stringify x 15,549 ops/sec ±1.12% (84 runs sampled)
The fastest is fast-stable-stringify
```

# license

[MIT](https://github.com/epoberezkin/fast-json-stable-stringify/blob/master/LICENSE)

call-limit
----------

Limit the number of simultaneous executions of an async function.

```javascript
const fs = require('fs')
const limit = require('call-limit')
const limitedStat = limit(fs.stat, 5)
```

Or with promise returning functions:

```javascript
const fs = Bluebird.promisifyAll(require('fs'))
const limit = require('call-limit')
const limitedStat = limit.promise(fs.statAsync, 5)
```

### USAGE:

Given that:

```javascript
const limit = require('call-limit')
```

### limit(func, maxRunning) → limitedFunc

The returned function will execute up to maxRunning calls of `func` at once. Beyond that they get queued and called when the previous call completes. `func` must accept a callback as the final argument and must call it when it completes, or `call-limit` won't know to dequeue the next thing to run.

By contrast, callers to `limitedFunc` do NOT have to pass in a callback, but if they do they'll be called when `func` calls its callback.

### limit.promise(func, maxRunning) → limitedFunc

The returned function will execute up to maxRunning calls of `func` at once. Beyond that they get queued and called when the previous call completes. `func` must return a promise. `limitedFunc` will return a promise that resolves with the promise returned from the call to `func`.

### limit.method(class, methodName, maxRunning)

This is sugar for:

```javascript
class.prototype.methodName = limit(class.prototype.methodName, maxRunning)
```

### limit.method(object, methodName, maxRunning)

This is sugar for:

```javascript
object.methodName = limit(object.methodName, maxRunning)
```

For example `limit.method(fs, 'stat', 5)` is the same as `fs.stat = limit(fs.stat, 5)`.
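To make the queueing behaviour of `limit()` concrete, here is a small illustrative sketch (the file names are made up) that stats several paths while keeping at most two `fs.stat` calls in flight at once:

```javascript
const fs = require('fs')
const limit = require('call-limit')

// At most 2 concurrent fs.stat calls; additional calls wait in the queue.
const limitedStat = limit(fs.stat, 2)

const files = ['a.txt', 'b.txt', 'c.txt', 'd.txt'] // made-up paths

files.forEach(function (file) {
  limitedStat(file, function (err, stats) {
    if (err) return console.error(file, err.message)
    console.log(file, stats.size)
  })
})
```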
### limit.promise.method(class, methodName, maxRunning)

This is sugar for:

```javascript
class.prototype.methodName = limit.promise(class.prototype.methodName, maxRunning)
```

### limit.promise.method(object, methodName, maxRunning)

This is sugar for:

```javascript
object.methodName = limit.promise(object.methodName, maxRunning)
```

For example `limit.promise.method(fs, 'statAsync', 5)` is the same as `fs.statAsync = limit.promise(fs.statAsync, 5)`.

# lodash._baseuniq v4.6.0

The internal [lodash](https://lodash.com/) function `baseUniq` exported as a [Node.js](https://nodejs.org/) module.

## Installation

Using npm:

```bash
$ {sudo -H} npm i -g npm
$ npm i --save lodash._baseuniq
```

In Node.js:

```js
var baseUniq = require('lodash._baseuniq');
```

See the [package source](https://github.com/lodash/lodash/blob/4.6.0-npm-packages/lodash._baseuniq) for more details.

# set-blocking

[![Build Status](https://travis-ci.org/yargs/set-blocking.svg)](https://travis-ci.org/yargs/set-blocking) [![NPM version](https://img.shields.io/npm/v/set-blocking.svg)](https://www.npmjs.com/package/set-blocking) [![Coverage Status](https://coveralls.io/repos/yargs/set-blocking/badge.svg?branch=)](https://coveralls.io/r/yargs/set-blocking?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version)

set blocking `stdio` and `stderr` ensuring that terminal output does not truncate.

```js
const setBlocking = require('set-blocking')
setBlocking(true)
console.log(someLargeStringToOutput)
```

## Historical Context/Word of Warning

This was created as a shim to address the bug discussed in [node #6456](https://github.com/nodejs/node/issues/6456). This bug crops up on newer versions of Node.js (`0.12+`), truncating terminal output.

You should be mindful of the side-effects caused by using `set-blocking`:

* if your module sets blocking to `true`, it will affect other modules consuming your library. In [yargs](https://github.com/yargs/yargs/blob/master/yargs.js#L653) we only call `setBlocking(true)` once we already know we are about to call `process.exit(code)`.
* this patch will not apply to subprocesses spawned with `isTTY = true`; this is the [default `spawn()` behavior](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options).

## License

ISC

# json-schema

This is a historical repository for the early development of the JSON Schema specification and implementation. This package is considered "finished": it holds the earlier draft specification and a simple, efficient, lightweight implementation of the original core elements of JSON Schema. This repository does not house the latest specifications nor does it implement the latest versions of JSON Schema. This package seeks to maintain the stability (in behavior and size) of this original implementation for the sake of the numerous packages that rely on it. For the latest JSON Schema specifications and implementations, please visit the [JSON Schema site](https://json-schema.org/) (or the [repository](https://github.com/json-schema-org/json-schema-spec)). Code is licensed under the AFL or BSD 3-Clause license.

# npm-normalize-package-bin

Turn any flavor of allowable package.json bin into a normalized object.
## API

```js
const normalize = require('npm-normalize-package-bin')
const pkg = {name: 'foo', bin: 'bar'}
console.log(normalize(pkg)) // {name:'foo', bin:{foo: 'bar'}}
```

Also strips out weird dots and slashes to prevent accidental and/or malicious bad behavior when the package is installed.

# proto-list

A list of objects, bound by their prototype chain. Used in npm's config stuff.

aproba
======

A ridiculously light-weight function argument validator

```
var validate = require("aproba")

function myfunc(a, b, c) {
  // `a` must be a string, `b` a number, `c` a function
  validate('SNF', arguments) // [a,b,c] is also valid
}

myfunc('test', 23, function () {}) // ok
myfunc(123, 23, function () {}) // type error
myfunc('test', 23) // missing arg error
myfunc('test', 23, function () {}, true) // too many args error
```

Valid types are:

| type | description
| :--: | :----------
| *    | matches any type
| A    | `Array.isArray` OR an `arguments` object
| S    | typeof == string
| N    | typeof == number
| F    | typeof == function
| O    | typeof == object and not type A and not type E
| B    | typeof == boolean
| E    | `instanceof Error` OR `null` **(special: see below)**
| Z    | == `null`

Validation failures throw one of three exception types, distinguished by a `code` property of `EMISSINGARG`, `EINVALIDTYPE` or `ETOOMANYARGS`.

If you pass in an invalid type then it will throw with a code of `EUNKNOWNTYPE`.

If an **error** argument is found and is not null then the remaining arguments are optional. That is, if you say `ESO` then that's like using a non-magical `E` in: `E|ESO|ZSO`.

### But I have optional arguments?!

You can provide more than one signature by separating them with pipes `|`. If any signature matches the arguments then they'll be considered valid.

So for example, say you wanted to write a signature for `fs.createWriteStream`. The docs for it describe it thusly:

```
fs.createWriteStream(path[, options])
```

This would be a signature of `SO|S`. That is, a string and an object, or just a string.

Now, if you read the full `fs` docs, you'll see that actually path can ALSO be a buffer. And options can be a string, that is:

```
path <String> | <Buffer>
options <String> | <Object>
```

To reproduce this you have to fully enumerate all of the possible combinations and that implies a signature of `SO|SS|OO|OS|S|O`. The awkwardness is a feature: It reminds you of the complexity you're adding to your API when you do this sort of thing.

### Browser support

This has no dependencies and should work in browsers, though you'll have noisier stack traces.

### Why this exists

I wanted a very simple argument validator. It needed to do two things:

1. Be more concise and easier to use than assertions
2. Not encourage an infinite bikeshed of DSLs

This is why types are specified by a single character and there's no such thing as an optional argument.

This is not intended to validate user data. This is specifically about asserting the interface of your functions. If you need greater validation, I encourage you to write them by hand or look elsewhere.

# inflight

Add callbacks to requests in flight to avoid async duplication

## USAGE

```javascript
var inflight = require('inflight')

// some request that does some stuff
function req(key, callback) {
  // key is any random string. like a url or filename or whatever.
  //
  // will return either a falsey value, indicating that the
  // request for this key is already in flight, or a new callback
  // which when called will call all callbacks passed to inflight
  // with the same key
  callback = inflight(key, callback)

  // If we got a falsey value back, then there's already a req going
  if (!callback) return

  // this is where you'd fetch the url or whatever
  // callback is also once()-ified, so it can safely be assigned
  // to multiple events etc. First call wins.
  setTimeout(function() {
    callback(null, key)
  }, 100)
}

// only assigns a single setTimeout
// when it dings, all cbs get called
req('foo', cb1)
req('foo', cb2)
req('foo', cb3)
req('foo', cb4)
```

# Punycode.js

[![Build status](https://travis-ci.org/bestiejs/punycode.js.svg?branch=master)](https://travis-ci.org/bestiejs/punycode.js) [![Code coverage status](http://img.shields.io/codecov/c/github/bestiejs/punycode.js.svg)](https://codecov.io/gh/bestiejs/punycode.js) [![Dependency status](https://gemnasium.com/bestiejs/punycode.js.svg)](https://gemnasium.com/bestiejs/punycode.js)

Punycode.js is a robust Punycode converter that fully complies with [RFC 3492](https://tools.ietf.org/html/rfc3492) and [RFC 5891](https://tools.ietf.org/html/rfc5891).

This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm:

* [The C example code from RFC 3492](https://tools.ietf.org/html/rfc3492#appendix-C)
* [`punycode.c` by _Markus W. Scherer_ (IBM)](http://opensource.apple.com/source/ICU/ICU-400.42/icuSources/common/punycode.c)
* [`punycode.c` by _Ben Noordhuis_](https://github.com/bnoordhuis/punycode/blob/master/punycode.c)
* [JavaScript implementation by _some_](http://stackoverflow.com/questions/183485/can-anyone-recommend-a-good-free-javascript-for-punycode-to-unicode-conversion/301287#301287)
* [`punycode.js` by _Ben Noordhuis_](https://github.com/joyent/node/blob/426298c8c1c0d5b5224ac3658c41e7c2a3fe9377/lib/punycode.js) (note: [not fully compliant](https://github.com/joyent/node/issues/2072))

This project was [bundled](https://github.com/joyent/node/blob/master/lib/punycode.js) with Node.js from [v0.6.2+](https://github.com/joyent/node/compare/975f1930b1...61e796decc) until [v7](https://github.com/nodejs/node/pull/7941) (soft-deprecated).

The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see [v1.4.1](https://github.com/bestiejs/punycode.js/releases/tag/v1.4.1).

## Installation

Via [npm](https://www.npmjs.com/):

```bash
npm install punycode --save
```

In [Node.js](https://nodejs.org/):

```js
const punycode = require('punycode');
```

## API

### `punycode.decode(string)`

Converts a Punycode string of ASCII symbols to a string of Unicode symbols.

```js
// decode domain name parts
punycode.decode('maana-pta'); // 'mañana'
punycode.decode('--dqo34k'); // '☃-⌘'
```

### `punycode.encode(string)`

Converts a string of Unicode symbols to a Punycode string of ASCII symbols.

```js
// encode domain name parts
punycode.encode('mañana'); // 'maana-pta'
punycode.encode('☃-⌘'); // '--dqo34k'
```

### `punycode.toUnicode(input)`

Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode.
```js // decode domain names punycode.toUnicode('xn--maana-pta.com'); // → 'mañana.com' punycode.toUnicode('xn----dqo34k.com'); // → '☃-⌘.com' // decode email addresses punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'); // → 'джумла@джpумлатест.bрфa' ``` ### `punycode.toASCII(input)` Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII. ```js // encode domain names punycode.toASCII('mañana.com'); // → 'xn--maana-pta.com' punycode.toASCII('☃-⌘.com'); // → 'xn----dqo34k.com' // encode email addresses punycode.toASCII('джумла@джpумлатест.bрфa'); // → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq' ``` ### `punycode.ucs2` #### `punycode.ucs2.decode(string)` Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16. ```js punycode.ucs2.decode('abc'); // → [0x61, 0x62, 0x63] // surrogate pair for U+1D306 TETRAGRAM FOR CENTRE: punycode.ucs2.decode('\uD834\uDF06'); // → [0x1D306] ``` #### `punycode.ucs2.encode(codePoints)` Creates a string based on an array of numeric code point values. ```js punycode.ucs2.encode([0x61, 0x62, 0x63]); // → 'abc' punycode.ucs2.encode([0x1D306]); // → '\uD834\uDF06' ``` ### `punycode.version` A string representing the current Punycode.js version number. ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License Punycode.js is available under the [MIT](https://mths.be/mit) license. # validate-npm-package-name Give me a string and I'll tell you if it's a valid `npm` package name. This package exports a single synchronous function that takes a `string` as input and returns an object with two properties: - `validForNewPackages` :: `Boolean` - `validForOldPackages` :: `Boolean` ## Contents - [Naming rules](#naming-rules) - [Examples](#examples) + [Valid Names](#valid-names) + [Invalid Names](#invalid-names) - [Legacy Names](#legacy-names) - [Tests](#tests) - [License](#license) ## Naming Rules Below is a list of rules that valid `npm` package name should conform to. - package name length should be greater than zero - all the characters in the package name must be lowercase i.e., no uppercase or mixed case names are allowed - package name *can* consist of hyphens - package name must *not* contain any non-url-safe characters (since name ends up being part of a URL) - package name should not start with `.` or `_` - package name should *not* contain any leading or trailing spaces - package name should *not* contain any of the following characters: `~)('!*` - package name *cannot* be the same as a node.js/io.js core module nor a reserved/blacklisted name. 
For example, the following names are invalid: + http + stream + node_modules + favicon.ico - package name length cannot exceed 214 ## Examples ### Valid Names ```js var validate = require("validate-npm-package-name") validate("some-package") validate("example.com") validate("under_score") validate("123numeric") validate("excited!") validate("@npm/thingy") validate("@jane/foo.js") ``` All of the above names are valid, so you'll get this object back: ```js { validForNewPackages: true, validForOldPackages: true } ``` ### Invalid Names ```js validate(" leading-space:and:weirdchars") ``` That was never a valid package name, so you get this: ```js { validForNewPackages: false, validForOldPackages: false, errors: [ 'name cannot contain leading or trailing spaces', 'name can only contain URL-friendly characters' ] } ``` ## Legacy Names In the old days of npm, package names were wild. They could have capital letters in them. They could be really long. They could be the name of an existing module in node core. If you give this function a package name that **used to be valid**, you'll see a change in the value of `validForNewPackages` property, and a warnings array will be present: ```js validate("eLaBorAtE-paCkAgE-with-mixed-case-and-more-than-214-characters-----------------------------------------------------------------------------------------------------------------------------------------------------------") ``` returns: ```js { validForNewPackages: false, validForOldPackages: true, warnings: [ "name can no longer contain capital letters", "name can no longer contain more than 214 characters" ] } ``` ## Tests ```sh npm install npm test ``` ## License ISC # make-fetch-happen [![npm version](https://img.shields.io/npm/v/make-fetch-happen.svg)](https://npm.im/make-fetch-happen) [![license](https://img.shields.io/npm/l/make-fetch-happen.svg)](https://npm.im/make-fetch-happen) [![Travis](https://img.shields.io/travis/zkat/make-fetch-happen.svg)](https://travis-ci.org/zkat/make-fetch-happen) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/zkat/make-fetch-happen?svg=true)](https://ci.appveyor.com/project/zkat/make-fetch-happen) [![Coverage Status](https://coveralls.io/repos/github/zkat/make-fetch-happen/badge.svg?branch=latest)](https://coveralls.io/github/zkat/make-fetch-happen?branch=latest) [`make-fetch-happen`](https://github.com/zkat/make-fetch-happen) is a Node.js library that wraps [`node-fetch-npm`](https://github.com/npm/node-fetch-npm) with additional features [`node-fetch`](https://github.com/bitinn/node-fetch) doesn't intend to include, including HTTP Cache support, request pooling, proxies, retries, [and more](#features)! 
## Install `$ npm install --save make-fetch-happen` ## Table of Contents * [Example](#example) * [Features](#features) * [Contributing](#contributing) * [API](#api) * [`fetch`](#fetch) * [`fetch.defaults`](#fetch-defaults) * [`node-fetch` options](#node-fetch-options) * [`make-fetch-happen` options](#extra-options) * [`opts.cacheManager`](#opts-cache-manager) * [`opts.cache`](#opts-cache) * [`opts.proxy`](#opts-proxy) * [`opts.noProxy`](#opts-no-proxy) * [`opts.ca, opts.cert, opts.key`](#https-opts) * [`opts.maxSockets`](#opts-max-sockets) * [`opts.retry`](#opts-retry) * [`opts.onRetry`](#opts-onretry) * [`opts.integrity`](#opts-integrity) * [Message From Our Sponsors](#wow) ### Example ```javascript const fetch = require('make-fetch-happen').defaults({ cacheManager: './my-cache' // path where cache will be written (and read) }) fetch('https://registry.npmjs.org/make-fetch-happen').then(res => { return res.json() // download the body as JSON }).then(body => { console.log(`got ${body.name} from web`) return fetch('https://registry.npmjs.org/make-fetch-happen', { cache: 'no-cache' // forces a conditional request }) }).then(res => { console.log(res.status) // 304! cache validated! return res.json().then(body => { console.log(`got ${body.name} from cache`) }) }) ``` ### Features * Builds around [`node-fetch`](https://npm.im/node-fetch) for the core [`fetch` API](https://fetch.spec.whatwg.org) implementation * Request pooling out of the box * Quite fast, really * Automatic HTTP-semantics-aware request retries * Cache-fallback automatic "offline mode" * Proxy support (http, https, socks, socks4, socks5) * Built-in request caching following full HTTP caching rules (`Cache-Control`, `ETag`, `304`s, cache fallback on error, etc). * Customize cache storage with any [Cache API](https://developer.mozilla.org/en-US/docs/Web/API/Cache)-compliant `Cache` instance. Cache to Redis! * Node.js Stream support * Transparent gzip and deflate support * [Subresource Integrity](https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity) support * Literally punches nazis * (PENDING) Range request caching and resuming ### Contributing The make-fetch-happen team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear. All participants and maintainers in this project are expected to follow [Code of Conduct](CODE_OF_CONDUCT.md), and just generally be excellent to each other. Please refer to the [Changelog](CHANGELOG.md) for project history details, too. Happy hacking! ### API #### <a name="fetch"></a> `> fetch(uriOrRequest, [opts]) -> Promise<Response>` This function implements most of the [`fetch` API](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/fetch): given a `uri` string or a `Request` instance, it will fire off an http request and return a Promise containing the relevant response. If `opts` is provided, the [`node-fetch`-specific options](#node-fetch-options) will be passed to that library. There are also [additional options](#extra-options) specific to make-fetch-happen that add various features, such as HTTP caching, integrity verification, proxy support, and more. 
##### Example ```javascript fetch('https://google.com').then(res => res.buffer()) ``` #### <a name="fetch-defaults"></a> `> fetch.defaults([defaultUrl], [defaultOpts])` Returns a new `fetch` function that will call `make-fetch-happen` using `defaultUrl` and `defaultOpts` as default values to any calls. A defaulted `fetch` will also have a `.defaults()` method, so they can be chained. ##### Example ```javascript const fetch = require('make-fetch-happen').defaults({ cacheManager: './my-local-cache' }) fetch('https://registry.npmjs.org/make-fetch-happen') // will always use the cache ``` #### <a name="node-fetch-options"></a> `> node-fetch options` The following options for `node-fetch` are used as-is: * method * body * redirect * follow * timeout * compress * size These other options are modified or augmented by make-fetch-happen: * headers - Default `User-Agent` set to make-fetch happen. `Connection` is set to `keep-alive` or `close` automatically depending on `opts.agent`. * agent * If agent is null, an http or https Agent will be automatically used. By default, these will be `http.globalAgent` and `https.globalAgent`. * If [`opts.proxy`](#opts-proxy) is provided and `opts.agent` is null, the agent will be set to an appropriate proxy-handling agent. * If `opts.agent` is an object, it will be used as the request-pooling agent argument for this request. * If `opts.agent` is `false`, it will be passed as-is to the underlying request library. This causes a new Agent to be spawned for every request. For more details, see [the documentation for `node-fetch` itself](https://github.com/bitinn/node-fetch#options). #### <a name="extra-options"></a> `> make-fetch-happen options` make-fetch-happen augments the `node-fetch` API with additional features available through extra options. The following extra options are available: * [`opts.cacheManager`](#opts-cache-manager) - Cache target to read/write * [`opts.cache`](#opts-cache) - `fetch` cache mode. Controls cache *behavior*. * [`opts.proxy`](#opts-proxy) - Proxy agent * [`opts.noProxy`](#opts-no-proxy) - Domain segments to disable proxying for. * [`opts.ca, opts.cert, opts.key, opts.strictSSL`](#https-opts) * [`opts.localAddress`](#opts-local-address) * [`opts.maxSockets`](#opts-max-sockets) * [`opts.retry`](#opts-retry) - Request retry settings * [`opts.onRetry`](#opts-onretry) - a function called whenever a retry is attempted * [`opts.integrity`](#opts-integrity) - [Subresource Integrity](https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity) metadata. #### <a name="opts-cache-manager"></a> `> opts.cacheManager` Either a `String` or a `Cache`. If the former, it will be assumed to be a `Path` to be used as the cache root for [`cacache`](https://npm.im/cacache). If an object is provided, it will be assumed to be a compliant [`Cache` instance](https://developer.mozilla.org/en-US/docs/Web/API/Cache). Only `Cache.match()`, `Cache.put()`, and `Cache.delete()` are required. Options objects will not be passed in to `match()` or `delete()`. By implementing this API, you can customize the storage backend for make-fetch-happen itself -- for example, you could implement a cache that uses `redis` for caching, or simply keeps everything in memory. Most of the caching logic exists entirely on the make-fetch-happen side, so the only thing you need to worry about is reading, writing, and deleting, as well as making sure `fetch.Response` objects are what gets returned. 
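For instance, the "simply keeps everything in memory" case mentioned above could look roughly like the following sketch (illustrative only: it keys entries by URL and ignores header-based staleness entirely):

```javascript
const fetch = require('make-fetch-happen')

// A bare-bones Cache-like object: only match/put/delete are required.
class InMemoryCache {
  constructor () { this.store = new Map() }
  match (req) {
    const hit = this.store.get(req.url)
    return Promise.resolve(hit && new fetch.Response(hit.body, {
      url: req.url,
      headers: hit.headers,
      status: 200
    }))
  }
  put (req, res) {
    return res.buffer().then(body => {
      this.store.set(req.url, { body: body, headers: res.headers.raw() })
      return res
    })
  }
  delete (req) {
    return Promise.resolve(this.store.delete(req.url))
  }
}

fetch('https://registry.npmjs.org/make-fetch-happen', {
  cacheManager: new InMemoryCache()
})
```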
You can refer to `cache.js` in the make-fetch-happen source code for a reference implementation. **NOTE**: Requests will not be cached unless their response bodies are consumed. You will need to use one of the `res.json()`, `res.buffer()`, etc methods on the response, or drain the `res.body` stream, in order for it to be written. The default cache manager also adds the following headers to cached responses: * `X-Local-Cache`: Path to the cache the content was found in * `X-Local-Cache-Key`: Unique cache entry key for this response * `X-Local-Cache-Hash`: Specific integrity hash for the cached entry * `X-Local-Cache-Time`: UTCString of the cache insertion time for the entry Using [`cacache`](https://npm.im/cacache), a call like this may be used to manually fetch the cached entry: ```javascript const h = response.headers cacache.get(h.get('x-local-cache'), h.get('x-local-cache-key')) // grab content only, directly: cacache.get.byDigest(h.get('x-local-cache'), h.get('x-local-cache-hash')) ``` ##### Example ```javascript fetch('https://registry.npmjs.org/make-fetch-happen', { cacheManager: './my-local-cache' }) // -> 200-level response will be written to disk fetch('https://npm.im/cacache', { cacheManager: new MyCustomRedisCache(process.env.PORT) }) // -> 200-level response will be written to redis ``` A possible (minimal) implementation for `MyCustomRedisCache`: ```javascript const bluebird = require('bluebird') const redis = require("redis") bluebird.promisifyAll(redis.RedisClient.prototype) class MyCustomRedisCache { constructor (opts) { this.redis = redis.createClient(opts) } match (req) { return this.redis.getAsync(req.url).then(res => { if (res) { const parsed = JSON.parse(res) return new fetch.Response(parsed.body, { url: req.url, headers: parsed.headers, status: 200 }) } }) } put (req, res) { return res.buffer().then(body => { return this.redis.setAsync(req.url, JSON.stringify({ body: body, headers: res.headers.raw() })) }).then(() => { // return the response itself return res }) } 'delete' (req) { return this.redis.unlinkAsync(req.url) } } ``` #### <a name="opts-cache"></a> `> opts.cache` This option follows the standard `fetch` API cache option. This option will do nothing if [`opts.cacheManager`](#opts-cache-manager) is null. The following values are accepted (as strings): * `default` - Fetch will inspect the HTTP cache on the way to the network. If there is a fresh response it will be used. If there is a stale response a conditional request will be created, and a normal request otherwise. It then updates the HTTP cache with the response. If the revalidation request fails (for example, on a 500 or if you're offline), the stale response will be returned. * `no-store` - Fetch behaves as if there is no HTTP cache at all. * `reload` - Fetch behaves as if there is no HTTP cache on the way to the network. Ergo, it creates a normal request and updates the HTTP cache with the response. * `no-cache` - Fetch creates a conditional request if there is a response in the HTTP cache and a normal request otherwise. It then updates the HTTP cache with the response. * `force-cache` - Fetch uses any response in the HTTP cache matching the request, not paying attention to staleness. If there was no response, it creates a normal request and updates the HTTP cache with the response. * `only-if-cached` - Fetch uses any response in the HTTP cache matching the request, not paying attention to staleness. If there was no response, it returns a network error. 
(Can only be used when request’s mode is "same-origin". Any cached redirects will be followed assuming request’s redirect mode is "follow" and the redirects do not violate request’s mode.) (Note: option descriptions are taken from https://fetch.spec.whatwg.org/#http-network-or-cache-fetch) ##### Example ```javascript const fetch = require('make-fetch-happen').defaults({ cacheManager: './my-cache' }) // Will error with ENOTCACHED if we haven't already cached this url fetch('https://registry.npmjs.org/make-fetch-happen', { cache: 'only-if-cached' }) // Will refresh any local content and cache the new response fetch('https://registry.npmjs.org/make-fetch-happen', { cache: 'reload' }) // Will use any local data, even if stale. Otherwise, will hit network. fetch('https://registry.npmjs.org/make-fetch-happen', { cache: 'force-cache' }) ``` #### <a name="opts-proxy"></a> `> opts.proxy` A string or `url.parse`-d URI to proxy through. Different Proxy handlers will be used depending on the proxy's protocol. Additionally, `process.env.HTTP_PROXY`, `process.env.HTTPS_PROXY`, and `process.env.PROXY` are used if present and no `opts.proxy` value is provided. (Pending) `process.env.NO_PROXY` may also be configured to skip proxying requests for all, or specific domains. ##### Example ```javascript fetch('https://registry.npmjs.org/make-fetch-happen', { proxy: 'https://corporate.yourcompany.proxy:4445' }) fetch('https://registry.npmjs.org/make-fetch-happen', { proxy: { protocol: 'https:', hostname: 'corporate.yourcompany.proxy', port: 4445 } }) ``` #### <a name="opts-no-proxy"></a> `> opts.noProxy` If present, should be a comma-separated string or an array of domain extensions that a proxy should _not_ be used for. This option may also be provided through `process.env.NO_PROXY`. #### <a name="https-opts"></a> `> opts.ca, opts.cert, opts.key, opts.strictSSL` These values are passed in directly to the HTTPS agent and will be used for both proxied and unproxied outgoing HTTPS requests. They mostly correspond to the same options the `https` module accepts, which will be themselves passed to `tls.connect()`. `opts.strictSSL` corresponds to `rejectUnauthorized`. #### <a name="opts-local-address"></a> `> opts.localAddress` Passed directly to `http` and `https` request calls. Determines the local address to bind to. #### <a name="opts-max-sockets"></a> `> opts.maxSockets` Default: 15 Maximum number of active concurrent sockets to use for the underlying Http/Https/Proxy agents. This setting applies once per spawned agent. 15 is probably a _pretty good value_ for most use-cases, and balances speed with, uh, not knocking out people's routers. 🤓 #### <a name="opts-retry"></a> `> opts.retry` An object that can be used to tune request retry settings. Retries will only be attempted on the following conditions: * Request method is NOT `POST` AND * Request status is one of: `408`, `420`, `429`, or any status in the 500-range. OR * Request errored with `ECONNRESET`, `ECONNREFUSED`, `EADDRINUSE`, `ETIMEDOUT`, or the `fetch` error `request-timeout`. The following are worth noting as explicitly not retried: * `getaddrinfo ENOTFOUND` and will be assumed to be either an unreachable domain or the user will be assumed offline. If a response is cached, it will be returned immediately. * `ECONNRESET` currently has no support for restarting. It will eventually be supported but requires a bit more juggling due to streaming. 
If `opts.retry` is `false`, it is equivalent to `{retries: 0}` If `opts.retry` is a number, it is equivalent to `{retries: num}` The following retry options are available if you want more control over it: * retries * factor * minTimeout * maxTimeout * randomize For details on what each of these do, refer to the [`retry`](https://npm.im/retry) documentation. ##### Example ```javascript fetch('https://flaky.site.com', { retry: { retries: 10, randomize: true } }) fetch('http://reliable.site.com', { retry: false }) fetch('http://one-more.site.com', { retry: 3 }) ``` #### <a name="opts-onretry"></a> `> opts.onRetry` A function called whenever a retry is attempted. ##### Example ```javascript fetch('https://flaky.site.com', { onRetry() { console.log('we will retry!') } }) ``` #### <a name="opts-integrity"></a> `> opts.integrity` Matches the response body against the given [Subresource Integrity](https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity) metadata. If verification fails, the request will fail with an `EINTEGRITY` error. `integrity` may either be a string or an [`ssri`](https://npm.im/ssri) `Integrity`-like. ##### Example ```javascript fetch('https://registry.npmjs.org/make-fetch-happen/-/make-fetch-happen-1.0.0.tgz', { integrity: 'sha1-o47j7zAYnedYFn1dF/fR9OV3z8Q=' }) // -> ok fetch('https://malicious-registry.org/make-fetch-happen/-/make-fetch-happen-1.0.0.tgz', { integrity: 'sha1-o47j7zAYnedYFn1dF/fR9OV3z8Q=' }) // Error: EINTEGRITY ``` ### <a name="wow"></a> Message From Our Sponsors ![](stop.gif) ![](happening.gif) # minipass A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough) [It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers. Supports pipe()ing (including multi-pipe() and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea. There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory. There is also no `unpipe()` method. Once you start piping, there is no stopping it! If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called. `objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set. This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. Once you're done transforming the data however you want, call `super.write()` with the transform output. 
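To make that concrete, here is a minimal illustrative sketch of such a subclass (the class name is made up):

```js
const Minipass = require('minipass')

// Upper-cases every string chunk before passing it downstream.
class UpperCaseStream extends Minipass {
  write (chunk, encoding, cb) {
    return super.write(String(chunk).toUpperCase(), encoding, cb)
  }
}

const ucs = new UpperCaseStream({ encoding: 'utf8' })
ucs.on('data', chunk => console.log(chunk)) // prints 'HELLO'
ucs.end('hello')
```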
For some examples of streams that extend Minipass in various ways, check out:

- [minizlib](http://npm.im/minizlib)
- [fs-minipass](http://npm.im/fs-minipass)
- [tar](http://npm.im/tar)
- [minipass-collect](http://npm.im/minipass-collect)
- [minipass-flush](http://npm.im/minipass-flush)
- [minipass-pipeline](http://npm.im/minipass-pipeline)
- [tap](http://npm.im/tap)
- [tap-parser](http://npm.im/tap-parser)
- [treport](http://npm.im/treport)

## Differences from Node.js Streams

There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams.

Please read these caveats if you are familiar with node-core streams and intend to use Minipass streams in your programs.

### Timing

Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer. Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them.

This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately.

However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time.

This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms.

### No High/Low Water Marks

Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value.

Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns).

If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it.

### Hazards of Buffering (or: Why Minipass Is So Fast)

Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller. This minimizes buffering.
Consider this case: ```js const {PassThrough} = require('stream') const p1 = new PassThrough({ highWaterMark: 1024 }) const p2 = new PassThrough({ highWaterMark: 1024 }) const p3 = new PassThrough({ highWaterMark: 1024 }) const p4 = new PassThrough({ highWaterMark: 1024 }) p1.pipe(p2).pipe(p3).pipe(p4) p4.on('data', () => console.log('made it through')) // this returns false and buffers, then writes to p2 on next tick (1) // p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2) // p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3) // p4 returns false and buffers, pausing p3, then emits 'data' and 'drain' // on next tick (4) // p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and // 'drain' on next tick (5) // p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6) // p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next // tick (7) p1.write(Buffer.alloc(2048)) // returns false ``` Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through! Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket. Consider the Minipass case: ```js const m1 = new Minipass() const m2 = new Minipass() const m3 = new Minipass() const m4 = new Minipass() m1.pipe(m2).pipe(m3).pipe(m4) m4.on('data', () => console.log('made it through')) // m1 is flowing, so it writes the data to m2 immediately // m2 is flowing, so it writes the data to m3 immediately // m3 is flowing, so it writes the data to m4 immediately // m4 is flowing, so it fires the 'data' event immediately, returns true // m4's write returned true, so m3 is still flowing, returns true // m3's write returned true, so m2 is still flowing, returns true // m2's write returned true, so m1 is still flowing, returns true // No event deferrals or buffering along the way! m1.write(Buffer.alloc(2048)) // returns true ``` It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing. As always, the faster implementation is the one that does less stuff and waits less time to do it. ### Immediately emit `end` for empty streams (when not paused) If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately. If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event. ### Emit `end` When Asked One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted. Ie, if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. 
(You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.)

To avoid calling handlers multiple times when they would not expect multiple ends to occur, all listeners are removed from the `'end'` event whenever it is emitted.

### Impact of "immediate flow" on Tee-streams

A "tee stream" is a stream piping to multiple destinations:

```js
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
tee.write('foo') // goes to both destinations
```

Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer.

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone
src.pipe(dest2) // gets nothing!
```

The solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead.

```js
// Safe example: tee to both places
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
src.pipe(tee) // tee gets 'foo', pipes to both locations
```

The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second:

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.on('data', handler1) // receives 'foo' right away
src.on('data', handler2) // nothing to see here!
```

A dedicated tee-stream can be used in this case as well:

```js
// Safe example: tee to both data handlers
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.on('data', handler1)
tee.on('data', handler2)
src.pipe(tee)
```

## USAGE

It's a stream! Use it like a stream and it'll most likely do what you want.

```js
const Minipass = require('minipass')
const mp = new Minipass(options) // optional: { encoding, objectMode }
mp.write('foo')
mp.pipe(someOtherStream)
mp.end('bar')
```

### OPTIONS

* `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`.
* `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you `write()` something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value.

### API

Implements the user-facing portions of Node.js's `Readable` and `Writable` streams.

### Methods

* `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or `true` if it's still in "flowing" mode.
* `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed.
* `setEncoding(encoding)` - Set the encoding for data coming out of the stream. This can only be done once.
* `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed.
* `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted.
* `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations.
* `on(ev, fn)`, `emit(ev, data)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".)
* `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`.
* `collect()` - Return a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data.
* `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data.
* `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying.
* `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered.

### Properties

* `bufferLength` Read-only. Total number of bytes buffered, or in the case of objectMode, the total number of objects.
* `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.)
* `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted.
* `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediately re-emit it if it has already been emitted.
* `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` is called.
* `readable` Whether the stream is readable. Default `true`.
* `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.)
* `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.)
* `destroyed` A getter that indicates whether the stream was destroyed.
* `paused` True if the stream has been explicitly paused, otherwise false.
* `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`.

### Events

* `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream.
* `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted.
* `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`.
* `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`.
* `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise. * `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream. * `readable` Emitted when data is buffered and ready to be read by a consumer. * `resume` Emitted when stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.) ### Static Methods * `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and false otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.) ## EXAMPLES Here are some examples of things you can do with Minipass streams. ### simple "are you done yet" promise ```js mp.promise().then(() => { // stream is finished }, er => { // stream emitted an error }) ``` ### collecting ```js mp.collect().then(all => { // all is an array of all the data emitted // encoding is supported in this case, so // so the result will be a collection of strings if // an encoding is specified, or buffers/objects if not. // // In an async function, you may do // const data = await stream.collect() }) ``` ### collecting into a single blob This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way: ```js mp.concat().then(onebigchunk => { // onebigchunk is a string if the stream // had an encoding set, or a buffer otherwise. }) ``` ### iteration You can iterate over streams synchronously or asynchronously in platforms that support it. Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration. To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option. ```js const mp = new Minipass({ objectMode: true }) mp.write('a') mp.write('b') for (let letter of mp) { console.log(letter) // a, b } mp.write('c') mp.write('d') for (let letter of mp) { console.log(letter) // c, d } mp.write('e') mp.end() for (let letter of mp) { console.log(letter) // e } for (let letter of mp) { console.log(letter) // nothing } ``` Asynchronous iteration will continue until the end event is reached, consuming all of the data. 
```js
const mp = new Minipass({ encoding: 'utf8' })

// some source of some data
let i = 5
const inter = setInterval(() => {
  if (i --> 0)
    mp.write(Buffer.from('foo\n', 'utf8'))
  else {
    mp.end()
    clearInterval(inter)
  }
}, 100)

// consume the data with asynchronous iteration
async function consume () {
  for await (let chunk of mp) {
    console.log(chunk)
  }
  return 'ok'
}

consume().then(res => console.log(res))
// logs `foo\n` 5 times, and then `ok`
```

### subclass that `console.log()`s everything written into it

```js
class Logger extends Minipass {
  write (chunk, encoding, callback) {
    console.log('WRITE', chunk, encoding)
    return super.write(chunk, encoding, callback)
  }
  end (chunk, encoding, callback) {
    console.log('END', chunk, encoding)
    return super.end(chunk, encoding, callback)
  }
}

someSource.pipe(new Logger()).pipe(someDest)
```

### same thing, but using an inline anonymous class

```js
// js classes are fun
someSource
  .pipe(new (class extends Minipass {
    emit (ev, ...data) {
      // let's also log events, because debugging some weird thing
      console.log('EMIT', ev)
      return super.emit(ev, ...data)
    }
    write (chunk, encoding, callback) {
      console.log('WRITE', chunk, encoding)
      return super.write(chunk, encoding, callback)
    }
    end (chunk, encoding, callback) {
      console.log('END', chunk, encoding)
      return super.end(chunk, encoding, callback)
    }
  }))
  .pipe(someDest)
```

### subclass that defers 'end' for some reason

```js
class SlowEnd extends Minipass {
  emit (ev, ...args) {
    if (ev === 'end') {
      console.log('going to end, hold on a sec')
      setTimeout(() => {
        console.log('ok, ready to end now')
        super.emit('end', ...args)
      }, 100)
    } else {
      return super.emit(ev, ...args)
    }
  }
}
```

### transform that creates newline-delimited JSON

```js
class NDJSONEncode extends Minipass {
  write (obj, cb) {
    try {
      // JSON.stringify can throw, emit an error on that
      return super.write(JSON.stringify(obj) + '\n', 'utf8', cb)
    } catch (er) {
      this.emit('error', er)
    }
  }
  end (obj, cb) {
    if (typeof obj === 'function') {
      cb = obj
      obj = undefined
    }
    if (obj !== undefined) {
      this.write(obj)
    }
    return super.end(cb)
  }
}
```

### transform that parses newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor (options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write (chunk, encoding, cb) {
    if (typeof chunk === 'string' &&
        typeof encoding === 'string' &&
        encoding !== 'utf8') {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      let parsed
      try {
        // JSON.parse can throw; emit an error and skip the bad line
        parsed = JSON.parse(jsonData[i])
      } catch (er) {
        this.emit('error', er)
        continue
      }
      super.write(parsed)
    }
    if (cb)
      cb()
  }
}
```

# Punycode.js [![Build status](https://travis-ci.org/bestiejs/punycode.js.svg?branch=master)](https://travis-ci.org/bestiejs/punycode.js) [![Code coverage status](http://img.shields.io/coveralls/bestiejs/punycode.js/master.svg)](https://coveralls.io/r/bestiejs/punycode.js) [![Dependency status](https://gemnasium.com/bestiejs/punycode.js.svg)](https://gemnasium.com/bestiejs/punycode.js)

A robust Punycode converter that fully complies to [RFC 3492](https://tools.ietf.org/html/rfc3492) and [RFC 5891](https://tools.ietf.org/html/rfc5891), and works on nearly all JavaScript platforms.
This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm: * [The C example code from RFC 3492](https://tools.ietf.org/html/rfc3492#appendix-C) * [`punycode.c` by _Markus W. Scherer_ (IBM)](http://opensource.apple.com/source/ICU/ICU-400.42/icuSources/common/punycode.c) * [`punycode.c` by _Ben Noordhuis_](https://github.com/bnoordhuis/punycode/blob/master/punycode.c) * [JavaScript implementation by _some_](http://stackoverflow.com/questions/183485/can-anyone-recommend-a-good-free-javascript-for-punycode-to-unicode-conversion/301287#301287) * [`punycode.js` by _Ben Noordhuis_](https://github.com/joyent/node/blob/426298c8c1c0d5b5224ac3658c41e7c2a3fe9377/lib/punycode.js) (note: [not fully compliant](https://github.com/joyent/node/issues/2072)) This project is [bundled](https://github.com/joyent/node/blob/master/lib/punycode.js) with [Node.js v0.6.2+](https://github.com/joyent/node/compare/975f1930b1...61e796decc) and [io.js v1.0.0+](https://github.com/iojs/io.js/blob/v1.x/lib/punycode.js). ## Installation Via [npm](https://www.npmjs.com/) (only required for Node.js releases older than v0.6.2): ```bash npm install punycode ``` Via [Bower](http://bower.io/): ```bash bower install punycode ``` Via [Component](https://github.com/component/component): ```bash component install bestiejs/punycode.js ``` In a browser: ```html <script src="punycode.js"></script> ``` In [Node.js](https://nodejs.org/), [io.js](https://iojs.org/), [Narwhal](http://narwhaljs.org/), and [RingoJS](http://ringojs.org/): ```js var punycode = require('punycode'); ``` In [Rhino](http://www.mozilla.org/rhino/): ```js load('punycode.js'); ``` Using an AMD loader like [RequireJS](http://requirejs.org/): ```js require( { 'paths': { 'punycode': 'path/to/punycode' } }, ['punycode'], function(punycode) { console.log(punycode); } ); ``` ## API ### `punycode.decode(string)` Converts a Punycode string of ASCII symbols to a string of Unicode symbols. ```js // decode domain name parts punycode.decode('maana-pta'); // 'mañana' punycode.decode('--dqo34k'); // '☃-⌘' ``` ### `punycode.encode(string)` Converts a string of Unicode symbols to a Punycode string of ASCII symbols. ```js // encode domain name parts punycode.encode('mañana'); // 'maana-pta' punycode.encode('☃-⌘'); // '--dqo34k' ``` ### `punycode.toUnicode(input)` Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode. ```js // decode domain names punycode.toUnicode('xn--maana-pta.com'); // → 'mañana.com' punycode.toUnicode('xn----dqo34k.com'); // → '☃-⌘.com' // decode email addresses punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'); // → 'джумла@джpумлатест.bрфa' ``` ### `punycode.toASCII(input)` Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII. 
```js // encode domain names punycode.toASCII('mañana.com'); // → 'xn--maana-pta.com' punycode.toASCII('☃-⌘.com'); // → 'xn----dqo34k.com' // encode email addresses punycode.toASCII('джумла@джpумлатест.bрфa'); // → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq' ``` ### `punycode.ucs2` #### `punycode.ucs2.decode(string)` Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16. ```js punycode.ucs2.decode('abc'); // → [0x61, 0x62, 0x63] // surrogate pair for U+1D306 TETRAGRAM FOR CENTRE: punycode.ucs2.decode('\uD834\uDF06'); // → [0x1D306] ``` #### `punycode.ucs2.encode(codePoints)` Creates a string based on an array of numeric code point values. ```js punycode.ucs2.encode([0x61, 0x62, 0x63]); // → 'abc' punycode.ucs2.encode([0x1D306]); // → '\uD834\uDF06' ``` ### `punycode.version` A string representing the current Punycode.js version number. ## Unit tests & code coverage After cloning this repository, run `npm install --dev` to install the dependencies needed for Punycode.js development and testing. You may want to install Istanbul _globally_ using `npm install istanbul -g`. Once that’s done, you can run the unit tests in Node using `npm test` or `node tests/tests.js`. To run the tests in Rhino, Ringo, Narwhal, PhantomJS, and web browsers as well, use `grunt test`. To generate the code coverage report, use `grunt cover`. Feel free to fork if you see possible improvements! ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## Contributors | [![twitter/jdalton](https://gravatar.com/avatar/299a3d891ff1920b69c364d061007043?s=70)](https://twitter.com/jdalton "Follow @jdalton on Twitter") | |---| | [John-David Dalton](http://allyoucanleet.com/) | ## License Punycode.js is available under the [MIT](https://mths.be/mit) license. # is-cidr [![](https://img.shields.io/npm/v/is-cidr.svg?style=flat)](https://www.npmjs.org/package/is-cidr) [![](https://img.shields.io/npm/dm/is-cidr.svg)](https://www.npmjs.org/package/is-cidr) [![](https://api.travis-ci.org/silverwind/is-cidr.svg?style=flat)](https://travis-ci.org/silverwind/is-cidr) > Check if a string is an IP address in CIDR notation ## Install ``` npm i is-cidr ``` ## Usage ```js const isCidr = require('is-cidr'); isCidr('192.168.0.1/24'); //=> 4 isCidr('1:2:3:4:5:6:7:8/64'); //=> 6 isCidr('10.0.0.0'); //=> 0 isCidr.v6('10.0.0.0/24'); //=> false ``` ## API ### isCidr(input) Check if `input` is a IPv4 or IPv6 CIDR address. Returns either `4`, `6` (indicating the IP version) or `0` if the string is not a CIDR. ### isCidr.v4(input) Check if `input` is a IPv4 CIDR address. Returns a boolean. ### isCidr.v6(input) Check if `input` is a IPv6 CIDR address. Returns a boolean. 
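As a small supplementary sketch of the version-specific checks described above (the sample inputs are arbitrary; the expected results follow from the documented boolean return types):

```js
const isCidr = require('is-cidr');

isCidr.v4('10.0.0.0/8');          //=> true
isCidr.v4('1:2:3:4:5:6:7:8/64');  //=> false
isCidr.v6('1:2:3:4:5:6:7:8/64');  //=> true
```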
## Related - [cidr-regex](https://github.com/silverwind/cidr-regex) - Regular expression for matching IP addresses in CIDR notation - [is-ip](https://github.com/sindresorhus/is-ip) - Check if a string is an IP address - [ip-regex](https://github.com/sindresorhus/ip-regex) - Regular expression for matching IP addresses ## License © [silverwind](https://github.com/silverwind), distributed under BSD licence Based on previous work by [Felipe Apostol](https://github.com/flipjs) # dezalgo Contain async insanity so that the dark pony lord doesn't eat souls See [this blog post](http://blog.izs.me/post/59142742143/designing-apis-for-asynchrony). ## USAGE Pass a callback to `dezalgo` and it will ensure that it is *always* called in a future tick, and never in this tick. ```javascript var dz = require('dezalgo') var cache = {} function maybeSync(arg, cb) { cb = dz(cb) // this will actually defer to nextTick if (cache[arg]) cb(null, cache[arg]) fs.readFile(arg, function (er, data) { // since this is *already* defered, it will call immediately if (er) cb(er) cb(null, cache[arg] = data) }) } ``` # HAR Validator [![license][license-img]][license-url] [![version][npm-img]][npm-url] [![super linter][super-linter-img]][super-linter-url] [![test][test-img]][test-url] [![release][release-img]][release-url] [license-url]: LICENSE [license-img]: https://badgen.net/github/license/ahmadnassri/node-har-validator [npm-url]: https://www.npmjs.com/package/har-validator [npm-img]: https://badgen.net/npm/v/har-validator [super-linter-url]: https://github.com/ahmadnassri/node-har-validator/actions?query=workflow%3Asuper-linter [super-linter-img]: https://github.com/ahmadnassri/node-har-validator/workflows/super-linter/badge.svg [test-url]: https://github.com/ahmadnassri/node-har-validator/actions?query=workflow%3Atest [test-img]: https://github.com/ahmadnassri/node-har-validator/workflows/test/badge.svg [release-url]: https://github.com/ahmadnassri/node-har-validator/actions?query=workflow%3Arelease [release-img]: https://github.com/ahmadnassri/node-har-validator/workflows/release/badge.svg > Extremely fast HTTP Archive ([HAR](https://github.com/ahmadnassri/har-spec/blob/master/versions/1.2.md)) validator using JSON Schema. ## Install ```bash npm install har-validator ``` ## CLI Usage Please refer to [`har-cli`](https://github.com/ahmadnassri/har-cli) for more info. ## API **Note**: as of [`v2.0.0`](https://github.com/ahmadnassri/node-har-validator/releases/tag/v2.0.0) this module defaults to Promise based API. _For backward compatibility with `v1.x` an [async/callback API](docs/async.md) is also provided_ - [async API](docs/async.md) - [callback API](docs/async.md) - [Promise API](docs/promise.md) _(default)_ # Note: pending imminent deprecation **This module will be deprecated once npm v7 is released. Please do not rely on it more than absolutely necessary.** The lifecycle script runner used in npm v7 is [@npmcli/run-script](http://npm.im/@npmcli/run-script). Please use that module moving forward. ----- # npm-lifecycle [`npm-lifecycle`](https://github.com/npm/npm-lifecycle) is a standalone library for executing packages' lifecycle scripts. It is extracted from npm itself and intended to be fully compatible with the way npm executes individual scripts. 
## Install `$ npm install npm-lifecycle` ## Table of Contents * [Example](#example) * [Features](#features) * [Contributing](#contributing) * [API](#api) * [`lifecycle`](#lifecycle) ### Example ```javascript // idk yet ``` ### API #### <a name="lifecycle"></a> `> lifecycle(name, pkg, wd, [opts]) -> Promise` ##### Arguments * `opts.stdio` - the [stdio](https://nodejs.org/api/child_process.html#child_process_options_stdio) passed to the child process. `[0, 1, 2]` by default. ##### Example ```javascript lifecycle() ``` # Glob Match files using the patterns the shell uses, like stars and stuff. [![Build Status](https://travis-ci.org/isaacs/node-glob.svg?branch=master)](https://travis-ci.org/isaacs/node-glob/) [![Build Status](https://ci.appveyor.com/api/projects/status/kd7f3yftf7unxlsx?svg=true)](https://ci.appveyor.com/project/isaacs/node-glob) [![Coverage Status](https://coveralls.io/repos/isaacs/node-glob/badge.svg?branch=master&service=github)](https://coveralls.io/github/isaacs/node-glob?branch=master) This is a glob implementation in JavaScript. It uses the `minimatch` library to do its matching. ![](logo/glob.png) ## Usage Install with npm ``` npm i glob ``` ```javascript var glob = require("glob") // options is optional glob("**/*.js", options, function (er, files) { // files is an array of filenames. // If the `nonull` option is set, and nothing // was found, then files is ["**/*.js"] // er is an error object or null. }) ``` ## Glob Primer "Globs" are the patterns you type when you do stuff like `ls *.js` on the command line, or put `build/*` in a `.gitignore` file. Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with `{` and end with `}`, with any number of comma-delimited sections within. Braced sections may contain slash characters, so `a{/b/c,bcd}` would expand into `a/b/c` and `abcd`. The following characters have special magic meaning when used in a path portion: * `*` Matches 0 or more characters in a single path portion * `?` Matches 1 character * `[...]` Matches a range of characters, similar to a RegExp range. If the first character of the range is `!` or `^` then it matches any character not in the range. * `!(pattern|pattern|pattern)` Matches anything that does not match any of the patterns provided. * `?(pattern|pattern|pattern)` Matches zero or one occurrence of the patterns provided. * `+(pattern|pattern|pattern)` Matches one or more occurrences of the patterns provided. * `*(a|b|c)` Matches zero or more occurrences of the patterns provided * `@(pattern|pat*|pat?erN)` Matches exactly one of the patterns provided * `**` If a "globstar" is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories. ### Dots If a file or directory path portion has a `.` as the first character, then it will not match any glob pattern unless that pattern's corresponding path part also has a `.` as its first character. For example, the pattern `a/.*/c` would match the file at `a/.b/c`. However the pattern `a/*/c` would not, because `*` does not start with a dot character. You can make glob treat dots as normal characters by setting `dot:true` in the options. ### Basename Matching If you set `matchBase:true` in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, `*.js` would match `test/simple/basic.js`. 
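A short illustrative sketch of that option (the pattern, options object, and file names here are assumptions for the example, not output from a real run):

```javascript
var glob = require("glob")

// With matchBase, a slash-less pattern matches basenames anywhere in the tree.
glob("*.js", { matchBase: true }, function (er, files) {
  // files might include deeply nested paths such as "test/simple/basic.js"
})
```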
### Empty Sets If no matching files are found, then an empty array is returned. This differs from the shell, where the pattern itself is returned. For example: $ echo a*s*d*f a*s*d*f To get the bash-style behavior, set the `nonull:true` in the options. ### See Also: * `man sh` * `man bash` (Search for "Pattern Matching") * `man 3 fnmatch` * `man 5 gitignore` * [minimatch documentation](https://github.com/isaacs/minimatch) ## glob.hasMagic(pattern, [options]) Returns `true` if there are any special characters in the pattern, and `false` otherwise. Note that the options affect the results. If `noext:true` is set in the options object, then `+(a|b)` will not be considered a magic pattern. If the pattern has a brace expansion, like `a/{b/c,x/y}` then that is considered magical, unless `nobrace:true` is set in the options. ## glob(pattern, [options], cb) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * `cb` `{Function}` * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Perform an asynchronous glob search. ## glob.sync(pattern, [options]) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * return: `{Array<String>}` filenames found matching the pattern Perform a synchronous glob search. ## Class: glob.Glob Create a Glob object by instantiating the `glob.Glob` class. ```javascript var Glob = require("glob").Glob var mg = new Glob(pattern, options, cb) ``` It's an EventEmitter, and starts walking the filesystem to find matches immediately. ### new glob.Glob(pattern, [options], [cb]) * `pattern` `{String}` pattern to search for * `options` `{Object}` * `cb` `{Function}` Called when an error occurs, or matches are found * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Note that if the `sync` flag is set in the options, then matches will be immediately available on the `g.found` member. ### Properties * `minimatch` The minimatch object that the glob uses. * `options` The options object passed in. * `aborted` Boolean which is set to true when calling `abort()`. There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls. * `cache` Convenience object. Each field has the following possible values: * `false` - Path does not exist * `true` - Path exists * `'FILE'` - Path exists, and is not a directory * `'DIR'` - Path exists, and is a directory * `[file, entries, ...]` - Path exists, is a directory, and the array value is the results of `fs.readdir` * `statCache` Cache of `fs.stat` results, to prevent statting the same path multiple times. * `symlinks` A record of which paths are symbolic links, which is relevant in resolving `**` patterns. * `realpathCache` An optional object which is passed to `fs.realpath` to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used. ### Events * `end` When the matching is finished, this is emitted with all the matches found. If the `nonull` option is set, and no match was found, then the `matches` list contains the original pattern. The matches are sorted, unless the `nosort` flag is set. * `match` Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath. * `error` Emitted when an unexpected error is encountered, or whenever any fs error occurs if `options.strict` is set. * `abort` When `abort()` is called, this event is raised. 
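For illustration, a brief sketch of listening to those events on a `Glob` instance (the pattern, options, and handlers are placeholders):

```javascript
var Glob = require("glob").Glob

var g = new Glob("**/*.js", { nodir: true })
g.on("match", function (file) { console.log("matched:", file) })
g.on("error", function (er) { console.error(er) })
g.on("end", function (matches) { console.log(matches.length, "files total") })
```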
### Methods * `pause` Temporarily stop the search * `resume` Resume the search * `abort` Stop the search forever ### Options All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications. All options are false by default, unless otherwise noted. All options are added to the Glob object, as well. If you are running many `glob` operations, you can pass a Glob object as the `options` argument to a subsequent operation to shortcut some `stat` and `readdir` calls. At the very least, you may pass in shared `symlinks`, `statCache`, `realpathCache`, and `cache` options, so that parallel glob operations will be sped up by sharing information about the filesystem. * `cwd` The current working directory in which to search. Defaults to `process.cwd()`. * `root` The place where patterns starting with `/` will be mounted onto. Defaults to `path.resolve(options.cwd, "/")` (`/` on Unix systems, and `C:\` or some such on Windows.) * `dot` Include `.dot` files in normal matches and `globstar` matches. Note that an explicit dot in a portion of the pattern will always match dot files. * `nomount` By default, a pattern starting with a forward-slash will be "mounted" onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior. * `mark` Add a `/` character to directory matches. Note that this requires additional stat calls. * `nosort` Don't sort the results. * `stat` Set to true to stat *all* results. This reduces performance somewhat, and is completely unnecessary, unless `readdir` is presumed to be an untrustworthy indicator of file existence. * `silent` When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the `silent` option to true to suppress these warnings. * `strict` When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the `strict` option to raise an error in these cases. * `cache` See `cache` property above. Pass in a previously generated cache object to save some fs calls. * `statCache` A cache of results of filesystem information, to prevent unnecessary stat calls. While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See "Race Conditions" below.) * `symlinks` A cache of known symbolic links. You may pass in a previously generated `symlinks` object to save `lstat` calls when resolving `**` matches. * `sync` DEPRECATED: use `glob.sync(pattern, opts)` instead. * `nounique` In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior. * `nonull` Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3). * `debug` Set to enable debug logging in minimatch and glob. * `nobrace` Do not expand `{a,b}` and `{1..3}` brace sets. * `noglobstar` Do not match `**` against multiple filenames. (Ie, treat it as a normal `*` instead.) * `noext` Do not match `+(a|b)` "extglob" patterns. * `nocase` Perform a case-insensitive match. 
Note: on case-insensitive filesystems, non-magic patterns will match by default, since `stat` and `readdir` will not raise errors. * `matchBase` Perform a basename-only match if the pattern does not contain any slash characters. That is, `*.js` would be treated as equivalent to `**/*.js`, matching all js files in all directories. * `nodir` Do not match directories, only files. (Note: to match *only* directories, simply put a `/` at the end of the pattern.) * `ignore` Add a pattern or an array of glob patterns to exclude matches. Note: `ignore` patterns are *always* in `dot:true` mode, regardless of any other settings. * `follow` Follow symlinked directories when expanding `**` patterns. Note that this can result in a lot of duplicate references in the presence of cyclic links. * `realpath` Set to true to call `fs.realpath` on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink) * `absolute` Set to true to always receive absolute paths for matched files. Unlike `realpath`, this also affects the values returned in the `match` event. ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.3, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. Note that symlinked directories are not crawled as part of a `**`, though their contents may match against subsequent portions of the pattern. This prevents infinite loops and duplicates and the like. If an escaped pattern has no matches, and the `nonull` flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, `glob.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. ### Comments and Negation Previously, this module let you mark a pattern as a "comment" if it started with a `#` character, or a "negated" pattern if it started with a `!` character. These options were deprecated in version 5, and removed in version 6. To specify things that should not match, use the `ignore` option. ## Windows **Please only use forward-slashes in glob expressions.** Though windows uses either `/` or `\` as its path separator, only `/` characters are used by this glob implementation. You must use forward-slashes **only** in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators. Results from absolute patterns such as `/foo/*` are mounted onto the root setting using `path.join`. On windows, this will by default result in `/foo/*` matching `C:\foo\bar.txt`. ## Race Conditions Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such. 
As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result. As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls. Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem. ## Glob Logo Glob's logo was created by [Tanya Brassie](http://tanyabrassie.com/). Logo files can be found [here](https://github.com/isaacs/node-glob/tree/master/logo). The logo is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). ## Contributing Any change to behavior (including bugfixes) must come with a test. Patches that fail tests or reduce performance will be rejected. ``` # to run tests npm test # to re-generate test fixtures npm run test-regen # to benchmark against bash/zsh npm run bench # to profile javascript npm run prof ``` ![](oh-my-glob.gif) # Note: pending imminent deprecation **This module will be deprecated once npm v7 is released. Please do not rely on it more than absolutely necessary.** ---- [`libcipm`](https://github.com/npm/libcipm) installs npm projects in a way that's optimized for continuous integration/deployment/etc scenarios. It gives up the ability to build its own trees or install packages individually, as well as other user-oriented features, in exchange for speed, and being more strict about project state. For documentation about the associated command-line tool, see [`cipm`](https://npm.im/cipm). ## Install `$ npm install libcipm` ## Table of Contents * [Features](#features) * [API](#api) ### Features * npm-compatible project installation * lifecycle script support * blazing fast * npm-compatible caching * errors if `package.json` and `package-lock.json` are out of sync, instead of fixing it like npm does. Essentially provides a `--frozen` install. aproba ====== A ridiculously light-weight function argument validator ``` var validate = require("aproba") function myfunc(a, b, c) { // `a` must be a string, `b` a number, `c` a function validate('SNF', arguments) // [a,b,c] is also valid } myfunc('test', 23, function () {}) // ok myfunc(123, 23, function () {}) // type error myfunc('test', 23) // missing arg error myfunc('test', 23, function () {}, true) // too many args error ``` Valid types are: | type | description | :--: | :---------- | * | matches any type | A | `Array.isArray` OR an `arguments` object | S | typeof == string | N | typeof == number | F | typeof == function | O | typeof == object and not type A and not type E | B | typeof == boolean | E | `instanceof Error` OR `null` **(special: see below)** | Z | == `null` Validation failures throw one of three exception types, distinguished by a `code` property of `EMISSINGARG`, `EINVALIDTYPE` or `ETOOMANYARGS`. If you pass in an invalid type then it will throw with a code of `EUNKNOWNTYPE`. If an **error** argument is found and is not null then the remaining arguments are optional. That is, if you say `ESO` then that's like using a non-magical `E` in: `E|ESO|ZSO`. ### But I have optional arguments?! You can provide more than one signature by separating them with pipes `|`. 
If any signature matches the arguments then they'll be considered valid. So for example, say you wanted to write a signature for `fs.createWriteStream`. The docs for it describe it thusly: ``` fs.createWriteStream(path[, options]) ``` This would be a signature of `SO|S`. That is, a string and and object, or just a string. Now, if you read the full `fs` docs, you'll see that actually path can ALSO be a buffer. And options can be a string, that is: ``` path <String> | <Buffer> options <String> | <Object> ``` To reproduce this you have to fully enumerate all of the possible combinations and that implies a signature of `SO|SS|OO|OS|S|O`. The awkwardness is a feature: It reminds you of the complexity you're adding to your API when you do this sort of thing. ### Browser support This has no dependencies and should work in browsers, though you'll have noisier stack traces. ### Why this exists I wanted a very simple argument validator. It needed to do two things: 1. Be more concise and easier to use than assertions 2. Not encourage an infinite bikeshed of DSLs This is why types are specified by a single character and there's no such thing as an optional argument. This is not intended to validate user data. This is specifically about asserting the interface of your functions. If you need greater validation, I encourage you to write them by hand or look elsewhere. ## getpass Get a password from the terminal. Sounds simple? Sounds like the `readline` module should be able to do it? NOPE. ## Install and use it ```bash npm install --save getpass ``` ```javascript const mod_getpass = require('getpass'); ``` ## API ### `mod_getpass.getPass([options, ]callback)` Gets a password from the terminal. If available, this uses `/dev/tty` to avoid interfering with any data being piped in or out of stdio. This function prints a prompt (by default `Password:`) and then accepts input without echoing. Parameters: * `options`, an Object, with properties: * `prompt`, an optional String * `callback`, a `Func(error, password)`, with arguments: * `error`, either `null` (no error) or an `Error` instance * `password`, a String # node-errno > Better [libuv](https://github.com/libuv/libuv)/[Node.js](https://nodejs.org)/[io.js](https://iojs.org) error handling & reporting. Available in npm as *errno*. [![npm](https://img.shields.io/npm/v/errno.svg)](https://www.npmjs.com/package/errno) [![Build Status](https://secure.travis-ci.org/rvagg/node-errno.png)](http://travis-ci.org/rvagg/node-errno) [![npm](https://img.shields.io/npm/dm/errno.svg)](https://www.npmjs.com/package/errno) * [errno exposed](#errnoexposed) * [Custom errors](#customerrors) <a name="errnoexposed"></a> ## errno exposed Ever find yourself needing more details about Node.js errors? Me too, so *node-errno* contains the errno mappings direct from libuv so you can use them in your code. 
**By errno:** ```js require('errno').errno[3] // → { // "errno": 3, // "code": "EACCES", // "description": "permission denied" // } ``` **By code:** ```js require('errno').code.ENOTEMPTY // → { // "errno": 53, // "code": "ENOTEMPTY", // "description": "directory not empty" // } ``` **Make your errors more descriptive:** ```js var errno = require('errno') function errmsg(err) { var str = 'Error: ' // if it's a libuv error then get the description from errno if (errno.errno[err.errno]) str += errno.errno[err.errno].description else str += err.message // if it's a `fs` error then it'll have a 'path' property if (err.path) str += ' [' + err.path + ']' return str } var fs = require('fs') fs.readFile('thisisnotarealfile.txt', function (err, data) { if (err) console.log(errmsg(err)) }) ``` **Use as a command line tool:** ``` ~ $ errno 53 { "errno": 53, "code": "ENOTEMPTY", "description": "directory not empty" } ~ $ errno EROFS { "errno": 56, "code": "EROFS", "description": "read-only file system" } ~ $ errno foo No such errno/code: "foo" ``` Supply no arguments for the full list. Error codes are processed case-insensitive. You will need to install with `npm install errno -g` if you want the `errno` command to be available without supplying a full path to the node_modules installation. <a name="customerrors"></a> ## Custom errors Use `errno.custom.createError()` to create custom `Error` objects to throw around in your Node.js library. Create error hierarchies so `instanceof` becomes a useful tool in tracking errors. Call-stack is correctly captured at the time you create an instance of the error object, plus a `cause` property will make available the original error object if you pass one in to the constructor. ```js var create = require('errno').custom.createError var MyError = create('MyError') // inherits from Error var SpecificError = create('SpecificError', MyError) // inherits from MyError var OtherError = create('OtherError', MyError) // use them! if (condition) throw new SpecificError('Eeek! Something bad happened') if (err) return callback(new OtherError(err)) ``` Also available is a `errno.custom.FilesystemError` with in-built access to errno properties: ```js fs.readFile('foo', function (err, data) { if (err) return callback(new errno.custom.FilesystemError(err)) // do something else }) ``` The resulting error object passed through the callback will have the following properties: `code`, `errno`, `path` and `message` will contain a descriptive human-readable message. ## Contributors * [bahamas10](https://github.com/bahamas10) (Dave Eddy) - Added CLI * [ralphtheninja](https://github.com/ralphtheninja) (Lars-Magnus Skog) ## Copyright & Licence *Copyright (c) 2012-2015 [Rod Vagg](https://github.com/rvagg) ([@rvagg](https://twitter.com/rvagg))* Made available under the MIT licence: Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # string_decoder ***Node-core v8.9.4 string_decoder for userland*** [![NPM](https://nodei.co/npm/string_decoder.png?downloads=true&downloadRank=true)](https://nodei.co/npm/string_decoder/) [![NPM](https://nodei.co/npm-dl/string_decoder.png?&months=6&height=3)](https://nodei.co/npm/string_decoder/) ```bash npm install --save string_decoder ``` ***Node-core string_decoder for userland*** This package is a mirror of the string_decoder implementation in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.9.4/docs/api/). As of version 1.0.0 **string_decoder** uses semantic versioning. ## Previous versions Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10. ## Update The *build/* directory contains a build script that will scrape the source from the [nodejs/node](https://github.com/nodejs/node) repo given a specific Node version. ## Streams Working Group `string_decoder` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. See [readable-stream](https://github.com/nodejs/readable-stream) for more details. # safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. 
You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. 
```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. 
The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. 
It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequence of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic.

Here's an example of a vulnerable service that takes a JSON payload and converts it to hex:

```js
// Take a JSON payload {str: "some string"} and convert it to hex
var server = http.createServer(function (req, res) {
  var data = ''
  req.setEncoding('utf8')
  req.on('data', function (chunk) {
    data += chunk
  })
  req.on('end', function () {
    var body = JSON.parse(data)
    res.end(new Buffer(body.str).toString('hex'))
  })
})

server.listen(8080)
```

In this example, an http client just has to send:

```json
{
  "str": 1000
}
```

and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers.

### Which real-world packages were vulnerable?

#### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht)

[Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht).

The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.

Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.

#### [`ws`](https://www.npmjs.com/package/ws)

That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js.

If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer.

These were the vulnerable methods:

```js
socket.send(number)
socket.ping(number)
socket.pong(number)
```

Here's a vulnerable socket server with some echo functionality:

```js
server.on('connection', function (socket) {
  socket.on('message', function (message) {
    message = JSON.parse(message)
    if (message.type === 'echo') {
      socket.send(message.data) // send back the user's message
    }
  })
})
```

`socket.send(number)` called on the server will disclose server memory.

Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67).

### What's the solution?

It's important that node.js offers a fast way to get memory, otherwise performance-critical applications would needlessly get a lot slower.

But we need a better way to *signal our intent* as programmers.
**When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. 
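As a small illustration (not from the original README, and only a sketch), here is one way the vulnerable `toHex` example from above could be hardened by combining `safe-buffer` with an explicit type check; the exact validation shown is an assumption and should be adapted to your own input handling.

```js
// Sketch only: hardening the earlier toHex example with safe-buffer's explicit API.
var Buffer = require('safe-buffer').Buffer

function toHex (str) {
  // illustrative guard; pick whatever validation fits your application
  if (typeof str !== 'string') {
    throw new TypeError('toHex expects a string')
  }
  // Buffer.from(string) never hands back uninitialized memory
  return Buffer.from(str, 'utf8').toString('hex')
}
```

The key point is that `Buffer.from()` refuses to turn a bare `Number` into uninitialized memory, so an accidental `Number` argument fails loudly instead of leaking process memory.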
## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) [![Travis CI](https://travis-ci.org/digitaldesignlabs/es6-promisify.svg)](https://travis-ci.org/digitaldesignlabs/es6-promisify) # es6-promisify Converts callback-based functions to Promise-based functions. ## Install Install with [npm](https://npmjs.org/package/es6-promisify) ```bash npm install --save es6-promisify ``` ## Example ```js "use strict"; // Declare variables const promisify = require("es6-promisify"); const fs = require("fs"); // Convert the stat function const stat = promisify(fs.stat); // Now usable as a promise! stat("example.txt").then(function (stats) { console.log("Got stats", stats); }).catch(function (err) { console.error("Yikes!", err); }); ``` ## Promisify methods ```js "use strict"; // Declare variables const promisify = require("es6-promisify"); const redis = require("redis").createClient(6379, "localhost"); // Create a promise-based version of send_command const client = promisify(redis.send_command, redis); // Send commands to redis and get a promise back client("ping").then(function (pong) { console.log("Got", pong); }).catch(function (err) { console.error("Unexpected error", err); }).then(function () { redis.quit(); }); ``` ## Handle callback multiple arguments ```js "use strict"; // Declare functions function test(cb) { return cb(undefined, 1, 2, 3); } // Declare variables const promisify = require("es6-promisify"); // Create promise-based version of test const single = promisify(test); const multi = promisify(test, {multiArgs: true}); // Discards additional arguments single().then(function (result) { console.log(result); // 1 }); // Returns all arguments as an array multi().then(function (result) { console.log(result); // [1, 2, 3] }); ``` ### Tests Test with nodeunit ```bash $ npm test ``` Published under the [MIT License](http://opensource.org/licenses/MIT). # yallist Yet Another Linked List There are many doubly-linked list implementations like it, but this one is mine. For when an array would be too big, and a Map can't be iterated in reverse order. 
[![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist) ## basic usage ```javascript var yallist = require('yallist') var myList = yallist.create([1, 2, 3]) myList.push('foo') myList.unshift('bar') // of course pop() and shift() are there, too console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo'] myList.forEach(function (k) { // walk the list head to tail }) myList.forEachReverse(function (k, index, list) { // walk the list tail to head }) var myDoubledList = myList.map(function (k) { return k + k }) // now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo'] // mapReverse is also a thing var myDoubledListReverse = myList.mapReverse(function (k) { return k + k }) // ['foofoo', 6, 4, 2, 'barbar'] var reduced = myList.reduce(function (set, entry) { set += entry return set }, 'start') console.log(reduced) // 'startfoo123bar' ``` ## api The whole API is considered "public". Functions with the same name as an Array method work more or less the same way. There's reverse versions of most things because that's the point. ### Yallist Default export, the class that holds and manages a list. Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list. The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots. ### Yallist.create(..) Alias for Yallist function. Some people like factories. #### yallist.head The first node in the list #### yallist.tail The last node in the list #### yallist.length The number of nodes in the list. (Change this at your peril. It is not magic like Array length.) #### yallist.toArray() Convert the list to an array. #### yallist.forEach(fn, [thisp]) Call a function on each item in the list. #### yallist.forEachReverse(fn, [thisp]) Call a function on each item in the list, in reverse order. #### yallist.get(n) Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array. #### yallist.getReverse(n) Get the data at position `n`, counting from the tail. #### yallist.map(fn, thisp) Create a new Yallist with the result of calling the function on each item. #### yallist.mapReverse(fn, thisp) Same as `map`, but in reverse. #### yallist.pop() Get the data from the list tail, and remove the tail from the list. #### yallist.push(item, ...) Insert one or more items to the tail of the list. #### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. #### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. 
#### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) # is-ci Returns `true` if the current environment is a Continuous Integration server. Please [open an issue](https://github.com/watson/is-ci/issues) if your CI server isn't properly detected :) [![npm](https://img.shields.io/npm/v/is-ci.svg)](https://www.npmjs.com/package/is-ci) [![Build status](https://travis-ci.org/watson/is-ci.svg?branch=master)](https://travis-ci.org/watson/is-ci) [![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg?style=flat)](https://github.com/feross/standard) ## Installation ```bash npm install is-ci --save ``` ## Programmatic Usage ```js const isCI = require('is-ci') if (isCI) { console.log('The code is running on a CI server') } ``` ## CLI Usage For CLI usage you need to have the `is-ci` executable in your `PATH`. There's a few ways to do that: - Either install the module globally using `npm install is-ci -g` - Or add the module as a dependency to your app in which case it can be used inside your package.json scripts as is - Or provide the full path to the executable, e.g. `./node_modules/.bin/is-ci` ```bash is-ci && echo "This is a CI server" ``` ## Supported CI tools Refer to [ci-info](https://github.com/watson/ci-info#supported-ci-tools) docs for all supported CI's ## License [MIT](LICENSE) A light, featureful and explicit option parsing library for node.js. [Why another one? See below](#why). tl;dr: The others I've tried are one of too loosey goosey (not explicit), too big/too many deps, or ill specified. YMMV. Follow <a href="https://twitter.com/intent/user?screen_name=trentmick" target="_blank">@trentmick</a> for updates to node-dashdash. # Install npm install dashdash # Usage ```javascript var dashdash = require('dashdash'); // Specify the options. Minimally `name` (or `names`) and `type` // must be given for each. var options = [ { // `names` or a single `name`. First element is the `opts.KEY`. names: ['help', 'h'], // See "Option specs" below for types. type: 'bool', help: 'Print this help and exit.' } ]; // Shortcut form. As called it infers `process.argv`. See below for // the longer form to use methods like `.help()` on the Parser object. var opts = dashdash.parse({options: options}); console.log("opts:", opts); console.log("args:", opts._args); ``` # Longer Example A more realistic [starter script "foo.js"](./examples/foo.js) is as follows. This also shows using `parser.help()` for formatted option help. ```javascript var dashdash = require('./lib/dashdash'); var options = [ { name: 'version', type: 'bool', help: 'Print tool version and exit.' 
}, { names: ['help', 'h'], type: 'bool', help: 'Print this help and exit.' }, { names: ['verbose', 'v'], type: 'arrayOfBool', help: 'Verbose output. Use multiple times for more verbose.' }, { names: ['file', 'f'], type: 'string', help: 'File to process', helpArg: 'FILE' } ]; var parser = dashdash.createParser({options: options}); try { var opts = parser.parse(process.argv); } catch (e) { console.error('foo: error: %s', e.message); process.exit(1); } console.log("# opts:", opts); console.log("# args:", opts._args); // Use `parser.help()` for formatted options help. if (opts.help) { var help = parser.help({includeEnv: true}).trimRight(); console.log('usage: node foo.js [OPTIONS]\n' + 'options:\n' + help); process.exit(0); } // ... ``` Some example output from this script (foo.js): ``` $ node foo.js -h # opts: { help: true, _order: [ { name: 'help', value: true, from: 'argv' } ], _args: [] } # args: [] usage: node foo.js [OPTIONS] options: --version Print tool version and exit. -h, --help Print this help and exit. -v, --verbose Verbose output. Use multiple times for more verbose. -f FILE, --file=FILE File to process $ node foo.js -v # opts: { verbose: [ true ], _order: [ { name: 'verbose', value: true, from: 'argv' } ], _args: [] } # args: [] $ node foo.js --version arg1 # opts: { version: true, _order: [ { name: 'version', value: true, from: 'argv' } ], _args: [ 'arg1' ] } # args: [ 'arg1' ] $ node foo.js -f bar.txt # opts: { file: 'bar.txt', _order: [ { name: 'file', value: 'bar.txt', from: 'argv' } ], _args: [] } # args: [] $ node foo.js -vvv --file=blah # opts: { verbose: [ true, true, true ], file: 'blah', _order: [ { name: 'verbose', value: true, from: 'argv' }, { name: 'verbose', value: true, from: 'argv' }, { name: 'verbose', value: true, from: 'argv' }, { name: 'file', value: 'blah', from: 'argv' } ], _args: [] } # args: [] ``` See the ["examples"](examples/) dir for a number of starter examples using some of dashdash's features. # Environment variable integration If you want to allow environment variables to specify options to your tool, dashdash makes this easy. We can change the 'verbose' option in the example above to include an 'env' field: ```javascript { names: ['verbose', 'v'], type: 'arrayOfBool', env: 'FOO_VERBOSE', // <--- add this line help: 'Verbose output. Use multiple times for more verbose.' }, ``` then the **"FOO_VERBOSE" environment variable** can be used to set this option: ```shell $ FOO_VERBOSE=1 node foo.js # opts: { verbose: [ true ], _order: [ { name: 'verbose', value: true, from: 'env' } ], _args: [] } # args: [] ``` Boolean options will interpret the empty string as unset, '0' as false and anything else as true. ```shell $ FOO_VERBOSE= node examples/foo.js # not set # opts: { _order: [], _args: [] } # args: [] $ FOO_VERBOSE=0 node examples/foo.js # '0' is false # opts: { verbose: [ false ], _order: [ { key: 'verbose', value: false, from: 'env' } ], _args: [] } # args: [] $ FOO_VERBOSE=1 node examples/foo.js # true # opts: { verbose: [ true ], _order: [ { key: 'verbose', value: true, from: 'env' } ], _args: [] } # args: [] $ FOO_VERBOSE=boogabooga node examples/foo.js # true # opts: { verbose: [ true ], _order: [ { key: 'verbose', value: true, from: 'env' } ], _args: [] } # args: [] ``` Non-booleans can be used as well. 
Strings: ```shell $ FOO_FILE=data.txt node examples/foo.js # opts: { file: 'data.txt', _order: [ { key: 'file', value: 'data.txt', from: 'env' } ], _args: [] } # args: [] ``` Numbers: ```shell $ FOO_TIMEOUT=5000 node examples/foo.js # opts: { timeout: 5000, _order: [ { key: 'timeout', value: 5000, from: 'env' } ], _args: [] } # args: [] $ FOO_TIMEOUT=blarg node examples/foo.js foo: error: arg for "FOO_TIMEOUT" is not a positive integer: "blarg" ``` With the `includeEnv: true` config to `parser.help()` the environment variable can also be included in **help output**: usage: node foo.js [OPTIONS] options: --version Print tool version and exit. -h, --help Print this help and exit. -v, --verbose Verbose output. Use multiple times for more verbose. Environment: FOO_VERBOSE=1 -f FILE, --file=FILE File to process # Bash completion Dashdash provides a simple way to create a Bash completion file that you can place in your "bash_completion.d" directory -- sometimes that is "/usr/local/etc/bash_completion.d/"). Features: - Support for short and long opts - Support for knowing which options take arguments - Support for subcommands (e.g. 'git log <TAB>' to show just options for the log subcommand). See [node-cmdln](https://github.com/trentm/node-cmdln#bash-completion) for how to integrate that. - Does the right thing with "--" to stop options. - Custom optarg and arg types for custom completions. Dashdash will return bash completion file content given a parser instance: var parser = dashdash.createParser({options: options}); console.log( parser.bashCompletion({name: 'mycli'}) ); or directly from a `options` array of options specs: var code = dashdash.bashCompletionFromOptions({ name: 'mycli', options: OPTIONS }); Write that content to "/usr/local/etc/bash_completion.d/mycli" and you will have Bash completions for `mycli`. Alternatively you can write it to any file (e.g. "~/.bashrc") and source it. You could add a `--completion` hidden option to your tool that emits the completion content and document for your users to call that to install Bash completions. See [examples/ddcompletion.js](examples/ddcompletion.js) for a complete example, including how one can define bash functions for completion of custom option types. Also see [node-cmdln](https://github.com/trentm/node-cmdln) for how it uses this for Bash completion for full multi-subcommand tools. - TODO: document specExtra - TODO: document includeHidden - TODO: document custom types, `function complete\_FOO` guide, completionType - TODO: document argtypes # Parser config Parser construction (i.e. `dashdash.createParser(CONFIG)`) takes the following fields: - `options` (Array of option specs). Required. See the [Option specs](#option-specs) section below. - `interspersed` (Boolean). Optional. Default is true. If true this allows interspersed arguments and options. I.e.: node ./tool.js -v arg1 arg2 -h # '-h' is after interspersed args Set it to false to have '-h' **not** get parsed as an option in the above example. - `allowUnknown` (Boolean). Optional. Default is false. If false, this causes unknown arguments to throw an error. I.e.: node ./tool.js -v arg1 --afe8asefksjefhas Set it to true to treat the unknown option as a positional argument. **Caveat**: When a shortopt group, such as `-xaz` contains a mix of known and unknown options, the *entire* group is passed through unmolested as a positional argument. Consider if you have a known short option `-a`, and parse the following command line: node ./tool.js -xaz where `-x` and `-z` are unknown. 
There are multiple ways to interpret this: 1. `-x` takes a value: `{x: 'az'}` 2. `-x` and `-z` are both booleans: `{x:true,a:true,z:true}` Since dashdash does not know what `-x` and `-z` are, it can't know if you'd prefer to receive `{a:true,_args:['-x','-z']}` or `{x:'az'}`, or `{_args:['-xaz']}`. Leaving the positional arg unprocessed is the easiest mistake for the user to recover from. # Option specs Example using all fields (required fields are noted): ```javascript { names: ['file', 'f'], // Required (one of `names` or `name`). type: 'string', // Required. completionType: 'filename', env: 'MYTOOL_FILE', help: 'Config file to load before running "mytool"', helpArg: 'PATH', helpWrap: false, default: path.resolve(process.env.HOME, '.mytoolrc') } ``` Each option spec in the `options` array must/can have the following fields: - `name` (String) or `names` (Array). Required. These give the option name and aliases. The first name (if more than one given) is the key for the parsed `opts` object. - `type` (String). Required. One of: - bool - string - number - integer - positiveInteger - date (epoch seconds, e.g. 1396031701, or ISO 8601 format `YYYY-MM-DD[THH:MM:SS[.sss][Z]]`, e.g. "2014-03-28T18:35:01.489Z") - arrayOfBool - arrayOfString - arrayOfNumber - arrayOfInteger - arrayOfPositiveInteger - arrayOfDate FWIW, these names attempt to match with asserts on [assert-plus](https://github.com/mcavage/node-assert-plus). You can add your own custom option types with `dashdash.addOptionType`. See below. - `completionType` (String). Optional. This is used for [Bash completion](#bash-completion) for an option argument. If not specified, then the value of `type` is used. Any string may be specified, but only the following values have meaning: - `none`: Provide no completions. - `file`: Bash's default completion (i.e. `complete -o default`), which includes filenames. - *Any string FOO for which a `function complete_FOO` Bash function is defined.* This is for custom completions for a given tool. Typically these custom functions are provided in the `specExtra` argument to `dashdash.bashCompletionFromOptions()`. See ["examples/ddcompletion.js"](examples/ddcompletion.js) for an example. - `env` (String or Array of String). Optional. An environment variable name (or names) that can be used as a fallback for this option. For example, given a "foo.js" like this: var options = [{names: ['dry-run', 'n'], env: 'FOO_DRY_RUN'}]; var opts = dashdash.parse({options: options}); Both `node foo.js --dry-run` and `FOO_DRY_RUN=1 node foo.js` would result in `opts.dry_run = true`. An environment variable is only used as a fallback, i.e. it is ignored if the associated option is given in `argv`. - `help` (String). Optional. Used for `parser.help()` output. - `helpArg` (String). Optional. Used in help output as the placeholder for the option argument, e.g. the "PATH" in: ... -f PATH, --file=PATH File to process ... - `helpWrap` (Boolean). Optional, default true. Set this to `false` to have that option's `help` *not* be text wrapped in `<parser>.help()` output. - `default`. Optional. A default value used for this option, if the option isn't specified in argv. - `hidden` (Boolean). Optional, default false. If true, help output will not include this option. See also the `includeHidden` option to `bashCompletionFromOptions()` for [Bash completion](#bash-completion). # Option group headings You can add headings between option specs in the `options` array. 
To do so, simply add an object with only a `group` property -- the string to print as the heading for the subsequent options in the array. For example:

```javascript
var options = [
    { group: 'Armament Options' },
    { names: [ 'weapon', 'w' ], type: 'string' },
    { group: 'General Options' },
    { names: [ 'help', 'h' ], type: 'bool' }
];
...
```

Note: You can use an empty string, `{group: ''}`, to get a blank line in help output between groups of options.

# Help config

The `parser.help(...)` function is configurable as follows. (In the original README an annotated diagram marks where `indent`, `headingIndent`, `helpCol`, and `maxCol` apply to this sample output.)

    Options:
      Armament Options:
        -w WEAPON, --weapon=WEAPON
                            Weapon with which to crush. One of:
                            sword, spear, maul
      General Options:
        -h, --help          Print this help and exit.

- `indent` (Number or String). Default 4. Set to a number (for that many spaces) or a string for the literal indent.
- `headingIndent` (Number or String). Default half length of `indent`. Set to a number (for that many spaces) or a string for the literal indent. This indent applies to group heading lines, between normal option lines.
- `nameSort` (String). Default is 'length'. By default the names are sorted to put the short opts first (i.e. '-h, --help' preferred to '--help, -h'). Set to 'none' to not do this sorting.
- `maxCol` (Number). Default 80. Note that reflow is just done on whitespace so a long token in the option help can overflow maxCol.
- `helpCol` (Number). If not set a reasonable value will be determined between `minHelpCol` and `maxHelpCol`.
- `minHelpCol` (Number). Default 20.
- `maxHelpCol` (Number). Default 40.
- `helpWrap` (Boolean). Default true. Set to `false` to have option `help` strings *not* be textwrapped to the helpCol..maxCol range.
- `includeEnv` (Boolean). Default false. If the option has associated environment variables (via the `env` option spec attribute), then append mention of those envvars to the help string.
- `includeDefault` (Boolean). Default false. If the option has a default value (via the `default` option spec attribute, or a default on the option's type), then a "Default: VALUE" string will be appended to the help string.

# Custom option types

Dashdash includes a good starter set of option types that it will parse for you. However, you can add your own via:

    var dashdash = require('dashdash');
    dashdash.addOptionType({
        name: '...',
        takesArg: true,
        helpArg: '...',
        parseArg: function (option, optstr, arg) {
            ...
        },
        array: false,  // optional
        arrayFlatten: false,  // optional
        default: ...,  // optional
        completionType: ...  // optional
    });

For example, a simple option type that accepts 'yes', 'y', 'no' or 'n' as a boolean argument would look like:

    var dashdash = require('dashdash');
    var format = require('util').format;  // used to build the error message below

    function parseYesNo(option, optstr, arg) {
        var argLower = arg.toLowerCase()
        if (~['yes', 'y'].indexOf(argLower)) {
            return true;
        } else if (~['no', 'n'].indexOf(argLower)) {
            return false;
        } else {
            throw new Error(format(
                'arg for "%s" is not "yes" or "no": "%s"', optstr, arg));
        }
    }

    dashdash.addOptionType({
        name: 'yesno',
        takesArg: true,
        helpArg: '<yes|no>',
        parseArg: parseYesNo
    });

    var options = [
        {names: ['answer', 'a'], type: 'yesno'}
    ];
    var opts = dashdash.parse({options: options});

See "examples/custom-option-\*.js" for other examples. See the `addOptionType` block comment in "lib/dashdash.js" for more details. Please let me know [with an issue](https://github.com/trentm/node-dashdash/issues/new) if you write a generally useful one.

# Why

Why another node.js option parsing lib?
- `nopt` really is just for "tools like npm". Implicit opts (e.g. '--no-foo' works for every '--foo'). Can't disable abbreviated opts. Can't do multiple usages of same opt, e.g. '-vvv' (I think). Can't do grouped short opts.
- `optimist` has surprise interpretation of options (at least to me). Implicit opts mean ambiguities and poor error handling for fat-fingering. `process.exit` calls make it hard to use as a library.
- `optparse` Incomplete docs. Is this an attempted clone of Python's `optparse`? Not clear. Some divergence. `parser.on("name", ...)` API is weird.
- `argparse` Dep on underscore. No thanks just for option processing. `find lib | wc -l` -> `26`. Overkill. Argparse is a bit different anyway. Not sure I want that.
- `posix-getopt` No type validation. Though that isn't a killer. AFAIK can't have a long opt without a short alias. I.e. no `getopt_long` semantics. Also, no whizbang features like generated help output.
- ["commander.js"](https://github.com/visionmedia/commander.js): I wrote [a critique](http://trentm.com/2014/01/a-critique-of-commander-for-nodejs.html) a while back. It seems fine, but last I checked had [an outstanding bug](https://github.com/visionmedia/commander.js/pull/121) that would prevent me from using it.

# License

MIT. See LICENSE.txt.

# lodash._baseindexof v3.1.0

The [modern build](https://github.com/lodash/lodash/wiki/Build-Differences) of [lodash’s](https://lodash.com/) internal `baseIndexOf` exported as a [Node.js](http://nodejs.org/)/[io.js](https://iojs.org/) module.

## Installation

Using npm:

```bash
$ {sudo -H} npm i -g npm
$ npm i --save lodash._baseindexof
```

In Node.js/io.js:

```js
var baseIndexOf = require('lodash._baseindexof');
```

See the [package source](https://github.com/lodash/lodash/blob/3.1.0-npm-packages/lodash._baseindexof) for more details.

# once

Only call a function once.

## usage

```javascript
var once = require('once')

function load (file, cb) {
  cb = once(cb)
  loader.load('file')
  loader.once('load', cb)
  loader.once('error', cb)
}
```

Or add to the Function.prototype in a responsible way:

```javascript
// only has to be done once
require('once').proto()

function load (file, cb) {
  cb = cb.once()
  loader.load('file')
  loader.once('load', cb)
  loader.once('error', cb)
}
```

Ironically, the prototype feature makes this module twice as complicated as necessary.

To check whether your function has been called, use `fn.called`. Once the function is called for the first time, the return value of the original function is saved in `fn.value` and subsequent calls will continue to return this value.

```javascript
var once = require('once')

function load (cb) {
  cb = once(cb)
  var stream = createStream()
  stream.once('data', cb)
  stream.once('end', function () {
    if (!cb.called) cb(new Error('not found'))
  })
}
```

## `once.strict(func)`

Throw an error if the function is called twice.

Some functions are expected to be called only once. Using `once` for them would potentially hide logical errors.
In the example below, the `greet` function has to call the callback only once:

```javascript
function greet (name, cb) {
  // return is missing from the if statement
  // when no name is passed, the callback is called twice
  if (!name) cb('Hello anonymous')
  cb('Hello ' + name)
}

function log (msg) {
  console.log(msg)
}

// this will print 'Hello anonymous' but the logical error will be missed
greet(null, once(log))

// once.strict will print 'Hello anonymous' and throw an error when the callback is called the second time
greet(null, once.strict(log))
```

# debuglog - backport of util.debuglog() from node v0.11

To make it possible to use the `util.debuglog()` function that will ship with node v0.12 before it is released, this is a copy extracted from the node source.

## require('debuglog')

Return `util.debuglog`, if it exists, otherwise it will return an internal copy of the implementation from node v0.11.

## debuglog(section)

* `section` {String} The section of the program to be debugged
* Returns: {Function} The logging function

This is used to create a function which conditionally writes to stderr based on the existence of a `NODE_DEBUG` environment variable. If the `section` name appears in that environment variable, then the returned function will be similar to `console.error()`. If not, then the returned function is a no-op.

For example:

```javascript
var debuglog = util.debuglog('foo');

var bar = 123;
debuglog('hello from foo [%d]', bar);
```

If this program is run with `NODE_DEBUG=foo` in the environment, then it will output something like:

    FOO 3245: hello from foo [123]

where `3245` is the process id. If it is not run with that environment variable set, then it will not print anything.

You may separate multiple `NODE_DEBUG` environment variables with a comma. For example, `NODE_DEBUG=fs,net,tls`.

# stringify-package

[![npm version](https://img.shields.io/npm/v/stringify-package.svg)](https://npm.im/stringify-package) [![license](https://img.shields.io/npm/l/stringify-package.svg)](https://npm.im/stringify-package) [![Travis](https://img.shields.io/travis/npm/stringify-package/latest.svg)](https://travis-ci.org/npm/stringify-package) [![AppVeyor](https://img.shields.io/appveyor/ci/npm/stringify-package/latest.svg)](https://ci.appveyor.com/project/npm/stringify-package) [![Coverage Status](https://coveralls.io/repos/github/npm/stringify-package/badge.svg?branch=latest)](https://coveralls.io/github/npm/stringify-package?branch=latest)

[`stringify-package`](https://github.com/npm/stringify-package) is a standalone library for writing out package data as a JSON file. It is extracted from npm.

## Install

`$ npm install stringify-package`

## Table of Contents

* [Example](#example)
* [Features](#features)
* [Contributing](#contributing)
* [API](#api)
* [`stringifyPackage`](#stringifypackage)

### Example

```javascript
const fs = require('fs')
const stringifyPackage = require('stringify-package')
const pkg = { /* ... */ }

fs.writeFile('package.json', stringifyPackage(pkg), 'utf8', (err) => {
  // ...
})
```

### Features

* Ensures consistent file indentation

  To match existing file indentation, [`detect-indent`](https://npm.im/detect-indent) is recommended.

* Ensures consistent newlines

  To match existing newline characters, [`detect-newline`](https://npm.im/detect-newline) is recommended.

### Contributing

The npm team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute!
The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear. ### API ### <a name="stringifypackage"></a> `> stringifyPackage(data, indent, newline) -> String` #### Arguments * `data` - the package data as an object to be stringified * `indent` - the number of spaces to use for each level of indentation (defaults to 2) * `newline` - the character(s) to be used as a line terminator agent-base ========== ### Turn a function into an [`http.Agent`][http.Agent] instance [![Build Status](https://travis-ci.org/TooTallNate/node-agent-base.svg?branch=master)](https://travis-ci.org/TooTallNate/node-agent-base) This module provides an `http.Agent` generator. That is, you pass it an async callback function, and it returns a new `http.Agent` instance that will invoke the given callback function when sending outbound HTTP requests. #### Some subclasses: Here's some more interesting uses of `agent-base`. Send a pull request to list yours! * [`http-proxy-agent`][http-proxy-agent]: An HTTP(s) proxy `http.Agent` implementation for HTTP endpoints * [`https-proxy-agent`][https-proxy-agent]: An HTTP(s) proxy `http.Agent` implementation for HTTPS endpoints * [`pac-proxy-agent`][pac-proxy-agent]: A PAC file proxy `http.Agent` implementation for HTTP and HTTPS * [`socks-proxy-agent`][socks-proxy-agent]: A SOCKS (v4a) proxy `http.Agent` implementation for HTTP and HTTPS Installation ------------ Install with `npm`: ``` bash $ npm install agent-base ``` Example ------- Here's a minimal example that creates a new `net.Socket` connection to the server for every HTTP request (i.e. the equivalent of `agent: false` option): ```js var net = require('net'); var tls = require('tls'); var url = require('url'); var http = require('http'); var agent = require('agent-base'); var endpoint = 'http://nodejs.org/api/'; var parsed = url.parse(endpoint); // This is the important part! parsed.agent = agent(function (req, opts) { var socket; // `secureEndpoint` is true when using the https module if (opts.secureEndpoint) { socket = tls.connect(opts); } else { socket = net.connect(opts); } return socket; }); // Everything else works just like normal... http.get(parsed, function (res) { console.log('"response" event!', res.headers); res.pipe(process.stdout); }); ``` Returning a Promise or using an `async` function is also supported: ```js agent(async function (req, opts) { await sleep(1000); // etc… }); ``` Return another `http.Agent` instance to "pass through" the responsibility for that HTTP request to that agent: ```js agent(function (req, opts) { return opts.secureEndpoint ? https.globalAgent : http.globalAgent; }); ``` API --- ## Agent(Function callback[, Object options]) → [http.Agent][] Creates a base `http.Agent` that will execute the callback function `callback` for every HTTP request that it is used as the `agent` for. The callback function is responsible for creating a `stream.Duplex` instance of some kind that will be used as the underlying socket in the HTTP request. The `options` object accepts the following properties: * `timeout` - Number - Timeout for the `callback()` function in milliseconds. Defaults to Infinity (optional). 
The callback function should have the following signature: ### callback(http.ClientRequest req, Object options, Function cb) → undefined The ClientRequest `req` can be accessed to read request headers and and the path, etc. The `options` object contains the options passed to the `http.request()`/`https.request()` function call, and is formatted to be directly passed to `net.connect()`/`tls.connect()`, or however else you want a Socket to be created. Pass the created socket to the callback function `cb` once created, and the HTTP request will continue to proceed. If the `https` module is used to invoke the HTTP request, then the `secureEndpoint` property on `options` _will be set to `true`_. License ------- (The MIT License) Copyright (c) 2013 Nathan Rajlich &lt;nathan@tootallnate.net&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. [http-proxy-agent]: https://github.com/TooTallNate/node-http-proxy-agent [https-proxy-agent]: https://github.com/TooTallNate/node-https-proxy-agent [pac-proxy-agent]: https://github.com/TooTallNate/node-pac-proxy-agent [socks-proxy-agent]: https://github.com/TooTallNate/node-socks-proxy-agent [http.Agent]: https://nodejs.org/api/http.html#http_class_http_agent #define-properties <sup>[![Version Badge][npm-version-svg]][package-url]</sup> [![Build Status][travis-svg]][travis-url] [![dependency status][deps-svg]][deps-url] [![dev dependency status][dev-deps-svg]][dev-deps-url] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][npm-badge-png]][package-url] [![browser support][testling-svg]][testling-url] Define multiple non-enumerable properties at once. Uses `Object.defineProperty` when available; falls back to standard assignment in older engines. Existing properties are not overridden. Accepts a map of property names to a predicate that, when true, force-overrides. 
## Example ```js var define = require('define-properties'); var assert = require('assert'); var obj = define({ a: 1, b: 2 }, { a: 10, b: 20, c: 30 }); assert(obj.a === 1); assert(obj.b === 2); assert(obj.c === 30); if (define.supportsDescriptors) { assert.deepEqual(Object.keys(obj), ['a', 'b']); assert.deepEqual(Object.getOwnPropertyDescriptor(obj, 'c'), { configurable: true, enumerable: false, value: 30, writable: false }); } ``` Then, with predicates: ```js var define = require('define-properties'); var assert = require('assert'); var obj = define({ a: 1, b: 2, c: 3 }, { a: 10, b: 20, c: 30 }, { a: function () { return false; }, b: function () { return true; } }); assert(obj.a === 1); assert(obj.b === 20); assert(obj.c === 3); if (define.supportsDescriptors) { assert.deepEqual(Object.keys(obj), ['a', 'c']); assert.deepEqual(Object.getOwnPropertyDescriptor(obj, 'b'), { configurable: true, enumerable: false, value: 20, writable: false }); } ``` ## Tests Simply clone the repo, `npm install`, and run `npm test` [package-url]: https://npmjs.org/package/define-properties [npm-version-svg]: http://versionbadg.es/ljharb/define-properties.svg [travis-svg]: https://travis-ci.org/ljharb/define-properties.svg [travis-url]: https://travis-ci.org/ljharb/define-properties [deps-svg]: https://david-dm.org/ljharb/define-properties.svg [deps-url]: https://david-dm.org/ljharb/define-properties [dev-deps-svg]: https://david-dm.org/ljharb/define-properties/dev-status.svg [dev-deps-url]: https://david-dm.org/ljharb/define-properties#info=devDependencies [testling-svg]: https://ci.testling.com/ljharb/define-properties.png [testling-url]: https://ci.testling.com/ljharb/define-properties [npm-badge-png]: https://nodei.co/npm/define-properties.png?downloads=true&stars=true [license-image]: http://img.shields.io/npm/l/define-properties.svg [license-url]: LICENSE [downloads-image]: http://img.shields.io/npm/dm/define-properties.svg [downloads-url]: http://npm-stat.com/charts.html?package=define-properties # Form-Data [![NPM Module](https://img.shields.io/npm/v/form-data.svg)](https://www.npmjs.com/package/form-data) [![Join the chat at https://gitter.im/form-data/form-data](http://form-data.github.io/images/gitterbadge.svg)](https://gitter.im/form-data/form-data) A library to create readable ```"multipart/form-data"``` streams. Can be used to submit forms and file uploads to other web applications. The API of this library is inspired by the [XMLHttpRequest-2 FormData Interface][xhr2-fd]. 
[xhr2-fd]: http://dev.w3.org/2006/webapi/XMLHttpRequest-2/Overview.html#the-formdata-interface [![Linux Build](https://img.shields.io/travis/form-data/form-data/v2.3.2.svg?label=linux:4.x-9.x)](https://travis-ci.org/form-data/form-data) [![MacOS Build](https://img.shields.io/travis/form-data/form-data/v2.3.2.svg?label=macos:4.x-9.x)](https://travis-ci.org/form-data/form-data) [![Windows Build](https://img.shields.io/appveyor/ci/alexindigo/form-data/v2.3.2.svg?label=windows:4.x-9.x)](https://ci.appveyor.com/project/alexindigo/form-data) [![Coverage Status](https://img.shields.io/coveralls/form-data/form-data/v2.3.2.svg?label=code+coverage)](https://coveralls.io/github/form-data/form-data?branch=master) [![Dependency Status](https://img.shields.io/david/form-data/form-data.svg)](https://david-dm.org/form-data/form-data) [![bitHound Overall Score](https://www.bithound.io/github/form-data/form-data/badges/score.svg)](https://www.bithound.io/github/form-data/form-data) ## Install ``` npm install --save form-data ``` ## Usage In this example we are constructing a form with 3 fields that contain a string, a buffer and a file stream. ``` javascript var FormData = require('form-data'); var fs = require('fs'); var form = new FormData(); form.append('my_field', 'my value'); form.append('my_buffer', new Buffer(10)); form.append('my_file', fs.createReadStream('/foo/bar.jpg')); ``` Also you can use http-response stream: ``` javascript var FormData = require('form-data'); var http = require('http'); var form = new FormData(); http.request('http://nodejs.org/images/logo.png', function(response) { form.append('my_field', 'my value'); form.append('my_buffer', new Buffer(10)); form.append('my_logo', response); }); ``` Or @mikeal's [request](https://github.com/request/request) stream: ``` javascript var FormData = require('form-data'); var request = require('request'); var form = new FormData(); form.append('my_field', 'my value'); form.append('my_buffer', new Buffer(10)); form.append('my_logo', request('http://nodejs.org/images/logo.png')); ``` In order to submit this form to a web application, call ```submit(url, [callback])``` method: ``` javascript form.submit('http://example.org/', function(err, res) { // res – response object (http.IncomingMessage) // res.resume(); }); ``` For more advanced request manipulations ```submit()``` method returns ```http.ClientRequest``` object, or you can choose from one of the alternative submission methods. 
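For instance, here is a hedged sketch (not from the original README) of one such manipulation: using the returned `http.ClientRequest` to enforce a client-side timeout on the upload. The URL and the 5-second value are placeholders.

``` javascript
var FormData = require('form-data');

var form = new FormData();
form.append('my_field', 'my value');

// submit() returns an http.ClientRequest, so normal request methods apply
var request = form.submit('http://example.org/upload', function(err, res) {
  if (err) return console.error(err);
  console.log(res.statusCode);
  res.resume();
});

// abort the upload if the server takes too long to respond (5s is arbitrary)
request.setTimeout(5000, function() {
  request.abort();
});
```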
### Custom options You can provide custom options, such as `maxDataSize`: ``` javascript var FormData = require('form-data'); var form = new FormData({ maxDataSize: 20971520 }); form.append('my_field', 'my value'); form.append('my_buffer', /* something big */); ``` List of available options could be found in [combined-stream](https://github.com/felixge/node-combined-stream/blob/master/lib/combined_stream.js#L7-L15) ### Alternative submission methods You can use node's http client interface: ``` javascript var http = require('http'); var request = http.request({ method: 'post', host: 'example.org', path: '/upload', headers: form.getHeaders() }); form.pipe(request); request.on('response', function(res) { console.log(res.statusCode); }); ``` Or if you would prefer the `'Content-Length'` header to be set for you: ``` javascript form.submit('example.org/upload', function(err, res) { console.log(res.statusCode); }); ``` To use custom headers and pre-known length in parts: ``` javascript var CRLF = '\r\n'; var form = new FormData(); var options = { header: CRLF + '--' + form.getBoundary() + CRLF + 'X-Custom-Header: 123' + CRLF + CRLF, knownLength: 1 }; form.append('my_buffer', buffer, options); form.submit('http://example.com/', function(err, res) { if (err) throw err; console.log('Done'); }); ``` Form-Data can recognize and fetch all the required information from common types of streams (```fs.readStream```, ```http.response``` and ```mikeal's request```), for some other types of streams you'd need to provide "file"-related information manually: ``` javascript someModule.stream(function(err, stdout, stderr) { if (err) throw err; var form = new FormData(); form.append('file', stdout, { filename: 'unicycle.jpg', // ... or: filepath: 'photos/toys/unicycle.jpg', contentType: 'image/jpeg', knownLength: 19806 }); form.submit('http://example.com/', function(err, res) { if (err) throw err; console.log('Done'); }); }); ``` The `filepath` property overrides `filename` and may contain a relative path. This is typically used when uploading [multiple files from a directory](https://wicg.github.io/entries-api/#dom-htmlinputelement-webkitdirectory). For edge cases, like POST request to URL with query string or to pass HTTP auth credentials, object can be passed to `form.submit()` as first parameter: ``` javascript form.submit({ host: 'example.com', path: '/probably.php?extra=params', auth: 'username:password' }, function(err, res) { console.log(res.statusCode); }); ``` In case you need to also send custom HTTP headers with the POST request, you can use the `headers` key in first parameter of `form.submit()`: ``` javascript form.submit({ host: 'example.com', path: '/surelynot.php', headers: {'x-test-header': 'test-header-value'} }, function(err, res) { console.log(res.statusCode); }); ``` ### Integration with other libraries #### Request Form submission using [request](https://github.com/request/request): ```javascript var formData = { my_field: 'my_value', my_file: fs.createReadStream(__dirname + '/unicycle.jpg'), }; request.post({url:'http://service.com/upload', formData: formData}, function(err, httpResponse, body) { if (err) { return console.error('upload failed:', err); } console.log('Upload successful! Server responded with:', body); }); ``` For more details see [request readme](https://github.com/request/request#multipartform-data-multipart-form-uploads). 
#### node-fetch You can also submit a form using [node-fetch](https://github.com/bitinn/node-fetch): ```javascript var form = new FormData(); form.append('a', 1); fetch('http://example.com', { method: 'POST', body: form }) .then(function(res) { return res.json(); }).then(function(json) { console.log(json); }); ``` ## Notes - ```getLengthSync()``` method DOESN'T calculate length for streams, use ```knownLength``` options as workaround. - Starting version `2.x` FormData has dropped support for `node@0.10.x`. ## License Form-Data is released under the [MIT](License) license. # lodash._root v3.0.1 The internal [lodash](https://lodash.com/) function `root` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash._root ``` In Node.js: ```js var root = require('lodash._root'); ``` See the [package source](https://github.com/lodash/lodash/blob/3.0.1-npm-packages/lodash._root) for more details. ecc-jsbn ======== ECC package based on [jsbn](https://github.com/andyperlitch/jsbn) from [Tom Wu](http://www-cs-students.stanford.edu/~tjw/). This is a subset of the same interface as the [node compiled module](https://github.com/quartzjer/ecc), but works in the browser too. Also uses point compression now from [https://github.com/kaielvin](https://github.com/kaielvin/jsbn-ec-point-compression). # ci-info Get details about the current Continuous Integration environment. Please [open an issue](https://github.com/watson/ci-info/issues/new?template=ci-server-not-detected.md) if your CI server isn't properly detected :) [![npm](https://img.shields.io/npm/v/ci-info.svg)](https://www.npmjs.com/package/ci-info) [![Build status](https://travis-ci.org/watson/ci-info.svg?branch=master)](https://travis-ci.org/watson/ci-info) [![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg?style=flat)](https://github.com/feross/standard) ## Installation ```bash npm install ci-info --save ``` ## Usage ```js var ci = require('ci-info') if (ci.isCI) { console.log('The name of the CI server is:', ci.name) } else { console.log('This program is not running on a CI server') } ``` ## Supported CI tools Officially supported CI servers: | Name | Constant | isPR | |------|----------|------| | [AWS CodeBuild](https://aws.amazon.com/codebuild/) | `ci.CODEBUILD` | 🚫 | | [AppVeyor](http://www.appveyor.com) | `ci.APPVEYOR` | ✅ | | [Azure Pipelines](https://azure.microsoft.com/en-us/services/devops/pipelines/) | `ci.AZURE_PIPELINES` | ✅ | | [Bamboo](https://www.atlassian.com/software/bamboo) by Atlassian | `ci.BAMBOO` | 🚫 | | [Bitbucket Pipelines](https://bitbucket.org/product/features/pipelines) | `ci.BITBUCKET` | ✅ | | [Bitrise](https://www.bitrise.io/) | `ci.BITRISE` | ✅ | | [Buddy](https://buddy.works/) | `ci.BUDDY` | ✅ | | [Buildkite](https://buildkite.com) | `ci.BUILDKITE` | ✅ | | [CircleCI](http://circleci.com) | `ci.CIRCLE` | ✅ | | [Cirrus CI](https://cirrus-ci.org) | `ci.CIRRUS` | ✅ | | [Codeship](https://codeship.com) | `ci.CODESHIP` | 🚫 | | [Drone](https://drone.io) | `ci.DRONE` | ✅ | | [dsari](https://github.com/rfinnie/dsari) | `ci.DSARI` | 🚫 | | [GitLab CI](https://about.gitlab.com/gitlab-ci/) | `ci.GITLAB` | 🚫 | | [GoCD](https://www.go.cd/) | `ci.GOCD` | 🚫 | | [Hudson](http://hudson-ci.org) | `ci.HUDSON` | 🚫 | | [Jenkins CI](https://jenkins-ci.org) | `ci.JENKINS` | ✅ | | [Magnum CI](https://magnum-ci.com) | `ci.MAGNUM` | 🚫 | | [Netlify CI](https://www.netlify.com/) | `ci.NETLIFY` | ✅ | | [Sail CI](https://sail.ci/) | 
`ci.SAIL` | ✅ | | [Semaphore](https://semaphoreci.com) | `ci.SEMAPHORE` | ✅ | | [Shippable](https://www.shippable.com/) | `ci.SHIPPABLE` | ✅ | | [Solano CI](https://www.solanolabs.com/) | `ci.SOLANO` | ✅ | | [Strider CD](https://strider-cd.github.io/) | `ci.STRIDER` | 🚫 | | [TaskCluster](http://docs.taskcluster.net) | `ci.TASKCLUSTER` | 🚫 | | [TeamCity](https://www.jetbrains.com/teamcity/) by JetBrains | `ci.TEAMCITY` | 🚫 | | [Travis CI](http://travis-ci.org) | `ci.TRAVIS` | ✅ |

## API

### `ci.name`

Returns a string containing the name of the CI server the code is running on. If no CI server is detected, it returns `null`.

Don't depend on the value of this string not changing for a specific vendor. If you find yourself writing `ci.name === 'Travis CI'`, you most likely want to use `ci.TRAVIS` instead.

### `ci.isCI`

Returns a boolean. Will be `true` if the code is running on a CI server, otherwise `false`.

Some CI servers not listed here might still trigger the `ci.isCI` boolean to be set to `true` if they use certain vendor neutral environment variables. In those cases `ci.name` will be `null` and no vendor specific boolean will be set to `true`.

### `ci.isPR`

Returns a boolean if PR detection is supported for the current CI server. Will be `true` if a PR is being tested, otherwise `false`. If PR detection is not supported for the current CI server, the value will be `null`.

### `ci.<VENDOR-CONSTANT>`

A vendor specific boolean constant is exposed for each supported CI vendor. A constant will be `true` if the code is determined to run on the given CI server, otherwise `false`.

Examples of vendor constants are `ci.TRAVIS` or `ci.APPVEYOR`. For a complete list, see the support table above.

Deprecated vendor constants that will be removed in the next major release:

- `ci.TDDIUM` (Solano CI) This has been renamed `ci.SOLANO`

## License

[MIT](LICENSE)

These files are compiled dot templates from the dot folder. Do NOT edit them directly, edit the templates and run `npm run build` from the main ajv folder.

# is-callable <sup>[![Version Badge][2]][1]</sup>

[![Build Status][3]][4] [![dependency status][5]][6] [![dev dependency status][7]][8] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url]

[![npm badge][11]][1]

[![browser support][9]][10]

Is this JS value callable? Works with Functions and GeneratorFunctions, despite ES6 @@toStringTag.
## Example ```js var isCallable = require('is-callable'); var assert = require('assert'); assert.notOk(isCallable(undefined)); assert.notOk(isCallable(null)); assert.notOk(isCallable(false)); assert.notOk(isCallable(true)); assert.notOk(isCallable([])); assert.notOk(isCallable({})); assert.notOk(isCallable(/a/g)); assert.notOk(isCallable(new RegExp('a', 'g'))); assert.notOk(isCallable(new Date())); assert.notOk(isCallable(42)); assert.notOk(isCallable(NaN)); assert.notOk(isCallable(Infinity)); assert.notOk(isCallable(new Number(42))); assert.notOk(isCallable('foo')); assert.notOk(isCallable(Object('foo'))); assert.ok(isCallable(function () {})); assert.ok(isCallable(function* () {})); assert.ok(isCallable(x => x * x)); ``` ## Tests Simply clone the repo, `npm install`, and run `npm test` [1]: https://npmjs.org/package/is-callable [2]: http://versionbadg.es/ljharb/is-callable.svg [3]: https://travis-ci.org/ljharb/is-callable.svg [4]: https://travis-ci.org/ljharb/is-callable [5]: https://david-dm.org/ljharb/is-callable.svg [6]: https://david-dm.org/ljharb/is-callable [7]: https://david-dm.org/ljharb/is-callable/dev-status.svg [8]: https://david-dm.org/ljharb/is-callable#info=devDependencies [9]: https://ci.testling.com/ljharb/is-callable.png [10]: https://ci.testling.com/ljharb/is-callable [11]: https://nodei.co/npm/is-callable.png?downloads=true&stars=true [license-image]: http://img.shields.io/npm/l/is-callable.svg [license-url]: LICENSE [downloads-image]: http://img.shields.io/npm/dm/is-callable.svg [downloads-url]: http://npm-stat.com/charts.html?package=is-callable # readable-stream ***Node-core v8.11.1 streams for userland*** [![Build Status](https://travis-ci.org/nodejs/readable-stream.svg?branch=master)](https://travis-ci.org/nodejs/readable-stream) [![NPM](https://nodei.co/npm/readable-stream.png?downloads=true&downloadRank=true)](https://nodei.co/npm/readable-stream/) [![NPM](https://nodei.co/npm-dl/readable-stream.png?&months=6&height=3)](https://nodei.co/npm/readable-stream/) [![Sauce Test Status](https://saucelabs.com/browser-matrix/readable-stream.svg)](https://saucelabs.com/u/readable-stream) ```bash npm install --save readable-stream ``` ***Node-core streams for userland*** This package is a mirror of the Streams2 and Streams3 implementations in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.11.1/docs/api/stream.html). If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use **readable-stream** *only* and avoid the *"stream"* module in Node-core, for background see [this blogpost](http://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html). As of version 2.0.0 **readable-stream** uses semantic versioning. # Streams Working Group `readable-stream` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. 
<a name="members"></a> ## Team Members * **Chris Dickinson** ([@chrisdickinson](https://github.com/chrisdickinson)) &lt;christopher.s.dickinson@gmail.com&gt; - Release GPG key: 9554F04D7259F04124DE6B476D5A82AC7E37093B * **Calvin Metcalf** ([@calvinmetcalf](https://github.com/calvinmetcalf)) &lt;calvin.metcalf@gmail.com&gt; - Release GPG key: F3EF5F62A87FC27A22E643F714CE4FF5015AA242 * **Rod Vagg** ([@rvagg](https://github.com/rvagg)) &lt;rod@vagg.org&gt; - Release GPG key: DD8F2338BAE7501E3DD5AC78C273792F7D83545D * **Sam Newman** ([@sonewman](https://github.com/sonewman)) &lt;newmansam@outlook.com&gt; * **Mathias Buus** ([@mafintosh](https://github.com/mafintosh)) &lt;mathiasbuus@gmail.com&gt; * **Domenic Denicola** ([@domenic](https://github.com/domenic)) &lt;d@domenic.me&gt; * **Matteo Collina** ([@mcollina](https://github.com/mcollina)) &lt;matteo.collina@gmail.com&gt; - Release GPG key: 3ABC01543F22DD2239285CDD818674489FBC127E * **Irina Shestak** ([@lrlna](https://github.com/lrlna)) &lt;shestak.irina@gmail.com&gt; # IP [![](https://badge.fury.io/js/ip.svg)](https://www.npmjs.com/package/ip) IP address utilities for node.js ## Installation ### npm ```shell npm install ip ``` ### git ```shell git clone https://github.com/indutny/node-ip.git ``` ## Usage Get your ip address, compare ip addresses, validate ip addresses, etc. ```js var ip = require('ip'); ip.address() // my ip address ip.isEqual('::1', '::0:1'); // true ip.toBuffer('127.0.0.1') // Buffer([127, 0, 0, 1]) ip.toString(new Buffer([127, 0, 0, 1])) // 127.0.0.1 ip.fromPrefixLen(24) // 255.255.255.0 ip.mask('192.168.1.134', '255.255.255.0') // 192.168.1.0 ip.cidr('192.168.1.134/26') // 192.168.1.128 ip.not('255.255.255.0') // 0.0.0.255 ip.or('192.168.1.134', '0.0.0.255') // 192.168.1.255 ip.isPrivate('127.0.0.1') // true ip.isV4Format('127.0.0.1'); // true ip.isV6Format('::ffff:127.0.0.1'); // true // operate on buffers in-place var buf = new Buffer(128); var offset = 64; ip.toBuffer('127.0.0.1', buf, offset); // [127, 0, 0, 1] at offset 64 ip.toString(buf, offset, 4); // '127.0.0.1' // subnet information ip.subnet('192.168.1.134', '255.255.255.192') // { networkAddress: '192.168.1.128', // firstAddress: '192.168.1.129', // lastAddress: '192.168.1.190', // broadcastAddress: '192.168.1.191', // subnetMask: '255.255.255.192', // subnetMaskLength: 26, // numHosts: 62, // length: 64, // contains: function(addr){...} } ip.cidrSubnet('192.168.1.134/26') // Same as previous. // range checking ip.cidrSubnet('192.168.1.134/26').contains('192.168.1.190') // true // ipv4 long conversion ip.toLong('127.0.0.1'); // 2130706433 ip.fromLong(2130706433); // '127.0.0.1' ``` ### License This software is licensed under the MIT License. Copyright Fedor Indutny, 2012. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # npm-cache-filename Given a cache folder and url, return the appropriate cache folder. ## USAGE ```javascript var cf = require('npm-cache-filename'); console.log(cf('/tmp/cache', 'https://registry.npmjs.org:1234/foo/bar')); // outputs: /tmp/cache/registry.npmjs.org_1234/foo/bar ``` As a bonus, you can also bind it to a specific root path: ```javascript var cf = require('npm-cache-filename'); var getFile = cf('/tmp/cache'); console.log(getFile('https://registry.npmjs.org:1234/foo/bar')); // outputs: /tmp/cache/registry.npmjs.org_1234/foo/bar ``` # read-package-json This is the thing that npm uses to read package.json files. It validates some stuff, and loads some default things. It keeps a cache of the files you've read, so that you don't end up reading the same package.json file multiple times. Note that if you just want to see what's literally in the package.json file, you can usually do `var data = require('some-module/package.json')`. This module is basically only needed by npm, but it's handy to see what npm will see when it looks at your package. ## Usage ```javascript var readJson = require('read-package-json') // readJson(filename, [logFunction=noop], [strict=false], cb) readJson('/path/to/package.json', console.error, false, function (er, data) { if (er) { console.error("There was an error reading the file") return } console.error('the package data is', data) }); ``` ## readJson(file, [logFn = noop], [strict = false], cb) * `file` {String} The path to the package.json file * `logFn` {Function} Function to handle logging. Defaults to a noop. * `strict` {Boolean} True to enforce SemVer 2.0 version strings, and other strict requirements. * `cb` {Function} Gets called with `(er, data)`, as is The Node Way. Reads the JSON file and does the things. ## `package.json` Fields See `man 5 package.json` or `npm help json`. ## readJson.log By default this is a reference to the `npmlog` module. But if that module can't be found, then it'll be set to just a dummy thing that does nothing. Replace with your own `{log,warn,error}` object for fun loggy time. ## readJson.extras(file, data, cb) Run all the extra stuff relative to the file, with the parsed data. Modifies the data as it does stuff. Calls the cb when it's done. ## readJson.extraSet = [fn, fn, ...] Array of functions that are called by `extras`. Each one receives the arguments `fn(file, data, cb)` and is expected to call `cb(er, data)` when done or when an error occurs. Order is indeterminate, so each function should be completely independent. Mix and match! ## Other Relevant Files Besides `package.json` Some other files have an effect on the resulting data object, in the following ways: ### `README?(.*)` If there is a `README` or `README.*` file present, then npm will attach a `readme` field to the data with the contents of this file. Owing to the fact that roughly 100% of existing node modules have Markdown README files, it will generally be assumed to be Markdown, regardless of the extension. Please plan accordingly. ### `server.js` If there is a `server.js` file, and there is not already a `scripts.start` field, then `scripts.start` will be set to `node server.js`. 
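To make the `server.js` behaviour concrete, a minimal sketch (the `/tmp/demo-pkg` path is hypothetical): reading a package whose directory contains a `server.js` but whose `package.json` declares no `scripts.start`.

```javascript
var readJson = require('read-package-json');

// /tmp/demo-pkg is a made-up example directory containing only a bare
// package.json and a server.js file.
readJson('/tmp/demo-pkg/package.json', console.error, false, function (er, data) {
  if (er) {
    console.error('There was an error reading the file');
    return;
  }
  // Because server.js exists and scripts.start was not set, read-package-json
  // fills it in, so this logs: { start: 'node server.js' }
  console.log(data.scripts);
});
```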
### `AUTHORS` If there is not already a `contributors` field, then the `contributors` field will be set to the contents of the `AUTHORS` file, split by lines, and parsed. ### `bindings.gyp` If a bindings.gyp file exists, and there is not already a `scripts.install` field, then the `scripts.install` field will be set to `node-gyp rebuild`. ### `index.js` If the json file does not exist, but there is a `index.js` file present instead, and that file has a package comment, then it will try to parse the package comment, and use that as the data instead. A package comment looks like this: ```javascript /**package * { "name": "my-bare-module" * , "version": "1.2.3" * , "description": "etc...." } **/ // or... /**package { "name": "my-bare-module" , "version": "1.2.3" , "description": "etc...." } **/ ``` The important thing is that it starts with `/**package`, and ends with `**/`. If the package.json file exists, then the index.js is not parsed. ### `{directories.man}/*.[0-9]` If there is not already a `man` field defined as an array of files or a single file, and there is a `directories.man` field defined, then that directory will be searched for manpages. Any valid manpages found in that directory will be assigned to the `man` array, and installed in the appropriate man directory at package install time, when installed globally on a Unix system. ### `{directories.bin}/*` If there is not already a `bin` field defined as a string filename or a hash of `<name> : <filename>` pairs, then the `directories.bin` directory will be searched and all the files within it will be linked as executables at install time. When installing locally, npm links bins into `node_modules/.bin`, which is in the `PATH` environ when npm runs scripts. When installing globally, they are linked into `{prefix}/bin`, which is presumably in the `PATH` environment variable. # string_decoder ***Node-core v8.9.4 string_decoder for userland*** [![NPM](https://nodei.co/npm/string_decoder.png?downloads=true&downloadRank=true)](https://nodei.co/npm/string_decoder/) [![NPM](https://nodei.co/npm-dl/string_decoder.png?&months=6&height=3)](https://nodei.co/npm/string_decoder/) ```bash npm install --save string_decoder ``` ***Node-core string_decoder for userland*** This package is a mirror of the string_decoder implementation in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.9.4/docs/api/). As of version 1.0.0 **string_decoder** uses semantic versioning. ## Previous versions Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10. ## Update The *build/* directory contains a build script that will scrape the source from the [nodejs/node](https://github.com/nodejs/node) repo given a specific Node version. ## Streams Working Group `string_decoder` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. 
* Messaging about the future of streams to give the community advance notice of changes. See [readable-stream](https://github.com/nodejs/readable-stream) for more details. # spdx-license-ids [![npm version](https://img.shields.io/npm/v/spdx-license-ids.svg)](https://www.npmjs.com/package/spdx-license-ids) [![Github Actions](https://action-badges.now.sh/shinnn/spdx-license-ids)](https://wdp9fww0r9.execute-api.us-west-2.amazonaws.com/production/results/shinnn/spdx-license-ids) A list of [SPDX license](https://spdx.org/licenses/) identifiers ## Installation [Download JSON directly](https://raw.githubusercontent.com/shinnn/spdx-license-ids/master/index.json), or [use](https://docs.npmjs.com/cli/install) [npm](https://docs.npmjs.com/about-npm/): ``` npm install spdx-license-ids ``` ## [Node.js](https://nodejs.org/) API ### require('spdx-license-ids') Type: `string[]` All license IDs except for the currently deprecated ones. ```javascript const ids = require('spdx-license-ids'); //=> ['0BSD', 'AAL', 'ADSL', 'AFL-1.1', 'AFL-1.2', 'AFL-2.0', 'AFL-2.1', 'AFL-3.0', 'AGPL-1.0-only', ...] ids.includes('BSD-3-Clause'); //=> true ids.includes('CC-BY-1.0'); //=> true ids.includes('GPL-3.0'); //=> false ``` ### require('spdx-license-ids/deprecated') Type: `string[]` Deprecated license IDs. ```javascript const deprecatedIds = require('spdx-license-ids/deprecated'); //=> ['AGPL-1.0', 'AGPL-3.0', 'GFDL-1.1', 'GFDL-1.2', 'GFDL-1.3', 'GPL-1.0', 'GPL-2.0', ...] deprecatedIds.includes('BSD-3-Clause'); //=> false deprecatedIds.includes('CC-BY-1.0'); //=> false deprecatedIds.includes('GPL-3.0'); //=> true ``` ## License [Creative Commons Zero v1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/deed) # string_decoder ***Node-core v8.9.4 string_decoder for userland*** [![NPM](https://nodei.co/npm/string_decoder.png?downloads=true&downloadRank=true)](https://nodei.co/npm/string_decoder/) [![NPM](https://nodei.co/npm-dl/string_decoder.png?&months=6&height=3)](https://nodei.co/npm/string_decoder/) ```bash npm install --save string_decoder ``` ***Node-core string_decoder for userland*** This package is a mirror of the string_decoder implementation in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.9.4/docs/api/). As of version 1.0.0 **string_decoder** uses semantic versioning. ## Previous versions Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10. ## Update The *build/* directory contains a build script that will scrape the source from the [nodejs/node](https://github.com/nodejs/node) repo given a specific Node version. ## Streams Working Group `string_decoder` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. See [readable-stream](https://github.com/nodejs/readable-stream) for more details. 
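The string_decoder README above doesn't include a usage sample. Since the package mirrors the Node-core `string_decoder` API, a minimal sketch: decoding a multi-byte UTF-8 character that arrives split across two chunks.

```js
var StringDecoder = require('string_decoder').StringDecoder;
var decoder = new StringDecoder('utf8');

// '€' is the three bytes 0xE2 0x82 0xAC in UTF-8; feed it in two pieces.
// The decoder buffers the incomplete character instead of emitting garbage.
var first = decoder.write(Buffer.from([0xe2, 0x82])); // ''
var rest = decoder.end(Buffer.from([0xac]));          // '€'
console.log(first + rest); // '€'
```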
# lodash.clonedeep v4.5.0 The [lodash](https://lodash.com/) method `_.cloneDeep` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.clonedeep ``` In Node.js: ```js var cloneDeep = require('lodash.clonedeep'); ``` See the [documentation](https://lodash.com/docs#cloneDeep) or [package source](https://github.com/lodash/lodash/blob/4.5.0-npm-packages/lodash.clonedeep) for more details. # npmlog The logger util that npm uses. This logger is very basic. It does the logging for npm. It supports custom levels and colored output. By default, logs are written to stderr. If you want to send log messages to outputs other than streams, then you can change the `log.stream` member, or you can just listen to the events that it emits, and do whatever you want with them. # Installation ```console npm install npmlog --save ``` # Basic Usage ```javascript var log = require('npmlog') // additional stuff ---------------------------+ // message ----------+ | // prefix ----+ | | // level -+ | | | // v v v v log.info('fyi', 'I have a kitty cat: %j', myKittyCat) ``` ## log.level * {String} The level to display logs at. Any logs at or above this level will be displayed. The special level `silent` will prevent anything from being displayed ever. ## log.record * {Array} An array of all the log messages that have been entered. ## log.maxRecordSize * {Number} The maximum number of records to keep. If log.record gets bigger than 10% over this value, then it is sliced down to 90% of this value. The reason for the 10% window is so that it doesn't have to resize a large array on every log entry. ## log.prefixStyle * {Object} A style object that specifies how prefixes are styled. (See below) ## log.headingStyle * {Object} A style object that specifies how the heading is styled. (See below) ## log.heading * {String} Default: "" If set, a heading that is printed at the start of every line. ## log.stream * {Stream} Default: `process.stderr` The stream where output is written. ## log.enableColor() Force colors to be used on all messages, regardless of the output stream. ## log.disableColor() Disable colors on all messages. ## log.enableProgress() Enable the display of log activity spinner and progress bar ## log.disableProgress() Disable the display of a progress bar ## log.enableUnicode() Force the unicode theme to be used for the progress bar. ## log.disableUnicode() Disable the use of unicode in the progress bar. ## log.setGaugeTemplate(template) Set a template for outputting the progress bar. See the [gauge documentation] for details. [gauge documentation]: https://npmjs.com/package/gauge ## log.setGaugeThemeset(themes) Select a themeset to pick themes from for the progress bar. See the [gauge documentation] for details. ## log.pause() Stop emitting messages to the stream, but do not drop them. ## log.resume() Emit all buffered messages that were written while paused. ## log.log(level, prefix, message, ...) * `level` {String} The level to emit the message at * `prefix` {String} A string prefix. Set to "" to skip. * `message...` Arguments to `util.format` Emit a log message at the specified level. ## log\[level](prefix, message, ...) For example, * log.silly(prefix, message, ...) * log.verbose(prefix, message, ...) * log.info(prefix, message, ...) * log.http(prefix, message, ...) * log.warn(prefix, message, ...) * log.error(prefix, message, ...) Like `log.log(level, prefix, message, ...)`. 
In this way, each level is given a shorthand, so you can do `log.info(prefix, message)`. ## log.addLevel(level, n, style, disp) * `level` {String} Level indicator * `n` {Number} The numeric level * `style` {Object} Object with fg, bg, inverse, etc. * `disp` {String} Optional replacement for `level` in the output. Sets up a new level with a shorthand function and so forth. Note that if the number is `Infinity`, then setting the level to that will cause all log messages to be suppressed. If the number is `-Infinity`, then the only way to show it is to enable all log messages. ## log.newItem(name, todo, weight) * `name` {String} Optional; progress item name. * `todo` {Number} Optional; total amount of work to be done. Default 0. * `weight` {Number} Optional; the weight of this item relative to others. Default 1. This adds a new `are-we-there-yet` item tracker to the progress tracker. The object returned has the `log[level]` methods but is otherwise an `are-we-there-yet` `Tracker` object. ## log.newStream(name, todo, weight) This adds a new `are-we-there-yet` stream tracker to the progress tracker. The object returned has the `log[level]` methods but is otherwise an `are-we-there-yet` `TrackerStream` object. ## log.newGroup(name, weight) This adds a new `are-we-there-yet` tracker group to the progress tracker. The object returned has the `log[level]` methods but is otherwise an `are-we-there-yet` `TrackerGroup` object. # Events Events are all emitted with the message object. * `log` Emitted for all messages * `log.<level>` Emitted for all messages with the `<level>` level. * `<prefix>` Messages with prefixes also emit their prefix as an event. # Style Objects Style objects can have the following fields: * `fg` {String} Color for the foreground text * `bg` {String} Color for the background * `bold`, `inverse`, `underline` {Boolean} Set the associated property * `bell` {Boolean} Make a noise (This is pretty annoying, probably.) # Message Objects Every log event is emitted with a message object, and the `log.record` list contains all of them that have been created. They have the following fields: * `id` {Number} * `level` {String} * `prefix` {String} * `message` {String} Result of `util.format()` * `messageRaw` {Array} Arguments to `util.format()` # Blocking TTYs We use [`set-blocking`](https://npmjs.com/package/set-blocking) to set stderr and stdout blocking if they are tty's and have the setBlocking call. This is a work around for an issue in early versions of Node.js 6.x, which made stderr and stdout non-blocking on OSX. (They are always blocking Windows and were never blocking on Linux.) `npmlog` needs them to be blocking so that it can allow output to stdout and stderr to be interlaced. 
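Putting the `log.addLevel()` API described above to work, a minimal sketch: registering a custom level (the name, number, and style here are arbitrary examples) and using the shorthand method it creates.

```javascript
var log = require('npmlog');

// 'success' and 3001 are arbitrary example values; the style object uses the
// fg/bold fields documented under "Style Objects" above.
log.addLevel('success', 3001, { fg: 'green', bold: true }, 'ok!');

log.level = 'info';   // 3001 sits above the 'info' threshold, so it displays
log.success('deploy', 'published %s to production', 'my-site');
```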
# agentkeepalive

[![NPM version][npm-image]][npm-url]
[![build status][travis-image]][travis-url]
[![Appveyor status][appveyor-image]][appveyor-url]
[![Test coverage][codecov-image]][codecov-url]
[![David deps][david-image]][david-url]
[![Known Vulnerabilities][snyk-image]][snyk-url]
[![npm download][download-image]][download-url]

[npm-image]: https://img.shields.io/npm/v/agentkeepalive.svg?style=flat
[npm-url]: https://npmjs.org/package/agentkeepalive
[travis-image]: https://img.shields.io/travis/node-modules/agentkeepalive.svg?style=flat
[travis-url]: https://travis-ci.org/node-modules/agentkeepalive
[appveyor-image]: https://ci.appveyor.com/api/projects/status/k7ct4s47di6m5uy2?svg=true
[appveyor-url]: https://ci.appveyor.com/project/fengmk2/agentkeepalive
[codecov-image]: https://codecov.io/gh/node-modules/agentkeepalive/branch/master/graph/badge.svg
[codecov-url]: https://codecov.io/gh/node-modules/agentkeepalive
[david-image]: https://img.shields.io/david/node-modules/agentkeepalive.svg?style=flat
[david-url]: https://david-dm.org/node-modules/agentkeepalive
[snyk-image]: https://snyk.io/test/npm/agentkeepalive/badge.svg?style=flat-square
[snyk-url]: https://snyk.io/test/npm/agentkeepalive
[download-image]: https://img.shields.io/npm/dm/agentkeepalive.svg?style=flat-square
[download-url]: https://npmjs.org/package/agentkeepalive

Node.js's missing `keep alive` `http.Agent`. Supports `http` and `https`.

## What's different from original `http.Agent`?

- `keepAlive=true` by default
- Disable Nagle's algorithm: `socket.setNoDelay(true)`
- Add free socket timeout: avoids leaking sockets that sit inactive in the free-sockets queue for a long time.
- Add active socket timeout: avoids leaking sockets that sit inactive in the active-sockets queue for a long time.

## Install

```bash
$ npm install agentkeepalive --save
```

## new Agent([options])

* `options` {Object} Set of configurable options to set on the agent. Can have the following fields:
  * `keepAlive` {Boolean} Keep sockets around in a pool to be used by other requests in the future. Default = `true`.
  * `keepAliveMsecs` {Number} When using the keepAlive option, specifies the initial delay for TCP Keep-Alive packets. Ignored when the keepAlive option is false or undefined. Default = `1000`. Only relevant if `keepAlive` is set to `true`.
  * `freeSocketKeepAliveTimeout`: {Number} Sets the free socket to timeout after `freeSocketKeepAliveTimeout` milliseconds of inactivity on the free socket. Default is `15000`. Only relevant if `keepAlive` is set to `true`.
  * `timeout`: {Number} Sets the working socket to timeout after `timeout` milliseconds of inactivity on the working socket. Default is `freeSocketKeepAliveTimeout * 2`.
  * `maxSockets` {Number} Maximum number of sockets to allow per host. Default = `Infinity`.
  * `maxFreeSockets` {Number} Maximum number of sockets (per host) to leave open in a free state. Only relevant if `keepAlive` is set to `true`. Default = `256`.
  * `socketActiveTTL` {Number} Sets the socket active time to live, even if it's in use. If not set, the behaviour stays the same (the socket is released only when free). Default = `null`.
## Usage ```js const http = require('http'); const Agent = require('agentkeepalive'); const keepaliveAgent = new Agent({ maxSockets: 100, maxFreeSockets: 10, timeout: 60000, freeSocketKeepAliveTimeout: 30000, // free socket keepalive for 30 seconds }); const options = { host: 'cnodejs.org', port: 80, path: '/', method: 'GET', agent: keepaliveAgent, }; const req = http.request(options, res => { console.log('STATUS: ' + res.statusCode); console.log('HEADERS: ' + JSON.stringify(res.headers)); res.setEncoding('utf8'); res.on('data', function (chunk) { console.log('BODY: ' + chunk); }); }); req.on('error', e => { console.log('problem with request: ' + e.message); }); req.end(); setTimeout(() => { if (keepaliveAgent.statusChanged) { console.log('[%s] agent status changed: %j', Date(), keepaliveAgent.getCurrentStatus()); } }, 2000); ``` ### `getter agent.statusChanged` counters have change or not after last checkpoint. ### `agent.getCurrentStatus()` `agent.getCurrentStatus()` will return a object to show the status of this agent: ```js { createSocketCount: 10, closeSocketCount: 5, timeoutSocketCount: 0, requestCount: 5, freeSockets: { 'localhost:57479:': 3 }, sockets: { 'localhost:57479:': 5 }, requests: {} } ``` ### Support `https` ```js const https = require('https'); const HttpsAgent = require('agentkeepalive').HttpsAgent; const keepaliveAgent = new HttpsAgent(); // https://www.google.com/search?q=nodejs&sugexp=chrome,mod=12&sourceid=chrome&ie=UTF-8 const options = { host: 'www.google.com', port: 443, path: '/search?q=nodejs&sugexp=chrome,mod=12&sourceid=chrome&ie=UTF-8', method: 'GET', agent: keepaliveAgent, }; const req = https.request(options, res => { console.log('STATUS: ' + res.statusCode); console.log('HEADERS: ' + JSON.stringify(res.headers)); res.setEncoding('utf8'); res.on('data', chunk => { console.log('BODY: ' + chunk); }); }); req.on('error', e => { console.log('problem with request: ' + e.message); }); req.end(); setTimeout(() => { console.log('agent status: %j', keepaliveAgent.getCurrentStatus()); }, 2000); ``` ## [Benchmark](https://github.com/node-modules/agentkeepalive/tree/master/benchmark) run the benchmark: ```bash cd benchmark sh start.sh ``` Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz node@v0.8.9 50 maxSockets, 60 concurrent, 1000 requests per concurrent, 5ms delay Keep alive agent (30 seconds): ```js Transactions: 60000 hits Availability: 100.00 % Elapsed time: 29.70 secs Data transferred: 14.88 MB Response time: 0.03 secs Transaction rate: 2020.20 trans/sec Throughput: 0.50 MB/sec Concurrency: 59.84 Successful transactions: 60000 Failed transactions: 0 Longest transaction: 0.15 Shortest transaction: 0.01 ``` Normal agent: ```js Transactions: 60000 hits Availability: 100.00 % Elapsed time: 46.53 secs Data transferred: 14.88 MB Response time: 0.05 secs Transaction rate: 1289.49 trans/sec Throughput: 0.32 MB/sec Concurrency: 59.81 Successful transactions: 60000 Failed transactions: 0 Longest transaction: 0.45 Shortest transaction: 0.00 ``` Socket created: ``` [proxy.js:120000] keepalive, 50 created, 60000 requestFinished, 1200 req/socket, 0 requests, 0 sockets, 0 unusedSockets, 50 timeout {" <10ms":662," <15ms":17825," <20ms":20552," <30ms":17646," <40ms":2315," <50ms":567," <100ms":377," <150ms":56," <200ms":0," >=200ms+":0} ---------------------------------------------------------------- [proxy.js:120000] normal , 53866 created, 84260 requestFinished, 1.56 req/socket, 0 requests, 0 sockets {" <10ms":75," <15ms":1112," <20ms":10947," <30ms":32130," <40ms":8228," 
<50ms":3002," <100ms":4274," <150ms":181," <200ms":18," >=200ms+":33} ``` ## License ``` (The MIT License) Copyright(c) node-modules and other contributors. Copyright(c) 2012 - 2015 fengmk2 <fengmk2@gmail.com> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` # from2 [![Flattr this!](https://api.flattr.com/button/flattr-badge-large.png)](https://flattr.com/submit/auto?user_id=hughskennedy&url=http://github.com/hughsk/from2&title=from2&description=hughsk/from2%20on%20GitHub&language=en_GB&tags=flattr,github,javascript&category=software)[![experimental](http://hughsk.github.io/stability-badges/dist/experimental.svg)](http://github.com/hughsk/stability-badges) # `from2` is a high-level module for creating readable streams that properly handle backpressure. Convience wrapper for [readable-stream](http://github.com/isaacs/readable-stream)'s `ReadableStream` base class, with an API lifted from [from](http://github.com/dominictarr/from) and [through2](http://github.com/rvagg/through2). ## Usage ## [![from2](https://nodei.co/npm/from2.png?mini=true)](https://nodei.co/npm/from2) ### `stream = from2([opts], read)` ### Where `opts` are the options to pass on to the `ReadableStream` constructor, and `read(size, next)` is called when data is requested from the stream. * `size` is the recommended amount of data (in bytes) to retrieve. * `next(err)` should be called when you're ready to emit more data. For example, here's a readable stream that emits the contents of a given string: ``` javascript var from = require('from2') function fromString(string) { return from(function(size, next) { // if there's no more content // left in the string, close the stream. if (string.length <= 0) return next(null, null) // Pull in a new chunk of text, // removing it from the string. var chunk = string.slice(0, size) string = string.slice(size) // Emit "chunk" from the stream. next(null, chunk) }) } // pipe "hello world" out // to stdout. fromString('hello world').pipe(process.stdout) ``` ### `stream = from2.obj([opts], read)` ### Shorthand for `from2({ objectMode: true }, read)`. ### `createStream = from2.ctor([opts], read)` ### If you're creating similar streams in quick succession you can improve performance by generating a stream **constructor** that you can reuse instead of creating one-off streams on each call. Takes the same options as `from2`, instead returning a constructor which you can use to create new streams. ### See Also - [from2-array](https://github.com/binocarlos/from2-array) - Create a from2 stream based on an array of source values. 
- [from2-string](https://github.com/yoshuawuyts/from2-string) - Create a stream from a string. Sugary wrapper around from2. ## License ## MIT. See [LICENSE.md](http://github.com/hughsk/from2/blob/master/LICENSE.md) for details. tunnel-agent ============ HTTP proxy tunneling agent. Formerly part of mikeal/request, now a standalone module. # es-abstract <sup>[![Version Badge][npm-version-svg]][package-url]</sup> [![Build Status][travis-svg]][travis-url] [![dependency status][deps-svg]][deps-url] [![dev dependency status][dev-deps-svg]][dev-deps-url] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][npm-badge-png]][package-url] [![browser support][testling-svg]][testling-url] ECMAScript spec abstract operations. When different versions of the spec conflict, the default export will be the latest version of the abstract operation. All abstract operations will also be available under an `es5`/`es2015`/`es2016` entry point, and exported property, if you require a specific version. ## Example ```js var ES = require('es-abstract'); var assert = require('assert'); assert(ES.isCallable(function () {})); assert(!ES.isCallable(/a/g)); ``` ## Tests Simply clone the repo, `npm install`, and run `npm test` [package-url]: https://npmjs.org/package/es-abstract [npm-version-svg]: http://versionbadg.es/ljharb/es-abstract.svg [travis-svg]: https://travis-ci.org/ljharb/es-abstract.svg [travis-url]: https://travis-ci.org/ljharb/es-abstract [deps-svg]: https://david-dm.org/ljharb/es-abstract.svg [deps-url]: https://david-dm.org/ljharb/es-abstract [dev-deps-svg]: https://david-dm.org/ljharb/es-abstract/dev-status.svg [dev-deps-url]: https://david-dm.org/ljharb/es-abstract#info=devDependencies [testling-svg]: https://ci.testling.com/ljharb/es-abstract.png [testling-url]: https://ci.testling.com/ljharb/es-abstract [npm-badge-png]: https://nodei.co/npm/es-abstract.png?downloads=true&stars=true [license-image]: https://img.shields.io/npm/l/es-abstract.svg [license-url]: LICENSE [downloads-image]: https://img.shields.io/npm/dm/es-abstract.svg [downloads-url]: https://npm-stat.com/charts.html?package=es-abstract # lodash._getnative v3.9.1 The [modern build](https://github.com/lodash/lodash/wiki/Build-Differences) of [lodash’s](https://lodash.com/) internal `getNative` exported as a [Node.js](http://nodejs.org/)/[io.js](https://iojs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash._getnative ``` In Node.js/io.js: ```js var getNative = require('lodash._getnative'); ``` See the [package source](https://github.com/lodash/lodash/blob/3.9.1-npm-packages/lodash._getnative) for more details. # read-package-tree [![Build Status](https://travis-ci.org/npm/read-package-tree.svg?branch=master)](https://travis-ci.org/npm/read-package-tree) Read the contents of node_modules. ## USAGE ```javascript var rpt = require ('read-package-tree') rpt('/path/to/pkg/root', function (node, kidName) { // optional filter function– if included, each package folder found is passed to // it to see if it should be included in the final tree // node is what we're adding children to // kidName is the directory name of the module we're considering adding // return true -> include, false -> skip }, function (er, data) { // er means that something didn't work. 
// data is a structure like: // { // package: <package.json data, or an empty object> // package.name: defaults to `basename(path)` // children: [ <more things like this> ] // parent: <thing that has this in its children property, or null> // path: <path loaded> // realpath: <the real path on disk> // isLink: <set if this is a Link> // target: <if a Link, then this is the actual Node> // error: <if set, the error we got loading/parsing the package.json> // } }) // or promise-style rpt('/path/to/pkg/root').then(data => { ... }) ``` That's it. It doesn't figure out if dependencies are met, it doesn't mutate package.json data objects (beyond what [read-package-json](http://npm.im/read-package-json) already does), it doesn't limit its search to include/exclude `devDependencies`, or anything else. Just follows the links in the `node_modules` hierarchy and reads the package.json files it finds therein. ## Symbolic Links When there are symlinks to packages in the `node_modules` hierarchy, a `Link` object will be created, with a `target` that is a `Node` object. For the most part, you can treat `Link` objects just the same as `Node` objects. But if your tree-walking program needs to treat symlinks differently from normal folders, then make sure to check the object. In a given `read-package-tree` run, a specific `path` will always correspond to a single object, and a specific `realpath` will always correspond to a single `Node` object. This means that you may not be able to pass the resulting data object to `JSON.stringify`, because it may contain cycles. ## Errors Errors parsing or finding a package.json in node_modules will result in a node with the error property set. We will still find deeper node_modules if any exist. *Prior to `5.0.0` these aborted tree reading with an error callback.* Only a few classes of errors are fatal (result in an error callback): * If the top level location is entirely missing, that will error. * if `fs.realpath` returns an error for any path its trying to resolve. # Worker Farm [![Build Status](https://secure.travis-ci.org/rvagg/node-worker-farm.svg)](http://travis-ci.org/rvagg/node-worker-farm) [![NPM](https://nodei.co/npm/worker-farm.png?downloads=true&downloadRank=true&stars=true)](https://nodei.co/npm/worker-farm/) [![NPM](https://nodei.co/npm-dl/worker-farm.png?months=6&height=3)](https://nodei.co/npm/worker-farm/) Distribute processing tasks to child processes with an über-simple API and baked-in durability & custom concurrency options. *Available in npm as <strong>worker-farm</strong>*. ## Example Given a file, *child.js*: ```js module.exports = function (inp, callback) { callback(null, inp + ' BAR (' + process.pid + ')') } ``` And a main file: ```js var workerFarm = require('worker-farm') , workers = workerFarm(require.resolve('./child')) , ret = 0 for (var i = 0; i < 10; i++) { workers('#' + i + ' FOO', function (err, outp) { console.log(outp) if (++ret == 10) workerFarm.end(workers) }) } ``` We'll get an output something like the following: ``` #1 FOO BAR (8546) #0 FOO BAR (8545) #8 FOO BAR (8545) #9 FOO BAR (8546) #2 FOO BAR (8548) #4 FOO BAR (8551) #3 FOO BAR (8549) #6 FOO BAR (8555) #5 FOO BAR (8553) #7 FOO BAR (8557) ``` This example is contained in the *[examples/basic](https://github.com/rvagg/node-worker-farm/tree/master/examples/basic/)* directory. 
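Since each farmed function follows the Node callback convention shown above, it can also be wrapped with `util.promisify`; a minimal sketch (not from the worker-farm docs, and assuming Node 8+), reusing the same hypothetical `child.js` module:

```js
const util = require('util')
const workerFarm = require('worker-farm')

const workers = workerFarm(require.resolve('./child'))
const run = util.promisify(workers) // workers(input, callback) -> run(input) returns a Promise

async function main () {
  const results = await Promise.all(
    [...Array(10).keys()].map(i => run('#' + i + ' FOO'))
  )
  results.forEach(line => console.log(line))
  workerFarm.end(workers) // shut the farm down once all calls have resolved
}

main()
```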
### Example #1: Estimating π using child workers You will also find a more complex example in *[examples/pi](https://github.com/rvagg/node-worker-farm/tree/master/examples/pi/)* that estimates the value of **π** by using a Monte Carlo *area-under-the-curve* method and compares the speed of doing it all in-process vs using child workers to complete separate portions. Running `node examples/pi` will give you something like: ``` Doing it the slow (single-process) way... π ≈ 3.1416269360000006 (0.0000342824102075312 away from actual!) took 8341 milliseconds Doing it the fast (multi-process) way... π ≈ 3.1416233600000036 (0.00003070641021052367 away from actual!) took 1985 milliseconds ``` ## Durability An important feature of Worker Farm is **call durability**. If a child process dies for any reason during the execution of call(s), those calls will be re-queued and taken care of by other child processes. In this way, when you ask for something to be done, unless there is something *seriously* wrong with what you're doing, you should get a result on your callback function. ## My use-case There are other libraries for managing worker processes available but my use-case was fairly specific: I need to make heavy use of the [node-java](https://github.com/nearinfinity/node-java) library to interact with JVM code. Unfortunately, because the JVM garbage collector is so difficult to interact with, it's prone to killing your Node process when the GC kicks under heavy load. For safety I needed a durable way to make calls so that (1) it wouldn't kill my main process and (2) any calls that weren't successful would be resubmitted for processing. Worker Farm allows me to spin up multiple JVMs to be controlled by Node, and have a single, uncomplicated API that acts the same way as an in-process API and the calls will be taken care of by a child process even if an error kills a child process while it is working as the call will simply be passed to a new child process. **But**, don't think that Worker Farm is specific to that use-case, it's designed to be very generic and simple to adapt to anything requiring the use of child Node processes. ## API Worker Farm exports a main function and an `end()` method. The main function sets up a "farm" of coordinated child-process workers and it can be used to instantiate multiple farms, all operating independently. ### workerFarm([options, ]pathToModule[, exportedMethods]) In its most basic form, you call `workerFarm()` with the path to a module file to be invoked by the child process. You should use an **absolute path** to the module file, the best way to obtain the path is with `require.resolve('./path/to/module')`, this function can be used in exactly the same way as `require('./path/to/module')` but it returns an absolute path. #### `exportedMethods` If your module exports a single function on `module.exports` then you should omit the final parameter. However, if you are exporting multiple functions on `module.exports` then you should list them in an Array of Strings: ```js var workers = workerFarm(require.resolve('./mod'), [ 'doSomething', 'doSomethingElse' ]) workers.doSomething(function () {}) workers.doSomethingElse(function () {}) ``` Listing the available methods will instruct Worker Farm what API to provide you with on the returned object. If you don't list a `exportedMethods` Array then you'll get a single callable function to use; but if you list the available methods then you'll get an object with callable functions by those names. 
**It is assumed that each function you call on your child module will take a `callback` function as the last argument.**

#### `options`

If you don't provide an `options` object then the following defaults will be used:

```js
{
    workerOptions               : {}
  , maxCallsPerWorker           : Infinity
  , maxConcurrentWorkers        : require('os').cpus().length
  , maxConcurrentCallsPerWorker : 10
  , maxConcurrentCalls          : Infinity
  , maxCallTime                 : Infinity
  , maxRetries                  : Infinity
  , autoStart                   : false
  , onChild                     : function() {}
}
```

  * **<code>workerOptions</code>** allows you to customize all the parameters passed to child nodes. This object supports [all possible options of `child_process.fork`](https://nodejs.org/api/child_process.html#child_process_child_process_fork_modulepath_args_options). The default options passed are the parent `execArgv`, `cwd` and `env`. Any (or all) of them can be overridden, and others can be added as well.

  * **<code>maxCallsPerWorker</code>** allows you to control the lifespan of your child processes. A positive number will indicate that you only want each child to accept that many calls before it is terminated. This may be useful if you need to control memory leaks or similar in child processes.

  * **<code>maxConcurrentWorkers</code>** will set the number of child processes to maintain concurrently. By default it is set to the number of CPUs available on the current system, but it can be any reasonable number, including `1`.

  * **<code>maxConcurrentCallsPerWorker</code>** allows you to control the *concurrency* of individual child processes. Calls are placed into a queue and farmed out to child processes according to the number of calls they are allowed to handle concurrently. It is arbitrarily set to 10 by default so that calls are shared relatively evenly across workers; however, if your calls predictably take a similar amount of time then you could set it to `Infinity` and Worker Farm won't queue any calls but will spread them evenly across child processes and let them go at it. If your calls aren't I/O bound then it won't matter what value you use here as the individual workers won't be able to execute more than a single call at a time.

  * **<code>maxConcurrentCalls</code>** allows you to control the maximum number of calls in the queue&mdash;either actively being processed or waiting to be processed by a worker. `Infinity` indicates no limit but if you have conditions that may endlessly queue jobs and you need to set a limit then provide a `>0` value and any calls that push the limit will return on their callback with a `MaxConcurrentCallsError` error (check `err.type == 'MaxConcurrentCallsError'`).

  * **<code>maxCallTime</code>** *(use with caution, understand what this does before you use it!)* when `!== Infinity`, will cap the time, in milliseconds, that *any single call* can take to execute in a worker. If this time limit is exceeded by just a single call then the worker running that call will be killed and any calls running on that worker will have their callbacks returned with a `TimeoutError` (check `err.type == 'TimeoutError'`). If you are running with a `maxConcurrentCallsPerWorker` value greater than `1` then **all calls currently executing** will fail and will be automatically resubmitted unless you've changed the `maxRetries` option. Use this if you have jobs that may potentially end in infinite loops that you can't programmatically end with your child code. Preferably run this with a `maxConcurrentCallsPerWorker` so you don't interrupt other calls when you have a timeout.
This timeout operates on a per-call basis but will interrupt a whole worker. * **<code>maxRetries</code>** allows you to control the max number of call requeues after worker termination (unexpected or timeout). By default this option is set to `Infinity` which means that each call of each terminated worker will always be auto requeued. When the number of retries exceeds `maxRetries` value, the job callback will be executed with a `ProcessTerminatedError`. Note that if you are running with finite `maxCallTime` and `maxConcurrentCallsPerWorkers` greater than `1` then any `TimeoutError` will increase the retries counter *for each* concurrent call of the terminated worker. * **<code>autoStart</code>** when set to `true` will start the workers as early as possible. Use this when your workers have to do expensive initialization. That way they'll be ready when the first request comes through. * **<code>onChild</code>** when new child process starts this callback will be called with subprocess object as an argument. Use this when you need to add some custom communication with child processes. ### workerFarm.end(farm) Child processes stay alive waiting for jobs indefinitely and your farm manager will stay alive managing its workers, so if you need it to stop then you have to do so explicitly. If you send your farm API to `workerFarm.end()` then it'll cleanly end your worker processes. Note though that it's a *soft* ending so it'll wait for child processes to finish what they are working on before asking them to die. Any calls that are queued and not yet being handled by a child process will be discarded. `end()` only waits for those currently in progress. Once you end a farm, it won't handle any more calls, so don't even try! ## Related * [farm-cli](https://github.com/Kikobeats/farm-cli) – Launch a farm of workers from CLI. ## License Worker Farm is Copyright (c) 2014 Rod Vagg [@rvagg](https://twitter.com/rvagg) and licensed under the MIT license. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE.md file for more details. # readable-stream ***Node.js core streams for userland*** [![Build Status](https://travis-ci.com/nodejs/readable-stream.svg?branch=master)](https://travis-ci.com/nodejs/readable-stream) [![NPM](https://nodei.co/npm/readable-stream.png?downloads=true&downloadRank=true)](https://nodei.co/npm/readable-stream/) [![NPM](https://nodei.co/npm-dl/readable-stream.png?&months=6&height=3)](https://nodei.co/npm/readable-stream/) [![Sauce Test Status](https://saucelabs.com/browser-matrix/readabe-stream.svg)](https://saucelabs.com/u/readabe-stream) ```bash npm install --save readable-stream ``` This package is a mirror of the streams implementations in Node.js. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v10.19.0/docs/api/stream.html). If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use **readable-stream** *only* and avoid the *"stream"* module in Node-core, for background see [this blogpost](http://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html). As of version 2.0.0 **readable-stream** uses semantic versioning. ## Version 3.x.x v3.x.x of `readable-stream` is a cut from Node 10. This version supports Node 6, 8, and 10, as well as evergreen browsers, IE 11 and latest Safari. 
The breaking changes introduced by v3 are composed by the combined breaking changes in [Node v9](https://nodejs.org/en/blog/release/v9.0.0/) and [Node v10](https://nodejs.org/en/blog/release/v10.0.0/), as follows: 1. Error codes: https://github.com/nodejs/node/pull/13310, https://github.com/nodejs/node/pull/13291, https://github.com/nodejs/node/pull/16589, https://github.com/nodejs/node/pull/15042, https://github.com/nodejs/node/pull/15665, https://github.com/nodejs/readable-stream/pull/344 2. 'readable' have precedence over flowing https://github.com/nodejs/node/pull/18994 3. make virtual methods errors consistent https://github.com/nodejs/node/pull/18813 4. updated streams error handling https://github.com/nodejs/node/pull/18438 5. writable.end should return this. https://github.com/nodejs/node/pull/18780 6. readable continues to read when push('') https://github.com/nodejs/node/pull/18211 7. add custom inspect to BufferList https://github.com/nodejs/node/pull/17907 8. always defer 'readable' with nextTick https://github.com/nodejs/node/pull/17979 ## Version 2.x.x v2.x.x of `readable-stream` is a cut of the stream module from Node 8 (there have been no semver-major changes from Node 4 to 8). This version supports all Node.js versions from 0.8, as well as evergreen browsers and IE 10 & 11. ### Big Thanks Cross-browser Testing Platform and Open Source <3 Provided by [Sauce Labs][sauce] # Usage You can swap your `require('stream')` with `require('readable-stream')` without any changes, if you are just using one of the main classes and functions. ```js const { Readable, Writable, Transform, Duplex, pipeline, finished } = require('readable-stream') ```` Note that `require('stream')` will return `Stream`, while `require('readable-stream')` will return `Readable`. We discourage using whatever is exported directly, but rather use one of the properties as shown in the example above. # Streams Working Group `readable-stream` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. <a name="members"></a> ## Team Members * **Calvin Metcalf** ([@calvinmetcalf](https://github.com/calvinmetcalf)) &lt;calvin.metcalf@gmail.com&gt; - Release GPG key: F3EF5F62A87FC27A22E643F714CE4FF5015AA242 * **Mathias Buus** ([@mafintosh](https://github.com/mafintosh)) &lt;mathiasbuus@gmail.com&gt; * **Matteo Collina** ([@mcollina](https://github.com/mcollina)) &lt;matteo.collina@gmail.com&gt; - Release GPG key: 3ABC01543F22DD2239285CDD818674489FBC127E * **Irina Shestak** ([@lrlna](https://github.com/lrlna)) &lt;shestak.irina@gmail.com&gt; * **Yoshua Wyuts** ([@yoshuawuyts](https://github.com/yoshuawuyts)) &lt;yoshuawuyts@gmail.com&gt; [sauce]: https://saucelabs.com # pumpify Combine an array of streams into a single duplex stream using [pump](https://github.com/mafintosh/pump) and [duplexify](https://github.com/mafintosh/duplexify). 
If one of the streams closes/errors, all streams in the pipeline will be destroyed.

```
npm install pumpify
```

[![build status](http://img.shields.io/travis/mafintosh/pumpify.svg?style=flat)](http://travis-ci.org/mafintosh/pumpify)

## Usage

Pass the streams you want to pipe together to pumpify, `pipeline = pumpify(s1, s2, s3, ...)`. `pipeline` is a duplex stream that writes to the first stream and reads from the last one. Streams are piped together using [pump](https://github.com/mafintosh/pump), so if one of them closes all streams will be destroyed.

``` js
var pumpify = require('pumpify')
var tar = require('tar-fs')
var zlib = require('zlib')
var fs = require('fs')

var untar = pumpify(zlib.createGunzip(), tar.extract('output-folder'))
// you can also pass an array instead
// var untar = pumpify([zlib.createGunzip(), tar.extract('output-folder')])

fs.createReadStream('some-gzipped-tarball.tgz').pipe(untar)
```

If you are pumping object streams together use `pipeline = pumpify.obj(s1, s2, ...)`. Call `pipeline.destroy()` to destroy the pipeline (including the streams passed to pumpify).

### Using `setPipeline(s1, s2, ...)`

Similar to [duplexify](https://github.com/mafintosh/duplexify), you can also define the pipeline asynchronously using `setPipeline(s1, s2, ...)`

``` js
var untar = pumpify()

setTimeout(function() {
  // will start draining the input now
  untar.setPipeline(zlib.createGunzip(), tar.extract('output-folder'))
}, 1000)

fs.createReadStream('some-gzipped-tarball.tgz').pipe(untar)
```

## License

MIT

## Related

`pumpify` is part of the [mississippi stream utility collection](https://github.com/maxogden/mississippi) which includes more useful stream modules similar to this one.

# Console Control Strings

A library of cross-platform tested terminal/console command strings for doing things like color and cursor positioning. This is a subset of both ansi and vt100. All control codes included work on both Windows & Unix-like OSes, except where noted.

## Usage

```js
var consoleControl = require('console-control-strings')

console.log(consoleControl.color('blue','bgRed', 'bold') + 'hi there' + consoleControl.color('reset'))
process.stdout.write(consoleControl.goto(75, 10))
```

## Why Another?

There are tons of libraries similar to this one. I wanted one that was:

1. Very clear about compatibility goals.
2. Could emit, for instance, a start color code without an end one.
3. Returned strings w/o writing to streams.
4. Was not weighed down with other unrelated baggage.

## Functions

### var code = consoleControl.up(_num = 1_)

Returns the escape sequence to move _num_ lines up.

### var code = consoleControl.down(_num = 1_)

Returns the escape sequence to move _num_ lines down.

### var code = consoleControl.forward(_num = 1_)

Returns the escape sequence to move the cursor _num_ characters to the right.

### var code = consoleControl.back(_num = 1_)

Returns the escape sequence to move the cursor _num_ characters to the left.

### var code = consoleControl.nextLine(_num = 1_)

Returns the escape sequence to move _num_ lines down and to the beginning of the line.

### var code = consoleControl.previousLine(_num = 1_)

Returns the escape sequence to move _num_ lines up and to the beginning of the line.

### var code = consoleControl.eraseData()

Returns the escape sequence to erase everything from the current cursor position to the bottom right of the screen. This is line based, so it erases the remainder of the current line and all following lines.
### var code = consoleControl.eraseLine()

Returns the escape sequence to erase to the end of the current line.

### var code = consoleControl.goto(_x_, _y_)

Returns the escape sequence to move the cursor to the designated position. Note that the origin is _1, 1_ not _0, 0_.

### var code = consoleControl.gotoSOL()

Returns the escape sequence to move the cursor to the beginning of the current line. (That is, it returns a carriage return, `\r`.)

### var code = consoleControl.beep()

Returns the escape sequence to cause the terminal to beep. (That is, it returns unicode character `\x0007`, a Control-G.)

### var code = consoleControl.hideCursor()

Returns the escape sequence to hide the cursor.

### var code = consoleControl.showCursor()

Returns the escape sequence to show the cursor.

### var code = consoleControl.color(_colors = []_)

### var code = consoleControl.color(_color1_, _color2_, _…_, _colorn_)

Returns the escape sequence to set the current terminal display attributes (mostly colors). Arguments can either be a list of attributes or an array of attributes. The difference between passing in an array or list of colors and calling `.color` separately for each one is that in the former case a single escape sequence will be produced, whereas in the latter each change will have its own distinct escape sequence. Each attribute can be one of:

* Reset:
  * **reset** – Reset all attributes to the terminal default.
* Styles:
  * **bold** – Display text as bold. In some terminals this means using a bold font, in others this means changing the color. In some it means both.
  * **italic** – Display text as italic. This is not available in most Windows terminals.
  * **underline** – Underline text. This is not available in most Windows Terminals.
  * **inverse** – Invert the foreground and background colors.
  * **stopBold** – Do not display text as bold.
  * **stopItalic** – Do not display text as italic.
  * **stopUnderline** – Do not underline text.
  * **stopInverse** – Do not invert foreground and background.
* Colors:
  * **white**
  * **black**
  * **blue**
  * **cyan**
  * **green**
  * **magenta**
  * **red**
  * **yellow**
  * **grey** / **brightBlack**
  * **brightRed**
  * **brightGreen**
  * **brightYellow**
  * **brightBlue**
  * **brightMagenta**
  * **brightCyan**
  * **brightWhite**
* Background Colors:
  * **bgWhite**
  * **bgBlack**
  * **bgBlue**
  * **bgCyan**
  * **bgGreen**
  * **bgMagenta**
  * **bgRed**
  * **bgYellow**
  * **bgGrey** / **bgBrightBlack**
  * **bgBrightRed**
  * **bgBrightGreen**
  * **bgBrightYellow**
  * **bgBrightBlue**
  * **bgBrightMagenta**
  * **bgBrightCyan**
  * **bgBrightWhite**

# chownr

Like `chown -R`.

Takes the same arguments as `fs.chown()`

# libnpmhook

[![npm version](https://img.shields.io/npm/v/libnpmhook.svg)](https://npm.im/libnpmhook)
[![license](https://img.shields.io/npm/l/libnpmhook.svg)](https://npm.im/libnpmhook)
[![Travis](https://img.shields.io/travis/npm/libnpmhook.svg)](https://travis-ci.org/npm/libnpmhook)
[![AppVeyor](https://ci.appveyor.com/api/projects/status/github/zkat/libnpmhook?svg=true)](https://ci.appveyor.com/project/zkat/libnpmhook)
[![Coverage Status](https://coveralls.io/repos/github/npm/libnpmhook/badge.svg?branch=latest)](https://coveralls.io/github/npm/libnpmhook?branch=latest)

[`libnpmhook`](https://github.com/npm/libnpmhook) is a Node.js library for programmatically managing the npm registry's server-side hooks. For a more general introduction to managing hooks, see [the introductory blog post](https://blog.npmjs.org/post/145260155635/introducing-hooks-get-notifications-of-npm).
## Example ```js const hooks = require('libnpmhook') console.log(await hooks.ls('mypkg', {token: 'deadbeef'})) // array of hook objects on `mypkg`. ``` ## Install `$ npm install libnpmhook` ## Table of Contents * [Example](#example) * [Install](#install) * [API](#api) * [hook opts](#opts) * [`add()`](#add) * [`rm()`](#rm) * [`ls()`](#ls) * [`ls.stream()`](#ls-stream) * [`update()`](#update) ### API #### <a name="opts"></a> `opts` for `libnpmhook` commands `libnpmhook` uses [`npm-registry-fetch`](https://npm.im/npm-registry-fetch). All options are passed through directly to that library, so please refer to [its own `opts` documentation](https://www.npmjs.com/package/npm-registry-fetch#fetch-options) for options that can be passed in. A couple of options of note for those in a hurry: * `opts.token` - can be passed in and will be used as the authentication token for the registry. For other ways to pass in auth details, see the n-r-f docs. * `opts.otp` - certain operations will require an OTP token to be passed in. If a `libnpmhook` command fails with `err.code === EOTP`, please retry the request with `{otp: <2fa token>}` * `opts.Promise` - If you pass this in, the Promises returned by `libnpmhook` commands will use this Promise class instead. For example: `{Promise: require('bluebird')}` #### <a name="add"></a> `> hooks.add(name, endpoint, secret, [opts]) -> Promise` `name` is the name of the package, org, or user/org scope to watch. The type is determined by the name syntax: `'@foo/bar'` and `'foo'` are treated as packages, `@foo` is treated as a scope, and `~user` is treated as an org name or scope. Each type will attach to different events. The `endpoint` should be a fully-qualified http URL for the endpoint the hook will send its payload to when it fires. `secret` is a shared secret that the hook will send to that endpoint to verify that it's actually coming from the registry hook. The returned Promise resolves to the full hook object that was created, including its generated `id`. See also: [`POST /v1/hooks/hook`](https://github.com/npm/registry/blob/master/docs/hooks/endpoints.md#post-v1hookshook) ##### Example ```javascript await hooks.add('~zkat', 'https://zkat.tech/api/added', 'supersekrit', { token: 'myregistrytoken', otp: '694207' }) => { id: '16f7xoal', username: 'zkat', name: 'zkat', endpoint: 'https://zkat.tech/api/added', secret: 'supersekrit', type: 'owner', created: '2018-08-21T20:05:25.125Z', updated: '2018-08-21T20:05:25.125Z', deleted: false, delivered: false, last_delivery: null, response_code: 0, status: 'active' } ``` #### <a name="find"></a> `> hooks.find(id, [opts]) -> Promise` Returns the hook identified by `id`. The returned Promise resolves to the full hook object that was found, or error with `err.code` of `'E404'` if it didn't exist. See also: [`GET /v1/hooks/hook/:id`](https://github.com/npm/registry/blob/master/docs/hooks/endpoints.md#get-v1hookshookid) ##### Example ```javascript await hooks.find('16f7xoal', {token: 'myregistrytoken'}) => { id: '16f7xoal', username: 'zkat', name: 'zkat', endpoint: 'https://zkat.tech/api/added', secret: 'supersekrit', type: 'owner', created: '2018-08-21T20:05:25.125Z', updated: '2018-08-21T20:05:25.125Z', deleted: false, delivered: false, last_delivery: null, response_code: 0, status: 'active' } ``` #### <a name="rm"></a> `> hooks.rm(id, [opts]) -> Promise` Removes the hook identified by `id`. 
The returned Promise resolves to the full hook object that was removed, if it existed, or `null` if no such hook was there (instead of erroring). See also: [`DELETE /v1/hooks/hook/:id`](https://github.com/npm/registry/blob/master/docs/hooks/endpoints.md#delete-v1hookshookid) ##### Example ```javascript await hooks.rm('16f7xoal', { token: 'myregistrytoken', otp: '694207' }) => { id: '16f7xoal', username: 'zkat', name: 'zkat', endpoint: 'https://zkat.tech/api/added', secret: 'supersekrit', type: 'owner', created: '2018-08-21T20:05:25.125Z', updated: '2018-08-21T20:05:25.125Z', deleted: true, delivered: false, last_delivery: null, response_code: 0, status: 'active' } // Repeat it... await hooks.rm('16f7xoal', { token: 'myregistrytoken', otp: '694207' }) => null ``` #### <a name="update"></a> `> hooks.update(id, endpoint, secret, [opts]) -> Promise` The `id` should be a hook ID from a previously-created hook. The `endpoint` should be a fully-qualified http URL for the endpoint the hook will send its payload to when it fires. `secret` is a shared secret that the hook will send to that endpoint to verify that it's actually coming from the registry hook. The returned Promise resolves to the full hook object that was updated, if it existed. Otherwise, it will error with an `'E404'` error code. See also: [`PUT /v1/hooks/hook/:id`](https://github.com/npm/registry/blob/master/docs/hooks/endpoints.md#put-v1hookshookid) ##### Example ```javascript await hooks.update('16fxoal', 'https://zkat.tech/api/other', 'newsekrit', { token: 'myregistrytoken', otp: '694207' }) => { id: '16f7xoal', username: 'zkat', name: 'zkat', endpoint: 'https://zkat.tech/api/other', secret: 'newsekrit', type: 'owner', created: '2018-08-21T20:05:25.125Z', updated: '2018-08-21T20:14:41.964Z', deleted: false, delivered: false, last_delivery: null, response_code: 0, status: 'active' } ``` #### <a name="ls"></a> `> hooks.ls([opts]) -> Promise` Resolves to an array of hook objects associated with the account you're authenticated as. Results can be further filtered with three values that can be passed in through `opts`: * `opts.package` - filter results by package name * `opts.limit` - maximum number of hooks to return * `opts.offset` - pagination offset for results (use with `opts.limit`) See also: * [`hooks.ls.stream()`](#ls-stream) * [`GET /v1/hooks`](https://github.com/npm/registry/blob/master/docs/hooks/endpoints.md#get-v1hooks) ##### Example ```javascript await hooks.ls({token: 'myregistrytoken'}) => [ { id: '16f7xoal', ... }, { id: 'wnyf98a1', ... }, ... ] ``` #### <a name="ls-stream"></a> `> hooks.ls.stream([opts]) -> Stream` Returns a stream of hook objects associated with the account you're authenticated as. The returned stream is a valid `Symbol.asyncIterator` on `node@>=10`. Results can be further filtered with three values that can be passed in through `opts`: * `opts.package` - filter results by package name * `opts.limit` - maximum number of hooks to return * `opts.offset` - pagination offset for results (use with `opts.limit`) See also: * [`hooks.ls()`](#ls) * [`GET /v1/hooks`](https://github.com/npm/registry/blob/master/docs/hooks/endpoints.md#get-v1hooks) ##### Example ```javascript for await (let hook of hooks.ls.stream({token: 'myregistrytoken'})) { console.log('found hook:', hook.id) } => // outputs: // found hook: 16f7xoal // found hook: wnyf98a1 ``` An ini format parser and serializer for node. Sections are treated as nested objects. Items before the first heading are saved on the object directly. 
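Before the fuller usage walkthrough below, here is a minimal sketch (illustrative values, not taken from the package docs) of the object shape `ini.parse` produces, showing a top-level key landing on the object itself and a section becoming a nested object:

```js
var ini = require('ini')

// Keys before the first [section] are saved on the object directly;
// each [section] becomes a nested object.
var parsed = ini.parse('scope = global\n[database]\nuser = dbuser\n')
// => { scope: 'global', database: { user: 'dbuser' } }
```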
## Usage Consider an ini-file `config.ini` that looks like this: ; this comment is being ignored scope = global [database] user = dbuser password = dbpassword database = use_this_database [paths.default] datadir = /var/lib/data array[] = first value array[] = second value array[] = third value You can read, manipulate and write the ini-file like so: var fs = require('fs') , ini = require('ini') var config = ini.parse(fs.readFileSync('./config.ini', 'utf-8')) config.scope = 'local' config.database.database = 'use_another_database' config.paths.default.tmpdir = '/tmp' delete config.paths.default.datadir config.paths.default.array.push('fourth value') fs.writeFileSync('./config_modified.ini', ini.stringify(config, { section: 'section' })) This will result in a file called `config_modified.ini` being written to the filesystem with the following content: [section] scope=local [section.database] user=dbuser password=dbpassword database=use_another_database [section.paths.default] tmpdir=/tmp array[]=first value array[]=second value array[]=third value array[]=fourth value ## API ### decode(inistring) Decode the ini-style formatted `inistring` into a nested object. ### parse(inistring) Alias for `decode(inistring)` ### encode(object, [options]) Encode the object `object` into an ini-style formatted string. If the optional parameter `section` is given, then all top-level properties of the object are put into this section and the `section`-string is prepended to all sub-sections, see the usage example above. The `options` object may contain the following: * `section` A string which will be the first `section` in the encoded ini data. Defaults to none. * `whitespace` Boolean to specify whether to put whitespace around the `=` character. By default, whitespace is omitted, to be friendly to some persnickety old parsers that don't tolerate it well. But some find that it's more human-readable and pretty with the whitespace. For backwards compatibility reasons, if a `string` options is passed in, then it is assumed to be the `section` value. ### stringify(object, [options]) Alias for `encode(object, [options])` ### safe(val) Escapes the string `val` such that it is safe to be used as a key or value in an ini-file. Basically escapes quotes. For example ini.safe('"unsafe string"') would result in "\"unsafe string\"" ### unsafe(val) Unescapes the string `val` # cliui [![Build Status](https://travis-ci.org/yargs/cliui.svg)](https://travis-ci.org/yargs/cliui) [![Coverage Status](https://coveralls.io/repos/yargs/cliui/badge.svg?branch=)](https://coveralls.io/r/yargs/cliui?branch=) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) easily create complex multi-column command-line-interfaces. ## Example ```js var ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 2, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. 
* `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. # mime-db [![NPM Version][npm-version-image]][npm-url] [![NPM Downloads][npm-downloads-image]][npm-url] [![Node.js Version][node-image]][node-url] [![Build Status][travis-image]][travis-url] [![Coverage Status][coveralls-image]][coveralls-url] This is a database of all mime types. It consists of a single, public JSON file and does not include any logic, allowing it to remain as un-opinionated as possible with an API. It aggregates data from the following sources: - http://www.iana.org/assignments/media-types/media-types.xhtml - http://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/conf/mime.types - http://hg.nginx.org/nginx/raw-file/default/conf/mime.types ## Installation ```bash npm install mime-db ``` ### Database Download If you're crazy enough to use this in the browser, you can just grab the JSON file using [RawGit](https://rawgit.com/). It is recommended to replace `master` with [a release tag](https://github.com/jshttp/mime-db/tags) as the JSON format may change in the future. ``` https://cdn.rawgit.com/jshttp/mime-db/master/db.json ``` ## Usage ```js var db = require('mime-db'); // grab data on .js files var data = db['application/javascript']; ``` ## Data Structure The JSON file is a map lookup for lowercased mime types. Each mime type has the following properties: - `.source` - where the mime type is defined. If not set, it's probably a custom media type. - `apache` - [Apache common media types](http://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/conf/mime.types) - `iana` - [IANA-defined media types](http://www.iana.org/assignments/media-types/media-types.xhtml) - `nginx` - [nginx media types](http://hg.nginx.org/nginx/raw-file/default/conf/mime.types) - `.extensions[]` - known extensions associated with this mime type. - `.compressible` - whether a file of this type can be gzipped. - `.charset` - the default charset associated with this type, if any. If unknown, every property could be `undefined`. ## Contributing To edit the database, only make PRs against `src/custom.json` or `src/custom-suffix.json`. 
The `src/custom.json` file is a JSON object with the MIME type as the keys and the values being an object with the following keys: - `compressible` - leave out if you don't know, otherwise `true`/`false` to indicate whether the data represented by the type is typically compressible. - `extensions` - include an array of file extensions that are associated with the type. - `notes` - human-readable notes about the type, typically what the type is. - `sources` - include an array of URLs of where the MIME type and the associated extensions are sourced from. This needs to be a [primary source](https://en.wikipedia.org/wiki/Primary_source); links to type aggregating sites and Wikipedia are _not acceptable_. To update the build, run `npm run build`. ## Adding Custom Media Types The best way to get new media types included in this library is to register them with the IANA. The community registration procedure is outlined in [RFC 6838 section 5](http://tools.ietf.org/html/rfc6838#section-5). Types registered with the IANA are automatically pulled into this library. [npm-version-image]: https://img.shields.io/npm/v/mime-db.svg [npm-downloads-image]: https://img.shields.io/npm/dm/mime-db.svg [npm-url]: https://npmjs.org/package/mime-db [travis-image]: https://img.shields.io/travis/jshttp/mime-db/master.svg [travis-url]: https://travis-ci.org/jshttp/mime-db [coveralls-image]: https://img.shields.io/coveralls/jshttp/mime-db/master.svg [coveralls-url]: https://coveralls.io/r/jshttp/mime-db?branch=master [node-image]: https://img.shields.io/node/v/mime-db.svg [node-url]: https://nodejs.org/en/download/ aproba ====== A ridiculously light-weight function argument validator ``` var validate = require("aproba") function myfunc(a, b, c) { // `a` must be a string, `b` a number, `c` a function validate('SNF', arguments) // [a,b,c] is also valid } myfunc('test', 23, function () {}) // ok myfunc(123, 23, function () {}) // type error myfunc('test', 23) // missing arg error myfunc('test', 23, function () {}, true) // too many args error ``` Valid types are: | type | description | :--: | :---------- | * | matches any type | A | `Array.isArray` OR an `arguments` object | S | typeof == string | N | typeof == number | F | typeof == function | O | typeof == object and not type A and not type E | B | typeof == boolean | E | `instanceof Error` OR `null` **(special: see below)** | Z | == `null` Validation failures throw one of three exception types, distinguished by a `code` property of `EMISSINGARG`, `EINVALIDTYPE` or `ETOOMANYARGS`. If you pass in an invalid type then it will throw with a code of `EUNKNOWNTYPE`. If an **error** argument is found and is not null then the remaining arguments are optional. That is, if you say `ESO` then that's like using a non-magical `E` in: `E|ESO|ZSO`. ### But I have optional arguments?! You can provide more than one signature by separating them with pipes `|`. If any signature matches the arguments then they'll be considered valid. So for example, say you wanted to write a signature for `fs.createWriteStream`. The docs for it describe it thusly: ``` fs.createWriteStream(path[, options]) ``` This would be a signature of `SO|S`. That is, a string and and object, or just a string. Now, if you read the full `fs` docs, you'll see that actually path can ALSO be a buffer. 
And options can be a string, that is: ``` path <String> | <Buffer> options <String> | <Object> ``` To reproduce this you have to fully enumerate all of the possible combinations and that implies a signature of `SO|SS|OO|OS|S|O`. The awkwardness is a feature: It reminds you of the complexity you're adding to your API when you do this sort of thing. ### Browser support This has no dependencies and should work in browsers, though you'll have noisier stack traces. ### Why this exists I wanted a very simple argument validator. It needed to do two things: 1. Be more concise and easier to use than assertions 2. Not encourage an infinite bikeshed of DSLs This is why types are specified by a single character and there's no such thing as an optional argument. This is not intended to validate user data. This is specifically about asserting the interface of your functions. If you need greater validation, I encourage you to write them by hand or look elsewhere. # lodash.uniq v4.5.0 The [lodash](https://lodash.com/) method `_.uniq` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.uniq ``` In Node.js: ```js var uniq = require('lodash.uniq'); ``` See the [documentation](https://lodash.com/docs#uniq) or [package source](https://github.com/lodash/lodash/blob/4.5.0-npm-packages/lodash.uniq) for more details. # json-stringify-safe Like JSON.stringify, but doesn't throw on circular references. ## Usage Takes the same arguments as `JSON.stringify`. ```javascript var stringify = require('json-stringify-safe'); var circularObj = {}; circularObj.circularRef = circularObj; circularObj.list = [ circularObj, circularObj ]; console.log(stringify(circularObj, null, 2)); ``` Output: ```json { "circularRef": "[Circular]", "list": [ "[Circular]", "[Circular]" ] } ``` ## Details ``` stringify(obj, serializer, indent, decycler) ``` The first three arguments are the same as to JSON.stringify. The last is an argument that's only used when the object has been seen already. The default `decycler` function returns the string `'[Circular]'`. If, for example, you pass in `function(k,v){}` (return nothing) then it will prune cycles. If you pass in `function(k,v){ return {foo: 'bar'}}`, then cyclical objects will always be represented as `{"foo":"bar"}` in the result. ``` stringify.getSerialize(serializer, decycler) ``` Returns a serializer that can be used elsewhere. This is the actual function that's passed to JSON.stringify. **Note** that the function returned from `getSerialize` is stateful for now, so do **not** use it more than once. # clone [![build status](https://secure.travis-ci.org/pvorb/node-clone.png)](http://travis-ci.org/pvorb/node-clone) [![info badge](https://nodei.co/npm/clone.png?downloads=true&downloadRank=true&stars=true)](http://npm-stat.com/charts.html?package=clone) offers foolproof _deep cloning_ of objects, arrays, numbers, strings etc. in JavaScript. ## Installation npm install clone (It also works with browserify, ender or standalone.) ## Example ~~~ javascript var clone = require('clone'); var a, b; a = { foo: { bar: 'baz' } }; // initial value of a b = clone(a); // clone a -> b a.foo.bar = 'foo'; // change a console.log(a); // show a console.log(b); // show b ~~~ This will print: ~~~ javascript { foo: { bar: 'foo' } } { foo: { bar: 'baz' } } ~~~ **clone** masters cloning simple objects (even with custom prototype), arrays, Date objects, and RegExp objects. 
Everything is cloned recursively, so that you can clone dates in arrays in objects, for example. ## API `clone(val, circular, depth)` * `val` -- the value that you want to clone, any type allowed * `circular` -- boolean Call `clone` with `circular` set to `false` if you are certain that `obj` contains no circular references. This will give better performance if needed. There is no error if `undefined` or `null` is passed as `obj`. * `depth` -- depth to which the object is to be cloned (optional, defaults to infinity) `clone.clonePrototype(obj)` * `obj` -- the object that you want to clone Does a prototype clone as [described by Oran Looney](http://oranlooney.com/functional-javascript/). ## Circular References ~~~ javascript var a, b; a = { hello: 'world' }; a.myself = a; b = clone(a); console.log(b); ~~~ This will print: ~~~ javascript { hello: "world", myself: [Circular] } ~~~ So, `b.myself` points to `b`, not `a`. Neat! ## Test npm test ## Caveat Some special objects like a socket or `process.stdout`/`stderr` are known to not be cloneable. If you find other objects that cannot be cloned, please [open an issue](https://github.com/pvorb/node-clone/issues/new). ## Bugs and Issues If you encounter any bugs or issues, feel free to [open an issue at github](https://github.com/pvorb/node-clone/issues) or send me an email to <paul@vorba.ch>. I also always like to hear from you, if you’re using my code. ## License Copyright © 2011-2015 [Paul Vorbach](http://paul.vorba.ch/) and [contributors](https://github.com/pvorb/node-clone/graphs/contributors). Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # fs-minipass Filesystem streams based on [minipass](http://npm.im/minipass). 4 classes are exported: - ReadStream - ReadStreamSync - WriteStream - WriteStreamSync When using `ReadStreamSync`, all of the data is made available immediately upon consuming the stream. Nothing is buffered in memory when the stream is constructed. If the stream is piped to a writer, then it will synchronously `read()` and emit data into the writer as fast as the writer can consume it. (That is, it will respect backpressure.) If you call `stream.read()` then it will read the entire file and return the contents. When using `WriteStreamSync`, every write is flushed to the file synchronously. If your writes all come in a single tick, then it'll write it all out in a single tick. It's as synchronous as you are. The async versions work much like their node builtin counterparts, with the exception of introducing significantly less Stream machinery overhead. 
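As a quick illustration of the synchronous variants described above, here is a minimal sketch; it assumes `file.txt` exists, and `copy.txt` is a hypothetical output path. The async usage follows in the section below.

```js
const fsm = require('fs-minipass')

// ReadStreamSync: the whole file is available as soon as you consume the
// stream, so a single read() returns the entire contents (a Buffer here).
const contents = new fsm.ReadStreamSync('file.txt').read()

// WriteStreamSync: every write is flushed to the file synchronously.
const out = new fsm.WriteStreamSync('copy.txt')
out.write(contents)
out.end()
```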
## USAGE It's just streams, you pipe them or read() them or write() to them. ```js const fsm = require('fs-minipass') const readStream = new fsm.ReadStream('file.txt') const writeStream = new fsm.WriteStream('output.txt') writeStream.write('some file header or whatever\n') readStream.pipe(writeStream) ``` ## ReadStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `readSize` The size of reads to do, defaults to 16MB - `size` The size of the file, if known. Prevents zero-byte read() call at the end. - `autoClose` Set to `false` to prevent the file descriptor from being closed when the file is done being read. ## WriteStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `mode` The mode to create the file with. Defaults to `0o666`. - `start` The position in the file to start reading. If not specified, then the file will start writing at position zero, and be truncated by default. - `autoClose` Set to `false` to prevent the file descriptor from being closed when the stream is ended. - `flags` Flags to use when opening the file. Irrelevant if `fd` is passed in, since file won't be opened in that case. Defaults to `'a'` if a `pos` is specified, or `'w'` otherwise. # qs <sup>[![Version Badge][2]][1]</sup> [![Build Status][3]][4] [![dependency status][5]][6] [![dev dependency status][7]][8] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][11]][1] A querystring parsing and stringifying library with some added security. Lead Maintainer: [Jordan Harband](https://github.com/ljharb) The **qs** module was originally created and maintained by [TJ Holowaychuk](https://github.com/visionmedia/node-querystring). ## Usage ```javascript var qs = require('qs'); var assert = require('assert'); var obj = qs.parse('a=c'); assert.deepEqual(obj, { a: 'c' }); var str = qs.stringify(obj); assert.equal(str, 'a=c'); ``` ### Parsing Objects [](#preventEval) ```javascript qs.parse(string, [options]); ``` **qs** allows you to create nested objects within your query strings, by surrounding the name of sub-keys with square brackets `[]`. For example, the string `'foo[bar]=baz'` converts to: ```javascript assert.deepEqual(qs.parse('foo[bar]=baz'), { foo: { bar: 'baz' } }); ``` When using the `plainObjects` option the parsed value is returned as a null object, created via `Object.create(null)` and as such you should be aware that prototype methods will not exist on it and a user may set those names to whatever value they like: ```javascript var nullObject = qs.parse('a[hasOwnProperty]=b', { plainObjects: true }); assert.deepEqual(nullObject, { a: { hasOwnProperty: 'b' } }); ``` By default parameters that would overwrite properties on the object prototype are ignored, if you wish to keep the data from those fields either use `plainObjects` as mentioned above, or set `allowPrototypes` to `true` which will allow user input to overwrite those properties. *WARNING* It is generally a bad idea to enable this option as it can cause problems when attempting to use the properties that have been overwritten. Always be careful with this option. 
```javascript var protoObject = qs.parse('a[hasOwnProperty]=b', { allowPrototypes: true }); assert.deepEqual(protoObject, { a: { hasOwnProperty: 'b' } }); ``` URI encoded strings work too: ```javascript assert.deepEqual(qs.parse('a%5Bb%5D=c'), { a: { b: 'c' } }); ``` You can also nest your objects, like `'foo[bar][baz]=foobarbaz'`: ```javascript assert.deepEqual(qs.parse('foo[bar][baz]=foobarbaz'), { foo: { bar: { baz: 'foobarbaz' } } }); ``` By default, when nesting objects **qs** will only parse up to 5 children deep. This means if you attempt to parse a string like `'a[b][c][d][e][f][g][h][i]=j'` your resulting object will be: ```javascript var expected = { a: { b: { c: { d: { e: { f: { '[g][h][i]': 'j' } } } } } } }; var string = 'a[b][c][d][e][f][g][h][i]=j'; assert.deepEqual(qs.parse(string), expected); ``` This depth can be overridden by passing a `depth` option to `qs.parse(string, [options])`: ```javascript var deep = qs.parse('a[b][c][d][e][f][g][h][i]=j', { depth: 1 }); assert.deepEqual(deep, { a: { b: { '[c][d][e][f][g][h][i]': 'j' } } }); ``` The depth limit helps mitigate abuse when **qs** is used to parse user input, and it is recommended to keep it a reasonably small number. For similar reasons, by default **qs** will only parse up to 1000 parameters. This can be overridden by passing a `parameterLimit` option: ```javascript var limited = qs.parse('a=b&c=d', { parameterLimit: 1 }); assert.deepEqual(limited, { a: 'b' }); ``` To bypass the leading question mark, use `ignoreQueryPrefix`: ```javascript var prefixed = qs.parse('?a=b&c=d', { ignoreQueryPrefix: true }); assert.deepEqual(prefixed, { a: 'b', c: 'd' }); ``` An optional delimiter can also be passed: ```javascript var delimited = qs.parse('a=b;c=d', { delimiter: ';' }); assert.deepEqual(delimited, { a: 'b', c: 'd' }); ``` Delimiters can be a regular expression too: ```javascript var regexed = qs.parse('a=b;c=d,e=f', { delimiter: /[;,]/ }); assert.deepEqual(regexed, { a: 'b', c: 'd', e: 'f' }); ``` Option `allowDots` can be used to enable dot notation: ```javascript var withDots = qs.parse('a.b=c', { allowDots: true }); assert.deepEqual(withDots, { a: { b: 'c' } }); ``` ### Parsing Arrays **qs** can also parse arrays using a similar `[]` notation: ```javascript var withArray = qs.parse('a[]=b&a[]=c'); assert.deepEqual(withArray, { a: ['b', 'c'] }); ``` You may specify an index as well: ```javascript var withIndexes = qs.parse('a[1]=c&a[0]=b'); assert.deepEqual(withIndexes, { a: ['b', 'c'] }); ``` Note that the only difference between an index in an array and a key in an object is that the value between the brackets must be a number to create an array. When creating arrays with specific indices, **qs** will compact a sparse array to only the existing values preserving their order: ```javascript var noSparse = qs.parse('a[1]=b&a[15]=c'); assert.deepEqual(noSparse, { a: ['b', 'c'] }); ``` Note that an empty string is also a value, and will be preserved: ```javascript var withEmptyString = qs.parse('a[]=&a[]=b'); assert.deepEqual(withEmptyString, { a: ['', 'b'] }); var withIndexedEmptyString = qs.parse('a[0]=b&a[1]=&a[2]=c'); assert.deepEqual(withIndexedEmptyString, { a: ['b', '', 'c'] }); ``` **qs** will also limit specifying indices in an array to a maximum index of `20`. 
Any array members with an index of greater than `20` will instead be converted to an object with the index as the key: ```javascript var withMaxIndex = qs.parse('a[100]=b'); assert.deepEqual(withMaxIndex, { a: { '100': 'b' } }); ``` This limit can be overridden by passing an `arrayLimit` option: ```javascript var withArrayLimit = qs.parse('a[1]=b', { arrayLimit: 0 }); assert.deepEqual(withArrayLimit, { a: { '1': 'b' } }); ``` To disable array parsing entirely, set `parseArrays` to `false`. ```javascript var noParsingArrays = qs.parse('a[]=b', { parseArrays: false }); assert.deepEqual(noParsingArrays, { a: { '0': 'b' } }); ``` If you mix notations, **qs** will merge the two items into an object: ```javascript var mixedNotation = qs.parse('a[0]=b&a[b]=c'); assert.deepEqual(mixedNotation, { a: { '0': 'b', b: 'c' } }); ``` You can also create arrays of objects: ```javascript var arraysOfObjects = qs.parse('a[][b]=c'); assert.deepEqual(arraysOfObjects, { a: [{ b: 'c' }] }); ``` ### Stringifying [](#preventEval) ```javascript qs.stringify(object, [options]); ``` When stringifying, **qs** by default URI encodes output. Objects are stringified as you would expect: ```javascript assert.equal(qs.stringify({ a: 'b' }), 'a=b'); assert.equal(qs.stringify({ a: { b: 'c' } }), 'a%5Bb%5D=c'); ``` This encoding can be disabled by setting the `encode` option to `false`: ```javascript var unencoded = qs.stringify({ a: { b: 'c' } }, { encode: false }); assert.equal(unencoded, 'a[b]=c'); ``` Encoding can be disabled for keys by setting the `encodeValuesOnly` option to `true`: ```javascript var encodedValues = qs.stringify( { a: 'b', c: ['d', 'e=f'], f: [['g'], ['h']] }, { encodeValuesOnly: true } ); assert.equal(encodedValues,'a=b&c[0]=d&c[1]=e%3Df&f[0][0]=g&f[1][0]=h'); ``` This encoding can also be replaced by a custom encoding method set as `encoder` option: ```javascript var encoded = qs.stringify({ a: { b: 'c' } }, { encoder: function (str) { // Passed in values `a`, `b`, `c` return // Return encoded string }}) ``` _(Note: the `encoder` option does not apply if `encode` is `false`)_ Analogue to the `encoder` there is a `decoder` option for `parse` to override decoding of properties and values: ```javascript var decoded = qs.parse('x=z', { decoder: function (str) { // Passed in values `x`, `z` return // Return decoded string }}) ``` Examples beyond this point will be shown as though the output is not URI encoded for clarity. Please note that the return values in these cases *will* be URI encoded during real usage. 
When arrays are stringified, by default they are given explicit indices: ```javascript qs.stringify({ a: ['b', 'c', 'd'] }); // 'a[0]=b&a[1]=c&a[2]=d' ``` You may override this by setting the `indices` option to `false`: ```javascript qs.stringify({ a: ['b', 'c', 'd'] }, { indices: false }); // 'a=b&a=c&a=d' ``` You may use the `arrayFormat` option to specify the format of the output array: ```javascript qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'indices' }) // 'a[0]=b&a[1]=c' qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'brackets' }) // 'a[]=b&a[]=c' qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'repeat' }) // 'a=b&a=c' ``` When objects are stringified, by default they use bracket notation: ```javascript qs.stringify({ a: { b: { c: 'd', e: 'f' } } }); // 'a[b][c]=d&a[b][e]=f' ``` You may override this to use dot notation by setting the `allowDots` option to `true`: ```javascript qs.stringify({ a: { b: { c: 'd', e: 'f' } } }, { allowDots: true }); // 'a.b.c=d&a.b.e=f' ``` Empty strings and null values will omit the value, but the equals sign (=) remains in place: ```javascript assert.equal(qs.stringify({ a: '' }), 'a='); ``` Key with no values (such as an empty object or array) will return nothing: ```javascript assert.equal(qs.stringify({ a: [] }), ''); assert.equal(qs.stringify({ a: {} }), ''); assert.equal(qs.stringify({ a: [{}] }), ''); assert.equal(qs.stringify({ a: { b: []} }), ''); assert.equal(qs.stringify({ a: { b: {}} }), ''); ``` Properties that are set to `undefined` will be omitted entirely: ```javascript assert.equal(qs.stringify({ a: null, b: undefined }), 'a='); ``` The query string may optionally be prepended with a question mark: ```javascript assert.equal(qs.stringify({ a: 'b', c: 'd' }, { addQueryPrefix: true }), '?a=b&c=d'); ``` The delimiter may be overridden with stringify as well: ```javascript assert.equal(qs.stringify({ a: 'b', c: 'd' }, { delimiter: ';' }), 'a=b;c=d'); ``` If you only want to override the serialization of `Date` objects, you can provide a `serializeDate` option: ```javascript var date = new Date(7); assert.equal(qs.stringify({ a: date }), 'a=1970-01-01T00:00:00.007Z'.replace(/:/g, '%3A')); assert.equal( qs.stringify({ a: date }, { serializeDate: function (d) { return d.getTime(); } }), 'a=7' ); ``` You may use the `sort` option to affect the order of parameter keys: ```javascript function alphabeticalSort(a, b) { return a.localeCompare(b); } assert.equal(qs.stringify({ a: 'c', z: 'y', b : 'f' }, { sort: alphabeticalSort }), 'a=c&b=f&z=y'); ``` Finally, you can use the `filter` option to restrict which keys will be included in the stringified output. If you pass a function, it will be called for each key to obtain the replacement value. Otherwise, if you pass an array, it will be used to select properties and array indices for stringification: ```javascript function filterFunc(prefix, value) { if (prefix == 'b') { // Return an `undefined` value to omit a property. 
return; } if (prefix == 'e[f]') { return value.getTime(); } if (prefix == 'e[g][0]') { return value * 2; } return value; } qs.stringify({ a: 'b', c: 'd', e: { f: new Date(123), g: [2] } }, { filter: filterFunc }); // 'a=b&c=d&e[f]=123&e[g][0]=4' qs.stringify({ a: 'b', c: 'd', e: 'f' }, { filter: ['a', 'e'] }); // 'a=b&e=f' qs.stringify({ a: ['b', 'c', 'd'], e: 'f' }, { filter: ['a', 0, 2] }); // 'a[0]=b&a[2]=d' ``` ### Handling of `null` values By default, `null` values are treated like empty strings: ```javascript var withNull = qs.stringify({ a: null, b: '' }); assert.equal(withNull, 'a=&b='); ``` Parsing does not distinguish between parameters with and without equal signs. Both are converted to empty strings. ```javascript var equalsInsensitive = qs.parse('a&b='); assert.deepEqual(equalsInsensitive, { a: '', b: '' }); ``` To distinguish between `null` values and empty strings use the `strictNullHandling` flag. In the result string the `null` values have no `=` sign: ```javascript var strictNull = qs.stringify({ a: null, b: '' }, { strictNullHandling: true }); assert.equal(strictNull, 'a&b='); ``` To parse values without `=` back to `null` use the `strictNullHandling` flag: ```javascript var parsedStrictNull = qs.parse('a&b=', { strictNullHandling: true }); assert.deepEqual(parsedStrictNull, { a: null, b: '' }); ``` To completely skip rendering keys with `null` values, use the `skipNulls` flag: ```javascript var nullsSkipped = qs.stringify({ a: 'b', c: null}, { skipNulls: true }); assert.equal(nullsSkipped, 'a=b'); ``` ### Dealing with special character sets By default the encoding and decoding of characters is done in `utf-8`. If you wish to encode querystrings to a different character set (i.e. [Shift JIS](https://en.wikipedia.org/wiki/Shift_JIS)) you can use the [`qs-iconv`](https://github.com/martinheidegger/qs-iconv) library: ```javascript var encoder = require('qs-iconv/encoder')('shift_jis'); var shiftJISEncoded = qs.stringify({ a: 'こんにちは!' }, { encoder: encoder }); assert.equal(shiftJISEncoded, 'a=%82%B1%82%F1%82%C9%82%BF%82%CD%81I'); ``` This also works for decoding of query strings: ```javascript var decoder = require('qs-iconv/decoder')('shift_jis'); var obj = qs.parse('a=%82%B1%82%F1%82%C9%82%BF%82%CD%81I', { decoder: decoder }); assert.deepEqual(obj, { a: 'こんにちは!' }); ``` ### RFC 3986 and RFC 1738 space encoding RFC3986 used as default option and encodes ' ' to *%20* which is backward compatible. In the same time, output can be stringified as per RFC1738 with ' ' equal to '+'. 
``` assert.equal(qs.stringify({ a: 'b c' }), 'a=b%20c'); assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC3986' }), 'a=b%20c'); assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC1738' }), 'a=b+c'); ``` [1]: https://npmjs.org/package/qs [2]: http://versionbadg.es/ljharb/qs.svg [3]: https://api.travis-ci.org/ljharb/qs.svg [4]: https://travis-ci.org/ljharb/qs [5]: https://david-dm.org/ljharb/qs.svg [6]: https://david-dm.org/ljharb/qs [7]: https://david-dm.org/ljharb/qs/dev-status.svg [8]: https://david-dm.org/ljharb/qs?type=dev [9]: https://ci.testling.com/ljharb/qs.png [10]: https://ci.testling.com/ljharb/qs [11]: https://nodei.co/npm/qs.png?downloads=true&stars=true [license-image]: http://img.shields.io/npm/l/qs.svg [license-url]: LICENSE [downloads-image]: http://img.shields.io/npm/dm/qs.svg [downloads-url]: http://npm-stat.com/charts.html?package=qs cli-table3 =============================================================================== [![npm version](https://img.shields.io/npm/v/cli-table3.svg)](https://www.npmjs.com/package/cli-table3) [![Build Status](https://travis-ci.com/cli-table/cli-table3.svg?branch=master)](https://travis-ci.com/cli-table/cli-table3) This utility allows you to render unicode-aided tables on the command line from your node.js scripts. `cli-table3` is based on (and api compatible with) the original [cli-table](https://github.com/Automattic/cli-table), and [cli-table2](https://github.com/jamestalmage/cli-table2), which are both unmaintained. `cli-table3` includes all the additional features from `cli-table2`. ![Screenshot](http://i.imgur.com/sYq4T.png) ## Features not in the original cli-table - Ability to make cells span columns and/or rows. - Ability to set custom styles per cell (border characters/colors, padding, etc). - Vertical alignment (top, bottom, center). - Automatic word wrapping. - More robust truncation of cell text that contains ansi color characters. - Better handling of text color that spans multiple lines. - API compatible with the original cli-table. - Exhaustive test suite including the entire original cli-table test suite. - Lots of examples auto-generated from the tests ([basic](https://github.com/cli-table/cli-table3/blob/master/basic-usage.md), [advanced](https://github.com/cli-table/cli-table3/blob/master/advanced-usage.md)). ## Features - Customizable characters that constitute the table. - Color/background styling in the header through [colors.js](http://github.com/marak/colors.js) - Column width customization - Text truncation based on predefined widths - Text alignment (left, right, center) - Padding (left, right) - Easy-to-use API ## Installation ```bash npm install cli-table3 ``` ## How to use A portion of the unit test suite is used to generate examples: - [basic-usage](https://github.com/cli-table/cli-table3/blob/master/basic-usage.md) - covers basic uses. - [advanced](https://github.com/cli-table/cli-table3/blob/master/advanced-usage.md) - covers using the new column and row span features. This package is api compatible with the original [cli-table](https://github.com/Automattic/cli-table). So all the original documentation still applies (copied below). 
### Horizontal Tables ```javascript var Table = require('cli-table3'); // instantiate var table = new Table({ head: ['TH 1 label', 'TH 2 label'] , colWidths: [100, 200] }); // table is an Array, so you can `push`, `unshift`, `splice` and friends table.push( ['First value', 'Second value'] , ['First value', 'Second value'] ); console.log(table.toString()); ``` ### Vertical Tables ```javascript var Table = require('cli-table3'); var table = new Table(); table.push( { 'Some key': 'Some value' } , { 'Another key': 'Another value' } ); console.log(table.toString()); ``` ### Cross Tables Cross tables are very similar to vertical tables, with two key differences: 1. They require a `head` setting when instantiated that has an empty string as the first header 2. The individual rows take the general form of { "Header": ["Row", "Values"] } ```javascript var Table = require('cli-table3'); var table = new Table({ head: ["", "Top Header 1", "Top Header 2"] }); table.push( { 'Left Header 1': ['Value Row 1 Col 1', 'Value Row 1 Col 2'] } , { 'Left Header 2': ['Value Row 2 Col 1', 'Value Row 2 Col 2'] } ); console.log(table.toString()); ``` ### Custom styles The ```chars``` property controls how the table is drawn: ```javascript var table = new Table({ chars: { 'top': '═' , 'top-mid': '╤' , 'top-left': '╔' , 'top-right': '╗' , 'bottom': '═' , 'bottom-mid': '╧' , 'bottom-left': '╚' , 'bottom-right': '╝' , 'left': '║' , 'left-mid': '╟' , 'mid': '─' , 'mid-mid': '┼' , 'right': '║' , 'right-mid': '╢' , 'middle': '│' } }); table.push( ['foo', 'bar', 'baz'] , ['frob', 'bar', 'quuz'] ); console.log(table.toString()); // Outputs: // //╔══════╤═════╤══════╗ //║ foo │ bar │ baz ║ //╟──────┼─────┼──────╢ //║ frob │ bar │ quuz ║ //╚══════╧═════╧══════╝ ``` Empty decoration lines will be skipped, to avoid vertical separator rows just set the 'mid', 'left-mid', 'mid-mid', 'right-mid' to the empty string: ```javascript var table = new Table({ chars: {'mid': '', 'left-mid': '', 'mid-mid': '', 'right-mid': ''} }); table.push( ['foo', 'bar', 'baz'] , ['frobnicate', 'bar', 'quuz'] ); console.log(table.toString()); // Outputs: (note the lack of the horizontal line between rows) //┌────────────┬─────┬──────┐ //│ foo │ bar │ baz │ //│ frobnicate │ bar │ quuz │ //└────────────┴─────┴──────┘ ``` By setting all chars to empty with the exception of 'middle' being set to a single space and by setting padding to zero, it's possible to get the most compact layout with no decorations: ```javascript var table = new Table({ chars: { 'top': '' , 'top-mid': '' , 'top-left': '' , 'top-right': '' , 'bottom': '' , 'bottom-mid': '' , 'bottom-left': '' , 'bottom-right': '' , 'left': '' , 'left-mid': '' , 'mid': '' , 'mid-mid': '' , 'right': '' , 'right-mid': '' , 'middle': ' ' }, style: { 'padding-left': 0, 'padding-right': 0 } }); table.push( ['foo', 'bar', 'baz'] , ['frobnicate', 'bar', 'quuz'] ); console.log(table.toString()); // Outputs: //foo bar baz //frobnicate bar quuz ``` ## Build Targets Clone the repository and run `yarn install` to install all its submodules, then run one of the following commands: ###### Run the tests with coverage reports. ```bash $ yarn test:coverage ``` ###### Run the tests every time a file changes. ```bash $ yarn test:watch ``` ###### Update the documentation. 
```bash $ yarn docs ``` ## Credits - James Talmage - author &lt;james.talmage@jrtechnical.com&gt; ([jamestalmage](http://github.com/jamestalmage)) - Guillermo Rauch - author of the original cli-table &lt;guillermo@learnboost.com&gt; ([Guille](http://github.com/guille)) ## License (The MIT License) Copyright (c) 2014 James Talmage &lt;james.talmage@jrtechnical.com&gt; Original cli-table code/documentation: Copyright (c) 2010 LearnBoost &lt;dev@learnboost.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # init-package-json A node module to get your node module started. [![Build Status](https://secure.travis-ci.org/npm/init-package-json.svg)](http://travis-ci.org/npm/init-package-json) ## Usage ```javascript var init = require('init-package-json') var path = require('path') // a path to a promzard module. In the event that this file is // not found, one will be provided for you. var initFile = path.resolve(process.env.HOME, '.npm-init') // the dir where we're doin stuff. var dir = process.cwd() // extra stuff that gets put into the PromZard module's context. // In npm, this is the resolved config object. Exposed as 'config' // Optional. var configData = { some: 'extra stuff' } // Any existing stuff from the package.json file is also exposed in the // PromZard module as the `package` object. There will also be free // vars for: // * `filename` path to the package.json file // * `basename` the tip of the package dir // * `dirname` the parent of the package dir init(dir, initFile, configData, function (er, data) { // the data's already been written to {dir}/package.json // now you can do stuff with it }) ``` Or from the command line: ``` $ npm-init ``` See [PromZard](https://github.com/npm/promzard) for details about what can go in the config file. # libnpmteam [![npm version](https://img.shields.io/npm/v/libnpmteam.svg)](https://npm.im/libnpmteam) [![license](https://img.shields.io/npm/l/libnpmteam.svg)](https://npm.im/libnpmteam) [![Travis](https://img.shields.io/travis/npm/libnpmteam/latest.svg)](https://travis-ci.org/npm/libnpmteam) [![AppVeyor](https://img.shields.io/appveyor/ci/zkat/libnpmteam/latest.svg)](https://ci.appveyor.com/project/zkat/libnpmteam) [![Coverage Status](https://coveralls.io/repos/github/npm/libnpmteam/badge.svg?branch=latest)](https://coveralls.io/github/npm/libnpmteam?branch=latest) [`libnpmteam`](https://github.com/npm/libnpmteam) is a Node.js library that provides programmatic access to the guts of the npm CLI's `npm team` command and its various subcommands. 
## Example ```javascript const access = require('libnpmteam') // List all teams for the @npm org. console.log(await team.lsTeams('npm')) ``` ## Table of Contents * [Installing](#install) * [Example](#example) * [Contributing](#contributing) * [API](#api) * [team opts](#opts) * [`create()`](#create) * [`destroy()`](#destroy) * [`add()`](#add) * [`rm()`](#rm) * [`lsTeams()`](#ls-teams) * [`lsTeams.stream()`](#ls-teams-stream) * [`lsUsers()`](#ls-users) * [`lsUsers.stream()`](#ls-users-stream) ### Install `$ npm install libnpmteam` ### Contributing The npm team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear. All participants and maintainers in this project are expected to follow [Code of Conduct](CODE_OF_CONDUCT.md), and just generally be excellent to each other. Please refer to the [Changelog](CHANGELOG.md) for project history details, too. Happy hacking! ### API #### <a name="opts"></a> `opts` for `libnpmteam` commands `libnpmteam` uses [`npm-registry-fetch`](https://npm.im/npm-registry-fetch). All options are passed through directly to that library, so please refer to [its own `opts` documentation](https://www.npmjs.com/package/npm-registry-fetch#fetch-options) for options that can be passed in. A couple of options of note for those in a hurry: * `opts.token` - can be passed in and will be used as the authentication token for the registry. For other ways to pass in auth details, see the n-r-f docs. * `opts.otp` - certain operations will require an OTP token to be passed in. If a `libnpmteam` command fails with `err.code === EOTP`, please retry the request with `{otp: <2fa token>}` * `opts.Promise` - If you pass this in, the Promises returned by `libnpmteam` commands will use this Promise class instead. For example: `{Promise: require('bluebird')}` #### <a name="create"></a> `> team.create(team, [opts]) -> Promise` Creates a team named `team`. Team names use the format `@<scope>:<name>`, with the `@` being optional. Additionally, `opts.description` may be passed in to include a description. ##### Example ```javascript await team.create('@npm:cli', {token: 'myregistrytoken'}) // The @npm:cli team now exists. ``` #### <a name="destroy"></a> `> team.destroy(team, [opts]) -> Promise` Destroys a team named `team`. Team names use the format `@<scope>:<name>`, with the `@` being optional. ##### Example ```javascript await team.destroy('@npm:cli', {token: 'myregistrytoken'}) // The @npm:cli team has been destroyed. ``` #### <a name="add"></a> `> team.add(user, team, [opts]) -> Promise` Adds `user` to `team`. ##### Example ```javascript await team.add('zkat', '@npm:cli', {token: 'myregistrytoken'}) // @zkat now belongs to the @npm:cli team. ``` #### <a name="rm"></a> `> team.rm(user, team, [opts]) -> Promise` Removes `user` from `team`. ##### Example ```javascript await team.rm('zkat', '@npm:cli', {token: 'myregistrytoken'}) // @zkat is no longer part of the @npm:cli team. ``` #### <a name="ls-teams"></a> `> team.lsTeams(scope, [opts]) -> Promise` Resolves to an array of team names belonging to `scope`. 
##### Example ```javascript await team.lsTeams('@npm', {token: 'myregistrytoken'}) => [ 'npm:cli', 'npm:web', 'npm:registry', 'npm:developers' ] ``` #### <a name="ls-teams-stream"></a> `> team.lsTeams.stream(scope, [opts]) -> Stream` Returns a stream of teams belonging to `scope`. For a Promise-based version of these results, see [`team.lsTeams()`](#ls-teams). ##### Example ```javascript for await (let team of team.lsTeams.stream('@npm', {token: 'myregistrytoken'})) { console.log(team) } // outputs // npm:cli // npm:web // npm:registry // npm:developers ``` #### <a name="ls-users"></a> `> team.lsUsers(team, [opts]) -> Promise` Resolves to an array of usernames belonging to `team`. For a streamed version of these results, see [`team.lsUsers.stream()`](#ls-users-stream). ##### Example ```javascript await team.lsUsers('@npm:cli', {token: 'myregistrytoken'}) => [ 'iarna', 'zkat' ] ``` #### <a name="ls-users-stream"></a> `> team.lsUsers.stream(team, [opts]) -> Stream` Returns a stream of usernames belonging to `team`. For a Promise-based version of these results, see [`team.lsUsers()`](#ls-users). ##### Example ```javascript for await (let user of team.lsUsers.stream('@npm:cli', {token: 'myregistrytoken'})) { console.log(user) } // outputs // iarna // zkat ``` # string_decoder ***Node-core v8.9.4 string_decoder for userland*** [![NPM](https://nodei.co/npm/string_decoder.png?downloads=true&downloadRank=true)](https://nodei.co/npm/string_decoder/) [![NPM](https://nodei.co/npm-dl/string_decoder.png?&months=6&height=3)](https://nodei.co/npm/string_decoder/) ```bash npm install --save string_decoder ``` ***Node-core string_decoder for userland*** This package is a mirror of the string_decoder implementation in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.9.4/docs/api/). As of version 1.0.0 **string_decoder** uses semantic versioning. ## Previous versions Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10. ## Update The *build/* directory contains a build script that will scrape the source from the [nodejs/node](https://github.com/nodejs/node) repo given a specific Node version. ## Streams Working Group `string_decoder` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. See [readable-stream](https://github.com/nodejs/readable-stream) for more details. # util-extend The Node object extending function that Node uses for Node! ## Usage ```js var extend = require('util-extend'); function functionThatTakesOptions(options) { var options = extend(defaults, options); // now any unset options are set to the defaults. } ``` # lodash.without v4.4.0 The [lodash](https://lodash.com/) method `_.without` exported as a [Node.js](https://nodejs.org/) module. 
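For quick reference, a minimal usage sketch (the values here are purely illustrative; see the Installation section below for setup):

```js
var without = require('lodash.without');

// Creates a new array excluding all of the given values
// (comparisons use SameValueZero).
console.log(without([2, 1, 2, 3], 1, 2));
// => [3]
```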
## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.without ``` In Node.js: ```js var without = require('lodash.without'); ``` See the [documentation](https://lodash.com/docs#without) or [package source](https://github.com/lodash/lodash/blob/4.4.0-npm-packages/lodash.without) for more details. # is-regex <sup>[![Version Badge][2]][1]</sup> [![Build Status][3]][4] [![dependency status][5]][6] [![dev dependency status][7]][8] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][11]][1] [![browser support][9]][10] Is this value a JS regex? This module works cross-realm/iframe, and despite ES6 @@toStringTag. ## Example ```js var isRegex = require('is-regex'); var assert = require('assert'); assert.ok(!isRegex(undefined)); assert.ok(!isRegex(null)); assert.ok(!isRegex(false)); assert.ok(!isRegex(true)); assert.ok(!isRegex(42)); assert.ok(!isRegex('foo')); assert.ok(!isRegex(function () {})); assert.ok(!isRegex([])); assert.ok(!isRegex({})); assert.ok(isRegex(/a/g)); assert.ok(isRegex(new RegExp('a', 'g'))); ``` ## Tests Simply clone the repo, `npm install`, and run `npm test` [1]: https://npmjs.org/package/is-regex [2]: http://versionbadg.es/ljharb/is-regex.svg [3]: https://travis-ci.org/ljharb/is-regex.svg [4]: https://travis-ci.org/ljharb/is-regex [5]: https://david-dm.org/ljharb/is-regex.svg [6]: https://david-dm.org/ljharb/is-regex [7]: https://david-dm.org/ljharb/is-regex/dev-status.svg [8]: https://david-dm.org/ljharb/is-regex#info=devDependencies [9]: https://ci.testling.com/ljharb/is-regex.png [10]: https://ci.testling.com/ljharb/is-regex [11]: https://nodei.co/npm/is-regex.png?downloads=true&stars=true [license-image]: http://img.shields.io/npm/l/is-regex.svg [license-url]: LICENSE [downloads-image]: http://img.shields.io/npm/dm/is-regex.svg [downloads-url]: http://npm-stat.com/charts.html?package=is-regex # lodash._cacheindexof v3.0.2 The [modern build](https://github.com/lodash/lodash/wiki/Build-Differences) of [lodash’s](https://lodash.com/) internal `cacheIndexOf` exported as a [Node.js](http://nodejs.org/)/[io.js](https://iojs.org/) module.
## Install and Usage ``` npm install --save which-module ``` ```js const whichModule = require('which-module') console.log(whichModule(require('something'))) // Module { // id: '/path/to/project/node_modules/something/index.js', // exports: [Function], // parent: ..., // filename: '/path/to/project/node_modules/something/index.js', // loaded: true, // children: [], // paths: [ '/path/to/project/node_modules/something/node_modules', // '/path/to/project/node_modules', // '/path/to/node_modules', // '/path/node_modules', // '/node_modules' ] } ``` ## API ### `whichModule(exported)` Return the [`module` object](https://nodejs.org/api/modules.html#modules_the_module_object), if any, that represents the given argument in the `require.cache`. `exported` can be anything that was previously `require()`d or `import`ed as a module, submodule, or dependency - which means `exported` is identical to the `module.exports` returned by this method. If `exported` did not come from the `exports` of a `module` in `require.cache`, then this method returns `null`. ## License ISC © Contributors # pump pump is a small node module that pipes streams together and destroys all of them if one of them closes. ``` npm install pump ``` [![build status](http://img.shields.io/travis/mafintosh/pump.svg?style=flat)](http://travis-ci.org/mafintosh/pump) ## What problem does it solve? When using standard `source.pipe(dest)` source will _not_ be destroyed if dest emits close or an error. You are also not able to provide a callback to tell when the pipe has finished. pump does these two things for you. ## Usage Simply pass the streams you want to pipe together to pump and add an optional callback. ``` js var pump = require('pump') var fs = require('fs') var source = fs.createReadStream('/dev/random') var dest = fs.createWriteStream('/dev/null') pump(source, dest, function(err) { console.log('pipe finished', err) }) setTimeout(function() { dest.destroy() // when dest is closed pump will destroy source }, 1000) ``` You can use pump to pipe more than two streams together as well ``` js var transform = someTransformStream() pump(source, transform, anotherTransform, dest, function(err) { console.log('pipe finished', err) }) ``` If `source`, `transform`, `anotherTransform` or `dest` closes, all of them will be destroyed. ## License MIT ## Related `pump` is part of the [mississippi stream utility collection](https://github.com/maxogden/mississippi) which includes more useful stream modules similar to this one. aproba ====== A ridiculously light-weight function argument validator ``` var validate = require("aproba") function myfunc(a, b, c) { // `a` must be a string, `b` a number, `c` a function validate('SNF', arguments) // [a,b,c] is also valid } myfunc('test', 23, function () {}) // ok myfunc(123, 23, function () {}) // type error myfunc('test', 23) // missing arg error myfunc('test', 23, function () {}, true) // too many args error ``` Valid types are: | type | description | :--: | :---------- | * | matches any type | A | `Array.isArray` OR an `arguments` object | S | typeof == string | N | typeof == number | F | typeof == function | O | typeof == object and not type A and not type E | B | typeof == boolean | E | `instanceof Error` OR `null` **(special: see below)** | Z | == `null` Validation failures throw one of three exception types, distinguished by a `code` property of `EMISSINGARG`, `EINVALIDTYPE` or `ETOOMANYARGS`. If you pass in an invalid type then it will throw with a code of `EUNKNOWNTYPE`.
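As a hedged sketch of what handling one of these failure codes looks like (the `greet` function below is purely illustrative, not part of aproba):

```js
var validate = require('aproba')

function greet (name, cb) {
  // expects a string and a function
  validate('SF', arguments)
  cb(null, 'hello ' + name)
}

try {
  greet(42, function () {})
} catch (err) {
  // the thrown Error carries the failure code described above
  console.log(err.code) // EINVALIDTYPE
}
```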
If an **error** argument is found and is not null then the remaining arguments are optional. That is, if you say `ESO` then that's like using a non-magical `E` in: `E|ESO|ZSO`. ### But I have optional arguments?! You can provide more than one signature by separating them with pipes `|`. If any signature matches the arguments then they'll be considered valid. So for example, say you wanted to write a signature for `fs.createWriteStream`. The docs for it describe it thusly: ``` fs.createWriteStream(path[, options]) ``` This would be a signature of `SO|S`. That is, a string and an object, or just a string. Now, if you read the full `fs` docs, you'll see that actually path can ALSO be a buffer. And options can be a string, that is: ``` path <String> | <Buffer> options <String> | <Object> ``` To reproduce this you have to fully enumerate all of the possible combinations and that implies a signature of `SO|SS|OO|OS|S|O`. The awkwardness is a feature: It reminds you of the complexity you're adding to your API when you do this sort of thing. ### Browser support This has no dependencies and should work in browsers, though you'll have noisier stack traces. ### Why this exists I wanted a very simple argument validator. It needed to do two things: 1. Be more concise and easier to use than assertions 2. Not encourage an infinite bikeshed of DSLs This is why types are specified by a single character and there's no such thing as an optional argument. This is not intended to validate user data. This is specifically about asserting the interface of your functions. If you need greater validation, I encourage you to write them by hand or look elsewhere. A JSON object with color names and their values. Based on http://dev.w3.org/csswg/css-color/#named-colors. [![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/) ```js var colors = require('color-name'); colors.red //[255,0,0] ``` <a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a> # end-of-stream A node module that calls a callback when a readable/writable/duplex stream has completed or failed. npm install end-of-stream ## Usage Simply pass a stream and a callback to `eos`. Both legacy streams, streams2 and stream3 are supported.
``` js var eos = require('end-of-stream'); eos(readableStream, function(err) { // this will be set to the stream instance if (err) return console.log('stream had an error or closed early'); console.log('stream has ended', this === readableStream); }); eos(writableStream, function(err) { if (err) return console.log('stream had an error or closed early'); console.log('stream has finished', this === writableStream); }); eos(duplexStream, function(err) { if (err) return console.log('stream had an error or closed early'); console.log('stream has ended and finished', this === duplexStream); }); eos(duplexStream, {readable:false}, function(err) { if (err) return console.log('stream had an error or closed early'); console.log('stream has finished but might still be readable'); }); eos(duplexStream, {writable:false}, function(err) { if (err) return console.log('stream had an error or closed early'); console.log('stream has ended but might still be writable'); }); eos(readableStream, {error:false}, function(err) { // do not treat emit('error', err) as an end-of-stream }); ``` ## License MIT ## Related `end-of-stream` is part of the [mississippi stream utility collection](https://github.com/maxogden/mississippi) which includes more useful stream modules similar to this one. # gentle-fs [![npm version](https://img.shields.io/npm/v/gentle-fs.svg)](https://npm.im/gentle-fs) [![license](https://img.shields.io/npm/l/gentle-fs.svg)](https://npm.im/gentle-fs) [![Travis](https://img.shields.io/travis/npm/gentle-fs.svg)](https://travis-ci.org/npm/gentle-fs) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/npm/gentle-fs?svg=true)](https://ci.appveyor.com/project/npm/gentle-fs) [![Coverage Status](https://coveralls.io/repos/github/npm/gentle-fs/badge.svg?branch=latest)](https://coveralls.io/github/npm/gentle-fs?branch=latest) [`gentle-fs`](https://github.com/npm/gentle-fs) is a standalone library for "gently" removing or linking directories. ## Install `$ npm install gentle-fs` ## Table of Contents * [Example](#example) * [Features](#features) * [API](#api) * [`rm`](#rm) * [`link`](#link) * [`linkIfExists`](#linkIfExists) ### Example ```javascript // todo ``` ### Features * Performs filesystem operations "gently". Please see details in the API specs below for a more precise definition of "gently". ### API #### <a name="rm"></a> `> rm(target, opts, cb)` Will delete all directories between `target` and `opts.base`, as long as they are empty. That is, if `target` is `/a/b/c/d/e` and `base` is `/a/b`, but `/a/b/c` has other files besides the `d` directory inside of it, `/a/b/c` will remain. ##### Example ```javascript rm(target, opts, cb) ``` #### <a name="link"></a> `> link(from, to, opts, cb)` If `from` is a real directory, and `from` is not the same directory as `to`, will symlink `from` to `to`, while also gently [`rm`](#rm)ing the `to` directory, and then call the callback. Otherwise, will call the callback with an `Error`. ##### Example ```javascript link(from, to, opts, cb) ``` #### <a name="linkIfExists"></a> `> linkIfExists(from, to, opts, cb)` Performs the same operation as [`link`](#link), except does nothing when `from` is the same as `to`, and calls the callback.
##### Example ```javascript linkIfExists(from, to, opts, cb) ``` # cacache [![npm version](https://img.shields.io/npm/v/cacache.svg)](https://npm.im/cacache) [![license](https://img.shields.io/npm/l/cacache.svg)](https://npm.im/cacache) [![Travis](https://img.shields.io/travis/npm/cacache.svg)](https://travis-ci.org/npm/cacache) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/npm/cacache?svg=true)](https://ci.appveyor.com/project/npm/cacache) [![Coverage Status](https://coveralls.io/repos/github/npm/cacache/badge.svg?branch=latest)](https://coveralls.io/github/npm/cacache?branch=latest) [`cacache`](https://github.com/npm/cacache) is a Node.js library for managing local key and content address caches. It's really fast, really good at concurrency, and it will never give you corrupted data, even if cache files get corrupted or manipulated. On systems that support user and group settings on files, cacache will match the `uid` and `gid` values to the folder where the cache lives, even when running as `root`. It was written to be used as [npm](https://npm.im)'s local cache, but can just as easily be used on its own. _Translations: [español](README.es.md)_ ## Install `$ npm install --save cacache` ## Table of Contents * [Example](#example) * [Features](#features) * [Contributing](#contributing) * [API](#api) * [Using localized APIs](#localized-api) * Reading * [`ls`](#ls) * [`ls.stream`](#ls-stream) * [`get`](#get-data) * [`get.stream`](#get-stream) * [`get.info`](#get-info) * [`get.hasContent`](#get-hasContent) * Writing * [`put`](#put-data) * [`put.stream`](#put-stream) * [`put*` opts](#put-options) * [`rm.all`](#rm-all) * [`rm.entry`](#rm-entry) * [`rm.content`](#rm-content) * Utilities * [`setLocale`](#set-locale) * [`clearMemoized`](#clear-memoized) * [`tmp.mkdir`](#tmp-mkdir) * [`tmp.withTmp`](#with-tmp) * Integrity * [Subresource Integrity](#integrity) * [`verify`](#verify) * [`verify.lastRun`](#verify-last-run) ### Example ```javascript const cacache = require('cacache/en') const fs = require('fs') const tarball = '/path/to/mytar.tgz' const cachePath = '/tmp/my-toy-cache' const key = 'my-unique-key-1234' // Cache it! Use `cachePath` as the root of the content cache cacache.put(cachePath, key, '10293801983029384').then(integrity => { console.log(`Saved content to ${cachePath}.`) }) const destination = '/tmp/mytar.tgz' // Copy the contents out of the cache and into their destination! // But this time, use stream instead! cacache.get.stream( cachePath, key ).pipe( fs.createWriteStream(destination) ).on('finish', () => { console.log('done extracting!') }) // The same thing, but skip the key index. cacache.get.byDigest(cachePath, integrityHash).then(data => { fs.writeFile(destination, data, err => { console.log('tarball data fetched based on its sha512sum and written out!') }) }) ``` ### Features * Extraction by key or by content address (shasum, etc) * [Subresource Integrity](#integrity) web standard support * Multi-hash support - safely host sha1, sha512, etc, in a single cache * Automatic content deduplication * Fault tolerance (immune to corruption, partial writes, process races, etc) * Consistency guarantees on read and write (full data verification) * Lockless, high-concurrency cache access * Streaming support * Promise support * Pretty darn fast -- sub-millisecond reads and writes including verification * Arbitrary metadata storage * Garbage collection and additional offline verification * Thorough test coverage * There's probably a bloom filter in there somewhere. 
Those are cool, right? 🤔 ### Contributing The cacache team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear. All participants and maintainers in this project are expected to follow [Code of Conduct](CODE_OF_CONDUCT.md), and just generally be excellent to each other. Please refer to the [Changelog](CHANGELOG.md) for project history details, too. Happy hacking! ### API #### <a name="localized-api"></a> Using localized APIs cacache includes a complete API in English, with the same features as other translations. To use the English API as documented in this README, use `require('cacache/en')`. This is also currently the default if you do `require('cacache')`, but may change in the future. cacache also supports other languages! You can find the list of currently supported ones by looking in `./locales` in the source directory. You can use the API in that language with `require('cacache/<lang>')`. Want to add support for a new language? Please go ahead! You should be able to copy `./locales/en.js` and `./locales/en.json` and fill them in. Translating the `README.md` is a bit more work, but also appreciated if you get around to it. 👍🏼 #### <a name="ls"></a> `> cacache.ls(cache) -> Promise<Object>` Lists info for all entries currently in the cache as a single large object. Each entry in the object will be keyed by the unique index key, with corresponding [`get.info`](#get-info) objects as the values. ##### Example ```javascript cacache.ls(cachePath).then(console.log) // Output { 'my-thing': { key: 'my-thing', integrity: 'sha512-BaSe64/EnCoDED+HAsh==' path: '.testcache/content/deadbeef', // joined with `cachePath` time: 12345698490, size: 4023948, metadata: { name: 'blah', version: '1.2.3', description: 'this was once a package but now it is my-thing' } }, 'other-thing': { key: 'other-thing', integrity: 'sha1-ANothER+hasH=', path: '.testcache/content/bada55', time: 11992309289, size: 111112 } } ``` #### <a name="ls-stream"></a> `> cacache.ls.stream(cache) -> Readable` Lists info for all entries currently in the cache as a single large object. This works just like [`ls`](#ls), except [`get.info`](#get-info) entries are returned as `'data'` events on the returned stream. ##### Example ```javascript cacache.ls.stream(cachePath).on('data', console.log) // Output { key: 'my-thing', integrity: 'sha512-BaSe64HaSh', path: '.testcache/content/deadbeef', // joined with `cachePath` time: 12345698490, size: 13423, metadata: { name: 'blah', version: '1.2.3', description: 'this was once a package but now it is my-thing' } } { key: 'other-thing', integrity: 'whirlpool-WoWSoMuchSupport', path: '.testcache/content/bada55', time: 11992309289, size: 498023984029 } { ... } ``` #### <a name="get-data"></a> `> cacache.get(cache, key, [opts]) -> Promise({data, metadata, integrity})` Returns an object with the cached data, digest, and metadata identified by `key`. The `data` property of this object will be a `Buffer` instance that presumably holds some data that means something to you. I'm sure you know what to do with it! cacache just won't care. `integrity` is a [Subresource Integrity](#integrity) string. 
That is, a string that can be used to verify `data`, which looks like `<hash-algorithm>-<base64-integrity-hash>`. If there is no content identified by `key`, or if the locally-stored data does not pass the validity checksum, the promise will be rejected. A sub-function, `get.byDigest` may be used for identical behavior, except lookup will happen by integrity hash, bypassing the index entirely. This version of the function *only* returns `data` itself, without any wrapper. ##### Note This function loads the entire cache entry into memory before returning it. If you're dealing with Very Large data, consider using [`get.stream`](#get-stream) instead. ##### Example ```javascript // Look up by key cache.get(cachePath, 'my-thing').then(console.log) // Output: { metadata: { thingName: 'my' }, integrity: 'sha512-BaSe64HaSh', data: Buffer#<deadbeef>, size: 9320 } // Look up by digest cache.get.byDigest(cachePath, 'sha512-BaSe64HaSh').then(console.log) // Output: Buffer#<deadbeef> ``` #### <a name="get-stream"></a> `> cacache.get.stream(cache, key, [opts]) -> Readable` Returns a [Readable Stream](https://nodejs.org/api/stream.html#stream_readable_streams) of the cached data identified by `key`. If there is no content identified by `key`, or if the locally-stored data does not pass the validity checksum, an error will be emitted. `metadata` and `integrity` events will be emitted before the stream closes, if you need to collect that extra data about the cached entry. A sub-function, `get.stream.byDigest` may be used for identical behavior, except lookup will happen by integrity hash, bypassing the index entirely. This version does not emit the `metadata` and `integrity` events at all. ##### Example ```javascript // Look up by key cache.get.stream( cachePath, 'my-thing' ).on('metadata', metadata => { console.log('metadata:', metadata) }).on('integrity', integrity => { console.log('integrity:', integrity) }).pipe( fs.createWriteStream('./x.tgz') ) // Outputs: metadata: { ... } integrity: 'sha512-SoMeDIGest+64==' // Look up by digest cache.get.stream.byDigest( cachePath, 'sha512-SoMeDIGest+64==' ).pipe( fs.createWriteStream('./x.tgz') ) ``` #### <a name="get-info"></a> `> cacache.get.info(cache, key) -> Promise` Looks up `key` in the cache index, returning information about the entry if one exists. ##### Fields * `key` - Key the entry was looked up under. Matches the `key` argument. * `integrity` - [Subresource Integrity hash](#integrity) for the content this entry refers to. * `path` - Filesystem path where content is stored, joined with `cache` argument. * `time` - Timestamp the entry was first added on. * `metadata` - User-assigned metadata associated with the entry/content. ##### Example ```javascript cacache.get.info(cachePath, 'my-thing').then(console.log) // Output { key: 'my-thing', integrity: 'sha256-MUSTVERIFY+ALL/THINGS==' path: '.testcache/content/deadbeef', time: 12345698490, size: 849234, metadata: { name: 'blah', version: '1.2.3', description: 'this was once a package but now it is my-thing' } } ``` #### <a name="get-hasContent"></a> `> cacache.get.hasContent(cache, integrity) -> Promise` Looks up a [Subresource Integrity hash](#integrity) in the cache. If content exists for this `integrity`, it will return an object, with the specific single integrity hash that was found in `sri` key, and the size of the found content as `size`. If no content exists for this integrity, it will return `false`. 
##### Example ```javascript cacache.get.hasContent(cachePath, 'sha256-MUSTVERIFY+ALL/THINGS==').then(console.log) // Output { sri: { source: 'sha256-MUSTVERIFY+ALL/THINGS==', algorithm: 'sha256', digest: 'MUSTVERIFY+ALL/THINGS==', options: [] }, size: 9001 } cacache.get.hasContent(cachePath, 'sha512-NOT+IN/CACHE==').then(console.log) // Output false ``` #### <a name="put-data"></a> `> cacache.put(cache, key, data, [opts]) -> Promise` Inserts data passed to it into the cache. The returned Promise resolves with a digest (generated according to [`opts.algorithms`](#optsalgorithms)) after the cache entry has been successfully written. ##### Example ```javascript fetch( 'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz' ).then(data => { return cacache.put(cachePath, 'registry.npmjs.org|cacache@1.0.0', data) }).then(integrity => { console.log('integrity hash is', integrity) }) ``` #### <a name="put-stream"></a> `> cacache.put.stream(cache, key, [opts]) -> Writable` Returns a [Writable Stream](https://nodejs.org/api/stream.html#stream_writable_streams) that inserts data written to it into the cache. Emits an `integrity` event with the digest of written contents when it succeeds. ##### Example ```javascript request.get( 'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz' ).pipe( cacache.put.stream( cachePath, 'registry.npmjs.org|cacache@1.0.0' ).on('integrity', d => console.log(`integrity digest is ${d}`)) ) ``` #### <a name="put-options"></a> `> cacache.put options` `cacache.put` functions have a number of options in common. ##### `opts.metadata` Arbitrary metadata to be attached to the inserted key. ##### `opts.size` If provided, the data stream will be verified to check that enough data was passed through. If there's more or less data than expected, insertion will fail with an `EBADSIZE` error. ##### `opts.integrity` If present, the pre-calculated digest for the inserted content. If this option is provided and does not match the post-insertion digest, insertion will fail with an `EINTEGRITY` error. `algorithms` has no effect if this option is present. ##### `opts.algorithms` Default: ['sha512'] Hashing algorithms to use when calculating the [subresource integrity digest](#integrity) for inserted data. Can use any algorithm listed in `crypto.getHashes()` or `'omakase'`/`'お任せします'` to pick a random hash algorithm on each insertion. You may also use any anagram of `'modnar'` to use this feature. Currently only supports one algorithm at a time (i.e., an array length of exactly `1`). Has no effect if `opts.integrity` is present. ##### `opts.memoize` Default: null If provided, cacache will memoize the given cache insertion in memory, bypassing any filesystem checks for that key or digest in future cache fetches. Nothing will be written to the in-memory cache unless this option is explicitly truthy. If `opts.memoize` is an object or a `Map`-like (that is, an object with `get` and `set` methods), it will be written to instead of the global memoization cache. Reading data from disk can be forced by explicitly passing `memoize: false` to the reader functions, but their default will be to read from memory. #### <a name="rm-all"></a> `> cacache.rm.all(cache) -> Promise` Clears the entire cache. Mainly by blowing away the cache directory itself. ##### Example ```javascript cacache.rm.all(cachePath).then(() => { console.log('THE APOCALYPSE IS UPON US 😱') }) ``` #### <a name="rm-entry"></a> `> cacache.rm.entry(cache, key) -> Promise` Alias: `cacache.rm` Removes the index entry for `key`.
Content will still be accessible if requested directly by content address ([`get.stream.byDigest`](#get-stream)). To remove the content itself (which might still be used by other entries), use [`rm.content`](#rm-content). Or, to safely vacuum any unused content, use [`verify`](#verify). ##### Example ```javascript cacache.rm.entry(cachePath, 'my-thing').then(() => { console.log('I did not like it anyway') }) ``` #### <a name="rm-content"></a> `> cacache.rm.content(cache, integrity) -> Promise` Removes the content identified by `integrity`. Any index entries referring to it will not be usable again until the content is re-added to the cache with an identical digest. ##### Example ```javascript cacache.rm.content(cachePath, 'sha512-SoMeDIGest/IN+BaSE64==').then(() => { console.log('data for my-thing is gone!') }) ``` #### <a name="set-locale"></a> `> cacache.setLocale(locale)` Configure the language/locale used for messages and errors coming from cacache. The list of available locales is in the `./locales` directory in the project root. _Interested in contributing more languages! [Submit a PR](CONTRIBUTING.md)!_ #### <a name="clear-memoized"></a> `> cacache.clearMemoized()` Completely resets the in-memory entry cache. #### <a name="tmp-mkdir"></a> `> tmp.mkdir(cache, opts) -> Promise<Path>` Returns a unique temporary directory inside the cache's `tmp` dir. This directory will use the same safe user assignment that all the other stuff use. Once the directory is made, it's the user's responsibility that all files within are given the appropriate `gid`/`uid` ownership settings to match the rest of the cache. If not, you can ask cacache to do it for you by calling [`tmp.fix()`](#tmp-fix), which will fix all tmp directory permissions. If you want automatic cleanup of this directory, use [`tmp.withTmp()`](#with-tpm) ##### Example ```javascript cacache.tmp.mkdir(cache).then(dir => { fs.writeFile(path.join(dir, 'blablabla'), Buffer#<1234>, ...) }) ``` #### <a name="tmp-fix"></a> `> tmp.fix(cache) -> Promise` Sets the `uid` and `gid` properties on all files and folders within the tmp folder to match the rest of the cache. Use this after manually writing files into [`tmp.mkdir`](#tmp-mkdir) or [`tmp.withTmp`](#with-tmp). ##### Example ```javascript cacache.tmp.mkdir(cache).then(dir => { writeFile(path.join(dir, 'file'), someData).then(() => { // make sure we didn't just put a root-owned file in the cache cacache.tmp.fix().then(() => { // all uids and gids match now }) }) }) ``` #### <a name="with-tmp"></a> `> tmp.withTmp(cache, opts, cb) -> Promise` Creates a temporary directory with [`tmp.mkdir()`](#tmp-mkdir) and calls `cb` with it. The created temporary directory will be removed when the return value of `cb()` resolves -- that is, if you return a Promise from `cb()`, the tmp directory will be automatically deleted once that promise completes. The same caveats apply when it comes to managing permissions for the tmp dir's contents. ##### Example ```javascript cacache.tmp.withTmp(cache, dir => { return fs.writeFileAsync(path.join(dir, 'blablabla'), Buffer#<1234>, ...) }).then(() => { // `dir` no longer exists }) ``` #### <a name="integrity"></a> Subresource Integrity Digests For content verification and addressing, cacache uses strings following the [Subresource Integrity spec](https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity). That is, any time cacache expects an `integrity` argument or option, it should be in the format `<hashAlgorithm>-<base64-hash>`. 
One deviation from the current spec is that cacache will support any hash algorithms supported by the underlying Node.js process. You can use `crypto.getHashes()` to see which ones you can use. ##### Generating Digests Yourself If you have an existing content shasum, they are generally formatted as a hexadecimal string (that is, a sha1 would look like: `5f5513f8822fdbe5145af33b64d8d970dcf95c6e`). In order to be compatible with cacache, you'll need to convert this to an equivalent subresource integrity string. For this example, the corresponding hash would be: `sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4=`. If you want to generate an integrity string yourself for existing data, you can use something like this: ```javascript const crypto = require('crypto') const hashAlgorithm = 'sha512' const data = 'foobarbaz' const integrity = ( hashAlgorithm + '-' + crypto.createHash(hashAlgorithm).update(data).digest('base64') ) ``` You can also use [`ssri`](https://npm.im/ssri) to have a richer set of functionality around SRI strings, including generation, parsing, and translating from existing hex-formatted strings. #### <a name="verify"></a> `> cacache.verify(cache, opts) -> Promise` Checks out and fixes up your cache: * Cleans up corrupted or invalid index entries. * Custom entry filtering options. * Garbage collects any content entries not referenced by the index. * Checks integrity for all content entries and removes invalid content. * Fixes cache ownership. * Removes the `tmp` directory in the cache and all its contents. When it's done, it'll return an object with various stats about the verification process, including amount of storage reclaimed, number of valid entries, number of entries removed, etc. ##### Options * `opts.filter` - receives a formatted entry. Return false to remove it. Note: might be called more than once on the same entry. ##### Example ```sh echo somegarbage >> $CACHEPATH/content/deadbeef ``` ```javascript cacache.verify(cachePath).then(stats => { // deadbeef collected, because of invalid checksum. console.log('cache is much nicer now! stats:', stats) }) ``` #### <a name="verify-last-run"></a> `> cacache.verify.lastRun(cache) -> Promise` Returns a `Date` representing the last time `cacache.verify` was run on `cache`. ##### Example ```javascript cacache.verify(cachePath).then(() => { cacache.verify.lastRun(cachePath).then(lastTime => { console.log('cacache.verify was last called on' + lastTime) }) }) ``` #object-keys <sup>[![Version Badge][npm-version-svg]][package-url]</sup> [![Build Status][travis-svg]][travis-url] [![dependency status][deps-svg]][deps-url] [![dev dependency status][dev-deps-svg]][dev-deps-url] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][npm-badge-png]][package-url] [![browser support][testling-svg]][testling-url] An Object.keys shim. Invoke its "shim" method to shim Object.keys if it is unavailable. 
Most common usage: ```js var keys = Object.keys || require('object-keys'); ``` ## Example ```js var keys = require('object-keys'); var assert = require('assert'); var obj = { a: true, b: true, c: true }; assert.deepEqual(keys(obj), ['a', 'b', 'c']); ``` ```js var keys = require('object-keys'); var assert = require('assert'); /* when Object.keys is not present */ delete Object.keys; var shimmedKeys = keys.shim(); assert.equal(shimmedKeys, keys); assert.deepEqual(Object.keys(obj), keys(obj)); ``` ```js var keys = require('object-keys'); var assert = require('assert'); /* when Object.keys is present */ var shimmedKeys = keys.shim(); assert.equal(shimmedKeys, Object.keys); assert.deepEqual(Object.keys(obj), keys(obj)); ``` ## Source Implementation taken directly from [es5-shim][es5-shim-url], with modifications, including from [lodash][lodash-url]. ## Tests Simply clone the repo, `npm install`, and run `npm test` [package-url]: https://npmjs.org/package/object-keys [npm-version-svg]: http://versionbadg.es/ljharb/object-keys.svg [travis-svg]: https://travis-ci.org/ljharb/object-keys.svg [travis-url]: https://travis-ci.org/ljharb/object-keys [deps-svg]: https://david-dm.org/ljharb/object-keys.svg [deps-url]: https://david-dm.org/ljharb/object-keys [dev-deps-svg]: https://david-dm.org/ljharb/object-keys/dev-status.svg [dev-deps-url]: https://david-dm.org/ljharb/object-keys#info=devDependencies [testling-svg]: https://ci.testling.com/ljharb/object-keys.png [testling-url]: https://ci.testling.com/ljharb/object-keys [es5-shim-url]: https://github.com/es-shims/es5-shim/blob/master/es5-shim.js#L542-589 [lodash-url]: https://github.com/lodash/lodash [npm-badge-png]: https://nodei.co/npm/object-keys.png?downloads=true&stars=true [license-image]: http://img.shields.io/npm/l/object-keys.svg [license-url]: LICENSE [downloads-image]: http://img.shields.io/npm/dm/object-keys.svg [downloads-url]: http://npm-stat.com/charts.html?package=object-keys # minipass A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough) [It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers. Supports pipe()ing (including multi-pipe() and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea. There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory. There is also no `unpipe()` method. Once you start piping, there is no stopping it! If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called. `objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set. This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. Once you're done transforming the data however you want, call `super.write()` with the transform output. 
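As a minimal sketch of that pattern (an illustrative subclass, not part of the package; the EXAMPLES section further below has more complete ones):

```js
const Minipass = require('minipass')

// Upper-cases every chunk, then hands the result to the base class.
class Upper extends Minipass {
  write (chunk, encoding, callback) {
    return super.write(chunk.toString().toUpperCase(), 'utf8', callback)
  }
}

const up = new Upper({ encoding: 'utf8' })
up.on('data', chunk => console.log(chunk)) // HELLO
up.end('hello')
```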
For some examples of streams that extend Minipass in various ways, check out: - [minizlib](http://npm.im/minizlib) - [fs-minipass](http://npm.im/fs-minipass) - [tar](http://npm.im/tar) - [minipass-collect](http://npm.im/minipass-collect) - [minipass-flush](http://npm.im/minipass-flush) - [minipass-pipeline](http://npm.im/minipass-pipeline) - [tap](http://npm.im/tap) - [tap-parser](http://npm.im/tap) - [treport](http://npm.im/tap) ## Differences from Node.js Streams There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams. Please read these caveats if you are familiar with node-core streams and intend to use Minipass streams in your programs. ### Timing Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer. Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them. This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately. However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time. This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms. ### No High/Low Water Marks Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value. Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns). If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it. ### Hazards of Buffering (or: Why Minipass Is So Fast) Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller. This minimizes buffering.
Consider this case: ```js const {PassThrough} = require('stream') const p1 = new PassThrough({ highWaterMark: 1024 }) const p2 = new PassThrough({ highWaterMark: 1024 }) const p3 = new PassThrough({ highWaterMark: 1024 }) const p4 = new PassThrough({ highWaterMark: 1024 }) p1.pipe(p2).pipe(p3).pipe(p4) p4.on('data', () => console.log('made it through')) // this returns false and buffers, then writes to p2 on next tick (1) // p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2) // p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3) // p4 returns false and buffers, pausing p3, then emits 'data' and 'drain' // on next tick (4) // p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and // 'drain' on next tick (5) // p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6) // p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next // tick (7) p1.write(Buffer.alloc(2048)) // returns false ``` Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through! Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket. Consider the Minipass case: ```js const m1 = new Minipass() const m2 = new Minipass() const m3 = new Minipass() const m4 = new Minipass() m1.pipe(m2).pipe(m3).pipe(m4) m4.on('data', () => console.log('made it through')) // m1 is flowing, so it writes the data to m2 immediately // m2 is flowing, so it writes the data to m3 immediately // m3 is flowing, so it writes the data to m4 immediately // m4 is flowing, so it fires the 'data' event immediately, returns true // m4's write returned true, so m3 is still flowing, returns true // m3's write returned true, so m2 is still flowing, returns true // m2's write returned true, so m1 is still flowing, returns true // No event deferrals or buffering along the way! m1.write(Buffer.alloc(2048)) // returns true ``` It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing. As always, the faster implementation is the one that does less stuff and waits less time to do it. ### Immediately emit `end` for empty streams (when not paused) If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately. If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event. ### Emit `end` When Asked One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted. Ie, if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. 
(You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.) To prevent calling handlers multiple times when they would not expect multiple ends to occur, all listeners are removed from the `'end'` event whenever it is emitted. ### Impact of "immediate flow" on Tee-streams A "tee stream" is a stream piping to multiple destinations: ```js const tee = new Minipass() tee.pipe(dest1) tee.pipe(dest2) tee.write('foo') // goes to both destinations ``` Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer. ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone src.pipe(dest2) // gets nothing! ``` The solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead. ```js // Safe example: tee to both places const src = new Minipass() src.write('foo') const tee = new Minipass() tee.pipe(dest1) tee.pipe(dest2) src.pipe(tee) // tee gets 'foo', pipes to both locations ``` The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second: ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.on('data', handler1) // receives 'foo' right away src.on('data', handler2) // nothing to see here! ``` A dedicated tee-stream can be used in this case as well: ```js // Safe example: tee to both data handlers const src = new Minipass() src.write('foo') const tee = new Minipass() tee.on('data', handler1) tee.on('data', handler2) src.pipe(tee) ``` ## USAGE It's a stream! Use it like a stream and it'll most likely do what you want. ```js const Minipass = require('minipass') const mp = new Minipass(options) // optional: { encoding, objectMode } mp.write('foo') mp.pipe(someOtherStream) mp.end('bar') ``` ### OPTIONS * `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`. * `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value. ### API Implements the user-facing portions of Node.js's `Readable` and `Writable` streams. ### Methods * `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or `true` if it's still in "flowing" mode. * `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed. * `setEncoding(encoding)` - Set the encoding for data coming out of the stream. This can only be done once. * `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed. * `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted. * `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations.
* `on(ev, fn)`, `emit(ev, fn)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".) * `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`. * `collect()` - Returns a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data. * `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data. * `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying. * `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered. ### Properties * `bufferLength` Read-only. Total number of bytes buffered, or in the case of objectMode, the total number of objects. * `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.) * `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted. * `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediately re-emit it if it has already been emitted. * `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` is called. * `readable` Whether the stream is readable. Default `true`. * `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.) * `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.) * `destroyed` A getter that indicates whether the stream was destroyed. * `paused` True if the stream has been explicitly paused, otherwise false. * `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`. ### Events * `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream. * `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted. * `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`. * `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`.
* `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise. * `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream. * `readable` Emitted when data is buffered and ready to be read by a consumer. * `resume` Emitted when stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.) ### Static Methods * `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and false otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.) ## EXAMPLES Here are some examples of things you can do with Minipass streams. ### simple "are you done yet" promise ```js mp.promise().then(() => { // stream is finished }, er => { // stream emitted an error }) ``` ### collecting ```js mp.collect().then(all => { // all is an array of all the data emitted // encoding is supported in this case, so // so the result will be a collection of strings if // an encoding is specified, or buffers/objects if not. // // In an async function, you may do // const data = await stream.collect() }) ``` ### collecting into a single blob This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way: ```js mp.concat().then(onebigchunk => { // onebigchunk is a string if the stream // had an encoding set, or a buffer otherwise. }) ``` ### iteration You can iterate over streams synchronously or asynchronously in platforms that support it. Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration. To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option. ```js const mp = new Minipass({ objectMode: true }) mp.write('a') mp.write('b') for (let letter of mp) { console.log(letter) // a, b } mp.write('c') mp.write('d') for (let letter of mp) { console.log(letter) // c, d } mp.write('e') mp.end() for (let letter of mp) { console.log(letter) // e } for (let letter of mp) { console.log(letter) // nothing } ``` Asynchronous iteration will continue until the end event is reached, consuming all of the data. 
```js const mp = new Minipass({ encoding: 'utf8' }) // some source of some data let i = 5 const inter = setInterval(() => { if (i --> 0) mp.write(Buffer.from('foo\n', 'utf8')) else { mp.end() clearInterval(inter) } }, 100) // consume the data with asynchronous iteration async function consume () { for await (let chunk of mp) { console.log(chunk) } return 'ok' } consume().then(res => console.log(res)) // logs `foo\n` 5 times, and then `ok` ``` ### subclass that `console.log()`s everything written into it ```js class Logger extends Minipass { write (chunk, encoding, callback) { console.log('WRITE', chunk, encoding) return super.write(chunk, encoding, callback) } end (chunk, encoding, callback) { console.log('END', chunk, encoding) return super.end(chunk, encoding, callback) } } someSource.pipe(new Logger()).pipe(someDest) ``` ### same thing, but using an inline anonymous class ```js // js classes are fun someSource .pipe(new (class extends Minipass { emit (ev, ...data) { // let's also log events, because debugging some weird thing console.log('EMIT', ev) return super.emit(ev, ...data) } write (chunk, encoding, callback) { console.log('WRITE', chunk, encoding) return super.write(chunk, encoding, callback) } end (chunk, encoding, callback) { console.log('END', chunk, encoding) return super.end(chunk, encoding, callback) } })) .pipe(someDest) ``` ### subclass that defers 'end' for some reason ```js class SlowEnd extends Minipass { emit (ev, ...args) { if (ev === 'end') { console.log('going to end, hold on a sec') setTimeout(() => { console.log('ok, ready to end now') super.emit('end', ...args) }, 100) } else { return super.emit(ev, ...args) } } } ``` ### transform that creates newline-delimited JSON ```js class NDJSONEncode extends Minipass { write (obj, cb) { try { // JSON.stringify can throw, emit an error on that return super.write(JSON.stringify(obj) + '\n', 'utf8', cb) } catch (er) { this.emit('error', er) } } end (obj, cb) { if (typeof obj === 'function') { cb = obj obj = undefined } if (obj !== undefined) { this.write(obj) } return super.end(cb) } } ``` ### transform that parses newline-delimited JSON ```js class NDJSONDecode extends Minipass { constructor (options) { // always be in object mode, as far as Minipass is concerned super({ objectMode: true }) this._jsonBuffer = '' } write (chunk, encoding, cb) { if (typeof chunk === 'string' && typeof encoding === 'string' && encoding !== 'utf8') { chunk = Buffer.from(chunk, encoding).toString() } else if (Buffer.isBuffer(chunk)) { chunk = chunk.toString() } if (typeof encoding === 'function') { cb = encoding } const jsonData = (this._jsonBuffer + chunk).split('\n') this._jsonBuffer = jsonData.pop() for (let i = 0; i < jsonData.length; i++) { let parsed try { // JSON.parse can throw on a malformed line, emit an error on that parsed = JSON.parse(jsonData[i]) } catch (er) { this.emit('error', er) continue } super.write(parsed) } if (cb) cb() } } ```
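A small usage sketch wiring the two example classes above together (assuming the definitions above):

```js
const encode = new NDJSONEncode()
const decode = new NDJSONDecode()

// each object written to `encode` becomes one JSON line,
// which `decode` parses back into an object
encode.pipe(decode)
decode.on('data', obj => console.log(obj.hello)) // world
encode.end({ hello: 'world' })
```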
## Previous versions Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10. ## Update The *build/* directory contains a build script that will scrape the source from the [nodejs/node](https://github.com/nodejs/node) repo given a specific Node version. ## Streams Working Group `string_decoder` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. See [readable-stream](https://github.com/nodejs/readable-stream) for more details. # lodash._createset v4.0.3 The internal [lodash](https://lodash.com/) function `createSet` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash._createset ``` In Node.js: ```js var createSet = require('lodash._createset'); ``` See the [package source](https://github.com/lodash/lodash/blob/4.0.3-npm-packages/lodash._createset) for more details. # pacote [![npm version](https://img.shields.io/npm/v/pacote.svg)](https://npm.im/pacote) [![license](https://img.shields.io/npm/l/pacote.svg)](https://npm.im/pacote) [![Travis](https://img.shields.io/travis/npm/pacote.svg)](https://travis-ci.org/npm/pacote) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/npm/pacote?svg=true)](https://ci.appveyor.com/project/npm/pacote) [![Coverage Status](https://coveralls.io/repos/github/npm/pacote/badge.svg?branch=latest)](https://coveralls.io/github/npm/pacote?branch=latest) [`pacote`](https://github.com/npm/pacote) is a Node.js library for downloading [npm](https://npmjs.org)-compatible packages. It supports all package specifier syntax that `npm install` and its ilk support. It transparently caches anything needed to reduce excess operations, using [`cacache`](https://npm.im/cacache). ## Install `$ npm install --save pacote` ## Table of Contents * [Example](#example) * [Features](#features) * [Contributing](#contributing) * [API](#api) * [`manifest`](#manifest) * [`packument`](#packument) * [`extract`](#extract) * [`tarball`](#tarball) * [`tarball.stream`](#tarball-stream) * [`tarball.toFile`](#tarball-to-file) * ~~[`prefetch`](#prefetch)~~ (deprecated) * [`clearMemoized`](#clearMemoized) * [`options`](#options) ### Example ```javascript const pacote = require('pacote') pacote.manifest('pacote@^1').then(pkg => { console.log('package manifest for registry pkg:', pkg) // { "name": "pacote", "version": "1.0.0", ... 
} }) pacote.extract('http://hi.com/pkg.tgz', './here').then(() => { console.log('remote tarball contents extracted to ./here') }) ``` ### Features * Handles all package types [npm](https://npm.im/npm) does * [high-performance, reliable, verified local cache](https://npm.im/cacache) * offline mode * authentication support (private git, private npm registries, etc) * github, gitlab, and bitbucket-aware * semver range support for git dependencies ### Contributing The pacote team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear. ### API #### <a name="manifest"></a> `> pacote.manifest(spec, [opts])` Fetches the *manifest* for a package. Manifest objects are similar and based on the `package.json` for that package, but with pre-processed and limited fields. The object has the following shape: ```javascript { "name": PkgName, "version": SemverString, "dependencies": { PkgName: SemverString }, "optionalDependencies": { PkgName: SemverString }, "devDependencies": { PkgName: SemverString }, "peerDependencies": { PkgName: SemverString }, "bundleDependencies": false || [PkgName], "bin": { BinName: Path }, "_resolved": TarballSource, // different for each package type "_integrity": SubresourceIntegrityHash, "_shrinkwrap": null || ShrinkwrapJsonObj } ``` Note that depending on the spec type, some additional fields might be present. For example, packages from `registry.npmjs.org` have additional metadata appended by the registry. ##### Example ```javascript pacote.manifest('pacote@1.0.0').then(pkgJson => { // fetched `package.json` data from the registry }) ``` #### <a name="packument"></a> `> pacote.packument(spec, [opts])` Fetches the *packument* for a package. Packument objects are general metadata about a project corresponding to registry metadata, and include version and `dist-tag` information about a package's available versions, rather than a specific version. It may include additional metadata not usually available through the individual package metadata objects. It generally looks something like this: ```javascript { "name": PkgName, "dist-tags": { 'latest': VersionString, [TagName]: VersionString, ... }, "versions": { [VersionString]: Manifest, ... } } ``` Note that depending on the spec type, some additional fields might be present. For example, packages from `registry.npmjs.org` have additional metadata appended by the registry. ##### Example ```javascript pacote.packument('pacote').then(pkgJson => { // fetched package versions metadata from the registry }) ``` #### <a name="extract"></a> `> pacote.extract(spec, destination, [opts])` Extracts package data identified by `<spec>` into a directory named `<destination>`, which will be created if it does not already exist. If `opts.digest` is provided and the data it identifies is present in the cache, `extract` will bypass most of its operations and go straight to extracting the tarball. ##### Example ```javascript pacote.extract('pacote@1.0.0', './woot', { digest: 'deadbeef' }).then(() => { // Succeeds as long as `pacote@1.0.0` still exists somewhere. Network and // other operations are bypassed entirely if `digest` is present in the cache. 
}) ``` #### <a name="tarball"></a> `> pacote.tarball(spec, [opts])` Fetches package data identified by `<spec>` and returns the data as a buffer. This API has two variants: * `pacote.tarball.stream(spec, [opts])` - Same as `pacote.tarball`, except it returns a stream instead of a Promise. * `pacote.tarball.toFile(spec, dest, [opts])` - Instead of returning data directly, data will be written directly to `dest`, and create any required directories along the way. ##### Example ```javascript pacote.tarball('pacote@1.0.0', { cache: './my-cache' }).then(data => { // data is the tarball data for pacote@1.0.0 }) ``` #### <a name="tarball-stream"></a> `> pacote.tarball.stream(spec, [opts])` Same as `pacote.tarball`, except it returns a stream instead of a Promise. ##### Example ```javascript pacote.tarball.stream('pacote@1.0.0') .pipe(fs.createWriteStream('./pacote-1.0.0.tgz')) ``` #### <a name="tarball-to-file"></a> `> pacote.tarball.toFile(spec, dest, [opts])` Like `pacote.tarball`, but instead of returning data directly, data will be written directly to `dest`, and create any required directories along the way. ##### Example ```javascript pacote.tarball.toFile('pacote@1.0.0', './pacote-1.0.0.tgz') .then(() => /* pacote tarball written directly to ./pacote-1.0.0.tgz */) ``` #### <a name="prefetch"></a> `> pacote.prefetch(spec, [opts])` ##### THIS API IS DEPRECATED. USE `pacote.tarball()` INSTEAD Fetches package data identified by `<spec>`, usually for the purpose of warming up the local package cache (with `opts.cache`). It does not return anything. ##### Example ```javascript pacote.prefetch('pacote@1.0.0', { cache: './my-cache' }).then(() => { // ./my-cache now has both the manifest and tarball for `pacote@1.0.0`. }) ``` #### <a name="clearMemoized"></a> `> pacote.clearMemoized()` This utility function can be used to force pacote to release its references to any memoized data in its various internal caches. It might help free some memory. ```javascript pacote.manifest(...).then(() => pacote.clearMemoized) ``` #### <a name="options"></a> `> options` `pacote` accepts [the options for `npm-registry-fetch`](https://npm.im/npm-registry-fetch#fetch-options) as-is, with a couple of additional `pacote-specific` ones: ##### <a name="dirPacker"></a> `opts.dirPacker` * Type: Function * Default: Uses [`npm-packlist`](https://npm.im/npm-packlist) and [`tar`](https://npm.im/tar) to make a tarball. Expects a function that takes a single argument, `dir`, and returns a `ReadableStream` that outputs packaged tarball data. Used when creating tarballs for package specs that are not already packaged, such as git and directory dependencies. The default `opts.dirPacker` does not execute `prepare` scripts, even though npm itself does. ##### <a name="opts-enjoy-by"></a> `opts.enjoy-by` * Alias: `opts.enjoyBy`, `opts.before` * Type: Date-able * Default: undefined If passed in, will be used while resolving to filter the versions for **registry dependencies** such that versions published **after** `opts.enjoy-by` are not considered -- as if they'd never been published. ##### <a name="opts-include-deprecated"></a> `opts.include-deprecated` * Alias: `opts.includeDeprecated` * Type: Boolean * Default: false If false, deprecated versions will be skipped when selecting from registry range specifiers. If true, deprecations do not affect version selection. ##### <a name="opts-full-metadata"></a> `opts.full-metadata` * Type: Boolean * Default: false If `true`, the full packument will be fetched when doing metadata requests. 
By defaul, `pacote` only fetches the summarized packuments, also called "corgis". ##### <a name="opts-tag"></a> `opts.tag` * Alias: `opts.defaultTag` * Type: String * Default: `'latest'` Package version resolution tag. When processing registry spec ranges, this option is used to determine what dist-tag to treat as "latest". For more details about how `pacote` selects versions and how `tag` is involved, see [the documentation for `npm-pick-manifest`](https://npm.im/npm-pick-manifest). ##### <a name="opts-resolved"></a> `opts.resolved` * Type: String * Default: null When fetching tarballs, this option can be passed in to skip registry metadata lookups when downloading tarballs. If the string is a `file:` URL, pacote will try to read the referenced local file before attempting to do any further lookups. This option does not bypass integrity checks when `opts.integrity` is passed in. ##### <a name="opts-where"></a> `opts.where` * Type: String * Default: null Passed as an argument to [`npm-package-arg`](https://npm.im/npm-package-arg) when resolving `spec` arguments. Used to determine what path to resolve local path specs relatively from. # isexe Minimal module to check if a file is executable, and a normal file. Uses `fs.stat` and tests against the `PATHEXT` environment variable on Windows. ## USAGE ```javascript var isexe = require('isexe') isexe('some-file-name', function (err, isExe) { if (err) { console.error('probably file does not exist or something', err) } else if (isExe) { console.error('this thing can be run') } else { console.error('cannot be run') } }) // same thing but synchronous, throws errors var isExe = isexe.sync('some-file-name') // treat errors as just "not executable" isexe('maybe-missing-file', { ignoreErrors: true }, callback) var isExe = isexe.sync('maybe-missing-file', { ignoreErrors: true }) ``` ## API ### `isexe(path, [options], [callback])` Check if the path is executable. If no callback provided, and a global `Promise` object is available, then a Promise will be returned. Will raise whatever errors may be raised by `fs.stat`, unless `options.ignoreErrors` is set to true. ### `isexe.sync(path, [options])` Same as `isexe` but returns the value and throws any errors raised. ### Options * `ignoreErrors` Treat all errors as "no, this is not executable", but don't raise them. * `uid` Number to use as the user id * `gid` Number to use as the group id * `pathExt` List of path extensions to use instead of `PATHEXT` environment variable on Windows. # readable-stream ***Node-core v8.11.1 streams for userland*** [![Build Status](https://travis-ci.org/nodejs/readable-stream.svg?branch=master)](https://travis-ci.org/nodejs/readable-stream) [![NPM](https://nodei.co/npm/readable-stream.png?downloads=true&downloadRank=true)](https://nodei.co/npm/readable-stream/) [![NPM](https://nodei.co/npm-dl/readable-stream.png?&months=6&height=3)](https://nodei.co/npm/readable-stream/) [![Sauce Test Status](https://saucelabs.com/browser-matrix/readable-stream.svg)](https://saucelabs.com/u/readable-stream) ```bash npm install --save readable-stream ``` ***Node-core streams for userland*** This package is a mirror of the Streams2 and Streams3 implementations in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.11.1/docs/api/stream.html). 
If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use **readable-stream** *only* and avoid the *"stream"* module in Node-core, for background see [this blogpost](http://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html). As of version 2.0.0 **readable-stream** uses semantic versioning. # Streams Working Group `readable-stream` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. <a name="members"></a> ## Team Members * **Chris Dickinson** ([@chrisdickinson](https://github.com/chrisdickinson)) &lt;christopher.s.dickinson@gmail.com&gt; - Release GPG key: 9554F04D7259F04124DE6B476D5A82AC7E37093B * **Calvin Metcalf** ([@calvinmetcalf](https://github.com/calvinmetcalf)) &lt;calvin.metcalf@gmail.com&gt; - Release GPG key: F3EF5F62A87FC27A22E643F714CE4FF5015AA242 * **Rod Vagg** ([@rvagg](https://github.com/rvagg)) &lt;rod@vagg.org&gt; - Release GPG key: DD8F2338BAE7501E3DD5AC78C273792F7D83545D * **Sam Newman** ([@sonewman](https://github.com/sonewman)) &lt;newmansam@outlook.com&gt; * **Mathias Buus** ([@mafintosh](https://github.com/mafintosh)) &lt;mathiasbuus@gmail.com&gt; * **Domenic Denicola** ([@domenic](https://github.com/domenic)) &lt;d@domenic.me&gt; * **Matteo Collina** ([@mcollina](https://github.com/mcollina)) &lt;matteo.collina@gmail.com&gt; - Release GPG key: 3ABC01543F22DD2239285CDD818674489FBC127E * **Irina Shestak** ([@lrlna](https://github.com/lrlna)) &lt;shestak.irina@gmail.com&gt; aws-sign ======== AWS signing. Originally pulled from LearnBoost/knox, maintained as vendor in request, now a standalone module. # readable-stream ***Node-core v8.11.1 streams for userland*** [![Build Status](https://travis-ci.org/nodejs/readable-stream.svg?branch=master)](https://travis-ci.org/nodejs/readable-stream) [![NPM](https://nodei.co/npm/readable-stream.png?downloads=true&downloadRank=true)](https://nodei.co/npm/readable-stream/) [![NPM](https://nodei.co/npm-dl/readable-stream.png?&months=6&height=3)](https://nodei.co/npm/readable-stream/) [![Sauce Test Status](https://saucelabs.com/browser-matrix/readable-stream.svg)](https://saucelabs.com/u/readable-stream) ```bash npm install --save readable-stream ``` ***Node-core streams for userland*** This package is a mirror of the Streams2 and Streams3 implementations in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.11.1/docs/api/stream.html). If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use **readable-stream** *only* and avoid the *"stream"* module in Node-core, for background see [this blogpost](http://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html). As of version 2.0.0 **readable-stream** uses semantic versioning. 
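As a minimal sketch of what "using readable-stream only" looks like in practice (illustrative, not from the upstream docs), you require the stream classes from this package instead of from Node-core's `stream`:

```js
// Drop-in replacement for require('stream'): same classes, stable behavior.
const { Readable, Writable } = require('readable-stream')

// A tiny readable source...
const source = new Readable({
  read () {
    this.push('hello ')
    this.push('world\n')
    this.push(null) // signal end-of-stream
  }
})

// ...piped into a tiny writable sink.
const sink = new Writable({
  write (chunk, encoding, callback) {
    process.stdout.write(chunk)
    callback()
  }
})

source.pipe(sink)
```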
# Streams Working Group `readable-stream` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. <a name="members"></a> ## Team Members * **Chris Dickinson** ([@chrisdickinson](https://github.com/chrisdickinson)) &lt;christopher.s.dickinson@gmail.com&gt; - Release GPG key: 9554F04D7259F04124DE6B476D5A82AC7E37093B * **Calvin Metcalf** ([@calvinmetcalf](https://github.com/calvinmetcalf)) &lt;calvin.metcalf@gmail.com&gt; - Release GPG key: F3EF5F62A87FC27A22E643F714CE4FF5015AA242 * **Rod Vagg** ([@rvagg](https://github.com/rvagg)) &lt;rod@vagg.org&gt; - Release GPG key: DD8F2338BAE7501E3DD5AC78C273792F7D83545D * **Sam Newman** ([@sonewman](https://github.com/sonewman)) &lt;newmansam@outlook.com&gt; * **Mathias Buus** ([@mafintosh](https://github.com/mafintosh)) &lt;mathiasbuus@gmail.com&gt; * **Domenic Denicola** ([@domenic](https://github.com/domenic)) &lt;d@domenic.me&gt; * **Matteo Collina** ([@mcollina](https://github.com/mcollina)) &lt;matteo.collina@gmail.com&gt; - Release GPG key: 3ABC01543F22DD2239285CDD818674489FBC127E * **Irina Shestak** ([@lrlna](https://github.com/lrlna)) &lt;shestak.irina@gmail.com&gt; # npm-pick-manifest [![npm version](https://img.shields.io/npm/v/npm-pick-manifest.svg)](https://npm.im/npm-pick-manifest) [![license](https://img.shields.io/npm/l/npm-pick-manifest.svg)](https://npm.im/npm-pick-manifest) [![Travis](https://img.shields.io/travis/npm/npm-pick-manifest.svg)](https://travis-ci.org/npm/npm-pick-manifest) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/npm/npm-pick-manifest?svg=true)](https://ci.appveyor.com/project/npm/npm-pick-manifest) [![Coverage Status](https://coveralls.io/repos/github/npm/npm-pick-manifest/badge.svg?branch=latest)](https://coveralls.io/github/npm/npm-pick-manifest?branch=latest) [`npm-pick-manifest`](https://github.com/npm/npm-pick-manifest) is a standalone implementation of [npm](https://npmjs.com)'s semver range resolution algorithm. ## Install `$ npm install --save npm-pick-manifest` ## Table of Contents * [Example](#example) * [Features](#features) * [Contributing](#contributing) * [API](#api) * [`pickManifest()`](#pick-manifest) ### Example ```javascript const pickManifest = require('npm-pick-manifest') fetch('https://registry.npmjs.org/npm-pick-manifest').then(res => { return res.json() }).then(packument => { return pickManifest(packument, '^1.0.0') }) // get same manifest as npm would get if you `npm i npm-pick-manifest@^1.0.0` ``` ### Features * Uses npm's exact semver resolution algorithm * Supports ranges, tags, and versions ### Contributing The npm-pick-manifest team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. 
Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear. ### API #### <a name="pick-manifest"></a> `> pickManifest(packument, selector, [opts]) -> manifest` Returns the manifest that matches `selector`, or throws an error. Packuments are anything returned by metadata URLs from the npm registry. That is, they're objects with the following shape (only fields used by `npm-pick-manifest` included): ```javascript { name: 'some-package', 'dist-tags': { foo: '1.0.1' }, versions: { '1.0.0': { version: '1.0.0' }, '1.0.1': { version: '1.0.1' }, '1.0.2': { version: '1.0.2' }, '2.0.0': { version: '2.0.0' } } } ``` The algorithm will follow npm's algorithm for semver resolution, and only `tag`, `range`, and `version` selectors are supported. The function will throw `ETARGET` if there was no matching manifest, and `ENOVERSIONS` if the packument object has no valid versions in `versions`. If `opts.defaultTag` is provided, it will be used instead of `latest`. That is, if that tag matches the selector, it will be used, even if a higher available version matches the range. If `opts.enjoyBy` is provided, it should be something that can be passed to `new Date(x)`, such as a `Date` object or a timestamp string. It will be used to filter the selected versions such that only versions less than or equal to `enjoyBy` are considered. If `opts.includeDeprecated` passed in as true, deprecated versions will be selected. By default, deprecated versions other than `defaultTag` are ignored. # lockfile A very polite lock file utility, which endeavors to not litter, and to wait patiently for others. ## Usage ```javascript var lockFile = require('lockfile') // opts is optional, and defaults to {} lockFile.lock('some-file.lock', opts, function (er) { // if the er happens, then it failed to acquire a lock. // if there was not an error, then the file was created, // and won't be deleted until we unlock it. // do my stuff, free of interruptions // then, some time later, do: lockFile.unlock('some-file.lock', function (er) { // er means that an error happened, and is probably bad. }) }) ``` ## Methods Sync methods return the value/throw the error, others don't. Standard node fs stuff. All known locks are removed when the process exits. Of course, it's possible for certain types of failures to cause this to fail, but a best effort is made to not be a litterbug. ### lockFile.lock(path, [opts], cb) Acquire a file lock on the specified path ### lockFile.lockSync(path, [opts]) Acquire a file lock on the specified path ### lockFile.unlock(path, cb) Close and unlink the lockfile. ### lockFile.unlockSync(path) Close and unlink the lockfile. ### lockFile.check(path, [opts], cb) Check if the lockfile is locked and not stale. Callback is called with `cb(error, isLocked)`. ### lockFile.checkSync(path, [opts]) Check if the lockfile is locked and not stale. Returns boolean. ## Options ### opts.wait A number of milliseconds to wait for locks to expire before giving up. Only used by lockFile.lock. Poll for `opts.wait` ms. If the lock is not cleared by the time the wait expires, then it returns with the original error. ### opts.pollPeriod When using `opts.wait`, this is the period in ms in which it polls to check if the lock has expired. Defaults to `100`. ### opts.stale A number of milliseconds before locks are considered to have expired. ### opts.retries Used by lock and lockSync. Retry `n` number of times before giving up. ### opts.retryWait Used by lock. Wait `n` milliseconds before retrying. 
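To tie the options above together, here is an illustrative sketch (the file name and values are made up; the option names are the ones documented above):

```javascript
var lockFile = require('lockfile')

var opts = {
  wait: 10000,      // give up if the lock isn't released within 10s
  pollPeriod: 100,  // while waiting, check for release every 100ms
  stale: 60000,     // treat locks older than 60s as expired
  retries: 3,       // retry acquisition up to 3 times
  retryWait: 250    // pause 250ms between retries
}

lockFile.lock('build.lock', opts, function (er) {
  if (er) return console.error('could not acquire lock', er)

  // ...do work that must not run concurrently...

  lockFile.unlock('build.lock', function (er) {
    if (er) console.error('failed to release lock', er)
  })
})
```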
# es-to-primitive <sup>[![Version Badge][npm-version-svg]][package-url]</sup> [![Build Status][travis-svg]][travis-url] [![dependency status][deps-svg]][deps-url] [![dev dependency status][dev-deps-svg]][dev-deps-url] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][npm-badge-png]][package-url] ECMAScript “ToPrimitive” algorithm. Provides ES5 and ES2015 versions. When different versions of the spec conflict, the default export will be the latest version of the abstract operation. Alternative versions will also be available under an `es5`/`es2015` exported property if you require a specific version. ## Example ```js var toPrimitive = require('es-to-primitive'); var assert = require('assert'); assert(toPrimitive(function () {}) === String(function () {})); var date = new Date(); assert(toPrimitive(date) === String(date)); assert(toPrimitive({ valueOf: function () { return 3; } }) === 3); assert(toPrimitive(['a', 'b', 3]) === String(['a', 'b', 3])); var sym = Symbol(); assert(toPrimitive(Object(sym)) === sym); ``` ## Tests Simply clone the repo, `npm install`, and run `npm test` [package-url]: https://npmjs.org/package/es-to-primitive [npm-version-svg]: http://versionbadg.es/ljharb/es-to-primitive.svg [travis-svg]: https://travis-ci.org/ljharb/es-to-primitive.svg [travis-url]: https://travis-ci.org/ljharb/es-to-primitive [deps-svg]: https://david-dm.org/ljharb/es-to-primitive.svg [deps-url]: https://david-dm.org/ljharb/es-to-primitive [dev-deps-svg]: https://david-dm.org/ljharb/es-to-primitive/dev-status.svg [dev-deps-url]: https://david-dm.org/ljharb/es-to-primitive#info=devDependencies [testling-svg]: https://ci.testling.com/ljharb/es-to-primitive.png [testling-url]: https://ci.testling.com/ljharb/es-to-primitive [npm-badge-png]: https://nodei.co/npm/es-to-primitive.png?downloads=true&stars=true [license-image]: http://img.shields.io/npm/l/es-to-primitive.svg [license-url]: LICENSE [downloads-image]: http://img.shields.io/npm/dm/es-to-primitive.svg [downloads-url]: http://npm-stat.com/charts.html?package=es-to-primitive # npm audit security report Given a response from the npm security api, render it into a variety of security reports [![Build Status](https://travis-ci.org/npm/npm-audit-report.svg?branch=master)](https://travis-ci.org/npm/npm-audit-report) [![Build status](https://ci.appveyor.com/api/projects/status/qictiokvxmqkiuvi/branch/master?svg=true)](https://ci.appveyor.com/project/evilpacket/npm-audit-report/branch/master) [![Coverage Status](https://coveralls.io/repos/github/npm/npm-audit-report/badge.svg?branch=master)](https://coveralls.io/github/npm/npm-audit-report?branch=master) The response is an object that contains an output string (the report) and a suggested exitCode. 
```
{
  report: 'string that contains the security report',
  exitCode: 1
}
```

## Basic usage example

```
'use strict'
const Report = require('npm-audit-report')
const options = {
  reporter: 'json'
}

Report(response, options, (result) => {
  console.log(result.report)
  process.exitCode = result.exitCode
})
```

## options

| option | values | default | description |
| :--- | :--- | :--- | :--- |
| reporter | `install`, `detail`, `json`, `quiet` | `install` | specify which output format you want to use |
| withColor | `true`, `false` | `true` | indicates if some report elements should use colors |
| withUnicode | `true`, `false` | `true` | indicates if unicode characters should be used |

# protoduck

[![npm version](https://img.shields.io/npm/v/protoduck.svg)](https://npm.im/protoduck) [![license](https://img.shields.io/npm/l/protoduck.svg)](https://npm.im/protoduck) [![Travis](https://img.shields.io/travis/zkat/protoduck.svg)](https://travis-ci.org/zkat/protoduck) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/zkat/protoduck?svg=true)](https://ci.appveyor.com/project/zkat/protoduck) [![Coverage Status](https://coveralls.io/repos/github/zkat/protoduck/badge.svg?branch=latest)](https://coveralls.io/github/zkat/protoduck?branch=latest)

[`protoduck`](https://github.com/zkat/protoduck) is a JavaScript library for making groups of methods, called "protocols". If you're familiar with the concept of ["duck typing"](https://en.wikipedia.org/wiki/Duck_typing), then it might make sense to think of protocols as things that explicitly define what methods you need in order to "clearly be a duck".

## Install

`$ npm install -S protoduck`

## Table of Contents

* [Example](#example)
* [Features](#features)
* [Guide](#guide)
  * [Introduction](#introduction)
  * [Defining protocols](#defining-protocols)
  * [Implementations](#protocol-impls)
  * [Multiple dispatch](#multiple-dispatch)
  * [Constraints](#constraints)
* [API](#api)
  * [`define()`](#define)
  * [`proto.impl()`](#impl)

### Example

```javascript
const protoduck = require('protoduck')

// Quackable is a protocol that defines three methods
const Quackable = protoduck.define({
  walk: [],
  talk: [],
  isADuck: [() => true] // default implementation -- it's optional!
})

// `duck` must implement `Quackable` for this function to work. It doesn't
// matter what type or class duck is, as long as it implements Quackable.
function doStuffToDucks (duck) {
  if (!duck.isADuck()) {
    throw new Error('I want a duck!')
  } else {
    console.log(duck.walk())
    console.log(duck.talk())
  }
}

// ...In a different package:
const ducks = require('./ducks')

class Duck {}

// Implement the protocol on the Duck class.
ducks.Quackable.impl(Duck, {
  walk () { return "*hobble hobble*" },
  talk () { return "QUACK QUACK" }
})

// main.js
ducks.doStuffToDucks(new Duck()) // works!
```

### Features

* Verifies implementations in case methods are missing or wrong ones added
* Helpful, informative error messages
* Optional default method implementations
* Fresh JavaScript Feel™ -- methods work just like native methods when called
* Methods can dispatch on arguments, not just `this` ([multimethods](https://npm.im/genfun))
* Type constraints

### Guide

#### Introduction

Like most Object-oriented languages, JavaScript comes with its own way of defining methods: You simply add regular `function`s as properties to regular objects, and when you do `obj.method()`, it calls the right code!
ES6/ES2015 further extended this by adding a `class` syntax that allowed this same system to work with more familiar syntax sugar: `class Foo { method() { ... } }`.

The point of "protocols" is to have a more explicit definition of what methods "go together". That is, a protocol is a description of a type of object your code interacts with. If someone passes an object into your library, and it fits your defined protocol, the assumption is that the object will work just as well.

Duck typing is a common term for this sort of thing: If it walks like a duck, and it talks like a duck, then it may as well be a duck, as far as any of our code is concerned.

Many other languages have similar or identical concepts under different names: Java's interfaces, Haskell's typeclasses, Rust's traits. Elixir and Clojure both call them "protocols" as well.

One big advantage to using these protocols is that they let users define their own versions of some abstraction, without requiring the type to inherit from another -- protocols are independent of inheritance, even though they're able to work together with it. If you've ever found yourself in some sort of inheritance mess, this is exactly the sort of thing you use to escape it.

#### Defining Protocols

The first step to using `protoduck` is to define a protocol. Protocol definitions look like this:

```javascript
// import the library first!
const protoduck = require('protoduck')

// `Ducklike` is the name of our protocol. It defines what it means for
// something to be "like a duck", as far as our code is concerned.
const Ducklike = protoduck.define([], {
  walk: [], // This says that the protocol requires a "walk" method.
  talk: [], // and ducks also need to talk
  peck: [] // and they can even be pretty scary
})
```

Protocols by themselves don't really *do* anything, they simply define what methods are included in the protocol, and thus what will need to be implemented.

#### Protocol Impls

The simplest type of definitions for protocols are as regular methods. In this style, protocols end up working exactly like normal JavaScript methods: they're added as properties of the target type/object, and we call them using the `foo.method()` syntax. `this` is accessible inside the methods, as usual.

Implementation syntax is very similar to protocol definitions, using `.impl`:

```javascript
class Dog {}

// Implementing `Ducklike` for `Dog`s
Ducklike.impl(Dog, [], {
  walk () { return '*pads on all fours*' },
  talk () { return 'woof woof. I mean "quack" >_>' },
  peck (victim) { return 'Can I just bite ' + victim + ' instead?...' }
})
```

So now, our `Dog` class has three extra methods: `walk`, `talk`, and `peck`, and we can just call them:

```javascript
const pupper = new Dog()

pupper.walk() // *pads on all fours*
pupper.talk() // woof woof. I mean "quack" >_>
pupper.peck('this string') // Can I just bite this string instead?...
```

#### Multiple Dispatch

You may have noticed before that we have these `[]` in various places that don't seem to have any obvious purpose.

These arrays allow protocols to be implemented not just for a single value of `this`, but across *all arguments*. That is, you can have methods in these protocols that use both `this`, and the first argument (or any other arguments) in order to determine what code to actually execute.

This type of method is called a multimethod, and is one of the differences between protoduck and the default `class` syntax.
To use it: in the protocol *definitions*, you put matching strings in different spots where those empty arrays were, and when you *implement* the protocol, you give the definition the actual types/objects you want to implement it on, and it takes care of mapping types to the strings you defined, and making sure the right code is run: ```javascript const Playful = protoduck.define(['friend'], {// <---\ playWith: ['friend'] // <------------ these correspond to each other }) class Cat {} class Human {} class Dog {} // The first protocol is for Cat/Human combination Playful.impl(Cat, [Human], { playWith (human) { return '*headbutt* *purr* *cuddle* omg ilu, ' + human.name } }) // And we define it *again* for a different combination Playful.impl(Cat, [Dog], { playWith (dog) { return '*scratches* *hisses* omg i h8 u, ' + dog.name } }) // depending on what you call it with, it runs different methods: const cat = new Cat() const human = new Human() const dog = new Dog() cat.playWith(human) // *headbutt* *purr* *cuddle* omg ilu, Sam cat.playWith(dog) // *scratches* *hisses* omg i h8 u, Pupper ``` #### Constraints Sometimes, you want to have all the functionality of a certain protocol, but you want to add a few requirements or other bits an pieces. Usually, you would have to define the entire functionality of the "parent" protocol in your own protocol in order to pull this off. This isn't very DRY and thus prone to errors, missing or out-of-sync functionality, or other issues. You could also just tell users "hey, if you implement this, make sure to implement that", but there's no guarantee they'll know about it, or know which arguments map to what. This is where constraints come in: You can define a protocol that expects anything that implements it to *also* implement one or more "parent" protocols. ```javascript const Show = proto.define({ // This syntax allows default impls without using arrays. toString () { return Object.prototype.toString.call(this) }, toJSON () { return JSON.stringify(this) } }) const Log = proto.define({ log () { console.log(this.toString()) } }, { where: Show() // Also valid: // [Show('this'), Show('a')] // [Show('this', ['a', 'b'])] }) // This fails with an error: must implement Show: Log.impl(MyThing) // So derive Show first... Show.impl(MyThing) // And now it's ok! Log.impl(MyThing) ``` ### API #### <a name="define"></a> `define(<types>?, <spec>, <opts>)` Defines a new protocol on across arguments of types defined by `<types>`, which will expect implementations for the functions specified in `<spec>`. If `<types>` is missing, it will be treated the same as if it were an empty array. The types in `<spec>` entries must map, by string name, to the type names specified in `<types>`, or be an empty array if `<types>` is omitted. The types in `<spec>` will then be used to map between method implementations for the individual functions, and the provided types in the impl. Protocols can include an `opts` object as the last argument, with the following available options: * `opts.name` `{String}` - The name to use when referring to the protocol. * `opts.where` `{Array[Constraint]|Constraint}` - Protocol constraints to use. * `opts.metaobject` - Accepts an object implementing the `Protoduck` protocol, which can be used to alter protocol definition mechanisms in `protoduck`. 
##### Example ```javascript const Eq = protoduck.define(['a'], { eq: ['a'] }) ``` #### <a name="impl"></a> `proto.impl(<target>, <types>?, <implementations>?)` Adds a new implementation to the given protocol across `<types>`. `<implementations>` must be an object with functions matching the protocol's API. If given, the types in `<types>` will be mapped to their corresponding method arguments according to the original protocol definition. If a protocol is derivable -- that is, all its functions have default impls, then the `<implementations>` object can be omitted entirely, and the protocol will be automatically derived for the given `<types>` ##### Example ```javascript import protoduck from 'protoduck' // Singly-dispatched protocols const Show = protoduck.define({ show: [] }) class Foo { constructor (name) { this.name = name } } Show.impl(Foo, { show () { return `[object Foo(${this.name})]` } }) const f = new Foo('alex') f.show() === '[object Foo(alex)]' ``` ```javascript import protoduck from 'protoduck' // Multi-dispatched protocols const Comparable = protoduck.define(['target'], { compare: ['target'], }) class Foo {} class Bar {} class Baz {} Comparable.impl(Foo, [Bar], { compare (bar) { return 'bars are ok' } }) Comparable.impl(Foo, [Baz], { compare (baz) { return 'but bazzes are better' } }) const foo = new Foo() const bar = new Bar() const baz = new Baz() foo.compare(bar) // 'bars are ok' foo.compare(baz) // 'but bazzes are better' ``` This package parses [SPDX license expression](https://spdx.org/spdx-specification-21-web-version#h.jxpfx0ykyb60) strings describing license terms, like [package.json license strings](https://docs.npmjs.com/files/package.json#license), into consistently structured ECMAScript objects. The npm command-line interface depends on this package, as do many automatic license-audit tools. In a nutshell: ```javascript var parse = require('spdx-expression-parse') var assert = require('assert') assert.deepEqual( // Licensed under the terms of the Two-Clause BSD License. parse('BSD-2-Clause'), {license: 'BSD-2-Clause'} ) assert.throws(function () { // An invalid SPDX license expression. // Should be `Apache-2.0`. parse('Apache 2') }) assert.deepEqual( // Dual licensed under either: // - LGPL 2.1 // - a combination of Three-Clause BSD and MIT parse('(LGPL-2.1 OR BSD-3-Clause AND MIT)'), { left: {license: 'LGPL-2.1'}, conjunction: 'or', right: { left: {license: 'BSD-3-Clause'}, conjunction: 'and', right: {license: 'MIT'} } } ) ``` The syntax comes from the [Software Package Data eXchange (SPDX)](https://spdx.org/), a standard from the [Linux Foundation](https://www.linuxfoundation.org) for shareable data about software package license terms. SPDX aims to make sharing and auditing license data easy, especially for users of open-source software. The bulk of the SPDX standard describes syntax and semantics of XML metadata files. This package implements two lightweight, plain-text components of that larger standard: 1. The [license list](https://spdx.org/licenses), a mapping from specific string identifiers, like `Apache-2.0`, to standard form license texts and bolt-on license exceptions. The [spdx-license-ids](https://www.npmjs.com/package/spdx-exceptions) and [spdx-exceptions](https://www.npmjs.com/package/spdx-license-ids) packages implement the license list. `spdx-expression-parse` depends on and `require()`s them. 
Any license identifier from the license list is a valid license expression: ```javascript var identifiers = [] .concat(require('spdx-license-ids')) .concat(require('spdx-license-ids/deprecated')) identifiers.forEach(function (id) { assert.deepEqual(parse(id), {license: id}) }) ``` So is any license identifier `WITH` a standardized license exception: ```javascript identifiers.forEach(function (id) { require('spdx-exceptions').forEach(function (e) { assert.deepEqual( parse(id + ' WITH ' + e), {license: id, exception: e} ) }) }) ``` 2. The license expression language, for describing simple and complex license terms, like `MIT` for MIT-licensed and `(GPL-2.0 OR Apache-2.0)` for dual-licensing under GPL 2.0 and Apache 2.0. `spdx-expression-parse` itself implements license expression language, exporting a parser. ```javascript assert.deepEqual( // Licensed under a combination of: // - the MIT License AND // - a combination of: // - LGPL 2.1 (or a later version) AND // - Three-Clause BSD parse('(MIT AND (LGPL-2.1+ AND BSD-3-Clause))'), { left: {license: 'MIT'}, conjunction: 'and', right: { left: {license: 'LGPL-2.1', plus: true}, conjunction: 'and', right: {license: 'BSD-3-Clause'} } } ) ``` The Linux Foundation and its contributors license the SPDX standard under the terms of [the Creative Commons Attribution License 3.0 Unported (SPDX: "CC-BY-3.0")](http://spdx.org/licenses/CC-BY-3.0). "SPDX" is a United States federally registered trademark of the Linux Foundation. The authors of this package license their work under the terms of the MIT License. write-file-atomic ----------------- This is an extension for node's `fs.writeFile` that makes its operation atomic and allows you set ownership (uid/gid of the file). ### var writeFileAtomic = require('write-file-atomic')<br>writeFileAtomic(filename, data, [options], callback) * filename **String** * data **String** | **Buffer** * options **Object** | **String** * chown **Object** default, uid & gid of existing file, if any * uid **Number** * gid **Number** * encoding **String** | **Null** default = 'utf8' * fsync **Boolean** default = true * mode **Number** default, from existing file, if any * Promise **Object** default = native Promise object * callback **Function** Atomically and asynchronously writes data to a file, replacing the file if it already exists. data can be a string or a buffer. The file is initially named `filename + "." + murmurhex(__filename, process.pid, ++invocations)`. Note that `require('worker_threads').threadId` is used in addition to `process.pid` if running inside of a worker thread. If writeFile completes successfully then, if passed the **chown** option it will change the ownership of the file. Finally it renames the file back to the filename you specified. If it encounters errors at any of these steps it will attempt to unlink the temporary file and then pass the error back to the caller. If multiple writes are concurrently issued to the same file, the write operations are put into a queue and serialized in the order they were called, using Promises. Native promises are used by default, but you can inject your own promise-like object with the **Promise** option. Writes to different files are still executed in parallel. If provided, the **chown** option requires both **uid** and **gid** properties or else you'll get an error. If **chown** is not specified it will default to using the owner of the previous file. 
To prevent chown from being run you can also pass `false`, in which case the file will be created with the current user's credentials.

If **mode** is not specified, it will default to using the permissions from an existing file, if any. Explicitly setting this to `false` removes this default, resulting in a file created with the system default permissions.

If options is a String, it's assumed to be the **encoding** option. The **encoding** option is ignored if **data** is a buffer. It defaults to 'utf8'.

If the **fsync** option is **false**, writeFile will skip the final fsync call.

Example:

```javascript
writeFileAtomic('message.txt', 'Hello Node', {chown:{uid:100,gid:50}}, function (err) {
  if (err) throw err;
  console.log('It\'s saved!');
});
```

### var writeFileAtomicSync = require('write-file-atomic').sync<br>writeFileAtomicSync(filename, data, [options])

The synchronous version of **writeFileAtomic**.

# readable-stream

***Node-core v8.11.1 streams for userland***

[![Build Status](https://travis-ci.org/nodejs/readable-stream.svg?branch=master)](https://travis-ci.org/nodejs/readable-stream)

[![NPM](https://nodei.co/npm/readable-stream.png?downloads=true&downloadRank=true)](https://nodei.co/npm/readable-stream/) [![NPM](https://nodei.co/npm-dl/readable-stream.png?&months=6&height=3)](https://nodei.co/npm/readable-stream/)

[![Sauce Test Status](https://saucelabs.com/browser-matrix/readable-stream.svg)](https://saucelabs.com/u/readable-stream)

```bash
npm install --save readable-stream
```

***Node-core streams for userland***

This package is a mirror of the Streams2 and Streams3 implementations in Node-core.

Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.11.1/docs/api/stream.html).

If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use **readable-stream** *only* and avoid the *"stream"* module in Node-core, for background see [this blogpost](http://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html).

As of version 2.0.0 **readable-stream** uses semantic versioning.

# Streams Working Group

`readable-stream` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include:

* Addressing stream issues on the Node.js issue tracker.
* Authoring and editing stream documentation within the Node.js project.
* Reviewing changes to stream subclasses within the Node.js project.
* Redirecting changes to streams from the Node.js project to this project.
* Assisting in the implementation of stream providers within Node.js.
* Recommending versions of `readable-stream` to be included in Node.js.
* Messaging about the future of streams to give the community advance notice of changes.
<a name="members"></a> ## Team Members * **Chris Dickinson** ([@chrisdickinson](https://github.com/chrisdickinson)) &lt;christopher.s.dickinson@gmail.com&gt; - Release GPG key: 9554F04D7259F04124DE6B476D5A82AC7E37093B * **Calvin Metcalf** ([@calvinmetcalf](https://github.com/calvinmetcalf)) &lt;calvin.metcalf@gmail.com&gt; - Release GPG key: F3EF5F62A87FC27A22E643F714CE4FF5015AA242 * **Rod Vagg** ([@rvagg](https://github.com/rvagg)) &lt;rod@vagg.org&gt; - Release GPG key: DD8F2338BAE7501E3DD5AC78C273792F7D83545D * **Sam Newman** ([@sonewman](https://github.com/sonewman)) &lt;newmansam@outlook.com&gt; * **Mathias Buus** ([@mafintosh](https://github.com/mafintosh)) &lt;mathiasbuus@gmail.com&gt; * **Domenic Denicola** ([@domenic](https://github.com/domenic)) &lt;d@domenic.me&gt; * **Matteo Collina** ([@mcollina](https://github.com/mcollina)) &lt;matteo.collina@gmail.com&gt; - Release GPG key: 3ABC01543F22DD2239285CDD818674489FBC127E * **Irina Shestak** ([@lrlna](https://github.com/lrlna)) &lt;shestak.irina@gmail.com&gt; aws4 ---- [![Build Status](https://secure.travis-ci.org/mhart/aws4.png?branch=master)](http://travis-ci.org/mhart/aws4) A small utility to sign vanilla node.js http(s) request options using Amazon's [AWS Signature Version 4](http://docs.amazonwebservices.com/general/latest/gr/signature-version-4.html). Can also be used [in the browser](./browser). This signature is supported by nearly all Amazon services, including [S3](http://docs.aws.amazon.com/AmazonS3/latest/API/), [EC2](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/), [DynamoDB](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/API.html), [Kinesis](http://docs.aws.amazon.com/kinesis/latest/APIReference/), [Lambda](http://docs.aws.amazon.com/lambda/latest/dg/API_Reference.html), [SQS](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/), [SNS](http://docs.aws.amazon.com/sns/latest/api/), [IAM](http://docs.aws.amazon.com/IAM/latest/APIReference/), [STS](http://docs.aws.amazon.com/STS/latest/APIReference/), [RDS](http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/), [CloudWatch](http://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/), [CloudWatch Logs](http://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/), [CodeDeploy](http://docs.aws.amazon.com/codedeploy/latest/APIReference/), [CloudFront](http://docs.aws.amazon.com/AmazonCloudFront/latest/APIReference/), [CloudTrail](http://docs.aws.amazon.com/awscloudtrail/latest/APIReference/), [ElastiCache](http://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/), [EMR](http://docs.aws.amazon.com/ElasticMapReduce/latest/API/), [Glacier](http://docs.aws.amazon.com/amazonglacier/latest/dev/amazon-glacier-api.html), [CloudSearch](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/APIReq.html), [Elastic Load Balancing](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/APIReference/), [Elastic Transcoder](http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/api-reference.html), [CloudFormation](http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/), [Elastic Beanstalk](http://docs.aws.amazon.com/elasticbeanstalk/latest/api/), [Storage Gateway](http://docs.aws.amazon.com/storagegateway/latest/userguide/AWSStorageGatewayAPI.html), [Data Pipeline](http://docs.aws.amazon.com/datapipeline/latest/APIReference/), [Direct Connect](http://docs.aws.amazon.com/directconnect/latest/APIReference/), [Redshift](http://docs.aws.amazon.com/redshift/latest/APIReference/), 
[OpsWorks](http://docs.aws.amazon.com/opsworks/latest/APIReference/), [SES](http://docs.aws.amazon.com/ses/latest/APIReference/), [SWF](http://docs.aws.amazon.com/amazonswf/latest/apireference/), [AutoScaling](http://docs.aws.amazon.com/AutoScaling/latest/APIReference/), [Mobile Analytics](http://docs.aws.amazon.com/mobileanalytics/latest/ug/server-reference.html), [Cognito Identity](http://docs.aws.amazon.com/cognitoidentity/latest/APIReference/), [Cognito Sync](http://docs.aws.amazon.com/cognitosync/latest/APIReference/), [Container Service](http://docs.aws.amazon.com/AmazonECS/latest/APIReference/), [AppStream](http://docs.aws.amazon.com/appstream/latest/developerguide/appstream-api-rest.html), [Key Management Service](http://docs.aws.amazon.com/kms/latest/APIReference/), [Config](http://docs.aws.amazon.com/config/latest/APIReference/), [CloudHSM](http://docs.aws.amazon.com/cloudhsm/latest/dg/api-ref.html), [Route53](http://docs.aws.amazon.com/Route53/latest/APIReference/requests-rest.html) and [Route53 Domains](http://docs.aws.amazon.com/Route53/latest/APIReference/requests-rpc.html). Indeed, the only AWS services that *don't* support v4 as of 2014-12-30 are [Import/Export](http://docs.aws.amazon.com/AWSImportExport/latest/DG/api-reference.html) and [SimpleDB](http://docs.aws.amazon.com/AmazonSimpleDB/latest/DeveloperGuide/SDB_API.html) (they only support [AWS Signature Version 2](https://github.com/mhart/aws2)). It also provides defaults for a number of core AWS headers and request parameters, making it very easy to query AWS services, or build out a fully-featured AWS library. Example ------- ```javascript var http = require('http'), https = require('https'), aws4 = require('aws4') // given an options object you could pass to http.request var opts = {host: 'sqs.us-east-1.amazonaws.com', path: '/?Action=ListQueues'} // alternatively (as aws4 can infer the host): opts = {service: 'sqs', region: 'us-east-1', path: '/?Action=ListQueues'} // alternatively (as us-east-1 is default): opts = {service: 'sqs', path: '/?Action=ListQueues'} aws4.sign(opts) // assumes AWS credentials are available in process.env console.log(opts) /* { host: 'sqs.us-east-1.amazonaws.com', path: '/?Action=ListQueues', headers: { Host: 'sqs.us-east-1.amazonaws.com', 'X-Amz-Date': '20121226T061030Z', Authorization: 'AWS4-HMAC-SHA256 Credential=ABCDEF/20121226/us-east-1/sqs/aws4_request, ...' } } */ // we can now use this to query AWS using the standard node.js http API http.request(opts, function(res) { res.pipe(process.stdout) }).end() /* <?xml version="1.0"?> <ListQueuesResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"> ... */ ``` More options ------------ ```javascript // you can also pass AWS credentials in explicitly (otherwise taken from process.env) aws4.sign(opts, {accessKeyId: '', secretAccessKey: ''}) // can also add the signature to query strings aws4.sign({service: 's3', path: '/my-bucket?X-Amz-Expires=12345', signQuery: true}) // create a utility function to pipe to stdout (with https this time) function request(o) { https.request(o, function(res) { res.pipe(process.stdout) }).end(o.body || '') } // aws4 can infer the HTTP method if a body is passed in // method will be POST and Content-Type: 'application/x-www-form-urlencoded; charset=utf-8' request(aws4.sign({service: 'iam', body: 'Action=ListGroups&Version=2010-05-08'})) /* <ListGroupsResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/"> ... 
*/ // can specify any custom option or header as per usual request(aws4.sign({ service: 'dynamodb', region: 'ap-southeast-2', method: 'POST', path: '/', headers: { 'Content-Type': 'application/x-amz-json-1.0', 'X-Amz-Target': 'DynamoDB_20120810.ListTables' }, body: '{}' })) /* {"TableNames":[]} ... */ // works with all other services that support Signature Version 4 request(aws4.sign({service: 's3', path: '/', signQuery: true})) /* <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> ... */ request(aws4.sign({service: 'ec2', path: '/?Action=DescribeRegions&Version=2014-06-15'})) /* <DescribeRegionsResponse xmlns="http://ec2.amazonaws.com/doc/2014-06-15/"> ... */ request(aws4.sign({service: 'sns', path: '/?Action=ListTopics&Version=2010-03-31'})) /* <ListTopicsResponse xmlns="http://sns.amazonaws.com/doc/2010-03-31/"> ... */ request(aws4.sign({service: 'sts', path: '/?Action=GetSessionToken&Version=2011-06-15'})) /* <GetSessionTokenResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/"> ... */ request(aws4.sign({service: 'cloudsearch', path: '/?Action=ListDomainNames&Version=2013-01-01'})) /* <ListDomainNamesResponse xmlns="http://cloudsearch.amazonaws.com/doc/2013-01-01/"> ... */ request(aws4.sign({service: 'ses', path: '/?Action=ListIdentities&Version=2010-12-01'})) /* <ListIdentitiesResponse xmlns="http://ses.amazonaws.com/doc/2010-12-01/"> ... */ request(aws4.sign({service: 'autoscaling', path: '/?Action=DescribeAutoScalingInstances&Version=2011-01-01'})) /* <DescribeAutoScalingInstancesResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/"> ... */ request(aws4.sign({service: 'elasticloadbalancing', path: '/?Action=DescribeLoadBalancers&Version=2012-06-01'})) /* <DescribeLoadBalancersResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2012-06-01/"> ... */ request(aws4.sign({service: 'cloudformation', path: '/?Action=ListStacks&Version=2010-05-15'})) /* <ListStacksResponse xmlns="http://cloudformation.amazonaws.com/doc/2010-05-15/"> ... */ request(aws4.sign({service: 'elasticbeanstalk', path: '/?Action=ListAvailableSolutionStacks&Version=2010-12-01'})) /* <ListAvailableSolutionStacksResponse xmlns="http://elasticbeanstalk.amazonaws.com/docs/2010-12-01/"> ... */ request(aws4.sign({service: 'rds', path: '/?Action=DescribeDBInstances&Version=2012-09-17'})) /* <DescribeDBInstancesResponse xmlns="http://rds.amazonaws.com/doc/2012-09-17/"> ... */ request(aws4.sign({service: 'monitoring', path: '/?Action=ListMetrics&Version=2010-08-01'})) /* <ListMetricsResponse xmlns="http://monitoring.amazonaws.com/doc/2010-08-01/"> ... */ request(aws4.sign({service: 'redshift', path: '/?Action=DescribeClusters&Version=2012-12-01'})) /* <DescribeClustersResponse xmlns="http://redshift.amazonaws.com/doc/2012-12-01/"> ... */ request(aws4.sign({service: 'cloudfront', path: '/2014-05-31/distribution'})) /* <DistributionList xmlns="http://cloudfront.amazonaws.com/doc/2014-05-31/"> ... */ request(aws4.sign({service: 'elasticache', path: '/?Action=DescribeCacheClusters&Version=2014-07-15'})) /* <DescribeCacheClustersResponse xmlns="http://elasticache.amazonaws.com/doc/2014-07-15/"> ... */ request(aws4.sign({service: 'elasticmapreduce', path: '/?Action=DescribeJobFlows&Version=2009-03-31'})) /* <DescribeJobFlowsResponse xmlns="http://elasticmapreduce.amazonaws.com/doc/2009-03-31"> ... */ request(aws4.sign({service: 'route53', path: '/2013-04-01/hostedzone'})) /* <ListHostedZonesResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/"> ... 
*/ request(aws4.sign({service: 'appstream', path: '/applications'})) /* {"_links":{"curie":[{"href":"http://docs.aws.amazon.com/appstream/latest/... ... */ request(aws4.sign({service: 'cognito-sync', path: '/identitypools'})) /* {"Count":0,"IdentityPoolUsages":[],"MaxResults":16,"NextToken":null} ... */ request(aws4.sign({service: 'elastictranscoder', path: '/2012-09-25/pipelines'})) /* {"NextPageToken":null,"Pipelines":[]} ... */ request(aws4.sign({service: 'lambda', path: '/2014-11-13/functions/'})) /* {"Functions":[],"NextMarker":null} ... */ request(aws4.sign({service: 'ecs', path: '/?Action=ListClusters&Version=2014-11-13'})) /* <ListClustersResponse xmlns="http://ecs.amazonaws.com/doc/2014-11-13/"> ... */ request(aws4.sign({service: 'glacier', path: '/-/vaults', headers: {'X-Amz-Glacier-Version': '2012-06-01'}})) /* {"Marker":null,"VaultList":[]} ... */ request(aws4.sign({service: 'storagegateway', body: '{}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'StorageGateway_20120630.ListGateways' }})) /* {"Gateways":[]} ... */ request(aws4.sign({service: 'datapipeline', body: '{}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'DataPipeline.ListPipelines' }})) /* {"hasMoreResults":false,"pipelineIdList":[]} ... */ request(aws4.sign({service: 'opsworks', body: '{}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'OpsWorks_20130218.DescribeStacks' }})) /* {"Stacks":[]} ... */ request(aws4.sign({service: 'route53domains', body: '{}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'Route53Domains_v20140515.ListDomains' }})) /* {"Domains":[]} ... */ request(aws4.sign({service: 'kinesis', body: '{}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'Kinesis_20131202.ListStreams' }})) /* {"HasMoreStreams":false,"StreamNames":[]} ... */ request(aws4.sign({service: 'cloudtrail', body: '{}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'CloudTrail_20131101.DescribeTrails' }})) /* {"trailList":[]} ... */ request(aws4.sign({service: 'logs', body: '{}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'Logs_20140328.DescribeLogGroups' }})) /* {"logGroups":[]} ... */ request(aws4.sign({service: 'codedeploy', body: '{}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'CodeDeploy_20141006.ListApplications' }})) /* {"applications":[]} ... */ request(aws4.sign({service: 'directconnect', body: '{}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'OvertureService.DescribeConnections' }})) /* {"connections":[]} ... */ request(aws4.sign({service: 'kms', body: '{}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'TrentService.ListKeys' }})) /* {"Keys":[],"Truncated":false} ... */ request(aws4.sign({service: 'config', body: '{}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'StarlingDoveService.DescribeDeliveryChannels' }})) /* {"DeliveryChannels":[]} ... */ request(aws4.sign({service: 'cloudhsm', body: '{}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'CloudHsmFrontendService.ListAvailableZones' }})) /* {"AZList":["us-east-1a","us-east-1b","us-east-1c"]} ... */ request(aws4.sign({ service: 'swf', body: '{"registrationStatus":"REGISTERED"}', headers: { 'Content-Type': 'application/x-amz-json-1.0', 'X-Amz-Target': 'SimpleWorkflowService.ListDomains' } })) /* {"domainInfos":[]} ... 
*/ request(aws4.sign({ service: 'cognito-identity', body: '{"MaxResults": 1}', headers: { 'Content-Type': 'application/x-amz-json-1.1', 'X-Amz-Target': 'AWSCognitoIdentityService.ListIdentityPools' } })) /* {"IdentityPools":[]} ... */ request(aws4.sign({ service: 'mobileanalytics', path: '/2014-06-05/events', body: JSON.stringify({events:[{ eventType: 'a', timestamp: new Date().toISOString(), session: {}, }]}), headers: { 'Content-Type': 'application/json', 'X-Amz-Client-Context': JSON.stringify({ client: {client_id: 'a', app_title: 'a'}, custom: {}, env: {platform: 'a'}, services: {}, }), } })) /* (HTTP 202, empty response) */ // Generate CodeCommit Git access password var signer = new aws4.RequestSigner({ service: 'codecommit', host: 'git-codecommit.us-east-1.amazonaws.com', method: 'GIT', path: '/v1/repos/MyAwesomeRepo', }) var password = signer.getDateTime() + 'Z' + signer.signature() ``` API --- ### aws4.sign(requestOptions, [credentials]) This calculates and populates the `Authorization` header of `requestOptions`, and any other necessary AWS headers and/or request options. Returns `requestOptions` as a convenience for chaining. `requestOptions` is an object holding the same options that the node.js [http.request](http://nodejs.org/docs/latest/api/http.html#http_http_request_options_callback) function takes. The following properties of `requestOptions` are used in the signing or populated if they don't already exist: - `hostname` or `host` (will be determined from `service` and `region` if not given) - `method` (will use `'GET'` if not given or `'POST'` if there is a `body`) - `path` (will use `'/'` if not given) - `body` (will use `''` if not given) - `service` (will be calculated from `hostname` or `host` if not given) - `region` (will be calculated from `hostname` or `host` or use `'us-east-1'` if not given) - `headers['Host']` (will use `hostname` or `host` or be calculated if not given) - `headers['Content-Type']` (will use `'application/x-www-form-urlencoded; charset=utf-8'` if not given and there is a `body`) - `headers['Date']` (used to calculate the signature date if given, otherwise `new Date` is used) Your AWS credentials (which can be found in your [AWS console](https://portal.aws.amazon.com/gp/aws/securityCredentials)) can be specified in one of two ways: - As the second argument, like this: ```javascript aws4.sign(requestOptions, { secretAccessKey: "<your-secret-access-key>", accessKeyId: "<your-access-key-id>", sessionToken: "<your-session-token>" }) ``` - From `process.env`, such as this: ``` export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>" export AWS_ACCESS_KEY_ID="<your-access-key-id>" export AWS_SESSION_TOKEN="<your-session-token>" ``` (will also use `AWS_ACCESS_KEY` and `AWS_SECRET_KEY` if available) The `sessionToken` property and `AWS_SESSION_TOKEN` environment variable are optional for signing with [IAM STS temporary credentials](http://docs.aws.amazon.com/STS/latest/UsingSTS/using-temp-creds.html). Installation ------------ With [npm](http://npmjs.org/) do: ``` npm install aws4 ``` Can also be used [in the browser](./browser). Thanks ------ Thanks to [@jed](https://github.com/jed) for his [dynamo-client](https://github.com/jed/dynamo-client) lib where I first committed and subsequently extracted this code. Also thanks to the [official node.js AWS SDK](https://github.com/aws/aws-sdk-js) for giving me a start on implementing the v4 signature. 
# lodash._createcache v3.1.2 The [modern build](https://github.com/lodash/lodash/wiki/Build-Differences) of [lodash’s](https://lodash.com/) internal `createCache` exported as a [Node.js](http://nodejs.org/)/[io.js](https://iojs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash._createcache ``` In Node.js/io.js: ```js var createCache = require('lodash._createcache'); ``` See the [package source](https://github.com/lodash/lodash/blob/3.1.2-npm-packages/lodash._createcache) for more details. ## Pure JS character encoding conversion [![Build Status](https://travis-ci.org/ashtuchkin/iconv-lite.svg?branch=master)](https://travis-ci.org/ashtuchkin/iconv-lite) * Doesn't need native code compilation. Works on Windows and in sandboxed environments like [Cloud9](http://c9.io). * Used in popular projects like [Express.js (body_parser)](https://github.com/expressjs/body-parser), [Grunt](http://gruntjs.com/), [Nodemailer](http://www.nodemailer.com/), [Yeoman](http://yeoman.io/) and others. * Faster than [node-iconv](https://github.com/bnoordhuis/node-iconv) (see below for performance comparison). * Intuitive encode/decode API * Streaming support for Node v0.10+ * [Deprecated] Can extend Node.js primitives (buffers, streams) to support all iconv-lite encodings. * In-browser usage via [Browserify](https://github.com/substack/node-browserify) (~180k gzip compressed with Buffer shim included). * Typescript [type definition file](https://github.com/ashtuchkin/iconv-lite/blob/master/lib/index.d.ts) included. * React Native is supported (need to explicitly `npm install` two more modules: `buffer` and `stream`). * License: MIT. [![NPM Stats](https://nodei.co/npm/iconv-lite.png?downloads=true&downloadRank=true)](https://npmjs.org/packages/iconv-lite/) ## Usage ### Basic API ```javascript var iconv = require('iconv-lite'); // Convert from an encoded buffer to js string. str = iconv.decode(Buffer.from([0x68, 0x65, 0x6c, 0x6c, 0x6f]), 'win1251'); // Convert from js string to an encoded buffer. buf = iconv.encode("Sample input string", 'win1251'); // Check if encoding is supported iconv.encodingExists("us-ascii") ``` ### Streaming API (Node v0.10+) ```javascript // Decode stream (from binary stream to js strings) http.createServer(function(req, res) { var converterStream = iconv.decodeStream('win1251'); req.pipe(converterStream); converterStream.on('data', function(str) { console.log(str); // Do something with decoded strings, chunk-by-chunk. }); }); // Convert encoding streaming example fs.createReadStream('file-in-win1251.txt') .pipe(iconv.decodeStream('win1251')) .pipe(iconv.encodeStream('ucs2')) .pipe(fs.createWriteStream('file-in-ucs2.txt')); // Sugar: all encode/decode streams have .collect(cb) method to accumulate data. http.createServer(function(req, res) { req.pipe(iconv.decodeStream('win1251')).collect(function(err, body) { assert(typeof body == 'string'); console.log(body); // full request body string }); }); ``` ### [Deprecated] Extend Node.js own encodings > NOTE: This doesn't work on latest Node versions. See [details](https://github.com/ashtuchkin/iconv-lite/wiki/Node-v4-compatibility). ```javascript // After this call all Node basic primitives will understand iconv-lite encodings. 
iconv.extendNodeEncodings(); // Examples: buf = new Buffer(str, 'win1251'); buf.write(str, 'gbk'); str = buf.toString('latin1'); assert(Buffer.isEncoding('iso-8859-15')); Buffer.byteLength(str, 'us-ascii'); http.createServer(function(req, res) { req.setEncoding('big5'); req.collect(function(err, body) { console.log(body); }); }); fs.createReadStream("file.txt", "shift_jis"); // External modules are also supported (if they use Node primitives, which they probably do). request = require('request'); request({ url: "http://github.com/", encoding: "cp932" }); // To remove extensions iconv.undoExtendNodeEncodings(); ``` ## Supported encodings * All node.js native encodings: utf8, ucs2 / utf16-le, ascii, binary, base64, hex. * Additional unicode encodings: utf16, utf16-be, utf-7, utf-7-imap. * All widespread singlebyte encodings: Windows 125x family, ISO-8859 family, IBM/DOS codepages, Macintosh family, KOI8 family, all others supported by iconv library. Aliases like 'latin1', 'us-ascii' also supported. * All widespread multibyte encodings: CP932, CP936, CP949, CP950, GB2312, GBK, GB18030, Big5, Shift_JIS, EUC-JP. See [all supported encodings on wiki](https://github.com/ashtuchkin/iconv-lite/wiki/Supported-Encodings). Most singlebyte encodings are generated automatically from [node-iconv](https://github.com/bnoordhuis/node-iconv). Thank you Ben Noordhuis and libiconv authors! Multibyte encodings are generated from [Unicode.org mappings](http://www.unicode.org/Public/MAPPINGS/) and [WHATWG Encoding Standard mappings](http://encoding.spec.whatwg.org/). Thank you, respective authors! ## Encoding/decoding speed Comparison with node-iconv module (1000x256kb, on MacBook Pro, Core i5/2.6 GHz, Node v0.12.0). Note: your results may vary, so please always check on your hardware. operation iconv@2.1.4 iconv-lite@0.4.7 ---------------------------------------------------------- encode('win1251') ~96 Mb/s ~320 Mb/s decode('win1251') ~95 Mb/s ~246 Mb/s ## BOM handling * Decoding: BOM is stripped by default, unless overridden by passing `stripBOM: false` in options (f.ex. `iconv.decode(buf, enc, {stripBOM: false})`). A callback might also be given as a `stripBOM` parameter - it'll be called if BOM character was actually found. * If you want to detect UTF-8 BOM when decoding other encodings, use [node-autodetect-decoder-stream](https://github.com/danielgindi/node-autodetect-decoder-stream) module. * Encoding: No BOM added, unless overridden by `addBOM: true` option. ## UTF-16 Encodings This library supports UTF-16LE, UTF-16BE and UTF-16 encodings. First two are straightforward, but UTF-16 is trying to be smart about endianness in the following ways: * Decoding: uses BOM and 'spaces heuristic' to determine input endianness. Default is UTF-16LE, but can be overridden with `defaultEncoding: 'utf-16be'` option. Strips BOM unless `stripBOM: false`. * Encoding: uses UTF-16LE and writes BOM by default. Use `addBOM: false` to override. ## Other notes When decoding, be sure to supply a Buffer to decode() method, otherwise [bad things usually happen](https://github.com/ashtuchkin/iconv-lite/wiki/Use-Buffers-when-decoding). Untranslatable characters are set to � or ?. No transliteration is currently supported. Node versions 0.10.31 and 0.11.13 are buggy, don't use them (see #65, #77). 
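As a quick, minimal sketch of the BOM and UTF-16 options described above (the byte values below are purely illustrative):

```javascript
var iconv = require('iconv-lite');

// UTF-16LE bytes: a BOM (0xFF 0xFE) followed by "hi"
var buf = Buffer.from([0xff, 0xfe, 0x68, 0x00, 0x69, 0x00]);

// 'utf16' uses the BOM to pick the endianness; the BOM itself is stripped by default
var str = iconv.decode(buf, 'utf16');                           // 'hi'

// Keep the BOM character in the decoded string
var withBom = iconv.decode(buf, 'utf16', { stripBOM: false });  // '\ufeffhi'

// Force big-endian when no BOM is present
var be = iconv.decode(Buffer.from([0x00, 0x68]), 'utf16', { defaultEncoding: 'utf-16be' }); // 'h'

// Add a BOM when encoding (none is added by default for most encodings)
var utf8WithBom = iconv.encode('hi', 'utf8', { addBOM: true });
```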
## Testing ```bash $ git clone git@github.com:ashtuchkin/iconv-lite.git $ cd iconv-lite $ npm install $ npm test $ # To view performance: $ node test/performance.js $ # To view test coverage: $ npm run coverage $ open coverage/lcov-report/index.html ``` # stream-shift Returns the next buffer/object in a stream's readable queue ``` npm install stream-shift ``` [![build status](http://img.shields.io/travis/mafintosh/stream-shift.svg?style=flat)](http://travis-ci.org/mafintosh/stream-shift) ## Usage ``` js var shift = require('stream-shift') console.log(shift(someStream)) // first item in its buffer ``` ## Credit Thanks [@dignifiedquire](https://github.com/dignifiedquire) for making this work on node 6 ## License MIT # jsprim: utilities for primitive JavaScript types This module provides miscellaneous facilities for working with strings, numbers, dates, and objects and arrays of these basic types. ### deepCopy(obj) Creates a deep copy of a primitive type, object, or array of primitive types. ### deepEqual(obj1, obj2) Returns whether two objects are equal. ### isEmpty(obj) Returns true if the given object has no properties and false otherwise. This is O(1) (unlike `Object.keys(obj).length === 0`, which is O(N)). ### hasKey(obj, key) Returns true if the given object has an enumerable, non-inherited property called `key`. [For information on enumerability and ownership of properties, see the MDN documentation.](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Enumerability_and_ownership_of_properties) ### forEachKey(obj, callback) Like Array.forEach, but iterates enumerable, owned properties of an object rather than elements of an array. Equivalent to: for (var key in obj) { if (Object.prototype.hasOwnProperty.call(obj, key)) { callback(key, obj[key]); } } ### flattenObject(obj, depth) Flattens an object up to a given level of nesting, returning an array of arrays of length "depth + 1", where the first "depth" elements correspond to flattened columns and the last element contains the remaining object . For example: flattenObject({ 'I': { 'A': { 'i': { 'datum1': [ 1, 2 ], 'datum2': [ 3, 4 ] }, 'ii': { 'datum1': [ 3, 4 ] } }, 'B': { 'i': { 'datum1': [ 5, 6 ] }, 'ii': { 'datum1': [ 7, 8 ], 'datum2': [ 3, 4 ], }, 'iii': { } } }, 'II': { 'A': { 'i': { 'datum1': [ 1, 2 ], 'datum2': [ 3, 4 ] } } } }, 3) becomes: [ [ 'I', 'A', 'i', { 'datum1': [ 1, 2 ], 'datum2': [ 3, 4 ] } ], [ 'I', 'A', 'ii', { 'datum1': [ 3, 4 ] } ], [ 'I', 'B', 'i', { 'datum1': [ 5, 6 ] } ], [ 'I', 'B', 'ii', { 'datum1': [ 7, 8 ], 'datum2': [ 3, 4 ] } ], [ 'I', 'B', 'iii', {} ], [ 'II', 'A', 'i', { 'datum1': [ 1, 2 ], 'datum2': [ 3, 4 ] } ] ] This function is strict: "depth" must be a non-negative integer and "obj" must be a non-null object with at least "depth" levels of nesting under all keys. ### flattenIter(obj, depth, func) This is similar to `flattenObject` except that instead of returning an array, this function invokes `func(entry)` for each `entry` in the array that `flattenObject` would return. `flattenIter(obj, depth, func)` is logically equivalent to `flattenObject(obj, depth).forEach(func)`. Importantly, this version never constructs the full array. Its memory usage is O(depth) rather than O(n) (where `n` is the number of flattened elements). There's another difference between `flattenObject` and `flattenIter` that's related to the special case where `depth === 0`. In this case, `flattenObject` omits the array wrapping `obj` (which is regrettable). 
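The difference between the two flattening helpers is easiest to see side by side. Here is a minimal sketch, assuming both functions are exported from the `jsprim` package documented in this section:

```javascript
var jsprim = require('jsprim');

var data = {
    'I':  { 'A': { 'datum': [ 1 ] }, 'B': { 'datum': [ 2 ] } },
    'II': { 'A': { 'datum': [ 3 ] } }
};

// flattenObject materializes every row up front (O(n) memory)
console.log(jsprim.flattenObject(data, 2));
// [ [ 'I',  'A', { datum: [ 1 ] } ],
//   [ 'I',  'B', { datum: [ 2 ] } ],
//   [ 'II', 'A', { datum: [ 3 ] } ] ]

// flattenIter visits each row as it is produced (O(depth) memory)
jsprim.flattenIter(data, 2, function (entry) {
    console.log(entry);
});
```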
### pluck(obj, key) Fetch nested property "key" from object "obj", traversing objects as needed. For example, `pluck(obj, "foo.bar.baz")` is roughly equivalent to `obj.foo.bar.baz`, except that: 1. If traversal fails, the resulting value is undefined, and no error is thrown. For example, `pluck({}, "foo.bar")` is just undefined. 2. If "obj" has property "key" directly (without traversing), the corresponding property is returned. For example, `pluck({ 'foo.bar': 1 }, 'foo.bar')` is 1, not undefined. This is also true recursively, so `pluck({ 'a': { 'foo.bar': 1 } }, 'a.foo.bar')` is also 1, not undefined. ### randElt(array) Returns an element from "array" selected uniformly at random. If "array" is empty, throws an Error. ### startsWith(str, prefix) Returns true if the given string starts with the given prefix and false otherwise. ### endsWith(str, suffix) Returns true if the given string ends with the given suffix and false otherwise. ### parseInteger(str, options) Parses the contents of `str` (a string) as an integer. On success, the integer value is returned (as a number). On failure, an error is **returned** describing why parsing failed. By default, leading and trailing whitespace characters are not allowed, nor are trailing characters that are not part of the numeric representation. This behaviour can be toggled by using the options below. The empty string (`''`) is not considered valid input. If the return value cannot be precisely represented as a number (i.e., is smaller than `Number.MIN_SAFE_INTEGER` or larger than `Number.MAX_SAFE_INTEGER`), an error is returned. Additionally, the string `'-0'` will be parsed as the integer `0`, instead of as the IEEE floating point value `-0`. This function accepts both upper and lowercase characters for digits, similar to `parseInt()`, `Number()`, and [strtol(3C)](https://illumos.org/man/3C/strtol). The following may be specified in `options`: Option | Type | Default | Meaning ------------------ | ------- | ------- | --------------------------- base | number | 10 | numeric base (radix) to use, in the range 2 to 36 allowSign | boolean | true | whether to interpret any leading `+` (positive) and `-` (negative) characters allowImprecise | boolean | false | whether to accept values that may have lost precision (past `MAX_SAFE_INTEGER` or below `MIN_SAFE_INTEGER`) allowPrefix | boolean | false | whether to interpret the prefixes `0b` (base 2), `0o` (base 8), `0t` (base 10), or `0x` (base 16) allowTrailing | boolean | false | whether to ignore trailing characters trimWhitespace | boolean | false | whether to trim any leading or trailing whitespace/line terminators leadingZeroIsOctal | boolean | false | whether a leading zero indicates octal Note that if `base` is unspecified, and `allowPrefix` or `leadingZeroIsOctal` are, then the leading characters can change the default base from 10. If `base` is explicitly specified and `allowPrefix` is true, then the prefix will only be accepted if it matches the specified base. `base` and `leadingZeroIsOctal` cannot be used together. **Context:** It's tricky to parse integers with JavaScript's built-in facilities for several reasons: - `parseInt()` and `Number()` by default allow the base to be specified in the input string by a prefix (e.g., `0x` for hex). - `parseInt()` allows trailing nonnumeric characters. - `Number(str)` returns 0 when `str` is the empty string (`''`). 
- Both functions return incorrect values when the input string represents a valid integer outside the range of integers that can be represented precisely. Specifically, `parseInt('9007199254740993')` returns 9007199254740992. - Both functions always accept `-` and `+` signs before the digit. - Some older JavaScript engines always interpret a leading 0 as indicating octal, which can be surprising when parsing input from users who expect a leading zero to be insignificant. While each of these may be desirable in some contexts, there are also times when none of them are wanted. `parseInteger()` grants greater control over what input's permissible. ### iso8601(date) Converts a Date object to an ISO8601 date string of the form "YYYY-MM-DDTHH:MM:SS.sssZ". This format is not customizable. ### parseDateTime(str) Parses a date expressed as a string, as either a number of milliseconds since the epoch or any string format that Date accepts, giving preference to the former where these two sets overlap (e.g., strings containing small numbers). ### hrtimeDiff(timeA, timeB) Given two hrtime readings (as from Node's `process.hrtime()`), where timeA is later than timeB, compute the difference and return that as an hrtime. It is illegal to invoke this for a pair of times where timeB is newer than timeA. ### hrtimeAdd(timeA, timeB) Add two hrtime intervals (as from Node's `process.hrtime()`), returning a new hrtime interval array. This function does not modify either input argument. ### hrtimeAccum(timeA, timeB) Add two hrtime intervals (as from Node's `process.hrtime()`), storing the result in `timeA`. This function overwrites (and returns) the first argument passed in. ### hrtimeNanosec(timeA), hrtimeMicrosec(timeA), hrtimeMillisec(timeA) This suite of functions converts a hrtime interval (as from Node's `process.hrtime()`) into a scalar number of nanoseconds, microseconds or milliseconds. Results are truncated, as with `Math.floor()`. ### validateJsonObject(schema, object) Uses JSON validation (via JSV) to validate the given object against the given schema. On success, returns null. On failure, *returns* (does not throw) a useful Error object. ### extraProperties(object, allowed) Check an object for unexpected properties. Accepts the object to check, and an array of allowed property name strings. If extra properties are detected, an array of extra property names is returned. If no properties other than those in the allowed list are present on the object, the returned array will be of zero length. ### mergeObjects(provided, overrides, defaults) Merge properties from objects "provided", "overrides", and "defaults". The intended use case is for functions that accept named arguments in an "args" object, but want to provide some default values and override other values. In that case, "provided" is what the caller specified, "overrides" are what the function wants to override, and "defaults" contains default values. The function starts with the values in "defaults", overrides them with the values in "provided", and then overrides those with the values in "overrides". For convenience, any of these objects may be falsey, in which case they will be ignored. The input objects are never modified, but properties in the returned object are not deep-copied. 
For example: mergeObjects(undefined, { 'objectMode': true }, { 'highWaterMark': 0 }) returns: { 'objectMode': true, 'highWaterMark': 0 } For another example: mergeObjects( { 'highWaterMark': 16, 'objectMode': 7 }, /* from caller */ { 'objectMode': true }, /* overrides */ { 'highWaterMark': 0 }); /* default */ returns: { 'objectMode': true, 'highWaterMark': 16 } # Contributing See separate [contribution guidelines](CONTRIBUTING.md). # hosted-git-info This will let you identify and transform various git hosts URLs between protocols. It also can tell you what the URL is for the raw path for particular file for direct access without git. ## Example ```javascript var hostedGitInfo = require("hosted-git-info") var info = hostedGitInfo.fromUrl("git@github.com:npm/hosted-git-info.git", opts) /* info looks like: { type: "github", domain: "github.com", user: "npm", project: "hosted-git-info" } */ ``` If the URL can't be matched with a git host, `null` will be returned. We can match git, ssh and https urls. Additionally, we can match ssh connect strings (`git@github.com:npm/hosted-git-info`) and shortcuts (eg, `github:npm/hosted-git-info`). Github specifically, is detected in the case of a third, unprefixed, form: `npm/hosted-git-info`. If it does match, the returned object has properties of: * info.type -- The short name of the service * info.domain -- The domain for git protocol use * info.user -- The name of the user/org on the git host * info.project -- The name of the project on the git host ## Version Contract The major version will be bumped any time… * The constructor stops accepting URLs that it previously accepted. * A method is removed. * A method can no longer accept the number and type of arguments it previously accepted. * A method can return a different type than it currently returns. Implications: * I do not consider the specific format of the urls returned from, say `.https()` to be a part of the contract. The contract is that it will return a string that can be used to fetch the repo via HTTPS. But what that string looks like, specifically, can change. * Dropping support for a hosted git provider would constitute a breaking change. ## Usage ### var info = hostedGitInfo.fromUrl(gitSpecifier[, options]) * *gitSpecifer* is a URL of a git repository or a SCP-style specifier of one. * *options* is an optional object. It can have the following properties: * *noCommittish* — If true then committishes won't be included in generated URLs. * *noGitPlus* — If true then `git+` won't be prefixed on URLs. ## Methods All of the methods take the same options as the `fromUrl` factory. Options provided to a method override those provided to the constructor. * info.file(path, opts) Given the path of a file relative to the repository, returns a URL for directly fetching it from the githost. If no committish was set then `master` will be used as the default. 
For example `hostedGitInfo.fromUrl("git@github.com:npm/hosted-git-info.git#v1.0.0").file("package.json")` would return `https://raw.githubusercontent.com/npm/hosted-git-info/v1.0.0/package.json` * info.shortcut(opts) eg, `github:npm/hosted-git-info` * info.browse(path, fragment, opts) eg, `https://github.com/npm/hosted-git-info/tree/v1.2.0`, `https://github.com/npm/hosted-git-info/tree/v1.2.0/package.json`, `https://github.com/npm/hosted-git-info/tree/v1.2.0/REAMDE.md#supported-hosts` * info.bugs(opts) eg, `https://github.com/npm/hosted-git-info/issues` * info.docs(opts) eg, `https://github.com/npm/hosted-git-info/tree/v1.2.0#readme` * info.https(opts) eg, `git+https://github.com/npm/hosted-git-info.git` * info.sshurl(opts) eg, `git+ssh://git@github.com/npm/hosted-git-info.git` * info.ssh(opts) eg, `git@github.com:npm/hosted-git-info.git` * info.path(opts) eg, `npm/hosted-git-info` * info.tarball(opts) eg, `https://github.com/npm/hosted-git-info/archive/v1.2.0.tar.gz` * info.getDefaultRepresentation() Returns the default output type. The default output type is based on the string you passed in to be parsed * info.toString(opts) Uses the getDefaultRepresentation to call one of the other methods to get a URL for this resource. As such `hostedGitInfo.fromUrl(url).toString()` will give you a normalized version of the URL that still uses the same protocol. Shortcuts will still be returned as shortcuts, but the special case github form of `org/project` will be normalized to `github:org/project`. SSH connect strings will be normalized into `git+ssh` URLs. ## Supported hosts Currently this supports Github, Bitbucket and Gitlab. Pull requests for additional hosts welcome. # ci-info Get details about the current Continuous Integration environment. Please [open an issue](https://github.com/watson/ci-info/issues/new?template=ci-server-not-detected.md) if your CI server isn't properly detected :) [![npm](https://img.shields.io/npm/v/ci-info.svg)](https://www.npmjs.com/package/ci-info) [![Build status](https://travis-ci.org/watson/ci-info.svg?branch=master)](https://travis-ci.org/watson/ci-info) [![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg?style=flat)](https://github.com/feross/standard) ## Installation ```bash npm install ci-info --save ``` ## Usage ```js var ci = require('ci-info') if (ci.isCI) { console.log('The name of the CI server is:', ci.name) } else { console.log('This program is not running on a CI server') } ``` ## Supported CI tools Officially supported CI servers: | Name | Constant | |------|----------| | [AWS CodeBuild](https://aws.amazon.com/codebuild/) | `ci.CODEBUILD` | | [AppVeyor](http://www.appveyor.com) | `ci.APPVEYOR` | | [Bamboo](https://www.atlassian.com/software/bamboo) by Atlassian | `ci.BAMBOO` | | [Bitbucket Pipelines](https://bitbucket.org/product/features/pipelines) | `ci.BITBUCKET` | | [Bitrise](https://www.bitrise.io/) | `ci.BITRISE` | | [Buddy](https://buddy.works/) | `ci.BUDDY` | | [Buildkite](https://buildkite.com) | `ci.BUILDKITE` | | [CircleCI](http://circleci.com) | `ci.CIRCLE` | | [Cirrus CI](https://cirrus-ci.org) | `ci.CIRRUS` | | [Codeship](https://codeship.com) | `ci.CODESHIP` | | [Drone](https://drone.io) | `ci.DRONE` | | [dsari](https://github.com/rfinnie/dsari) | `ci.DSARI` | | [GitLab CI](https://about.gitlab.com/gitlab-ci/) | `ci.GITLAB` | | [GoCD](https://www.go.cd/) | `ci.GOCD` | | [Hudson](http://hudson-ci.org) | `ci.HUDSON` | | [Jenkins CI](https://jenkins-ci.org) | `ci.JENKINS` | | [Magnum 
CI](https://magnum-ci.com) | `ci.MAGNUM` | | [Sail CI](https://sail.ci/) | `ci.SAIL` | | [Semaphore](https://semaphoreci.com) | `ci.SEMAPHORE` | | [Shippable](https://www.shippable.com/) | `ci.SHIPPABLE` | | [Solano CI](https://www.solanolabs.com/) | `ci.SOLANO` | | [Strider CD](https://strider-cd.github.io/) | `ci.STRIDER` | | [TaskCluster](http://docs.taskcluster.net) | `ci.TASKCLUSTER` | | [Team Foundation Server](https://www.visualstudio.com/en-us/products/tfs-overview-vs.aspx) by Microsoft | `ci.TFS` | | [TeamCity](https://www.jetbrains.com/teamcity/) by JetBrains | `ci.TEAMCITY` | | [Travis CI](http://travis-ci.org) | `ci.TRAVIS` | ## API ### `ci.name` A string. Will contain the name of the CI server the code is running on. If no CI server is detected, it will be `null`. Don't depend on this string keeping the same value for a specific vendor. If you find yourself writing `ci.name === 'Travis CI'`, you most likely want to use `ci.TRAVIS` instead. ### `ci.isCI` A boolean. Will be `true` if the code is running on a CI server. Otherwise `false`. Some CI servers not listed here might still trigger the `ci.isCI` boolean to be set to `true` if they use certain vendor-neutral environment variables. In those cases `ci.name` will be `null` and no vendor-specific boolean will be set to `true`. ### `ci.isPR` A boolean, set when PR detection is supported for the current CI server. Will be `true` if a PR is being tested. Otherwise `false`. If PR detection is not supported for the current CI server, the value will be `null`. ### `ci.<VENDOR-CONSTANT>` A vendor-specific boolean constant is exposed for each supported CI vendor. A constant will be `true` if the code is determined to run on the given CI server. Otherwise `false`. Examples of vendor constants are `ci.TRAVIS` or `ci.APPVEYOR`. For a complete list, see the support table above. Deprecated vendor constants that will be removed in the next major release: - `ci.TDDIUM` (Solano CI) This has been renamed to `ci.SOLANO` ## License [MIT](LICENSE) # xtend [![browser support][3]][4] [![locked](http://badges.github.io/stability-badges/dist/locked.svg)](http://github.com/badges/stability-badges) Extend like a boss xtend is a basic utility library which allows you to extend an object by appending all of the properties from each object in a list. When there are identical properties, the right-most property takes precedence. ## Examples ```js var extend = require("xtend") // extend returns a new object. Does not mutate arguments var combination = extend({ a: "a", b: 'c' }, { b: "b" }) // { a: "a", b: "b" } ``` ## Stability status: Locked ## MIT Licensed [3]: http://ci.testling.com/Raynos/xtend.png [4]: http://ci.testling.com/Raynos/xtend # asynckit [![NPM Module](https://img.shields.io/npm/v/asynckit.svg?style=flat)](https://www.npmjs.com/package/asynckit) Minimal async jobs utility library, with streams support.
[![PhantomJS Build](https://img.shields.io/travis/alexindigo/asynckit/v0.4.0.svg?label=browser&style=flat)](https://travis-ci.org/alexindigo/asynckit) [![Linux Build](https://img.shields.io/travis/alexindigo/asynckit/v0.4.0.svg?label=linux:0.12-6.x&style=flat)](https://travis-ci.org/alexindigo/asynckit) [![Windows Build](https://img.shields.io/appveyor/ci/alexindigo/asynckit/v0.4.0.svg?label=windows:0.12-6.x&style=flat)](https://ci.appveyor.com/project/alexindigo/asynckit) [![Coverage Status](https://img.shields.io/coveralls/alexindigo/asynckit/v0.4.0.svg?label=code+coverage&style=flat)](https://coveralls.io/github/alexindigo/asynckit?branch=master) [![Dependency Status](https://img.shields.io/david/alexindigo/asynckit/v0.4.0.svg?style=flat)](https://david-dm.org/alexindigo/asynckit) [![bitHound Overall Score](https://www.bithound.io/github/alexindigo/asynckit/badges/score.svg)](https://www.bithound.io/github/alexindigo/asynckit) <!-- [![Readme](https://img.shields.io/badge/readme-tested-brightgreen.svg?style=flat)](https://www.npmjs.com/package/reamde) --> AsyncKit provides harness for `parallel` and `serial` iterators over list of items represented by arrays or objects. Optionally it accepts abort function (should be synchronously return by iterator for each item), and terminates left over jobs upon an error event. For specific iteration order built-in (`ascending` and `descending`) and custom sort helpers also supported, via `asynckit.serialOrdered` method. It ensures async operations to keep behavior more stable and prevent `Maximum call stack size exceeded` errors, from sync iterators. | compression | size | | :----------------- | -------: | | asynckit.js | 12.34 kB | | asynckit.min.js | 4.11 kB | | asynckit.min.js.gz | 1.47 kB | ## Install ```sh $ npm install --save asynckit ``` ## Examples ### Parallel Jobs Runs iterator over provided array in parallel. Stores output in the `result` array, on the matching positions. In unlikely event of an error from one of the jobs, will terminate rest of the active jobs (if abort function is provided) and return error along with salvaged data to the main callback function. #### Input Array ```javascript var parallel = require('asynckit').parallel , assert = require('assert') ; var source = [ 1, 1, 4, 16, 64, 32, 8, 2 ] , expectedResult = [ 2, 2, 8, 32, 128, 64, 16, 4 ] , expectedTarget = [ 1, 1, 2, 4, 8, 16, 32, 64 ] , target = [] ; parallel(source, asyncJob, function(err, result) { assert.deepEqual(result, expectedResult); assert.deepEqual(target, expectedTarget); }); // async job accepts one element from the array // and a callback function function asyncJob(item, cb) { // different delays (in ms) per item var delay = item * 25; // pretend different jobs take different time to finish // and not in consequential order var timeoutId = setTimeout(function() { target.push(item); cb(null, item * 2); }, delay); // allow to cancel "leftover" jobs upon error // return function, invoking of which will abort this job return clearTimeout.bind(null, timeoutId); } ``` More examples could be found in [test/test-parallel-array.js](test/test-parallel-array.js). #### Input Object Also it supports named jobs, listed via object. 
```javascript var parallel = require('asynckit/parallel') , assert = require('assert') ; var source = { first: 1, one: 1, four: 4, sixteen: 16, sixtyFour: 64, thirtyTwo: 32, eight: 8, two: 2 } , expectedResult = { first: 2, one: 2, four: 8, sixteen: 32, sixtyFour: 128, thirtyTwo: 64, eight: 16, two: 4 } , expectedTarget = [ 1, 1, 2, 4, 8, 16, 32, 64 ] , expectedKeys = [ 'first', 'one', 'two', 'four', 'eight', 'sixteen', 'thirtyTwo', 'sixtyFour' ] , target = [] , keys = [] ; parallel(source, asyncJob, function(err, result) { assert.deepEqual(result, expectedResult); assert.deepEqual(target, expectedTarget); assert.deepEqual(keys, expectedKeys); }); // supports full value, key, callback (shortcut) interface function asyncJob(item, key, cb) { // different delays (in ms) per item var delay = item * 25; // pretend different jobs take different time to finish // and not in consequential order var timeoutId = setTimeout(function() { keys.push(key); target.push(item); cb(null, item * 2); }, delay); // allow to cancel "leftover" jobs upon error // return function, invoking of which will abort this job return clearTimeout.bind(null, timeoutId); } ``` More examples could be found in [test/test-parallel-object.js](test/test-parallel-object.js). ### Serial Jobs Runs iterator over provided array sequentially. Stores output in the `result` array, on the matching positions. In unlikely event of an error from one of the jobs, will not proceed to the rest of the items in the list and return error along with salvaged data to the main callback function. #### Input Array ```javascript var serial = require('asynckit/serial') , assert = require('assert') ; var source = [ 1, 1, 4, 16, 64, 32, 8, 2 ] , expectedResult = [ 2, 2, 8, 32, 128, 64, 16, 4 ] , expectedTarget = [ 0, 1, 2, 3, 4, 5, 6, 7 ] , target = [] ; serial(source, asyncJob, function(err, result) { assert.deepEqual(result, expectedResult); assert.deepEqual(target, expectedTarget); }); // extended interface (item, key, callback) // also supported for arrays function asyncJob(item, key, cb) { target.push(key); // it will be automatically made async // even it iterator "returns" in the same event loop cb(null, item * 2); } ``` More examples could be found in [test/test-serial-array.js](test/test-serial-array.js). #### Input Object Also it supports named jobs, listed via object. ```javascript var serial = require('asynckit').serial , assert = require('assert') ; var source = [ 1, 1, 4, 16, 64, 32, 8, 2 ] , expectedResult = [ 2, 2, 8, 32, 128, 64, 16, 4 ] , expectedTarget = [ 0, 1, 2, 3, 4, 5, 6, 7 ] , target = [] ; var source = { first: 1, one: 1, four: 4, sixteen: 16, sixtyFour: 64, thirtyTwo: 32, eight: 8, two: 2 } , expectedResult = { first: 2, one: 2, four: 8, sixteen: 32, sixtyFour: 128, thirtyTwo: 64, eight: 16, two: 4 } , expectedTarget = [ 1, 1, 4, 16, 64, 32, 8, 2 ] , target = [] ; serial(source, asyncJob, function(err, result) { assert.deepEqual(result, expectedResult); assert.deepEqual(target, expectedTarget); }); // shortcut interface (item, callback) // works for object as well as for the arrays function asyncJob(item, cb) { target.push(item); // it will be automatically made async // even it iterator "returns" in the same event loop cb(null, item * 2); } ``` More examples could be found in [test/test-serial-object.js](test/test-serial-object.js). _Note: Since _object_ is an _unordered_ collection of properties, it may produce unexpected results with sequential iterations. 
Whenever order of the jobs' execution is important, please use the `serialOrdered` method._ ### Ordered Serial Iterations TBD. See, for example, the [compare-property](compare-property) package. ### Streaming interface TBD ## Want to Know More? More examples can be found in [test folder](test/). Or open an [issue](https://github.com/alexindigo/asynckit/issues) with questions and/or suggestions. ## License AsyncKit is licensed under the MIT license. unique-slug =========== Generate a unique character string suitable for use in files and URLs. ``` var uniqueSlug = require('unique-slug') var randomSlug = uniqueSlug() var fileSlug = uniqueSlug('/etc/passwd') ``` ### uniqueSlug(*str*) → String (8 chars) If *str* is passed in then the return value will be its murmur hash in hex. If *str* is not passed in, it will be 4 bytes converted into 8 hex characters, generated by `crypto.pseudoRandomBytes`. # function-bind <!-- [![build status][travis-svg]][travis-url] [![NPM version][npm-badge-svg]][npm-url] [![Coverage Status][5]][6] [![gemnasium Dependency Status][7]][8] [![Dependency status][deps-svg]][deps-url] [![Dev Dependency status][dev-deps-svg]][dev-deps-url] --> <!-- [![browser support][11]][12] --> Implementation of function.prototype.bind ## Example I mainly do this for unit tests I run on phantomjs. PhantomJS does not have Function.prototype.bind :( ```js Function.prototype.bind = require("function-bind") ``` ## Installation `npm install function-bind` ## Contributors - Raynos ## MIT Licensed [travis-svg]: https://travis-ci.org/Raynos/function-bind.svg [travis-url]: https://travis-ci.org/Raynos/function-bind [npm-badge-svg]: https://badge.fury.io/js/function-bind.svg [npm-url]: https://npmjs.org/package/function-bind [5]: https://coveralls.io/repos/Raynos/function-bind/badge.png [6]: https://coveralls.io/r/Raynos/function-bind [7]: https://gemnasium.com/Raynos/function-bind.png [8]: https://gemnasium.com/Raynos/function-bind [deps-svg]: https://david-dm.org/Raynos/function-bind.svg [deps-url]: https://david-dm.org/Raynos/function-bind [dev-deps-svg]: https://david-dm.org/Raynos/function-bind/dev-status.svg [dev-deps-url]: https://david-dm.org/Raynos/function-bind#info=devDependencies [11]: https://ci.testling.com/Raynos/function-bind.png [12]: https://ci.testling.com/Raynos/function-bind # ansi-align > align-text with ANSI support for CLIs [![Build Status](https://travis-ci.org/nexdrew/ansi-align.svg?branch=master)](https://travis-ci.org/nexdrew/ansi-align) [![Coverage Status](https://coveralls.io/repos/github/nexdrew/ansi-align/badge.svg?branch=master)](https://coveralls.io/github/nexdrew/ansi-align?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) Easily center- or right-align a block of text, carefully ignoring ANSI escape codes. E.g. turn this: <img width="281" alt="ansi text block no alignment :(" src="https://cloud.githubusercontent.com/assets/1929625/14937509/7c3076dc-0ed7-11e6-8c16-4f6a4ccc8346.png"> Into this: <img width="278" alt="ansi text block center aligned!" src="https://cloud.githubusercontent.com/assets/1929625/14937510/7c3ca0b0-0ed7-11e6-8f0a-541ca39b6e0a.png"> ## Install ```sh npm install --save ansi-align ``` ```js var ansiAlign = require('ansi-align') ``` ## API ### `ansiAlign(text, [opts])` Align the given text per the line with the greatest [`string-width`](https://github.com/sindresorhus/string-width), returning a new string (or array).
#### Arguments - `text`: required, string or array The text to align. If a string is given, it will be split using either the `opts.split` value or `'\n'` by default. If an array is given, a different array of modified strings will be returned. - `opts`: optional, object Options to change behavior, see below. #### Options - `opts.align`: string, default `'center'` The alignment mode. Use `'center'` for center-alignment, `'right'` for right-alignment, or `'left'` for left-alignment. Note that the given `text` is assumed to be left-aligned already, so specifying `align: 'left'` just returns the `text` as is (no-op). - `opts.split`: string or RegExp, default `'\n'` The separator to use when splitting the text. Only used if text is given as a string. - `opts.pad`: string, default `' '` The value used to left-pad (prepend to) lines of lesser width. Will be repeated as necessary to adjust alignment to the line with the greatest width. ### `ansiAlign.center(text)` Alias for `ansiAlign(text, { align: 'center' })`. ### `ansiAlign.right(text)` Alias for `ansiAlign(text, { align: 'right' })`. ### `ansiAlign.left(text)` Alias for `ansiAlign(text, { align: 'left' })`, which is a no-op. ## Similar Packages - [`center-align`](https://github.com/jonschlinkert/center-align): Very close to this package, except it doesn't support ANSI codes. - [`left-pad`](https://github.com/camwest/left-pad): Great for left-padding but does not support center alignment or ANSI codes. - Pretty much anything by the [chalk](https://github.com/chalk) team ## License ISC © Contributors # byline — buffered stream for reading lines ![npm package](https://nodei.co/npm/byline.png?downloads=true&downloadRank=true) `byline` is a simple module providing a `LineStream`. - node v0.10 `streams2` (transform stream) - supports `pipe` - supports both UNIX and Windows line endings - supports [Unicode UTS #18 line boundaries](http://www.unicode.org/reports/tr18/#Line_Boundaries) - can wrap any readable stream - can be used as a readable-writable "through-stream" (transform stream) - super-simple: `stream = byline(stream);` ## Install npm install byline or from source: git clone git://github.com/jahewson/node-byline.git cd node-byline npm link # Convenience API The `byline` module can be used as a function to quickly wrap a readable stream: ```javascript var fs = require('fs'), byline = require('byline'); var stream = byline(fs.createReadStream('sample.txt', { encoding: 'utf8' })); ``` The `data` event then emits lines: ```javascript stream.on('data', function(line) { console.log(line); }); ``` # Standard API You just need to add one line to wrap your readable `Stream` with a `LineStream`. ```javascript var fs = require('fs'), byline = require('byline'); var stream = fs.createReadStream('sample.txt'); stream = byline.createStream(stream); stream.on('data', function(line) { console.log(line); }); ``` # Piping `byline` supports `pipe` (though it strips the line endings, of course). 
```javascript var stream = fs.createReadStream('sample.txt'); stream = byline.createStream(stream); stream.pipe(fs.createWriteStream('nolines.txt')); ``` Alternatively, you can create a readable/writable "through-stream" which doesn't wrap any specific stream: ```javascript var stream = fs.createReadStream('sample.txt'); stream = byline.createStream(stream); stream.pipe(fs.createWriteStream('nolines.txt')); var input = fs.createReadStream('LICENSE'); var lineStream = byline.createStream(); input.pipe(lineStream); var output = fs.createWriteStream('test.txt'); lineStream.pipe(output); ``` # Streams2 API Node v0.10 added a new streams2 API. This allows the stream to be used in non-flowing mode and is preferred over the legacy pause() and resume() methods. ```javascript var stream = fs.createReadStream('sample.txt'); stream = byline.createStream(stream); stream.on('readable', function() { var line; while (null !== (line = stream.read())) { console.log(line); } }); ``` # Transform Stream The `byline` transform stream can be directly manipulated like so: ```javascript var LineStream = require('byline').LineStream; var input = fs.createReadStream('sample.txt'); var output = fs.createWriteStream('nolines.txt'); var lineStream = new LineStream(); input.pipe(lineStream); lineStream.pipe(output); ``` # Empty Lines By default byline skips empty lines, if you want to keep them, pass the `keepEmptyLines` option in the call to `byline.createStream(stream, options)` or `byline(stream, options)`. # Tests npm test # v0.8 If you want to use `node-byline` with node v0.8 then you can use the 2.1.x series. Simply use the following in your `package.json`: ```javascript "dependencies": { "byline": ">=2.1.0 <3.0.0" }, ``` # Simple Unlike other modules (of which there are many), `byline` contains no: - monkeypatching - dependencies - non-standard 'line' events which break `pipe` - limitations to only file streams - CoffeeScript - unnecessary code # Genfun [![Travis](https://img.shields.io/travis/zkat/genfun.svg)](https://travis-ci.org/zkat/genfun) [![npm](https://img.shields.io/npm/v/genfun.svg)](https://npm.im/genfun) [![npm](https://img.shields.io/npm/l/genfun.svg)](https://npm.im/genfun) [`genfun`](https://github.com/zkat/genfun) is a Javascript library that lets you define generic functions: regular-seeming functions that can be invoked just like any other function, but that automatically dispatch methods based on the combination of arguments passed to it when it's called, also known as multiple dispatch. It was inspired by [Slate](http://slatelanguage.org/), [CLOS](http://en.wikipedia.org/wiki/CLOS) and [Sheeple](http://github.com/zkat/sheeple). ## Install `$ npm install genfun` ## Table of Contents * [Example](#example) * [API](#api) * [`Genfun()`](#genfun) * [`gf.add()`](#addMethod) * [`Genfun.callNextMethod()`](#callNextMethod) * [`Genfun.noApplicableMethod()`](#noApplicableMethod) * [Performance](#performance) ### Example Various examples are available to look at in the examples/ folder included in this project. Most examples are also runnable by just invoking them with node. 
```javascript import Genfun from "genfun" class Person {} class Dog {} const frobnicate = Genfun() frobnicate.add([Person], (person) => { console.log('Got a person!') }) frobnicate.add([Dog], (dog) => { console.log('Got a dog!') }) frobnicate.add([String, Person, Dog], (greeting, person, dog) => { console.log(person, ' greets ', dog, ', \'' + greeting + '\'') }) const person = new Person() const dog = new Dog() frobnicate(person) // Got a person! frobnicate(dog) // Got a dog! frobnicate('Hi, dog!', person, dog); // {} greets {}, 'Hi, dog!' ``` ### API The basic API for `Genfun` is fairly simple: you create a new `genfun` by calling `Genfun()` and add methods to it. Then you call the `genfun` object like a regular function, and it takes care of dispatching the appropriate methods! #### `Genfun()` Takes no arguments. Simply creates a new `genfun`. A `genfun` is a regular function object with overridden function call/dispatch behavior. When called, it will look at its arguments and determine if a matching method has been defined that applies to **all** arguments passed in, considered together. New methods may be added to the `genfun` object with [`gf.add()`](#addMethod). If no method is found, or none has been defined, it will invoke [`Genfun.noApplicableMethod`](#noApplicableMethod) with the appropriate arguments. Genfuns preserve the value of `this` if invoked using `.call` or `.apply`. ##### Example ```javascript var gf = Genfun() //... add some methods .. // These calls are all identical. gf(1, 2, 3) gf.call(null, 1, 2, 3) gf.apply(null, [1, 2, 3]) ``` #### <a name="addMethod"></a> `gf.add(<selector>, <body>)` Adds a new method to `gf` and returns `gf` to allow chaining multiple `add`s. `<selector>` must be an array of objects that will receive new `Role`s (dispatch positions) for the method. If an object in the selector is a function, its `.prototype` field will receive the new `Role`. The array must not contain any frozen objects. When a `genfun` is called (like a function), it will look at its set of added methods and, based on the `Role`s assigned and the corresponding prototype chains, will determine which method, if any, will be invoked. On invocation, a method's `<body>` argument will be called with the arguments passed to the `genfun`, including its `this` and `arguments` values. Within the `<body>`, [`Genfun.callNextMethod`](#callNextMethod) may be called. ##### Example ```javascript var numStr = Genfun() numStr.add([String, Number], function (str, num) { console.log('got a str:', str, 'and a num: ', num) }) numStr.add([Number, String], function (num, str) { console.log('got a num:', num, 'and a str:', str) }) ``` #### <a name="callNextMethod"></a> `Genfun.callNextMethod([...<arguments>])` **NOTE**: This function can only be called synchronously. To call it asynchronously (for example, in a `Promise` or in a callback), use [`getContext`](#getContext). Calls the "next" applicable method in the method chain. Can only be called within the body of a method. If no arguments are given, `callNextMethod` will pass the current method's original arguments to the next method. If arguments are passed to `callNextMethod`, it will invoke the next applicable method (based on the **original** method list calculation), with **the given arguments**, even if they would otherwise not have triggered that method. Returns whatever value the next method returns. There **must** be a next method available when invoked.
This function **will not** call `noApplicableMethod` when it runs out of methods to call. It will instead throw an error. ##### Example ```javascript class Foo {} class Bar extends Foo {} var cnm = Genfun() cnm.add([Foo], function (foo) { console.log('calling the method on Foo with', foo) return foo }) cnm.add([Bar], function (bar) { console.log('calling the method on Bar with', bar) return Genfun.callNextMethod('some other value!') }) cnm(new Bar()) // calling the method on Bar with {} // calling the method on Foo with "some other value!" // => 'some other value!' ``` #### <a name="getContext"></a> `Genfun.getContext()` The `context` returned by this function will have a `callNextMethod` method which can be used to invoke the correct next method even during asynchronous calls (for example, when used in a callback or a `Promise`). This function must be called synchronously within the body of the method before any asynchronous calls, and will error if invoked outside the context of a method call. ##### Example ```javascript someGenfun.add([MyThing], function (thing) { const ctx = Genfun.getContext() return somePromisedCall(thing).then(res => ctx.callNextMethod(res)) }) ``` #### <a name="noApplicableMethod"></a> `Genfun.noApplicableMethod(<gf>, <this>, <args>)` `Genfun.noApplicableMethod` is a `genfun` itself, which is called whenever **any `genfun`** fails to find a matching method for its given arguments. It will be called with the `genfun` as its first argument, then the `this` value, and then the arguments it was called with. By default, this will simply throw a NoApplicableMethod error. Users may override this behavior for particular `genfun` and `this` combinations, although `args` will always be an `Array`. The value returned from the dispatched `noApplicableMethod` method will be returned by `genfun` as if it had been its original method. Comparable to [Ruby's `method_missing`](http://ruby-doc.org/core-2.1.0/BasicObject.html#method-i-method_missing). ### Performance `Genfun` pulls a few caching tricks to make sure dispatch, especially for common cases, is as fast as possible. How fast? Well, not much slower than native methods: ``` Regular function: 30.402ms Native method: 28.109ms Singly-dispatched genfun: 64.467ms Double-dispatched genfun: 70.052ms Double-dispatched genfun with string primitive: 76.742ms ``` # prr [![Build Status](https://secure.travis-ci.org/rvagg/prr.png)](http://travis-ci.org/rvagg/prr) A sensible alternative to `Object.defineProperty()`. Available in npm and Ender as **prr**. ## Usage Set the property `'foo'` (`obj.foo`) to have the value `'bar'` with default options (`'enumerable'`, `'configurable'` and `'writable'` are all `false`): ```js prr(obj, 'foo', 'bar') ``` Adjust the default options: ```js prr(obj, 'foo', 'bar', { enumerable: true, writable: true }) ``` Do the same operation for multiple properties: ```js prr(obj, { one: 'one', two: 'two' }) // or with options: prr(obj, { one: 'one', two: 'two' }, { enumerable: true, writable: true }) ``` ### Simplify! But obviously, having to write out the full options object makes it nearly as bad as the original `Object.defineProperty()` so we can simplify. As an alternative method we can use an options string where each character represents an option: `'e'=='enumerable'`, `'c'=='configurable'` and `'w'=='writable'`: ```js prr(obj, 'foo', 'bar', 'ew') // enumerable and writable but not configurable // multiple properties: prr(obj, { one: 'one', two: 'two' }, 'ewc') // configurable too ``` ## Where can I use it?
Anywhere! For pre-ES5 environments *prr* will simply fall-back to an `object[property] = value` so you can get close to what you want. *prr* is Ender-compatible so you can include it in your Ender build and `$.prr(...)` or `var prr = require('prr'); prr(...)`. ## Licence prr is Copyright (c) 2013 Rod Vagg [@rvagg](https://twitter.com/rvagg) and licensed under the MIT licence. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE.md file for more details. # osenv Look up environment settings specific to different operating systems. ## Usage ```javascript var osenv = require('osenv') var path = osenv.path() var user = osenv.user() // etc. // Some things are not reliably in the env, and have a fallback command: var h = osenv.hostname(function (er, hostname) { h = hostname }) // This will still cause it to be memoized, so calling osenv.hostname() // is now an immediate operation. // You can always send a cb, which will get called in the nextTick // if it's been memoized, or wait for the fallback data if it wasn't // found in the environment. osenv.hostname(function (er, hostname) { if (er) console.error('error looking up hostname') else console.log('this machine calls itself %s', hostname) }) ``` ## osenv.hostname() The machine name. Calls `hostname` if not found. ## osenv.user() The currently logged-in user. Calls `whoami` if not found. ## osenv.prompt() Either PS1 on unix, or PROMPT on Windows. ## osenv.tmpdir() The place where temporary files should be created. ## osenv.home() No place like it. ## osenv.path() An array of the places that the operating system will search for executables. ## osenv.editor() Return the executable name of the editor program. This uses the EDITOR and VISUAL environment variables, and falls back to `vi` on Unix, or `notepad.exe` on Windows. ## osenv.shell() The SHELL on Unix, which Windows calls the ComSpec. Defaults to 'bash' or 'cmd'. # isarray `Array#isArray` for older browsers. [![build status](https://secure.travis-ci.org/juliangruber/isarray.svg)](http://travis-ci.org/juliangruber/isarray) [![downloads](https://img.shields.io/npm/dm/isarray.svg)](https://www.npmjs.org/package/isarray) [![browser support](https://ci.testling.com/juliangruber/isarray.png) ](https://ci.testling.com/juliangruber/isarray) ## Usage ```js var isArray = require('isarray'); console.log(isArray([])); // => true console.log(isArray({})); // => false ``` ## Installation With [npm](http://npmjs.org) do ```bash $ npm install isarray ``` Then bundle for the browser with [browserify](https://github.com/substack/browserify). With [component](http://component.io) do ```bash $ component install juliangruber/isarray ``` ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. snapCash Smart Contract ================== A [smart contract] written in [Rust] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install Rust with [correct target] Exploring The Code ================== 1. The main smart contract code lives in `src/lib.rs`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [Rust]: https://www.rust-lang.org/ [create-near-app]: https://github.com/near/create-near-app [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html # string_decoder ***Node-core v8.9.4 string_decoder for userland*** [![NPM](https://nodei.co/npm/string_decoder.png?downloads=true&downloadRank=true)](https://nodei.co/npm/string_decoder/) [![NPM](https://nodei.co/npm-dl/string_decoder.png?&months=6&height=3)](https://nodei.co/npm/string_decoder/) ```bash npm install --save string_decoder ``` ***Node-core string_decoder for userland*** This package is a mirror of the string_decoder implementation in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.9.4/docs/api/). As of version 1.0.0 **string_decoder** uses semantic versioning. ## Previous versions Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10. ## Update The *build/* directory contains a build script that will scrape the source from the [nodejs/node](https://github.com/nodejs/node) repo given a specific Node version. ## Streams Working Group `string_decoder` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. See [readable-stream](https://github.com/nodejs/readable-stream) for more details. # minipass A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough) [It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers. Supports pipe()ing (including multi-pipe() and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea. 
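As a quick illustration of that buffer-until-consumed behavior, here is a minimal sketch (assuming `minipass` is installed; the chunk value is made up for the example):

```js
const Minipass = require('minipass')
const mp = new Minipass({ encoding: 'utf8' })

// No consumer yet, so this chunk is buffered rather than lost.
mp.write('first chunk')

// Attaching a 'data' handler (or calling pipe()) starts the flow,
// so the buffered chunk is delivered here.
mp.on('data', chunk => console.log('got:', chunk)) // got: first chunk
```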
There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory. There is also no `unpipe()` method. Once you start piping, there is no stopping it! If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called. `objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set. This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. Once you're done transforming the data however you want, call `super.write()` with the transform output. For some examples of streams that extend Minipass in various ways, check out: - [minizlib](http://npm.im/minizlib) - [fs-minipass](http://npm.im/fs-minipass) - [tar](http://npm.im/tar) - [minipass-collect](http://npm.im/minipass-collect) - [minipass-flush](http://npm.im/minipass-flush) - [minipass-pipeline](http://npm.im/minipass-pipeline) - [tap](http://npm.im/tap) - [tap-parser](http://npm.im/tap) - [treport](http://npm.im/tap) ## Differences from Node.js Streams There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams. Please read these caveats if you are familiar with node-core streams and intend to use Minipass streams in your programs. ### Timing Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer. Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them. This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately. However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time. This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms. ### No High/Low Water Marks Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value. Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns). If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it. ### Hazards of Buffering (or: Why Minipass Is So Fast) Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller.
This minimizes buffering. Consider this case: ```js const {PassThrough} = require('stream') const p1 = new PassThrough({ highWaterMark: 1024 }) const p2 = new PassThrough({ highWaterMark: 1024 }) const p3 = new PassThrough({ highWaterMark: 1024 }) const p4 = new PassThrough({ highWaterMark: 1024 }) p1.pipe(p2).pipe(p3).pipe(p4) p4.on('data', () => console.log('made it through')) // this returns false and buffers, then writes to p2 on next tick (1) // p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2) // p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3) // p4 returns false and buffers, pausing p3, then emits 'data' and 'drain' // on next tick (4) // p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and // 'drain' on next tick (5) // p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6) // p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next // tick (7) p1.write(Buffer.alloc(2048)) // returns false ``` Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through! Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket. Consider the Minipass case: ```js const m1 = new Minipass() const m2 = new Minipass() const m3 = new Minipass() const m4 = new Minipass() m1.pipe(m2).pipe(m3).pipe(m4) m4.on('data', () => console.log('made it through')) // m1 is flowing, so it writes the data to m2 immediately // m2 is flowing, so it writes the data to m3 immediately // m3 is flowing, so it writes the data to m4 immediately // m4 is flowing, so it fires the 'data' event immediately, returns true // m4's write returned true, so m3 is still flowing, returns true // m3's write returned true, so m2 is still flowing, returns true // m2's write returned true, so m1 is still flowing, returns true // No event deferrals or buffering along the way! m1.write(Buffer.alloc(2048)) // returns true ``` It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing. As always, the faster implementation is the one that does less stuff and waits less time to do it. ### Immediately emit `end` for empty streams (when not paused) If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately. If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event. ### Emit `end` When Asked One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted. Ie, if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. 
(You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.) To avoid calling handlers multiple times when they would not expect multiple ends to occur, all listeners are removed from the `'end'` event whenever it is emitted. ### Impact of "immediate flow" on Tee-streams A "tee stream" is a stream piping to multiple destinations: ```js const tee = new Minipass() tee.pipe(dest1) tee.pipe(dest2) tee.write('foo') // goes to both destinations ``` Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer. ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone src.pipe(dest2) // gets nothing! ``` The solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead. ```js // Safe example: tee to both places const src = new Minipass() src.write('foo') const tee = new Minipass() tee.pipe(dest1) tee.pipe(dest2) src.pipe(tee) // tee gets 'foo', pipes to both locations ``` The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second: ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.on('data', handler1) // receives 'foo' right away src.on('data', handler2) // nothing to see here! ``` A dedicated tee-stream can be used in this case as well: ```js // Safe example: tee to both data handlers const src = new Minipass() src.write('foo') const tee = new Minipass() tee.on('data', handler1) tee.on('data', handler2) src.pipe(tee) ``` ## USAGE It's a stream! Use it like a stream and it'll most likely do what you want. ```js const Minipass = require('minipass') const mp = new Minipass(options) // optional: { encoding, objectMode } mp.write('foo') mp.pipe(someOtherStream) mp.end('bar') ``` ### OPTIONS * `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`. * `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value. ### API Implements the user-facing portions of Node.js's `Readable` and `Writable` streams. ### Methods * `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or `true` if it's still in "flowing" mode. * `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed. * `setEncoding(encoding)` - Set the encoding for data coming out of the stream. This can only be done once. * `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed. * `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted. * `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations.
* `on(ev, fn)`, `emit(ev, data)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".) * `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`. * `collect()` - Return a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data. * `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data. * `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying. * `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered. ### Properties * `bufferLength` Read-only. Total number of bytes buffered, or in the case of objectMode, the total number of objects. * `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.) * `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted. * `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediately re-emit it if it has already been emitted. * `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` is called. * `readable` Whether the stream is readable. Default `true`. * `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.) * `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.) * `destroyed` A getter that indicates whether the stream was destroyed. * `paused` True if the stream has been explicitly paused, otherwise false. * `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`. ### Events * `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream. * `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted. * `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`. * `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`.
* `close` An indication that an underlying resource has been released. Minipass does not emit this event itself, but will defer emitting it until after `end` has been emitted, since it throws off some stream libraries otherwise. * `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream. * `readable` Emitted when data is buffered and ready to be read by a consumer. * `resume` Emitted when the stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.) ### Static Methods * `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and `false` otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.) ## EXAMPLES Here are some examples of things you can do with Minipass streams. ### simple "are you done yet" promise ```js mp.promise().then(() => { // stream is finished }, er => { // stream emitted an error }) ``` ### collecting ```js mp.collect().then(all => { // all is an array of all the data emitted // encoding is supported in this case, so // the result will be a collection of strings if // an encoding is specified, or buffers/objects if not. // // In an async function, you may do // const data = await stream.collect() }) ``` ### collecting into a single blob This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way: ```js mp.concat().then(onebigchunk => { // onebigchunk is a string if the stream // had an encoding set, or a buffer otherwise. }) ``` ### iteration You can iterate over streams synchronously or asynchronously in platforms that support it. Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration. To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option. ```js const mp = new Minipass({ objectMode: true }) mp.write('a') mp.write('b') for (let letter of mp) { console.log(letter) // a, b } mp.write('c') mp.write('d') for (let letter of mp) { console.log(letter) // c, d } mp.write('e') mp.end() for (let letter of mp) { console.log(letter) // e } for (let letter of mp) { console.log(letter) // nothing } ``` Asynchronous iteration will continue until the end event is reached, consuming all of the data.
```js const mp = new Minipass({ encoding: 'utf8' }) // some source of some data let i = 5 const inter = setInterval(() => { if (i --> 0) mp.write(Buffer.from('foo\n', 'utf8')) else { mp.end() clearInterval(inter) } }, 100) // consume the data with asynchronous iteration async function consume () { for await (let chunk of mp) { console.log(chunk) } return 'ok' } consume().then(res => console.log(res)) // logs `foo\n` 5 times, and then `ok` ``` ### subclass that `console.log()`s everything written into it ```js class Logger extends Minipass { write (chunk, encoding, callback) { console.log('WRITE', chunk, encoding) return super.write(chunk, encoding, callback) } end (chunk, encoding, callback) { console.log('END', chunk, encoding) return super.end(chunk, encoding, callback) } } someSource.pipe(new Logger()).pipe(someDest) ``` ### same thing, but using an inline anonymous class ```js // js classes are fun someSource .pipe(new (class extends Minipass { emit (ev, ...data) { // let's also log events, because debugging some weird thing console.log('EMIT', ev) return super.emit(ev, ...data) } write (chunk, encoding, callback) { console.log('WRITE', chunk, encoding) return super.write(chunk, encoding, callback) } end (chunk, encoding, callback) { console.log('END', chunk, encoding) return super.end(chunk, encoding, callback) } })) .pipe(someDest) ``` ### subclass that defers 'end' for some reason ```js class SlowEnd extends Minipass { emit (ev, ...args) { if (ev === 'end') { console.log('going to end, hold on a sec') setTimeout(() => { console.log('ok, ready to end now') super.emit('end', ...args) }, 100) } else { return super.emit(ev, ...args) } } } ``` ### transform that creates newline-delimited JSON ```js class NDJSONEncode extends Minipass { write (obj, cb) { try { // JSON.stringify can throw, emit an error on that return super.write(JSON.stringify(obj) + '\n', 'utf8', cb) } catch (er) { this.emit('error', er) } } end (obj, cb) { if (typeof obj === 'function') { cb = obj obj = undefined } if (obj !== undefined) { this.write(obj) } return super.end(cb) } } ``` ### transform that parses newline-delimited JSON ```js class NDJSONDecode extends Minipass { constructor (options) { // always be in object mode, as far as Minipass is concerned super({ objectMode: true }) this._jsonBuffer = '' } write (chunk, encoding, cb) { if (typeof chunk === 'string' && typeof encoding === 'string' && encoding !== 'utf8') { chunk = Buffer.from(chunk, encoding).toString() } else if (Buffer.isBuffer(chunk)) { chunk = chunk.toString() } if (typeof encoding === 'function') { cb = encoding } const jsonData = (this._jsonBuffer + chunk).split('\n') this._jsonBuffer = jsonData.pop() for (let i = 0; i < jsonData.length; i++) { let parsed try { // JSON.parse can throw; emit an error on bad input parsed = JSON.parse(jsonData[i]) } catch (er) { this.emit('error', er) continue } super.write(parsed) } if (cb) cb() } } ``` Port of the OpenBSD `bcrypt_pbkdf` function to pure JavaScript. `npm`-ified version of [Devi Mandiri's port](https://github.com/devi/tmp/blob/master/js/bcrypt_pbkdf.js), with some minor performance improvements. The code is copied verbatim (and un-styled) from Devi's work. This product includes software developed by Niels Provos. ## API ### `bcrypt_pbkdf.pbkdf(pass, passlen, salt, saltlen, key, keylen, rounds)` Derive a cryptographic key of arbitrary length from a given password and salt, using the OpenBSD `bcrypt_pbkdf` function. This is a combination of Blowfish and SHA-512. See [this article](http://www.tedunangst.com/flak/post/bcrypt-pbkdf) for further information.
Parameters: * `pass`, a Uint8Array of length `passlen` * `passlen`, an integer Number * `salt`, a Uint8Array of length `saltlen` * `saltlen`, an integer Number * `key`, a Uint8Array of length `keylen`, will be filled with output * `keylen`, an integer Number * `rounds`, an integer Number, number of rounds of the PBKDF to run ### `bcrypt_pbkdf.hash(sha2pass, sha2salt, out)` Calculate a Blowfish hash, given SHA2-512 output of a password and salt. Used as part of the inner round function in the PBKDF. Parameters: * `sha2pass`, a Uint8Array of length 64 * `sha2salt`, a Uint8Array of length 64 * `out`, a Uint8Array of length 32, will be filled with output ## License This source form is a 1:1 port from the OpenBSD `blowfish.c` and `bcrypt_pbkdf.c`. As a result, it retains the original copyright and license. The two files are under slightly different (but compatible) licenses, and are here combined in one file. For each of the full license texts see `LICENSE`. # ignore-walk [![Build Status](https://travis-ci.org/npm/ignore-walk.svg?branch=master)](https://travis-ci.org/npm/ignore-walk) Nested/recursive `.gitignore`/`.npmignore` parsing and filtering. Walk a directory creating a list of entries, parsing any `.ignore` files met along the way to exclude files. ## USAGE ```javascript const walk = require('ignore-walk') // All options are optional, defaults provided. // this function returns a promise, but you can also pass a cb // if you like that approach better. walk({ path: '...', // root dir to start in. defaults to process.cwd() ignoreFiles: [ '.gitignore' ], // list of filenames. defaults to ['.ignore'] includeEmpty: true|false, // true to include empty dirs, default false follow: true|false // true to follow symlink dirs, default false }, callback) // to walk synchronously, do it this way: const result = walk.sync({ path: '/wow/such/filepath' }) ``` If you want to get at the underlying classes, they're at `walk.Walker` and `walk.WalkerSync`. ## OPTIONS * `path` The path to start in. Defaults to `process.cwd()` * `ignoreFiles` Filenames to treat as ignore files. The default is `['.ignore']`. (This is where you'd put `.gitignore` or `.npmignore` or whatever.) If multiple ignore files are in a directory, then rules from each are applied in the order that the files are listed. * `includeEmpty` Set to `true` to include empty directories, assuming they are not excluded by any of the ignore rules. If not set, then this follows the standard `git` behavior of not including directories that are empty. Note: this will cause an empty directory to be included if it would contain an included entry, even if it would have otherwise been excluded itself. For example, given the rules `*` (ignore everything) and `!/a/b/c` (re-include the entry at `/a/b/c`), the directory `/a/b` will be included if it is empty. * `follow` Set to `true` to treat symbolically linked directories as directories, recursing into them. There is no handling for nested symlinks, so `ELOOP` errors can occur in some cases when using this option. Defaults to `false`. node-asn1 is a library for encoding and decoding ASN.1 datatypes in pure JS. Currently BER encoding is supported; at some point I'll likely have to do DER. ## Usage Mostly, if you're *actually* needing to read and write ASN.1, you probably don't need this readme to explain what and why. If you have no idea what ASN.1 is, see this: ftp://ftp.rsa.com/pub/pkcs/ascii/layman.asc The source is pretty much self-explanatory, and has read/write methods for the common types out there. 
### Decoding The following reads an ASN.1 sequence with a boolean. var Ber = require('asn1').Ber; var reader = new Ber.Reader(Buffer.from([0x30, 0x03, 0x01, 0x01, 0xff])); reader.readSequence(); console.log('Sequence len: ' + reader.length); if (reader.peek() === Ber.Boolean) console.log(reader.readBoolean()); ### Encoding The following generates the same payload as above. var Ber = require('asn1').Ber; var writer = new Ber.Writer(); writer.startSequence(); writer.writeBoolean(true); writer.endSequence(); console.log(writer.buffer); ## Installation npm install asn1 ## License MIT. ## Bugs See <https://github.com/joyent/node-asn1/issues>. # util-promisify Node 8's [`require('util').promisify`](https://nodejs.org/api/util.html#util_util_promisify_original) as a node module, so you can use it right now! Supports [all major node versions](https://github.com/nodejs/LTS#lts-schedule1). [![build status](https://travis-ci.org/juliangruber/util-promisify.svg?branch=master)](http://travis-ci.org/juliangruber/util-promisify) [![downloads](https://img.shields.io/npm/dm/util-promisify.svg)](https://www.npmjs.org/package/util-promisify) [![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/util-promisify.svg)](https://greenkeeper.io/) ## Usage ```js const promisify = require('util-promisify'); const fs = require('fs'); const stat = promisify(fs.stat); stat('/tmp/').then(s => { // ... }); ``` ## Installation ```bash $ npm install util-promisify ``` ## API See `util.promisify`'s [API docs](https://nodejs.org/api/util.html#util_util_promisify_original). ### promisify(original) ### (Symbol) promisify.custom If available, the Symbol is reexported from node core's `util` module. ## License MIT # balanced-match Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well! [![build status](https://secure.travis-ci.org/juliangruber/balanced-match.svg)](http://travis-ci.org/juliangruber/balanced-match) [![downloads](https://img.shields.io/npm/dm/balanced-match.svg)](https://www.npmjs.org/package/balanced-match) [![testling badge](https://ci.testling.com/juliangruber/balanced-match.png)](https://ci.testling.com/juliangruber/balanced-match) ## Example Get the first matching pair of braces: ```js var balanced = require('balanced-match'); console.log(balanced('{', '}', 'pre{in{nested}}post')); console.log(balanced('{', '}', 'pre{first}between{second}post')); console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post')); ``` The matches are: ```bash $ node example.js { start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' } { start: 3, end: 9, pre: 'pre', body: 'first', post: 'between{second}post' } { start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' } ``` ## API ### var m = balanced(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an object with those keys: * **start** the index of the first match of `a` * **end** the index of the matching `b` * **pre** the preamble, `a` and `b` not included * **body** the match, `a` and `b` not included * **post** the postscript, `a` and `b` not included If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`. ### var r = balanced.range(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an array with indexes: `[ <a index>, <b index> ]`. 
If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`. ## Installation With [npm](https://npmjs.org) do: ```bash npm install balanced-match ``` ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # duplexer3 [![Build Status](https://travis-ci.org/floatdrop/duplexer3.svg?branch=master)](https://travis-ci.org/floatdrop/duplexer3) [![Coverage Status](https://coveralls.io/repos/floatdrop/duplexer3/badge.svg?branch=master&service=github)](https://coveralls.io/github/floatdrop/duplexer3?branch=master) Like [duplexer2](https://github.com/deoxxa/duplexer2) but using Streams3 without readable-stream dependency ```javascript var stream = require("stream"); var duplexer3 = require("duplexer3"); var writable = new stream.Writable({objectMode: true}), readable = new stream.Readable({objectMode: true}); writable._write = function _write(input, encoding, done) { if (readable.push(input)) { return done(); } else { readable.once("drain", done); } }; readable._read = function _read(n) { // no-op }; // simulate the readable thing closing after a bit writable.once("finish", function() { setTimeout(function() { readable.push(null); }, 500); }); var duplex = duplexer3(writable, readable); duplex.on("data", function(e) { console.log("got data", JSON.stringify(e)); }); duplex.on("finish", function() { console.log("got finish event"); }); duplex.on("end", function() { console.log("got end event"); }); duplex.write("oh, hi there", function() { console.log("finished writing"); }); duplex.end(function() { console.log("finished ending"); }); ``` ``` got data "oh, hi there" finished writing got finish event finished ending got end event ``` ## Overview This is a reimplementation of [duplexer](https://www.npmjs.com/package/duplexer) using the Streams3 API which is standard in Node as of v4. Everything largely works the same. ## Installation [Available via `npm`](https://docs.npmjs.com/cli/install): ``` $ npm i duplexer3 ``` ## API ### duplexer3 Creates a new `DuplexWrapper` object, which is the actual class that implements most of the fun stuff. All that fun stuff is hidden. DON'T LOOK. 
```javascript duplexer3([options], writable, readable) ``` ```javascript const duplex = duplexer3(new stream.Writable(), new stream.Readable()); ``` Arguments * __options__ - an object specifying the regular `stream.Duplex` options, as well as the properties described below. * __writable__ - a writable stream * __readable__ - a readable stream Options * __bubbleErrors__ - a boolean value that specifies whether to bubble errors from the underlying readable/writable streams. Default is `true`. ## License 3-clause BSD. [A copy](./LICENSE) is included with the source. ## Contact * GitHub ([deoxxa](http://github.com/deoxxa)) * Twitter ([@deoxxa](http://twitter.com/deoxxa)) * Email ([deoxxa@fknsrs.biz](mailto:deoxxa@fknsrs.biz)) # y18n [![Build Status][travis-image]][travis-url] [![Coverage Status][coveralls-image]][coveralls-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js var __ = require('y18n').__ console.log(__('my awesome string %s', 'foo')) ``` output: `my awesome string foo` _using tagged template literals_ ```js var __ = require('y18n').__ var str = 'foo' console.log(__`my awesome string ${str}`) ``` output: `my awesome string foo` _pluralization support:_ ```js var __n = require('y18n').__n console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')) ``` output: `2 fishes foo` ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. 
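As a rough sketch of how the methods above fit together (assuming a writable `./locales` directory, or `updateFiles: false` to keep everything in memory):

```js
var y18n = require('y18n')({ locale: 'en', updateFiles: false })

console.log(y18n.__('hello %s', 'world'))     // "hello world"
y18n.setLocale('pirate')
console.log(y18n.getLocale())                 // "pirate"
y18n.updateLocale({ 'hello %s': 'ahoy %s' })  // add/override a key for the current locale
console.log(y18n.__('hello %s', 'matey'))     // should print "ahoy matey"
```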
## License ISC [travis-url]: https://travis-ci.org/yargs/y18n [travis-image]: https://img.shields.io/travis/yargs/y18n.svg [coveralls-url]: https://coveralls.io/github/yargs/y18n [coveralls-image]: https://img.shields.io/coveralls/yargs/y18n.svg [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard # brace-expansion [Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html), as known from sh/bash, in JavaScript. [![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion) [![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion) [![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/) [![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion) ## Example ```js var expand = require('brace-expansion'); expand('file-{a,b,c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('-v{,,}') // => ['-v', '-v', '-v'] expand('file{0..2}.jpg') // => ['file0.jpg', 'file1.jpg', 'file2.jpg'] expand('file-{a..c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('file{2..0}.jpg') // => ['file2.jpg', 'file1.jpg', 'file0.jpg'] expand('file{0..4..2}.jpg') // => ['file0.jpg', 'file2.jpg', 'file4.jpg'] expand('file-{a..e..2}.jpg') // => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg'] expand('file{00..10..5}.jpg') // => ['file00.jpg', 'file05.jpg', 'file10.jpg'] expand('{{A..C},{a..c}}') // => ['A', 'B', 'C', 'a', 'b', 'c'] expand('ppp{,config,oe{,conf}}') // => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf'] ``` ## API ```js var expand = require('brace-expansion'); ``` ### var expanded = expand(str) Return an array of all possible and valid expansions of `str`. If none are found, `[str]` is returned. Valid expansions are: ```js /^(.*,)+(.+)?$/ // {a,b,...} ``` A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` A numeric sequence from `x` to `y` inclusive, with optional increment. If `x` or `y` start with a leading `0`, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too. ```js /^[a-zA-Z]\.\.[a-zA-Z](\.\.-?\d+)?$/ // {x..y[..incr]} ``` An alphabetic sequence from `x` to `y` inclusive, with optional increment. `x` and `y` must be exactly one character, and if given, `incr` must be a number. For compatibility reasons, the string `${` is not eligible for brace expansion. ## Installation With [npm](https://npmjs.org) do: ```bash npm install brace-expansion ``` ## Contributors - [Julian Gruber](https://github.com/juliangruber) - [Isaac Z. Schlueter](https://github.com/isaacs) ## Sponsors This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)! Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)!
## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # retry Abstraction for exponential and custom retry strategies for failed operations. ## Installation npm install retry ## Current Status This module has been tested and is ready to be used. ## Tutorial The example below will retry a potentially failing `dns.resolve` operation `10` times using an exponential backoff strategy. With the default settings, this means the last attempt is made after `17 minutes and 3 seconds`. ``` javascript var dns = require('dns'); var retry = require('retry'); function faultTolerantResolve(address, cb) { var operation = retry.operation(); operation.attempt(function(currentAttempt) { dns.resolve(address, function(err, addresses) { if (operation.retry(err)) { return; } cb(err ? operation.mainError() : null, addresses); }); }); } faultTolerantResolve('nodejs.org', function(err, addresses) { console.log(err, addresses); }); ``` Of course you can also configure the factors that go into the exponential backoff. See the API documentation below for all available settings. In the example above, `currentAttempt` is an int representing the number of attempts made so far. ``` javascript var operation = retry.operation({ retries: 5, factor: 3, minTimeout: 1 * 1000, maxTimeout: 60 * 1000, randomize: true, }); ``` ## API ### retry.operation([options]) Creates a new `RetryOperation` object. `options` is the same as `retry.timeouts()`'s `options`, with two additions: * `forever`: Whether to retry forever, defaults to `false`. * `unref`: Whether to [unref](https://nodejs.org/api/timers.html#timers_unref) the setTimeout's, defaults to `false`. ### retry.timeouts([options]) Returns an array of timeouts. All time `options` and return values are in milliseconds. If `options` is an array, a copy of that array is returned. `options` is a JS object that can contain any of the following keys: * `retries`: The maximum number of times to retry the operation. Default is `10`. * `factor`: The exponential factor to use. Default is `2`. * `minTimeout`: The number of milliseconds before starting the first retry. Default is `1000`. * `maxTimeout`: The maximum number of milliseconds between two retries. Default is `Infinity`. * `randomize`: Randomizes the timeouts by multiplying with a factor between `1` and `2`. Default is `false`. The formula used to calculate the individual timeouts is: ``` Math.min(random * minTimeout * Math.pow(factor, attempt), maxTimeout) ``` Have a look at [this article][article] for a better explanation of this approach.
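As a hedged sketch of how the defaults play out, the formula above reproduces the schedule mentioned in the tutorial: with `retries: 10`, `factor: 2`, `minTimeout: 1000` and no randomization, the ten timeouts sum to 1,023,000 ms, i.e. the 17 minutes and 3 seconds quoted earlier.

``` javascript
var retry = require('retry');

// Spell out the default settings so the arithmetic is visible.
var timeouts = retry.timeouts({
  retries: 10,
  factor: 2,
  minTimeout: 1000,
  maxTimeout: Infinity,
  randomize: false
});

console.log(timeouts);
// [ 1000, 2000, 4000, 8000, 16000, 32000, 64000, 128000, 256000, 512000 ]
// i.e. Math.min(1 * 1000 * Math.pow(2, attempt), Infinity) for attempt = 0..9

var total = timeouts.reduce(function (sum, t) { return sum + t; }, 0);
console.log(total); // 1023000 ms, roughly 17 minutes and 3 seconds
```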
If you want to tune your `factor` / `times` settings to attempt the last retry after a certain amount of time, you can use wolfram alpha. For example, in order to tune for `10` attempts in `5 minutes`, you can use this equation: ![screenshot](https://github.com/tim-kos/node-retry/raw/master/equation.gif) Explaining the various values from left to right: * `k = 0 ... 9`: The `retries` value (10) * `1000`: The `minTimeout` value in ms (1000) * `x^k`: No need to change this, `x` will be your resulting factor * `5 * 60 * 1000`: The desired total amount of time for retrying in ms (5 minutes) To make this a little easier for you, use wolfram alpha to do the calculations: <http://www.wolframalpha.com/input/?i=Sum%5B1000*x^k%2C+{k%2C+0%2C+9}%5D+%3D+5+*+60+*+1000> [article]: http://dthain.blogspot.com/2009/02/exponential-backoff-in-distributed.html ### retry.createTimeout(attempt, opts) Returns a new `timeout` (integer in milliseconds) based on the given parameters. `attempt` is an integer representing the retry for which the timeout should be calculated. If your retry operation was executed 4 times, you had one attempt and 3 retries. If you then want to calculate a new timeout, you should set `attempt` to 4 (attempts are zero-indexed). `opts` can include `factor`, `minTimeout`, `randomize` (boolean) and `maxTimeout`. They are documented above. `retry.createTimeout()` is used internally by `retry.timeouts()` and is public for you to be able to create your own timeouts for reinserting an item, see [issue #13](https://github.com/tim-kos/node-retry/issues/13). ### retry.wrap(obj, [options], [methodNames]) Wrap all functions of the `obj` with retry. Optionally you can pass operation options and an array of method names which need to be wrapped. ``` retry.wrap(obj) retry.wrap(obj, ['method1', 'method2']) retry.wrap(obj, {retries: 3}) retry.wrap(obj, {retries: 3}, ['method1', 'method2']) ``` The `options` object can take any options that the usual call to `retry.operation` can take. ### new RetryOperation(timeouts, [options]) Creates a new `RetryOperation` where `timeouts` is an array where each value is a timeout given in milliseconds. Available options: * `forever`: Whether to retry forever, defaults to `false`. * `unref`: Whether to [unref](https://nodejs.org/api/timers.html#timers_unref) the setTimeout's, defaults to `false`. If `forever` is true, the following changes happen: * `RetryOperation.errors()` will only output an array of one item: the last error. * `RetryOperation` will repeatedly use the `timeouts` array. Once all of its timeouts have been used up, it restarts with the first timeout, then uses the second and so on. #### retryOperation.errors() Returns an array of all errors that have been passed to `retryOperation.retry()` so far. #### retryOperation.mainError() A reference to the error object that occurred most frequently. Errors are compared using the `error.message` property. If multiple error messages occurred the same number of times, the last error object with that message is returned. If no errors occurred so far, the value is `null`. #### retryOperation.attempt(fn, timeoutOps) Defines the function `fn` that is to be retried and executes it for the first time right away. The `fn` function can receive an optional `currentAttempt` parameter that represents the number of attempts to execute `fn` so far. Optionally defines `timeoutOps` which is an object having a property `timeout` in milliseconds and a property `cb` callback function.
Whenever your retry operation takes longer than `timeout` to execute, the timeout callback function `cb` is called. #### retryOperation.try(fn) This is an alias for `retryOperation.attempt(fn)`. This is deprecated. Please use `retryOperation.attempt(fn)` instead. #### retryOperation.start(fn) This is an alias for `retryOperation.attempt(fn)`. This is deprecated. Please use `retryOperation.attempt(fn)` instead. #### retryOperation.retry(error) Returns `false` when no `error` value is given, or the maximum amount of retries has been reached. Otherwise it returns `true`, and retries the operation after the timeout for the current attempt number. #### retryOperation.stop() Allows you to stop the operation being retried. Useful for aborting the operation on a fatal error etc. #### retryOperation.attempts() Returns an int representing the number of attempts it took to call `fn` before it was successful. ## License retry is licensed under the MIT license. # Changelog 0.10.0 Adding `stop` functionality, thanks to @maxnachlinger. 0.9.0 Adding `unref` functionality, thanks to @satazor. 0.8.0 Implementing retry.wrap. 0.7.0 Some bug fixes and made retry.createTimeout() public. Fixed issues [#10](https://github.com/tim-kos/node-retry/issues/10), [#12](https://github.com/tim-kos/node-retry/issues/12), and [#13](https://github.com/tim-kos/node-retry/issues/13). 0.6.0 Introduced optional timeOps parameter for the attempt() function which is an object having a property timeout in milliseconds and a property cb callback function. Whenever your retry operation takes longer than timeout to execute, the timeout callback function cb is called. 0.5.0 Some minor refactoring. 0.4.0 Changed retryOperation.try() to retryOperation.attempt(). Deprecated the aliases start() and try() for it. 0.3.0 Added retryOperation.start() which is an alias for retryOperation.try(). 0.2.0 Added attempts() function and parameter to retryOperation.try() representing the number of attempts it took to call fn(). # bin-links [![npm version](https://img.shields.io/npm/v/bin-links.svg)](https://npm.im/bin-links) [![license](https://img.shields.io/npm/l/bin-links.svg)](https://npm.im/bin-links) [![Travis](https://img.shields.io/travis/npm/bin-links.svg)](https://travis-ci.org/npm/bin-links) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/npm/bin-links?svg=true)](https://ci.appveyor.com/project/npm/bin-links) [![Coverage Status](https://coveralls.io/repos/github/npm/bin-links/badge.svg?branch=latest)](https://coveralls.io/github/npm/bin-links?branch=latest) [`bin-links`](https://github.com/npm/bin-links) is a standalone library that links binaries and man pages for Javascript packages ## Install `$ npm install bin-links` ## Table of Contents * [Example](#example) * [Features](#features) * [Contributing](#contributing) * [API](#api) * [`binLinks`](#binLinks) ### Example ```javascript // todo ``` ### Features * Links bin files listed under the `bin` property of pkg to the node_modules/.bin directory of the installing environment. * Links man files listed under the `man` property of pkg to the share/man directory of the provided optional directory prefix. ### Contributing The npm team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. 
Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear. ### API #### <a name="binLinks"></a> `> binLinks(pkg, folder, global, opts, cb)` ##### Example ```javascript binLinks(pkg, folder, global, opts, cb) ``` # which Like the unix `which` utility. Finds the first instance of a specified executable in the PATH environment variable. Does not cache the results, so `hash -r` is not needed when the PATH changes. ## USAGE ```javascript var which = require('which') // async usage which('node', function (er, resolvedPath) { // er is returned if no "node" is found on the PATH // if it is found, then the absolute path to the exec is returned }) // sync usage // throws if not found var resolved = which.sync('node') // if nothrow option is used, returns null if not found resolved = which.sync('node', {nothrow: true}) // Pass options to override the PATH and PATHEXT environment vars. which('node', { path: someOtherPath }, function (er, resolved) { if (er) throw er console.log('found at %j', resolved) }) ``` ## CLI USAGE Same as the BSD `which(1)` binary. ``` usage: which [-as] program ... ``` ## OPTIONS You may pass an options object as the second argument. - `path`: Use instead of the `PATH` environment variable. - `pathExt`: Use instead of the `PATHEXT` environment variable. - `all`: Return all matches, instead of just the first one. Note that this means the function returns an array of strings instead of a single string. # mute-stream Bytes go in, but they don't come out (when muted). This is a basic pass-through stream, but when muted, the bytes are silently dropped, rather than being passed through. ## Usage ```javascript var MuteStream = require('mute-stream') var ms = new MuteStream(options) ms.pipe(process.stdout) ms.write('foo') // writes 'foo' to stdout ms.mute() ms.write('bar') // does not write 'bar' ms.unmute() ms.write('baz') // writes 'baz' to stdout // can also be used to mute incoming data var ms = new MuteStream input.pipe(ms) ms.on('data', function (c) { console.log('data: ' + c) }) input.emit('data', 'foo') // logs 'foo' ms.mute() input.emit('data', 'bar') // does not log 'bar' ms.unmute() input.emit('data', 'baz') // logs 'baz' ``` ## Options All options are optional. * `replace` Set to a string to replace each character with the specified string when muted. (So you can show `****` instead of the password, for example.) * `prompt` If you are using a replacement char, and also using a prompt with a readline stream (as for a `Password: *****` input), then specify what the prompt is so that backspace will work properly. Otherwise, pressing backspace will overwrite the prompt with the replacement character, which is weird. ## ms.mute() Set `muted` to `true`. Turns `.write()` into a no-op. ## ms.unmute() Set `muted` to `false` ## ms.isTTY True if the pipe destination is a TTY, or if the incoming pipe source is a TTY. ## Other stream methods... The other standard readable and writable stream methods are all available. The MuteStream object acts as a facade to its pipe source and destination. #is-symbol <sup>[![Version Badge][2]][1]</sup> [![Build Status][3]][4] [![dependency status][5]][6] [![dev dependency status][7]][8] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][11]][1] [![browser support][9]][10] Is this an ES6 Symbol value? 
## Example ```js var assert = require('assert'); var isSymbol = require('is-symbol'); assert(!isSymbol(function () {})); assert(!isSymbol(null)); assert(!isSymbol(function* () { yield 42; return Infinity; })); assert(isSymbol(Symbol.iterator)); assert(isSymbol(Symbol('foo'))); assert(isSymbol(Symbol.for('foo'))); assert(isSymbol(Object(Symbol('foo')))); ``` ## Tests Simply clone the repo, `npm install`, and run `npm test` [1]: https://npmjs.org/package/is-symbol [2]: http://versionbadg.es/ljharb/is-symbol.svg [3]: https://travis-ci.org/ljharb/is-symbol.svg [4]: https://travis-ci.org/ljharb/is-symbol [5]: https://david-dm.org/ljharb/is-symbol.svg [6]: https://david-dm.org/ljharb/is-symbol [7]: https://david-dm.org/ljharb/is-symbol/dev-status.svg [8]: https://david-dm.org/ljharb/is-symbol#info=devDependencies [9]: https://ci.testling.com/ljharb/is-symbol.png [10]: https://ci.testling.com/ljharb/is-symbol [11]: https://nodei.co/npm/is-symbol.png?downloads=true&stars=true [license-image]: http://img.shields.io/npm/l/is-symbol.svg [license-url]: LICENSE [downloads-image]: http://img.shields.io/npm/dm/is-symbol.svg [downloads-url]: http://npm-stat.com/charts.html?package=is-symbol <p align="center"> <img width="250" src="/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> [![Build Status][travis-image]][travis-url] [![Coverage Status][coveralls-image]][coveralls-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description : Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments. > <img width="400" src="/screen.png"> * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage : ### Simple Example ````javascript #!/usr/bin/env node const argv = require('yargs').argv if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ```` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node require('yargs') // eslint-disable-line .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ## Community : Having problems? Want to contribute? Join our [community slack](http://devtoolscommunity.herokuapp.com).
## Documentation : ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Contributing](/contributing.md) [travis-url]: https://travis-ci.org/yargs/yargs [travis-image]: https://img.shields.io/travis/yargs/yargs/master.svg [coveralls-url]: https://coveralls.io/github/yargs/yargs [coveralls-image]: https://img.shields.io/coveralls/yargs/yargs.svg [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs # libnpm [`libnpm`](https://github.com/npm/libnpm) is the programmatic API for npm. For bug reports and support, please head over to [npm.community](https://npm.community). ## Install `$ npm install libnpm` ## Table of Contents * [Example](#example) * [Features](#features) * [API](#api) * Fetching Packages and Their Info * [`manifest`](https://www.npmjs.com/package/pacote#manifest) * [`packument`](https://www.npmjs.com/package/pacote#packument) * [`tarball`](https://www.npmjs.com/package/pacote#tarball) * [`extract`](https://www.npmjs.com/package/pacote#extract) * [`search`](https://npm.im/libnpmsearch) * Package-related Registry APIs * [`publish`]() * [`unpublish`](#unpublish) * [`access`](https://npm.im/libnpmaccess) * Account-related Registry APIs * [`login`](https://www.npmjs.com/package/npm-profile#login) * [`adduser`](https://www.npmjs.com/package/npm-profile#adduser) * [`profile`](https://npm.im/npm-profile) * [`hook`](https://npm.im/libnpmhook) * [`team`](https://npm.im/libnpmteam) * [`org`](https://npm.im/libnpmorg) * Miscellaneous * [`parseArg`](https://npm.im/npm-package-arg) * [`config`](https://npm.im/libnpmconfig) * [`readJSON`](https://npm.im/read-package-json) * [`verifyLock`](https://npm.im/lock-verify) * [`getPrefix`](https://npm.im/find-npm-prefix) * [`logicalTree`](https://npm.im/npm-logical-tree) * [`stringifyPackage`](https://npm.im/stringify-package) * [`runScript`](https://www.npmjs.com/package/npm-lifecycle) * [`log`](https://npm.im/npmlog) * [`fetch`](https://npm.im/npm-registry-fetch) (plain ol' client for registry interaction) * [`linkBin`](https://npm.im/bin-links) ### Example ```javascript await libnpm.manifest('libnpm') // => Manifest { name: 'libnpm', ... } ``` ### API This package re-exports the APIs from other packages for convenience. Refer to the [table of contents](#table-of-contents) for detailed documentation on each individual exported API. 
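To make the re-export pattern concrete, here is a slightly larger sketch; the package name, search query, and option values are illustrative, and it assumes network access to the configured registry:

```javascript
const libnpm = require('libnpm')

async function main () {
  // `manifest` comes from pacote: fetch the metadata for one version of a package
  const manifest = await libnpm.manifest('libnpm@latest')
  console.log(manifest.name, manifest.version)

  // `search` comes from libnpmsearch: query the registry's search endpoint
  const results = await libnpm.search('cookie', { limit: 5 })
  for (const pkg of results) {
    console.log(pkg.name, '-', pkg.description)
  }
}

main().catch(err => {
  console.error(err)
  process.exit(1)
})
```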
# figgy-pudding [![npm version](https://img.shields.io/npm/v/figgy-pudding.svg)](https://npm.im/figgy-pudding) [![license](https://img.shields.io/npm/l/figgy-pudding.svg)](https://npm.im/figgy-pudding) [![Travis](https://img.shields.io/travis/zkat/figgy-pudding.svg)](https://travis-ci.org/zkat/figgy-pudding) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/zkat/figgy-pudding?svg=true)](https://ci.appveyor.com/project/zkat/figgy-pudding) [![Coverage Status](https://coveralls.io/repos/github/zkat/figgy-pudding/badge.svg?branch=latest)](https://coveralls.io/github/zkat/figgy-pudding?branch=latest) [`figgy-pudding`](https://github.com/zkat/figgy-pudding) is a small JavaScript library for managing and composing cascading options objects -- hiding what needs to be hidden from each layer, without having to do a lot of manual munging and passing of options. ### The God Object is Dead! ### Now Bring Us Some Figgy Pudding! ## Install `$ npm install figgy-pudding` ## Table of Contents * [Example](#example) * [Features](#features) * [API](#api) * [`figgyPudding(spec)`](#figgy-pudding) * [`PuddingFactory(values)`](#pudding-factory) * [`opts.get()`](#opts-get) * [`opts.concat()`](#opts-concat) * [`opts.toJSON()`](#opts-to-json) * [`opts.forEach()`](#opts-for-each) * [`opts[Symbol.iterator]()`](#opts-symbol-iterator) * [`opts.entries()`](#opts-entries) * [`opts.keys()`](#opts-keys) * [`opts.value()`](#opts-values) ### Example ```javascript // print-package.js const fetch = require('./fetch.js') const puddin = require('figgy-pudding') const PrintOpts = puddin({ json: { default: false } }) async function printPkg (name, opts) { // Expected pattern is to call this in every interface function. If `opts` is // not passed in, it will automatically create an (empty) object for it. opts = PrintOpts(opts) const uri = `https://registry.npmjs.com/${name}` const res = await fetch(uri, opts.concat({ // Add or override any passed-in configs and pass them down. log: customLogger })) // The following would throw an error, because it's not in PrintOpts: // console.log(opts.log) if (opts.json) { return res.json() } else { return res.text() } } console.log(await printPkg('figgy', { // Pass in *all* configs at the toplevel, as a regular object. json: true, cache: './tmp-cache' })) ``` ```javascript // fetch.js const puddin = require('figgy-pudding') const FetchOpts = puddin({ log: { default: require('npmlog') }, cache: {} }) module.exports = async function (..., opts) { opts = FetchOpts(opts) } ``` ### Features * hide options from layer that didn't ask for it * shared multi-layer options * make sure `opts` argument is available * transparent key access like normal keys, through a Proxy. No need for`.get()`! * default values * key aliases * arbitrary key filter functions * key/value iteration * serialization * 100% test coverage using `tap --100` ### API #### <a name="figgy-pudding"></a> `> figgyPudding({ key: { default: val } | String }, [opts]) -> PuddingFactory` Defines an Options constructor that can be used to collect only the needed options. An optional `default` property for specs can be used to specify default values if nothing was passed in. If the value for a spec is a string, it will be treated as an alias to that other key. 
##### Example ```javascript const MyAppOpts = figgyPudding({ lg: 'log', log: { default: () => require('npmlog') }, cache: {} }) ``` #### <a name="pudding-factory"></a> `> PuddingFactory(...providers) -> FiggyPudding{}` Instantiates an options object defined by `figgyPudding()`, which uses `providers`, in order, to find requested properties. Each provider can be either a plain object, a `Map`-like object (that is, one with a `.get()` method) or another figgyPudding `Opts` object. When nesting `Opts` objects, their properties will not become available to the new object, but any further nested `Opts` that reference that property _will_ be able to read from their grandparent, as long as they define that key. Default values for nested `Opts` parents will be used, if found. ##### Example ```javascript const ReqOpts = figgyPudding({ follow: {} }) const opts = ReqOpts({ follow: true, log: require('npmlog') }) opts.follow // => true opts.log // => Error: ReqOpts does not define `log` const MoreOpts = figgyPudding({ log: {} }) MoreOpts(opts).log // => npmlog object (passed in from original plain obj) MoreOpts(opts).follow // => Error: MoreOpts does not define `follow` ``` #### <a name="opts-get"></a> `> opts.get(key) -> Value` Gets a value from the options object. ##### Example ```js const opts = MyOpts(config) opts.get('foo') // value of `foo` opts.foo // Proxy-based access through `.get()` ``` #### <a name="opts-concat"></a> `> opts.concat(...moreProviders) -> FiggyPudding{}` Creates a new opts object of the same type as `opts` with additional providers. Providers further to the right shadow providers to the left, with properties in the original `opts` being shadows by the new providers. ##### Example ```js const opts = MyOpts({x: 1}) opts.get('x') // 1 opts.concat({x: 2}).get('x') // 2 opts.get('x') // 1 (original opts object left intact) ``` #### <a name="opts-to-json"></a> `> opts.toJSON() -> Value` Converts `opts` to a plain, JSON-stringifiable JavaScript value. Used internally by JavaScript to get `JSON.stringify()` working. Only keys that are readable by the current pudding type will be serialized. ##### Example ```js const opts = MyOpts({x: 1}) opts.toJSON() // {x: 1} JSON.stringify(opts) // '{"x":1}' ``` #### <a name="opts-for-each"></a> `> opts.forEach((value, key, opts) => {}, thisArg) -> undefined` Iterates over the values of `opts`, limited to the keys readable by the current pudding type. `thisArg` will be used to set the `this` argument when calling the `fn`. ##### Example ```js const opts = MyOpts({x: 1, y: 2}) opts.forEach((value, key) => console.log(key, '=', value)) ``` #### <a name="opts-entries"></a> `> opts.entries() -> Iterator<[[key, value], ...]>` Returns an iterator that iterates over the keys and values in `opts`, limited to the keys readable by the current pudding type. Each iteration returns an array of `[key, value]`. ##### Example ```js const opts = MyOpts({x: 1, y: 2}) [...opts({x: 1, y: 2}).entries()] // [['x', 1], ['y', 2]] ``` #### <a name="opts-symbol-iterator"></a> `> opts[Symbol.iterator]() -> Iterator<[[key, value], ...]>` Returns an iterator that iterates over the keys and values in `opts`, limited to the keys readable by the current pudding type. Each iteration returns an array of `[key, value]`. Makes puddings work natively with JS iteration mechanisms. 
##### Example ```js const opts = MyOpts({x: 1, y: 2}) [...opts({x: 1, y: 2})] // [['x', 1], ['y', 2]] for (let [key, value] of opts({x: 1, y: 2})) { console.log(key, '=', value) } ``` #### <a name="opts-keys"></a> `> opts.keys() -> Iterator<[key, ...]>` Returns an iterator that iterates over the keys in `opts`, limited to the keys readable by the current pudding type. ##### Example ```js const opts = MyOpts({x: 1, y: 2}) [...opts({x: 1, y: 2}).keys()] // ['x', 'y'] ``` #### <a name="opts-values"></a> `> opts.values() -> Iterator<[value, ...]>` Returns an iterator that iterates over the values in `opts`, limited to the keys readable by the current pudding type. ##### Example ' ```js const opts = MyOpts({x: 1, y: 2}) [...opts({x: 1, y: 2}).values()] // [1, 2] ``` # read-installed Read all the installed packages in a folder, and return a tree structure with all the data. npm uses this. ## 2.0.0 Breaking changes in `2.0.0`: The second argument is now an `Object` that contains the following keys: * `depth` optional, defaults to Infinity * `log` optional log Function * `dev` optional, default false, set to true to include devDependencies ## Usage ```javascript var readInstalled = require("read-installed") // optional options var options = { dev: false, log: fn, depth: 2 } readInstalled(folder, options, function (er, data) { ... }) ``` forever-agent ============= HTTP Agent that keeps socket connections alive between keep-alive requests. Formerly part of mikeal/request, now a standalone module. # fs.realpath A backwards-compatible fs.realpath for Node v6 and above In Node v6, the JavaScript implementation of fs.realpath was replaced with a faster (but less resilient) native implementation. That raises new and platform-specific errors and cannot handle long or excessively symlink-looping paths. This module handles those cases by detecting the new errors and falling back to the JavaScript implementation. On versions of Node prior to v6, it has no effect. ## USAGE ```js var rp = require('fs.realpath') // async version rp.realpath(someLongAndLoopingPath, function (er, real) { // the ELOOP was handled, but it was a bit slower }) // sync version var real = rp.realpathSync(someLongAndLoopingPath) // monkeypatch at your own risk! // This replaces the fs.realpath/fs.realpathSync builtins rp.monkeypatch() // un-do the monkeypatching rp.unmonkeypatch() ``` # registry-auth-token [![npm version](http://img.shields.io/npm/v/registry-auth-token.svg?style=flat-square)](http://browsenpm.org/package/registry-auth-token)[![Build Status](http://img.shields.io/travis/rexxars/registry-auth-token/master.svg?style=flat-square)](https://travis-ci.org/rexxars/registry-auth-token) Get the auth token set for an npm registry from `.npmrc`. Also allows fetching the configured registry URL for a given npm scope. ## Installing ``` npm install --save registry-auth-token ``` ## Usage Returns an object containing `token` and `type`, or `undefined` if no token can be found. `type` can be either `Bearer` or `Basic`. 
```js var getAuthToken = require('registry-auth-token') var getRegistryUrl = require('registry-auth-token/registry-url') // Get auth token and type for default `registry` set in `.npmrc` console.log(getAuthToken()) // {token: 'someToken', type: 'Bearer'} // Get auth token for a specific registry URL console.log(getAuthToken('//registry.foo.bar')) // Find the registry auth token for a given URL (with deep path): // If registry is at `//some.host/registry` // URL passed is `//some.host/registry/deep/path` // Will find token the closest matching path; `//some.host/registry` console.log(getAuthToken('//some.host/registry/deep/path', {recursive: true})) // Find the configured registry url for scope `@foobar`. // Falls back to the global registry if not defined. console.log(getRegistryUrl('@foobar')) // Use the npm config that is passed in console.log(getRegistryUrl('http://registry.foobar.eu/', { npmrc: { 'registry': 'http://registry.foobar.eu/', '//registry.foobar.eu/:_authToken': 'qar' } })) ``` ## Return value ```js // If auth info can be found: {token: 'someToken', type: 'Bearer'} // Or: {token: 'someOtherToken', type: 'Basic'} // Or, if nothing is found: undefined ``` ## Security Please be careful when using this. Leaking your auth token is dangerous. ## License MIT-licensed. See LICENSE. [RFC6265](https://tools.ietf.org/html/rfc6265) Cookies and CookieJar for Node.js [![npm package](https://nodei.co/npm/tough-cookie.png?downloads=true&downloadRank=true&stars=true)](https://nodei.co/npm/tough-cookie/) [![Build Status](https://travis-ci.org/salesforce/tough-cookie.png?branch=master)](https://travis-ci.org/salesforce/tough-cookie) # Synopsis ``` javascript var tough = require('tough-cookie'); var Cookie = tough.Cookie; var cookie = Cookie.parse(header); cookie.value = 'somethingdifferent'; header = cookie.toString(); var cookiejar = new tough.CookieJar(); cookiejar.setCookie(cookie, 'http://currentdomain.example.com/path', cb); // ... cookiejar.getCookies('http://example.com/otherpath',function(err,cookies) { res.headers['cookie'] = cookies.join('; '); }); ``` # Installation It's _so_ easy! `npm install tough-cookie` Why the name? NPM modules `cookie`, `cookies` and `cookiejar` were already taken. ## Version Support Support for versions of node.js will follow that of the [request](https://www.npmjs.com/package/request) module. # API ## tough Functions on the module you get from `require('tough-cookie')`. All can be used as pure functions and don't need to be "bound". **Note**: prior to 1.0.x, several of these functions took a `strict` parameter. This has since been removed from the API as it was no longer necessary. ### `parseDate(string)` Parse a cookie date string into a `Date`. Parses according to RFC6265 Section 5.1.1, not `Date.parse()`. ### `formatDate(date)` Format a Date into a RFC1123 string (the RFC6265-recommended format). ### `canonicalDomain(str)` Transforms a domain-name into a canonical domain-name. The canonical domain-name is a trimmed, lowercased, stripped-of-leading-dot and optionally punycode-encoded domain-name (Section 5.1.2 of RFC6265). For the most part, this function is idempotent (can be run again on its output without ill effects). ### `domainMatch(str,domStr[,canonicalize=true])` Answers "does this real domain match the domain in a cookie?". The `str` is the "current" domain-name and the `domStr` is the "cookie" domain-name. Matches according to RFC6265 Section 5.1.3, but it helps to think of it as a "suffix match". 
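For orientation, a couple of representative calls are sketched below (the domains are illustrative; the optional third parameter is described next):

``` javascript
var tough = require('tough-cookie');

// "current" domain first, "cookie" domain second
tough.domainMatch('www.example.com', 'example.com');  // true  -- suffix match
tough.domainMatch('example.com', 'example.com');      // true  -- exact match
tough.domainMatch('example.com', 'www.example.com');  // false -- cookie domain is more specific
```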
The `canonicalize` parameter will run the other two parameters through `canonicalDomain` or not. ### `defaultPath(path)` Given a current request/response path, gives the Path apropriate for storing in a cookie. This is basically the "directory" of a "file" in the path, but is specified by Section 5.1.4 of the RFC. The `path` parameter MUST be _only_ the pathname part of a URI (i.e. excludes the hostname, query, fragment, etc.). This is the `.pathname` property of node's `uri.parse()` output. ### `pathMatch(reqPath,cookiePath)` Answers "does the request-path path-match a given cookie-path?" as per RFC6265 Section 5.1.4. Returns a boolean. This is essentially a prefix-match where `cookiePath` is a prefix of `reqPath`. ### `parse(cookieString[, options])` alias for `Cookie.parse(cookieString[, options])` ### `fromJSON(string)` alias for `Cookie.fromJSON(string)` ### `getPublicSuffix(hostname)` Returns the public suffix of this hostname. The public suffix is the shortest domain-name upon which a cookie can be set. Returns `null` if the hostname cannot have cookies set for it. For example: `www.example.com` and `www.subdomain.example.com` both have public suffix `example.com`. For further information, see http://publicsuffix.org/. This module derives its list from that site. This call is currently a wrapper around [`psl`](https://www.npmjs.com/package/psl)'s [get() method](https://www.npmjs.com/package/psl#pslgetdomain). ### `cookieCompare(a,b)` For use with `.sort()`, sorts a list of cookies into the recommended order given in the RFC (Section 5.4 step 2). The sort algorithm is, in order of precedence: * Longest `.path` * oldest `.creation` (which has a 1ms precision, same as `Date`) * lowest `.creationIndex` (to get beyond the 1ms precision) ``` javascript var cookies = [ /* unsorted array of Cookie objects */ ]; cookies = cookies.sort(cookieCompare); ``` **Note**: Since JavaScript's `Date` is limited to a 1ms precision, cookies within the same milisecond are entirely possible. This is especially true when using the `now` option to `.setCookie()`. The `.creationIndex` property is a per-process global counter, assigned during construction with `new Cookie()`. This preserves the spirit of the RFC sorting: older cookies go first. This works great for `MemoryCookieStore`, since `Set-Cookie` headers are parsed in order, but may not be so great for distributed systems. Sophisticated `Store`s may wish to set this to some other _logical clock_ such that if cookies A and B are created in the same millisecond, but cookie A is created before cookie B, then `A.creationIndex < B.creationIndex`. If you want to alter the global counter, which you probably _shouldn't_ do, it's stored in `Cookie.cookiesCreated`. ### `permuteDomain(domain)` Generates a list of all possible domains that `domainMatch()` the parameter. May be handy for implementing cookie stores. ### `permutePath(path)` Generates a list of all possible paths that `pathMatch()` the parameter. May be handy for implementing cookie stores. ## Cookie Exported via `tough.Cookie`. ### `Cookie.parse(cookieString[, options])` Parses a single Cookie or Set-Cookie HTTP header into a `Cookie` object. Returns `undefined` if the string can't be parsed. The options parameter is not required and currently has only one property: * _loose_ - boolean - if `true` enable parsing of key-less cookies like `=abc` and `=`, which are not RFC-compliant. If options is not an object, it is ignored, which means you can use `Array#map` with it. 
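A small sketch of the `loose` option mentioned above (the cookie strings are illustrative):

``` javascript
var Cookie = require('tough-cookie').Cookie;

Cookie.parse('=abc');                  // undefined -- key-less cookies are rejected by default
Cookie.parse('=abc', { loose: true }); // Cookie with an empty key and value 'abc'
Cookie.parse('foo=bar; Path=/; Secure').secure; // true
```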
Here's how to process the Set-Cookie header(s) on a node HTTP/HTTPS response: ``` javascript if (res.headers['set-cookie'] instanceof Array) cookies = res.headers['set-cookie'].map(Cookie.parse); else cookies = [Cookie.parse(res.headers['set-cookie'])]; ``` _Note:_ in version 2.3.3, tough-cookie limited the number of spaces before the `=` to 256 characters. This limitation has since been removed. See [Issue 92](https://github.com/salesforce/tough-cookie/issues/92) ### Properties Cookie object properties: * _key_ - string - the name or key of the cookie (default "") * _value_ - string - the value of the cookie (default "") * _expires_ - `Date` - if set, the `Expires=` attribute of the cookie (defaults to the string `"Infinity"`). See `setExpires()` * _maxAge_ - seconds - if set, the `Max-Age=` attribute _in seconds_ of the cookie. May also be set to strings `"Infinity"` and `"-Infinity"` for non-expiry and immediate-expiry, respectively. See `setMaxAge()` * _domain_ - string - the `Domain=` attribute of the cookie * _path_ - string - the `Path=` of the cookie * _secure_ - boolean - the `Secure` cookie flag * _httpOnly_ - boolean - the `HttpOnly` cookie flag * _extensions_ - `Array` - any unrecognized cookie attributes as strings (even if equal-signs inside) * _creation_ - `Date` - when this cookie was constructed * _creationIndex_ - number - set at construction, used to provide greater sort precision (please see `cookieCompare(a,b)` for a full explanation) After a cookie has been passed through `CookieJar.setCookie()` it will have the following additional attributes: * _hostOnly_ - boolean - is this a host-only cookie (i.e. no Domain field was set, but was instead implied) * _pathIsDefault_ - boolean - if true, there was no Path field on the cookie and `defaultPath()` was used to derive one. * _creation_ - `Date` - **modified** from construction to when the cookie was added to the jar * _lastAccessed_ - `Date` - last time the cookie got accessed. Will affect cookie cleaning once implemented. Using `cookiejar.getCookies(...)` will update this attribute. ### `Cookie([{properties}])` Receives an options object that can contain any of the above Cookie properties, uses the default for unspecified properties. ### `.toString()` encode to a Set-Cookie header value. The Expires cookie field is set using `formatDate()`, but is omitted entirely if `.expires` is `Infinity`. ### `.cookieString()` encode to a Cookie header value (i.e. the `.key` and `.value` properties joined with '='). ### `.setExpires(String)` sets the expiry based on a date-string passed through `parseDate()`. If parseDate returns `null` (i.e. can't parse this date string), `.expires` is set to `"Infinity"` (a string) is set. ### `.setMaxAge(number)` sets the maxAge in seconds. Coerces `-Infinity` to `"-Infinity"` and `Infinity` to `"Infinity"` so it JSON serializes correctly. ### `.expiryTime([now=Date.now()])` ### `.expiryDate([now=Date.now()])` expiryTime() Computes the absolute unix-epoch milliseconds that this cookie expires. expiryDate() works similarly, except it returns a `Date` object. Note that in both cases the `now` parameter should be milliseconds. Max-Age takes precedence over Expires (as per the RFC). The `.creation` attribute -- or, by default, the `now` parameter -- is used to offset the `.maxAge` attribute. If Expires (`.expires`) is set, that's returned. 
Otherwise, `expiryTime()` returns `Infinity` and `expiryDate()` returns a `Date` object for "Tue, 19 Jan 2038 03:14:07 GMT" (latest date that can be expressed by a 32-bit `time_t`; the common limit for most user-agents). ### `.TTL([now=Date.now()])` compute the TTL relative to `now` (milliseconds). The same precedence rules as for `expiryTime`/`expiryDate` apply. The "number" `Infinity` is returned for cookies without an explicit expiry and `0` is returned if the cookie is expired. Otherwise a time-to-live in milliseconds is returned. ### `.canonicalizedDoman()` ### `.cdomain()` return the canonicalized `.domain` field. This is lower-cased and punycode (RFC3490) encoded if the domain has any non-ASCII characters. ### `.toJSON()` For convenience in using `JSON.serialize(cookie)`. Returns a plain-old `Object` that can be JSON-serialized. Any `Date` properties (i.e., `.expires`, `.creation`, and `.lastAccessed`) are exported in ISO format (`.toISOString()`). **NOTE**: Custom `Cookie` properties will be discarded. In tough-cookie 1.x, since there was no `.toJSON` method explicitly defined, all enumerable properties were captured. If you want a property to be serialized, add the property name to the `Cookie.serializableProperties` Array. ### `Cookie.fromJSON(strOrObj)` Does the reverse of `cookie.toJSON()`. If passed a string, will `JSON.parse()` that first. Any `Date` properties (i.e., `.expires`, `.creation`, and `.lastAccessed`) are parsed via `Date.parse()`, not the tough-cookie `parseDate`, since it's JavaScript/JSON-y timestamps being handled at this layer. Returns `null` upon JSON parsing error. ### `.clone()` Does a deep clone of this cookie, exactly implemented as `Cookie.fromJSON(cookie.toJSON())`. ### `.validate()` Status: *IN PROGRESS*. Works for a few things, but is by no means comprehensive. validates cookie attributes for semantic correctness. Useful for "lint" checking any Set-Cookie headers you generate. For now, it returns a boolean, but eventually could return a reason string -- you can future-proof with this construct: ``` javascript if (cookie.validate() === true) { // it's tasty } else { // yuck! } ``` ## CookieJar Exported via `tough.CookieJar`. ### `CookieJar([store],[options])` Simply use `new CookieJar()`. If you'd like to use a custom store, pass that to the constructor otherwise a `MemoryCookieStore` will be created and used. The `options` object can be omitted and can have the following properties: * _rejectPublicSuffixes_ - boolean - default `true` - reject cookies with domains like "com" and "co.uk" * _looseMode_ - boolean - default `false` - accept malformed cookies like `bar` and `=bar`, which have an implied empty name. This is not in the standard, but is used sometimes on the web and is accepted by (most) browsers. Since eventually this module would like to support database/remote/etc. CookieJars, continuation passing style is used for CookieJar methods. ### `.setCookie(cookieOrString, currentUrl, [{options},] cb(err,cookie))` Attempt to set the cookie in the cookie jar. If the operation fails, an error will be given to the callback `cb`, otherwise the cookie is passed through. The cookie will have updated `.creation`, `.lastAccessed` and `.hostOnly` properties. The `options` object can be omitted and can have the following properties: * _http_ - boolean - default `true` - indicates if this is an HTTP or non-HTTP API. Affects HttpOnly cookies. * _secure_ - boolean - autodetect from url - indicates if this is a "Secure" API. 
If the currentUrl starts with `https:` or `wss:` then this is defaulted to `true`, otherwise `false`. * _now_ - Date - default `new Date()` - what to use for the creation/access time of cookies * _ignoreError_ - boolean - default `false` - silently ignore things like parse errors and invalid domains. `Store` errors aren't ignored by this option. As per the RFC, the `.hostOnly` property is set if there was no "Domain=" parameter in the cookie string (or `.domain` was null on the Cookie object). The `.domain` property is set to the fully-qualified hostname of `currentUrl` in this case. Matching this cookie requires an exact hostname match (not a `domainMatch` as per usual). ### `.setCookieSync(cookieOrString, currentUrl, [{options}])` Synchronous version of `setCookie`; only works with synchronous stores (e.g. the default `MemoryCookieStore`). ### `.getCookies(currentUrl, [{options},] cb(err,cookies))` Retrieve the list of cookies that can be sent in a Cookie header for the current url. If an error is encountered, that's passed as `err` to the callback, otherwise an `Array` of `Cookie` objects is passed. The array is sorted with `cookieCompare()` unless the `{sort:false}` option is given. The `options` object can be omitted and can have the following properties: * _http_ - boolean - default `true` - indicates if this is an HTTP or non-HTTP API. Affects HttpOnly cookies. * _secure_ - boolean - autodetect from url - indicates if this is a "Secure" API. If the currentUrl starts with `https:` or `wss:` then this is defaulted to `true`, otherwise `false`. * _now_ - Date - default `new Date()` - what to use for the creation/access time of cookies * _expire_ - boolean - default `true` - perform expiry-time checking of cookies and asynchronously remove expired cookies from the store. Using `false` will return expired cookies and **not** remove them from the store (which is useful for replaying Set-Cookie headers, potentially). * _allPaths_ - boolean - default `false` - if `true`, do not scope cookies by path. The default uses RFC-compliant path scoping. **Note**: may not be supported by the underlying store (the default `MemoryCookieStore` supports it). The `.lastAccessed` property of the returned cookies will have been updated. ### `.getCookiesSync(currentUrl, [{options}])` Synchronous version of `getCookies`; only works with synchronous stores (e.g. the default `MemoryCookieStore`). ### `.getCookieString(...)` Accepts the same options as `.getCookies()` but passes a string suitable for a Cookie header rather than an array to the callback. Simply maps the `Cookie` array via `.cookieString()`. ### `.getCookieStringSync(...)` Synchronous version of `getCookieString`; only works with synchronous stores (e.g. the default `MemoryCookieStore`). ### `.getSetCookieStrings(...)` Returns an array of strings suitable for **Set-Cookie** headers. Accepts the same options as `.getCookies()`. Simply maps the cookie array via `.toString()`. ### `.getSetCookieStringsSync(...)` Synchronous version of `getSetCookieStrings`; only works with synchronous stores (e.g. the default `MemoryCookieStore`). ### `.serialize(cb(err,serializedObject))` Serialize the Jar if the underlying store supports `.getAllCookies`. **NOTE**: Custom `Cookie` properties will be discarded. If you want a property to be serialized, add the property name to the `Cookie.serializableProperties` Array. See [Serialization Format]. 
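A minimal round-trip sketch (the cookie string and URL are illustrative):

``` javascript
var tough = require('tough-cookie');
var jar = new tough.CookieJar();

jar.setCookie('foo=bar; Path=/', 'http://example.com/', function (err) {
  if (err) throw err;
  jar.serialize(function (err, serialized) {
    if (err) throw err;
    // `serialized` is a plain object in the Serialization Format described below
    var restored = tough.CookieJar.deserializeSync(serialized);
    restored.getCookieStringSync('http://example.com/'); // 'foo=bar'
  });
});
```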
### `.serializeSync()`

Sync version of `.serialize`.

### `.toJSON()`

Alias of `.serializeSync()` for the convenience of `JSON.stringify(cookiejar)`.

### `CookieJar.deserialize(serialized, [store], cb(err,object))`

A new Jar is created and the serialized Cookies are added to the underlying store. Each `Cookie` is added via `store.putCookie` in the order in which they appear in the serialization.

The `store` argument is optional, but should be an instance of `Store`. By default, a new instance of `MemoryCookieStore` is created.

As a convenience, if `serialized` is a string, it is passed through `JSON.parse` first. If that throws an error, this is passed to the callback.

### `CookieJar.deserializeSync(serialized, [store])`

Sync version of `.deserialize`. _Note_ that the `store` must be synchronous for this to work.

### `CookieJar.fromJSON(string)`

Alias of `.deserializeSync` to provide consistency with `Cookie.fromJSON()`.

### `.clone([store,]cb(err,newJar))`

Produces a deep clone of this jar. Modifications to the original won't affect the clone, and vice versa.

The `store` argument is optional, but should be an instance of `Store`. By default, a new instance of `MemoryCookieStore` is created. Transferring between store types is supported so long as the source implements `.getAllCookies()` and the destination implements `.putCookie()`.

### `.cloneSync([store])`

Synchronous version of `.clone`, returning a new `CookieJar` instance.

The `store` argument is optional, but must be a _synchronous_ `Store` instance if specified. If not passed, a new instance of `MemoryCookieStore` is used.

The _source_ and _destination_ must both be synchronous `Store`s. If one or both stores are asynchronous, use `.clone` instead. Recall that `MemoryCookieStore` supports both synchronous and asynchronous API calls.

## Store

Base class for CookieJar stores. Available as `tough.Store`.

## Store API

The storage model for each `CookieJar` instance can be replaced with a custom implementation. The default is `MemoryCookieStore` which can be found in the `lib/memstore.js` file.

The API uses continuation-passing-style to allow for asynchronous stores. Stores should inherit from the base `Store` class, which is available as `require('tough-cookie').Store`.

Stores are asynchronous by default, but if `store.synchronous` is set to `true`, then the `*Sync` methods of the containing `CookieJar` can be used (the store itself is still called in continuation-passing style).

All `domain` parameters will have been normalized before calling.

The Cookie store must have all of the following methods.

### `store.findCookie(domain, path, key, cb(err,cookie))`

Retrieve a cookie with the given domain, path and key (a.k.a. name). The RFC maintains that exactly one of these cookies should exist in a store. If the store is using versioning, this means that the latest/newest such cookie should be returned.

Callback takes an error and the resulting `Cookie` object. If no cookie is found then `null` MUST be passed instead (i.e. not an error).

### `store.findCookies(domain, path, cb(err,cookies))`

Locates cookies matching the given domain and path. This is most often called in the context of `cookiejar.getCookies()` above.

If no cookies are found, the callback MUST be passed an empty array.

The resulting list will be checked for applicability to the current request according to the RFC (domain-match, path-match, http-only-flag, secure-flag, expiry, etc.), so it's OK to use an optimistic search algorithm when implementing this method.
However, the search algorithm used SHOULD try to find cookies that `domainMatch()` the domain and `pathMatch()` the path in order to limit the amount of checking that needs to be done. As of version 0.9.12, the `allPaths` option to `cookiejar.getCookies()` above will cause the path here to be `null`. If the path is `null`, path-matching MUST NOT be performed (i.e. domain-matching only). ### `store.putCookie(cookie, cb(err))` Adds a new cookie to the store. The implementation SHOULD replace any existing cookie with the same `.domain`, `.path`, and `.key` properties -- depending on the nature of the implementation, it's possible that between the call to `fetchCookie` and `putCookie` that a duplicate `putCookie` can occur. The `cookie` object MUST NOT be modified; the caller will have already updated the `.creation` and `.lastAccessed` properties. Pass an error if the cookie cannot be stored. ### `store.updateCookie(oldCookie, newCookie, cb(err))` Update an existing cookie. The implementation MUST update the `.value` for a cookie with the same `domain`, `.path` and `.key`. The implementation SHOULD check that the old value in the store is equivalent to `oldCookie` - how the conflict is resolved is up to the store. The `.lastAccessed` property will always be different between the two objects (to the precision possible via JavaScript's clock). Both `.creation` and `.creationIndex` are guaranteed to be the same. Stores MAY ignore or defer the `.lastAccessed` change at the cost of affecting how cookies are selected for automatic deletion (e.g., least-recently-used, which is up to the store to implement). Stores may wish to optimize changing the `.value` of the cookie in the store versus storing a new cookie. If the implementation doesn't define this method a stub that calls `putCookie(newCookie,cb)` will be added to the store object. The `newCookie` and `oldCookie` objects MUST NOT be modified. Pass an error if the newCookie cannot be stored. ### `store.removeCookie(domain, path, key, cb(err))` Remove a cookie from the store (see notes on `findCookie` about the uniqueness constraint). The implementation MUST NOT pass an error if the cookie doesn't exist; only pass an error due to the failure to remove an existing cookie. ### `store.removeCookies(domain, path, cb(err))` Removes matching cookies from the store. The `path` parameter is optional, and if missing means all paths in a domain should be removed. Pass an error ONLY if removing any existing cookies failed. ### `store.getAllCookies(cb(err, cookies))` Produces an `Array` of all cookies during `jar.serialize()`. The items in the array can be true `Cookie` objects or generic `Object`s with the [Serialization Format] data structure. Cookies SHOULD be returned in creation order to preserve sorting via `compareCookies()`. For reference, `MemoryCookieStore` will sort by `.creationIndex` since it uses true `Cookie` objects internally. If you don't return the cookies in creation order, they'll still be sorted by creation time, but this only has a precision of 1ms. See `compareCookies` for more detail. Pass an error if retrieval fails. ## MemoryCookieStore Inherits from `Store`. A just-in-memory CookieJar synchronous store implementation, used by default. Despite being a synchronous implementation, it's usable with both the synchronous and asynchronous forms of the `CookieJar` API. ## Community Cookie Stores These are some Store implementations authored and maintained by the community. 
They aren't official and we don't vouch for them but you may be interested to have a look: - [`db-cookie-store`](https://github.com/JSBizon/db-cookie-store): SQL including SQLite-based databases - [`file-cookie-store`](https://github.com/JSBizon/file-cookie-store): Netscape cookie file format on disk - [`redis-cookie-store`](https://github.com/benkroeger/redis-cookie-store): Redis - [`tough-cookie-filestore`](https://github.com/mitsuru/tough-cookie-filestore): JSON on disk - [`tough-cookie-web-storage-store`](https://github.com/exponentjs/tough-cookie-web-storage-store): DOM localStorage and sessionStorage # Serialization Format **NOTE**: if you want to have custom `Cookie` properties serialized, add the property name to `Cookie.serializableProperties`. ```js { // The version of tough-cookie that serialized this jar. version: 'tough-cookie@1.x.y', // add the store type, to make humans happy: storeType: 'MemoryCookieStore', // CookieJar configuration: rejectPublicSuffixes: true, // ... future items go here // Gets filled from jar.store.getAllCookies(): cookies: [ { key: 'string', value: 'string', // ... /* other Cookie.serializableProperties go here */ } ] } ``` # Copyright and License (tl;dr: BSD-3-Clause with some MPL/2.0) ```text Copyright (c) 2015, Salesforce.com, Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Salesforce.com nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` # through2 [![NPM](https://nodei.co/npm/through2.png?downloads&downloadRank)](https://nodei.co/npm/through2/) **A tiny wrapper around Node streams.Transform (Streams2) to avoid explicit subclassing noise** Inspired by [Dominic Tarr](https://github.com/dominictarr)'s [through](https://github.com/dominictarr/through) in that it's so much easier to make a stream out of a function than it is to set up the prototype chain properly: `through(function (chunk) { ... })`. Note: As 2.x.x this module starts using **Streams3** instead of Stream2. To continue using a Streams2 version use `npm install through2@0` to fetch the latest version of 0.x.x. 
More information about Streams2 vs Streams3 and recommendations see the article **[Why I don't use Node's core 'stream' module](http://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html)**. ```js fs.createReadStream('ex.txt') .pipe(through2(function (chunk, enc, callback) { for (var i = 0; i < chunk.length; i++) if (chunk[i] == 97) chunk[i] = 122 // swap 'a' for 'z' this.push(chunk) callback() })) .pipe(fs.createWriteStream('out.txt')) .on('finish', function () { doSomethingSpecial() }) ``` Or object streams: ```js var all = [] fs.createReadStream('data.csv') .pipe(csv2()) .pipe(through2.obj(function (chunk, enc, callback) { var data = { name : chunk[0] , address : chunk[3] , phone : chunk[10] } this.push(data) callback() })) .on('data', function (data) { all.push(data) }) .on('end', function () { doSomethingSpecial(all) }) ``` Note that `through2.obj(fn)` is a convenience wrapper around `through2({ objectMode: true }, fn)`. ## API <b><code>through2([ options, ] [ transformFunction ] [, flushFunction ])</code></b> Consult the **[stream.Transform](http://nodejs.org/docs/latest/api/stream.html#stream_class_stream_transform)** documentation for the exact rules of the `transformFunction` (i.e. `this._transform`) and the optional `flushFunction` (i.e. `this._flush`). ### options The options argument is optional and is passed straight through to `stream.Transform`. So you can use `objectMode:true` if you are processing non-binary streams (or just use `through2.obj()`). The `options` argument is first, unlike standard convention, because if I'm passing in an anonymous function then I'd prefer for the options argument to not get lost at the end of the call: ```js fs.createReadStream('/tmp/important.dat') .pipe(through2({ objectMode: true, allowHalfOpen: false }, function (chunk, enc, cb) { cb(null, 'wut?') // note we can use the second argument on the callback // to provide data as an alternative to this.push('wut?') } ) .pipe(fs.createWriteStream('/tmp/wut.txt')) ``` ### transformFunction The `transformFunction` must have the following signature: `function (chunk, encoding, callback) {}`. A minimal implementation should call the `callback` function to indicate that the transformation is done, even if that transformation means discarding the chunk. To queue a new chunk, call `this.push(chunk)`&mdash;this can be called as many times as required before the `callback()` if you have multiple pieces to send on. Alternatively, you may use `callback(err, chunk)` as shorthand for emitting a single chunk or an error. If you **do not provide a `transformFunction`** then you will get a simple pass-through stream. ### flushFunction The optional `flushFunction` is provided as the last argument (2nd or 3rd, depending on whether you've supplied options) is called just prior to the stream ending. Can be used to finish up any processing that may be in progress. ```js fs.createReadStream('/tmp/important.dat') .pipe(through2( function (chunk, enc, cb) { cb(null, chunk) }, // transform is a noop function (cb) { // flush function this.push('tacking on an extra buffer to the end'); cb(); } )) .pipe(fs.createWriteStream('/tmp/wut.txt')); ``` <b><code>through2.ctor([ options, ] transformFunction[, flushFunction ])</code></b> Instead of returning a `stream.Transform` instance, `through2.ctor()` returns a **constructor** for a custom Transform. This is useful when you want to use the same transform logic in multiple instances. 
```js var FToC = through2.ctor({objectMode: true}, function (record, encoding, callback) { if (record.temp != null && record.unit == "F") { record.temp = ( ( record.temp - 32 ) * 5 ) / 9 record.unit = "C" } this.push(record) callback() }) // Create instances of FToC like so: var converter = new FToC() // Or: var converter = FToC() // Or specify/override options when you instantiate, if you prefer: var converter = FToC({objectMode: true}) ``` ## See Also - [through2-map](https://github.com/brycebaril/through2-map) - Array.prototype.map analog for streams. - [through2-filter](https://github.com/brycebaril/through2-filter) - Array.prototype.filter analog for streams. - [through2-reduce](https://github.com/brycebaril/through2-reduce) - Array.prototype.reduce analog for streams. - [through2-spy](https://github.com/brycebaril/through2-spy) - Wrapper for simple stream.PassThrough spies. - the [mississippi stream utility collection](https://github.com/maxogden/mississippi) includes `through2` as well as many more useful stream modules similar to this one ## License **through2** is Copyright (c) 2013 Rod Vagg [@rvagg](https://twitter.com/rvagg) and licensed under the MIT license. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE file for more details. # cmd-shim The cmd-shim used in npm to create executable scripts on Windows, since symlinks are not suitable for this purpose there. On Unix systems, you should use a symbolic link instead. [![Build Status](https://img.shields.io/travis/npm/cmd-shim/master.svg)](https://travis-ci.org/npm/cmd-shim) [![Dependency Status](https://img.shields.io/david/npm/cmd-shim.svg)](https://david-dm.org/npm/cmd-shim) [![NPM version](https://img.shields.io/npm/v/cmd-shim.svg)](https://www.npmjs.com/package/cmd-shim) ## Installation ``` npm install cmd-shim ``` ## API ### cmdShim(from, to, cb) Create a cmd shim at `to` for the command line program at `from`. e.g. ```javascript var cmdShim = require('cmd-shim'); cmdShim(__dirname + '/cli.js', '/usr/bin/command-name', function (err) { if (err) throw err; }); ``` ### cmdShim.ifExists(from, to, cb) The same as above, but will just continue if the file does not exist. Source: ```javascript function cmdShimIfExists (from, to, cb) { fs.stat(from, function (er) { if (er) return cb() cmdShim(from, to, cb) }) } ``` #object.getownpropertydescriptors <sup>[![Version Badge][npm-version-svg]][package-url]</sup> [![Build Status][travis-svg]][travis-url] [![dependency status][deps-svg]][deps-url] [![dev dependency status][dev-deps-svg]][dev-deps-url] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][npm-badge-png]][package-url] [![browser support][testling-svg]][testling-url] An ES2017 spec-compliant shim for `Object.getOwnPropertyDescriptors` that works in ES5. Invoke its "shim" method to shim `Object.getOwnPropertyDescriptors` if it is unavailable, and if `Object.getOwnPropertyDescriptor` is available. This package implements the [es-shim API](https://github.com/es-shims/api) interface. It works in an ES3-supported environment and complies with the [spec](https://github.com/tc39/ecma262/pull/582). 
## Example ```js var getDescriptors = require('object.getownpropertydescriptors'); var assert = require('assert'); var obj = { normal: Infinity }; var enumDescriptor = { enumerable: false, writable: false, configurable: true, value: true }; var writableDescriptor = { enumerable: true, writable: true, configurable: true, value: 42 }; var symbol = Symbol(); var symDescriptor = { enumerable: true, writable: true, configurable: false, value: [symbol] }; Object.defineProperty(obj, 'enumerable', enumDescriptor); Object.defineProperty(obj, 'writable', writableDescriptor); Object.defineProperty(obj, 'symbol', symDescriptor); var descriptors = getDescriptors(obj); assert.deepEqual(descriptors, { normal: { enumerable: true, writable: true, configurable: true, value: Infinity }, enumerable: enumDescriptor, writable: writableDescriptor, symbol: symDescriptor }); ``` ```js var getDescriptors = require('object.getownpropertydescriptors'); var assert = require('assert'); /* when Object.getOwnPropertyDescriptors is not present */ delete Object.getOwnPropertyDescriptors; var shimmedDescriptors = getDescriptors.shim(); assert.equal(shimmedDescriptors, getDescriptors); assert.deepEqual(shimmedDescriptors(obj), getDescriptors(obj)); ``` ```js var getDescriptors = require('object.getownpropertydescriptors'); var assert = require('assert'); /* when Object.getOwnPropertyDescriptors is present */ var shimmedDescriptors = getDescriptors.shim(); assert.notEqual(shimmedDescriptors, getDescriptors); assert.deepEqual(shimmedDescriptors(obj), getDescriptors(obj)); ``` ## Tests Simply clone the repo, `npm install`, and run `npm test` [package-url]: https://npmjs.org/package/object.getownpropertydescriptors [npm-version-svg]: http://versionbadg.es/ljharb/object.getownpropertydescriptors.svg [travis-svg]: https://travis-ci.org/ljharb/object.getownpropertydescriptors.svg [travis-url]: https://travis-ci.org/ljharb/object.getownpropertydescriptors [deps-svg]: https://david-dm.org/ljharb/object.getownpropertydescriptors.svg [deps-url]: https://david-dm.org/ljharb/object.getownpropertydescriptors [dev-deps-svg]: https://david-dm.org/ljharb/object.getownpropertydescriptors/dev-status.svg [dev-deps-url]: https://david-dm.org/ljharb/object.getownpropertydescriptors#info=devDependencies [testling-svg]: https://ci.testling.com/ljharb/object.getownpropertydescriptors.png [testling-url]: https://ci.testling.com/ljharb/object.getownpropertydescriptors [npm-badge-png]: https://nodei.co/npm/object.getownpropertydescriptors.png?downloads=true&stars=true [license-image]: http://img.shields.io/npm/l/object.getownpropertydescriptors.svg [license-url]: LICENSE [downloads-image]: http://img.shields.io/npm/dm/object.getownpropertydescriptors.svg [downloads-url]: http://npm-stat.com/charts.html?package=object.getownpropertydescriptors # pseudomap A thing that is a lot like ES6 `Map`, but without iterators, for use in environments where `for..of` syntax and `Map` are not available. If you need iterators, or just in general a more faithful polyfill to ES6 Maps, check out [es6-map](http://npm.im/es6-map). If you are in an environment where `Map` is supported, then that will be returned instead, unless `process.env.TEST_PSEUDOMAP` is set. You can use any value as keys, and any value as data. Setting again with the identical key will overwrite the previous value. Internally, data is stored on an `Object.create(null)` style object. The key is coerced to a string to generate the key on the internal data-bag object. 
The original key used is stored along with the data. In the event of a stringified-key collision, a new key is generated by appending an increasing number to the stringified-key until finding either the intended key or an empty spot. Note that because object traversal order of plain objects is not guaranteed to be identical to insertion order, the insertion order guarantee of `Map.prototype.forEach` is not guaranteed in this implementation. However, in all versions of Node.js and V8 where this module works, `forEach` does traverse data in insertion order. ## API Most of the [Map API](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map), with the following exceptions: 1. A `Map` object is not an iterator. 2. `values`, `keys`, and `entries` methods are not implemented, because they return iterators. 3. The argument to the constructor can be an Array of `[key, value]` pairs, or a `Map` or `PseudoMap` object. But, since iterators aren't used, passing any plain-old iterator won't initialize the map properly. ## USAGE Use just like a regular ES6 Map. ```javascript var PseudoMap = require('pseudomap') // optionally provide a pseudomap, or an array of [key,value] pairs // as the argument to initialize the map with var myMap = new PseudoMap() myMap.set(1, 'number 1') myMap.set('1', 'string 1') var akey = {} var bkey = {} myMap.set(akey, { some: 'data' }) myMap.set(bkey, { some: 'other data' }) ``` # lock-verify Report if your package.json is out of sync with your package-lock.json. ## USAGE ``` const lockVerify = require('lock-verify') lockVerify(moduleDir).then(result => { result.warnings.forEach(w => console.error('Warning:', w)) if (!result.status) { result.errors.forEach(e => console.error(e)) process.exit(1) } }) ``` As a library it's a function that takes the path to a module and returns a promise that resolves to an object with `.status`, `.warnings` and `.errors` properties. The first will be true if everything was ok (though warnings may exist). If there's no `package.json` or no lockfile in `moduleDir` or they're unreadable then the promise will be rejected. # yallist Yet Another Linked List There are many doubly-linked list implementations like it, but this one is mine. For when an array would be too big, and a Map can't be iterated in reverse order. [![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist) ## basic usage ```javascript var yallist = require('yallist') var myList = yallist.create([1, 2, 3]) myList.push('foo') myList.unshift('bar') // of course pop() and shift() are there, too console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo'] myList.forEach(function (k) { // walk the list head to tail }) myList.forEachReverse(function (k, index, list) { // walk the list tail to head }) var myDoubledList = myList.map(function (k) { return k + k }) // now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo'] // mapReverse is also a thing var myDoubledListReverse = myList.mapReverse(function (k) { return k + k }) // ['foofoo', 6, 4, 2, 'barbar'] var reduced = myList.reduce(function (set, entry) { set += entry return set }, 'start') console.log(reduced) // 'startfoo123bar' ``` ## api The whole API is considered "public". Functions with the same name as an Array method work more or less the same way. There's reverse versions of most things because that's the point. 
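Before the reference below, a few of the Array-style calls in action (values are illustrative):

```javascript
var Yallist = require('yallist')

var list = Yallist.create([1, 2, 3, 4])

list.get(2)                        // 3
list.slice(1, 3).toArray()         // [2, 3]
list.sliceReverse(1, 3).toArray()  // [3, 2]

list.reverse()                     // reverses in place
list.toArray()                     // [4, 3, 2, 1]
```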
### Yallist Default export, the class that holds and manages a list. Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list. The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots. ### Yallist.create(..) Alias for Yallist function. Some people like factories. #### yallist.head The first node in the list #### yallist.tail The last node in the list #### yallist.length The number of nodes in the list. (Change this at your peril. It is not magic like Array length.) #### yallist.toArray() Convert the list to an array. #### yallist.forEach(fn, [thisp]) Call a function on each item in the list. #### yallist.forEachReverse(fn, [thisp]) Call a function on each item in the list, in reverse order. #### yallist.get(n) Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array. #### yallist.getReverse(n) Get the data at position `n`, counting from the tail. #### yallist.map(fn, thisp) Create a new Yallist with the result of calling the function on each item. #### yallist.mapReverse(fn, thisp) Same as `map`, but in reverse. #### yallist.pop() Get the data from the list tail, and remove the tail from the list. #### yallist.push(item, ...) Insert one or more items to the tail of the list. #### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. #### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. #### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) # jsbn: javascript big number [Tom Wu's Original Website](http://www-cs-students.stanford.edu/~tjw/jsbn/) I felt compelled to put this on github and publish to npm. I haven't tested every other big integer library out there, but the few that I have tested in comparison to this one have not even come close in performance. 
I am aware of the `bi` module on npm, however it has been modified and I wanted to publish the original without modifications. This is jsbn and jsbn2 from Tom Wu's original website above, with the modular pattern applied to prevent global leaks and to allow for use with node.js on the server side. ## usage var BigInteger = require('jsbn'); var a = new BigInteger('91823918239182398123'); alert(a.bitLength()); // 67 ## API ### bi.toString() returns the base-10 number as a string ### bi.negate() returns a new BigInteger equal to the negation of `bi` ### bi.abs returns new BI of absolute value ### bi.compareTo ### bi.bitLength ### bi.mod ### bi.modPowInt ### bi.clone ### bi.intValue ### bi.byteValue ### bi.shortValue ### bi.signum ### bi.toByteArray ### bi.equals ### bi.min ### bi.max ### bi.and ### bi.or ### bi.xor ### bi.andNot ### bi.not ### bi.shiftLeft ### bi.shiftRight ### bi.getLowestSetBit ### bi.bitCount ### bi.testBit ### bi.setBit ### bi.clearBit ### bi.flipBit ### bi.add ### bi.subtract ### bi.multiply ### bi.divide ### bi.remainder ### bi.divideAndRemainder ### bi.modPow ### bi.modInverse ### bi.pow ### bi.gcd ### bi.isProbablePrime # Request - Simplified HTTP client [![npm package](https://nodei.co/npm/request.png?downloads=true&downloadRank=true&stars=true)](https://nodei.co/npm/request/) [![Build status](https://img.shields.io/travis/request/request/master.svg?style=flat-square)](https://travis-ci.org/request/request) [![Coverage](https://img.shields.io/codecov/c/github/request/request.svg?style=flat-square)](https://codecov.io/github/request/request?branch=master) [![Coverage](https://img.shields.io/coveralls/request/request.svg?style=flat-square)](https://coveralls.io/r/request/request) [![Dependency Status](https://img.shields.io/david/request/request.svg?style=flat-square)](https://david-dm.org/request/request) [![Known Vulnerabilities](https://snyk.io/test/npm/request/badge.svg?style=flat-square)](https://snyk.io/test/npm/request) [![Gitter](https://img.shields.io/badge/gitter-join_chat-blue.svg?style=flat-square)](https://gitter.im/request/request?utm_source=badge) ## Super simple to use Request is designed to be the simplest way possible to make http calls. It supports HTTPS and follows redirects by default. ```js var request = require('request'); request('http://www.google.com', function (error, response, body) { console.log('error:', error); // Print the error if one occurred console.log('statusCode:', response && response.statusCode); // Print the response status code if a response was received console.log('body:', body); // Print the HTML for the Google homepage. }); ``` ## Table of contents - [Streaming](#streaming) - [Promises & Async/Await](#promises--asyncawait) - [Forms](#forms) - [HTTP Authentication](#http-authentication) - [Custom HTTP Headers](#custom-http-headers) - [OAuth Signing](#oauth-signing) - [Proxies](#proxies) - [Unix Domain Sockets](#unix-domain-sockets) - [TLS/SSL Protocol](#tlsssl-protocol) - [Support for HAR 1.2](#support-for-har-12) - [**All Available Options**](#requestoptions-callback) Request also offers [convenience methods](#convenience-methods) like `request.defaults` and `request.post`, and there are lots of [usage examples](#examples) and several [debugging techniques](#debugging). --- ## Streaming You can stream any response to a file stream. ```js request('http://google.com/doodle.png').pipe(fs.createWriteStream('doodle.png')) ``` You can also stream a file to a PUT or POST request. 
This method will also check the file extension against a mapping of file extensions to content-types (in this case `application/json`) and use the proper `content-type` in the PUT request (if the headers don’t already provide one). ```js fs.createReadStream('file.json').pipe(request.put('http://mysite.com/obj.json')) ``` Request can also `pipe` to itself. When doing so, `content-type` and `content-length` are preserved in the PUT headers. ```js request.get('http://google.com/img.png').pipe(request.put('http://mysite.com/img.png')) ``` Request emits a "response" event when a response is received. The `response` argument will be an instance of [http.IncomingMessage](https://nodejs.org/api/http.html#http_class_http_incomingmessage). ```js request .get('http://google.com/img.png') .on('response', function(response) { console.log(response.statusCode) // 200 console.log(response.headers['content-type']) // 'image/png' }) .pipe(request.put('http://mysite.com/img.png')) ``` To easily handle errors when streaming requests, listen to the `error` event before piping: ```js request .get('http://mysite.com/doodle.png') .on('error', function(err) { console.log(err) }) .pipe(fs.createWriteStream('doodle.png')) ``` Now let’s get fancy. ```js http.createServer(function (req, resp) { if (req.url === '/doodle.png') { if (req.method === 'PUT') { req.pipe(request.put('http://mysite.com/doodle.png')) } else if (req.method === 'GET' || req.method === 'HEAD') { request.get('http://mysite.com/doodle.png').pipe(resp) } } }) ``` You can also `pipe()` from `http.ServerRequest` instances, as well as to `http.ServerResponse` instances. The HTTP method, headers, and entity-body data will be sent. Which means that, if you don't really care about security, you can do: ```js http.createServer(function (req, resp) { if (req.url === '/doodle.png') { var x = request('http://mysite.com/doodle.png') req.pipe(x) x.pipe(resp) } }) ``` And since `pipe()` returns the destination stream in ≥ Node 0.5.x you can do one line proxying. :) ```js req.pipe(request('http://mysite.com/doodle.png')).pipe(resp) ``` Also, none of this new functionality conflicts with requests previous features, it just expands them. ```js var r = request.defaults({'proxy':'http://localproxy.com'}) http.createServer(function (req, resp) { if (req.url === '/doodle.png') { r.get('http://google.com/doodle.png').pipe(resp) } }) ``` You can still use intermediate proxies, the requests will still follow HTTP forwards, etc. [back to top](#table-of-contents) --- ## Promises & Async/Await `request` supports both streaming and callback interfaces natively. If you'd like `request` to return a Promise instead, you can use an alternative interface wrapper for `request`. These wrappers can be useful if you prefer to work with Promises, or if you'd like to use `async`/`await` in ES2017. Several alternative interfaces are provided by the request team, including: - [`request-promise`](https://github.com/request/request-promise) (uses [Bluebird](https://github.com/petkaantonov/bluebird) Promises) - [`request-promise-native`](https://github.com/request/request-promise-native) (uses native Promises) - [`request-promise-any`](https://github.com/request/request-promise-any) (uses [any-promise](https://www.npmjs.com/package/any-promise) Promises) [back to top](#table-of-contents) --- ## Forms `request` supports `application/x-www-form-urlencoded` and `multipart/form-data` form uploads. For `multipart/related` refer to the `multipart` API. 
#### application/x-www-form-urlencoded (URL-Encoded Forms) URL-encoded forms are simple. ```js request.post('http://service.com/upload', {form:{key:'value'}}) // or request.post('http://service.com/upload').form({key:'value'}) // or request.post({url:'http://service.com/upload', form: {key:'value'}}, function(err,httpResponse,body){ /* ... */ }) ``` #### multipart/form-data (Multipart Form Uploads) For `multipart/form-data` we use the [form-data](https://github.com/form-data/form-data) library by [@felixge](https://github.com/felixge). For the most cases, you can pass your upload form data via the `formData` option. ```js var formData = { // Pass a simple key-value pair my_field: 'my_value', // Pass data via Buffers my_buffer: Buffer.from([1, 2, 3]), // Pass data via Streams my_file: fs.createReadStream(__dirname + '/unicycle.jpg'), // Pass multiple values /w an Array attachments: [ fs.createReadStream(__dirname + '/attachment1.jpg'), fs.createReadStream(__dirname + '/attachment2.jpg') ], // Pass optional meta-data with an 'options' object with style: {value: DATA, options: OPTIONS} // Use case: for some types of streams, you'll need to provide "file"-related information manually. // See the `form-data` README for more information about options: https://github.com/form-data/form-data custom_file: { value: fs.createReadStream('/dev/urandom'), options: { filename: 'topsecret.jpg', contentType: 'image/jpeg' } } }; request.post({url:'http://service.com/upload', formData: formData}, function optionalCallback(err, httpResponse, body) { if (err) { return console.error('upload failed:', err); } console.log('Upload successful! Server responded with:', body); }); ``` For advanced cases, you can access the form-data object itself via `r.form()`. This can be modified until the request is fired on the next cycle of the event-loop. (Note that this calling `form()` will clear the currently set form data for that request.) ```js // NOTE: Advanced use-case, for normal use see 'formData' usage above var r = request.post('http://service.com/upload', function optionalCallback(err, httpResponse, body) {...}) var form = r.form(); form.append('my_field', 'my_value'); form.append('my_buffer', Buffer.from([1, 2, 3])); form.append('custom_file', fs.createReadStream(__dirname + '/unicycle.jpg'), {filename: 'unicycle.jpg'}); ``` See the [form-data README](https://github.com/form-data/form-data) for more information & examples. #### multipart/related Some variations in different HTTP implementations require a newline/CRLF before, after, or both before and after the boundary of a `multipart/related` request (using the multipart option). This has been observed in the .NET WebAPI version 4.0. You can turn on a boundary preambleCRLF or postamble by passing them as `true` to your request options. 
```js request({ method: 'PUT', preambleCRLF: true, postambleCRLF: true, uri: 'http://service.com/upload', multipart: [ { 'content-type': 'application/json', body: JSON.stringify({foo: 'bar', _attachments: {'message.txt': {follows: true, length: 18, 'content_type': 'text/plain' }}}) }, { body: 'I am an attachment' }, { body: fs.createReadStream('image.png') } ], // alternatively pass an object containing additional options multipart: { chunked: false, data: [ { 'content-type': 'application/json', body: JSON.stringify({foo: 'bar', _attachments: {'message.txt': {follows: true, length: 18, 'content_type': 'text/plain' }}}) }, { body: 'I am an attachment' } ] } }, function (error, response, body) { if (error) { return console.error('upload failed:', error); } console.log('Upload successful! Server responded with:', body); }) ``` [back to top](#table-of-contents) --- ## HTTP Authentication ```js request.get('http://some.server.com/').auth('username', 'password', false); // or request.get('http://some.server.com/', { 'auth': { 'user': 'username', 'pass': 'password', 'sendImmediately': false } }); // or request.get('http://some.server.com/').auth(null, null, true, 'bearerToken'); // or request.get('http://some.server.com/', { 'auth': { 'bearer': 'bearerToken' } }); ``` If passed as an option, `auth` should be a hash containing values: - `user` || `username` - `pass` || `password` - `sendImmediately` (optional) - `bearer` (optional) The method form takes parameters `auth(username, password, sendImmediately, bearer)`. `sendImmediately` defaults to `true`, which causes a basic or bearer authentication header to be sent. If `sendImmediately` is `false`, then `request` will retry with a proper authentication header after receiving a `401` response from the server (which must contain a `WWW-Authenticate` header indicating the required authentication method). Note that you can also specify basic authentication using the URL itself, as detailed in [RFC 1738](http://www.ietf.org/rfc/rfc1738.txt). Simply pass the `user:password` before the host with an `@` sign: ```js var username = 'username', password = 'password', url = 'http://' + username + ':' + password + '@some.server.com'; request({url: url}, function (error, response, body) { // Do more stuff with 'body' here }); ``` Digest authentication is supported, but it only works with `sendImmediately` set to `false`; otherwise `request` will send basic authentication on the initial request, which will probably cause the request to fail. Bearer authentication is supported, and is activated when the `bearer` value is available. The value may be either a `String` or a `Function` returning a `String`. Using a function to supply the bearer token is particularly useful if used in conjunction with `defaults` to allow a single function to supply the last known token at the time of sending a request, or to compute one on the fly. [back to top](#table-of-contents) --- ## Custom HTTP Headers HTTP Headers, such as `User-Agent`, can be set in the `options` object. In the example below, we call the github API to find out the number of stars and forks for the request repository. This requires a custom `User-Agent` header as well as https. 
```js var request = require('request'); var options = { url: 'https://api.github.com/repos/request/request', headers: { 'User-Agent': 'request' } }; function callback(error, response, body) { if (!error && response.statusCode == 200) { var info = JSON.parse(body); console.log(info.stargazers_count + " Stars"); console.log(info.forks_count + " Forks"); } } request(options, callback); ``` [back to top](#table-of-contents) --- ## OAuth Signing [OAuth version 1.0](https://tools.ietf.org/html/rfc5849) is supported. The default signing algorithm is [HMAC-SHA1](https://tools.ietf.org/html/rfc5849#section-3.4.2): ```js // OAuth1.0 - 3-legged server side flow (Twitter example) // step 1 var qs = require('querystring') , oauth = { callback: 'http://mysite.com/callback/' , consumer_key: CONSUMER_KEY , consumer_secret: CONSUMER_SECRET } , url = 'https://api.twitter.com/oauth/request_token' ; request.post({url:url, oauth:oauth}, function (e, r, body) { // Ideally, you would take the body in the response // and construct a URL that a user clicks on (like a sign in button). // The verifier is only available in the response after a user has // verified with twitter that they are authorizing your app. // step 2 var req_data = qs.parse(body) var uri = 'https://api.twitter.com/oauth/authenticate' + '?' + qs.stringify({oauth_token: req_data.oauth_token}) // redirect the user to the authorize uri // step 3 // after the user is redirected back to your server var auth_data = qs.parse(body) , oauth = { consumer_key: CONSUMER_KEY , consumer_secret: CONSUMER_SECRET , token: auth_data.oauth_token , token_secret: req_data.oauth_token_secret , verifier: auth_data.oauth_verifier } , url = 'https://api.twitter.com/oauth/access_token' ; request.post({url:url, oauth:oauth}, function (e, r, body) { // ready to make signed requests on behalf of the user var perm_data = qs.parse(body) , oauth = { consumer_key: CONSUMER_KEY , consumer_secret: CONSUMER_SECRET , token: perm_data.oauth_token , token_secret: perm_data.oauth_token_secret } , url = 'https://api.twitter.com/1.1/users/show.json' , qs = { screen_name: perm_data.screen_name , user_id: perm_data.user_id } ; request.get({url:url, oauth:oauth, qs:qs, json:true}, function (e, r, user) { console.log(user) }) }) }) ``` For [RSA-SHA1 signing](https://tools.ietf.org/html/rfc5849#section-3.4.3), make the following changes to the OAuth options object: * Pass `signature_method : 'RSA-SHA1'` * Instead of `consumer_secret`, specify a `private_key` string in [PEM format](http://how2ssl.com/articles/working_with_pem_files/) For [PLAINTEXT signing](http://oauth.net/core/1.0/#anchor22), make the following changes to the OAuth options object: * Pass `signature_method : 'PLAINTEXT'` To send OAuth parameters via query params or in a post body as described in The [Consumer Request Parameters](http://oauth.net/core/1.0/#consumer_req_param) section of the oauth1 spec: * Pass `transport_method : 'query'` or `transport_method : 'body'` in the OAuth options object. * `transport_method` defaults to `'header'` To use [Request Body Hash](https://oauth.googlecode.com/svn/spec/ext/body_hash/1.0/oauth-bodyhash.html) you can either * Manually generate the body hash and pass it as a string `body_hash: '...'` * Automatically generate the body hash by passing `body_hash: true` [back to top](#table-of-contents) --- ## Proxies If you specify a `proxy` option, then the request (and any subsequent redirects) will be sent via a connection to the proxy server. 
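As a minimal sketch (the proxy address is illustrative, not a real server):

```js
var request = require('request');

request({
  uri: 'https://api.github.com/repos/request/request',
  proxy: 'http://localproxy.com:8080', // illustrative proxy address
  headers: {'User-Agent': 'request'}
}, function (error, response, body) {
  console.log(error || response.statusCode);
});
```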
If your endpoint is an `https` url, and you are using a proxy, then request will send a `CONNECT` request to the proxy server *first*, and then use the supplied connection to connect to the endpoint. That is, first it will make a request like: ``` HTTP/1.1 CONNECT endpoint-server.com:80 Host: proxy-server.com User-Agent: whatever user agent you specify ``` and then the proxy server make a TCP connection to `endpoint-server` on port `80`, and return a response that looks like: ``` HTTP/1.1 200 OK ``` At this point, the connection is left open, and the client is communicating directly with the `endpoint-server.com` machine. See [the wikipedia page on HTTP Tunneling](https://en.wikipedia.org/wiki/HTTP_tunnel) for more information. By default, when proxying `http` traffic, request will simply make a standard proxied `http` request. This is done by making the `url` section of the initial line of the request a fully qualified url to the endpoint. For example, it will make a single request that looks like: ``` HTTP/1.1 GET http://endpoint-server.com/some-url Host: proxy-server.com Other-Headers: all go here request body or whatever ``` Because a pure "http over http" tunnel offers no additional security or other features, it is generally simpler to go with a straightforward HTTP proxy in this case. However, if you would like to force a tunneling proxy, you may set the `tunnel` option to `true`. You can also make a standard proxied `http` request by explicitly setting `tunnel : false`, but **note that this will allow the proxy to see the traffic to/from the destination server**. If you are using a tunneling proxy, you may set the `proxyHeaderWhiteList` to share certain headers with the proxy. You can also set the `proxyHeaderExclusiveList` to share certain headers only with the proxy and not with destination host. By default, this set is: ``` accept accept-charset accept-encoding accept-language accept-ranges cache-control content-encoding content-language content-length content-location content-md5 content-range content-type connection date expect max-forwards pragma proxy-authorization referer te transfer-encoding user-agent via ``` Note that, when using a tunneling proxy, the `proxy-authorization` header and any headers from custom `proxyHeaderExclusiveList` are *never* sent to the endpoint server, but only to the proxy server. ### Controlling proxy behaviour using environment variables The following environment variables are respected by `request`: * `HTTP_PROXY` / `http_proxy` * `HTTPS_PROXY` / `https_proxy` * `NO_PROXY` / `no_proxy` When `HTTP_PROXY` / `http_proxy` are set, they will be used to proxy non-SSL requests that do not have an explicit `proxy` configuration option present. Similarly, `HTTPS_PROXY` / `https_proxy` will be respected for SSL requests that do not have an explicit `proxy` configuration option. It is valid to define a proxy in one of the environment variables, but then override it for a specific request, using the `proxy` configuration option. Furthermore, the `proxy` configuration option can be explicitly set to false / null to opt out of proxying altogether for that request. `request` is also aware of the `NO_PROXY`/`no_proxy` environment variables. These variables provide a granular way to opt out of proxying, on a per-host basis. It should contain a comma separated list of hosts to opt out of proxying. It is also possible to opt of proxying when a particular destination port is used. 
Finally, the variable may be set to `*` to opt out of the implicit proxy configuration of the other environment variables. Here's some examples of valid `no_proxy` values: * `google.com` - don't proxy HTTP/HTTPS requests to Google. * `google.com:443` - don't proxy HTTPS requests to Google, but *do* proxy HTTP requests to Google. * `google.com:443, yahoo.com:80` - don't proxy HTTPS requests to Google, and don't proxy HTTP requests to Yahoo! * `*` - ignore `https_proxy`/`http_proxy` environment variables altogether. [back to top](#table-of-contents) --- ## UNIX Domain Sockets `request` supports making requests to [UNIX Domain Sockets](https://en.wikipedia.org/wiki/Unix_domain_socket). To make one, use the following URL scheme: ```js /* Pattern */ 'http://unix:SOCKET:PATH' /* Example */ request.get('http://unix:/absolute/path/to/unix.socket:/request/path') ``` Note: The `SOCKET` path is assumed to be absolute to the root of the host file system. [back to top](#table-of-contents) --- ## TLS/SSL Protocol TLS/SSL Protocol options, such as `cert`, `key` and `passphrase`, can be set directly in `options` object, in the `agentOptions` property of the `options` object, or even in `https.globalAgent.options`. Keep in mind that, although `agentOptions` allows for a slightly wider range of configurations, the recommended way is via `options` object directly, as using `agentOptions` or `https.globalAgent.options` would not be applied in the same way in proxied environments (as data travels through a TLS connection instead of an http/https agent). ```js var fs = require('fs') , path = require('path') , certFile = path.resolve(__dirname, 'ssl/client.crt') , keyFile = path.resolve(__dirname, 'ssl/client.key') , caFile = path.resolve(__dirname, 'ssl/ca.cert.pem') , request = require('request'); var options = { url: 'https://api.some-server.com/', cert: fs.readFileSync(certFile), key: fs.readFileSync(keyFile), passphrase: 'password', ca: fs.readFileSync(caFile) }; request.get(options); ``` ### Using `options.agentOptions` In the example below, we call an API that requires client side SSL certificate (in PEM format) with passphrase protected private key (in PEM format) and disable the SSLv3 protocol: ```js var fs = require('fs') , path = require('path') , certFile = path.resolve(__dirname, 'ssl/client.crt') , keyFile = path.resolve(__dirname, 'ssl/client.key') , request = require('request'); var options = { url: 'https://api.some-server.com/', agentOptions: { cert: fs.readFileSync(certFile), key: fs.readFileSync(keyFile), // Or use `pfx` property replacing `cert` and `key` when using private key, certificate and CA certs in PFX or PKCS12 format: // pfx: fs.readFileSync(pfxFilePath), passphrase: 'password', securityOptions: 'SSL_OP_NO_SSLv3' } }; request.get(options); ``` It is able to force using SSLv3 only by specifying `secureProtocol`: ```js request.get({ url: 'https://api.some-server.com/', agentOptions: { secureProtocol: 'SSLv3_method' } }); ``` It is possible to accept other certificates than those signed by generally allowed Certificate Authorities (CAs). This can be useful, for example, when using self-signed certificates. To require a different root certificate, you can specify the signing CA by adding the contents of the CA's certificate file to the `agentOptions`. 
The certificate the domain presents must be signed by the root certificate specified: ```js request.get({ url: 'https://api.some-server.com/', agentOptions: { ca: fs.readFileSync('ca.cert.pem') } }); ``` [back to top](#table-of-contents) --- ## Support for HAR 1.2 The `options.har` property will override the values: `url`, `method`, `qs`, `headers`, `form`, `formData`, `body`, `json`, as well as construct multipart data and read files from disk when `request.postData.params[].fileName` is present without a matching `value`. A validation step will check if the HAR Request format matches the latest spec (v1.2) and will skip parsing if not matching. ```js var request = require('request') request({ // will be ignored method: 'GET', uri: 'http://www.google.com', // HTTP Archive Request Object har: { url: 'http://www.mockbin.com/har', method: 'POST', headers: [ { name: 'content-type', value: 'application/x-www-form-urlencoded' } ], postData: { mimeType: 'application/x-www-form-urlencoded', params: [ { name: 'foo', value: 'bar' }, { name: 'hello', value: 'world' } ] } } }) // a POST request will be sent to http://www.mockbin.com // with body an application/x-www-form-urlencoded body: // foo=bar&hello=world ``` [back to top](#table-of-contents) --- ## request(options, callback) The first argument can be either a `url` or an `options` object. The only required option is `uri`; all others are optional. - `uri` || `url` - fully qualified uri or a parsed url object from `url.parse()` - `baseUrl` - fully qualified uri string used as the base url. Most useful with `request.defaults`, for example when you want to do many requests to the same domain. If `baseUrl` is `https://example.com/api/`, then requesting `/end/point?test=true` will fetch `https://example.com/api/end/point?test=true`. When `baseUrl` is given, `uri` must also be a string. - `method` - http method (default: `"GET"`) - `headers` - http headers (default: `{}`) --- - `qs` - object containing querystring values to be appended to the `uri` - `qsParseOptions` - object containing options to pass to the [qs.parse](https://github.com/hapijs/qs#parsing-objects) method. Alternatively pass options to the [querystring.parse](https://nodejs.org/docs/v0.12.0/api/querystring.html#querystring_querystring_parse_str_sep_eq_options) method using this format `{sep:';', eq:':', options:{}}` - `qsStringifyOptions` - object containing options to pass to the [qs.stringify](https://github.com/hapijs/qs#stringifying) method. Alternatively pass options to the [querystring.stringify](https://nodejs.org/docs/v0.12.0/api/querystring.html#querystring_querystring_stringify_obj_sep_eq_options) method using this format `{sep:';', eq:':', options:{}}`. For example, to change the way arrays are converted to query strings using the `qs` module pass the `arrayFormat` option with one of `indices|brackets|repeat` - `useQuerystring` - if true, use `querystring` to stringify and parse querystrings, otherwise use `qs` (default: `false`). Set this option to `true` if you need arrays to be serialized as `foo=bar&foo=baz` instead of the default `foo[0]=bar&foo[1]=baz`. --- - `body` - entity body for PATCH, POST and PUT requests. Must be a `Buffer`, `String` or `ReadStream`. If `json` is `true`, then `body` must be a JSON-serializable object. - `form` - when passed an object or a querystring, this sets `body` to a querystring representation of value, and adds `Content-type: application/x-www-form-urlencoded` header. 
When passed no options, a `FormData` instance is returned (and is piped to request). See "Forms" section above. - `formData` - data to pass for a `multipart/form-data` request. See [Forms](#forms) section above. - `multipart` - array of objects which contain their own headers and `body` attributes. Sends a `multipart/related` request. See [Forms](#forms) section above. - Alternatively you can pass in an object `{chunked: false, data: []}` where `chunked` is used to specify whether the request is sent in [chunked transfer encoding](https://en.wikipedia.org/wiki/Chunked_transfer_encoding) In non-chunked requests, data items with body streams are not allowed. - `preambleCRLF` - append a newline/CRLF before the boundary of your `multipart/form-data` request. - `postambleCRLF` - append a newline/CRLF at the end of the boundary of your `multipart/form-data` request. - `json` - sets `body` to JSON representation of value and adds `Content-type: application/json` header. Additionally, parses the response body as JSON. - `jsonReviver` - a [reviver function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse) that will be passed to `JSON.parse()` when parsing a JSON response body. - `jsonReplacer` - a [replacer function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify) that will be passed to `JSON.stringify()` when stringifying a JSON request body. --- - `auth` - a hash containing values `user` || `username`, `pass` || `password`, and `sendImmediately` (optional). See documentation above. - `oauth` - options for OAuth HMAC-SHA1 signing. See documentation above. - `hawk` - options for [Hawk signing](https://github.com/hueniverse/hawk). The `credentials` key must contain the necessary signing info, [see hawk docs for details](https://github.com/hueniverse/hawk#usage-example). - `aws` - `object` containing AWS signing information. Should have the properties `key`, `secret`, and optionally `session` (note that this only works for services that require session as part of the canonical string). Also requires the property `bucket`, unless you’re specifying your `bucket` as part of the path, or the request doesn’t use a bucket (i.e. GET Services). If you want to use AWS sign version 4 use the parameter `sign_version` with value `4` otherwise the default is version 2. If you are using SigV4, you can also include a `service` property that specifies the service name. **Note:** you need to `npm install aws4` first. - `httpSignature` - options for the [HTTP Signature Scheme](https://github.com/joyent/node-http-signature/blob/master/http_signing.md) using [Joyent's library](https://github.com/joyent/node-http-signature). The `keyId` and `key` properties must be specified. See the docs for other options. --- - `followRedirect` - follow HTTP 3xx responses as redirects (default: `true`). This property can also be implemented as function which gets `response` object as a single argument and should return `true` if redirects should continue or `false` otherwise. - `followAllRedirects` - follow non-GET HTTP 3xx responses as redirects (default: `false`) - `followOriginalHttpMethod` - by default we redirect to HTTP method GET. you can enable this property to redirect to the original HTTP method (default: `false`) - `maxRedirects` - the maximum number of redirects to follow (default: `10`) - `removeRefererHeader` - removes the referer header when a redirect happens (default: `false`). 
**Note:** if true, referer header set in the initial request is preserved during redirect chain. --- - `encoding` - encoding to be used on `setEncoding` of response data. If `null`, the `body` is returned as a `Buffer`. Anything else **(including the default value of `undefined`)** will be passed as the [encoding](http://nodejs.org/api/buffer.html#buffer_buffer) parameter to `toString()` (meaning this is effectively `utf8` by default). (**Note:** if you expect binary data, you should set `encoding: null`.) - `gzip` - if `true`, add an `Accept-Encoding` header to request compressed content encodings from the server (if not already present) and decode supported content encodings in the response. **Note:** Automatic decoding of the response content is performed on the body data returned through `request` (both through the `request` stream and passed to the callback function) but is not performed on the `response` stream (available from the `response` event) which is the unmodified `http.IncomingMessage` object which may contain compressed data. See example below. - `jar` - if `true`, remember cookies for future use (or define your custom cookie jar; see examples section) --- - `agent` - `http(s).Agent` instance to use - `agentClass` - alternatively specify your agent's class name - `agentOptions` - and pass its options. **Note:** for HTTPS see [tls API doc for TLS/SSL options](http://nodejs.org/api/tls.html#tls_tls_connect_options_callback) and the [documentation above](#using-optionsagentoptions). - `forever` - set to `true` to use the [forever-agent](https://github.com/request/forever-agent) **Note:** Defaults to `http(s).Agent({keepAlive:true})` in node 0.12+ - `pool` - an object describing which agents to use for the request. If this option is omitted the request will use the global agent (as long as your options allow for it). Otherwise, request will search the pool for your custom agent. If no custom agent is found, a new agent will be created and added to the pool. **Note:** `pool` is used only when the `agent` option is not specified. - A `maxSockets` property can also be provided on the `pool` object to set the max number of sockets for all agents created (ex: `pool: {maxSockets: Infinity}`). - Note that if you are sending multiple requests in a loop and creating multiple new `pool` objects, `maxSockets` will not work as intended. To work around this, either use [`request.defaults`](#requestdefaultsoptions) with your pool options or create the pool object with the `maxSockets` property outside of the loop. - `timeout` - integer containing the number of milliseconds to wait for a server to send response headers (and start the response body) before aborting the request. Note that if the underlying TCP connection cannot be established, the OS-wide TCP connection timeout will overrule the `timeout` option ([the default in Linux can be anywhere from 20-120 seconds][linux-timeout]). [linux-timeout]: http://www.sekuda.com/overriding_the_default_linux_kernel_20_second_tcp_socket_connect_timeout --- - `localAddress` - local interface to bind for network connections. - `proxy` - an HTTP proxy to be used. Supports proxy Auth with Basic Auth, identical to support for the `url` parameter (by embedding the auth info in the `uri`) - `strictSSL` - if `true`, requires SSL certificates be valid. **Note:** to use your own certificate authority, you need to specify an agent that was created with that CA as an option. 
- `tunnel` - controls the behavior of [HTTP `CONNECT` tunneling](https://en.wikipedia.org/wiki/HTTP_tunnel#HTTP_CONNECT_tunneling) as follows: - `undefined` (default) - `true` if the destination is `https`, `false` otherwise - `true` - always tunnel to the destination by making a `CONNECT` request to the proxy - `false` - request the destination as a `GET` request. - `proxyHeaderWhiteList` - a whitelist of headers to send to a tunneling proxy. - `proxyHeaderExclusiveList` - a whitelist of headers to send exclusively to a tunneling proxy and not to destination. --- - `time` - if `true`, the request-response cycle (including all redirects) is timed at millisecond resolution. When set, the following properties are added to the response object: - `elapsedTime` Duration of the entire request/response in milliseconds (*deprecated*). - `responseStartTime` Timestamp when the response began (in Unix Epoch milliseconds) (*deprecated*). - `timingStart` Timestamp of the start of the request (in Unix Epoch milliseconds). - `timings` Contains event timestamps in millisecond resolution relative to `timingStart`. If there were redirects, the properties reflect the timings of the final request in the redirect chain: - `socket` Relative timestamp when the [`http`](https://nodejs.org/api/http.html#http_event_socket) module's `socket` event fires. This happens when the socket is assigned to the request. - `lookup` Relative timestamp when the [`net`](https://nodejs.org/api/net.html#net_event_lookup) module's `lookup` event fires. This happens when the DNS has been resolved. - `connect`: Relative timestamp when the [`net`](https://nodejs.org/api/net.html#net_event_connect) module's `connect` event fires. This happens when the server acknowledges the TCP connection. - `response`: Relative timestamp when the [`http`](https://nodejs.org/api/http.html#http_event_response) module's `response` event fires. This happens when the first bytes are received from the server. - `end`: Relative timestamp when the last bytes of the response are received. - `timingPhases` Contains the durations of each request phase. If there were redirects, the properties reflect the timings of the final request in the redirect chain: - `wait`: Duration of socket initialization (`timings.socket`) - `dns`: Duration of DNS lookup (`timings.lookup` - `timings.socket`) - `tcp`: Duration of TCP connection (`timings.connect` - `timings.socket`) - `firstByte`: Duration of HTTP server response (`timings.response` - `timings.connect`) - `download`: Duration of HTTP download (`timings.end` - `timings.response`) - `total`: Duration entire HTTP round-trip (`timings.end`) - `har` - a [HAR 1.2 Request Object](http://www.softwareishard.com/blog/har-12-spec/#request), will be processed from HAR format into options overwriting matching values *(see the [HAR 1.2 section](#support-for-har-1.2) for details)* - `callback` - alternatively pass the request's callback in the options object The callback argument gets 3 arguments: 1. An `error` when applicable (usually from [`http.ClientRequest`](http://nodejs.org/api/http.html#http_class_http_clientrequest) object) 2. An [`http.IncomingMessage`](https://nodejs.org/api/http.html#http_class_http_incomingmessage) object (Response object) 3. The third is the `response` body (`String` or `Buffer`, or JSON object if the `json` option is supplied) [back to top](#table-of-contents) --- ## Convenience methods There are also shorthand methods for different HTTP METHODs and some other conveniences. 
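For example, the verb shorthands described below behave just like `request()` with the HTTP method preset (URL and payload are illustrative):

```js
var request = require('request');

// Equivalent to request({method: 'POST', url: 'http://service.com/upload', json: {key: 'value'}}, cb)
request.post('http://service.com/upload', {json: {key: 'value'}},
  function (err, httpResponse, body) {
    if (err) {
      return console.error('upload failed:', err);
    }
    console.log('Server responded with:', body);
  });
```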
### request.defaults(options) This method **returns a wrapper** around the normal request API that defaults to whatever options you pass to it. **Note:** `request.defaults()` **does not** modify the global request API; instead, it **returns a wrapper** that has your default settings applied to it. **Note:** You can call `.defaults()` on the wrapper that is returned from `request.defaults` to add/override defaults that were previously defaulted. For example: ```js //requests using baseRequest() will set the 'x-token' header var baseRequest = request.defaults({ headers: {'x-token': 'my-token'} }) //requests using specialRequest() will include the 'x-token' header set in //baseRequest and will also include the 'special' header var specialRequest = baseRequest.defaults({ headers: {special: 'special value'} }) ``` ### request.METHOD() These HTTP method convenience functions act just like `request()` but with a default method already set for you: - *request.get()*: Defaults to `method: "GET"`. - *request.post()*: Defaults to `method: "POST"`. - *request.put()*: Defaults to `method: "PUT"`. - *request.patch()*: Defaults to `method: "PATCH"`. - *request.del() / request.delete()*: Defaults to `method: "DELETE"`. - *request.head()*: Defaults to `method: "HEAD"`. - *request.options()*: Defaults to `method: "OPTIONS"`. ### request.cookie() Function that creates a new cookie. ```js request.cookie('key1=value1') ``` ### request.jar() Function that creates a new cookie jar. ```js request.jar() ``` [back to top](#table-of-contents) --- ## Debugging There are at least three ways to debug the operation of `request`: 1. Launch the node process like `NODE_DEBUG=request node script.js` (`lib,request,otherlib` works too). 2. Set `require('request').debug = true` at any time (this does the same thing as #1). 3. Use the [request-debug module](https://github.com/request/request-debug) to view request and response headers and bodies. [back to top](#table-of-contents) --- ## Timeouts Most requests to external servers should have a timeout attached, in case the server is not responding in a timely manner. Without a timeout, your code may have a socket open/consume resources for minutes or more. There are two main types of timeouts: **connection timeouts** and **read timeouts**. A connect timeout occurs if the timeout is hit while your client is attempting to establish a connection to a remote machine (corresponding to the [connect() call][connect] on the socket). A read timeout occurs any time the server is too slow to send back a part of the response. These two situations have widely different implications for what went wrong with the request, so it's useful to be able to distinguish them. You can detect timeout errors by checking `err.code` for an 'ETIMEDOUT' value. Further, you can detect whether the timeout was a connection timeout by checking if the `err.connect` property is set to `true`. ```js request.get('http://10.255.255.1', {timeout: 1500}, function(err) { console.log(err.code === 'ETIMEDOUT'); // Set to `true` if the timeout was a connection timeout, `false` or // `undefined` otherwise. 
console.log(err.connect === true); process.exit(0); }); ``` [connect]: http://linux.die.net/man/2/connect ## Examples: ```js var request = require('request') , rand = Math.floor(Math.random()*100000000).toString() ; request( { method: 'PUT' , uri: 'http://mikeal.iriscouch.com/testjs/' + rand , multipart: [ { 'content-type': 'application/json' , body: JSON.stringify({foo: 'bar', _attachments: {'message.txt': {follows: true, length: 18, 'content_type': 'text/plain' }}}) } , { body: 'I am an attachment' } ] } , function (error, response, body) { if(response.statusCode == 201){ console.log('document saved as: http://mikeal.iriscouch.com/testjs/'+ rand) } else { console.log('error: '+ response.statusCode) console.log(body) } } ) ``` For backwards-compatibility, response compression is not supported by default. To accept gzip-compressed responses, set the `gzip` option to `true`. Note that the body data passed through `request` is automatically decompressed while the response object is unmodified and will contain compressed data if the server sent a compressed response. ```js var request = require('request') request( { method: 'GET' , uri: 'http://www.google.com' , gzip: true } , function (error, response, body) { // body is the decompressed response body console.log('server encoded the data as: ' + (response.headers['content-encoding'] || 'identity')) console.log('the decoded data is: ' + body) } ) .on('data', function(data) { // decompressed data as it is received console.log('decoded chunk: ' + data) }) .on('response', function(response) { // unmodified http.IncomingMessage object response.on('data', function(data) { // compressed data as it is received console.log('received ' + data.length + ' bytes of compressed data') }) }) ``` Cookies are disabled by default (else, they would be used in subsequent requests). To enable cookies, set `jar` to `true` (either in `defaults` or `options`). ```js var request = request.defaults({jar: true}) request('http://www.google.com', function () { request('http://images.google.com') }) ``` To use a custom cookie jar (instead of `request`’s global cookie jar), set `jar` to an instance of `request.jar()` (either in `defaults` or `options`) ```js var j = request.jar() var request = request.defaults({jar:j}) request('http://www.google.com', function () { request('http://images.google.com') }) ``` OR ```js var j = request.jar(); var cookie = request.cookie('key1=value1'); var url = 'http://www.google.com'; j.setCookie(cookie, url); request({url: url, jar: j}, function () { request('http://images.google.com') }) ``` To use a custom cookie store (such as a [`FileCookieStore`](https://github.com/mitsuru/tough-cookie-filestore) which supports saving to and restoring from JSON files), pass it as a parameter to `request.jar()`: ```js var FileCookieStore = require('tough-cookie-filestore'); // NOTE - currently the 'cookies.json' file must already exist! var j = request.jar(new FileCookieStore('cookies.json')); request = request.defaults({ jar : j }) request('http://www.google.com', function() { request('http://images.google.com') }) ``` The cookie store must be a [`tough-cookie`](https://github.com/SalesforceEng/tough-cookie) store and it must support synchronous operations; see the [`CookieStore` API docs](https://github.com/SalesforceEng/tough-cookie#cookiestore-api) for details. 
To inspect your cookie jar after a request:

```js
var url = 'http://www.google.com';
var j = request.jar()
request({url: url, jar: j}, function () {
  var cookie_string = j.getCookieString(url); // "key1=value1; key2=value2; ..."
  var cookies = j.getCookies(url);
  // [{key: 'key1', value: 'value1', domain: "www.google.com", ...}, ...]
})
```

[back to top](#table-of-contents)

# Cyclist

Cyclist is an efficient [cyclic list](http://en.wikipedia.org/wiki/Circular_buffer) implementation for Javascript. It is available through npm:

```
npm install cyclist
```

## What?

Cyclist allows you to create a list of fixed size that is cyclic. In a cyclic list the element following the last one is the first one. This property can be really useful when, for example, trying to order data packets that can arrive out of order over a network stream.

## Usage

``` js
var cyclist = require('cyclist');
var list = cyclist(4); // if size (4) is not a power of 2 it will be the following power of 2

// this buffer can now hold 4 elements in total
list.put(42, 'hello 42'); // store something at index 42
list.put(43, 'hello 43'); // store something at index 43

console.log(list.get(42)); // prints hello 42
console.log(list.get(46)); // prints hello 42 again since 46 - 42 == list.size
```

## API

* `cyclist(size)` creates a new buffer
* `cyclist#get(index)` get an object stored in the buffer
* `cyclist#put(index,value)` insert an object into the buffer
* `cyclist#del(index)` delete an object from an index
* `cyclist#size` property containing current size of buffer

## License

MIT

unique-filename
===============

Generate a unique filename for use in temporary directories or caches.

```
var uniqueFilename = require('unique-filename')

// returns something like: /tmp/912ec803b2ce49e4a541068d495ab570
var randomTmpfile = uniqueFilename(os.tmpdir())

// returns something like: /tmp/my-test-912ec803b2ce49e4a541068d495ab570
var randomPrefixedTmpfile = uniqueFilename(os.tmpdir(), 'my-test')

var uniqueTmpfile = uniqueFilename('/tmp', 'testing', '/my/thing/to/uniq/on')
```

### uniqueFilename(*dir*, *fileprefix*, *uniqstr*) → String

Returns the full path of a unique filename that looks like: `dir/prefix-7ddd44c0` or `dir/7ddd44c0`.

*dir* – The path you want the filename in. `os.tmpdir()` is a good choice for this.

*fileprefix* – A string to prepend to the unique part of the filename. The parameter is required if *uniqstr* is also passed in but is otherwise optional and can be `undefined`/`null`/`''`. If present and not empty then this string plus a hyphen are prepended to the unique part.

*uniqstr* – Optional, if not passed the unique part of the resulting filename will be random. If passed in it will be generated from this string in a reproducible way.

# npm-install-checks

A package that contains checks that npm runs during the installation.

## API

### .checkEngine(target, npmVer, nodeVer, force, strict, cb)

Check if node/npm version is supported by the package. If not strict and it isn't supported, `cb` is called with the error object as its second argument.

Error type: `ENOTSUP`

### .checkPlatform(target, force, cb)

Check if OS/Arch is supported by the package.

Error type: `EBADPLATFORM`

### .checkCycle(target, ancestors, cb)

Check for cyclic dependencies.

Error type: `ECYCLE`

### .checkGit(folder, cb)

Check if a folder is a .git folder.
Error type: `EISGIT`

[![Build Status](https://travis-ci.org/npm/npm-user-validate.png?branch=master)](https://travis-ci.org/npm/npm-user-validate) [![devDependency Status](https://david-dm.org/npm/npm-user-validate/dev-status.png)](https://david-dm.org/npm/npm-user-validate#info=devDependencies)

# npm-user-validate

Validation for the npm client and npm-www (and probably other npm projects)

# move-concurrently

Move files and directories.

```
const move = require('move-concurrently')
move('/path/to/thing', '/new/path/thing').then(() => {
  // thing is now moved!
}).catch(err => {
  // oh no!
})
```

Uses `rename` to move things as fast as possible. If you `move` across devices or on filesystems that don't support renaming large directories (that is, situations where `rename` returns the `EXDEV` error), then `move` will fall back to copy + delete. When recursively copying directories it will first try to rename the contents before falling back to copying. While this will be slightly slower in true cross-device scenarios, it is MUCH faster in cases where the filesystem can't handle directory renames.

When copying, ownership is maintained when running as root. Permissions are always maintained. On Windows, if symlinks are unavailable then junctions will be used.

## INTERFACE

### move(from, to, options) → Promise

Recursively moves `from` to `to` and resolves its promise when finished. If `to` already exists then the promise will be rejected with an `EEXIST` error.

Starts by trying to rename `from` to `to`.

Options are:

* maxConcurrency – (Default: `1`) The maximum number of concurrent copies to do at once.
* isWindows - (Default: `process.platform === 'win32'`) If true enables Windows symlink semantics. This requires an extra `stat` to determine if the destination of a symlink is a file or directory. If symlinking a directory fails then we'll try making a junction instead.

Options can also include dependency injection:

* Promise - (Default: `global.Promise`) The promise implementation to use, defaults to Node's.
* fs - (Default: `require('fs')`) The filesystem module to use. Can be used to use `graceful-fs` or to inject a mock.
* writeStreamAtomic - (Default: `require('fs-write-stream-atomic')`) The implementation of `writeStreamAtomic` to use. Used to inject a mock.
* getuid - (Default: `process.getuid`) A function that returns the current UID. Used to inject a mock.

## Caseless -- wrap an object to set and get property with caseless semantics but also preserve casing.

This library is incredibly useful when working with HTTP headers. It allows you to get/set/check for headers in a caseless manner while also preserving the casing of headers the first time they are set.

## Usage

```javascript
var headers = {}
  , c = caseless(headers)
  ;
c.set('a-Header', 'asdf')
c.get('a-header') === 'asdf'
```

## has(key)

Has takes a name and if it finds a matching header will return that header name with the preserved casing it was set with.

```javascript
c.has('a-header') === 'a-Header'
```

## set(key, value[, clobber=true])

Set is fairly straightforward except that if the header exists and clobber is disabled it will add `','+value` to the existing header.

```javascript
c.set('a-Header', 'fdas')
c.set('a-HEADER', 'more', false)
c.get('a-header') === 'fdas,more'
```

## swap(key)

Swaps the casing of a header with the new one that is passed in.
```javascript var headers = {} , c = caseless(headers) ; c.set('a-Header', 'fdas') c.swap('a-HEADER') c.has('a-header') === 'a-HEADER' headers === {'a-HEADER': 'fdas'} ``` ```javascript var correct = require('spdx-correct') var assert = require('assert') assert.equal(correct('mit'), 'MIT') assert.equal(correct('Apache 2'), 'Apache-2.0') assert(correct('No idea what license') === null) ``` # abbrev-js Just like [ruby's Abbrev](http://apidock.com/ruby/Abbrev). Usage: var abbrev = require("abbrev"); abbrev("foo", "fool", "folding", "flop"); // returns: { fl: 'flop' , flo: 'flop' , flop: 'flop' , fol: 'folding' , fold: 'folding' , foldi: 'folding' , foldin: 'folding' , folding: 'folding' , foo: 'foo' , fool: 'fool' } This is handy for command-line scripts, or other cases where you want to be able to accept shorthands. iMurmurHash.js ============== An incremental implementation of the MurmurHash3 (32-bit) hashing algorithm for JavaScript based on [Gary Court's implementation](https://github.com/garycourt/murmurhash-js) with [kazuyukitanimura's modifications](https://github.com/kazuyukitanimura/murmurhash-js). This version works significantly faster than the non-incremental version if you need to hash many small strings into a single hash, since string concatenation (to build the single string to pass the non-incremental version) is fairly costly. In one case tested, using the incremental version was about 50% faster than concatenating 5-10 strings and then hashing. Installation ------------ To use iMurmurHash in the browser, [download the latest version](https://raw.github.com/jensyt/imurmurhash-js/master/imurmurhash.min.js) and include it as a script on your site. ```html <script type="text/javascript" src="/scripts/imurmurhash.min.js"></script> <script> // Your code here, access iMurmurHash using the global object MurmurHash3 </script> ``` --- To use iMurmurHash in Node.js, install the module using NPM: ```bash npm install imurmurhash ``` Then simply include it in your scripts: ```javascript MurmurHash3 = require('imurmurhash'); ``` Quick Example ------------- ```javascript // Create the initial hash var hashState = MurmurHash3('string'); // Incrementally add text hashState.hash('more strings'); hashState.hash('even more strings'); // All calls can be chained if desired hashState.hash('and').hash('some').hash('more'); // Get a result hashState.result(); // returns 0xe4ccfe6b ``` Functions --------- ### MurmurHash3 ([string], [seed]) Get a hash state object, optionally initialized with the given _string_ and _seed_. _Seed_ must be a positive integer if provided. Calling this function without the `new` keyword will return a cached state object that has been reset. This is safe to use as long as the object is only used from a single thread and no other hashes are created while operating on this one. If this constraint cannot be met, you can use `new` to create a new state object. For example: ```javascript // Use the cached object, calling the function again will return the same // object (but reset, so the current state would be lost) hashState = MurmurHash3(); ... // Create a new object that can be safely used however you wish. Calling the // function again will simply return a new state object, and no state loss // will occur, at the cost of creating more objects. hashState = new MurmurHash3(); ``` Both methods can be mixed however you like if you have different use cases. --- ### MurmurHash3.prototype.hash (string) Incrementally add _string_ to the hash. 
This can be called as many times as you want for the hash state object, including after a call to `result()`. Returns `this` so calls can be chained. --- ### MurmurHash3.prototype.result () Get the result of the hash as a 32-bit positive integer. This performs the tail and finalizer portions of the algorithm, but does not store the result in the state object. This means that it is perfectly safe to get results and then continue adding strings via `hash`. ```javascript // Do the whole string at once MurmurHash3('this is a test string').result(); // 0x70529328 // Do part of the string, get a result, then the other part var m = MurmurHash3('this is a'); m.result(); // 0xbfc4f834 m.hash(' test string').result(); // 0x70529328 (same as above) ``` --- ### MurmurHash3.prototype.reset ([seed]) Reset the state object for reuse, optionally using the given _seed_ (defaults to 0 like the constructor). Returns `this` so calls can be chained. --- License (MIT) ------------- Copyright (c) 2013 Gary Court, Jens Taylor Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. npm(1) -- a JavaScript package manager ============================== [![Build Status](https://img.shields.io/travis/npm/cli/latest.svg)](https://travis-ci.org/npm/cli) ## SYNOPSIS This is just enough info to get you up and running. Much more info will be available via `npm help` once it's installed. ## IMPORTANT **You need node v6 or higher to run this program.** To install an old **and unsupported** version of npm that works on node v5 and prior, clone the git repo and dig through the old tags and branches. **npm is configured to use npm, Inc.'s public registry at <https://registry.npmjs.org> by default.** Use of the npm public registry is subject to terms of use available at <https://www.npmjs.com/policies/terms>. You can configure npm to use any compatible registry you like, and even run your own registry. Check out the [doc on registries](https://docs.npmjs.com/misc/registry). ## Super Easy Install npm is bundled with [node](https://nodejs.org/en/download/). ### Windows Computers [Get the MSI](https://nodejs.org/en/download/). npm is in it. ### Apple Macintosh Computers [Get the pkg](https://nodejs.org/en/download/). npm is in it. ### Other Sorts of Unices Run `make install`. npm will be installed with node. If you want a more fancy pants install (a different version, customized paths, etc.) then read on. ## Fancy Install (Unix) There's a pretty robust install script at <https://www.npmjs.com/install.sh>. You can download that and run it. 
Here's an example using curl: ```sh curl -L https://www.npmjs.com/install.sh | sh ``` ### Slightly Fancier You can set any npm configuration params with that script: ```sh npm_config_prefix=/some/path sh install.sh ``` Or, you can run it in uber-debuggery mode: ```sh npm_debug=1 sh install.sh ``` ### Even Fancier Get the code with git. Use `make` to build the docs and do other stuff. If you plan on hacking on npm, `make link` is your friend. If you've got the npm source code, you can also semi-permanently set arbitrary config keys using the `./configure --key=val ...`, and then run npm commands by doing `node bin/npm-cli.js <command> <args>`. (This is helpful for testing, or running stuff without actually installing npm itself.) ## Windows Install or Upgrade Many improvements for Windows users have been made in npm 3 - you will have a better experience if you run a recent version of npm. To upgrade, either use [Microsoft's upgrade tool](https://github.com/felixrieseberg/npm-windows-upgrade), [download a new version of Node](https://nodejs.org/en/download/), or follow the Windows upgrade instructions in the [Installing/upgrading npm](https://npm.community/t/installing-upgrading-npm/251/2) post. If that's not fancy enough for you, then you can fetch the code with git, and mess with it directly. ## Installing on Cygwin No. ## Uninstalling So sad to see you go. ```sh sudo npm uninstall npm -g ``` Or, if that fails, ```sh sudo make uninstall ``` ## More Severe Uninstalling Usually, the above instructions are sufficient. That will remove npm, but leave behind anything you've installed. If you would like to remove all the packages that you have installed, then you can use the `npm ls` command to find them, and then `npm rm` to remove them. To remove cruft left behind by npm 0.x, you can use the included `clean-old.sh` script file. You can run it conveniently like this: ```sh npm explore npm -g -- sh scripts/clean-old.sh ``` npm uses two configuration files, one for per-user configs, and another for global (every-user) configs. You can view them by doing: ```sh npm config get userconfig # defaults to ~/.npmrc npm config get globalconfig # defaults to /usr/local/etc/npmrc ``` Uninstalling npm does not remove configuration files by default. You must remove them yourself manually if you want them gone. Note that this means that future npm installs will not remember the settings that you have chosen. ## More Docs Check out the [docs](https://docs.npmjs.com/). You can use the `npm help` command to read any of them. If you're a developer, and you want to use npm to publish your program, you should [read this](https://docs.npmjs.com/misc/developers). ## BUGS When you find issues, please report them: * web: <https://npm.community/c/bugs> Be sure to include *all* of the output from the npm command that didn't work as expected. The `npm-debug.log` file is also helpful to provide. ## SEE ALSO * npm(1) * npm-help(1) # infer-owner Infer the owner of a path based on the owner of its nearest existing parent ## USAGE ```js const inferOwner = require('infer-owner') inferOwner('/some/cache/folder/file').then(owner => { // owner is {uid, gid} that should be attached to // the /some/cache/folder/file, based on ownership // of /some/cache/folder, /some/cache, /some, or /, // whichever is the first to exist }) // same, but not async const owner = inferOwner.sync('/some/cache/folder/file') // results are cached! 
to reset the cache (eg, to change // permissions for whatever reason), do this: inferOwner.clearCache() ``` This module endeavors to be as performant as possible. Parallel requests for ownership of the same path will only stat the directories one time. ## API * `inferOwner(path) -> Promise<{ uid, gid }>` If the path exists, return its uid and gid. If it does not, look to its parent, then its grandparent, and so on. * `inferOwner.sync(path) -> { uid, gid }` Sync form of `inferOwner(path)`. * `inferOwner.clearCache()` Delete all cached ownership information and in-flight tracking. # sorted-union-stream Get the union of two sorted streams ``` npm install sorted-union-stream ``` [![build status](https://secure.travis-ci.org/mafintosh/sorted-union-stream.png)](http://travis-ci.org/mafintosh/sorted-union-stream) ## Usage ``` js var union = require('sorted-union-stream') var from = require('from2-array') // from2-array converts an array into an object stream var sorted1 = from.obj([1,10,24,42,43,50,55]) var sorted2 = from.obj([10,42,53,55,60]) // combine the two streams into a single sorted stream var u = union(sorted1, sorted2) u.on('data', function(data) { console.log(data) }) u.on('end', function() { console.log('no more data') }) ``` Running the above example will print ``` 1 10 24 42 43 50 53 55 60 no more data ``` ## Streaming objects If you are streaming objects, sorting is based on `.key`. If this property is not present you should add a `toKey` function as the third parameter. `toKey` should return a key representation of the data that can be used to compare objects. _The keys MUST be sorted_ ``` js var sorted1 = from.obj([{foo:'a'}, {foo:'b'}, {foo:'c'}]) var sorted2 = from.obj([{foo:'b'}, {foo:'d'}]) var u = union(sorted1, sorted2, function(data) { return data.foo // the foo property is sorted }) u.on('data', function(data) { console.log(data) }) ``` Running the above will print ``` {foo:'a'} {foo:'b'} {foo:'c'} {foo:'d'} ``` ## License MIT
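One case the examples above do not show is the default behavior, where the streamed objects already carry a `.key` property and no `toKey` function is passed. A minimal sketch of that default, assuming `from2-array` as in the earlier examples (the object shapes are illustrative only):

``` js
var union = require('sorted-union-stream')
var from = require('from2-array')

// objects already carry a sorted .key, so no toKey function is needed
var sorted1 = from.obj([{key: 'a', n: 1}, {key: 'c', n: 3}])
var sorted2 = from.obj([{key: 'b', n: 2}, {key: 'c', n: 30}])

var u = union(sorted1, sorted2)

u.on('data', function (data) {
  console.log(data.key) // prints a, b, c — duplicate keys are emitted once
})
```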
gyan0890_Near-Rust
.github ISSUE_TEMPLATE 01_BUG_REPORT.md 02_FEATURE_REQUEST.md 03_CODEBASE_IMPROVEMENT.md 04_SUPPORT_QUESTION.md config.yml PULL_REQUEST_TEMPLATE.md labels.yml workflows build.yml deploy-to-console.yml labels.yml lock.yml pr-labels.yml stale.yml README.md contract Cargo.toml README.md build.sh deploy.sh src lib.rs docs CODE_OF_CONDUCT.md CONTRIBUTING.md SECURITY.md frontend .eslintrc.json .prettierrc.json hooks wallet-selector.ts next.config.js package-lock.json package.json pages api hello.ts postcss.config.js public next.svg thirteen.svg vercel.svg start.sh styles globals.css tailwind.config.js tsconfig.json integration-tests Cargo.toml src tests.rs package-lock.json package.json
<h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Deploy on Vercel](#deploy-on-vercel) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This is a [Next.js](https://nextjs.org/) project bootstrapped with [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app) and [`tailwindcss`](https://tailwindcss.com/docs/guides/nextjs) created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. Smart-contract was initialized with [create-near-app]. Use this template and start to build your own gallery project! ### Built With [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app), [`tailwindcss`](https://tailwindcss.com/docs/guides/nextjs), [`tailwindui`](https://tailwindui.com/), [`@headlessui/react`](https://headlessui.com/), [`@heroicons/react`](https://heroicons.com/), [create-near-app], [`amazing-github-template`](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `18>`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Start your frontend: npm run start Open [http://localhost:3000](http://localhost:3000) with your browser to see the result. Test your contract: npm run test Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. 
In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. You can start editing the page by modifying `frontend/pages/index.tsx`. The page auto-updates as you edit the file. This is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. 4. [API routes](https://nextjs.org/docs/api-routes/introduction) can be accessed on [http://localhost:3000/api/hello](http://localhost:3000/api/hello). This endpoint can be edited in `frontend/pages/api/hello.ts`. 5. The `frontend/pages/api` directory is mapped to `/api/*`. Files in this directory are treated as [API routes](https://nextjs.org/docs/api-routes/introduction) instead of React pages. 6. This project uses [`next/font`](https://nextjs.org/docs/basic-features/font-optimization) to automatically optimize and load Inter, a custom Google Font. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `contract/neardev/dev-account.env` that sets the account name of the contract. Set it to the account id you used above. CONTRACT_NAME=near-blank-project.YOUR-NAME.testnet Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[create-next-app]: https://github.com/vercel/next.js/tree/canary/packages/create-next-app [Node.js]: https://nodejs.org/en/download/package-manager [tailwindcss]: https://tailwindcss.com/docs/guides/nextjs [create-near-app]: https://github.com/near/create-near-app [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js/) - your feedback and contributions are welcome! ## Deploy on Vercel The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js. Check out our [Next.js deployment documentation](https://nextjs.org/docs/deployment) for more details. ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. - Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. 
```rust use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize}; use near_sdk::{log, near_bindgen}; const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [rust](https://www.rust-lang.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract to the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address at which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka a `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, which makes it a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"greeting":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first log in to NEAR using: ```bash # Use near-cli to log in to your NEAR account near login ``` and then use the logged-in account to sign the transaction: `--accountId <your-account>`.
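The steps above use near-cli; if you would rather call the contract from Node.js, a minimal sketch using [near-api-js](https://github.com/near/near-api-js) could look like the following. This is an assumption about client-side usage rather than part of the template itself; the account id is a placeholder, and the contract id is the example dev account shown above.

```js
const { connect, keyStores, Contract } = require('near-api-js');
const path = require('path');
const os = require('os');

async function main() {
  // keys created by `near login` live in ~/.near-credentials
  const keyStore = new keyStores.UnencryptedFileSystemKeyStore(
    path.join(os.homedir(), '.near-credentials')
  );

  const near = await connect({
    networkId: 'testnet',
    nodeUrl: 'https://rpc.testnet.near.org',
    keyStore,
  });

  // placeholder account; contract id is the example dev account from above
  const account = await near.account('your-account.testnet');
  const contract = new Contract(account, 'dev-1659899566943-21539992274727', {
    viewMethods: ['get_greeting'],
    changeMethods: ['set_greeting'],
  });

  console.log(await contract.get_greeting());         // view call, free
  await contract.set_greeting({ greeting: 'howdy' }); // change call, signed by the account
}

main().catch(console.error);
```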
NEAR-Edu_eslint-config-near
.eslintrc.js .github workflows release.yml .vscode settings.json README.md package-lock.json package.json release.config.js src index.ts lib default.ts tsconfig.json
# eslint-config-near This library is an ESLint configuration for NEAR projects. ## Installation To install this configuration, run the following command: ```sh npm install -D eslint-config-near ``` or ```sh yarn add -D eslint-config-near ``` ## Usage ``` touch .eslintrc.cjs mkdir .vscode touch .vscode/settings.json ``` When configuring ESLint to use this configuration, add the following to your config file: `.eslintrc.cjs` ```javascript /* eslint-env node */ module.exports = { extends: ["near"], }; ``` Feel free to customize it, for example by adding `rules` as a sibling of `extends`: ``` rules: { 'no-console': 'off', } ``` You might also want to create something like `.vscode/settings.json`: ```JSON { "editor.codeActionsOnSave": { "source.fixAll.eslint": true }, "editor.formatOnSave": true, "eslint.workingDirectories": ["./server", "./src", "./shared"] } ``` ## See also - [near-prettier-config](https://github.com/NEARFoundation/near-prettier-config) - for configuring Prettier ## Inspiration This config is based on [eslint-config-canonical](https://github.com/gajus/eslint-config-canonical), with some minor changes and setup for different file types. ## Building To build the project, run the build script: ```sh npm run build ``` ## Publishing There is no need for manual publishing, as this process is done in CI by [semantic-release](https://github.com/semantic-release/semantic-release).
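As a fuller, illustrative example of the customization described above, a complete `.eslintrc.cjs` combining `extends`, `rules` and per-file-type `overrides` might look like this. The glob pattern and rule choices are only examples, not something this package prescribes:

```javascript
/* eslint-env node */
module.exports = {
  extends: ['near'],
  rules: {
    // project-wide tweaks sit next to `extends`
    'no-console': 'off',
  },
  overrides: [
    {
      // example: relax a rule only for test files
      files: ['**/*.test.ts'],
      rules: {
        'max-lines': 'off',
      },
    },
  ],
};
```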
Learn-NEAR-Club_ncaptcha-contact7-addon
Controllers NCaptchaController.php Model Config.php Constructor Constructor.php Readme.md composer.json index.php readme.txt vendor autoload.php composer ClassLoader.php InstalledVersions.php autoload_classmap.php autoload_namespaces.php autoload_psr4.php autoload_real.php autoload_static.php installed.json installed.php
here-wallet_promotion-sdk
README.md dist index.cjs.js index.esm.js index.js example index.html images here-logo.svg promotionstar.svg package-lock.json package.json src index.js nft-styles.css nft-view.html
# @here-wallet/promotion Promotion toolkit for HERE wallet partners ### Supports CommonJS or ES modules ```ts import HerePromotion from '@here-wallet/promotion'; HerePromotion.init({ appendTo: '.header' }) ``` ### Or just as a script! ```html <script src="https://www.unpkg.com/@here-wallet/promotion@0.2.0/dist/index.js"></script> <script>HerePromotion.init({ appendTo: '.header' })</script> ```
icerove_corgi3d
README.md package.json public index.html manifest.json robots.txt src App.js App.test.js assets images arrow.svg egg.svg good-bye.svg icon-nav.svg icon-sell.svg icon-send.svg icon-share.svg rarity-sample.svg shadow.svg quotes quotes.json component Account Account.js AccountCard AccountCard.js CorgiCard Card.js Corgi Corgi.js Dash Dash.js Poster Poster.js ShowCase DashCard DashCard.js ShowCase.js Footer Footer.js Generation Animation Animation.js Generation.js Info Info.js Screen Screen.js Header Header.js Nav Nav.js Market Market.js Profile Profile.js ProfileRow ProfileRow.js SharePage SharePage.js SinglePage Sell Sell.js Send Send.js Transafer Transfer.js Share Share.js SinglePage.js utils Button.js Egg.js Modal.js Photo.js Rate.js Spinner.js corgiAnimation.js corgiPhoto Common.js Rare.js Uncommon.js VeryRare.js config.js context NearContext.js hooks character.js contract.js index.css index.js setupTests.js
# Getting Started with Create React App This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app). ## Available Scripts In the project directory, you can run: ### `yarn start` Runs the app in the development mode.\ Open [http://localhost:3000](http://localhost:3000) to view it in the browser. The page will reload if you make edits.\ You will also see any lint errors in the console. ### `yarn test` Launches the test runner in the interactive watch mode.\ See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. ### `yarn build` Builds the app for production to the `build` folder.\ It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.\ Your app is ready to be deployed! See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. ### `yarn eject` **Note: this is a one-way operation. Once you `eject`, you can’t go back!** If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own. You don’t have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it. ## Learn More You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). To learn React, check out the [React documentation](https://reactjs.org/). ### Code Splitting This section has moved here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting) ### Analyzing the Bundle Size This section has moved here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size) ### Making a Progressive Web App This section has moved here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app) ### Advanced Configuration This section has moved here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration) ### Deployment This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment) ### `yarn build` fails to minify This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify)
maxhr_near--docs-generator
README.md action.yml builder .eslintrc.yml babel.config.js docusaurus.config.js near-api-js.sh near-sdk-js.sh package.json dev-attach.sh dev.sh docker-compose.yml docs-bot README.md api on-release.js app.yml package.json entrypoint.sh shell-scripts funcs.sh test.sh
# docs-generator This is: - A GitHub Action that should run on the docs repo (`near/docs`) - A GitHub app (`./docs-bot`) that should be installed on the docs repo (`near/docs`) ### GitHub Action This is a containerized action (see `Dockerfile`). Inputs: - `source_repo`: Source repo to generate docs for (`near/near-api-js` and others. Or your fork - ex: `maxhr/near--near-api-js`) - `release_version`: The git tag to check out; this should match the release version of the package (`v1.0.0`) - `builder_name`: Name of builder file in `./builder`. Today: `near-api-js`. Soon also: `near-cli | near-sdk-js` - `github_token`: If you run `dev.sh`, this is your Personal Access Token with repo permissions. When running in a GitHub workflow, GitHub provides it automatically as an env var. `entrypoint.sh`: - Pulls source and docs - Builds the docs - the `/builder` dir contains build files that match the `builder_name` input (ex: `builder/near-api-js.sh`) - Creates a PR in the docs repo (the repo that this action runs on) ### GitHub App (Docs Bot) The app (`./docs-bot`) is published on Vercel (https://docs-bot.vercel.app). Its (current) purpose is to trigger `repository_dispatch` in the docs repo. This is done with an App because GitHub doesn't allow triggering `repository_dispatch` cross-repo from inside a workflow, unless you provide a Personal Access Token, which gives too many permissions to the workflow. A GitHub app limits the permissions to only the repo it's installed on. It should be installed on the docs repo, and its `https://docs-bot.vercel.app/api/on-release` endpoint can be called from the `near/near-api-js` (and others) workflow when a new version gets released. This makes it possible to trigger the docs build automatically. You can also invoke the GitHub action (described above) manually with a `workflow_dispatch` event. See the workflows in the docs repo to see how it's configured for manual and automatic listeners. See the workflows in the `near-api-js` repo to see how it's being triggered automatically. The endpoint must receive a secret token `DOCS_BOT_SECRET`. ## Contributing You need a GitHub access token with repo permissions to run `./dev.sh`. Make sure you have it in your `~/.github-token`. `./dev.sh` will run the docker container with the needed params. - `GITHUB_REPOSITORY_OWNER` - should be `near`, or you if you forked - `GITHUB_REPOSITORY` - `near/docs` or your fork - `SOURCE_REPO` - for example `near/near-api-js` - `BUILDER_NAME` - at the moment `near-api-js`, others soon. This will run `builder/near-api-js.sh` - `SOURCE_TAG` - the published package version to check out (ex: `v1.0.0`) - `GITHUB_TOKEN` - access token. GitHub provides it in the Action workflow. For local dev you need a Personal Access Token. `./dev-attach.sh` will attach to the container, without running the entrypoint file. You can use it to run `entrypoint.sh` manually for debugging. # Docs Bot See [README](../README.md) for Docs Generator.
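For reference, a release workflow could notify the bot with a plain HTTP call. The sketch below is an assumption about the request shape — the `repo`/`tag` fields and the way the secret is passed are hypothetical, so check `docs-bot/api/on-release.js` for the real contract; only the endpoint URL and the `DOCS_BOT_SECRET` requirement come from the description above.

```js
// Hypothetical sketch: POST release info to the docs bot.
// Field names and secret handling are assumptions, not the verified API.
const https = require('https');

const body = JSON.stringify({
  repo: 'near/near-api-js',          // corresponds to the source_repo input
  tag: 'v1.0.0',                     // corresponds to the release_version input
  secret: process.env.DOCS_BOT_SECRET,
});

const req = https.request('https://docs-bot.vercel.app/api/on-release', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body),
  },
}, (res) => console.log('docs-bot responded with', res.statusCode));

req.on('error', console.error);
req.write(body);
req.end();
```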
MichaelOdumosu57_proof_of_vibes_near_hackathon
.circleci config.bak.yml config.heroku.yml config.main.yml config.template.yml config.template2.yml config.yml .vscode settings.json README.md apps zero backend flask dev app.py configs.py healthcheck.py my_util.py requirements.txt unit_tests conftest.py run_tests.py test_healthcheck.py test_news.py test_spotify.py wallet_info.py example app.py cart.py configs.py form.py my_init.py my_util.py orders.py products.py requirements.txt runtime.txt users.py devops general_snippets.md frontend AngularAppCurrent .vscode extensions.json launch.json tasks.json README.md angular.json firebase.json karma.conf.circle-ci.js karma.conf.js package.json projects mobile-nav README.md assets media 0.svg karma.conf.js ng-package.json package.json src lib mobile-nav-item mobile-nav-item.component.html mobile-nav-item.component.spec.ts mobile-nav-item.component.ts mobile-nav.component.html mobile-nav.component.spec.ts mobile-nav.component.ts mobile-nav.module.ts public-api.ts test.ts tsconfig.lib.json tsconfig.lib.prod.json tsconfig.spec.json nibls-is-present README.md karma.conf.js ng-package.json package.json src lib nibls-is-present.directive.spec.ts nibls-is-present.directive.ts public-api.ts test.ts tsconfig.lib.json tsconfig.lib.prod.json tsconfig.spec.json src app app-routing.module.ts app.component.html app.component.spec.ts app.component.ts app.module.ts core base base.service.spec.ts base.service.ts config config.service.spec.ts config.service.ts core.module.ts utility test-utils.ts utility.service.spec.ts utility.service.ts utils.ts pages generate-nft generate-nft.component.html generate-nft.component.spec.ts generate-nft.component.ts landing landing-main landing-main.component.html landing-main.component.spec.ts landing-main.component.ts landing-routing.module.ts landing.module.ts team team-main team-main.component.html team-main.component.spec.ts team-main.component.ts team-routing.module.ts team.module.ts vibesmap vibesmap.component.html vibesmap.component.spec.ts vibesmap.component.ts shared components custom-label custom-label.component.html custom-label.component.ts dropdown-option dropdown-option.component.html dropdown-option.component.spec.ts dropdown-option.component.ts footer footer.component.html footer.component.spec.ts footer.component.ts mobile-nav-item-pod mobile-nav-item-pod.component.html mobile-nav-item-pod.component.spec.ts mobile-nav-item-pod.component.ts nav nav.component.html nav.component.spec.ts nav.component.ts notify-banner notify-banner.component.html notify-banner.component.spec.ts notify-banner.component.ts penrose penrose.component.html penrose.component.ts sample-cpnt sample-cpnt.component.html sample-cpnt.component.spec.ts sample-cpnt.component.ts directives scroll-bottom-pagination-directive scroll-bottom-pagination.directive.spec.ts scroll-bottom-pagination.directive.ts services nav nav.service.spec.ts nav.service.ts shared.module.ts store shared.states.ts spotify index.ts spotify.actions.ts spotify.reducers.ts spotify.selectors.ts wml-components functions.ts models.ts wml-card wml-card.component.html wml-card.component.spec.ts wml-card.component.ts wml-card.module.ts wml-components.module.ts wml-dropdown wml-dropdown-option wml-dropdown-option.component.html wml-dropdown-option.component.spec.ts wml-dropdown-option.component.ts wml-dropdown-sample wml-dropdown-sample.component.html wml-dropdown-sample.component.spec.ts wml-dropdown-sample.component.ts wml-dropdown-service wml-dropdown.service.spec.ts wml-dropdown.service.ts wml-dropdown.component.html 
wml-dropdown.component.spec.ts wml-dropdown.component.ts wml-dropdown.module.ts wml-fields wml-fields.component.html wml-fields.component.spec.ts wml-fields.component.ts wml-fields.module.ts wml-label wml-label.component.html wml-label.component.spec.ts wml-label.component.ts wml-form wml-form.component.html wml-form.component.spec.ts wml-form.component.ts wml-form.module.ts wml-input wml-input.component.html wml-input.component.spec.ts wml-input.component.ts wml-input.module.ts assets i18n en.json media mobile_nav 0.svg nav 0.svg 1.svg 2.svg nav_0.svg team_main 6.svg environments environment.dev.ts environment.prod.ts environment.ts helpers automation automation automation.service.spec.ts automation.service.ts template template.component.html template.component.spec.ts template.component.ts template.service.spec.ts template.service.ts index.html main.ts polyfills.ts test.ts staticwebapp.config.json tsconfig.app.json tsconfig.json tsconfig.spec.json logging mySpringProject .mvn wrapper MavenWrapperDownloader.java mvnw.cmd pom.xml src main java com example mySpringProject MySpringProjectApplication.java controller HelloController.java resources log4j2.xml test java com example mySpringProject MySpringProjectApplicationTests.java mobile flutter_app README.md android app src debug AndroidManifest.xml main AndroidManifest.xml res drawable-v21 launch_background.xml drawable launch_background.xml values-night styles.xml values styles.xml profile AndroidManifest.xml ios Runner AppDelegate.swift Assets.xcassets AppIcon.appiconset Contents.json LaunchImage.imageset Contents.json README.md Runner-Bridging-Header.h web index.html manifest.json testing TESTS.md e2e capybara .bundle config.windows.txt billy_helper.rb downloadhelpers.rb play.rb spec_helper.rb target-e2e-circleci.rb target-e2e-dev-circleci.rb template-e2e-circleci.rb ignore run_backend_dev.ps1 run_backend_test.ps1 set_frontend_env.ps1 misc docs application_documentation general_notes README.md notes README.md template README.md tutorials connect_near_wallet_to_web_app MICHAEL README.md creating_nft_from_mintbase YOUR_NAME_TEMPLATE README.md examples IDEAS.md NOTES.md README.md simple-gallery .eslintrc.json README.md config constants.ts hooks useStoreNfts.ts next-env.d.ts next.config.js package-lock.json package.json postcss.config.js queries queries.ts services providers apollo.ts constants.ts styles globals.css tailwind.config.js tsconfig.json types types.ts wallet.types.ts simple-login .eslintrc.json README.md next-env.d.ts next.config.js package-lock.json package.json postcss.config.js services providers constants.ts styles globals.css tailwind.config.js tsconfig.json simple-marketplace .eslintrc.json README.md config constants.ts global.d.ts hooks useNearPrice.ts useStoreNfts.ts useStores.ts useTokenListData.ts lib numbers.ts next-env.d.ts next.config.js package-lock.json package.json postcss.config.js queries fragments.ts marketplace.queries.ts services providers apollo.ts styles globals.css tailwind.config.js tsconfig.json types types.ts wallet.types.ts utils BuyModal.utils.ts index.ts simple-minter .eslintrc.json README.md config constants.ts next-env.d.ts next.config.js package-lock.json package.json postcss.config.js services providers apollo.ts constants.ts styles globals.css tailwind.config.js tsconfig.json types types.ts security_near YOUR_NAME_TEMPLATE README.md template YOUR_NAME_TEMPLATE README.md using_pagoda YOUR_NAME_TEMPLATE README.md issues template BUG_REPORT.md snippets 
check_all_checkboxes_for_developer_roles_in_azure.js
--- name: Simple Minter slug: simple-minter description: Simple Minter on Mintbase framework: Next.js css: Tailwind deployUrl: https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FMintbase%2Fexamples%2Ftree%2Fmain%2Fsimple-minter demoUrl: https://examples-simple-minter.vercel.app/ --- # Simple Minter This examples shows a simple minter on Mintbase. ## Demo https://examples-simple-minter.vercel.app/ ## Try on CodeSandbox [![Edit on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/github/Mintbase/examples/tree/main/simple-minter) ## 🚀 One-Click Deploy Deploy the example using [Vercel](https://vercel.com?utm_source=github&utm_medium=readme): [![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FMintbase%2Fexamples%2Ftree%2Fmain%2Fsimple-minter) ## Getting Started Execute [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app) with [npm](https://docs.npmjs.com/cli/init) or [Yarn](https://yarnpkg.com/lang/en/docs/cli/create/) to bootstrap the example: ```bash npx create-next-app --example https://github.com/Mintbase/examples/tree/main/simple-minter # or yarn create next-app --example https://github.com/Mintbase/examples/tree/main/simple-minter ``` Run Next.js in development mode: ```bash npm install npm run dev # or yarn yarn dev ``` ## Set ENV variables Once that's done, copy the `.env.example` file in this directory to `.env.local` (which will be ignored by Git): ```bash cp .env.example .env.local ``` if you use windows without powershell or cygwin: ```bash copy .env.example .env.local ``` Then open `.env.local` and set the environment variables to match the ones for your Google Optimize account. To get your `api key` visit : [Mintbase Developers Page for Mainnet](https://www.mintbase.io/developer): [Mintbase Developers Page for testnet](https://testnet.mintbase.io/developer): ``` NEXT_PUBLIC_DEVELOPER_KEY=your_mintbase_api_key ``` `NEXT_PUBLIC_NETWORK` could be `testnet` or `mainnet` ``` NEXT_PUBLIC_NETWORK=testnet ``` `NEXT_PUBLIC_STORE_ID` its your store id ``` NEXT_PUBLIC_STORE_ID=hellovirtualworld.mintspace2.testnet ``` ## Extending This project is setup using Next.js + MintBase UI + Tailwind + Apollo + React Hook Form. You can use this project as a reference to build your own, and use or remove any library you think it would suit your needs. ## 🙋‍♀️ Need extra help? [Ask on our Telegram Channel](https://t.me/mintdev) <br/> [Create an Issue](https://github.com/Mintbase/examples/issues) --- name: Simple Login slug: simple-login description: Simple NEAR wallet login framework: Next.js css: Tailwind deployUrl: https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FMintbase%2Fexamples%2Ftree%2Fmain%2Fsimple-login demoUrl: https://examples-simple-login.vercel.app/ --- # Simple Login This examples shows a simple login with NEAR example. 
## Demo https://examples-simple-login.vercel.app/ ## Try on CodeSandbox [![Edit on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/github/Mintbase/examples/tree/main/simple-login) ### One-Click Deploy Deploy the example using [Vercel](https://vercel.com?utm_source=github&utm_medium=readme): [![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FMintbase%2Fexamples%2Ftree%2Fmain%2Fsimple-login) ## Getting Started Execute [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app) with [npm](https://docs.npmjs.com/cli/init) or [Yarn](https://yarnpkg.com/lang/en/docs/cli/create/) to bootstrap the example: ```bash npx create-next-app --example https://github.com/Mintbase/examples/tree/main/simple-login # or yarn create next-app --example https://github.com/Mintbase/examples/tree/main/simple-login ``` Run Next.js in development mode: ```bash npm install npm run dev # or yarn yarn dev ``` Once that's done, copy the `.env.example` file in this directory to `.env.local` (which will be ignored by Git): ```bash cp .env.example .env.local ``` Then open `.env.local` and set the environment variables to match the ones for your Google Optimize account. ## Set ENV variables Once that's done, copy the `.env.example` file in this directory to `.env.local` (which will be ignored by Git): ```bash cp .env.example .env.local ``` if you use windows without powershell or cygwin: ```bash copy .env.example .env.local ``` Then open `.env.local` and set the environment variables to match the ones for your Google Optimize account. `NEXT_PUBLIC_NETWORK` could be `testnet` or `mainnet` ``` NEXT_PUBLIC_NETWORK=testnet ``` `NEXT_PUBLIC_USER_ID` its your near wallet user id ``` NEXT_PUBLIC_USER_ID=mintbase_user_on_near ``` ## Extending This project is setup using Next.js + MintBase UI + Tailwind. You can use this project as a reference to build your own, and use or remove any library you think it would suit your needs. ## 🙋‍♀️ Need extra help? [Ask on our Telegram Channel](https://t.me/mintdev) <br/> [Create an Issue](https://github.com/Mintbase/examples/issues) # Developer key password: !@$#REFGewgsFA app name: proofofvibesapp apiKey: WYa_3su7vJSsoOwI-517b transaction hash: E5t4ASw1JCa5zxC9mx7b3CrQWvLjx8UimM98dMCavR81 block hash: 3ExadRHxwbGXwCK6gr1pbAW837a85j9bCeDmZRuKNhcy Transaction :https://explorer.testnet.near.org/transactions/E5t4ASw1JCa5zxC9mx7b3CrQWvLjx8UimM98dMCavR81 collection name : proof_of_vibes_collection_0 ## NFT storage API key * ## NFT Series Tranaction for 100 NFT's :https://explorer.testnet.near.org/transactions/EekEfZYV1pfX9ik8hHxteTVd9jBJKngmiE1gvXbeFgR9 ## Mapbox sk.eyJ1IjoibWljaGFlbG9kdW1vc3U1NyIsImEiOiJjbDhnbGhxdjQwNzk3M29ueTF1MWY0cDB3In0.iUhWzfPuk632KbAu1A9I6A # Launch Screen Assets You can customize the launch screen with your own desired assets by replacing the image files in this directory. You can also do it by opening your Flutter project's Xcode project with `open ios/Runner.xcworkspace`, selecting `Runner/Assets.xcassets` in the Project Navigator and dropping in the desired images. # flutter_app A new Flutter project. ## Getting Started This project is a starting point for a Flutter application. 
A few resources to get you started if this is your first Flutter project: - [Lab: Write your first Flutter app](https://flutter.dev/docs/get-started/codelab) - [Cookbook: Useful Flutter samples](https://flutter.dev/docs/cookbook) For help getting started with Flutter, view our [online documentation](https://flutter.dev/docs), which offers tutorials, samples, guidance on mobile development, and a full API reference. --- name: Simple Gallery slug: simple-gallery description: Simple Mintbase Gallery framework: Next.js css: Tailwind deployUrl: https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FMintbase%2Fexamples%2Ftree%2Fmain%2Fsimple-gallery demoUrl: https://examples-simple-gallery.vercel.app/ --- # Simple Gallery This examples shows a simple gallery. ## Demo https://examples-simple-gallery.vercel.app/ ## Try on CodeSandbox [![Edit on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/github/Mintbase/examples/tree/main/simple-gallery) ### One-Click Deploy Deploy the example using [Vercel](https://vercel.com?utm_source=github&utm_medium=readme): [![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FMintbase%2Fexamples%2Ftree%2Fmain%2Fsimple-gallery) ## Getting Started Execute [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app) with [npm](https://docs.npmjs.com/cli/init) or [Yarn](https://yarnpkg.com/lang/en/docs/cli/create/) to bootstrap the example: ```bash npx create-next-app --example https://github.com/Mintbase/examples/tree/main/simple-gallery # or yarn create next-app --example https://github.com/Mintbase/examples/tree/main/simple-gallery ``` Run Next.js in development mode: ```bash npm install npm run dev # or yarn yarn dev ``` Once that's done, copy the `.env.example` file in this directory to `.env.local` (which will be ignored by Git): ```bash cp .env.example .env.local ``` ## Set ENV variables Once that's done, copy the `.env.example` file in this directory to `.env.local` (which will be ignored by Git): ```bash cp .env.example .env.local ``` if you use windows without powershell or cygwin: ```bash copy .env.example .env.local ``` To get your `api key` visit : [Mintbase Developers Page for Mainnet](https://www.mintbase.io/developer): [Mintbase Developers Page for testnet](https://testnet.mintbase.io/developer): ``` NEXT_PUBLIC_DEVELOPER_KEY=your_mintbase_api_key `NEXT_PUBLIC_NETWORK` could be `testnet` or `mainnet` ``` NEXT_PUBLIC_NETWORK=testnet ``` `NEXT_PUBLIC_STORE_ID` its your store id ``` NEXT_PUBLIC_STORE_ID=hellovirtualworld.mintspace2.testnet ``` ## Extending This project is setup using Next.js + MintBase UI + Tailwind + Apollo. You can use this project as a reference to build your own, and use or remove any library you think it would suit your needs. ## 🙋‍♀️ Need extra help? [Ask on our Telegram Channel](https://t.me/mintdev) <br/> [Create an Issue](https://github.com/Mintbase/examples/issues) # NiblsIsPresent This library was generated with [Angular CLI](https://github.com/angular/angular-cli) version 14.2.0. ## Code scaffolding Run `ng generate component component-name --project niblsIsPresent` to generate a new component. You can also use `ng generate directive|pipe|service|class|guard|interface|enum|module --project niblsIsPresent`. > Note: Don't forget to add `--project niblsIsPresent` or else it will be added to the default project in your `angular.json` file. 
## Build Run `ng build niblsIsPresent` to build the project. The build artifacts will be stored in the `dist/` directory. ## Publishing After building your library with `ng build niblsIsPresent`, go to the dist folder `cd dist/nibls-is-present` and run `npm publish`. ## Running unit tests Run `ng test niblsIsPresent` to execute the unit tests via [Karma](https://karma-runner.github.io). ## Further help To get more help on the Angular CLI use `ng help` or go check out the [Angular CLI Overview and Command Reference](https://angular.io/cli) page. # pagoda.co * NEAR Lake [ Create your own indexer using near lake](https://console.pagoda.co/indexers) [![CircleCI](https://dl.circleci.com/status-badge/img/gh/NIBLS-Coin-Project/nibls_coin_application/tree/master.svg?style=svg)](https://dl.circleci.com/status-badge/redirect/gh/NIBLS-Coin-Project/nibls_coin_application/tree/master) # Summary ## Projects ## TODO make site look like https://sports.ny.betmgm.com/en/sports https://www.cadillac.com/certified-pre-owned ## General Description ## Features include Only members of the project can read the README.md from the ignore folder ## Issues * Figure out why on zoom in does waterpipe.js not resize the canvas properly the view should be true to the screen * Figure out why you cant have directives come from another library * your getting changed after checked error when you are using async pipe to toggle overlay loading figure out what we can do here * we can't continue with accounts until we get the accounts under the proper organization * privacy policy * checkout https://app.termly.io/user/sign-up # Aspects ## Challenges ## Enjoyed ## Leadership ## Done Different # Resources * [Authentication with python requests](https://www.geeksforgeeks.org/authentication-using-python-requests/) * [Generate Random String Python](https://flexiple.com/python/generate-random-string-python/) * [Testing Flask App](https://github.com/markdouthwaite/minimal-flask-api/blob/main/tests/test_api.py) * [Angular Notification Bar](https://stackblitz.com/edit/angular-notification-bar?file=src%2Fapp%2Fnotification-bar%2Fnotification-bar.service.ts) * [NEWS API org](https://newsapi.org/) * [Marquee3000](https://openbase.com/js/marquee3000) * [Free adobe illustrator](designstripe.com) * [Deploy to azure website](https://dev.to/azure/get-started-with-the-new-azure-static-web-apps-cli-mm3) * [Trello Board](https://trello.com/b/6wouCkwX/nibls-website) ## Spotify * [Spotify CSS colors](https://usbrandcolors.com/spotify-colors/) * [Spotify authorize scopes](https://developer.spotify.com/documentation/general/guides/authorization/scopes/) ## Snippets * general snippets found in planning in the trello workspace ## Docs ## Media <a href="https://www.flaticon.com/free-icons/linkedin" title="linkedin icons">Linkedin icons created by Fathema Khanom - Flaticon</a> <!-- bunch of links --> apps\zero\frontend\AngularAppCurrent\src\assets\media\nav_0.png <a href="https://www.flaticon.com/free-icons/user" title="user icons">User icons created by Freepik - Flaticon</a> # Metrics ## Users ## Netowrk ## Storage # Stack ## Frontend * Angular v14.2.3 ### Structure ## Backend * python v3.10.7 * flask v 2.2.4 * to do testing run the the backend_test_env.ps1 in the p folder ## Testing * Docker, (tes in docker containers from linux VM) v20.10.7 ### E2E ## Hosting ### frontend hosting * azure static web apps - * preview - https://ambitious-sand-0ab399110-preview.centralus.1.azurestaticapps.net/ * production - 
https://ambitious-sand-0ab399110.1.azurestaticapps.net/ * niblsinc.com * hosting - https://nationalintelligentblockchainleaguesports.wordpress.com/ ### backend hosting * azure app service - * https://nibls-flask-backend-0.azurewebsites.net/healthz ### Logging * log4j ## DevOps ### CI/CD/CM * CircleCi ### Version Control * Github ## Communication Linkedin Slack * intro page phrases * Find the problem, find the solution. * Demo, Design, Develop Deploy * Believe in the mission. Believe in the team. * Make it work for every possible scenario. * When faced with the greatest of pressures, remain calm. # MobileNav This library was generated with [Angular CLI](https://github.com/angular/angular-cli) version 14.2.0. ## Code scaffolding Run `ng generate component component-name --project mobileNav` to generate a new component. You can also use `ng generate directive|pipe|service|class|guard|interface|enum|module --project mobileNav`. > Note: Don't forget to add `--project mobileNav` or else it will be added to the default project in your `angular.json` file. ## Build Run `ng build mobileNav` to build the project. The build artifacts will be stored in the `dist/` directory. ## Publishing After building your library with `ng build mobileNav`, go to the dist folder `cd dist/mobile-nav` and run `npm publish`. ## Running unit tests Run `ng test mobileNav` to execute the unit tests via [Karma](https://karma-runner.github.io). ## Further help To get more help on the Angular CLI use `ng help` or go check out the [Angular CLI Overview and Command Reference](https://angular.io/cli) page. --- name: Simple Marketplace slug: simple-marketplace description: Simple Marketplace on MintBase framework: Next.js css: Tailwind deployUrl: https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FMintbase%2Fexamples%2Ftree%2Fmain%2Fsimple-marketplace demoUrl: https://examples-simple-marketplace.vercel.app/ --- # Simple Marketplace This examples shows a simple marketplace. 
## Demo https://examples-simple-marketplace.vercel.app/ ## Requirements - [Setup a Near Wallet](https://wallet.testnet.near.org/) - [Setup a Mintbase store aka Smart Contract](https://www.youtube.com/watch?v=Ck2EPrtuxa8) and [Mint NFTS](https://www.youtube.com/watch?v=6L_aAnJc3hM): - [Get a Developer Key](https://testnet.mintbase.io/developer) ## Try on CodeSandbox [![Edit on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/github/Mintbase/examples/tree/main/simple-marketplace) ## One-Click Deploy Deploy the example using [Vercel](https://vercel.com?utm_source=github&utm_medium=readme): [![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FMintbase%2Fexamples%2Ftree%2Fmain%2Fsimple-marketplace) ## Getting Started Execute [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app) with [npm](https://docs.npmjs.com/cli/init) or [Yarn](https://yarnpkg.com/lang/en/docs/cli/create/) to bootstrap the example: ```bash npx create-next-app --example https://github.com/Mintbase/examples/tree/main/simple-marketplace # or yarn create next-app --example https://github.com/Mintbase/examples/tree/main/simple-marketplace ``` Run Next.js in development mode: ```bash npm install npm run dev # or yarn yarn dev ``` ## Set ENV variables Once that's done, copy the `.env.example` file in this directory to `.env.local` (which will be ignored by Git): ```bash cp .env.example .env.local ``` if you use windows without powershell or cygwin: ```bash copy .env.example .env.local ``` To get your `api key` visit : [Mintbase Developers Page for Mainnet](https://www.mintbase.io/developer): [Mintbase Developers Page for testnet](https://testnet.mintbase.io/developer): ``` NEXT_PUBLIC_DEVELOPER_KEY=your_mintbase_api_key ``` `NEXT_PUBLIC_NETWORK` could be `testnet` or `mainnet` ``` NEXT_PUBLIC_NETWORK=testnet ``` `NEXT_PUBLIC_STORES` its your stores ids ``` NEXT_PUBLIC_STORES=latium.mintspace2.testnet,mufasa.mintspace2.testnet ``` ## Extending This project is setup using Next.js + MintBase UI + Tailwind + Apollo. You can use this project as a reference to build your own, and use or remove any library you think it would suit your needs. ## 🙋‍♀️ Need extra help? 
[Ask on our Telegram Channel](https://t.me/mintdev) <br/> [Create an Issue](https://github.com/Mintbase/examples/issues) <p align="center"> <a href="https://mintbase.io"> <img src="./assets/mb-logo.png" style="object-fit: cover"> <h3 align="center">Examples</h3> </a> </p> ## Index - [Simple Login](./simple-login) - [Simple Gallery](./simple-gallery) - [Simple Marketplace](./simple-marketplace) - [Simple Minter](./simple-minter) * always want to audit your smart contract Isha Tyagi near on telegram * smart contact audit is expensive * 75,000 - 200,000k dollars * they have grants https://airtable.com/shrqu32NXPKjFYsrv # Robert * public,private, internal * senstive fn are not exposed unintentionally * private fn - will not call if not called by the smart contract itself * interal - only called by other private function cant be used in near contract * assert_one_yoc- make user manaually approve * enable overflow-checks * cross- contract calls * everything is async, smart contract happens async * signer -sign transaction * predecessor - one that called the contract * watch out for generating more money from self transfaction * rmbr to get slide dec # NEAR Developer Acct NEAR wallet security code today width foam foam amateur portion bridge inquiry river adapt amazing uphold ![1663874128481](image/README/1663874128481.png) # NEAR Customer Acct * michael_odumosu_29.near * developer account and customer accout are the same ## Near acct [test acct](https://testnet.mynearwallet.com/) * michaelodumosu29.testnet tilt mimic dignity daring scissors time slow tool load better again extra ![1663956543496](image/README/1663956543496.png) # Cool Factor * how you wake up how you carry yourself vibe check - web 1 land paginge, qrcode access to mint nft web2 web3 -voting rights * need protocols to help establish equality * reach out to larger brands * if community comes out vibe rating come more * now your can come to private events * money should not be a barrier its cool factor * what are my connections that should be your token in digital value # Hackathon * build hacks ## Features Wallets Near wallets, decentrialized identity / NFT * vibecoin next step social coin - * vibecheck * vibetoken * proof of vibes
kuutamolabs_rust-lightning
.github workflows build.yml ARCH.md ^ \ CHANGELOG.md CONTRIBUTING.md Cargo.toml GLOSSARY.md LICENSE.md README.md SECURITY.md bench Cargo.toml README.md benches bench.rs ci check-compiles.sh check-each-commit.sh ci-tests.sh codecov.yml fuzz Cargo.toml README.md ci-fuzz.sh src base32.rs bech32_parse.rs bin base32_target.rs bech32_parse_target.rs chanmon_consistency_target.rs chanmon_deser_target.rs fromstr_to_netaddress_target.rs full_stack_target.rs gen_target.sh indexedmap_target.rs invoice_deser_target.rs invoice_request_deser_target.rs msg_accept_channel_target.rs msg_accept_channel_v2_target.rs msg_announcement_signatures_target.rs msg_channel_announcement_target.rs msg_channel_details_target.rs msg_channel_ready_target.rs msg_channel_reestablish_target.rs msg_channel_update_target.rs msg_closing_signed_target.rs msg_commitment_signed_target.rs msg_decoded_onion_error_packet_target.rs msg_error_message_target.rs msg_funding_created_target.rs msg_funding_signed_target.rs msg_gossip_timestamp_filter_target.rs msg_init_target.rs msg_node_announcement_target.rs msg_open_channel_target.rs msg_open_channel_v2_target.rs msg_ping_target.rs msg_pong_target.rs msg_query_channel_range_target.rs msg_query_short_channel_ids_target.rs msg_reply_channel_range_target.rs msg_reply_short_channel_ids_end_target.rs msg_revoke_and_ack_target.rs msg_shutdown_target.rs msg_splice_ack_target.rs msg_splice_locked_target.rs msg_splice_target.rs msg_stfu_target.rs msg_tx_abort_target.rs msg_tx_ack_rbf_target.rs msg_tx_add_input_target.rs msg_tx_add_output_target.rs msg_tx_complete_target.rs msg_tx_init_rbf_target.rs msg_tx_remove_input_target.rs msg_tx_remove_output_target.rs msg_tx_signatures_target.rs msg_update_add_htlc_target.rs msg_update_fail_htlc_target.rs msg_update_fail_malformed_htlc_target.rs msg_update_fee_target.rs msg_update_fulfill_htlc_target.rs offer_deser_target.rs onion_hop_data_target.rs onion_message_target.rs peer_crypt_target.rs process_network_graph_target.rs refund_deser_target.rs router_target.rs target_template.txt zbase32_target.rs chanmon_consistency.rs chanmon_deser.rs fromstr_to_netaddress.rs full_stack.rs indexedmap.rs invoice_deser.rs invoice_request_deser.rs lib.rs msg_targets gen_target.sh mod.rs msg_accept_channel.rs msg_accept_channel_v2.rs msg_announcement_signatures.rs msg_channel_announcement.rs msg_channel_details.rs msg_channel_ready.rs msg_channel_reestablish.rs msg_channel_update.rs msg_closing_signed.rs msg_commitment_signed.rs msg_decoded_onion_error_packet.rs msg_error_message.rs msg_funding_created.rs msg_funding_signed.rs msg_gossip_timestamp_filter.rs msg_init.rs msg_node_announcement.rs msg_open_channel.rs msg_open_channel_v2.rs msg_ping.rs msg_pong.rs msg_query_channel_range.rs msg_query_short_channel_ids.rs msg_reply_channel_range.rs msg_reply_short_channel_ids_end.rs msg_revoke_and_ack.rs msg_shutdown.rs msg_splice.rs msg_splice_ack.rs msg_splice_locked.rs msg_stfu.rs msg_target_template.txt msg_tx_abort.rs msg_tx_ack_rbf.rs msg_tx_add_input.rs msg_tx_add_output.rs msg_tx_complete.rs msg_tx_init_rbf.rs msg_tx_remove_input.rs msg_tx_remove_output.rs msg_tx_signatures.rs msg_update_add_htlc.rs msg_update_fail_htlc.rs msg_update_fail_malformed_htlc.rs msg_update_fee.rs msg_update_fulfill_htlc.rs msg_warning_message.rs utils.rs offer_deser.rs onion_hop_data.rs onion_message.rs peer_crypt.rs process_network_graph.rs refund_deser.rs router.rs utils mod.rs test_logger.rs test_persister.rs zbase32.rs targets.h lightning-background-processor Cargo.toml src lib.rs 
lightning-block-sync Cargo.toml src convert.rs gossip.rs http.rs init.rs lib.rs poll.rs rest.rs rpc.rs test_utils.rs utils.rs lightning-custom-message Cargo.toml src lib.rs lightning-invoice Cargo.toml README.md fuzz Cargo.toml ci-fuzz.sh fuzz_targets serde_data_part.rs src de.rs lib.rs payment.rs ser.rs sync.rs tb.rs utils.rs tests ser_de.rs lightning-net-tokio Cargo.toml src lib.rs lightning-persister Cargo.toml src fs_store.rs lib.rs test_utils.rs utils.rs lightning-rapid-gossip-sync Cargo.toml README.md src error.rs lib.rs processing.rs lightning-transaction-sync Cargo.toml src common.rs error.rs esplora.rs lib.rs tests integration_tests.rs lightning Cargo.toml src blinded_path message.rs mod.rs payment.rs utils.rs chain chaininterface.rs chainmonitor.rs channelmonitor.rs mod.rs onchaintx.rs package.rs transaction.rs events bump_transaction.rs mod.rs lib.rs ln async_signer_tests.rs blinded_payment_tests.rs chan_utils.rs chanmon_update_fail_tests.rs channel.rs channel_id.rs channelmanager.rs features.rs functional_test_utils.rs functional_tests.rs inbound_payment.rs mod.rs monitor_tests.rs msgs.rs onion_route_tests.rs onion_utils.rs outbound_payment.rs payment_tests.rs peer_channel_encryptor.rs peer_handler.rs priv_short_conf_tests.rs reload_tests.rs reorg_tests.rs script.rs shutdown_tests.rs wire.rs offers invoice.rs invoice_error.rs invoice_request.rs merkle.rs mod.rs offer.rs parse.rs payer.rs refund.rs signer.rs test_utils.rs onion_message functional_tests.rs messenger.rs mod.rs offers.rs packet.rs routing gossip.rs mod.rs router.rs scoring.rs test_utils.rs utxo.rs sign mod.rs type_resolver.rs sync debug_sync.rs fairrwlock.rs mod.rs nostd_sync.rs test_lockorder_checks.rs util atomic_counter.rs base32.rs byte_utils.rs chacha20.rs chacha20poly1305rfc.rs config.rs crypto.rs errors.rs fuzz_wrappers.rs indexed_map.rs invoice.rs logger.rs macro_logger.rs message_signing.rs mod.rs persist.rs poly1305.rs scid_utils.rs ser.rs ser_macros.rs string.rs test_channel_signer.rs test_utils.rs time.rs transaction_utils.rs wakers.rs msrv-no-dev-deps-check Cargo.toml src lib.rs no-std-check Cargo.toml src lib.rs pending_changelog 113-channel-ser-compat.txt rustfmt.toml
# lightning-invoice [![Docs.rs](https://docs.rs/lightning-invoice/badge.svg)](https://docs.rs/lightning-invoice/) This repo provides data structures for BOLT 11 lightning invoices and functions to parse and serialize these from and to bech32. **Please be sure to run the test suite since we need to check assumptions regarding `SystemTime`'s bounds on your platform. You can also call `check_platform` on startup or in your test suite to do so.** # lightning-rapid-gossip-sync This crate exposes functionality for rapid gossip graph syncing, aimed primarily at mobile clients. Its server counterpart is the [rapid-gossip-sync-server](https://github.com/lightningdevkit/rapid-gossip-sync-server) repository. ## Mechanism The (presumed) server sends a compressed gossip response containing gossip data. The gossip data is formatted compactly, omitting signatures and opportunistically incremental where previous channel updates are known. Essentially, the serialization structure is as follows: 1. Fixed prefix bytes `76, 68, 75, 1` (the first three bytes are ASCII for `LDK`) - The purpose of this prefix is to identify the serialization format, should other rapid gossip sync formats arise in the future - The fourth byte is the protocol version in case our format gets updated 2. Chain hash (32 bytes) 3. Latest seen timestamp (`u32`) 4. An unsigned int indicating the number of node IDs to follow 5. An array of compressed node ID pubkeys (all pubkeys are presumed to be standard compressed 33-byte-serializations) 6. An unsigned int indicating the number of channel announcement messages to follow 7. An array of significantly stripped down customized channel announcements 8. An unsigned int indicating the number of channel update messages to follow 9. A series of default values used for non-incremental channel updates - The values are defined as follows: 1. `default_cltv_expiry_delta` 2. `default_htlc_minimum_msat` 3. `default_fee_base_msat` 4. `default_fee_proportional_millionths` 5. `default_htlc_maximum_msat` (`u64`, and if the default is no maximum, `u64::MAX`) - The defaults are calculated by the server based on the frequency among non-incremental updates within a given delta set 10. An array of customized channel updates You will also notice that `NodeAnnouncement` messages are omitted altogether as the node IDs are implicitly extracted from the channel announcements and updates. The data is then applied to the current network graph, artificially dated to the timestamp of the latest seen message less one week, be it an announcement or an update, from the server's perspective. The network graph should not be pruned until the graph sync completes. ### Custom Channel Announcement To achieve compactness and avoid data repetition, we're sending a significantly stripped down version of the channel announcement message, which contains only the following data: 1. `channel_features`: `u16` + `n`, where `n` is the number of bytes indicated by the first `u16` 2. `short_channel_id`: `CompactSize` (incremental `CompactSize` deltas starting from 0) 3. `node_id_1_index`: `CompactSize` (index of node id within the previously sent sequence) 4. `node_id_2_index`: `CompactSize` (index of node id within the previously sent sequence) ### Custom Channel Update For the purpose of rapid syncing, we have deviated from the channel update format specified in BOLT 7 significantly. Our custom channel updates are structured as follows: 1. `short_channel_id`: `CompactSize` (incremental `CompactSize` deltas starting at 0) 2. 
`custom_channel_flags`: `u8` 3. `update_data` Specifically, our custom channel flags break down like this: | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 | |---------------------|----|----|----|---|---|------------------|-----------| | Incremental update? | | | | | | Disable channel? | Direction | If the most significant bit is set to `1`, indicating an incremental update, the intermediate bit flags assume the following meaning: | 64 | 32 | 16 | 8 | 4 | |---------------------------------|---------------------------------|-----------------------------|-------------------------------------------|---------------------------------| | `cltv_expiry_delta` has changed | `htlc_minimum_msat` has changed | `fee_base_msat` has changed | `fee_proportional_millionths` has changed | `htlc_maximum_msat` has changed | If the most significant bit is set to `0`, the meaning is almost identical, except instead of a change, the flags now represent a deviation from the defaults sent at the beginning of the update sequence. In both cases, `update_data` only contains the fields that are indicated by the channel flags to be non-default or to have mutated. (A small sketch decoding this flags byte appears at the end of this section.) ## Delta Calculation The way a server is meant to calculate this rapid gossip sync data is by taking the latest time any change, be it an announcement or an update, was seen. That timestamp is included in each rapid sync message, so all the client needs to do is cache one variable. If a particular channel update had never occurred before, the full update is sent. If a channel has had updates prior to the provided timestamp, the latest update prior to the timestamp is taken as a reference, and the delta is calculated against it. Depending on whether the rapid sync message is calculated on the fly or a snapshotted version is returned, intermediate changes between the latest update seen by the client and the latest update broadcast on the network may be taken into account when calculating the delta. ## Performance Given the primary purpose of this utility is a faster graph sync, we thought it might be helpful to provide some examples of various delta sets. These examples were calculated as of May 19th 2022 with a network graph composed of 80,000 channel announcements and 160,000 directed channel updates. | Full sync | | |-----------------------------|--------| | Message Length | 4.7 MB | | Gzipped Message Length | 2.0 MB | | Client-side Processing Time | 1.4 s | | Week-old sync | | |-----------------------------|--------| | Message Length | 2.7 MB | | Gzipped Message Length | 862 kB | | Client-side Processing Time | 907 ms | | Day-old sync | | |-----------------------------|---------| | Message Length | 191 kB | | Gzipped Message Length | 92.8 kB | | Client-side Processing Time | 196 ms | # Fuzzing Fuzz tests generate a ton of random parameter arguments to the program and then validate that none cause it to crash. ## How does it work? Typically, CI will run `ci-fuzz.sh` on one of the environments the automated tests are configured for. Fuzzing is only effective with a lot of CPU time, so if crash scenarios are discovered on CI despite its limited runtime, the crash is likely easy to trigger. ## How do I run fuzz tests locally? We support multiple fuzzing engines such as `honggfuzz`, `libFuzzer` and `AFL`. You typically won't need to run the entire suite of different fuzzing tools. For local execution, `honggfuzz` should be more than sufficient.
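Returning to the rapid gossip sync wire format above: the following is a minimal, self-contained Rust sketch that checks the fixed `76, 68, 75, 1` prefix and decodes a `custom_channel_flags` byte according to the tables. The names and structure here are assumptions for illustration only; the crate's actual parsing lives inside `lightning-rapid-gossip-sync` itself (see `processing.rs` in the tree above).

```rust
/// Illustrative-only decoding helpers for the rapid gossip sync format
/// described above; these are NOT the crate's real API.

/// Check the fixed prefix `76, 68, 75, 1` ("LDK" plus a version byte) and
/// return the protocol version if the prefix matches.
fn parse_prefix(data: &[u8]) -> Option<u8> {
    match data {
        [b'L', b'D', b'K', version, ..] => Some(*version),
        _ => None,
    }
}

/// Decoded view of the `custom_channel_flags` byte.
#[derive(Debug)]
struct CustomChannelFlags {
    incremental_update: bool,                  // bit 128
    cltv_expiry_delta_changed: bool,           // bit 64 (or deviates from default)
    htlc_minimum_msat_changed: bool,           // bit 32
    fee_base_msat_changed: bool,               // bit 16
    fee_proportional_millionths_changed: bool, // bit 8
    htlc_maximum_msat_changed: bool,           // bit 4
    disable_channel: bool,                     // bit 2
    direction: bool,                           // bit 1
}

fn decode_custom_channel_flags(byte: u8) -> CustomChannelFlags {
    CustomChannelFlags {
        incremental_update: (byte & 128) != 0,
        cltv_expiry_delta_changed: (byte & 64) != 0,
        htlc_minimum_msat_changed: (byte & 32) != 0,
        fee_base_msat_changed: (byte & 16) != 0,
        fee_proportional_millionths_changed: (byte & 8) != 0,
        htlc_maximum_msat_changed: (byte & 4) != 0,
        disable_channel: (byte & 2) != 0,
        direction: (byte & 1) != 0,
    }
}

fn main() {
    assert_eq!(parse_prefix(&[76, 68, 75, 1, 0xde, 0xad]), Some(1));
    // 0b1101_0001: incremental update; cltv_expiry_delta and fee_base_msat
    // changed; direction bit set.
    println!("{:?}", decode_custom_channel_flags(0b1101_0001));
}
```

When bit 128 is set the remaining flag bits mean "has changed since the reference update"; when it is clear they mean "differs from the defaults sent in step 9 of the structure above".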
### Setup #### Honggfuzz To install `honggfuzz`, simply run ```shell cargo update cargo install --force honggfuzz ``` In some environments, you may want to pin the honggfuzz version to `0.5.52`: ```shell cargo update -p honggfuzz --precise "0.5.52" cargo install --force honggfuzz --version "0.5.52" ``` #### cargo-fuzz / libFuzzer To install `cargo-fuzz`, simply run ```shell cargo update cargo install --force cargo-fuzz ``` ### Execution #### Honggfuzz To run fuzzing using `honggfuzz`, do ```shell export CPU_COUNT=1 # replace as needed export HFUZZ_BUILD_ARGS="--features honggfuzz_fuzz" export HFUZZ_RUN_ARGS="-n $CPU_COUNT --exit_upon_crash" export TARGET="msg_ping_target" # replace with the target to be fuzzed cargo hfuzz run $TARGET ``` (Or, for a prettier output, replace the last line with `cargo --color always hfuzz run $TARGET`.) #### cargo-fuzz / libFuzzer To run fuzzing using `cargo-fuzz / libFuzzer`, run ```shell rustup install nightly # Note: libFuzzer requires a nightly version of rust. cargo +nightly fuzz run --features "libfuzzer_fuzz" msg_ping_target ``` Note: If you encounter a `SIGKILL` during run/build check for OOM in kernel logs and consider increasing RAM size for VM. If you wish to just generate fuzzing binary executables for `libFuzzer` and not run them: ```shell cargo +nightly fuzz build --features "libfuzzer_fuzz" msg_ping_target # Generates binary artifact in path ./target/aarch64-unknown-linux-gnu/release/msg_ping_target # Exact path depends on your system architecture. ``` You can upload the build artifact generated above to `ClusterFuzz` for distributed fuzzing. ### List Fuzzing Targets To see a list of available fuzzing targets, run: ```shell ls ./src/bin/ ``` ## A fuzz test failed, what do I do? You're trying to create a PR, but need to find the underlying cause of that pesky fuzz failure blocking the merge? Worry not, for this is easily traced. If your output log looks like this: ``` Size:639 (i,b,hw,ed,ip,cmp): 0/0/0/0/0/1, Tot:0/0/0/2036/5/28604 Seen a crash. Terminating all fuzzing threads … # a lot of lines in between <0x0000555555565559> [func:UNKNOWN file: line:0 module:./rust-lightning/fuzz/hfuzz_target/x86_64-unknown-linux-gnu/release/full_stack_target] <0x0000000000000000> [func:UNKNOWN file: line:0 module:UNKNOWN] ===================================================================== 2d3136383734090101010101010101010101010101010101010101010101 010101010100040101010101010101010101010103010101010100010101 0069d07c319a4961 The command "if [ "$(rustup show | grep default | grep stable)" != "" ]; then cd fuzz && cargo test --verbose && ./ci-fuzz.sh; fi" exited with 1. ``` Note that the penultimate stack trace line ends in `release/full_stack_target]`. That indicates that the failing target was `full_stack`. 
To reproduce the error locally, simply copy the hex, and run the following from the `fuzz` directory: ```shell export TARGET="full_stack" # adjust for your output export HEX="2d3136383734090101010101010101010101010101010101010101010101\ 010101010100040101010101010101010101010103010101010100010101\ 0069d07c319a4961" # adjust for your output mkdir -p ./test_cases/$TARGET echo $HEX | xxd -r -p > ./test_cases/$TARGET/any_filename_works export RUST_BACKTRACE=1 export RUSTFLAGS="--cfg=fuzzing" cargo test ``` Note that if the fuzz test failed locally, moving the offending run's trace to the `test_cases` folder should also do the trick; simply replace the `echo $HEX |` line above with (the trace file name is of course a bit longer than in the example): ```shell mv hfuzz_workspace/fuzz_target/SIGABRT.PC.7ffff7e21ce1.STACK.[…].fuzz ./test_cases/$TARGET/ ``` This will reproduce the failing fuzz input and yield a usable stack trace. ## How do I add a new fuzz test? 1. The easiest approach is to take one of the files in `fuzz/src/`, such as `process_network_graph.rs`, and duplicate it, renaming the new file to something more suitable. For the sake of example, let's call the new fuzz target we're creating `my_fuzzy_experiment`. 2. In the newly created file `fuzz/src/my_fuzzy_experiment.rs`, run a string substitution of `process_network_graph` to `my_fuzzy_experiment`, such that the three methods in the file are `do_test`, `my_fuzzy_experiment_test`, and `my_fuzzy_experiment_run`. 3. Adjust the body (not the signature!) of `do_test` as necessary for the new fuzz test. 4. In `fuzz/src/bin/gen_target.sh`, add a line reading `GEN_TEST my_fuzzy_experiment` to the first group of `GEN_TEST` lines (starting in line 9). 5. If your test relies on a new local crate, add that crate as a dependency to `fuzz/Cargo.toml`. 6. In `fuzz/src/lib.rs`, add the line `pub mod my_fuzzy_experiment`. Additionally, if you added a new crate dependency, add the `extern crate […]` import line. 7. Run `fuzz/src/bin/gen_target.sh`. 8. There is no step eight: happy fuzzing! Rust-Lightning ============== [![Crate](https://img.shields.io/crates/v/lightning.svg?logo=rust)](https://crates.io/crates/lightning) [![Documentation](https://img.shields.io/static/v1?logo=read-the-docs&label=docs.rs&message=lightning&color=informational)](https://docs.rs/lightning/) [![Safety Dance](https://img.shields.io/badge/unsafe-forbidden-success.svg)](https://github.com/rust-secure-code/safety-dance/) [LDK](https://lightningdevkit.org)/`rust-lightning` is a highly performant and flexible implementation of the Lightning Network protocol. The primary crate, `lightning`, is runtime-agnostic. Data persistence, chain interactions, and networking can be provided by LDK's [sample modules](#crates), or you may provide your own custom implementations. More information is available in the [`About`](#about) section. Status ------ The project implements all of the [BOLT specifications](https://github.com/lightning/bolts), and has been in production use since 2021. As with any Lightning implementation, care and attention to detail is important for safe deployment. Communications for `rust-lightning` and Lightning Development Kit happen through our LDK [Discord](https://discord.gg/5AcknnMfBw) channels. Crates ----------- 1. [lightning](./lightning) The core of the LDK library, implements the Lightning protocol, channel state machine, and on-chain logic. Supports `no-std` and exposes only relatively low-level interfaces. 2. 
[lightning-background-processor](./lightning-background-processor) Utilities to perform required background tasks for Rust Lightning. 3. [lightning-block-sync](./lightning-block-sync) Utilities to fetch the chain data from a block source and feed them into Rust Lightning. 4. [lightning-invoice](./lightning-invoice) Data structures to parse and serialize [BOLT #11](https://github.com/lightning/bolts/blob/master/11-payment-encoding.md) Lightning invoices. 5. [lightning-net-tokio](./lightning-net-tokio) Implementation of the `rust-lightning` network stack using the [Tokio](https://github.com/tokio-rs/tokio) `async` runtime. For `rust-lightning` clients which wish to make direct connections to Lightning P2P nodes, this is a simple alternative to implementing the required network stack, especially for those already using Tokio. 6. [lightning-persister](./lightning-persister) Implements utilities to manage `rust-lightning` channel data persistence and retrieval. Persisting channel data is crucial to avoiding loss of channel funds. 7. [lightning-rapid-gossip-sync](./lightning-rapid-gossip-sync) Client for rapid gossip graph syncing, aimed primarily at mobile clients. About ----------- LDK/`rust-lightning` is a generic library that allows you to build a Lightning node without needing to worry about getting all of the Lightning state machine, routing, and on-chain punishment code (and other chain interactions) exactly correct. Note that LDK isn't, in itself, a node. For an out-of-the-box Lightning node based on LDK, see [Sensei](https://l2.technology/sensei). However, if you want to integrate Lightning with custom features such as your own chain sync, key management, data storage/backup logic, etc., LDK is likely your best option. Some `rust-lightning` utilities such as those in [`chan_utils`](./lightning/src/ln/chan_utils.rs) are also suitable for use in non-LN Bitcoin applications such as Discreet Log Contracts (DLCs) and bulletin boards. A sample node which fetches blockchain data and manages on-chain funds via the Bitcoin Core RPC/REST interface is available [here](https://github.com/lightningdevkit/ldk-sample/). The individual pieces of that demo are composable, so you can pick the off-the-shelf parts you want and replace the rest. In general, `rust-lightning` does not provide (but LDK has implementations of): * on-disk storage - you can store the channel state any way you want - whether Google Drive/iCloud, a local disk, any key-value store/database/a remote server, or any combination of them - we provide a clean API that provides objects which can be serialized into simple binary blobs, and stored in any way you wish. * blockchain data - we provide a simple `block_connected`/`block_disconnected` API which you provide block headers and transaction information to. We also provide an API for getting information about transactions we wish to be informed of, which is compatible with Electrum server requests/neutrino filtering/etc. * UTXO management - RL/LDK owns on-chain funds as long as they are claimable as part of a Lightning output which can be contested - once a channel is closed and all on-chain outputs are spendable only by the user, we provide users notifications that a UTXO is "theirs" again and it is up to them to spend it as they wish. Additionally, channel funding is accomplished with a generic API which notifies users of the output which needs to appear on-chain, which they can then create a transaction for. Once a transaction is created, we handle the rest. 
This is a large part of our API's goals - making it easier to integrate Lightning into existing on-chain wallets which have their own on-chain logic - without needing to move funds in and out of a separate Lightning wallet with on-chain transactions and a separate private key system. * networking - to enable a user to run a full Lightning node on an embedded machine, we don't specify exactly how to connect to another node at all! We provide a default implementation which uses TCP sockets, but, e.g., if you wanted to run your full Lightning node on a hardware wallet, you could, by piping the Lightning network messages over USB/serial and then sending them in a TCP socket from another machine. * private keys - again we have "default implementations", but users can choose to provide private keys to RL/LDK in any way they wish following a simple API. We even support a generic API for signing transactions, allowing users to run RL/LDK without any private keys in memory/putting private keys only on hardware wallets. LDK's customizability was presented about at Advancing Bitcoin in February 2020: https://vimeo.com/showcase/8372504/video/412818125 Design Goal ----------- The goal is to provide a fully-featured and incredibly flexible Lightning implementation, allowing users to decide how they wish to use it. With that in mind, everything should be exposed via simple, composable APIs. More information about `rust-lightning`'s flexibility is provided in the `About` section above. For security reasons, do not add new dependencies. Really do not add new non-optional/non-test/non-library dependencies. Really really do not add dependencies with dependencies. Do convince Andrew to cut down dependency usage in `rust-bitcoin`. Rust-Lightning vs. LDK (Lightning Development Kit) ------------- `rust-lightning` refers to the core `lightning` crate within this repo, whereas LDK encompasses `rust-lightning` and all of its sample modules and crates (e.g. the `lightning-persister` crate), language bindings, sample node implementation(s), and other tools built around using `rust-lightning` for Lightning integration or building a Lightning node. Tagline ------- *"Rust-Lightning, not Rusty's Lightning!"* Contributing ------------ Contributors are warmly welcome, see [CONTRIBUTING.md](CONTRIBUTING.md). Project Architecture --------------------- For a `rust-lightning` high-level API introduction, see [ARCH.md](ARCH.md). License is either Apache-2.0 or MIT, at the option of the user (ie dual-license Apache-2.0 and MIT). This crate uses criterion to benchmark various LDK functions. It can be run as `RUSTFLAGS=--cfg=ldk_bench cargo bench`. For routing or other HashMap-bottlenecked functions, the `hashbrown` feature should also be benchmarked.
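As a concrete illustration of the listener-style chain interface described in the About bullets above (the application pushes blocks into the library via `block_connected`/`block_disconnected` rather than the library syncing the chain itself), here is a schematic sketch. It is deliberately not the real LDK trait, whose definitions live under `lightning::chain`; the trait and struct below are invented purely for illustration.

```rust
/// Schematic only: a made-up listener trait mirroring the push-based
/// `block_connected`/`block_disconnected` flow described above. The real
/// interfaces live in the `lightning::chain` module.
trait ChainListener {
    /// Called for every new block the application sees, in chain order.
    fn block_connected(&mut self, block_hash: [u8; 32], height: u32);
    /// Called when a previously connected block is reorged out.
    fn block_disconnected(&mut self, block_hash: [u8; 32], height: u32);
}

/// Toy listener that only tracks the current chain tip; a real integration
/// would forward these calls to the channel manager and chain monitor.
struct TipTracker {
    tip: Option<([u8; 32], u32)>,
}

impl ChainListener for TipTracker {
    fn block_connected(&mut self, block_hash: [u8; 32], height: u32) {
        self.tip = Some((block_hash, height));
    }
    fn block_disconnected(&mut self, block_hash: [u8; 32], height: u32) {
        // Drop the tip if the disconnected block is the one we recorded.
        if self.tip == Some((block_hash, height)) {
            self.tip = None;
        }
    }
}

fn main() {
    let mut listener = TipTracker { tip: None };
    listener.block_connected([0u8; 32], 800_000);
    listener.block_disconnected([0u8; 32], 800_000);
    assert!(listener.tip.is_none());
}
```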
NEAR-Analytics_devSnoopy
.gitpod.yml README.md contract Cargo.toml README.md build.sh deploy.sh generate_data.sh src lib.rs docs readme.md integration-tests Cargo.toml src tests.rs package-lock.json package.json rust-toolchain.toml
Dev Snoopy ================== This is a proxy smart contract that functions as a metadata holder for existing transactions from external contracts. This project aims to provide a way to track the impact of events created on the NEAR blockchain. Head over to docs to learn more about the project. - [docs](/docs/readme.md) This app was initialized with [create-near-app] Quick Start =========== If you haven't installed dependencies during setup: npm install Build and deploy your contract to TestNet with a temporary dev account: npm run deploy Test your contract: npm test If you have a frontend, run `npm start`. This will run a dev server. Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`; this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`; this will run the tests in the `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx`. Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: Deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` file that was generated in the `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: Set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account ID you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details.
[create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. <br /> # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup). <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract on the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address at which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, which makes it a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first log in to NEAR using: ```bash # Use near-cli to log in to your NEAR account near login ``` and then use the logged-in account to sign the transaction: `--accountId <your-account>`.
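The README above describes the two contract methods without showing them. Below is a minimal sketch of what such a greeting contract typically looks like with near-sdk-rs; the field and constant names are assumptions for illustration, and the repository's actual implementation lives in `contract/src/lib.rs` and may differ.

```rust
use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::near_bindgen;

// Greeting returned before `set_greeting` is ever called (assumed default).
const DEFAULT_GREETING: &str = "Hello";

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize)]
pub struct Contract {
    greeting: String,
}

impl Default for Contract {
    fn default() -> Self {
        Self { greeting: DEFAULT_GREETING.to_string() }
    }
}

#[near_bindgen]
impl Contract {
    /// View method: free to call, does not mutate state.
    pub fn get_greeting(&self) -> String {
        self.greeting.clone()
    }

    /// Change method: mutates state, so it requires a signed transaction and gas.
    pub fn set_greeting(&mut self, message: String) {
        self.greeting = message;
    }
}
```

The `near view <dev-account> get_greeting` and `near call <dev-account> set_greeting ...` commands shown above map directly onto these two methods.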
Juarsh_Vote-in-Blocks
Backend config db.js controllers conductedController.js index.js loginController.js passportGoogleCallback.js registerController.js totalElectionController.js updateController.js votingController.js votingElectionController.js index.js models authModels.js index.js voting_model.js package-lock.json package.json passport google.js routes authRoutes.js index.js voteRoutes.js Frontend .gitpod.yml babel.config.js contract README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts main.spec.ts as_types.d.ts index.ts tsconfig.json compile.js node_modules .bin acorn.cmd acorn.ps1 asb.cmd asb.ps1 asbuild.cmd asbuild.ps1 asc.cmd asc.ps1 asinit.cmd asinit.ps1 asp.cmd asp.ps1 aspect.cmd aspect.ps1 assemblyscript-build.cmd assemblyscript-build.ps1 eslint.cmd eslint.ps1 esparse.cmd esparse.ps1 esvalidate.cmd esvalidate.ps1 js-yaml.cmd js-yaml.ps1 mkdirp.cmd mkdirp.ps1 near-vm-as.cmd near-vm-as.ps1 near-vm.cmd near-vm.ps1 nearley-railroad.cmd nearley-railroad.ps1 nearley-test.cmd nearley-test.ps1 nearley-unparse.cmd nearley-unparse.ps1 nearleyc.cmd nearleyc.ps1 node-which.cmd node-which.ps1 rimraf.cmd rimraf.ps1 semver.cmd semver.ps1 shjs.cmd shjs.ps1 wasm-opt.cmd wasm-opt.ps1 @as-covers assembly CONTRIBUTING.md README.md index.ts package.json tsconfig.json core CONTRIBUTING.md README.md package.json glue README.md lib index.d.ts index.js package.json transform README.md lib index.d.ts index.js util.d.ts util.js node_modules visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js toString.d.ts toString.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformRange.d.ts transformRange.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json package.json @as-pect assembly README.md assembly index.ts internal Actual.ts Expectation.ts Expected.ts Reflect.ts ReflectedValueType.ts Test.ts assert.ts call.ts comparison toIncludeComparison.ts toIncludeEqualComparison.ts log.ts noOp.ts package.json types as-pect.d.ts as-pect.portable.d.ts env.d.ts cli README.md init as-pect.config.js env.d.ts example.spec.ts init-types.d.ts portable-types.d.ts lib as-pect.cli.amd.d.ts as-pect.cli.amd.js help.d.ts help.js index.d.ts index.js init.d.ts init.js portable.d.ts portable.js run.d.ts run.js test.d.ts test.js types.d.ts types.js util CommandLineArg.d.ts CommandLineArg.js IConfiguration.d.ts IConfiguration.js asciiArt.d.ts asciiArt.js collectReporter.d.ts collectReporter.js getTestEntryFiles.d.ts getTestEntryFiles.js removeFile.d.ts removeFile.js strings.d.ts strings.js writeFile.d.ts writeFile.js worklets ICommand.d.ts ICommand.js compiler.d.ts compiler.js package.json core README.md lib as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js reporter CombinationReporter.d.ts CombinationReporter.js EmptyReporter.d.ts EmptyReporter.js IReporter.d.ts IReporter.js SummaryReporter.d.ts SummaryReporter.js VerboseReporter.d.ts VerboseReporter.js test IWarning.d.ts IWarning.js TestContext.d.ts TestContext.js TestNode.d.ts TestNode.js transform assemblyscript.d.ts assemblyscript.js createAddReflectedValueKeyValuePairsMember.d.ts createAddReflectedValueKeyValuePairsMember.js createGenericTypeParameter.d.ts 
createGenericTypeParameter.js createStrictEqualsMember.d.ts createStrictEqualsMember.js emptyTransformer.d.ts emptyTransformer.js hash.d.ts hash.js index.d.ts index.js util IAspectExports.d.ts IAspectExports.js IWriteable.d.ts IWriteable.js ReflectedValue.d.ts ReflectedValue.js TestNodeType.d.ts TestNodeType.js rTrace.d.ts rTrace.js stringifyReflectedValue.d.ts stringifyReflectedValue.js timeDifference.d.ts timeDifference.js wasmTools.d.ts wasmTools.js package.json csv-reporter index.ts lib as-pect.csv-reporter.amd.d.ts as-pect.csv-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json json-reporter index.ts lib as-pect.json-reporter.amd.d.ts as-pect.json-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json snapshots __tests__ snapshot.spec.ts jest.config.js lib Snapshot.d.ts Snapshot.js SnapshotDiff.d.ts SnapshotDiff.js SnapshotDiffResult.d.ts SnapshotDiffResult.js as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js parser grammar.d.ts grammar.js package.json src Snapshot.ts SnapshotDiff.ts SnapshotDiffResult.ts index.ts parser grammar.ts tsconfig.json @assemblyscript loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json @babel code-frame README.md lib index.js package.json helper-validator-identifier README.md lib identifier.js index.js keyword.js package.json scripts generate-identifier-regex.js highlight README.md lib index.js node_modules ansi-styles index.js package.json readme.md chalk index.js package.json readme.md templates.js types index.d.ts color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name .eslintrc.json README.md index.js package.json test.js escape-string-regexp index.js package.json readme.md has-flag index.js package.json readme.md supports-color browser.js index.js package.json readme.md package.json @eslint eslintrc CHANGELOG.md README.md conf config-schema.js environments.js eslint-all.js eslint-recommended.js lib cascading-config-array-factory.js config-array-factory.js config-array config-array.js config-dependency.js extracted-config.js ignore-pattern.js index.js override-tester.js flat-compat.js index.js shared ajv.js config-ops.js config-validator.js deprecation-warnings.js naming.js relative-module-resolver.js types.js node_modules ajv .tonic_example.js README.md dist ajv.bundle.js ajv.min.js lib ajv.d.ts ajv.js cache.js compile async.js equal.js error_classes.js formats.js index.js resolve.js rules.js schema_obj.js ucs2length.js util.js data.js definition_schema.js dotjs README.md _limit.js _limitItems.js _limitLength.js _limitProperties.js allOf.js anyOf.js comment.js const.js contains.js custom.js dependencies.js enum.js format.js if.js index.js items.js multipleOf.js not.js oneOf.js pattern.js properties.js propertyNames.js ref.js required.js uniqueItems.js validate.js keyword.js refs data.json json-schema-draft-04.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json package.json scripts .eslintrc.yml bundle.js compile-dots.js json-schema-traverse .eslintrc.yml .travis.yml README.md index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js package.json @humanwhocodes config-array README.md api.js package.json object-schema .eslintrc.js .travis.yml README.md package.json src index.js merge-strategy.js object-schema.js validation-strategy.js tests merge-strategy.js object-schema.js validation-strategy.js acorn-jsx README.md index.d.ts index.js package.json xhtml.js acorn CHANGELOG.md README.md 
dist acorn.d.ts acorn.js acorn.mjs.d.ts bin.js package.json ajv .runkit_example.js README.md dist 2019.d.ts 2019.js 2020.d.ts 2020.js ajv.d.ts ajv.js compile codegen code.d.ts code.js index.d.ts index.js scope.d.ts scope.js errors.d.ts errors.js index.d.ts index.js jtd parse.d.ts parse.js serialize.d.ts serialize.js types.d.ts types.js names.d.ts names.js ref_error.d.ts ref_error.js resolve.d.ts resolve.js rules.d.ts rules.js util.d.ts util.js validate applicability.d.ts applicability.js boolSchema.d.ts boolSchema.js dataType.d.ts dataType.js defaults.d.ts defaults.js index.d.ts index.js keyword.d.ts keyword.js subschema.d.ts subschema.js core.d.ts core.js jtd.d.ts jtd.js refs data.json json-schema-2019-09 index.d.ts index.js meta applicator.json content.json core.json format.json meta-data.json validation.json schema.json json-schema-2020-12 index.d.ts index.js meta applicator.json content.json core.json format-annotation.json meta-data.json unevaluated.json validation.json schema.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json jtd-schema.d.ts jtd-schema.js runtime equal.d.ts equal.js parseJson.d.ts parseJson.js quote.d.ts quote.js timestamp.d.ts timestamp.js ucs2length.d.ts ucs2length.js validation_error.d.ts validation_error.js standalone index.d.ts index.js instance.d.ts instance.js types index.d.ts index.js json-schema.d.ts json-schema.js jtd-schema.d.ts jtd-schema.js vocabularies applicator additionalItems.d.ts additionalItems.js additionalProperties.d.ts additionalProperties.js allOf.d.ts allOf.js anyOf.d.ts anyOf.js contains.d.ts contains.js dependencies.d.ts dependencies.js dependentSchemas.d.ts dependentSchemas.js if.d.ts if.js index.d.ts index.js items.d.ts items.js items2020.d.ts items2020.js not.d.ts not.js oneOf.d.ts oneOf.js patternProperties.d.ts patternProperties.js prefixItems.d.ts prefixItems.js properties.d.ts properties.js propertyNames.d.ts propertyNames.js thenElse.d.ts thenElse.js code.d.ts code.js core id.d.ts id.js index.d.ts index.js ref.d.ts ref.js discriminator index.d.ts index.js types.d.ts types.js draft2020.d.ts draft2020.js draft7.d.ts draft7.js dynamic dynamicAnchor.d.ts dynamicAnchor.js dynamicRef.d.ts dynamicRef.js index.d.ts index.js recursiveAnchor.d.ts recursiveAnchor.js recursiveRef.d.ts recursiveRef.js errors.d.ts errors.js format format.d.ts format.js index.d.ts index.js jtd discriminator.d.ts discriminator.js elements.d.ts elements.js enum.d.ts enum.js error.d.ts error.js index.d.ts index.js metadata.d.ts metadata.js nullable.d.ts nullable.js optionalProperties.d.ts optionalProperties.js properties.d.ts properties.js ref.d.ts ref.js type.d.ts type.js union.d.ts union.js values.d.ts values.js metadata.d.ts metadata.js next.d.ts next.js unevaluated index.d.ts index.js unevaluatedItems.d.ts unevaluatedItems.js unevaluatedProperties.d.ts unevaluatedProperties.js validation const.d.ts const.js dependentRequired.d.ts dependentRequired.js enum.d.ts enum.js index.d.ts index.js limitContains.d.ts limitContains.js limitItems.d.ts limitItems.js limitLength.d.ts limitLength.js limitNumber.d.ts limitNumber.js limitProperties.d.ts limitProperties.js multipleOf.d.ts multipleOf.js pattern.d.ts pattern.js required.d.ts required.js uniqueItems.d.ts uniqueItems.js lib 2019.ts 2020.ts ajv.ts compile codegen code.ts index.ts scope.ts errors.ts index.ts jtd parse.ts serialize.ts types.ts names.ts ref_error.ts resolve.ts rules.ts util.ts validate applicability.ts boolSchema.ts dataType.ts defaults.ts index.ts keyword.ts subschema.ts 
core.ts jtd.ts refs data.json json-schema-2019-09 index.ts meta applicator.json content.json core.json format.json meta-data.json validation.json schema.json json-schema-2020-12 index.ts meta applicator.json content.json core.json format-annotation.json meta-data.json unevaluated.json validation.json schema.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json jtd-schema.ts runtime equal.ts parseJson.ts quote.ts timestamp.ts ucs2length.ts validation_error.ts standalone index.ts instance.ts types index.ts json-schema.ts jtd-schema.ts vocabularies applicator additionalItems.ts additionalProperties.ts allOf.ts anyOf.ts contains.ts dependencies.ts dependentSchemas.ts if.ts index.ts items.ts items2020.ts not.ts oneOf.ts patternProperties.ts prefixItems.ts properties.ts propertyNames.ts thenElse.ts code.ts core id.ts index.ts ref.ts discriminator index.ts types.ts draft2020.ts draft7.ts dynamic dynamicAnchor.ts dynamicRef.ts index.ts recursiveAnchor.ts recursiveRef.ts errors.ts format format.ts index.ts jtd discriminator.ts elements.ts enum.ts error.ts index.ts metadata.ts nullable.ts optionalProperties.ts properties.ts ref.ts type.ts union.ts values.ts metadata.ts next.ts unevaluated index.ts unevaluatedItems.ts unevaluatedProperties.ts validation const.ts dependentRequired.ts enum.ts index.ts limitContains.ts limitItems.ts limitLength.ts limitNumber.ts limitProperties.ts multipleOf.ts pattern.ts required.ts uniqueItems.ts package.json ansi-colors README.md index.js package.json symbols.js types index.d.ts ansi-regex index.d.ts index.js package.json readme.md ansi-styles index.d.ts index.js package.json readme.md argparse CHANGELOG.md README.md index.js lib action.js action append.js append constant.js count.js help.js store.js store constant.js false.js true.js subparsers.js version.js action_container.js argparse.js argument error.js exclusive.js group.js argument_parser.js const.js help added_formatters.js formatter.js namespace.js utils.js package.json as-bignum README.md assembly __tests__ as-pect.d.ts i128.spec.as.ts safe_u128.spec.as.ts u128.spec.as.ts u256.spec.as.ts utils.ts fixed fp128.ts fp256.ts index.ts safe fp128.ts fp256.ts types.ts globals.ts index.ts integer i128.ts i256.ts index.ts safe i128.ts i256.ts i64.ts index.ts u128.ts u256.ts u64.ts u128.ts u256.ts tsconfig.json utils.ts package.json asbuild README.md dist cli.d.ts cli.js commands build.d.ts build.js fmt.d.ts fmt.js index.d.ts index.js init cmd.d.ts cmd.js files asconfigJson.d.ts asconfigJson.js aspecConfig.d.ts aspecConfig.js assembly_files.d.ts assembly_files.js eslintConfig.d.ts eslintConfig.js gitignores.d.ts gitignores.js index.d.ts index.js indexJs.d.ts indexJs.js packageJson.d.ts packageJson.js test_files.d.ts test_files.js index.d.ts index.js interfaces.d.ts interfaces.js run.d.ts run.js test.d.ts test.js index.d.ts index.js main.d.ts main.js utils.d.ts utils.js index.js node_modules cliui CHANGELOG.md LICENSE.txt README.md index.js package.json wrap-ansi index.js package.json readme.md y18n CHANGELOG.md README.md index.js package.json yargs-parser CHANGELOG.md LICENSE.txt README.md index.js lib tokenize-arg-string.js package.json yargs CHANGELOG.md README.md build lib apply-extends.d.ts apply-extends.js argsert.d.ts argsert.js command.d.ts command.js common-types.d.ts common-types.js completion-templates.d.ts completion-templates.js completion.d.ts completion.js is-promise.d.ts is-promise.js levenshtein.d.ts levenshtein.js middleware.d.ts middleware.js obj-filter.d.ts obj-filter.js 
parse-command.d.ts parse-command.js process-argv.d.ts process-argv.js usage.d.ts usage.js validation.d.ts validation.js yargs.d.ts yargs.js yerror.d.ts yerror.js index.js locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json yargs.js package.json assemblyscript-json .eslintrc.js .travis.yml README.md assembly JSON.ts decoder.ts encoder.ts index.ts tsconfig.json util index.ts index.js package.json temp-docs README.md classes decoderstate.md json.arr.md json.bool.md json.float.md json.integer.md json.null.md json.num.md json.obj.md json.str.md json.value.md jsondecoder.md jsonencoder.md jsonhandler.md throwingjsonhandler.md modules json.md assemblyscript-regex .eslintrc.js .github workflows benchmark.yml release.yml test.yml README.md as-pect.config.js asconfig.empty.json asconfig.json assembly __spec_tests__ generated.spec.ts __tests__ alterations.spec.ts as-pect.d.ts boundary-assertions.spec.ts capture-group.spec.ts character-classes.spec.ts character-sets.spec.ts characters.ts empty.ts quantifiers.spec.ts range-quantifiers.spec.ts regex.spec.ts utils.ts char.ts env.ts index.ts nfa matcher.ts nfa.ts types.ts walker.ts parser node.ts parser.ts string-iterator.ts walker.ts regexp.ts tsconfig.json util.ts benchmark benchmark.js package.json spec test-generator.js ts index.ts tsconfig.json assemblyscript-temporal .github workflows node.js.yml release.yml .vscode launch.json README.md as-pect.config.js asconfig.empty.json asconfig.json assembly __tests__ README.md as-pect.d.ts date.spec.ts duration.spec.ts empty.ts plaindate.spec.ts plaindatetime.spec.ts plainmonthday.spec.ts plaintime.spec.ts plainyearmonth.spec.ts timezone.spec.ts zoneddatetime.spec.ts constants.ts date.ts duration.ts enums.ts env.ts index.ts instant.ts now.ts plaindate.ts plaindatetime.ts plainmonthday.ts plaintime.ts plainyearmonth.ts timezone.ts tsconfig.json tz __tests__ index.spec.ts rule.spec.ts zone.spec.ts iana.ts index.ts rule.ts zone.ts utils.ts zoneddatetime.ts development.md package.json tzdb README.md iana theory.html zoneinfo2tdf.pl assemblyscript README.md cli README.md asc.d.ts asc.js asc.json shim README.md fs.js path.js process.js transform.d.ts transform.js util colors.d.ts colors.js find.d.ts find.js mkdirp.d.ts mkdirp.js options.d.ts options.js utf8.d.ts utf8.js dist asc.js assemblyscript.d.ts assemblyscript.js sdk.js index.d.ts index.js lib loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json rtrace README.md bin rtplot.js index.d.ts index.js package.json umd index.d.ts index.js package.json package-lock.json package.json std README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings Date.ts Math.ts Reflect.ts asyncify.ts console.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts polyfills.ts process.ts reference.ts regexp.ts rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts set.ts shared feature.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts uri.ts util casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts uri.ts vector.ts wasi index.ts portable.json 
portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json astral-regex index.d.ts index.js package.json readme.md axios CHANGELOG.md README.md UPGRADE_GUIDE.md dist axios.js axios.min.js index.d.ts index.js lib adapters README.md http.js xhr.js axios.js cancel Cancel.js CancelToken.js isCancel.js core Axios.js InterceptorManager.js README.md buildFullPath.js createError.js dispatchRequest.js enhanceError.js mergeConfig.js settle.js transformData.js defaults.js helpers README.md bind.js buildURL.js combineURLs.js cookies.js deprecatedMethod.js isAbsoluteURL.js isURLSameOrigin.js normalizeHeaderName.js parseHeaders.js spread.js utils.js package.json balanced-match .github FUNDING.yml LICENSE.md README.md index.js package.json base-x LICENSE.md README.md package.json src index.d.ts index.js binary-install README.md example binary.js package.json run.js index.js package.json src binary.js binaryen README.md index.d.ts package-lock.json package.json wasm.d.ts bn.js CHANGELOG.md README.md lib bn.js package.json brace-expansion README.md index.js package.json bs58 CHANGELOG.md README.md index.js package.json callsites index.d.ts index.js package.json readme.md camelcase index.d.ts index.js package.json readme.md chalk index.d.ts package.json readme.md source index.js templates.js util.js chownr README.md chownr.js package.json cliui CHANGELOG.md LICENSE.txt README.md build lib index.js string-utils.js package.json color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name README.md index.js package.json commander CHANGELOG.md Readme.md index.js package.json typings index.d.ts concat-map .travis.yml example map.js index.js package.json test map.js cross-spawn CHANGELOG.md README.md index.js lib enoent.js parse.js util escape.js readShebang.js resolveCommand.js package.json csv-stringify README.md lib browser index.js sync.js es5 index.d.ts index.js sync.d.ts sync.js index.d.ts index.js sync.d.ts sync.js package.json debug README.md package.json src browser.js common.js index.js node.js decamelize index.js package.json readme.md deep-is .travis.yml example cmp.js index.js package.json test NaN.js cmp.js neg-vs-pos-0.js diff CONTRIBUTING.md README.md dist diff.js lib convert dmp.js xml.js diff array.js base.js character.js css.js json.js line.js sentence.js word.js index.es6.js index.js patch apply.js create.js merge.js parse.js util array.js distance-iterator.js params.js package.json release-notes.md runtime.js discontinuous-range .travis.yml README.md index.js package.json test main-test.js doctrine CHANGELOG.md README.md lib doctrine.js typed.js utility.js package.json emoji-regex LICENSE-MIT.txt README.md es2015 index.js text.js index.d.ts index.js package.json text.js enquirer CHANGELOG.md README.md index.d.ts index.js lib ansi.js combos.js completer.js interpolate.js keypress.js placeholder.js prompt.js prompts autocomplete.js basicauth.js confirm.js editable.js form.js index.js input.js invisible.js list.js multiselect.js numeral.js password.js quiz.js scale.js select.js snippet.js sort.js survey.js text.js toggle.js render.js roles.js state.js styles.js symbols.js theme.js timer.js types array.js auth.js boolean.js index.js number.js string.js utils.js package.json env-paths index.d.ts index.js package.json readme.md escalade dist index.js index.d.ts package.json readme.md sync index.d.ts index.js escape-string-regexp index.d.ts index.js package.json readme.md eslint-scope CHANGELOG.md README.md 
lib definition.js index.js pattern-visitor.js reference.js referencer.js scope-manager.js scope.js variable.js package.json eslint-utils README.md index.js node_modules eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json package.json eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json eslint CHANGELOG.md README.md bin eslint.js conf category-list.json config-schema.js default-cli-options.js eslint-all.js eslint-recommended.js replacements.json lib api.js cli-engine cli-engine.js file-enumerator.js formatters checkstyle.js codeframe.js compact.js html.js jslint-xml.js json-with-metadata.js json.js junit.js stylish.js table.js tap.js unix.js visualstudio.js hash.js index.js lint-result-cache.js load-rules.js xml-escape.js cli.js config default-config.js flat-config-array.js flat-config-schema.js rule-validator.js eslint eslint.js index.js init autoconfig.js config-file.js config-initializer.js config-rule.js npm-utils.js source-code-utils.js linter apply-disable-directives.js code-path-analysis code-path-analyzer.js code-path-segment.js code-path-state.js code-path.js debug-helpers.js fork-context.js id-generator.js config-comment-parser.js index.js interpolate.js linter.js node-event-generator.js report-translator.js rule-fixer.js rules.js safe-emitter.js source-code-fixer.js timing.js options.js rule-tester index.js rule-tester.js rules accessor-pairs.js array-bracket-newline.js array-bracket-spacing.js array-callback-return.js array-element-newline.js arrow-body-style.js arrow-parens.js arrow-spacing.js block-scoped-var.js block-spacing.js brace-style.js callback-return.js camelcase.js capitalized-comments.js class-methods-use-this.js comma-dangle.js comma-spacing.js comma-style.js complexity.js computed-property-spacing.js consistent-return.js consistent-this.js constructor-super.js curly.js default-case-last.js default-case.js default-param-last.js dot-location.js dot-notation.js eol-last.js eqeqeq.js for-direction.js func-call-spacing.js func-name-matching.js func-names.js func-style.js function-call-argument-newline.js function-paren-newline.js generator-star-spacing.js getter-return.js global-require.js grouped-accessor-pairs.js guard-for-in.js handle-callback-err.js id-blacklist.js id-denylist.js id-length.js id-match.js implicit-arrow-linebreak.js indent-legacy.js indent.js index.js init-declarations.js jsx-quotes.js key-spacing.js keyword-spacing.js line-comment-position.js linebreak-style.js lines-around-comment.js lines-around-directive.js lines-between-class-members.js max-classes-per-file.js max-depth.js max-len.js max-lines-per-function.js max-lines.js max-nested-callbacks.js max-params.js max-statements-per-line.js max-statements.js multiline-comment-style.js multiline-ternary.js new-cap.js new-parens.js newline-after-var.js newline-before-return.js newline-per-chained-call.js no-alert.js no-array-constructor.js no-async-promise-executor.js no-await-in-loop.js no-bitwise.js no-buffer-constructor.js no-caller.js no-case-declarations.js no-catch-shadow.js no-class-assign.js no-compare-neg-zero.js no-cond-assign.js no-confusing-arrow.js no-console.js no-const-assign.js no-constant-condition.js no-constructor-return.js no-continue.js no-control-regex.js no-debugger.js no-delete-var.js no-div-regex.js no-dupe-args.js no-dupe-class-members.js no-dupe-else-if.js no-dupe-keys.js no-duplicate-case.js no-duplicate-imports.js no-else-return.js no-empty-character-class.js no-empty-function.js no-empty-pattern.js 
no-empty.js no-eq-null.js no-eval.js no-ex-assign.js no-extend-native.js no-extra-bind.js no-extra-boolean-cast.js no-extra-label.js no-extra-parens.js no-extra-semi.js no-fallthrough.js no-floating-decimal.js no-func-assign.js no-global-assign.js no-implicit-coercion.js no-implicit-globals.js no-implied-eval.js no-import-assign.js no-inline-comments.js no-inner-declarations.js no-invalid-regexp.js no-invalid-this.js no-irregular-whitespace.js no-iterator.js no-label-var.js no-labels.js no-lone-blocks.js no-lonely-if.js no-loop-func.js no-loss-of-precision.js no-magic-numbers.js no-misleading-character-class.js no-mixed-operators.js no-mixed-requires.js no-mixed-spaces-and-tabs.js no-multi-assign.js no-multi-spaces.js no-multi-str.js no-multiple-empty-lines.js no-native-reassign.js no-negated-condition.js no-negated-in-lhs.js no-nested-ternary.js no-new-func.js no-new-object.js no-new-require.js no-new-symbol.js no-new-wrappers.js no-new.js no-nonoctal-decimal-escape.js no-obj-calls.js no-octal-escape.js no-octal.js no-param-reassign.js no-path-concat.js no-plusplus.js no-process-env.js no-process-exit.js no-promise-executor-return.js no-proto.js no-prototype-builtins.js no-redeclare.js no-regex-spaces.js no-restricted-exports.js no-restricted-globals.js no-restricted-imports.js no-restricted-modules.js no-restricted-properties.js no-restricted-syntax.js no-return-assign.js no-return-await.js no-script-url.js no-self-assign.js no-self-compare.js no-sequences.js no-setter-return.js no-shadow-restricted-names.js no-shadow.js no-spaced-func.js no-sparse-arrays.js no-sync.js no-tabs.js no-template-curly-in-string.js no-ternary.js no-this-before-super.js no-throw-literal.js no-trailing-spaces.js no-undef-init.js no-undef.js no-undefined.js no-underscore-dangle.js no-unexpected-multiline.js no-unmodified-loop-condition.js no-unneeded-ternary.js no-unreachable-loop.js no-unreachable.js no-unsafe-finally.js no-unsafe-negation.js no-unsafe-optional-chaining.js no-unused-expressions.js no-unused-labels.js no-unused-vars.js no-use-before-define.js no-useless-backreference.js no-useless-call.js no-useless-catch.js no-useless-computed-key.js no-useless-concat.js no-useless-constructor.js no-useless-escape.js no-useless-rename.js no-useless-return.js no-var.js no-void.js no-warning-comments.js no-whitespace-before-property.js no-with.js nonblock-statement-body-position.js object-curly-newline.js object-curly-spacing.js object-property-newline.js object-shorthand.js one-var-declaration-per-line.js one-var.js operator-assignment.js operator-linebreak.js padded-blocks.js padding-line-between-statements.js prefer-arrow-callback.js prefer-const.js prefer-destructuring.js prefer-exponentiation-operator.js prefer-named-capture-group.js prefer-numeric-literals.js prefer-object-spread.js prefer-promise-reject-errors.js prefer-reflect.js prefer-regex-literals.js prefer-rest-params.js prefer-spread.js prefer-template.js quote-props.js quotes.js radix.js require-atomic-updates.js require-await.js require-jsdoc.js require-unicode-regexp.js require-yield.js rest-spread-spacing.js semi-spacing.js semi-style.js semi.js sort-imports.js sort-keys.js sort-vars.js space-before-blocks.js space-before-function-paren.js space-in-parens.js space-infix-ops.js space-unary-ops.js spaced-comment.js strict.js switch-colon-spacing.js symbol-description.js template-curly-spacing.js template-tag-spacing.js unicode-bom.js use-isnan.js utils ast-utils.js fix-tracker.js keywords.js lazy-loading-rule-map.js patterns letters.js unicode 
index.js is-combining-character.js is-emoji-modifier.js is-regional-indicator-symbol.js is-surrogate-pair.js valid-jsdoc.js valid-typeof.js vars-on-top.js wrap-iife.js wrap-regex.js yield-star-spacing.js yoda.js shared ajv.js ast-utils.js config-validator.js deprecation-warnings.js logging.js relative-module-resolver.js runtime-info.js string-utils.js traverser.js types.js source-code index.js source-code.js token-store backward-token-comment-cursor.js backward-token-cursor.js cursor.js cursors.js decorative-cursor.js filter-cursor.js forward-token-comment-cursor.js forward-token-cursor.js index.js limit-cursor.js padded-token-cursor.js skip-cursor.js utils.js messages all-files-ignored.js extend-config-missing.js failed-to-read-json.js file-not-found.js no-config-found.js plugin-conflict.js plugin-invalid.js plugin-missing.js print-config-with-directory-path.js whitespace-found.js node_modules ajv .tonic_example.js README.md dist ajv.bundle.js ajv.min.js lib ajv.d.ts ajv.js cache.js compile async.js equal.js error_classes.js formats.js index.js resolve.js rules.js schema_obj.js ucs2length.js util.js data.js definition_schema.js dotjs README.md _limit.js _limitItems.js _limitLength.js _limitProperties.js allOf.js anyOf.js comment.js const.js contains.js custom.js dependencies.js enum.js format.js if.js index.js items.js multipleOf.js not.js oneOf.js pattern.js properties.js propertyNames.js ref.js required.js uniqueItems.js validate.js keyword.js refs data.json json-schema-draft-04.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json package.json scripts .eslintrc.yml bundle.js compile-dots.js json-schema-traverse .eslintrc.yml .travis.yml README.md index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js package.json espree CHANGELOG.md README.md espree.js lib ast-node-types.js espree.js features.js options.js token-translator.js visitor-keys.js node_modules eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json package.json esprima README.md bin esparse.js esvalidate.js dist esprima.js package.json esquery README.md dist esquery.esm.js esquery.esm.min.js esquery.js esquery.lite.js esquery.lite.min.js esquery.min.js license.txt node_modules estraverse README.md estraverse.js gulpfile.js package.json package.json parser.js esrecurse README.md esrecurse.js gulpfile.babel.js node_modules estraverse README.md estraverse.js gulpfile.js package.json package.json estraverse README.md estraverse.js gulpfile.js package.json esutils README.md lib ast.js code.js keyword.js utils.js package.json fast-deep-equal README.md es6 index.d.ts index.js react.d.ts react.js index.d.ts index.js package.json react.d.ts react.js fast-json-stable-stringify .eslintrc.yml .github FUNDING.yml .travis.yml README.md benchmark index.js test.json example key_cmp.js nested.js str.js value_cmp.js index.d.ts index.js package.json test cmp.js nested.js str.js to-json.js fast-levenshtein LICENSE.md README.md levenshtein.js package.json file-entry-cache README.md cache.js changelog.md package.json find-up index.d.ts index.js package.json readme.md flat-cache README.md changelog.md package.json src cache.js del.js utils.js flatted .github FUNDING.yml README.md SPECS.md cjs index.js package.json es.js esm index.js index.js min.js package.json php flatted.php types.d.ts follow-redirects README.md http.js https.js index.js node_modules debug .coveralls.yml .travis.yml CHANGELOG.md README.md karma.conf.js node.js package.json src browser.js debug.js 
index.js node.js ms index.js license.md package.json readme.md package.json fs-minipass README.md index.js package.json fs.realpath README.md index.js old.js package.json function-bind .jscs.json .travis.yml README.md implementation.js index.js package.json test index.js functional-red-black-tree README.md bench test.js package.json rbtree.js test test.js get-caller-file LICENSE.md README.md index.d.ts index.js package.json glob-parent CHANGELOG.md README.md index.js package.json glob README.md common.js glob.js package.json sync.js globals globals.json index.d.ts index.js package.json readme.md has-flag index.d.ts index.js package.json readme.md has README.md package.json src index.js test index.js hasurl README.md index.js package.json ignore CHANGELOG.md README.md index.d.ts index.js legacy.js package.json import-fresh index.d.ts index.js package.json readme.md imurmurhash README.md imurmurhash.js imurmurhash.min.js package.json inflight README.md inflight.js package.json inherits README.md inherits.js inherits_browser.js package.json interpret README.md index.js mjs-stub.js package.json is-core-module CHANGELOG.md README.md core.json index.js package.json test index.js is-extglob README.md index.js package.json is-fullwidth-code-point index.d.ts index.js package.json readme.md is-glob README.md index.js package.json isarray .travis.yml README.md component.json index.js package.json test.js isexe README.md index.js mode.js package.json test basic.js windows.js isobject README.md index.js package.json js-base64 LICENSE.md README.md base64.d.ts base64.js package.json js-tokens CHANGELOG.md README.md index.js package.json js-yaml CHANGELOG.md README.md bin js-yaml.js dist js-yaml.js js-yaml.min.js index.js lib js-yaml.js js-yaml common.js dumper.js exception.js loader.js mark.js schema.js schema core.js default_full.js default_safe.js failsafe.js json.js type.js type binary.js bool.js float.js int.js js function.js regexp.js undefined.js map.js merge.js null.js omap.js pairs.js seq.js set.js str.js timestamp.js package.json json-schema-traverse .eslintrc.yml .github FUNDING.yml workflows build.yml publish.yml README.md index.d.ts index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js json-stable-stringify-without-jsonify .travis.yml example key_cmp.js nested.js str.js value_cmp.js index.js package.json test cmp.js nested.js replacer.js space.js str.js to-json.js levn README.md lib cast.js index.js parse-string.js package.json line-column README.md lib line-column.js package.json locate-path index.d.ts index.js package.json readme.md lodash.clonedeep README.md index.js package.json lodash.merge README.md index.js package.json lodash.sortby README.md index.js package.json lodash.truncate README.md index.js package.json long README.md dist long.js index.js package.json src long.js lru-cache README.md index.js package.json minimatch README.md minimatch.js package.json minimist .travis.yml example parse.js index.js package.json test all_bool.js bool.js dash.js default_bool.js dotted.js kv_short.js long.js num.js parse.js parse_modified.js proto.js short.js stop_early.js unknown.js whitespace.js minipass README.md index.js package.json minizlib README.md constants.js index.js package.json mkdirp bin cmd.js usage.txt index.js package.json moo README.md moo.js package.json ms index.js license.md package.json readme.md natural-compare README.md index.js package.json near-mock-vm assembly __tests__ main.ts context.ts index.ts outcome.ts vm.ts bin bin.js package.json pkg 
near_mock_vm.d.ts near_mock_vm.js package.json vm dist cli.d.ts cli.js context.d.ts context.js index.d.ts index.js memory.d.ts memory.js runner.d.ts runner.js utils.d.ts utils.js index.js near-sdk-as as-pect.config.js as_types.d.ts asconfig.json asp.asconfig.json assembly __tests__ as-pect.d.ts assert.spec.ts avl-tree.spec.ts bignum.spec.ts contract.spec.ts contract.ts data.txt empty.ts generic.ts includeBytes.spec.ts main.ts max-heap.spec.ts model.ts near.spec.ts persistent-set.spec.ts promise.spec.ts rollback.spec.ts roundtrip.spec.ts runtime.spec.ts unordered-map.spec.ts util.ts utils.spec.ts as_types.d.ts bindgen.ts index.ts json.lib.ts tsconfig.json vm __tests__ vm.include.ts index.ts compiler.js imports.js package.json near-sdk-bindgen README.md assembly index.ts compiler.js dist JSONBuilder.d.ts JSONBuilder.js classExporter.d.ts classExporter.js index.d.ts index.js transformer.d.ts transformer.js typeChecker.d.ts typeChecker.js utils.d.ts utils.js index.js package.json near-sdk-core README.md asconfig.json assembly as_types.d.ts base58.ts base64.ts bignum.ts collections avlTree.ts index.ts maxHeap.ts persistentDeque.ts persistentMap.ts persistentSet.ts persistentUnorderedMap.ts persistentVector.ts util.ts contract.ts datetime.ts env env.ts index.ts runtime_api.ts index.ts logging.ts math.ts promise.ts storage.ts tsconfig.json util.ts docs assets css main.css js main.js search.json classes _sdk_core_assembly_collections_avltree_.avltree.html _sdk_core_assembly_collections_avltree_.avltreenode.html _sdk_core_assembly_collections_avltree_.childparentpair.html _sdk_core_assembly_collections_avltree_.nullable.html _sdk_core_assembly_collections_persistentdeque_.persistentdeque.html _sdk_core_assembly_collections_persistentmap_.persistentmap.html _sdk_core_assembly_collections_persistentset_.persistentset.html _sdk_core_assembly_collections_persistentunorderedmap_.persistentunorderedmap.html _sdk_core_assembly_collections_persistentvector_.persistentvector.html _sdk_core_assembly_contract_.context-1.html _sdk_core_assembly_contract_.contractpromise.html _sdk_core_assembly_contract_.contractpromiseresult.html _sdk_core_assembly_math_.rng.html _sdk_core_assembly_promise_.contractpromisebatch.html _sdk_core_assembly_storage_.storage-1.html globals.html index.html modules _sdk_core_assembly_base58_.base58.html _sdk_core_assembly_base58_.html _sdk_core_assembly_base64_.base64.html _sdk_core_assembly_base64_.html _sdk_core_assembly_collections_avltree_.html _sdk_core_assembly_collections_index_.collections.html _sdk_core_assembly_collections_index_.html _sdk_core_assembly_collections_persistentdeque_.html _sdk_core_assembly_collections_persistentmap_.html _sdk_core_assembly_collections_persistentset_.html _sdk_core_assembly_collections_persistentunorderedmap_.html _sdk_core_assembly_collections_persistentvector_.html _sdk_core_assembly_collections_util_.html _sdk_core_assembly_contract_.html _sdk_core_assembly_env_env_.env.html _sdk_core_assembly_env_env_.html _sdk_core_assembly_env_index_.html _sdk_core_assembly_env_runtime_api_.html _sdk_core_assembly_index_.html _sdk_core_assembly_logging_.html _sdk_core_assembly_logging_.logging.html _sdk_core_assembly_math_.html _sdk_core_assembly_math_.math.html _sdk_core_assembly_promise_.html _sdk_core_assembly_storage_.html _sdk_core_assembly_util_.html _sdk_core_assembly_util_.util.html package.json near-sdk-simulator __tests__ avl-tree-contract.spec.ts cross.spec.ts empty.spec.ts exportAs.spec.ts singleton-no-constructor.spec.ts singleton.spec.ts 
asconfig.js asconfig.json assembly __tests__ avlTreeContract.ts empty.ts exportAs.ts model.ts sentences.ts singleton-fail.ts singleton-no-constructor.ts singleton.ts words.ts as_types.d.ts tsconfig.json dist bin.d.ts bin.js context.d.ts context.js index.d.ts index.js runtime.d.ts runtime.js types.d.ts types.js utils.d.ts utils.js jest.config.js out assembly __tests__ empty.ts exportAs.ts model.ts sentences.ts singleton copy.ts singleton-no-constructor.ts singleton.ts package.json src context.ts index.ts runtime.ts types.ts utils.ts tsconfig.json near-vm getBinary.js install.js package.json run.js uninstall.js nearley LICENSE.txt README.md bin nearley-railroad.js nearley-test.js nearley-unparse.js nearleyc.js lib compile.js generate.js lint.js nearley-language-bootstrapped.js nearley.js stream.js unparse.js package.json once README.md once.js package.json optionator CHANGELOG.md README.md lib help.js index.js util.js package.json p-limit index.d.ts index.js package.json readme.md p-locate index.d.ts index.js package.json readme.md p-try index.d.ts index.js package.json readme.md parent-module index.js package.json readme.md path-exists index.d.ts index.js package.json readme.md path-is-absolute index.js package.json readme.md path-key index.d.ts index.js package.json readme.md path-parse README.md index.js package.json prelude-ls CHANGELOG.md README.md lib Func.js List.js Num.js Obj.js Str.js index.js package.json progress CHANGELOG.md Readme.md index.js lib node-progress.js package.json punycode LICENSE-MIT.txt README.md package.json punycode.es6.js punycode.js railroad-diagrams README.md example.html generator.html package.json railroad-diagrams.css railroad-diagrams.js railroad_diagrams.py randexp README.md lib randexp.js package.json rechoir .travis.yml README.md index.js lib extension.js normalize.js register.js package.json regexpp README.md index.d.ts index.js package.json require-directory .travis.yml index.js package.json require-from-string index.js package.json readme.md require-main-filename CHANGELOG.md LICENSE.txt README.md index.js package.json resolve-from index.js package.json readme.md resolve SECURITY.md appveyor.yml example async.js sync.js index.js lib async.js caller.js core.js core.json is-core.js node-modules-paths.js normalize-options.js sync.js package.json test core.js dotdot.js dotdot abc index.js index.js faulty_basedir.js filter.js filter_sync.js mock.js mock_sync.js module_dir.js module_dir xmodules aaa index.js ymodules aaa index.js zmodules bbb main.js package.json node-modules-paths.js node_path.js node_path x aaa index.js ccc index.js y bbb index.js ccc index.js nonstring.js pathfilter.js pathfilter deep_ref main.js precedence.js precedence aaa.js aaa index.js main.js bbb.js bbb main.js resolver.js resolver baz doom.js package.json quux.js browser_field a.js b.js package.json cup.coffee dot_main index.js package.json dot_slash_main index.js package.json foo.js incorrect_main index.js package.json invalid_main package.json mug.coffee mug.js multirepo lerna.json package.json packages package-a index.js package.json package-b index.js package.json nested_symlinks mylib async.js package.json sync.js other_path lib other-lib.js root.js quux foo index.js same_names foo.js foo index.js symlinked _ node_modules foo.js package bar.js package.json without_basedir main.js resolver_sync.js shadowed_core.js shadowed_core node_modules util index.js subdirs.js symlinks.js ret README.md lib index.js positions.js sets.js types.js util.js package.json rimraf CHANGELOG.md 
README.md bin.js package.json rimraf.js safe-buffer README.md index.d.ts index.js package.json semver CHANGELOG.md README.md bin semver.js classes comparator.js index.js range.js semver.js functions clean.js cmp.js coerce.js compare-build.js compare-loose.js compare.js diff.js eq.js gt.js gte.js inc.js lt.js lte.js major.js minor.js neq.js parse.js patch.js prerelease.js rcompare.js rsort.js satisfies.js sort.js valid.js index.js internal constants.js debug.js identifiers.js parse-options.js re.js package.json preload.js ranges gtr.js intersects.js ltr.js max-satisfying.js min-satisfying.js min-version.js outside.js simplify.js subset.js to-comparators.js valid.js set-blocking CHANGELOG.md LICENSE.txt README.md index.js package.json shebang-command index.js package.json readme.md shebang-regex index.d.ts index.js package.json readme.md shelljs CHANGELOG.md README.md commands.js global.js make.js package.json plugin.js shell.js src cat.js cd.js chmod.js common.js cp.js dirs.js echo.js error.js exec-child.js exec.js find.js grep.js head.js ln.js ls.js mkdir.js mv.js popd.js pushd.js pwd.js rm.js sed.js set.js sort.js tail.js tempdir.js test.js to.js toEnd.js touch.js uniq.js which.js slice-ansi index.js package.json readme.md sprintf-js README.md bower.json demo angular.html dist angular-sprintf.min.js sprintf.min.js gruntfile.js package.json src angular-sprintf.js sprintf.js test test.js string-width index.d.ts index.js package.json readme.md strip-ansi index.d.ts index.js package.json readme.md strip-json-comments index.d.ts index.js package.json readme.md supports-color browser.js index.js package.json readme.md table README.md dist alignString.d.ts alignString.js alignTableData.d.ts alignTableData.js calculateCellHeight.d.ts calculateCellHeight.js calculateCellWidths.d.ts calculateCellWidths.js calculateColumnWidths.d.ts calculateColumnWidths.js calculateRowHeights.d.ts calculateRowHeights.js createStream.d.ts createStream.js drawBorder.d.ts drawBorder.js drawContent.d.ts drawContent.js drawHeader.d.ts drawHeader.js drawRow.d.ts drawRow.js drawTable.d.ts drawTable.js generated validators.d.ts validators.js getBorderCharacters.d.ts getBorderCharacters.js index.d.ts index.js makeStreamConfig.d.ts makeStreamConfig.js makeTableConfig.d.ts makeTableConfig.js mapDataUsingRowHeights.d.ts mapDataUsingRowHeights.js padTableData.d.ts padTableData.js stringifyTableData.d.ts stringifyTableData.js table.d.ts table.js truncateTableData.d.ts truncateTableData.js types api.d.ts api.js internal.d.ts internal.js utils.d.ts utils.js validateConfig.d.ts validateConfig.js validateTableData.d.ts validateTableData.js wrapCell.d.ts wrapCell.js wrapString.d.ts wrapString.js wrapWord.d.ts wrapWord.js package.json tar README.md index.js lib create.js extract.js get-write-flag.js header.js high-level-opt.js large-numbers.js list.js mkdir.js mode-fix.js normalize-windows-path.js pack.js parse.js path-reservations.js pax.js read-entry.js replace.js strip-absolute-path.js strip-trailing-slashes.js types.js unpack.js update.js warn-mixin.js winchars.js write-entry.js package.json text-table .travis.yml example align.js center.js dotalign.js doubledot.js table.js index.js package.json test align.js ansi-colors.js center.js dotalign.js doubledot.js table.js tr46 LICENSE.md README.md index.js lib mappingTable.json regexes.js package.json ts-mixer CHANGELOG.md README.md dist cjs decorator.js index.js mixin-tracking.js mixins.js proxy.js settings.js types.js util.js esm index.js index.min.js types decorator.d.ts index.d.ts 
mixin-tracking.d.ts mixins.d.ts proxy.d.ts settings.d.ts types.d.ts util.d.ts package.json type-check README.md lib check.js index.js parse-type.js package.json type-fest base.d.ts index.d.ts package.json readme.md source async-return-type.d.ts asyncify.d.ts basic.d.ts conditional-except.d.ts conditional-keys.d.ts conditional-pick.d.ts entries.d.ts entry.d.ts except.d.ts fixed-length-array.d.ts iterable-element.d.ts literal-union.d.ts merge-exclusive.d.ts merge.d.ts mutable.d.ts opaque.d.ts package-json.d.ts partial-deep.d.ts promisable.d.ts promise-value.d.ts readonly-deep.d.ts require-at-least-one.d.ts require-exactly-one.d.ts set-optional.d.ts set-required.d.ts set-return-type.d.ts stringified.d.ts tsconfig-json.d.ts union-to-intersection.d.ts utilities.d.ts value-of.d.ts ts41 camel-case.d.ts delimiter-case.d.ts index.d.ts kebab-case.d.ts pascal-case.d.ts snake-case.d.ts universal-url README.md browser.js index.js package.json uri-js README.md dist es5 uri.all.d.ts uri.all.js uri.all.min.d.ts uri.all.min.js esnext index.d.ts index.js regexps-iri.d.ts regexps-iri.js regexps-uri.d.ts regexps-uri.js schemes http.d.ts http.js https.d.ts https.js mailto.d.ts mailto.js urn-uuid.d.ts urn-uuid.js urn.d.ts urn.js ws.d.ts ws.js wss.d.ts wss.js uri.d.ts uri.js util.d.ts util.js package.json v8-compile-cache CHANGELOG.md README.md package.json v8-compile-cache.js visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js URLSearchParams-impl.js URLSearchParams.js infra.js public-api.js url-state-machine.js urlencoded.js utils.js package.json which-module CHANGELOG.md README.md index.js package.json which CHANGELOG.md README.md package.json which.js word-wrap README.md index.d.ts index.js package.json wrap-ansi index.js package.json readme.md wrappy README.md package.json wrappy.js y18n CHANGELOG.md README.md build lib cjs.js index.js platform-shims node.js package.json yallist README.md iterator.js package.json yallist.js yargs-parser CHANGELOG.md LICENSE.txt README.md browser.js build lib index.js string-utils.js tokenize-arg-string.js yargs-parser-types.js yargs-parser.js package.json yargs CHANGELOG.md README.md build lib argsert.js command.js completion-templates.js completion.js middleware.js parse-command.js typings common-types.js yargs-parser-types.js usage.js utils apply-extends.js is-promise.js levenshtein.js obj-filter.js process-argv.js set-blocking.js which-module.js validation.js yargs-factory.js yerror.js helpers index.js package.json locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json package-lock.json package.json package.json src App.js Context context.js Frontend Components Conducted Card Conducted Card.js Election Card Election Card.js Election Result Card.js Login Form Login.css Login.js 
Menubar Menubar.js Register Form Register.css Register.js Result Card Result Card.js Scrollbars Scrollbars.js Vote Card Vote Card.js Modal Modal.js Pages Conducted Conducted.js Details Details.js Election Election.js Home Home.css Home.js Login Login.css Login.js Results Results.js Vote Vote.js __mocks__ fileMock.js assets logo-black.svg logo-white.svg config.js global.css index.html index.js jest.init.js main.test.js utils.js wallet login index.html
# URI.js URI.js is an [RFC 3986](http://www.ietf.org/rfc/rfc3986.txt) compliant, scheme extendable URI parsing/validating/resolving library for all JavaScript environments (browsers, Node.js, etc). It is also compliant with the IRI ([RFC 3987](http://www.ietf.org/rfc/rfc3987.txt)), IDNA ([RFC 5890](http://www.ietf.org/rfc/rfc5890.txt)), IPv6 Address ([RFC 5952](http://www.ietf.org/rfc/rfc5952.txt)), IPv6 Zone Identifier ([RFC 6874](http://www.ietf.org/rfc/rfc6874.txt)) specifications. URI.js has an extensive test suite, and works in all (Node.js, web) environments. It weighs in at 6.4kb (gzipped, 17kb deflated). ## API ### Parsing URI.parse("uri://user:pass@example.com:123/one/two.three?q1=a1&q2=a2#body"); //returns: //{ // scheme : "uri", // userinfo : "user:pass", // host : "example.com", // port : 123, // path : "/one/two.three", // query : "q1=a1&q2=a2", // fragment : "body" //} ### Serializing URI.serialize({scheme : "http", host : "example.com", fragment : "footer"}) === "http://example.com/#footer" ### Resolving URI.resolve("uri://a/b/c/d?q", "../../g") === "uri://a/g" ### Normalizing URI.normalize("HTTP://ABC.com:80/%7Esmith/home.html") === "http://abc.com/~smith/home.html" ### Comparison URI.equal("example://a/b/c/%7Bfoo%7D", "eXAMPLE://a/./b/../b/%63/%7bfoo%7d") === true ### IP Support //IPv4 normalization URI.normalize("//192.068.001.000") === "//192.68.1.0" //IPv6 normalization URI.normalize("//[2001:0:0DB8::0:0001]") === "//[2001:0:db8::1]" //IPv6 zone identifier support URI.parse("//[2001:db8::7%25en1]"); //returns: //{ // host : "2001:db8::7%en1" //} ### IRI Support //convert IRI to URI URI.serialize(URI.parse("http://examplé.org/rosé")) === "http://xn--exampl-gva.org/ros%C3%A9" //convert URI to IRI URI.serialize(URI.parse("http://xn--exampl-gva.org/ros%C3%A9"), {iri:true}) === "http://examplé.org/rosé" ### Options All of the above functions can accept an additional options argument that is an object that can contain one or more of the following properties: * `scheme` (string) Indicates the scheme that the URI should be treated as, overriding the URI's normal scheme parsing behavior. * `reference` (string) If set to `"suffix"`, it indicates that the URI is in the suffix format, and the validator will use the option's `scheme` property to determine the URI's scheme. * `tolerant` (boolean, false) If set to `true`, the parser will relax URI resolving rules. * `absolutePath` (boolean, false) If set to `true`, the serializer will not resolve a relative `path` component. * `iri` (boolean, false) If set to `true`, the serializer will unescape non-ASCII characters as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `unicodeSupport` (boolean, false) If set to `true`, the parser will unescape non-ASCII characters in the parsed output as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `domainHost` (boolean, false) If set to `true`, the library will treat the `host` component as a domain name, and convert IDNs (International Domain Names) as per [RFC 5891](http://www.ietf.org/rfc/rfc5891.txt). ## Scheme Extendable URI.js supports inserting custom [scheme](http://en.wikipedia.org/wiki/URI_scheme) dependent processing rules. 
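Before the list of built-in schemes, here is a short sketch of how the options object described above is passed in practice; it simply reuses the IRI conversion from the IRI Support section, so the expected output is the one documented there:

```javascript
const URI = require("uri-js");

// The options object is always the last argument; `iri: true` asks the
// serializer to leave non-ASCII characters unescaped (see Options above).
const components = URI.parse("http://xn--exampl-gva.org/ros%C3%A9");
const iri = URI.serialize(components, { iri: true });
// iri === "http://examplé.org/rosé"
```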
Currently, URI.js has built-in support for the following schemes: * http \[[RFC 2616](http://www.ietf.org/rfc/rfc2616.txt)\] * https \[[RFC 2818](http://www.ietf.org/rfc/rfc2818.txt)\] * ws \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\] * wss \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\] * mailto \[[RFC 6068](http://www.ietf.org/rfc/rfc6068.txt)\] * urn \[[RFC 2141](http://www.ietf.org/rfc/rfc2141.txt)\] * urn:uuid \[[RFC 4122](http://www.ietf.org/rfc/rfc4122.txt)\] ### HTTP/HTTPS Support URI.equal("HTTP://ABC.COM:80", "http://abc.com/") === true URI.equal("https://abc.com", "HTTPS://ABC.COM:443/") === true ### WS/WSS Support URI.parse("wss://example.com/foo?bar=baz"); //returns: //{ // scheme : "wss", // host: "example.com", // resourceName: "/foo?bar=baz", // secure: true, //} URI.equal("WS://ABC.COM:80/chat#one", "ws://abc.com/chat") === true ### Mailto Support URI.parse("mailto:alpha@example.com,bravo@example.com?subject=SUBSCRIBE&body=Sign%20me%20up!"); //returns: //{ // scheme : "mailto", // to : ["alpha@example.com", "bravo@example.com"], // subject : "SUBSCRIBE", // body : "Sign me up!" //} URI.serialize({ scheme : "mailto", to : ["alpha@example.com"], subject : "REMOVE", body : "Please remove me", headers : { cc : "charlie@example.com" } }) === "mailto:alpha@example.com?cc=charlie@example.com&subject=REMOVE&body=Please%20remove%20me" ### URN Support URI.parse("urn:example:foo"); //returns: //{ // scheme : "urn", // nid : "example", // nss : "foo", //} #### URN UUID Support URI.parse("urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6"); //returns: //{ // scheme : "urn", // nid : "uuid", // uuid : "f81d4fae-7dec-11d0-a765-00a0c91e6bf6", //} ## Usage To load in a browser, use the following tag: <script type="text/javascript" src="uri-js/dist/es5/uri.all.min.js"></script> To load in a CommonJS/Module environment, first install with npm/yarn by running on the command line: npm install uri-js # OR yarn add uri-js Then, in your code, load it using: const URI = require("uri-js"); If you are writing your code in ES6+ (ESNEXT) or TypeScript, you would load it using: import * as URI from "uri-js"; Or you can load just what you need using named exports: import { parse, serialize, resolve, resolveComponents, normalize, equal, removeDotSegments, pctEncChar, pctDecChars, escapeComponent, unescapeComponent } from "uri-js"; ## Breaking changes ### Breaking changes from 3.x URN parsing has been completely changed to better align with the specification. Scheme is now always `urn`, but has two new properties: `nid` which contains the Namespace Identifier, and `nss` which contains the Namespace Specific String. The `nss` property will be removed by higher order scheme handlers, such as the UUID URN scheme handler. The UUID of a URN can now be found in the `uuid` property. ### Breaking changes from 2.x URI validation has been removed as it was slow, exposed a vulnerability, and was generally not useful. ### Breaking changes from 1.x The `errors` array on parsed components is now an `error` string. # Glob Match files using the patterns the shell uses, like stars and stuff.
[![Build Status](https://travis-ci.org/isaacs/node-glob.svg?branch=master)](https://travis-ci.org/isaacs/node-glob/) [![Build Status](https://ci.appveyor.com/api/projects/status/kd7f3yftf7unxlsx?svg=true)](https://ci.appveyor.com/project/isaacs/node-glob) [![Coverage Status](https://coveralls.io/repos/isaacs/node-glob/badge.svg?branch=master&service=github)](https://coveralls.io/github/isaacs/node-glob?branch=master) This is a glob implementation in JavaScript. It uses the `minimatch` library to do its matching. ![a fun cartoon logo made of glob characters](logo/glob.png) ## Usage Install with npm ``` npm i glob ``` ```javascript var glob = require("glob") // options is optional glob("**/*.js", options, function (er, files) { // files is an array of filenames. // If the `nonull` option is set, and nothing // was found, then files is ["**/*.js"] // er is an error object or null. }) ``` ## Glob Primer "Globs" are the patterns you type when you do stuff like `ls *.js` on the command line, or put `build/*` in a `.gitignore` file. Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with `{` and end with `}`, with any number of comma-delimited sections within. Braced sections may contain slash characters, so `a{/b/c,bcd}` would expand into `a/b/c` and `abcd`. The following characters have special magic meaning when used in a path portion: * `*` Matches 0 or more characters in a single path portion * `?` Matches 1 character * `[...]` Matches a range of characters, similar to a RegExp range. If the first character of the range is `!` or `^` then it matches any character not in the range. * `!(pattern|pattern|pattern)` Matches anything that does not match any of the patterns provided. * `?(pattern|pattern|pattern)` Matches zero or one occurrence of the patterns provided. * `+(pattern|pattern|pattern)` Matches one or more occurrences of the patterns provided. * `*(a|b|c)` Matches zero or more occurrences of the patterns provided * `@(pattern|pat*|pat?erN)` Matches exactly one of the patterns provided * `**` If a "globstar" is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories. ### Dots If a file or directory path portion has a `.` as the first character, then it will not match any glob pattern unless that pattern's corresponding path part also has a `.` as its first character. For example, the pattern `a/.*/c` would match the file at `a/.b/c`. However the pattern `a/*/c` would not, because `*` does not start with a dot character. You can make glob treat dots as normal characters by setting `dot:true` in the options. ### Basename Matching If you set `matchBase:true` in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, `*.js` would match `test/simple/basic.js`. ### Empty Sets If no matching files are found, then an empty array is returned. This differs from the shell, where the pattern itself is returned. For example: $ echo a*s*d*f a*s*d*f To get the bash-style behavior, set the `nonull:true` in the options. ### See Also: * `man sh` * `man bash` (Search for "Pattern Matching") * `man 3 fnmatch` * `man 5 gitignore` * [minimatch documentation](https://github.com/isaacs/minimatch) ## glob.hasMagic(pattern, [options]) Returns `true` if there are any special characters in the pattern, and `false` otherwise. Note that the options affect the results. 
If `noext:true` is set in the options object, then `+(a|b)` will not be considered a magic pattern. If the pattern has a brace expansion, like `a/{b/c,x/y}` then that is considered magical, unless `nobrace:true` is set in the options. ## glob(pattern, [options], cb) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * `cb` `{Function}` * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Perform an asynchronous glob search. ## glob.sync(pattern, [options]) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * return: `{Array<String>}` filenames found matching the pattern Perform a synchronous glob search. ## Class: glob.Glob Create a Glob object by instantiating the `glob.Glob` class. ```javascript var Glob = require("glob").Glob var mg = new Glob(pattern, options, cb) ``` It's an EventEmitter, and starts walking the filesystem to find matches immediately. ### new glob.Glob(pattern, [options], [cb]) * `pattern` `{String}` pattern to search for * `options` `{Object}` * `cb` `{Function}` Called when an error occurs, or matches are found * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Note that if the `sync` flag is set in the options, then matches will be immediately available on the `g.found` member. ### Properties * `minimatch` The minimatch object that the glob uses. * `options` The options object passed in. * `aborted` Boolean which is set to true when calling `abort()`. There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls. * `cache` Convenience object. Each field has the following possible values: * `false` - Path does not exist * `true` - Path exists * `'FILE'` - Path exists, and is not a directory * `'DIR'` - Path exists, and is a directory * `[file, entries, ...]` - Path exists, is a directory, and the array value is the results of `fs.readdir` * `statCache` Cache of `fs.stat` results, to prevent statting the same path multiple times. * `symlinks` A record of which paths are symbolic links, which is relevant in resolving `**` patterns. * `realpathCache` An optional object which is passed to `fs.realpath` to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used. ### Events * `end` When the matching is finished, this is emitted with all the matches found. If the `nonull` option is set, and no match was found, then the `matches` list contains the original pattern. The matches are sorted, unless the `nosort` flag is set. * `match` Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath. * `error` Emitted when an unexpected error is encountered, or whenever any fs error occurs if `options.strict` is set. * `abort` When `abort()` is called, this event is raised. ### Methods * `pause` Temporarily stop the search * `resume` Resume the search * `abort` Stop the search forever ### Options All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications. All options are false by default, unless otherwise noted. All options are added to the Glob object, as well. If you are running many `glob` operations, you can pass a Glob object as the `options` argument to a subsequent operation to shortcut some `stat` and `readdir` calls. 
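As a rough sketch of that reuse pattern (building on the async form from the Usage section; error handling kept minimal):

```javascript
var glob = require("glob")

// glob() returns the underlying Glob object for the first search.
var firstSearch = glob("**/*.js", function (er, jsFiles) {
  if (er) throw er
  // Passing that Glob object as the options for a second search lets it
  // reuse the stat and readdir information cached above.
  glob("**/*.json", firstSearch, function (er, jsonFiles) {
    if (er) throw er
    console.log(jsFiles.length, jsonFiles.length)
  })
})
```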
At the very least, you may pass in shared `symlinks`, `statCache`, `realpathCache`, and `cache` options, so that parallel glob operations will be sped up by sharing information about the filesystem. * `cwd` The current working directory in which to search. Defaults to `process.cwd()`. * `root` The place where patterns starting with `/` will be mounted onto. Defaults to `path.resolve(options.cwd, "/")` (`/` on Unix systems, and `C:\` or some such on Windows.) * `dot` Include `.dot` files in normal matches and `globstar` matches. Note that an explicit dot in a portion of the pattern will always match dot files. * `nomount` By default, a pattern starting with a forward-slash will be "mounted" onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior. * `mark` Add a `/` character to directory matches. Note that this requires additional stat calls. * `nosort` Don't sort the results. * `stat` Set to true to stat *all* results. This reduces performance somewhat, and is completely unnecessary, unless `readdir` is presumed to be an untrustworthy indicator of file existence. * `silent` When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the `silent` option to true to suppress these warnings. * `strict` When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the `strict` option to raise an error in these cases. * `cache` See `cache` property above. Pass in a previously generated cache object to save some fs calls. * `statCache` A cache of results of filesystem information, to prevent unnecessary stat calls. While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See "Race Conditions" below.) * `symlinks` A cache of known symbolic links. You may pass in a previously generated `symlinks` object to save `lstat` calls when resolving `**` matches. * `sync` DEPRECATED: use `glob.sync(pattern, opts)` instead. * `nounique` In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior. * `nonull` Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3). * `debug` Set to enable debug logging in minimatch and glob. * `nobrace` Do not expand `{a,b}` and `{1..3}` brace sets. * `noglobstar` Do not match `**` against multiple filenames. (Ie, treat it as a normal `*` instead.) * `noext` Do not match `+(a|b)` "extglob" patterns. * `nocase` Perform a case-insensitive match. Note: on case-insensitive filesystems, non-magic patterns will match by default, since `stat` and `readdir` will not raise errors. * `matchBase` Perform a basename-only match if the pattern does not contain any slash characters. That is, `*.js` would be treated as equivalent to `**/*.js`, matching all js files in all directories. * `nodir` Do not match directories, only files. (Note: to match *only* directories, simply put a `/` at the end of the pattern.) * `ignore` Add a pattern or an array of glob patterns to exclude matches. Note: `ignore` patterns are *always* in `dot:true` mode, regardless of any other settings. * `follow` Follow symlinked directories when expanding `**` patterns. 
Note that this can result in a lot of duplicate references in the presence of cyclic links. * `realpath` Set to true to call `fs.realpath` on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink) * `absolute` Set to true to always receive absolute paths for matched files. Unlike `realpath`, this also affects the values returned in the `match` event. * `fs` File-system object with Node's `fs` API. By default, the built-in `fs` module will be used. Set to a volume provided by a library like `memfs` to avoid using the "real" file-system. ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.3, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. Note that symlinked directories are not crawled as part of a `**`, though their contents may match against subsequent portions of the pattern. This prevents infinite loops and duplicates and the like. If an escaped pattern has no matches, and the `nonull` flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, `glob.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. ### Comments and Negation Previously, this module let you mark a pattern as a "comment" if it started with a `#` character, or a "negated" pattern if it started with a `!` character. These options were deprecated in version 5, and removed in version 6. To specify things that should not match, use the `ignore` option. ## Windows **Please only use forward-slashes in glob expressions.** Though windows uses either `/` or `\` as its path separator, only `/` characters are used by this glob implementation. You must use forward-slashes **only** in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators. Results from absolute patterns such as `/foo/*` are mounted onto the root setting using `path.join`. On windows, this will by default result in `/foo/*` matching `C:\foo\bar.txt`. ## Race Conditions Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such. As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result. As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls. 
Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem. ## Glob Logo Glob's logo was created by [Tanya Brassie](http://tanyabrassie.com/). Logo files can be found [here](https://github.com/isaacs/node-glob/tree/master/logo). The logo is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). ## Contributing Any change to behavior (including bugfixes) must come with a test. Patches that fail tests or reduce performance will be rejected. ``` # to run tests npm test # to re-generate test fixtures npm run test-regen # to benchmark against bash/zsh npm run bench # to profile javascript npm run prof ``` ![](oh-my-glob.gif) # base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. **WARNING:** This module is **NOT RFC3548** compliant, it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. ## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base. Base | Alphabet ------------- | ------------- 2 | `01` 8 | `01234567` 11 | `0123456789a` 16 | `0123456789abcdef` 32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ` 32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32) 36 | `0123456789abcdefghijklmnopqrstuvwxyz` 58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz` 62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ` 64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/` 67 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~` ## How it works It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. Then for every leading zero in the input (not significant as a number) it will encode as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base. For example, a base58 alphabet packs roughly 5.858 bits per character. This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character. While unusual, this does mean that no padding is required and it works for bases like 43. ## LICENSE [MIT](LICENSE) A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets. 
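Going back to the "How it works" description above, a small sketch of the `000f` case (the base16 alphabet here is only for illustration; the exact return type, `Buffer` or `Uint8Array`, depends on the base-x version):

```javascript
var BASE16 = '0123456789abcdef'
var base16 = require('base-x')(BASE16)

// Each leading zero character maps to a whole zero byte, so '000f'
// decodes to four bytes rather than the two bytes of canonical hex.
var bytes = base16.decode('000f')
console.log(bytes) // => 00 00 00 0f
```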
# yargs-parser ![ci](https://github.com/yargs/yargs-parser/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/yargs-parser) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. <img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/main/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js const argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```console $ node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js const argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```console { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js const parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## Deno Example As of `v19` `yargs-parser` supports [Deno](https://github.com/denoland/deno): ```typescript import parser from "https://deno.land/x/yargs_parser/deno.ts"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` ## ESM Example As of `v19` `yargs-parser` supports ESM (_both in Node.js and in the browser_): **Node.js:** ```js import parser from 'yargs-parser' const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` **Browsers:** ```html <!doctype html> <body> <script type="module"> import parser from "https://unpkg.com/yargs-parser@19.0.0/browser.js"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) </script> </body> ``` ## API ### parser(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. 
* `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```console $ node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```console $ node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```console $ node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```console $ node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```console $ node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```console $ node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```console $ node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```console $ node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### parse positional numbers * default: `true` * key: `parse-positional-numbers` Should positional keys that look like numbers be treated as such. 
```console $ node example.js 99.3 { _: [99.3] } ``` _if disabled:_ ```console $ node example.js 99.3 { _: ['99.3'] } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```console $ node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```console $ node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. ### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```console $ node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```console $ node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```console $ node example --arr 1 2 { _: [], arr: [1, 2] } ``` _if disabled:_ ```console $ node example --arr 1 2 { _: [2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```console $ node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```console $ node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```console $ node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```console $ node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```console $ node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```console $ node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. 
_If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. _If disabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC # flatted [![Downloads](https://img.shields.io/npm/dm/flatted.svg)](https://www.npmjs.com/package/flatted) [![Coverage Status](https://coveralls.io/repos/github/WebReflection/flatted/badge.svg?branch=main)](https://coveralls.io/github/WebReflection/flatted?branch=main) [![Build Status](https://travis-ci.com/WebReflection/flatted.svg?branch=main)](https://travis-ci.com/WebReflection/flatted) [![License: ISC](https://img.shields.io/badge/License-ISC-yellow.svg)](https://opensource.org/licenses/ISC) ![WebReflection status](https://offline.report/status/webreflection.svg) ![snow flake](./flatted.jpg) <sup>**Social Media Photo by [Matt Seymour](https://unsplash.com/@mattseymour) on [Unsplash](https://unsplash.com/)**</sup> A super light (0.5K) and fast circular JSON parser, directly from the creator of [CircularJSON](https://github.com/WebReflection/circular-json/#circularjson). Now available also for **[PHP](./php/flatted.php)**. ```sh npm i flatted ``` Usable via [CDN](https://unpkg.com/flatted) or as a regular module. ```js // ESM import {parse, stringify, toJSON, fromJSON} from 'flatted'; // CJS const {parse, stringify, toJSON, fromJSON} = require('flatted'); const a = [{}]; a[0].a = a; a.push(a); stringify(a); // [["1","0"],{"a":"0"}] ``` ## toJSON and fromJSON If you'd like to implicitly survive JSON serialization, these two helpers help: ```js import {toJSON, fromJSON} from 'flatted'; class RecursiveMap extends Map { static fromJSON(any) { return new this(fromJSON(any)); } toJSON() { return toJSON([...this.entries()]); } } const recursive = new RecursiveMap; const same = {}; same.same = same; recursive.set('same', same); const asString = JSON.stringify(recursive); const asMap = RecursiveMap.fromJSON(JSON.parse(asString)); asMap.get('same') === asMap.get('same').same; // true ``` ## Flatted VS JSON As with every other specialized format capable of serializing and deserializing circular data, you should never `JSON.parse(Flatted.stringify(data))`, and you should never `Flatted.parse(JSON.stringify(data))`. The only round-trip that works is `Flatted.parse(Flatted.stringify(data))`, as is also the case for _CircularJSON_ or any other such format; otherwise there is no guaranteed data integrity.
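As a tiny sketch of that supported round-trip (names here are illustrative):

```js
const {parse, stringify} = require('flatted');

const data = {};
data.self = data; // circular reference

// Flatted.parse(Flatted.stringify(data)) is the only safe round-trip.
const restored = parse(stringify(data));
console.log(restored.self === restored); // true
```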
Also, please note that this project serializes and deserializes only data compatible with JSON: sockets, or anything else with internal classes different from those allowed by the JSON standard, won't be serialized and deserialized as expected. ### New in V1: Exact same JSON API * Added a [reviver](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#Syntax) parameter to `.parse(string, reviver)` so you can revive your own objects. * Added a [replacer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#Syntax) and a `space` parameter to `.stringify(object, replacer, space)` for feature parity with the JSON signature. ### Compatibility All ECMAScript engines compatible with `Map`, `Set`, `Object.keys`, and `Array.prototype.reduce` will work, even if polyfilled. ### How does it work? While stringifying, all Objects, including Arrays, and strings, are flattened out and replaced by a unique index. `*` Once parsed, all indexes are resolved through the flattened collection. <sup><sub>`*` represented as string to avoid conflicts with numbers</sub></sup> ```js // logic example var a = [{one: 1}, {two: '2'}]; a[0].a = a; // a is the main object, will be at index '0' // {one: 1} is the second object, index '1' // {two: '2'} the third, in '2', and it has a string // which will be found at index '3' Flatted.stringify(a); // [["1","2"],{"one":1,"a":"0"},{"two":"3"},"2"] // a[one,two] {one: 1, a} {two: '2'} '2' ``` [![NPM registry](https://img.shields.io/npm/v/as-bignum.svg?style=for-the-badge)](https://www.npmjs.com/package/as-bignum)[![Build Status](https://img.shields.io/travis/com/MaxGraey/as-bignum/master?style=for-the-badge)](https://travis-ci.com/MaxGraey/as-bignum)[![NPM license](https://img.shields.io/badge/license-Apache%202.0-ba68c8.svg?style=for-the-badge)](LICENSE.md) ## WebAssembly fixed-length big numbers written in [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) ### Status: Work in progress Provides wide numeric types such as `u128`, `u256`, `i128`, `i256` and fixed-point types, along with their arithmetic operations. The `safe` namespace contains equivalents with overflow/underflow traps. All of these types are useful for economic and cryptographic use cases and provide deterministic behavior.
### Install > yarn add as-bignum or > npm i as-bignum ### Usage via AssemblyScript ```ts import { u128 } from "as-bignum"; declare function logF64(value: f64): void; declare function logU128(hi: u64, lo: u64): void; var a = u128.One; var b = u128.from(-32); // same as u128.from<i32>(-32) var c = new u128(0x1, -0xF); var d = u128.from(0x0123456789ABCDEF); // same as u128.from<i64>(0x0123456789ABCDEF) var e = u128.from('0x0123456789ABCDEF01234567'); var f = u128.fromString('11100010101100101', 2); // same as u128.from('0b11100010101100101') var r = d / c + (b << 5) + e; logF64(r.as<f64>()); logU128(r.hi, r.lo); ``` ### Usage via JavaScript/Typescript ```ts TODO ``` ### List of types - [x] [`u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u128.ts) unsigned type (tested) - [ ] [`u256`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u256.ts) unsigned type (very basic) - [ ] `i128` signed type - [ ] `i256` signed type --- - [x] [`safe.u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/safe/u128.ts) unsigned type (tested) - [ ] `safe.u256` unsigned type - [ ] `safe.i128` signed type - [ ] `safe.i256` signed type --- - [ ] [`fp128<Q>`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/fixed/fp128.ts) generic fixed point signed type٭ (very basic for now) - [ ] `fp256<Q>` generic fixed point signed type٭ --- - [ ] `safe.fp128<Q>` generic fixed point signed type٭ - [ ] `safe.fp256<Q>` generic fixed point signed type٭ ٭ _typename_ `Q` _is a type representing count of fractional bits_ # path-parse [![Build Status](https://travis-ci.org/jbgutierrez/path-parse.svg?branch=master)](https://travis-ci.org/jbgutierrez/path-parse) > Node.js [`path.parse(pathString)`](https://nodejs.org/api/path.html#path_path_parse_pathstring) [ponyfill](https://ponyfill.com). ## Install ``` $ npm install --save path-parse ``` ## Usage ```js var pathParse = require('path-parse'); pathParse('/home/user/dir/file.txt'); //=> { // root : "/", // dir : "/home/user/dir", // base : "file.txt", // ext : ".txt", // name : "file" // } ``` ## API See [`path.parse(pathString)`](https://nodejs.org/api/path.html#path_path_parse_pathstring) docs. ### pathParse(path) ### pathParse.posix(path) The Posix specific version. ### pathParse.win32(path) The Windows specific version. ## License MIT © [Javier Blanco](http://jbgutierrez.info) <img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/img/ajv.svg"> &nbsp; # Ajv JSON schema validator The fastest JSON validator for Node.js and browser. Supports JSON Schema draft-04/06/07/2019-09/2020-12 ([draft-04 support](https://ajv.js.org/json-schema.html#draft-04) requires ajv-draft-04 package) and JSON Type Definition [RFC8927](https://datatracker.ietf.org/doc/rfc8927/). 
[![build](https://github.com/ajv-validator/ajv/workflows/build/badge.svg)](https://github.com/ajv-validator/ajv/actions?query=workflow%3Abuild) [![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv) [![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv) [![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master) [![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv) [![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin) ## Platinum sponsors [<img src="https://ajv.js.org/img/mozilla.svg" width="45%">](https://www.mozilla.org)<img src="https://ajv.js.org/img/gap.svg" width="8%">[<img src="https://ajv.js.org/img/reserved.svg" width="45%">](https://opencollective.com/ajv) ## Ajv online event - May 20, 10am PT / 6pm UK We will talk about: - new features of Ajv version 8. - the improvements sponsored by Mozilla's MOSS grant. - how Ajv is used in JavaScript applications. Speakers: - [Evgeny Poberezkin](https://github.com/epoberezkin), the creator of Ajv. - [Mehan Jayasuriya](https://github.com/mehan), Program Officer at Mozilla Foundation, leading the [MOSS](https://www.mozilla.org/en-US/moss/) and other programs investing in the open source and community ecosystems. - [Matteo Collina](https://github.com/mcollina), Technical Director at NearForm and Node.js Technical Steering Committee member, creator of Fastify web framework. - [Kin Lane](https://github.com/kinlane), Chief Evangelist at Postman. Studying the tech, business & politics of APIs since 2010. Presidential Innovation Fellow during the Obama administration. - [Ulysse Carion](https://github.com/ucarion), the creator of JSON Type Definition specification. [Gajus Kuizinas](https://github.com/gajus) will host the event. Please [register here](https://us02web.zoom.us/webinar/register/2716192553618/WN_erJ_t4ICTHOnGC1SOybNnw). ## Contributing More than 100 people contributed to Ajv, and we would love to have you join the development. We welcome implementing new features that will benefit many users and ideas to improve our documentation. Please review [Contributing guidelines](./CONTRIBUTING.md) and [Code components](https://ajv.js.org/components.html). ## Documentation All documentation is available on the [Ajv website](https://ajv.js.org). Some useful site links: - [Getting started](https://ajv.js.org/guide/getting-started.html) - [JSON Schema vs JSON Type Definition](https://ajv.js.org/guide/schema-language.html) - [API reference](https://ajv.js.org/api.html) - [Strict mode](https://ajv.js.org/strict-mode.html) - [Standalone validation code](https://ajv.js.org/standalone.html) - [Security considerations](https://ajv.js.org/security.html) - [Command line interface](https://ajv.js.org/packages/ajv-cli.html) - [Frequently Asked Questions](https://ajv.js.org/faq.html) ## <a name="sponsors"></a>Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin) Since I asked to support Ajv development 40 people and 6 organizations contributed via GitHub and OpenCollective - this support helped receiving the MOSS grant! Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released. 
Please sponsor Ajv via: - [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it) - [Ajv Open Collective️](https://opencollective.com/ajv) Thank you. #### Open Collective sponsors <a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a> <a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a> ## Performance Ajv generates code to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization. Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=62,4,1&chs=600x416&chxl=-1:|ajv|@exodus&#x2F;schemasafe|is-my-json-valid|djv|@cfworker&#x2F;json-schema|jsonschema&chd=t:100,69.2,51.5,13.1,5.1,1.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements JSON Schema [draft-06/07/2019-09/2020-12](http://json-schema.org/) standards (draft-04 is supported in v6): - all validation keywords (see [JSON Schema validation keywords](https://ajv.js.org/json-schema.html)) - [OpenAPI](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.3.md) extensions: - NEW: keyword [discriminator](https://ajv.js.org/json-schema.html#discriminator). - keyword [nullable](https://ajv.js.org/json-schema.html#nullable). 
- full support of remote references (remote schemas have to be added with `addSchema` or compiled to be available) - support of recursive references between schemas - correct string lengths for strings with unicode pairs - JSON Schema [formats](https://ajv.js.org/guide/formats.html) (with [ajv-formats](https://github.com/ajv-validator/ajv-formats) plugin). - [validates schemas against meta-schema](https://ajv.js.org/api.html#api-validateschema) - NEW: supports [JSON Type Definition](https://datatracker.ietf.org/doc/rfc8927/): - all keywords (see [JSON Type Definition schema forms](https://ajv.js.org/json-type-definition.html)) - meta-schema for JTD schemas - "union" keyword and user-defined keywords (can be used inside "metadata" member of the schema) - supports [browsers](https://ajv.js.org/guide/environments.html#browsers) and Node.js 10.x - current - [asynchronous loading](https://ajv.js.org/guide/managing-schemas.html#asynchronous-schema-loading) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](https://ajv.js.org/options.html#allerrors) - [error messages with parameters](https://ajv.js.org/api.html#validation-errors) describing error reasons to allow error message generation - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [removing-additional-properties](https://ajv.js.org/guide/modifying-data.html#removing-additional-properties) - [assigning defaults](https://ajv.js.org/guide/modifying-data.html#assigning-defaults) to missing properties and items - [coercing data](https://ajv.js.org/guide/modifying-data.html#coercing-data-types) to the types specified in `type` keywords - [user-defined keywords](https://ajv.js.org/guide/user-keywords.html) - additional extension keywords with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [\$data reference](https://ajv.js.org/guide/combining-schemas.html#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](https://ajv.js.org/guide/async-validation.html) of user-defined formats and keywords ## Install To install version 8: ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://runkit.com/npm/ajv In JavaScript: ```javascript // or ESM/TypeScript import import Ajv from "ajv" // Node.js require: const Ajv = require("ajv") const ajv = new Ajv() // options can be passed, e.g. {allErrors: true} const schema = { type: "object", properties: { foo: {type: "integer"}, bar: {type: "string"} }, required: ["foo"], additionalProperties: false, } const data = { foo: 1, bar: "abc" } const validate = ajv.compile(schema) const valid = validate(data) if (!valid) console.log(validate.errors) ``` Learn how to use Ajv and see more examples in the [Guide: getting started](https://ajv.js.org/guide/getting-started.html) ## Changes history See [https://github.com/ajv-validator/ajv/releases](https://github.com/ajv-validator/ajv/releases) **Please note**: [Changes in version 8.0.0](https://github.com/ajv-validator/ajv/releases/tag/v8.0.0) [Version 7.0.0](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0) [Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0). ## Code of conduct Please review and follow the [Code of conduct](./CODE_OF_CONDUCT.md). Please report any unacceptable behaviour to ajv.validator@gmail.com - it will be reviewed by the project team. 
## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ## Open-source software support Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides a centralised support to open-source software users, in addition to the support provided by software maintainers. ## License [MIT](./LICENSE) # lodash.clonedeep v4.5.0 The [lodash](https://lodash.com/) method `_.cloneDeep` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.clonedeep ``` In Node.js: ```js var cloneDeep = require('lodash.clonedeep'); ``` See the [documentation](https://lodash.com/docs#cloneDeep) or [package source](https://github.com/lodash/lodash/blob/4.5.0-npm-packages/lodash.clonedeep) for more details. # is-extglob [![NPM version](https://img.shields.io/npm/v/is-extglob.svg?style=flat)](https://www.npmjs.com/package/is-extglob) [![NPM downloads](https://img.shields.io/npm/dm/is-extglob.svg?style=flat)](https://npmjs.org/package/is-extglob) [![Build Status](https://img.shields.io/travis/jonschlinkert/is-extglob.svg?style=flat)](https://travis-ci.org/jonschlinkert/is-extglob) > Returns true if a string has an extglob. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-extglob ``` ## Usage ```js var isExtglob = require('is-extglob'); ``` **True** ```js isExtglob('?(abc)'); isExtglob('@(abc)'); isExtglob('!(abc)'); isExtglob('*(abc)'); isExtglob('+(abc)'); ``` **False** Escaped extglobs: ```js isExtglob('\\?(abc)'); isExtglob('\\@(abc)'); isExtglob('\\!(abc)'); isExtglob('\\*(abc)'); isExtglob('\\+(abc)'); ``` Everything else... ```js isExtglob('foo.js'); isExtglob('!foo.js'); isExtglob('*.js'); isExtglob('**/abc.js'); isExtglob('abc/*.js'); isExtglob('abc/(aaa|bbb).js'); isExtglob('abc/[a-z].js'); isExtglob('abc/{a,b}.js'); isExtglob('abc/?.js'); isExtglob('abc.js'); isExtglob('abc/def/ghi.js'); ``` ## History **v2.0** Adds support for escaping. Escaped exglobs no longer return true. ## About ### Related projects * [has-glob](https://www.npmjs.com/package/has-glob): Returns `true` if an array has a glob pattern. | [homepage](https://github.com/jonschlinkert/has-glob "Returns `true` if an array has a glob pattern.") * [is-glob](https://www.npmjs.com/package/is-glob): Returns `true` if the given string looks like a glob pattern or an extglob pattern… [more](https://github.com/jonschlinkert/is-glob) | [homepage](https://github.com/jonschlinkert/is-glob "Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a bet") * [micromatch](https://www.npmjs.com/package/micromatch): Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | [homepage](https://github.com/jonschlinkert/micromatch "Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch.") ### Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). 
### Building docs _(This document was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme) (a [verb](https://github.com/verbose/verb) generator), please don't edit the readme directly. Any changes to the readme must be made in [.verb.md](.verb.md).)_ To generate the readme and API documentation with [verb](https://github.com/verbose/verb): ```sh $ npm install -g verb verb-generate-readme && verb ``` ### Running tests Install dev dependencies: ```sh $ npm install -d && npm test ``` ### Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](http://twitter.com/jonschlinkert) ### License Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT license](https://github.com/jonschlinkert/is-extglob/blob/master/LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.1.31, on October 12, 2016._ # minipass A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough) [It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers. Supports `pipe()`ing (including multi-`pipe()` and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea. There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory. There is also no `unpipe()` method. Once you start piping, there is no stopping it! If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called. `objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set. This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. Once you're done transforming the data however you want, call `super.write()` with the transform output. For some examples of streams that extend Minipass in various ways, check out: - [minizlib](http://npm.im/minizlib) - [fs-minipass](http://npm.im/fs-minipass) - [tar](http://npm.im/tar) - [minipass-collect](http://npm.im/minipass-collect) - [minipass-flush](http://npm.im/minipass-flush) - [minipass-pipeline](http://npm.im/minipass-pipeline) - [tap](http://npm.im/tap) - [tap-parser](http://npm.im/tap-parser) - [treport](http://npm.im/treport) - [minipass-fetch](http://npm.im/minipass-fetch) - [pacote](http://npm.im/pacote) - [make-fetch-happen](http://npm.im/make-fetch-happen) - [cacache](http://npm.im/cacache) - [ssri](http://npm.im/ssri) - [npm-registry-fetch](http://npm.im/npm-registry-fetch) - [minipass-json-stream](http://npm.im/minipass-json-stream) - [minipass-sized](http://npm.im/minipass-sized) ## Differences from Node.js Streams There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams. 
Please read these caveats if you are familiar with node-core streams and intend to use Minipass streams in your programs. ### Timing Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer. Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them. This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately. However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time. This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms. ### No High/Low Water Marks Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value. Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns). If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it. ### Hazards of Buffering (or: Why Minipass Is So Fast) Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller. This minimizes buffering. Consider this case: ```js const {PassThrough} = require('stream') const p1 = new PassThrough({ highWaterMark: 1024 }) const p2 = new PassThrough({ highWaterMark: 1024 }) const p3 = new PassThrough({ highWaterMark: 1024 }) const p4 = new PassThrough({ highWaterMark: 1024 }) p1.pipe(p2).pipe(p3).pipe(p4) p4.on('data', () => console.log('made it through')) // this returns false and buffers, then writes to p2 on next tick (1) // p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2) // p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3) // p4 returns false and buffers, pausing p3, then emits 'data' and 'drain' // on next tick (4) // p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and // 'drain' on next tick (5) // p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6) // p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next // tick (7) p1.write(Buffer.alloc(2048)) // returns false ``` Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through! Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket. 
Consider the Minipass case:

```js
const m1 = new Minipass()
const m2 = new Minipass()
const m3 = new Minipass()
const m4 = new Minipass()
m1.pipe(m2).pipe(m3).pipe(m4)
m4.on('data', () => console.log('made it through'))

// m1 is flowing, so it writes the data to m2 immediately
// m2 is flowing, so it writes the data to m3 immediately
// m3 is flowing, so it writes the data to m4 immediately
// m4 is flowing, so it fires the 'data' event immediately, returns true
// m4's write returned true, so m3 is still flowing, returns true
// m3's write returned true, so m2 is still flowing, returns true
// m2's write returned true, so m1 is still flowing, returns true
// No event deferrals or buffering along the way!
m1.write(Buffer.alloc(2048)) // returns true
```

It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing.

As always, the faster implementation is the one that does less stuff and waits less time to do it.

### Immediately emit `end` for empty streams (when not paused)

If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately.

If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event.

### Emit `end` When Asked

One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted.

I.e., if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. (You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.)

To avoid calling handlers multiple times when they would not expect multiple ends to occur, all listeners are removed from the `'end'` event whenever it is emitted.

### Impact of "immediate flow" on Tee-streams

A "tee stream" is a stream piping to multiple destinations:

```js
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
tee.write('foo') // goes to both destinations
```

Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer.

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone
src.pipe(dest2) // gets nothing!
```

The solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead.

```js
// Safe example: tee to both places
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
src.pipe(tee) // tee gets 'foo', pipes to both locations
```

The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second:

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass() src.write('foo') src.on('data', handler1) // receives 'foo' right away src.on('data', handler2) // nothing to see here! ``` Using a dedicated tee-stream can be used in this case as well: ```js // Safe example: tee to both data handlers const src = new Minipass() src.write('foo') const tee = new Minipass() tee.on('data', handler1) tee.on('data', handler2) src.pipe(tee) ``` ## USAGE It's a stream! Use it like a stream and it'll most likely do what you want. ```js const Minipass = require('minipass') const mp = new Minipass(options) // optional: { encoding, objectMode } mp.write('foo') mp.pipe(someOtherStream) mp.end('bar') ``` ### OPTIONS * `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`. * `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value. ### API Implements the user-facing portions of Node.js's `Readable` and `Writable` streams. ### Methods * `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or true if it's still in "flowing" mode. * `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed. * `setEncoding(encoding)` - Set the encoding for data coming of the stream. This can only be done once. * `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed. * `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted. * `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations. * `on(ev, fn)`, `emit(ev, fn)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".) * `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`. * `collect()` - Return a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data. * `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data. * `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying. * `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered. ### Properties * `bufferLength` Read-only. 
Total number of bytes buffered, or in the case of objectMode, the total number of objects. * `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.) * `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted. * `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediateyl re-emit it if it has already been emitted. * `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` * `readable` Whether the stream is readable. Default `true`. * `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.) * `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.) * `destroyed` A getter that indicates whether the stream was destroyed. * `paused` True if the stream has been explicitly paused, otherwise false. * `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`. ### Events * `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream. * `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted. * `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`. * `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`. * `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise. * `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream. * `readable` Emitted when data is buffered and ready to be read by a consumer. * `resume` Emitted when stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.) ### Static Methods * `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and false otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.) ## EXAMPLES Here are some examples of things you can do with Minipass streams. ### simple "are you done yet" promise ```js mp.promise().then(() => { // stream is finished }, er => { // stream emitted an error }) ``` ### collecting ```js mp.collect().then(all => { // all is an array of all the data emitted // encoding is supported in this case, so // so the result will be a collection of strings if // an encoding is specified, or buffers/objects if not. 
// // In an async function, you may do // const data = await stream.collect() }) ``` ### collecting into a single blob This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way: ```js mp.concat().then(onebigchunk => { // onebigchunk is a string if the stream // had an encoding set, or a buffer otherwise. }) ``` ### iteration You can iterate over streams synchronously or asynchronously in platforms that support it. Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration. To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option. ```js const mp = new Minipass({ objectMode: true }) mp.write('a') mp.write('b') for (let letter of mp) { console.log(letter) // a, b } mp.write('c') mp.write('d') for (let letter of mp) { console.log(letter) // c, d } mp.write('e') mp.end() for (let letter of mp) { console.log(letter) // e } for (let letter of mp) { console.log(letter) // nothing } ``` Asynchronous iteration will continue until the end event is reached, consuming all of the data. ```js const mp = new Minipass({ encoding: 'utf8' }) // some source of some data let i = 5 const inter = setInterval(() => { if (i-- > 0) mp.write(Buffer.from('foo\n', 'utf8')) else { mp.end() clearInterval(inter) } }, 100) // consume the data with asynchronous iteration async function consume () { for await (let chunk of mp) { console.log(chunk) } return 'ok' } consume().then(res => console.log(res)) // logs `foo\n` 5 times, and then `ok` ``` ### subclass that `console.log()`s everything written into it ```js class Logger extends Minipass { write (chunk, encoding, callback) { console.log('WRITE', chunk, encoding) return super.write(chunk, encoding, callback) } end (chunk, encoding, callback) { console.log('END', chunk, encoding) return super.end(chunk, encoding, callback) } } someSource.pipe(new Logger()).pipe(someDest) ``` ### same thing, but using an inline anonymous class ```js // js classes are fun someSource .pipe(new (class extends Minipass { emit (ev, ...data) { // let's also log events, because debugging some weird thing console.log('EMIT', ev) return super.emit(ev, ...data) } write (chunk, encoding, callback) { console.log('WRITE', chunk, encoding) return super.write(chunk, encoding, callback) } end (chunk, encoding, callback) { console.log('END', chunk, encoding) return super.end(chunk, encoding, callback) } })) .pipe(someDest) ``` ### subclass that defers 'end' for some reason ```js class SlowEnd extends Minipass { emit (ev, ...args) { if (ev === 'end') { console.log('going to end, hold on a sec') setTimeout(() => { console.log('ok, ready to end now') super.emit('end', ...args) }, 100) } else { return super.emit(ev, ...args) } } } ``` ### transform that creates newline-delimited JSON ```js class NDJSONEncode extends Minipass { write (obj, cb) { try { // JSON.stringify can throw, emit an error on that return super.write(JSON.stringify(obj) + '\n', 'utf8', cb) } catch (er) { this.emit('error', er) } } end (obj, cb) { if (typeof obj === 'function') { cb = obj obj = undefined } if (obj !== undefined) { this.write(obj) } return super.end(cb) } } ``` ### transform that parses 
newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor (options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write (chunk, encoding, cb) {
    if (typeof chunk === 'string' &&
        typeof encoding === 'string' &&
        encoding !== 'utf8') {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      try {
        // JSON.parse can throw, emit an error on that
        super.write(JSON.parse(jsonData[i]))
      } catch (er) {
        this.emit('error', er)
        continue
      }
    }
    if (cb) cb()
  }
}
```

<p align="center"> <a href="https://assemblyscript.org" target="_blank" rel="noopener"><img width="100" src="https://avatars1.githubusercontent.com/u/28916798?s=200&v=4" alt="AssemblyScript logo"></a> </p>

<p align="center"> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3ATest"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Test/master?label=test&logo=github" alt="Test status" /></a> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3APublish"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Publish/master?label=publish&logo=github" alt="Publish status" /></a> <a href="https://www.npmjs.com/package/assemblyscript"><img src="https://img.shields.io/npm/v/assemblyscript.svg?label=compiler&color=007acc&logo=npm" alt="npm compiler version" /></a> <a href="https://www.npmjs.com/package/@assemblyscript/loader"><img src="https://img.shields.io/npm/v/@assemblyscript/loader.svg?label=loader&color=007acc&logo=npm" alt="npm loader version" /></a> <a href="https://discord.gg/assemblyscript"><img src="https://img.shields.io/discord/721472913886281818.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2" alt="Discord online" /></a> </p>

<p align="justify"><strong>AssemblyScript</strong> compiles a strict variant of <a href="http://www.typescriptlang.org">TypeScript</a> (basically JavaScript with types) to <a href="http://webassembly.org">WebAssembly</a> using <a href="https://github.com/WebAssembly/binaryen">Binaryen</a>. It generates lean and mean WebAssembly modules while being just an <code>npm install</code> away.</p>

<h3 align="center"> <a href="https://assemblyscript.org">About</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/introduction.html">Introduction</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/quick-start.html">Quick&nbsp;start</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/examples.html">Examples</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/development.html">Development&nbsp;instructions</a> </h3>

<br>

<h2 align="center">Contributors</h2>

<p align="center"> <a href="https://assemblyscript.org/#contributors"><img src="https://assemblyscript.org/contributors.svg" alt="Contributor logos" width="720" /></a> </p>

<h2 align="center">Thanks to our sponsors!</h2>

<p align="justify">Most of the core team members and most contributors do this open source work in their free time.
If you use AssemblyScript for a serious task or plan to do so, and you'd like us to invest more time on it, <a href="https://opencollective.com/assemblyscript/donate" target="_blank" rel="noopener">please donate</a> to our <a href="https://opencollective.com/assemblyscript" target="_blank" rel="noopener">OpenCollective</a>. By sponsoring this project, your logo will show up below. Thank you so much for your support!</p> <p align="center"> <a href="https://assemblyscript.org/#sponsors"><img src="https://assemblyscript.org/sponsors.svg" alt="Sponsor logos" width="720" /></a> </p> # axios // core The modules found in `core/` should be modules that are specific to the domain logic of axios. These modules would most likely not make sense to be consumed outside of the axios module, as their logic is too specific. Some examples of core modules are: - Dispatching requests - Managing interceptors - Handling config # cliui ![ci](https://github.com/yargs/cliui/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/cliui) easily create complex multi-column command-line-interfaces. ## Example ```js const ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` ## Deno/ESM Support As of `v7` `cliui` supports [Deno](https://github.com/denoland/deno) and [ESM](https://nodejs.org/api/esm.html#esm_ecmascript_modules): ```typescript import cliui from "https://deno.land/x/cliui/deno.ts"; const ui = cliui({}) ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div({ text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? 
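As a small illustration of those column options (this snippet is ours, not from the cliui docs; the option text and widths are arbitrary), several of them can be combined in a single row:

```js
const ui = require('cliui')({ width: 60 })

ui.div(
  { text: '--output <dir>', width: 22, padding: [0, 2, 0, 2] }, // padded left column
  { text: 'where generated files are written', width: 28 },     // wrapped description
  { text: '[string]', align: 'right' }                          // right-aligned type hint
)

console.log(ui.toString())
```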
### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. ![](cow.png) Moo! ==== Moo is a highly-optimised tokenizer/lexer generator. Use it to tokenize your strings, before parsing 'em with a parser like [nearley](https://github.com/hardmath123/nearley) or whatever else you're into. * [Fast](#is-it-fast) * [Convenient](#usage) * uses [Regular Expressions](#on-regular-expressions) * tracks [Line Numbers](#line-numbers) * handles [Keywords](#keywords) * supports [States](#states) * custom [Errors](#errors) * is even [Iterable](#iteration) * has no dependencies * 4KB minified + gzipped * Moo! Is it fast? ----------- Yup! Flying-cows-and-singed-steak fast. Moo is the fastest JS tokenizer around. It's **~2–10x** faster than most other tokenizers; it's a **couple orders of magnitude** faster than some of the slower ones. Define your tokens **using regular expressions**. Moo will compile 'em down to a **single RegExp for performance**. It uses the new ES6 **sticky flag** where possible to make things faster; otherwise it falls back to an almost-as-efficient workaround. (For more than you ever wanted to know about this, read [adventures in the land of substrings and RegExps](http://mrale.ph/blog/2016/11/23/making-less-dart-faster.html).) You _might_ be able to go faster still by writing your lexer by hand rather than using RegExps, but that's icky. Oh, and it [avoids parsing RegExps by itself](https://hackernoon.com/the-madness-of-parsing-real-world-javascript-regexps-d9ee336df983#.2l8qu3l76). Because that would be horrible. Usage ----- First, you need to do the needful: `$ npm install moo`, or whatever will ship this code to your computer. Alternatively, grab the `moo.js` file by itself and slap it into your web page via a `<script>` tag; moo is completely standalone. Then you can start roasting your very own lexer/tokenizer: ```js const moo = require('moo') let lexer = moo.compile({ WS: /[ \t]+/, comment: /\/\/.*?$/, number: /0|[1-9][0-9]*/, string: /"(?:\\["\\]|[^\n"\\])*"/, lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], NL: { match: /\n/, lineBreaks: true }, }) ``` And now throw some text at it: ```js lexer.reset('while (10) cows\nmoo') lexer.next() // -> { type: 'keyword', value: 'while' } lexer.next() // -> { type: 'WS', value: ' ' } lexer.next() // -> { type: 'lparen', value: '(' } lexer.next() // -> { type: 'number', value: '10' } // ... ``` When you reach the end of Moo's internal buffer, next() will return `undefined`. You can always `reset()` it and feed it more data when that happens. On Regular Expressions ---------------------- RegExps are nifty for making tokenizers, but they can be a bit of a pain. Here are some things to be aware of: * You often want to use **non-greedy quantifiers**: e.g. `*?` instead of `*`. Otherwise your tokens will be longer than you expect: ```js let lexer = moo.compile({ string: /".*"/, // greedy quantifier * // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo" "bar' } ``` Better: ```js let lexer = moo.compile({ string: /".*?"/, // non-greedy quantifier *? // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo' } lexer.next() // -> { type: 'space', value: ' ' } lexer.next() // -> { type: 'string', value: 'bar' } ``` * The **order of your rules** matters. 
Earlier ones will take precedence. ```js moo.compile({ identifier: /[a-z0-9]+/, number: /[0-9]+/, }).reset('42').next() // -> { type: 'identifier', value: '42' } moo.compile({ number: /[0-9]+/, identifier: /[a-z0-9]+/, }).reset('42').next() // -> { type: 'number', value: '42' } ``` * Moo uses **multiline RegExps**. This has a few quirks: for example, the **dot `/./` doesn't include newlines**. Use `[^]` instead if you want to match newlines too. * Since an excluding character ranges like `/[^ ]/` (which matches anything but a space) _will_ include newlines, you have to be careful not to include them by accident! In particular, the whitespace metacharacter `\s` includes newlines. Line Numbers ------------ Moo tracks detailed information about the input for you. It will track line numbers, as long as you **apply the `lineBreaks: true` option to any rules which might contain newlines**. Moo will try to warn you if you forget to do this. Note that this is `false` by default, for performance reasons: counting the number of lines in a matched token has a small cost. For optimal performance, only match newlines inside a dedicated token: ```js newline: {match: '\n', lineBreaks: true}, ``` ### Token Info ### Token objects (returned from `next()`) have the following attributes: * **`type`**: the name of the group, as passed to compile. * **`text`**: the string that was matched. * **`value`**: the string that was matched, transformed by your `value` function (if any). * **`offset`**: the number of bytes from the start of the buffer where the match starts. * **`lineBreaks`**: the number of line breaks found in the match. (Always zero if this rule has `lineBreaks: false`.) * **`line`**: the line number of the beginning of the match, starting from 1. * **`col`**: the column where the match begins, starting from 1. ### Value vs. Text ### The `value` is the same as the `text`, unless you provide a [value transform](#transform). ```js const moo = require('moo') const lexer = moo.compile({ ws: /[ \t]+/, string: {match: /"(?:\\["\\]|[^\n"\\])*"/, value: s => s.slice(1, -1)}, }) lexer.reset('"test"') lexer.next() /* { value: 'test', text: '"test"', ... } */ ``` ### Reset ### Calling `reset()` on your lexer will empty its internal buffer, and set the line, column, and offset counts back to their initial value. If you don't want this, you can `save()` the state, and later pass it as the second argument to `reset()` to explicitly control the internal state of the lexer. ```js    lexer.reset('some line\n') let info = lexer.save() // -> { line: 10 } lexer.next() // -> { line: 10 } lexer.next() // -> { line: 11 } // ... lexer.reset('a different line\n', info) lexer.next() // -> { line: 10 } ``` Keywords -------- Moo makes it convenient to define literals. ```js moo.compile({ lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], }) ``` It'll automatically compile them into regular expressions, escaping them where necessary. **Keywords** should be written using the `keywords` transform. ```js moo.compile({ IDEN: {match: /[a-zA-Z]+/, type: moo.keywords({ KW: ['while', 'if', 'else', 'moo', 'cows'], })}, SPACE: {match: /\s+/, lineBreaks: true}, }) ``` ### Why? ### You need to do this to ensure the **longest match** principle applies, even in edge cases. Imagine trying to parse the input `className` with the following rules: ```js keyword: ['class'], identifier: /[a-zA-Z]+/, ``` You'll get _two_ tokens — `['class', 'Name']` -- which is _not_ what you want! 
If you swap the order of the rules, you'll fix this example; but now you'll lex `class` wrong (as an `identifier`). The keywords helper checks matches against the list of keywords; if any of them match, it uses the type `'keyword'` instead of `'identifier'` (for this example). ### Keyword Types ### Keywords can also have **individual types**. ```js let lexer = moo.compile({ name: {match: /[a-zA-Z]+/, type: moo.keywords({ 'kw-class': 'class', 'kw-def': 'def', 'kw-if': 'if', })}, // ... }) lexer.reset('def foo') lexer.next() // -> { type: 'kw-def', value: 'def' } lexer.next() // space lexer.next() // -> { type: 'name', value: 'foo' } ``` You can use [itt](https://github.com/nathan/itt)'s iterator adapters to make constructing keyword objects easier: ```js itt(['class', 'def', 'if']) .map(k => ['kw-' + k, k]) .toObject() ``` States ------ Moo allows you to define multiple lexer **states**. Each state defines its own separate set of token rules. Your lexer will start off in the first state given to `moo.states({})`. Rules can be annotated with `next`, `push`, and `pop`, to change the current state after that token is matched. A "stack" of past states is kept, which is used by `push` and `pop`. * **`next: 'bar'`** moves to the state named `bar`. (The stack is not changed.) * **`push: 'bar'`** moves to the state named `bar`, and pushes the old state onto the stack. * **`pop: 1`** removes one state from the top of the stack, and moves to that state. (Only `1` is supported.) Only rules from the current state can be matched. You need to copy your rule into all the states you want it to be matched in. For example, to tokenize JS-style string interpolation such as `a${{c: d}}e`, you might use: ```js let lexer = moo.states({ main: { strstart: {match: '`', push: 'lit'}, ident: /\w+/, lbrace: {match: '{', push: 'main'}, rbrace: {match: '}', pop: true}, colon: ':', space: {match: /\s+/, lineBreaks: true}, }, lit: { interp: {match: '${', push: 'main'}, escape: /\\./, strend: {match: '`', pop: true}, const: {match: /(?:[^$`]|\$(?!\{))+/, lineBreaks: true}, }, }) // <= `a${{c: d}}e` // => strstart const interp lbrace ident colon space ident rbrace rbrace const strend ``` The `rbrace` rule is annotated with `pop`, so it moves from the `main` state into either `lit` or `main`, depending on the stack. Errors ------ If none of your rules match, Moo will throw an Error; since it doesn't know what else to do. If you prefer, you can have moo return an error token instead of throwing an exception. The error token will contain the whole of the rest of the buffer. ```js moo.compile({ // ... myError: moo.error, }) moo.reset('invalid') moo.next() // -> { type: 'myError', value: 'invalid', text: 'invalid', offset: 0, lineBreaks: 0, line: 1, col: 1 } moo.next() // -> undefined ``` You can have a token type that both matches tokens _and_ contains error values. ```js moo.compile({ // ... myError: {match: /[\$?`]/, error: true}, }) ``` ### Formatting errors ### If you want to throw an error from your parser, you might find `formatError` helpful. Call it with the offending token: ```js throw new Error(lexer.formatError(token, "invalid syntax")) ``` It returns a string with a pretty error message. ``` Error: invalid syntax at line 2 col 15: totally valid `syntax` ^ ``` Iteration --------- Iterators: we got 'em. ```js for (let here of lexer) { // here = { type: 'number', value: '123', ... } } ``` Create an array of tokens. 
```js
let tokens = Array.from(lexer);
```

Use [itt](https://github.com/nathan/itt)'s iteration tools with Moo.

```js
for (let [here, next] of itt(lexer).lookahead()) { // pass a number if you need more tokens
  // enjoy!
}
```

Transform
---------

Moo doesn't allow capturing groups, but you can supply a transform function, `value()`, which will be called on the value before storing it in the Token object.

```js
moo.compile({
  STRING: [
    {match: /"""[^]*?"""/, lineBreaks: true, value: x => x.slice(3, -3)},
    {match: /"(?:\\["\\rn]|[^"\\])*?"/, lineBreaks: true, value: x => x.slice(1, -1)},
    {match: /'(?:\\['\\rn]|[^'\\])*?'/, lineBreaks: true, value: x => x.slice(1, -1)},
  ],
  // ...
})
```

Contributing
------------

Do check the [FAQ](https://github.com/tjvr/moo/issues?q=label%3Aquestion).

Before submitting an issue, [remember...](https://github.com/tjvr/moo/blob/master/.github/CONTRIBUTING.md)

# once

Only call a function once.

## usage

```javascript
var once = require('once')

function load (file, cb) {
  cb = once(cb)
  loader.load('file')
  loader.once('load', cb)
  loader.once('error', cb)
}
```

Or add to the Function.prototype in a responsible way:

```javascript
// only has to be done once
require('once').proto()

function load (file, cb) {
  cb = cb.once()
  loader.load('file')
  loader.once('load', cb)
  loader.once('error', cb)
}
```

Ironically, the prototype feature makes this module twice as complicated as necessary.

To check whether your function has been called, use `fn.called`. Once the function is called for the first time, the return value of the original function is saved in `fn.value`, and subsequent calls will continue to return this value.

```javascript
var once = require('once')

function load (cb) {
  cb = once(cb)
  var stream = createStream()
  stream.once('data', cb)
  stream.once('end', function () {
    if (!cb.called) cb(new Error('not found'))
  })
}
```

## `once.strict(func)`

Throw an error if the function is called twice.

Some functions are expected to be called only once. Using `once` for them would potentially hide logical errors.

In the example below, the `greet` function has to call the callback only once:

```javascript
function greet (name, cb) {
  // return is missing from the if statement
  // when no name is passed, the callback is called twice
  if (!name) cb('Hello anonymous')
  cb('Hello ' + name)
}

function log (msg) {
  console.log(msg)
}

// this will print 'Hello anonymous' but the logical error will be missed
greet(null, once(log))

// once.strict will print 'Hello anonymous' and throw an error when the callback is called the second time
greet(null, once.strict(log))
```

[![NPM version](https://img.shields.io/npm/v/esprima.svg)](https://www.npmjs.com/package/esprima) [![npm download](https://img.shields.io/npm/dm/esprima.svg)](https://www.npmjs.com/package/esprima) [![Build Status](https://img.shields.io/travis/jquery/esprima/master.svg)](https://travis-ci.org/jquery/esprima) [![Coverage Status](https://img.shields.io/codecov/c/github/jquery/esprima/master.svg)](https://codecov.io/github/jquery/esprima)

**Esprima** ([esprima.org](http://esprima.org), BSD license) is a high performance, standard-compliant [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) parser written in ECMAScript (also popularly known as [JavaScript](https://en.wikipedia.org/wiki/JavaScript)). Esprima is created and maintained by [Ariya Hidayat](https://twitter.com/ariyahidayat), with the help of [many contributors](https://github.com/jquery/esprima/contributors).
### Features - Full support for ECMAScript 2017 ([ECMA-262 8th Edition](http://www.ecma-international.org/publications/standards/Ecma-262.htm)) - Sensible [syntax tree format](https://github.com/estree/estree/blob/master/es5.md) as standardized by [ESTree project](https://github.com/estree/estree) - Experimental support for [JSX](https://facebook.github.io/jsx/), a syntax extension for [React](https://facebook.github.io/react/) - Optional tracking of syntax node location (index-based and line-column) - [Heavily tested](http://esprima.org/test/ci.html) (~1500 [unit tests](https://github.com/jquery/esprima/tree/master/test/fixtures) with [full code coverage](https://codecov.io/github/jquery/esprima)) ### API Esprima can be used to perform [lexical analysis](https://en.wikipedia.org/wiki/Lexical_analysis) (tokenization) or [syntactic analysis](https://en.wikipedia.org/wiki/Parsing) (parsing) of a JavaScript program. A simple example on Node.js REPL: ```javascript > var esprima = require('esprima'); > var program = 'const answer = 42'; > esprima.tokenize(program); [ { type: 'Keyword', value: 'const' }, { type: 'Identifier', value: 'answer' }, { type: 'Punctuator', value: '=' }, { type: 'Numeric', value: '42' } ] > esprima.parseScript(program); { type: 'Program', body: [ { type: 'VariableDeclaration', declarations: [Object], kind: 'const' } ], sourceType: 'script' } ``` For more information, please read the [complete documentation](http://esprima.org/doc). # isobject [![NPM version](https://img.shields.io/npm/v/isobject.svg?style=flat)](https://www.npmjs.com/package/isobject) [![NPM downloads](https://img.shields.io/npm/dm/isobject.svg?style=flat)](https://npmjs.org/package/isobject) [![Build Status](https://img.shields.io/travis/jonschlinkert/isobject.svg?style=flat)](https://travis-ci.org/jonschlinkert/isobject) Returns true if the value is an object and not an array or null. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install isobject --save ``` Use [is-plain-object](https://github.com/jonschlinkert/is-plain-object) if you want only objects that are created by the `Object` constructor. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install isobject ``` Install with [bower](http://bower.io/) ```sh $ bower install isobject ``` ## Usage ```js var isObject = require('isobject'); ``` **True** All of the following return `true`: ```js isObject({}); isObject(Object.create({})); isObject(Object.create(Object.prototype)); isObject(Object.create(null)); isObject({}); isObject(new Foo); isObject(/foo/); ``` **False** All of the following return `false`: ```js isObject(); isObject(function () {}); isObject(1); isObject([]); isObject(undefined); isObject(null); ``` ## Related projects You might also be interested in these projects: [merge-deep](https://www.npmjs.com/package/merge-deep): Recursively merge values in a javascript object. | [homepage](https://github.com/jonschlinkert/merge-deep) * [extend-shallow](https://www.npmjs.com/package/extend-shallow): Extend an object with the properties of additional objects. node.js/javascript util. | [homepage](https://github.com/jonschlinkert/extend-shallow) * [is-plain-object](https://www.npmjs.com/package/is-plain-object): Returns true if an object was created by the `Object` constructor. | [homepage](https://github.com/jonschlinkert/is-plain-object) * [kind-of](https://www.npmjs.com/package/kind-of): Get the native type of a value. 
| [homepage](https://github.com/jonschlinkert/kind-of) ## Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](https://github.com/jonschlinkert/isobject/issues/new). ## Building docs Generate readme and API documentation with [verb](https://github.com/verbose/verb): ```sh $ npm install verb && npm run docs ``` Or, if [verb](https://github.com/verbose/verb) is installed globally: ```sh $ verb ``` ## Running tests Install dev dependencies: ```sh $ npm install -d && npm test ``` ## Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](http://twitter.com/jonschlinkert) ## License Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT license](https://github.com/jonschlinkert/isobject/blob/master/LICENSE). *** _This file was generated by [verb](https://github.com/verbose/verb), v0.9.0, on April 25, 2016._ # fast-levenshtein - Levenshtein algorithm in Javascript [![Build Status](https://secure.travis-ci.org/hiddentao/fast-levenshtein.png)](http://travis-ci.org/hiddentao/fast-levenshtein) [![NPM module](https://badge.fury.io/js/fast-levenshtein.png)](https://badge.fury.io/js/fast-levenshtein) [![NPM downloads](https://img.shields.io/npm/dm/fast-levenshtein.svg?maxAge=2592000)](https://www.npmjs.com/package/fast-levenshtein) [![Follow on Twitter](https://img.shields.io/twitter/url/http/shields.io.svg?style=social&label=Follow&maxAge=2592000)](https://twitter.com/hiddentao) An efficient Javascript implementation of the [Levenshtein algorithm](http://en.wikipedia.org/wiki/Levenshtein_distance) with locale-specific collator support. ## Features * Works in node.js and in the browser. * Better performance than other implementations by not needing to store the whole matrix ([more info](http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm)). * Locale-sensitive string comparisions if needed. * Comprehensive test suite and performance benchmark. * Small: <1 KB minified and gzipped ## Installation ### node.js Install using [npm](http://npmjs.org/): ```bash $ npm install fast-levenshtein ``` ### Browser Using bower: ```bash $ bower install fast-levenshtein ``` If you are not using any module loader system then the API will then be accessible via the `window.Levenshtein` object. ## Examples **Default usage** ```javascript var levenshtein = require('fast-levenshtein'); var distance = levenshtein.get('back', 'book'); // 2 var distance = levenshtein.get('我愛你', '我叫你'); // 1 ``` **Locale-sensitive string comparisons** It supports using [Intl.Collator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Collator) for locale-sensitive string comparisons: ```javascript var levenshtein = require('fast-levenshtein'); levenshtein.get('mikailovitch', 'Mikhaïlovitch', { useCollator: true}); // 1 ``` ## Building and Testing To build the code and run the tests: ```bash $ npm install -g grunt-cli $ npm install $ npm run build ``` ## Performance _Thanks to [Titus Wormer](https://github.com/wooorm) for [encouraging me](https://github.com/hiddentao/fast-levenshtein/issues/1) to do this._ Benchmarked against other node.js levenshtein distance modules (on Macbook Air 2012, Core i7, 8GB RAM): ```bash Running suite Implementation comparison [benchmark/speed.js]... 
>> levenshtein-edit-distance x 234 ops/sec ±3.02% (73 runs sampled) >> levenshtein-component x 422 ops/sec ±4.38% (83 runs sampled) >> levenshtein-deltas x 283 ops/sec ±3.83% (78 runs sampled) >> natural x 255 ops/sec ±0.76% (88 runs sampled) >> levenshtein x 180 ops/sec ±3.55% (86 runs sampled) >> fast-levenshtein x 1,792 ops/sec ±2.72% (95 runs sampled) Benchmark done. Fastest test is fast-levenshtein at 4.2x faster than levenshtein-component ``` You can run this benchmark yourself by doing: ```bash $ npm install $ npm run build $ npm run benchmark ``` ## Contributing If you wish to submit a pull request please update and/or create new tests for any changes you make and ensure the grunt build passes. See [CONTRIBUTING.md](https://github.com/hiddentao/fast-levenshtein/blob/master/CONTRIBUTING.md) for details. ## License MIT - see [LICENSE.md](https://github.com/hiddentao/fast-levenshtein/blob/master/LICENSE.md) # ShellJS - Unix shell commands for Node.js [![Travis](https://img.shields.io/travis/shelljs/shelljs/master.svg?style=flat-square&label=unix)](https://travis-ci.org/shelljs/shelljs) [![AppVeyor](https://img.shields.io/appveyor/ci/shelljs/shelljs/master.svg?style=flat-square&label=windows)](https://ci.appveyor.com/project/shelljs/shelljs/branch/master) [![Codecov](https://img.shields.io/codecov/c/github/shelljs/shelljs/master.svg?style=flat-square&label=coverage)](https://codecov.io/gh/shelljs/shelljs) [![npm version](https://img.shields.io/npm/v/shelljs.svg?style=flat-square)](https://www.npmjs.com/package/shelljs) [![npm downloads](https://img.shields.io/npm/dm/shelljs.svg?style=flat-square)](https://www.npmjs.com/package/shelljs) ShellJS is a portable **(Windows/Linux/OS X)** implementation of Unix shell commands on top of the Node.js API. You can use it to eliminate your shell script's dependency on Unix while still keeping its familiar and powerful commands. You can also install it globally so you can run it from outside Node projects - say goodbye to those gnarly Bash scripts! ShellJS is proudly tested on every node release since `v4`! The project is [unit-tested](http://travis-ci.org/shelljs/shelljs) and battle-tested in projects like: + [Firebug](http://getfirebug.com/) - Firefox's infamous debugger + [JSHint](http://jshint.com) & [ESLint](http://eslint.org/) - popular JavaScript linters + [Zepto](http://zeptojs.com) - jQuery-compatible JavaScript library for modern browsers + [Yeoman](http://yeoman.io/) - Web application stack and development tool + [Deployd.com](http://deployd.com) - Open source PaaS for quick API backend generation + And [many more](https://npmjs.org/browse/depended/shelljs). If you have feedback, suggestions, or need help, feel free to post in our [issue tracker](https://github.com/shelljs/shelljs/issues). Think ShellJS is cool? Check out some related projects in our [Wiki page](https://github.com/shelljs/shelljs/wiki)! Upgrading from an older version? Check out our [breaking changes](https://github.com/shelljs/shelljs/wiki/Breaking-Changes) page to see what changes to watch out for while upgrading. ## Command line use If you just want cross platform UNIX commands, checkout our new project [shelljs/shx](https://github.com/shelljs/shx), a utility to expose `shelljs` to the command line. For example: ``` $ shx mkdir -p foo $ shx touch foo/bar.txt $ shx rm -rf foo ``` ## Plugin API ShellJS now supports third-party plugins! 
You can learn more about using plugins and writing your own ShellJS commands in [the wiki](https://github.com/shelljs/shelljs/wiki/Using-ShellJS-Plugins). ## A quick note about the docs For documentation on all the latest features, check out our [README](https://github.com/shelljs/shelljs). To read docs that are consistent with the latest release, check out [the npm page](https://www.npmjs.com/package/shelljs) or [shelljs.org](http://documentup.com/shelljs/shelljs). ## Installing Via npm: ```bash $ npm install [-g] shelljs ``` ## Examples ```javascript var shell = require('shelljs'); if (!shell.which('git')) { shell.echo('Sorry, this script requires git'); shell.exit(1); } // Copy files to release dir shell.rm('-rf', 'out/Release'); shell.cp('-R', 'stuff/', 'out/Release'); // Replace macros in each .js file shell.cd('lib'); shell.ls('*.js').forEach(function (file) { shell.sed('-i', 'BUILD_VERSION', 'v0.1.2', file); shell.sed('-i', /^.*REMOVE_THIS_LINE.*$/, '', file); shell.sed('-i', /.*REPLACE_LINE_WITH_MACRO.*\n/, shell.cat('macro.js'), file); }); shell.cd('..'); // Run external tool synchronously if (shell.exec('git commit -am "Auto-commit"').code !== 0) { shell.echo('Error: Git commit failed'); shell.exit(1); } ``` ## Exclude options If you need to pass a parameter that looks like an option, you can do so like: ```js shell.grep('--', '-v', 'path/to/file'); // Search for "-v", no grep options shell.cp('-R', '-dir', 'outdir'); // If already using an option, you're done ``` ## Global vs. Local We no longer recommend using a global-import for ShellJS (i.e. `require('shelljs/global')`). While still supported for convenience, this pollutes the global namespace, and should therefore only be used with caution. Instead, we recommend a local import (standard for npm packages): ```javascript var shell = require('shelljs'); shell.echo('hello world'); ``` <!-- DO NOT MODIFY BEYOND THIS POINT - IT'S AUTOMATICALLY GENERATED --> ## Command reference All commands run synchronously, unless otherwise stated. All commands accept standard bash globbing characters (`*`, `?`, etc.), compatible with the [node `glob` module](https://github.com/isaacs/node-glob). For less-commonly used commands and features, please check out our [wiki page](https://github.com/shelljs/shelljs/wiki). ### cat([options,] file [, file ...]) ### cat([options,] file_array) Available options: + `-n`: number all output lines Examples: ```javascript var str = cat('file*.txt'); var str = cat('file1', 'file2'); var str = cat(['file1', 'file2']); // same as above ``` Returns a string containing the given file, or a concatenated string containing the files if more than one file is given (a new line character is introduced between each file). ### cd([dir]) Changes to directory `dir` for the duration of the script. Changes to home directory if no argument is supplied. ### chmod([options,] octal_mode || octal_string, file) ### chmod([options,] symbolic_mode, file) Available options: + `-v`: output a diagnostic for every file processed + `-c`: like verbose, but report only when a change is made + `-R`: change files and directories recursively Examples: ```javascript chmod(755, '/Users/brandon'); chmod('755', '/Users/brandon'); // same as above chmod('u+x', '/Users/brandon'); chmod('-R', 'a-w', '/Users/brandon'); ``` Alters the permissions of a file or directory by either specifying the absolute permissions in octal form or expressing the changes in symbols. This command tries to mimic the POSIX behavior as much as possible. 
Notable exceptions: + In symbolic modes, `a-r` and `-r` are identical. No consideration is given to the `umask`. + There is no "quiet" option, since default behavior is to run silent. ### cp([options,] source [, source ...], dest) ### cp([options,] source_array, dest) Available options: + `-f`: force (default behavior) + `-n`: no-clobber + `-u`: only copy if `source` is newer than `dest` + `-r`, `-R`: recursive + `-L`: follow symlinks + `-P`: don't follow symlinks Examples: ```javascript cp('file1', 'dir1'); cp('-R', 'path/to/dir/', '~/newCopy/'); cp('-Rf', '/tmp/*', '/usr/local/*', '/home/tmp'); cp('-Rf', ['/tmp/*', '/usr/local/*'], '/home/tmp'); // same as above ``` Copies files. ### pushd([options,] [dir | '-N' | '+N']) Available options: + `-n`: Suppresses the normal change of directory when adding directories to the stack, so that only the stack is manipulated. + `-q`: Supresses output to the console. Arguments: + `dir`: Sets the current working directory to the top of the stack, then executes the equivalent of `cd dir`. + `+N`: Brings the Nth directory (counting from the left of the list printed by dirs, starting with zero) to the top of the list by rotating the stack. + `-N`: Brings the Nth directory (counting from the right of the list printed by dirs, starting with zero) to the top of the list by rotating the stack. Examples: ```javascript // process.cwd() === '/usr' pushd('/etc'); // Returns /etc /usr pushd('+1'); // Returns /usr /etc ``` Save the current directory on the top of the directory stack and then `cd` to `dir`. With no arguments, `pushd` exchanges the top two directories. Returns an array of paths in the stack. ### popd([options,] ['-N' | '+N']) Available options: + `-n`: Suppress the normal directory change when removing directories from the stack, so that only the stack is manipulated. + `-q`: Supresses output to the console. Arguments: + `+N`: Removes the Nth directory (counting from the left of the list printed by dirs), starting with zero. + `-N`: Removes the Nth directory (counting from the right of the list printed by dirs), starting with zero. Examples: ```javascript echo(process.cwd()); // '/usr' pushd('/etc'); // '/etc /usr' echo(process.cwd()); // '/etc' popd(); // '/usr' echo(process.cwd()); // '/usr' ``` When no arguments are given, `popd` removes the top directory from the stack and performs a `cd` to the new top directory. The elements are numbered from 0, starting at the first directory listed with dirs (i.e., `popd` is equivalent to `popd +0`). Returns an array of paths in the stack. ### dirs([options | '+N' | '-N']) Available options: + `-c`: Clears the directory stack by deleting all of the elements. + `-q`: Supresses output to the console. Arguments: + `+N`: Displays the Nth directory (counting from the left of the list printed by dirs when invoked without options), starting with zero. + `-N`: Displays the Nth directory (counting from the right of the list printed by dirs when invoked without options), starting with zero. Display the list of currently remembered directories. Returns an array of paths in the stack, or a single path if `+N` or `-N` was specified. See also: `pushd`, `popd` ### echo([options,] string [, string ...]) Available options: + `-e`: interpret backslash escapes (default) + `-n`: remove trailing newline from output Examples: ```javascript echo('hello world'); var str = echo('hello world'); echo('-n', 'no newline at end'); ``` Prints `string` to stdout, and returns string with additional utility methods like `.to()`. 
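Since `echo()` returns a `ShellString`, its output can be redirected with the `.to()` and `.toEnd()` helpers documented further below. Here is a minimal, untested sketch that combines it with the directory-stack commands above; the `/tmp` path and `report.txt` filename are only placeholders:

```javascript
var shell = require('shelljs');

// Work in a scratch directory, then return to where we started.
shell.pushd('/tmp');                               // push current dir onto the stack, cd to /tmp
shell.echo('-n', 'generated: ').to('report.txt');  // echo() returns a ShellString, so .to() works
shell.echo(new Date().toISOString()).toEnd('report.txt');
shell.popd();                                      // back to the original directory
```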
### exec(command [, options] [, callback]) Available options: + `async`: Asynchronous execution. If a callback is provided, it will be set to `true`, regardless of the passed value (default: `false`). + `silent`: Do not echo program output to console (default: `false`). + `encoding`: Character encoding to use. Affects the values returned to stdout and stderr, and what is written to stdout and stderr when not in silent mode (default: `'utf8'`). + and any option available to Node.js's [`child_process.exec()`](https://nodejs.org/api/child_process.html#child_process_child_process_exec_command_options_callback) Examples: ```javascript var version = exec('node --version', {silent:true}).stdout; var child = exec('some_long_running_process', {async:true}); child.stdout.on('data', function(data) { /* ... do something with data ... */ }); exec('some_long_running_process', function(code, stdout, stderr) { console.log('Exit code:', code); console.log('Program output:', stdout); console.log('Program stderr:', stderr); }); ``` Executes the given `command` _synchronously_, unless otherwise specified. When in synchronous mode, this returns a `ShellString` (compatible with ShellJS v0.6.x, which returns an object of the form `{ code:..., stdout:... , stderr:... }`). Otherwise, this returns the child process object, and the `callback` receives the arguments `(code, stdout, stderr)`. Not seeing the behavior you want? `exec()` runs everything through `sh` by default (or `cmd.exe` on Windows), which differs from `bash`. If you need bash-specific behavior, try out the `{shell: 'path/to/bash'}` option. ### find(path [, path ...]) ### find(path_array) Examples: ```javascript find('src', 'lib'); find(['src', 'lib']); // same as above find('.').filter(function(file) { return file.match(/\.js$/); }); ``` Returns array of all files (however deep) in the given paths. The main difference from `ls('-R', path)` is that the resulting file names include the base directories (e.g., `lib/resources/file1` instead of just `file1`). ### grep([options,] regex_filter, file [, file ...]) ### grep([options,] regex_filter, file_array) Available options: + `-v`: Invert `regex_filter` (only print non-matching lines). + `-l`: Print only filenames of matching files. + `-i`: Ignore case. Examples: ```javascript grep('-v', 'GLOBAL_VARIABLE', '*.js'); grep('GLOBAL_VARIABLE', '*.js'); ``` Reads input string from given files and returns a string containing all lines of the file that match the given `regex_filter`. ### head([{'-n': \<num\>},] file [, file ...]) ### head([{'-n': \<num\>},] file_array) Available options: + `-n <num>`: Show the first `<num>` lines of the files Examples: ```javascript var str = head({'-n': 1}, 'file*.txt'); var str = head('file1', 'file2'); var str = head(['file1', 'file2']); // same as above ``` Read the start of a file. ### ln([options,] source, dest) Available options: + `-s`: symlink + `-f`: force Examples: ```javascript ln('file', 'newlink'); ln('-sf', 'file', 'existing'); ``` Links `source` to `dest`. Use `-f` to force the link, should `dest` already exist. ### ls([options,] [path, ...]) ### ls([options,] path_array) Available options: + `-R`: recursive + `-A`: all files (include files beginning with `.`, except for `.` and `..`) + `-L`: follow symlinks + `-d`: list directories themselves, not their contents + `-l`: list objects representing each file, each with fields containing `ls -l` output fields. 
See [`fs.Stats`](https://nodejs.org/api/fs.html#fs_class_fs_stats) for more info Examples: ```javascript ls('projs/*.js'); ls('-R', '/users/me', '/tmp'); ls('-R', ['/users/me', '/tmp']); // same as above ls('-l', 'file.txt'); // { name: 'file.txt', mode: 33188, nlink: 1, ...} ``` Returns array of files in the given `path`, or files in the current directory if no `path` is provided. ### mkdir([options,] dir [, dir ...]) ### mkdir([options,] dir_array) Available options: + `-p`: full path (and create intermediate directories, if necessary) Examples: ```javascript mkdir('-p', '/tmp/a/b/c/d', '/tmp/e/f/g'); mkdir('-p', ['/tmp/a/b/c/d', '/tmp/e/f/g']); // same as above ``` Creates directories. ### mv([options ,] source [, source ...], dest') ### mv([options ,] source_array, dest') Available options: + `-f`: force (default behavior) + `-n`: no-clobber Examples: ```javascript mv('-n', 'file', 'dir/'); mv('file1', 'file2', 'dir/'); mv(['file1', 'file2'], 'dir/'); // same as above ``` Moves `source` file(s) to `dest`. ### pwd() Returns the current directory. ### rm([options,] file [, file ...]) ### rm([options,] file_array) Available options: + `-f`: force + `-r, -R`: recursive Examples: ```javascript rm('-rf', '/tmp/*'); rm('some_file.txt', 'another_file.txt'); rm(['some_file.txt', 'another_file.txt']); // same as above ``` Removes files. ### sed([options,] search_regex, replacement, file [, file ...]) ### sed([options,] search_regex, replacement, file_array) Available options: + `-i`: Replace contents of `file` in-place. _Note that no backups will be created!_ Examples: ```javascript sed('-i', 'PROGRAM_VERSION', 'v0.1.3', 'source.js'); sed(/.*DELETE_THIS_LINE.*\n/, '', 'source.js'); ``` Reads an input string from `file`s, and performs a JavaScript `replace()` on the input using the given `search_regex` and `replacement` string or function. Returns the new string after replacement. Note: Like unix `sed`, ShellJS `sed` supports capture groups. Capture groups are specified using the `$n` syntax: ```javascript sed(/(\w+)\s(\w+)/, '$2, $1', 'file.txt'); ``` ### set(options) Available options: + `+/-e`: exit upon error (`config.fatal`) + `+/-v`: verbose: show all commands (`config.verbose`) + `+/-f`: disable filename expansion (globbing) Examples: ```javascript set('-e'); // exit upon first error set('+e'); // this undoes a "set('-e')" ``` Sets global configuration variables. ### sort([options,] file [, file ...]) ### sort([options,] file_array) Available options: + `-r`: Reverse the results + `-n`: Compare according to numerical value Examples: ```javascript sort('foo.txt', 'bar.txt'); sort('-r', 'foo.txt'); ``` Return the contents of the `file`s, sorted line-by-line. Sorting multiple files mixes their content (just as unix `sort` does). ### tail([{'-n': \<num\>},] file [, file ...]) ### tail([{'-n': \<num\>},] file_array) Available options: + `-n <num>`: Show the last `<num>` lines of `file`s Examples: ```javascript var str = tail({'-n': 1}, 'file*.txt'); var str = tail('file1', 'file2'); var str = tail(['file1', 'file2']); // same as above ``` Read the end of a `file`. ### tempdir() Examples: ```javascript var tmp = tempdir(); // "/tmp" for most *nix platforms ``` Searches and returns string containing a writeable, platform-dependent temporary directory. Follows Python's [tempfile algorithm](http://docs.python.org/library/tempfile.html#tempfile.tempdir). 
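As a rough sketch of how these pieces fit together, `tempdir()` pairs naturally with `mkdir('-p')` and `rm('-rf')` for scratch space. The `my-build-staging` name and the `dist/` source directory below are placeholders, not part of the library:

```javascript
var path = require('path');
var shell = require('shelljs');

// Stage generated files in a throwaway directory under the platform temp dir.
var staging = path.join(shell.tempdir(), 'my-build-staging');

shell.rm('-rf', staging);          // start from a clean slate
shell.mkdir('-p', staging);        // create intermediate directories as needed
shell.cp('-R', 'dist/', staging);  // copy build output into the staging area
```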
### test(expression) Available expression primaries: + `'-b', 'path'`: true if path is a block device + `'-c', 'path'`: true if path is a character device + `'-d', 'path'`: true if path is a directory + `'-e', 'path'`: true if path exists + `'-f', 'path'`: true if path is a regular file + `'-L', 'path'`: true if path is a symbolic link + `'-p', 'path'`: true if path is a pipe (FIFO) + `'-S', 'path'`: true if path is a socket Examples: ```javascript if (test('-d', path)) { /* do something with dir */ }; if (!test('-f', path)) continue; // skip if it's a regular file ``` Evaluates `expression` using the available primaries and returns corresponding value. ### ShellString.prototype.to(file) Examples: ```javascript cat('input.txt').to('output.txt'); ``` Analogous to the redirection operator `>` in Unix, but works with `ShellStrings` (such as those returned by `cat`, `grep`, etc.). _Like Unix redirections, `to()` will overwrite any existing file!_ ### ShellString.prototype.toEnd(file) Examples: ```javascript cat('input.txt').toEnd('output.txt'); ``` Analogous to the redirect-and-append operator `>>` in Unix, but works with `ShellStrings` (such as those returned by `cat`, `grep`, etc.). ### touch([options,] file [, file ...]) ### touch([options,] file_array) Available options: + `-a`: Change only the access time + `-c`: Do not create any files + `-m`: Change only the modification time + `-d DATE`: Parse `DATE` and use it instead of current time + `-r FILE`: Use `FILE`'s times instead of current time Examples: ```javascript touch('source.js'); touch('-c', '/path/to/some/dir/source.js'); touch({ '-r': FILE }, '/path/to/some/dir/source.js'); ``` Update the access and modification times of each `FILE` to the current time. A `FILE` argument that does not exist is created empty, unless `-c` is supplied. This is a partial implementation of [`touch(1)`](http://linux.die.net/man/1/touch). ### uniq([options,] [input, [output]]) Available options: + `-i`: Ignore case while comparing + `-c`: Prefix lines by the number of occurrences + `-d`: Only print duplicate lines, one for each group of identical lines Examples: ```javascript uniq('foo.txt'); uniq('-i', 'foo.txt'); uniq('-cd', 'foo.txt', 'bar.txt'); ``` Filter adjacent matching lines from `input`. ### which(command) Examples: ```javascript var nodeExec = which('node'); ``` Searches for `command` in the system's `PATH`. On Windows, this uses the `PATHEXT` variable to append the extension if it's not already executable. Returns string containing the absolute path to `command`. ### exit(code) Exits the current process with the given exit `code`. ### error() Tests if error occurred in the last command. Returns a truthy value if an error returned, or a falsy value otherwise. **Note**: do not rely on the return value to be an error message. If you need the last error message, use the `.stderr` attribute from the last command's return value instead. ### ShellString(str) Examples: ```javascript var foo = ShellString('hello world'); ``` Turns a regular string into a string-like object similar to what each command returns. This has special methods, like `.to()` and `.toEnd()`. ### env['VAR_NAME'] Object containing environment variables (both getter and setter). Shortcut to `process.env`. 
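Because `env` is just a view over `process.env`, a script can both read and set variables through it and combine that with `test()`. A minimal sketch (the `.env.local` filename is a made-up placeholder):

```javascript
var shell = require('shelljs');

// env is a live getter/setter over process.env
shell.env['NODE_ENV'] = shell.env['NODE_ENV'] || 'development';

if (shell.test('-f', '.env.local')) {
  shell.echo('NODE_ENV is ' + shell.env['NODE_ENV'] + ' (overrides found in .env.local)');
}
```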
### Pipes Examples: ```javascript grep('foo', 'file1.txt', 'file2.txt').sed(/o/g, 'a').to('output.txt'); echo('files with o\'s in the name:\n' + ls().grep('o')); cat('test.js').exec('node'); // pipe to exec() call ``` Commands can send their output to another command in a pipe-like fashion. `sed`, `grep`, `cat`, `exec`, `to`, and `toEnd` can appear on the right-hand side of a pipe. Pipes can be chained. ## Configuration ### config.silent Example: ```javascript var sh = require('shelljs'); var silentState = sh.config.silent; // save old silent state sh.config.silent = true; /* ... */ sh.config.silent = silentState; // restore old silent state ``` Suppresses all command output if `true`, except for `echo()` calls. Default is `false`. ### config.fatal Example: ```javascript require('shelljs/global'); config.fatal = true; // or set('-e'); cp('this_file_does_not_exist', '/dev/null'); // throws Error here /* more commands... */ ``` If `true`, the script will throw a Javascript error when any shell.js command encounters an error. Default is `false`. This is analogous to Bash's `set -e`. ### config.verbose Example: ```javascript config.verbose = true; // or set('-v'); cd('dir/'); rm('-rf', 'foo.txt', 'bar.txt'); exec('echo hello'); ``` Will print each command as follows: ``` cd dir/ rm -rf foo.txt bar.txt exec echo hello ``` ### config.globOptions Example: ```javascript config.globOptions = {nodir: true}; ``` Use this value for calls to `glob.sync()` instead of the default options. ### config.reset() Example: ```javascript var shell = require('shelljs'); // Make changes to shell.config, and do stuff... /* ... */ shell.config.reset(); // reset to original state // Do more stuff, but with original settings /* ... */ ``` Reset `shell.config` to the defaults: ```javascript { fatal: false, globOptions: {}, maxdepth: 255, noglob: false, silent: false, verbose: false, } ``` ## Team | [![Nate Fischer](https://avatars.githubusercontent.com/u/5801521?s=130)](https://github.com/nfischer) | [![Brandon Freitag](https://avatars1.githubusercontent.com/u/5988055?v=3&s=130)](http://github.com/freitagbr) | |:---:|:---:| | [Nate Fischer](https://github.com/nfischer) | [Brandon Freitag](http://github.com/freitagbr) | # json-schema-traverse Traverse JSON Schema passing each schema object to callback [![build](https://github.com/epoberezkin/json-schema-traverse/workflows/build/badge.svg)](https://github.com/epoberezkin/json-schema-traverse/actions?query=workflow%3Abuild) [![npm](https://img.shields.io/npm/v/json-schema-traverse)](https://www.npmjs.com/package/json-schema-traverse) [![coverage](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master) ## Install ``` npm install json-schema-traverse ``` ## Usage ```javascript const traverse = require('json-schema-traverse'); const schema = { properties: { foo: {type: 'string'}, bar: {type: 'integer'} } }; traverse(schema, {cb}); // cb is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // Or: traverse(schema, {cb: {pre, post}}); // pre is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // // post is called 3 times with: // 1. {type: 'string'} // 2. {type: 'integer'} // 3. root schema ``` Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. 
Schema references ($ref) are not resolved, they are passed as is. Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed. Callback is passed these parameters: - _schema_: the current schema object - _JSON pointer_: from the root schema to the current schema object - _root schema_: the schema passed to `traverse` object - _parent JSON pointer_: from the root schema to the parent schema object (see below) - _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.) - _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema - _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'` ## Traverse objects in all unknown keywords ```javascript const traverse = require('json-schema-traverse'); const schema = { mySchema: { minimum: 1, maximum: 2 } }; traverse(schema, {allKeys: true, cb}); // cb is called 2 times with: // 1. root schema // 2. mySchema ``` Without option `allKeys: true` callback will be called only with root schema. ## Enterprise support json-schema-traverse package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-json-schema-traverse?utm_source=npm-json-schema-traverse&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. ## License [MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE) <p align="center"> <a href="https://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # glob-parent [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Azure Pipelines Build Status][azure-pipelines-image]][azure-pipelines-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] Extract the non-magic parent path from a glob string. ## Usage ```js var globParent = require('glob-parent'); globParent('path/to/*.js'); // 'path/to' globParent('/root/path/to/*.js'); // '/root/path/to' globParent('/*.js'); // '/' globParent('*.js'); // '.' globParent('**/*.js'); // '.' globParent('path/{to,from}'); // 'path' globParent('path/!(to|from)'); // 'path' globParent('path/?(to|from)'); // 'path' globParent('path/+(to|from)'); // 'path' globParent('path/*(to|from)'); // 'path' globParent('path/@(to|from)'); // 'path' globParent('path/**/*'); // 'path' // if provided a non-glob path, returns the nearest dir globParent('path/foo/bar.js'); // 'path/foo' globParent('path/foo/'); // 'path/foo' globParent('path/foo'); // 'path' (see issue #3 for details) ``` ## API ### `globParent(maybeGlobString, [options])` Takes a string and returns the part of the path before the glob begins. Be aware of Escaping rules and Limitations below. 
#### options ```js { // Disables the automatic conversion of slashes for Windows flipBackslashes: true } ``` ## Escaping The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters: - `?` (question mark) unless used as a path segment alone - `*` (asterisk) - `|` (pipe) - `(` (opening parenthesis) - `)` (closing parenthesis) - `{` (opening curly brace) - `}` (closing curly brace) - `[` (opening bracket) - `]` (closing bracket) **Example** ```js globParent('foo/[bar]/') // 'foo' globParent('foo/\\[bar]/') // 'foo/[bar]' ``` ## Limitations ### Braces & Brackets This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with [expand-braces] and/or [expand-brackets]. ### Windows Backslashes are not valid path separators for globs. If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators). This cannot be used in conjunction with escape characters. ```js // BAD globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)' // GOOD globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)' ``` If you are using escape characters for a pattern without path parts (i.e. relative to `cwd`), prefix with `./` to avoid confusing glob-parent. ```js // BAD globParent('foo \\[bar]') // 'foo ' globParent('foo \\[bar]*') // 'foo ' // GOOD globParent('./foo \\[bar]') // 'foo [bar]' globParent('./foo \\[bar]*') // '.' ``` ## License ISC [expand-braces]: https://github.com/jonschlinkert/expand-braces [expand-brackets]: https://github.com/jonschlinkert/expand-brackets [downloads-image]: https://img.shields.io/npm/dm/glob-parent.svg [npm-url]: https://www.npmjs.com/package/glob-parent [npm-image]: https://img.shields.io/npm/v/glob-parent.svg [azure-pipelines-url]: https://dev.azure.com/gulpjs/gulp/_build/latest?definitionId=2&branchName=master [azure-pipelines-image]: https://dev.azure.com/gulpjs/gulp/_apis/build/status/glob-parent?branchName=master [travis-url]: https://travis-ci.org/gulpjs/glob-parent [travis-image]: https://img.shields.io/travis/gulpjs/glob-parent.svg?label=travis-ci [appveyor-url]: https://ci.appveyor.com/project/gulpjs/glob-parent [appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/glob-parent.svg?label=appveyor [coveralls-url]: https://coveralls.io/r/gulpjs/glob-parent [coveralls-image]: https://img.shields.io/coveralls/gulpjs/glob-parent/master.svg [gitter-url]: https://gitter.im/gulpjs/gulp [gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg # sprintf.js **sprintf.js** is a complete open source JavaScript sprintf implementation for the *browser* and *node.js*. Its prototype is simple: string sprintf(string format , [mixed arg1 [, mixed arg2 [ ,...]]]) The placeholders in the format string are marked by `%` and are followed by one or more of these elements, in this order: * An optional number followed by a `$` sign that selects which argument index to use for the value. 
If not specified, arguments will be placed in the same order as the placeholders in the input string.
* An optional `+` sign that forces the result to be preceded by a plus or minus sign on numeric values. By default, only the `-` sign is used on negative numbers.
* An optional padding specifier that says what character to use for padding (if specified). Possible values are `0` or any other character preceded by a `'` (single quote). The default is to pad with *spaces*.
* An optional `-` sign, that causes sprintf to left-align the result of this placeholder. The default is to right-align the result.
* An optional number, that says how many characters the result should have. If the value to be returned is shorter than this number, the result will be padded. When used with the `j` (JSON) type specifier, the padding length specifies the tab size used for indentation.
* An optional precision modifier, consisting of a `.` (dot) followed by a number, that says how many digits should be displayed for floating point numbers. When used with the `g` type specifier, it specifies the number of significant digits. When used on a string, it causes the result to be truncated.
* A type specifier that can be any of:
    * `%` — yields a literal `%` character
    * `b` — yields an integer as a binary number
    * `c` — yields an integer as the character with that ASCII value
    * `d` or `i` — yields an integer as a signed decimal number
    * `e` — yields a float using scientific notation
    * `u` — yields an integer as an unsigned decimal number
    * `f` — yields a float as is; see notes on precision above
    * `g` — yields a float as is; see notes on precision above
    * `o` — yields an integer as an octal number
    * `s` — yields a string as is
    * `x` — yields an integer as a hexadecimal number (lower-case)
    * `X` — yields an integer as a hexadecimal number (upper-case)
    * `j` — yields a JavaScript object or array as a JSON encoded string

## JavaScript `vsprintf`

`vsprintf` is the same as `sprintf` except that it accepts an array of arguments, rather than a variable number of arguments:

    vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"])

## Argument swapping

You can also swap the arguments. That is, the order of the placeholders doesn't have to match the order of the arguments. You can do that by simply indicating in the format string which arguments the placeholders refer to:

    sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants")

And, of course, you can repeat the placeholders without having to increase the number of arguments.

## Named arguments

Format strings may contain replacement fields rather than positional placeholders. Instead of referring to a certain argument, you can now refer to a certain key within an object. Replacement fields are surrounded by rounded parentheses - `(` and `)` - and begin with a keyword that refers to a key:

    var user = {
        name: "Dolly"
    }
    sprintf("Hello %(name)s", user) // Hello Dolly

Keywords in replacement fields can be optionally followed by any number of keywords or indexes:

    var users = [
        {name: "Dolly"},
        {name: "Molly"},
        {name: "Polly"}
    ]
    sprintf("Hello %(users[0].name)s, %(users[1].name)s and %(users[2].name)s", {users: users}) // Hello Dolly, Molly and Polly

Note: mixing positional and named placeholders is not (yet) supported

## Computed values

You can pass in a function as a dynamic value and it will be invoked (with no arguments) in order to compute the value on-the-fly.
sprintf("Current timestamp: %d", Date.now) // Current timestamp: 1398005382890 sprintf("Current date and time: %s", function() { return new Date().toString() }) # AngularJS You can now use `sprintf` and `vsprintf` (also aliased as `fmt` and `vfmt` respectively) in your AngularJS projects. See `demo/`. # Installation ## Via Bower bower install sprintf ## Or as a node.js module npm install sprintf-js ### Usage var sprintf = require("sprintf-js").sprintf, vsprintf = require("sprintf-js").vsprintf sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants") vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"]) # License **sprintf.js** is licensed under the terms of the 3-clause BSD license. # regexpp [![npm version](https://img.shields.io/npm/v/regexpp.svg)](https://www.npmjs.com/package/regexpp) [![Downloads/month](https://img.shields.io/npm/dm/regexpp.svg)](http://www.npmtrends.com/regexpp) [![Build Status](https://github.com/mysticatea/regexpp/workflows/CI/badge.svg)](https://github.com/mysticatea/regexpp/actions) [![codecov](https://codecov.io/gh/mysticatea/regexpp/branch/master/graph/badge.svg)](https://codecov.io/gh/mysticatea/regexpp) [![Dependency Status](https://david-dm.org/mysticatea/regexpp.svg)](https://david-dm.org/mysticatea/regexpp) A regular expression parser for ECMAScript. ## 💿 Installation ```bash $ npm install regexpp ``` - require Node.js 8 or newer. ## 📖 Usage ```ts import { AST, RegExpParser, RegExpValidator, RegExpVisitor, parseRegExpLiteral, validateRegExpLiteral, visitRegExpAST } from "regexpp" ``` ### parseRegExpLiteral(source, options?) Parse a given regular expression literal then make AST object. This is equivalent to `new RegExpParser(options).parseLiteral(source)`. - **Parameters:** - `source` (`string | RegExp`) The source code to parse. - `options?` ([`RegExpParser.Options`]) The options to parse. - **Return:** - The AST of the regular expression. ### validateRegExpLiteral(source, options?) Validate a given regular expression literal. This is equivalent to `new RegExpValidator(options).validateLiteral(source)`. - **Parameters:** - `source` (`string`) The source code to validate. - `options?` ([`RegExpValidator.Options`]) The options to validate. ### visitRegExpAST(ast, handlers) Visit each node of a given AST. This is equivalent to `new RegExpVisitor(handlers).visit(ast)`. - **Parameters:** - `ast` ([`AST.Node`]) The AST to visit. - `handlers` ([`RegExpVisitor.Handlers`]) The callbacks. ### RegExpParser #### new RegExpParser(options?) - **Parameters:** - `options?` ([`RegExpParser.Options`]) The options to parse. #### parser.parseLiteral(source, start?, end?) Parse a regular expression literal. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"/abc/g"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - **Return:** - The AST of the regular expression. #### parser.parsePattern(source, start?, end?, uFlag?) Parse a regular expression pattern. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"abc"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - `uFlag?` (`boolean`) The flag to enable Unicode mode. - **Return:** - The AST of the regular expression pattern. #### parser.parseFlags(source, start?, end?) Parse a regular expression flags. 
- **Parameters:** - `source` (`string`) The source code to parse. E.g. `"gim"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - **Return:** - The AST of the regular expression flags. ### RegExpValidator #### new RegExpValidator(options) - **Parameters:** - `options` ([`RegExpValidator.Options`]) The options to validate. #### validator.validateLiteral(source, start, end) Validate a regular expression literal. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. #### validator.validatePattern(source, start, end, uFlag) Validate a regular expression pattern. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - `uFlag?` (`boolean`) The flag to enable Unicode mode. #### validator.validateFlags(source, start, end) Validate a regular expression flags. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. ### RegExpVisitor #### new RegExpVisitor(handlers) - **Parameters:** - `handlers` ([`RegExpVisitor.Handlers`]) The callbacks. #### visitor.visit(ast) Validate a regular expression literal. - **Parameters:** - `ast` ([`AST.Node`]) The AST to visit. ## 📰 Changelog - [GitHub Releases](https://github.com/mysticatea/regexpp/releases) ## 🍻 Contributing Welcome contributing! Please use GitHub's Issues/PRs. ### Development Tools - `npm test` runs tests and measures coverage. - `npm run build` compiles TypeScript source code to `index.js`, `index.js.map`, and `index.d.ts`. - `npm run clean` removes the temporary files which are created by `npm test` and `npm run build`. - `npm run lint` runs ESLint. - `npm run update:test` updates test fixtures. - `npm run update:ids` updates `src/unicode/ids.ts`. - `npm run watch` runs tests with `--watch` option. [`AST.Node`]: src/ast.ts#L4 [`RegExpParser.Options`]: src/parser.ts#L539 [`RegExpValidator.Options`]: src/validator.ts#L127 [`RegExpVisitor.Handlers`]: src/visitor.ts#L204 Compiler frontend for node.js ============================= Usage ----- For an up to date list of available command line options, see: ``` $> asc --help ``` API --- The API accepts the same options as the CLI but also lets you override stdout and stderr and/or provide a callback. Example: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { asc.main([ "myModule.ts", "--binaryFile", "myModule.wasm", "--optimize", "--sourceMap", "--measure" ], { stdout: process.stdout, stderr: process.stderr }, function(err) { if (err) throw err; ... }); }); ``` Available command line options can also be obtained programmatically: ```js const options = require("assemblyscript/cli/asc.json"); ... ``` You can also compile a source string directly, for example in a browser environment: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { const { binary, text, stdout, stderr } = asc.compileString(`...`, { optimize: 2 }); }); ... 
``` [![Build Status](https://travis-ci.org/isaacs/rimraf.svg?branch=master)](https://travis-ci.org/isaacs/rimraf) [![Dependency Status](https://david-dm.org/isaacs/rimraf.svg)](https://david-dm.org/isaacs/rimraf) [![devDependency Status](https://david-dm.org/isaacs/rimraf/dev-status.svg)](https://david-dm.org/isaacs/rimraf#info=devDependencies) The [UNIX command](http://en.wikipedia.org/wiki/Rm_(Unix)) `rm -rf` for node. Install with `npm install rimraf`, or just drop rimraf.js somewhere. ## API `rimraf(f, [opts], callback)` The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with `opts.disableGlob` (defaults to `false`). This might be handy, for instance, if you have filenames that contain globbing wildcard characters. The callback will be called with an error if there is one. Certain errors are handled for you: * Windows: `EBUSY` and `ENOTEMPTY` - rimraf will back off a maximum of `opts.maxBusyTries` times before giving up, adding 100ms of wait between each attempt. The default `maxBusyTries` is 3. * `ENOENT` - If the file doesn't exist, rimraf will return successfully, since your desired outcome is already the case. * `EMFILE` - Since `readdir` requires opening a file descriptor, it's possible to hit `EMFILE` if too many file descriptors are in use. In the sync case, there's nothing to be done for this. But in the async case, rimraf will gradually back off with timeouts up to `opts.emfileWait` ms, which defaults to 1000. ## options * unlink, chmod, stat, lstat, rmdir, readdir, unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync In order to use a custom file system library, you can override specific fs functions on the options object. If any of these functions are present on the options object, then the supplied function will be used instead of the default fs method. Sync methods are only relevant for `rimraf.sync()`, of course. For example: ```javascript var myCustomFS = require('some-custom-fs') rimraf('some-thing', myCustomFS, callback) ``` * maxBusyTries If an `EBUSY`, `ENOTEMPTY`, or `EPERM` error code is encountered on Windows systems, then rimraf will retry with a linear backoff wait of 100ms longer on each try. The default maxBusyTries is 3. Only relevant for async usage. * emfileWait If an `EMFILE` error is encountered, then rimraf will retry repeatedly with a linear backoff of 1ms longer on each try, until the timeout counter hits this max. The default limit is 1000. If you repeatedly encounter `EMFILE` errors, then consider using [graceful-fs](http://npm.im/graceful-fs) in your program. Only relevant for async usage. * glob Set to `false` to disable [glob](http://npm.im/glob) pattern matching. Set to an object to pass options to the glob module. The default glob options are `{ nosort: true, silent: true }`. Glob version 6 is used in this module. Relevant for both sync and async usage. * disableGlob Set to any non-falsey value to disable globbing entirely. (Equivalent to setting `glob: false`.) ## rimraf.sync It can remove stuff synchronously, too. But that's not so good. Use the async API. It's better. ## CLI If installed with `npm install rimraf -g` it can be used as a global command `rimraf <path> [<path> ...]` which is useful for cross platform support. ## mkdirp If you need to create a directory recursively, check out [mkdirp](https://github.com/substack/node-mkdirp). 
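A short sketch of the async and sync forms described above; the paths are placeholders and the glob options simply mirror the documented defaults:

```javascript
var rimraf = require('rimraf')

// Async form: the first argument is treated as a glob pattern by default.
rimraf('build/**/*.tmp', { glob: { nosort: true, silent: true } }, function (err) {
  if (err) throw err
  console.log('temporary files removed')
})

// Sync form; disableGlob treats the argument as a literal path.
rimraf.sync('literal-[name].txt', { disableGlob: true })
```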
# node-tar [![Build Status](https://travis-ci.org/npm/node-tar.svg?branch=master)](https://travis-ci.org/npm/node-tar) [Fast](./benchmarks) and full-featured Tar for Node.js The API is designed to mimic the behavior of `tar(1)` on unix systems. If you are familiar with how tar works, most of this will hopefully be straightforward for you. If not, then hopefully this module can teach you useful unix skills that may come in handy someday :) ## Background A "tar file" or "tarball" is an archive of file system entries (directories, files, links, etc.) The name comes from "tape archive". If you run `man tar` on almost any Unix command line, you'll learn quite a bit about what it can do, and its history. Tar has 5 main top-level commands: * `c` Create an archive * `r` Replace entries within an archive * `u` Update entries within an archive (ie, replace if they're newer) * `t` List out the contents of an archive * `x` Extract an archive to disk The other flags and options modify how this top level function works. ## High-Level API These 5 functions are the high-level API. All of them have a single-character name (for unix nerds familiar with `tar(1)`) as well as a long name (for everyone else). All the high-level functions take the following arguments, all three of which are optional and may be omitted. 1. `options` - An optional object specifying various options 2. `paths` - An array of paths to add or extract 3. `callback` - Called when the command is completed, if async. (If sync or no file specified, providing a callback throws a `TypeError`.) If the command is sync (ie, if `options.sync=true`), then the callback is not allowed, since the action will be completed immediately. If a `file` argument is specified, and the command is async, then a `Promise` is returned. In this case, if async, a callback may be provided which is called when the command is completed. If a `file` option is not specified, then a stream is returned. For `create`, this is a readable stream of the generated archive. For `list` and `extract` this is a writable stream that an archive should be written into. If a file is not specified, then a callback is not allowed, because you're already getting a stream to work with. `replace` and `update` only work on existing archives, and so require a `file` argument. Sync commands without a file argument return a stream that acts on its input immediately in the same tick. For readable streams, this means that all of the data is immediately available by calling `stream.read()`. For writable streams, it will be acted upon as soon as it is provided, but this can be at any time. ### Warnings and Errors Tar emits warnings and errors for recoverable and unrecoverable situations, respectively. In many cases, a warning only affects a single entry in an archive, or is simply informing you that it's modifying an entry to comply with the settings provided. Unrecoverable warnings will always raise an error (ie, emit `'error'` on streaming actions, throw for non-streaming sync actions, reject the returned Promise for non-streaming async operations, or call a provided callback with an `Error` as the first argument). Recoverable errors will raise an error only if `strict: true` is set in the options. Respond to (recoverable) warnings by listening to the `warn` event. Handlers receive 3 arguments: - `code` String. One of the error codes below. This may not match `data.code`, which preserves the original error code from fs and zlib. - `message` String. More details about the error. 
- `data` Metadata about the error. An `Error` object for errors raised by fs and zlib. All fields are attached to errors raised by tar. Typically contains the following fields, as relevant:
  - `tarCode` The tar error code.
  - `code` Either the tar error code, or the error code set by the underlying system.
  - `file` The archive file being read or written.
  - `cwd` Working directory for creation and extraction operations.
  - `entry` The entry object (if it could be created) for `TAR_ENTRY_INFO`, `TAR_ENTRY_INVALID`, and `TAR_ENTRY_ERROR` warnings.
  - `header` The header object (if it could be created, and the entry could not be created) for `TAR_ENTRY_INFO` and `TAR_ENTRY_INVALID` warnings.
  - `recoverable` Boolean. If `false`, then the warning will emit an `error`, even in non-strict mode.

#### Error Codes

* `TAR_ENTRY_INFO` An informative error indicating that an entry is being modified, but otherwise processed normally. For example, removing `/` or `C:\` from absolute paths if `preservePaths` is not set.
* `TAR_ENTRY_INVALID` An indication that a given entry is not a valid tar archive entry, and will be skipped. This occurs when:
  - a checksum fails,
  - a `linkpath` is missing for a link type, or
  - a `linkpath` is provided for a non-link type.
  If every entry in a parsed archive raises a `TAR_ENTRY_INVALID` error, then the archive is presumed to be unrecoverably broken, and `TAR_BAD_ARCHIVE` will be raised.
* `TAR_ENTRY_ERROR` The entry appears to be a valid tar archive entry, but encountered an error which prevented it from being unpacked. This occurs when:
  - an unrecoverable fs error happens during unpacking,
  - an entry has `..` in the path and `preservePaths` is not set, or
  - an entry is extracting through a symbolic link, when `preservePaths` is not set.
* `TAR_ENTRY_UNSUPPORTED` An indication that a given entry is a valid archive entry, but of a type that is unsupported, and so will be skipped in archive creation or extracting.
* `TAR_ABORT` When parsing gzip-encoded archives, the parser will abort the parse process and raise a warning for any zlib errors encountered. Aborts are considered unrecoverable for both parsing and unpacking.
* `TAR_BAD_ARCHIVE` The archive file is totally hosed. This can happen for a number of reasons, and always occurs at the end of a parse or extract:
  - An entry body was truncated before seeing the full number of bytes.
  - The archive contained only invalid entries, indicating that it is likely not an archive, or at least, not an archive this library can parse.
  `TAR_BAD_ARCHIVE` is considered informative for parse operations, but unrecoverable for extraction. Note that, if encountered at the end of an extraction, tar WILL still have extracted as much as it could from the archive, so there may be some garbage files to clean up.

Errors that occur deeper in the system (ie, either the filesystem or zlib) will have their error codes left intact, and a `tarCode` matching one of the above will be added to the warning metadata or the raised error object.

Errors generated by tar will have one of the above codes set as the `error.code` field as well, but since errors originating in zlib or fs will have their original codes, it's better to read `error.tarCode` if you wish to see how tar is handling the issue.

### Examples

The API mimics the `tar(1)` command line functionality, with aliases for more human-readable option and function names. The goal is that if you know how to use `tar(1)` in Unix, then you know how to use `require('tar')` in JavaScript.
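Before the command-by-command recipes below, here is a minimal sketch of the warning handling just described: with no `file` option, `tar.x()` returns a writable stream, and recoverable problems surface on its `warn` event. The archive name and target directory are placeholders.

```js
const fs = require('fs')
const tar = require('tar')

fs.createReadStream('my-tarball.tgz')
  .pipe(tar.x({ cwd: 'some-dir' }))          // cwd must already exist
  .on('warn', (code, message, data) => {
    // data carries fields like tarCode, file, cwd, entry (see above)
    console.warn(`tar warning ${code}: ${message}`, data && data.tarCode)
  })
```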
To replicate `tar czf my-tarball.tgz files and folders`, you'd do: ```js tar.c( { gzip: <true|gzip options>, file: 'my-tarball.tgz' }, ['some', 'files', 'and', 'folders'] ).then(_ => { .. tarball has been created .. }) ``` To replicate `tar cz files and folders > my-tarball.tgz`, you'd do: ```js tar.c( // or tar.create { gzip: <true|gzip options> }, ['some', 'files', 'and', 'folders'] ).pipe(fs.createWriteStream('my-tarball.tgz')) ``` To replicate `tar xf my-tarball.tgz` you'd do: ```js tar.x( // or tar.extract( { file: 'my-tarball.tgz' } ).then(_=> { .. tarball has been dumped in cwd .. }) ``` To replicate `cat my-tarball.tgz | tar x -C some-dir --strip=1`: ```js fs.createReadStream('my-tarball.tgz').pipe( tar.x({ strip: 1, C: 'some-dir' // alias for cwd:'some-dir', also ok }) ) ``` To replicate `tar tf my-tarball.tgz`, do this: ```js tar.t({ file: 'my-tarball.tgz', onentry: entry => { .. do whatever with it .. } }) ``` To replicate `cat my-tarball.tgz | tar t` do: ```js fs.createReadStream('my-tarball.tgz') .pipe(tar.t()) .on('entry', entry => { .. do whatever with it .. }) ``` To do anything synchronous, add `sync: true` to the options. Note that sync functions don't take a callback and don't return a promise. When the function returns, it's already done. Sync methods without a file argument return a sync stream, which flushes immediately. But, of course, it still won't be done until you `.end()` it. To filter entries, add `filter: <function>` to the options. Tar-creating methods call the filter with `filter(path, stat)`. Tar-reading methods (including extraction) call the filter with `filter(path, entry)`. The filter is called in the `this`-context of the `Pack` or `Unpack` stream object. The arguments list to `tar t` and `tar x` specify a list of filenames to extract or list, so they're equivalent to a filter that tests if the file is in the list. For those who _aren't_ fans of tar's single-character command names: ``` tar.c === tar.create tar.r === tar.replace (appends to archive, file is required) tar.u === tar.update (appends if newer, file is required) tar.x === tar.extract tar.t === tar.list ``` Keep reading for all the command descriptions and options, as well as the low-level API that they are built on. ### tar.c(options, fileList, callback) [alias: tar.create] Create a tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Write the tarball archive to the specified filename. If this is specified, then the callback will be fired when the file has been written, and a promise will be returned that resolves when the file is written. If a filename is not specified, then a Readable Stream will be returned which will emit the file data. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. If this is set, and a file is not provided, then the resulting stream will already have the data ready to `read` or `emit('data')` as soon as you request it. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. 
[Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `mode` The mode to set on the created file archive - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. ### tar.x(options, fileList, callback) [alias: tar.extract] Extract a tarball archive. The `fileList` is an array of paths to extract from the tarball. If no paths are provided, then all the entries are extracted. If the archive is gzipped, then tar will detect this and unzip it. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. Most extraction errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then the extraction will fail completely. The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. [Alias: `C`] - `file` The archive file to extract. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Create files and directories synchronously. - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. 
[Alias: `keep-newer`, `keep-newer-files`] - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. [Alias: `k`, `keep-existing`] - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. [Alias: `P`] - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. [Alias: `U`] - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. [Alias: `strip-components`, `stripComponents`] - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. [Alias: `p`] - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. [Alias: `m`, `no-mtime`] - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync extractions. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. 
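As a quick illustration of the `filter` and `onentry` options described above, here is a minimal sketch (archive name and paths are hypothetical) that extracts only `.js` entries from a tarball and logs each one:

```js
const tar = require('tar')

tar.x({
  file: 'my-tarball.tgz',
  cwd: 'some-dir',                                 // must already exist
  filter: (path, entry) => path.endsWith('.js'),   // skip everything else
  onentry: entry => console.log('extracting', entry.path),
}).then(() => console.log('done'))
```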
### tar.t(options, fileList, callback) [alias: tar.list] List the contents of a tarball archive. The `fileList` is an array of paths to list from the tarball. If no paths are provided, then all the entries are listed. If the archive is gzipped, then tar will detect this and unzip it. Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. However, they don't emit `'data'` or `'end'` events. (If you want to get actual readable entries, use the `tar.Parse` class instead.) The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. [Alias: `C`] - `file` The archive file to list. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Read the specified file synchronously. (This has no effect when a file option isn't specified, because entries are emitted as fast as they are parsed from the stream anyway.) - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. This is important for when both `file` and `sync` are set, because it will be called synchronously. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noResume` By default, `entry` streams are resumed immediately after the call to `onentry`. Set `noResume: true` to suppress this behavior. Note that by opting into this, the stream will never complete until the entry data is consumed. ### tar.u(options, fileList, callback) [alias: tar.update] Add files to an archive if they are newer than the entry already in the tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. 
- `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ### tar.r(options, fileList, callback) [alias: tar.replace] Add files to an existing archive. Because later entries override earlier entries, this effectively replaces any existing entries. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ## Low-Level API ### class tar.Pack A readable tar stream. Has all the standard readable stream interface stuff. 
`'data'` and `'end'` events, `read()` method, `pause()` and `resume()`, etc. #### constructor(options) The following options are supported: - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. #### add(path) Adds an entry to the archive. Returns the Pack stream. #### write(path) Adds an entry to the archive. Returns true if flushed. #### end() Finishes the archive. ### class tar.Pack.Sync Synchronous version of `tar.Pack`. ### class tar.Unpack A writable stream that unpacks a tar archive onto the file system. All the normal writable stream stuff is supported. `write()` and `end()` methods, `'drain'` events, etc. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. `'close'` is emitted when it's done writing stuff to the file system. Most unpack errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then an error will be emitted. #### constructor(options) - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. 
- `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. - `win32` True if on a windows platform. Causes behavior where filenames containing `<|>?` chars are converted to windows-compatible values while being unpacked. - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `strict` Treat warnings as crash-worthy errors. Default false. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") ### class tar.Unpack.Sync Synchronous version of `tar.Unpack`. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync unpack streams. 
[MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### class tar.Parse A writable stream that parses a tar archive stream. All the standard writable stream stuff is supported. If the archive is gzipped, then tar will detect this and unzip it. Emits `'entry'` events with `tar.ReadEntry` objects, which are themselves readable streams that you can pipe wherever. Each `entry` will not emit until the one before it is flushed through, so make sure to either consume the data (with `on('data', ...)` or `.pipe(...)`) or throw it away with `.resume()` to keep the stream flowing. #### constructor(options) Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. The following options are supported: - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") #### abort(error) Stop all parsing activities. This is called when there are zlib errors. It also emits an unrecoverable warning with the error provided. ### class tar.ReadEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being read out of a tar archive. It has the following fields: - `extended` The extended metadata object provided to the constructor. - `globalExtended` The global extended metadata object provided to the constructor. - `remain` The number of bytes remaining to be written into the stream. - `blockRemain` The number of 512-byte blocks remaining to be written into the stream. - `ignore` Whether this entry should be ignored. - `meta` True if this represents metadata about the next entry, false if it represents a filesystem object. - All the fields from the header, extended header, and global extended header are added to the ReadEntry object. So it has `path`, `type`, `size, `mode`, and so on. #### constructor(header, extended, globalExtended) Create a new ReadEntry object with the specified header, extended header, and global extended header values. ### class tar.WriteEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being written from the file system into a tar archive. Emits data for the Header, and for the Pax Extended Header if one is required, as well as any body data. Creating a WriteEntry for a directory does not also create WriteEntry objects for all of the directory contents. It has the following fields: - `path` The path field that will be written to the archive. By default, this is also the path from the cwd to the file system object. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `myuid` If supported, the uid of the user running the current process. - `myuser` The `env.USER` string if set, or `''`. Set as the entry `uname` field if the file's `uid` matches `this.myuid`. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. 
- `linkCache` A Map object containing the device and inode value for any file
  whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from
  absolute paths.
- `cwd` The current working directory for creating the archive. Defaults to
  `process.cwd()`.
- `absolute` The absolute path to the entry on the filesystem. By default,
  this is `path.resolve(this.cwd, this.path)`, but it can be overridden
  explicitly.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `win32` True if on a windows platform. Causes behavior where paths replace
  `\` with `/` and filenames containing the windows-compatible forms of
  `<|>?:` characters are converted to actual `<|>?:` characters in the
  archive.
- `noPax` Suppress pax extended headers. Note that this means that long paths
  and linkpaths will be truncated, and large or negative numeric values may be
  interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that
  this prevents using other mtime-based features like `tar.update` or the
  `keepNewer` option with the resulting tar archive.

#### constructor(path, options)

`path` is the path of the entry as it is written in the archive.

The following options are supported:

- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`,
  `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is
  still included, because this is necessary for other time-based operations.
  Additionally, `mode` is set to a "reasonable default" for most unix systems,
  based on a `umask` value of `0o22`.
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults
  to 1 MB.
- `linkCache` A Map object containing the device and inode value for any file
  whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from
  absolute paths.
- `cwd` The current working directory for creating the archive. Defaults to
  `process.cwd()`.
- `absolute` The absolute path to the entry on the filesystem. By default,
  this is `path.resolve(this.cwd, this.path)`, but it can be overridden
  explicitly.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `win32` True if on a windows platform. Causes behavior where paths replace
  `\` with `/`.
- `onwarn` A function that will get called with `(code, message, data)` for
  any warnings encountered. (See "Warnings and Errors")
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that
  this prevents using other mtime-based features like `tar.update` or the
  `keepNewer` option with the resulting tar archive.
- `umask` Set to restrict the modes on the entries in the archive, somewhat
  like how umask works on file creation. Defaults to `process.umask()` on
  unix systems, or `0o22` on Windows.

#### warn(message, data)

If strict, emit an error with the provided message. Otherwise, emit a
`'warn'` event with the provided message and data.

### class tar.WriteEntry.Sync

Synchronous version of `tar.WriteEntry`.

### class tar.WriteEntry.Tar

A version of `tar.WriteEntry` that gets its data from a `tar.ReadEntry`
instead of from the filesystem.

#### constructor(readEntry, options)

`readEntry` is the entry being read out of another archive.

The following options are supported:

- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`,
  `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`.
Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `strict` Treat warnings as crash-worthy errors. Default false. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. ### class tar.Header A class for reading and writing header blocks. It has the following fields: - `nullBlock` True if decoding a block which is entirely composed of `0x00` null bytes. (Useful because tar files are terminated by at least 2 null blocks.) - `cksumValid` True if the checksum in the header is valid, false otherwise. - `needPax` True if the values, as encoded, will require a Pax extended header. - `path` The path of the entry. - `mode` The 4 lowest-order octal digits of the file mode. That is, read/write/execute permissions for world, group, and owner, and the setuid, setgid, and sticky bits. - `uid` Numeric user id of the file owner - `gid` Numeric group id of the file owner - `size` Size of the file in bytes - `mtime` Modified time of the file - `cksum` The checksum of the header. This is generated by adding all the bytes of the header block, treating the checksum field itself as all ascii space characters (that is, `0x20`). - `type` The human-readable name of the type of entry this represents, or the alphanumeric key if unknown. - `typeKey` The alphanumeric key for the type of entry this header represents. - `linkpath` The target of Link and SymbolicLink entries. - `uname` Human-readable user name of the file owner - `gname` Human-readable group name of the file owner - `devmaj` The major portion of the device number. Always `0` for files, directories, and links. - `devmin` The minor portion of the device number. Always `0` for files, directories, and links. - `atime` File access time. - `ctime` File change time. #### constructor(data, [offset=0]) `data` is optional. It is either a Buffer that should be interpreted as a tar Header starting at the specified offset and continuing for 512 bytes, or a data object of keys and values to set on the header object, and eventually encode as a tar Header. #### decode(block, offset) Decode the provided buffer starting at the specified offset. Buffer length must be greater than 512 bytes. #### set(data) Set the fields in the data object. #### encode(buffer, offset) Encode the header fields into the buffer at the specified offset. Returns `this.needPax` to indicate whether a Pax Extended Header is required to properly encode the specified data. ### class tar.Pax An object representing a set of key-value pairs in an Pax extended header entry. It has the following fields. Where the same name is used, they have the same semantics as the tar.Header field of the same name. - `global` True if this represents a global extended header, or false if it is for a single entry. - `atime` - `charset` - `comment` - `ctime` - `gid` - `gname` - `linkpath` - `mtime` - `path` - `size` - `uid` - `uname` - `dev` - `ino` - `nlink` #### constructor(object, global) Set the fields set in the object. `global` is a boolean that defaults to false. 
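As a small, hedged sketch of how this constructor pairs with the `encode()` method documented next (the field values here are made up):

```js
const tar = require('tar')

// Build a pax extended header record for an entry whose path is too long
// to fit in a classic ustar header.
const pax = new tar.Pax({
  path: 'a/rather/long/path/that/will/not/fit/in/a/plain/ustar/header.txt',
  mtime: new Date(),
})

const buf = pax.encode() // Buffer with the pax header and body, or null if empty
if (buf) console.log(buf.length, 'bytes of pax data')
```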
#### encode() Return a Buffer containing the header and body for the Pax extended header entry, or `null` if there is nothing to encode. #### encodeBody() Return a string representing the body of the pax extended header entry. #### encodeField(fieldName) Return a string representing the key/value encoding for the specified fieldName, or `''` if the field is unset. ### tar.Pax.parse(string, extended, global) Return a new Pax object created by parsing the contents of the string provided. If the `extended` object is set, then also add the fields from that object. (This is necessary because multiple metadata entries can occur in sequence.) ### tar.types A translation table for the `type` field in tar headers. #### tar.types.name.get(code) Get the human-readable name for a given alphanumeric code. #### tar.types.code.get(name) Get the alphanumeric code for a given human-readable name. [![build status](https://app.travis-ci.com/dankogai/js-base64.svg)](https://app.travis-ci.com/github/dankogai/js-base64) # base64.js Yet another [Base64] transcoder. [Base64]: http://en.wikipedia.org/wiki/Base64 ## Install ```shell $ npm install --save js-base64 ``` ## Usage ### In Browser Locally… ```html <script src="base64.js"></script> ``` … or Directly from CDN. In which case you don't even need to install. ```html <script src="https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.min.js"></script> ``` This good old way loads `Base64` in the global context (`window`). Though `Base64.noConflict()` is made available, you should consider using ES6 Module to avoid tainting `window`. ### As an ES6 Module locally… ```javascript import { Base64 } from 'js-base64'; ``` ```javascript // or if you prefer no Base64 namespace import { encode, decode } from 'js-base64'; ``` or even remotely. ```html <script type="module"> // note jsdelivr.net does not automatically minify .mjs import { Base64 } from 'https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.mjs'; </script> ``` ```html <script type="module"> // or if you prefer no Base64 namespace import { encode, decode } from 'https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.mjs'; </script> ``` ### node.js (commonjs) ```javascript const {Base64} = require('js-base64'); ``` Unlike the case above, the global context is no longer modified. You can also use [esm] to `import` instead of `require`. 
[esm]: https://github.com/standard-things/esm

```javascript
require=require('esm')(module);
import {Base64} from 'js-base64';
```

## SYNOPSIS

```javascript
let latin = 'dankogai';
let utf8 = '小飼弾';
let u8s = new Uint8Array([100,97,110,107,111,103,97,105]);

Base64.encode(latin);             // ZGFua29nYWk=
Base64.encode(latin, true);       // ZGFua29nYWk skips padding
Base64.encodeURI(latin);          // ZGFua29nYWk
Base64.btoa(latin);               // ZGFua29nYWk=
Base64.btoa(utf8);                // raises exception
Base64.fromUint8Array(u8s);       // ZGFua29nYWk=
Base64.fromUint8Array(u8s, true); // ZGFua29nYWk which is URI safe
Base64.encode(utf8);              // 5bCP6aO85by+
Base64.encode(utf8, true);        // 5bCP6aO85by-
Base64.encodeURI(utf8);           // 5bCP6aO85by-
```

```javascript
Base64.decode(      'ZGFua29nYWk=');// dankogai
Base64.decode(      'ZGFua29nYWk'); // dankogai
Base64.atob(        'ZGFua29nYWk=');// dankogai
Base64.atob(        '5bCP6aO85by+');// mojibake bytes, which is nonsense
Base64.toUint8Array('ZGFua29nYWk=');// u8s above
Base64.decode(      '5bCP6aO85by+');// 小飼弾
// note .decodeURI() is unnecessary since it accepts both flavors
Base64.decode(      '5bCP6aO85by-');// 小飼弾
```

```javascript
Base64.isValid(0);      // false: 0 is not a string
Base64.isValid('');     // true: a valid Base64-encoded empty byte
Base64.isValid('ZA=='); // true: a valid Base64-encoded 'd'
Base64.isValid('Z A='); // true: whitespaces are okay
Base64.isValid('ZA');   // true: padding ='s can be omitted
Base64.isValid('++');   // true: can be non URL-safe
Base64.isValid('--');   // true: or URL-safe
Base64.isValid('+-');   // false: can't mix both
```

### Built-in Extensions

By default `Base64` leaves built-in prototypes untouched. But you can extend
them as below.

```javascript
// you have to explicitly extend String.prototype
Base64.extendString();
// once extended, you can do the following
'dankogai'.toBase64();        // ZGFua29nYWk=
'小飼弾'.toBase64();           // 5bCP6aO85by+
'小飼弾'.toBase64(true);       // 5bCP6aO85by-
'小飼弾'.toBase64URI();        // 5bCP6aO85by- an alias of .toBase64(true)
'小飼弾'.toBase64URL();        // 5bCP6aO85by- an alias of .toBase64URI()
'ZGFua29nYWk='.fromBase64();  // dankogai
'5bCP6aO85by+'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.toUint8Array();// u8s above
```

```javascript
// you have to explicitly extend Uint8Array.prototype
Base64.extendUint8Array();
// once extended, you can do the following
u8s.toBase64();    // 'ZGFua29nYWk='
u8s.toBase64URI(); // 'ZGFua29nYWk'
u8s.toBase64URL(); // 'ZGFua29nYWk' an alias of .toBase64URI()
```

```javascript
// extend all at once
Base64.extendBuiltins()
```

## `.decode()` vs `.atob` (and `.encode()` vs `btoa()`)

Suppose you have:

```
var pngBase64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=";
```

Which is a Base64-encoded 1x1 transparent PNG, **DO NOT USE** `Base64.decode(pngBase64)`.
Use `Base64.atob(pngBase64)` instead. `Base64.decode()` decodes to a UTF-8
string while `Base64.atob()` decodes to bytes, which is compatible with the
browser built-in `atob()` (which is absent in node.js). The same rule applies
to the opposite direction. Or even better, use `Base64.toUint8Array(pngBase64)`.

### If you really, really need an ES5 version

You can transpile it to ES5 that runs on IEs before 11. Do the following in
your shell.

```shell
$ make base64.es5.js
```

## Brief History

* Since version 3.3 it is written in TypeScript. Now `base64.mjs` is compiled
  from `base64.ts` then `base64.js` is generated from `base64.mjs`.
* Since version 3.7 `base64.js` is ES5-compatible again (hence IE11-compatible).
* Since 3.0 `js-base64` switch to ES2015 module so it is no longer compatible with legacy browsers like IE (see above) # AssemblyScript Loader A convenient loader for [AssemblyScript](https://assemblyscript.org) modules. Demangles module exports to a friendly object structure compatible with TypeScript definitions and provides useful utility to read/write data from/to memory. [Documentation](https://assemblyscript.org/loader.html) # axios [![npm version](https://img.shields.io/npm/v/axios.svg?style=flat-square)](https://www.npmjs.org/package/axios) [![build status](https://img.shields.io/travis/axios/axios/master.svg?style=flat-square)](https://travis-ci.org/axios/axios) [![code coverage](https://img.shields.io/coveralls/mzabriskie/axios.svg?style=flat-square)](https://coveralls.io/r/mzabriskie/axios) [![install size](https://packagephobia.now.sh/badge?p=axios)](https://packagephobia.now.sh/result?p=axios) [![npm downloads](https://img.shields.io/npm/dm/axios.svg?style=flat-square)](http://npm-stat.com/charts.html?package=axios) [![gitter chat](https://img.shields.io/gitter/room/mzabriskie/axios.svg?style=flat-square)](https://gitter.im/mzabriskie/axios) [![code helpers](https://www.codetriage.com/axios/axios/badges/users.svg)](https://www.codetriage.com/axios/axios) Promise based HTTP client for the browser and node.js ## Features - Make [XMLHttpRequests](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) from the browser - Make [http](http://nodejs.org/api/http.html) requests from node.js - Supports the [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) API - Intercept request and response - Transform request and response data - Cancel requests - Automatic transforms for JSON data - Client side support for protecting against [XSRF](http://en.wikipedia.org/wiki/Cross-site_request_forgery) ## Browser Support ![Chrome](https://raw.github.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.github.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![Safari](https://raw.github.com/alrra/browser-logos/master/src/safari/safari_48x48.png) | ![Opera](https://raw.github.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Edge](https://raw.github.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![IE](https://raw.github.com/alrra/browser-logos/master/src/archive/internet-explorer_9-11/internet-explorer_9-11_48x48.png) | --- | --- | --- | --- | --- | --- | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | 11 ✔ | [![Browser Matrix](https://saucelabs.com/open_sauce/build_matrix/axios.svg)](https://saucelabs.com/u/axios) ## Installing Using npm: ```bash $ npm install axios ``` Using bower: ```bash $ bower install axios ``` Using yarn: ```bash $ yarn add axios ``` Using cdn: ```html <script src="https://unpkg.com/axios/dist/axios.min.js"></script> ``` ## Example ### note: CommonJS usage In order to gain the TypeScript typings (for intellisense / autocomplete) while using CommonJS imports with `require()` use the following approach: ```js const axios = require('axios').default; // axios.<method> will now provide autocomplete and parameter typings ``` Performing a `GET` request ```js const axios = require('axios'); // Make a request for a user with a given ID axios.get('/user?ID=12345') .then(function (response) { // handle success console.log(response); }) .catch(function (error) { // handle error console.log(error); }) .finally(function () { // always executed }); // 
Optionally the request above could also be done as axios.get('/user', { params: { ID: 12345 } }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }) .finally(function () { // always executed }); // Want to use async/await? Add the `async` keyword to your outer function/method. async function getUser() { try { const response = await axios.get('/user?ID=12345'); console.log(response); } catch (error) { console.error(error); } } ``` > **NOTE:** `async/await` is part of ECMAScript 2017 and is not supported in Internet > Explorer and older browsers, so use with caution. Performing a `POST` request ```js axios.post('/user', { firstName: 'Fred', lastName: 'Flintstone' }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }); ``` Performing multiple concurrent requests ```js function getUserAccount() { return axios.get('/user/12345'); } function getUserPermissions() { return axios.get('/user/12345/permissions'); } axios.all([getUserAccount(), getUserPermissions()]) .then(axios.spread(function (acct, perms) { // Both requests are now complete })); ``` ## axios API Requests can be made by passing the relevant config to `axios`. ##### axios(config) ```js // Send a POST request axios({ method: 'post', url: '/user/12345', data: { firstName: 'Fred', lastName: 'Flintstone' } }); ``` ```js // GET request for remote image axios({ method: 'get', url: 'http://bit.ly/2mTM3nY', responseType: 'stream' }) .then(function (response) { response.data.pipe(fs.createWriteStream('ada_lovelace.jpg')) }); ``` ##### axios(url[, config]) ```js // Send a GET request (default method) axios('/user/12345'); ``` ### Request method aliases For convenience aliases have been provided for all supported request methods. ##### axios.request(config) ##### axios.get(url[, config]) ##### axios.delete(url[, config]) ##### axios.head(url[, config]) ##### axios.options(url[, config]) ##### axios.post(url[, data[, config]]) ##### axios.put(url[, data[, config]]) ##### axios.patch(url[, data[, config]]) ###### NOTE When using the alias methods `url`, `method`, and `data` properties don't need to be specified in config. ### Concurrency Helper functions for dealing with concurrent requests. ##### axios.all(iterable) ##### axios.spread(callback) ### Creating an instance You can create a new instance of axios with a custom config. ##### axios.create([config]) ```js const instance = axios.create({ baseURL: 'https://some-domain.com/api/', timeout: 1000, headers: {'X-Custom-Header': 'foobar'} }); ``` ### Instance methods The available instance methods are listed below. The specified config will be merged with the instance config. ##### axios#request(config) ##### axios#get(url[, config]) ##### axios#delete(url[, config]) ##### axios#head(url[, config]) ##### axios#options(url[, config]) ##### axios#post(url[, data[, config]]) ##### axios#put(url[, data[, config]]) ##### axios#patch(url[, data[, config]]) ##### axios#getUri([config]) ## Request Config These are the available config options for making requests. Only the `url` is required. Requests will default to `GET` if `method` is not specified. ```js { // `url` is the server URL that will be used for the request url: '/user', // `method` is the request method to be used when making the request method: 'get', // default // `baseURL` will be prepended to `url` unless `url` is absolute. 
// It can be convenient to set `baseURL` for an instance of axios to pass relative URLs // to methods of that instance. baseURL: 'https://some-domain.com/api/', // `transformRequest` allows changes to the request data before it is sent to the server // This is only applicable for request methods 'PUT', 'POST', 'PATCH' and 'DELETE' // The last function in the array must return a string or an instance of Buffer, ArrayBuffer, // FormData or Stream // You may modify the headers object. transformRequest: [function (data, headers) { // Do whatever you want to transform the data return data; }], // `transformResponse` allows changes to the response data to be made before // it is passed to then/catch transformResponse: [function (data) { // Do whatever you want to transform the data return data; }], // `headers` are custom headers to be sent headers: {'X-Requested-With': 'XMLHttpRequest'}, // `params` are the URL parameters to be sent with the request // Must be a plain object or a URLSearchParams object params: { ID: 12345 }, // `paramsSerializer` is an optional function in charge of serializing `params` // (e.g. https://www.npmjs.com/package/qs, http://api.jquery.com/jquery.param/) paramsSerializer: function (params) { return Qs.stringify(params, {arrayFormat: 'brackets'}) }, // `data` is the data to be sent as the request body // Only applicable for request methods 'PUT', 'POST', and 'PATCH' // When no `transformRequest` is set, must be of one of the following types: // - string, plain object, ArrayBuffer, ArrayBufferView, URLSearchParams // - Browser only: FormData, File, Blob // - Node only: Stream, Buffer data: { firstName: 'Fred' }, // syntax alternative to send data into the body // method post // only the value is sent, not the key data: 'Country=Brasil&City=Belo Horizonte', // `timeout` specifies the number of milliseconds before the request times out. // If the request takes longer than `timeout`, the request will be aborted. timeout: 1000, // default is `0` (no timeout) // `withCredentials` indicates whether or not cross-site Access-Control requests // should be made using credentials withCredentials: false, // default // `adapter` allows custom handling of requests which makes testing easier. // Return a promise and supply a valid response (see lib/adapters/README.md). adapter: function (config) { /* ... */ }, // `auth` indicates that HTTP Basic auth should be used, and supplies credentials. // This will set an `Authorization` header, overwriting any existing // `Authorization` custom headers you have set using `headers`. // Please note that only HTTP Basic auth is configurable through this parameter. // For Bearer tokens and such, use `Authorization` custom headers instead. 
auth: { username: 'janedoe', password: 's00pers3cret' }, // `responseType` indicates the type of data that the server will respond with // options are: 'arraybuffer', 'document', 'json', 'text', 'stream' // browser only: 'blob' responseType: 'json', // default // `responseEncoding` indicates encoding to use for decoding responses // Note: Ignored for `responseType` of 'stream' or client-side requests responseEncoding: 'utf8', // default // `xsrfCookieName` is the name of the cookie to use as a value for xsrf token xsrfCookieName: 'XSRF-TOKEN', // default // `xsrfHeaderName` is the name of the http header that carries the xsrf token value xsrfHeaderName: 'X-XSRF-TOKEN', // default // `onUploadProgress` allows handling of progress events for uploads onUploadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `onDownloadProgress` allows handling of progress events for downloads onDownloadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `maxContentLength` defines the max size of the http response content in bytes allowed maxContentLength: 2000, // `validateStatus` defines whether to resolve or reject the promise for a given // HTTP response status code. If `validateStatus` returns `true` (or is set to `null` // or `undefined`), the promise will be resolved; otherwise, the promise will be // rejected. validateStatus: function (status) { return status >= 200 && status < 300; // default }, // `maxRedirects` defines the maximum number of redirects to follow in node.js. // If set to 0, no redirects will be followed. maxRedirects: 5, // default // `socketPath` defines a UNIX Socket to be used in node.js. // e.g. '/var/run/docker.sock' to send requests to the docker daemon. // Only either `socketPath` or `proxy` can be specified. // If both are specified, `socketPath` is used. socketPath: null, // default // `httpAgent` and `httpsAgent` define a custom agent to be used when performing http // and https requests, respectively, in node.js. This allows options to be added like // `keepAlive` that are not enabled by default. httpAgent: new http.Agent({ keepAlive: true }), httpsAgent: new https.Agent({ keepAlive: true }), // 'proxy' defines the hostname and port of the proxy server. // You can also define your proxy using the conventional `http_proxy` and // `https_proxy` environment variables. If you are using environment variables // for your proxy configuration, you can also define a `no_proxy` environment // variable as a comma-separated list of domains that should not be proxied. // Use `false` to disable proxies, ignoring environment variables. // `auth` indicates that HTTP Basic auth should be used to connect to the proxy, and // supplies credentials. // This will set an `Proxy-Authorization` header, overwriting any existing // `Proxy-Authorization` custom headers you have set using `headers`. proxy: { host: '127.0.0.1', port: 9000, auth: { username: 'mikeymike', password: 'rapunz3l' } }, // `cancelToken` specifies a cancel token that can be used to cancel the request // (see Cancellation section below for details) cancelToken: new CancelToken(function (cancel) { }) } ``` ## Response Schema The response for a request contains the following information. 
```js { // `data` is the response that was provided by the server data: {}, // `status` is the HTTP status code from the server response status: 200, // `statusText` is the HTTP status message from the server response statusText: 'OK', // `headers` the headers that the server responded with // All header names are lower cased headers: {}, // `config` is the config that was provided to `axios` for the request config: {}, // `request` is the request that generated this response // It is the last ClientRequest instance in node.js (in redirects) // and an XMLHttpRequest instance in the browser request: {} } ``` When using `then`, you will receive the response as follows: ```js axios.get('/user/12345') .then(function (response) { console.log(response.data); console.log(response.status); console.log(response.statusText); console.log(response.headers); console.log(response.config); }); ``` When using `catch`, or passing a [rejection callback](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/then) as second parameter of `then`, the response will be available through the `error` object as explained in the [Handling Errors](#handling-errors) section. ## Config Defaults You can specify config defaults that will be applied to every request. ### Global axios defaults ```js axios.defaults.baseURL = 'https://api.example.com'; axios.defaults.headers.common['Authorization'] = AUTH_TOKEN; axios.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded'; ``` ### Custom instance defaults ```js // Set config defaults when creating the instance const instance = axios.create({ baseURL: 'https://api.example.com' }); // Alter defaults after instance has been created instance.defaults.headers.common['Authorization'] = AUTH_TOKEN; ``` ### Config order of precedence Config will be merged with an order of precedence. The order is library defaults found in [lib/defaults.js](https://github.com/axios/axios/blob/master/lib/defaults.js#L28), then `defaults` property of the instance, and finally `config` argument for the request. The latter will take precedence over the former. Here's an example. ```js // Create an instance using the config defaults provided by the library // At this point the timeout config value is `0` as is the default for the library const instance = axios.create(); // Override timeout default for the library // Now all requests using this instance will wait 2.5 seconds before timing out instance.defaults.timeout = 2500; // Override timeout for this request as it's known to take a long time instance.get('/longRequest', { timeout: 5000 }); ``` ## Interceptors You can intercept requests or responses before they are handled by `then` or `catch`. ```js // Add a request interceptor axios.interceptors.request.use(function (config) { // Do something before request is sent return config; }, function (error) { // Do something with request error return Promise.reject(error); }); // Add a response interceptor axios.interceptors.response.use(function (response) { // Any status code that lie within the range of 2xx cause this function to trigger // Do something with response data return response; }, function (error) { // Any status codes that falls outside the range of 2xx cause this function to trigger // Do something with response error return Promise.reject(error); }); ``` If you need to remove an interceptor later you can. 
```js const myInterceptor = axios.interceptors.request.use(function () {/*...*/}); axios.interceptors.request.eject(myInterceptor); ``` You can add interceptors to a custom instance of axios. ```js const instance = axios.create(); instance.interceptors.request.use(function () {/*...*/}); ``` ## Handling Errors ```js axios.get('/user/12345') .catch(function (error) { if (error.response) { // The request was made and the server responded with a status code // that falls out of the range of 2xx console.log(error.response.data); console.log(error.response.status); console.log(error.response.headers); } else if (error.request) { // The request was made but no response was received // `error.request` is an instance of XMLHttpRequest in the browser and an instance of // http.ClientRequest in node.js console.log(error.request); } else { // Something happened in setting up the request that triggered an Error console.log('Error', error.message); } console.log(error.config); }); ``` Using the `validateStatus` config option, you can define HTTP code(s) that should throw an error. ```js axios.get('/user/12345', { validateStatus: function (status) { return status < 500; // Reject only if the status code is greater than or equal to 500 } }) ``` Using `toJSON` you get an object with more information about the HTTP error. ```js axios.get('/user/12345') .catch(function (error) { console.log(error.toJSON()); }); ``` ## Cancellation You can cancel a request using a *cancel token*. > The axios cancel token API is based on the withdrawn [cancelable promises proposal](https://github.com/tc39/proposal-cancelable-promises). You can create a cancel token using the `CancelToken.source` factory as shown below: ```js const CancelToken = axios.CancelToken; const source = CancelToken.source(); axios.get('/user/12345', { cancelToken: source.token }).catch(function (thrown) { if (axios.isCancel(thrown)) { console.log('Request canceled', thrown.message); } else { // handle error } }); axios.post('/user/12345', { name: 'new name' }, { cancelToken: source.token }) // cancel the request (the message parameter is optional) source.cancel('Operation canceled by the user.'); ``` You can also create a cancel token by passing an executor function to the `CancelToken` constructor: ```js const CancelToken = axios.CancelToken; let cancel; axios.get('/user/12345', { cancelToken: new CancelToken(function executor(c) { // An executor function receives a cancel function as a parameter cancel = c; }) }); // cancel the request cancel(); ``` > Note: you can cancel several requests with the same cancel token. ## Using application/x-www-form-urlencoded format By default, axios serializes JavaScript objects to `JSON`. To send data in the `application/x-www-form-urlencoded` format instead, you can use one of the following options. ### Browser In a browser, you can use the [`URLSearchParams`](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams) API as follows: ```js const params = new URLSearchParams(); params.append('param1', 'value1'); params.append('param2', 'value2'); axios.post('/foo', params); ``` > Note that `URLSearchParams` is not supported by all browsers (see [caniuse.com](http://www.caniuse.com/#feat=urlsearchparams)), but there is a [polyfill](https://github.com/WebReflection/url-search-params) available (make sure to polyfill the global environment). 
Alternatively, you can encode data using the [`qs`](https://github.com/ljharb/qs) library: ```js const qs = require('qs'); axios.post('/foo', qs.stringify({ 'bar': 123 })); ``` Or in another way (ES6), ```js import qs from 'qs'; const data = { 'bar': 123 }; const options = { method: 'POST', headers: { 'content-type': 'application/x-www-form-urlencoded' }, data: qs.stringify(data), url, }; axios(options); ``` ### Node.js In node.js, you can use the [`querystring`](https://nodejs.org/api/querystring.html) module as follows: ```js const querystring = require('querystring'); axios.post('http://something.com/', querystring.stringify({ foo: 'bar' })); ``` You can also use the [`qs`](https://github.com/ljharb/qs) library. ###### NOTE The `qs` library is preferable if you need to stringify nested objects, as the `querystring` method has known issues with that use case (https://github.com/nodejs/node-v0.x-archive/issues/1665). ## Semver Until axios reaches a `1.0` release, breaking changes will be released with a new minor version. For example `0.5.1`, and `0.5.4` will have the same API, but `0.6.0` will have breaking changes. ## Promises axios depends on a native ES6 Promise implementation to be [supported](http://caniuse.com/promises). If your environment doesn't support ES6 Promises, you can [polyfill](https://github.com/jakearchibald/es6-promise). ## TypeScript axios includes [TypeScript](http://typescriptlang.org) definitions. ```typescript import axios from 'axios'; axios.get('/user?ID=12345'); ``` ## Resources * [Changelog](https://github.com/axios/axios/blob/master/CHANGELOG.md) * [Upgrade Guide](https://github.com/axios/axios/blob/master/UPGRADE_GUIDE.md) * [Ecosystem](https://github.com/axios/axios/blob/master/ECOSYSTEM.md) * [Contributing Guide](https://github.com/axios/axios/blob/master/CONTRIBUTING.md) * [Code of Conduct](https://github.com/axios/axios/blob/master/CODE_OF_CONDUCT.md) ## Credits axios is heavily inspired by the [$http service](https://docs.angularjs.org/api/ng/service/$http) provided in [Angular](https://angularjs.org/). Ultimately axios is an effort to provide a standalone `$http`-like service for use outside of Angular. ## License [MIT](LICENSE) # wrappy Callback wrapping utility ## USAGE ```javascript var wrappy = require("wrappy") // var wrapper = wrappy(wrapperFunction) // make sure a cb is called only once // See also: http://npm.im/once for this specific use case var once = wrappy(function (cb) { var called = false return function () { if (called) return called = true return cb.apply(this, arguments) } }) function printBoo () { console.log('boo') } // has some rando property printBoo.iAmBooPrinter = true var onlyPrintOnce = once(printBoo) onlyPrintOnce() // prints 'boo' onlyPrintOnce() // does nothing // random property is retained! assert.equal(onlyPrintOnce.iAmBooPrinter, true) ``` # randexp.js randexp will generate a random string that matches a given RegExp Javascript object. 
[![Build Status](https://secure.travis-ci.org/fent/randexp.js.svg)](http://travis-ci.org/fent/randexp.js) [![Dependency Status](https://david-dm.org/fent/randexp.js.svg)](https://david-dm.org/fent/randexp.js) [![codecov](https://codecov.io/gh/fent/randexp.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/randexp.js)

# Usage

```js
var RandExp = require('randexp');

// supports grouping and piping
new RandExp(/hello+ (world|to you)/).gen();
// => hellooooooooooooooooooo world

// sets and ranges and references
new RandExp(/<([a-z]\w{0,20})>foo<\1>/).gen();
// => <m5xhdg>foo<m5xhdg>

// wildcard
new RandExp(/random stuff: .+/).gen();
// => random stuff: l3m;Hf9XYbI [YPaxV>U*4-_F!WXQh9>;rH3i l!8.zoh?[utt1OWFQrE ^~8zEQm]~tK

// ignore case
new RandExp(/xxx xtreme dragon warrior xxx/i).gen();
// => xxx xtReME dRAGON warRiOR xXX

// dynamic regexp shortcut
new RandExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i');
// is the same as
new RandExp(new RegExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i'));
```

If you're only going to use `gen()` once with a regexp and want slightly shorter syntax for it:

```js
var randexp = require('randexp').randexp;

randexp(/[1-6]/); // 4
randexp('great|good( job)?|excellent'); // great
```

If you miss the old syntax:

```js
require('randexp').sugar();

/yes|no|maybe|i don't know/.gen(); // maybe
```

# Motivation

Regular expressions are used in every language, and every programmer is familiar with them. Regex can be used to easily express complex strings. What better way to generate a random string than with a language you can use to express the string you want?

Thanks to [String-Random](http://search.cpan.org/~steve/String-Random-0.22/lib/String/Random.pm) for giving me the idea to make this in the first place and [randexp](https://github.com/benburkert/randexp) for the sweet `.gen()` syntax.

# Default Range

The default generated character range includes printable ASCII. To add or remove characters, a `defaultRange` attribute is exposed; you can `subtract(from, to)` and `add(from, to)`:

```js
var randexp = new RandExp(/random stuff: .+/);
randexp.defaultRange.subtract(32, 126);
randexp.defaultRange.add(0, 65535);
randexp.gen();
// => random stuff: 湐箻ໜ䫴␩⶛㳸長邓蕲뤀쑡篷皇硬剈궦佔칗븛뀃匫鴔事좍ﯣ⭼ꝏ䭍詳蒂䥂뽭
```

# Custom PRNG

The default randomness is provided by `Math.random()`. If you need to use a seedable or cryptographic PRNG, you can override `RandExp.prototype.randInt` or `randexp.randInt` (where `randexp` is an instance of `RandExp`). `randInt(from, to)` accepts an inclusive range and returns a randomly selected number within that range.

# Infinite Repetitionals

Repetitional tokens such as `*`, `+`, and `{3,}` have an infinite max range. In this case, randexp looks at the token's min and adds 100 to it to get a usable max value. If you want to use an integer other than 100, you can change the `max` property on `RandExp.prototype` or on the RandExp instance.

```js
var randexp = new RandExp(/no{1,}/);
randexp.max = 1000000;
```

With `RandExp.sugar()`:

```js
var regexp = /(hi)*/;
regexp.max = 1000000;
```

# Bad Regular Expressions

There are some regular expressions which can never match any string.

* Ones with badly placed positionals such as `/a^/` and `/$c/m`. Randexp will ignore positional tokens.

* Back references to non-existing groups like `/(a)\1\2/`. Randexp will ignore those references, returning an empty string for them. If the group exists only after the reference is used, such as in `/\1 (hey)/`, it will be ignored as well.
* Custom negated character sets with two sets inside that cancel each other out. Example: `/[^\w\W]/`. If you give this to randexp, it will return an empty string for this set since it can't match anything. # Projects based on randexp.js ## JSON-Schema Faker Use generators to populate JSON Schema samples. See: [jsf on github](https://github.com/json-schema-faker/json-schema-faker/) and [jsf demo page](http://json-schema-faker.js.org/). # Install ### Node.js npm install randexp ### Browser Download the [minified version](https://github.com/fent/randexp.js/releases) from the latest release. # Tests Tests are written with [mocha](https://mochajs.org) ```bash npm test ``` # License MIT # word-wrap [![NPM version](https://img.shields.io/npm/v/word-wrap.svg?style=flat)](https://www.npmjs.com/package/word-wrap) [![NPM monthly downloads](https://img.shields.io/npm/dm/word-wrap.svg?style=flat)](https://npmjs.org/package/word-wrap) [![NPM total downloads](https://img.shields.io/npm/dt/word-wrap.svg?style=flat)](https://npmjs.org/package/word-wrap) [![Linux Build Status](https://img.shields.io/travis/jonschlinkert/word-wrap.svg?style=flat&label=Travis)](https://travis-ci.org/jonschlinkert/word-wrap) > Wrap words to a specified length. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save word-wrap ``` ## Usage ```js var wrap = require('word-wrap'); wrap('Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.'); ``` Results in: ``` Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. ``` ## Options ![image](https://cloud.githubusercontent.com/assets/383994/6543728/7a381c08-c4f6-11e4-8b7d-b6ba197569c9.png) ### options.width Type: `Number` Default: `50` The width of the text before wrapping to a new line. **Example:** ```js wrap(str, {width: 60}); ``` ### options.indent Type: `String` Default: `` (two spaces) The string to use at the beginning of each line. **Example:** ```js wrap(str, {indent: ' '}); ``` ### options.newline Type: `String` Default: `\n` The string to use at the end of each line. **Example:** ```js wrap(str, {newline: '\n\n'}); ``` ### options.escape Type: `function` Default: `function(str){return str;}` An escape function to run on each line after splitting them. **Example:** ```js var xmlescape = require('xml-escape'); wrap(str, { escape: function(string){ return xmlescape(string); } }); ``` ### options.trim Type: `Boolean` Default: `false` Trim trailing whitespace from the returned string. This option is included since `.trim()` would also strip the leading indentation from the first line. **Example:** ```js wrap(str, {trim: true}); ``` ### options.cut Type: `Boolean` Default: `false` Break a word between any two letters when the word is longer than the specified width. **Example:** ```js wrap(str, {cut: true}); ``` ## About ### Related projects * [common-words](https://www.npmjs.com/package/common-words): Updated list (JSON) of the 100 most common words in the English language. Useful for… [more](https://github.com/jonschlinkert/common-words) | [homepage](https://github.com/jonschlinkert/common-words "Updated list (JSON) of the 100 most common words in the English language. 
Useful for excluding these words from arrays.") * [shuffle-words](https://www.npmjs.com/package/shuffle-words): Shuffle the words in a string and optionally the letters in each word using the… [more](https://github.com/jonschlinkert/shuffle-words) | [homepage](https://github.com/jonschlinkert/shuffle-words "Shuffle the words in a string and optionally the letters in each word using the Fisher-Yates algorithm. Useful for creating test fixtures, benchmarking samples, etc.") * [unique-words](https://www.npmjs.com/package/unique-words): Return the unique words in a string or array. | [homepage](https://github.com/jonschlinkert/unique-words "Return the unique words in a string or array.") * [wordcount](https://www.npmjs.com/package/wordcount): Count the words in a string. Support for english, CJK and Cyrillic. | [homepage](https://github.com/jonschlinkert/wordcount "Count the words in a string. Support for english, CJK and Cyrillic.") ### Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Contributors | **Commits** | **Contributor** | | --- | --- | | 43 | [jonschlinkert](https://github.com/jonschlinkert) | | 2 | [lordvlad](https://github.com/lordvlad) | | 2 | [hildjj](https://github.com/hildjj) | | 1 | [danilosampaio](https://github.com/danilosampaio) | | 1 | [2fd](https://github.com/2fd) | | 1 | [toddself](https://github.com/toddself) | | 1 | [wolfgang42](https://github.com/wolfgang42) | | 1 | [zachhale](https://github.com/zachhale) | ### Building docs _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` ### Running tests Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` ### Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](https://twitter.com/jonschlinkert) ### License Copyright © 2017, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.6.0, on June 02, 2017._ semver(1) -- The semantic versioner for npm =========================================== ## Install ```bash npm install semver ```` ## Usage As a node module: ```js const semver = require('semver') semver.valid('1.2.3') // '1.2.3' semver.valid('a.b.c') // null semver.clean(' =v1.2.3 ') // '1.2.3' semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true semver.gt('1.2.3', '9.8.7') // false semver.lt('1.2.3', '9.8.7') // true semver.minVersion('>=1.0.0') // '1.0.0' semver.valid(semver.coerce('v2')) // '2.0.0' semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7' ``` You can also just load the module for the function that you care about, if you'd like to minimize your footprint. 
```js // load the whole API at once in a single object const semver = require('semver') // or just load the bits you need // all of them listed here, just pick and choose what you want // classes const SemVer = require('semver/classes/semver') const Comparator = require('semver/classes/comparator') const Range = require('semver/classes/range') // functions for working with versions const semverParse = require('semver/functions/parse') const semverValid = require('semver/functions/valid') const semverClean = require('semver/functions/clean') const semverInc = require('semver/functions/inc') const semverDiff = require('semver/functions/diff') const semverMajor = require('semver/functions/major') const semverMinor = require('semver/functions/minor') const semverPatch = require('semver/functions/patch') const semverPrerelease = require('semver/functions/prerelease') const semverCompare = require('semver/functions/compare') const semverRcompare = require('semver/functions/rcompare') const semverCompareLoose = require('semver/functions/compare-loose') const semverCompareBuild = require('semver/functions/compare-build') const semverSort = require('semver/functions/sort') const semverRsort = require('semver/functions/rsort') // low-level comparators between versions const semverGt = require('semver/functions/gt') const semverLt = require('semver/functions/lt') const semverEq = require('semver/functions/eq') const semverNeq = require('semver/functions/neq') const semverGte = require('semver/functions/gte') const semverLte = require('semver/functions/lte') const semverCmp = require('semver/functions/cmp') const semverCoerce = require('semver/functions/coerce') // working with ranges const semverSatisfies = require('semver/functions/satisfies') const semverMaxSatisfying = require('semver/ranges/max-satisfying') const semverMinSatisfying = require('semver/ranges/min-satisfying') const semverToComparators = require('semver/ranges/to-comparators') const semverMinVersion = require('semver/ranges/min-version') const semverValidRange = require('semver/ranges/valid') const semverOutside = require('semver/ranges/outside') const semverGtr = require('semver/ranges/gtr') const semverLtr = require('semver/ranges/ltr') const semverIntersects = require('semver/ranges/intersects') const simplifyRange = require('semver/ranges/simplify') const rangeSubset = require('semver/ranges/subset') ``` As a command-line utility: ``` $ semver -h A JavaScript implementation of the https://semver.org/ specification Copyright Isaac Z. Schlueter Usage: semver [options] <version> [<version> [...]] Prints valid versions sorted by SemVer precedence Options: -r --range <range> Print versions that match the specified range. -i --increment [<level>] Increment a version by the specified level. Level can be one of: major, minor, patch, premajor, preminor, prepatch, or prerelease. Default level is 'patch'. Only one version may be specified. --preid <identifier> Identifier to be used to prefix premajor, preminor, prepatch or prerelease version increments. -l --loose Interpret versions and ranges loosely -p --include-prerelease Always include prerelease versions in range matching -c --coerce Coerce a string into SemVer if possible (does not imply --loose) --rtl Coerce version strings right to left --ltr Coerce version strings left to right (default) Program exits successfully if any valid version satisfies all supplied ranges, and prints all satisfying versions. If no satisfying versions are found, then exits failure. 
Versions are printed in ascending order, so supplying multiple versions to the utility will just sort them. ``` ## Versions A "version" is described by the `v2.0.0` specification found at <https://semver.org/>. A leading `"="` or `"v"` character is stripped off and ignored. ## Ranges A `version range` is a set of `comparators` which specify versions that satisfy the range. A `comparator` is composed of an `operator` and a `version`. The set of primitive `operators` is: * `<` Less than * `<=` Less than or equal to * `>` Greater than * `>=` Greater than or equal to * `=` Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included. For example, the comparator `>=1.2.7` would match the versions `1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` or `1.1.0`. Comparators can be joined by whitespace to form a `comparator set`, which is satisfied by the **intersection** of all of the comparators it includes. A range is composed of one or more comparator sets, joined by `||`. A version matches a range if and only if every comparator in at least one of the `||`-separated comparator sets is satisfied by the version. For example, the range `>=1.2.7 <1.3.0` would match the versions `1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, or `1.1.0`. The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, `1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. ### Prerelease Tags If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then it will only be allowed to satisfy comparator sets if at least one comparator with the same `[major, minor, patch]` tuple also has a prerelease tag. For example, the range `>1.2.3-alpha.3` would be allowed to match the version `1.2.3-alpha.7`, but it would *not* be satisfied by `3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater than" `1.2.3-alpha.3` according to the SemVer sort rules. The version range only accepts prerelease tags on the `1.2.3` version. The version `3.4.5` *would* satisfy the range, because it does not have a prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author's design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics. Second, a user who has opted into using a prerelease version has clearly indicated the intent to use *that specific* set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the *next* set of prerelease versions. Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the `includePrerelease` flag on the options object to any [functions](https://github.com/npm/node-semver#functions) that do range matching. 
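For example, a minimal sketch of this behavior (the versions and ranges are taken from the examples above; the flag is passed through the options argument described in the Functions section below):

```js
const semver = require('semver');

// By default, a prerelease version only satisfies a range whose comparator
// shares its [major, minor, patch] tuple and also carries a prerelease tag.
semver.satisfies('1.2.3-alpha.7', '>1.2.3-alpha.3'); // true
semver.satisfies('3.4.5-alpha.9', '>1.2.3-alpha.3'); // false
semver.satisfies('3.4.5', '>1.2.3-alpha.3');         // true

// Opting in with `includePrerelease` treats prereleases like normal versions.
semver.satisfies('3.4.5-alpha.9', '>1.2.3-alpha.3', { includePrerelease: true }); // true
```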
#### Prerelease Identifiers The method `.inc` takes an additional `identifier` string argument that will append the value of the string as a prerelease identifier: ```javascript semver.inc('1.2.3', 'prerelease', 'beta') // '1.2.4-beta.0' ``` command-line example: ```bash $ semver 1.2.3 -i prerelease --preid beta 1.2.4-beta.0 ``` Which then can be used to increment further: ```bash $ semver 1.2.4-beta.0 -i prerelease 1.2.4-beta.1 ``` ### Advanced Range Syntax Advanced range syntax desugars to primitive comparators in deterministic ways. Advanced ranges may be combined in the same way as primitive comparators using white space or `||`. #### Hyphen Ranges `X.Y.Z - A.B.C` Specifies an inclusive set. * `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes. * `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts. * `1.2.3 - 2.3` := `>=1.2.3 <2.4.0-0` * `1.2.3 - 2` := `>=1.2.3 <3.0.0-0` #### X-Ranges `1.2.x` `1.X` `1.2.*` `*` Any of `X`, `x`, or `*` may be used to "stand in" for one of the numeric values in the `[major, minor, patch]` tuple. * `*` := `>=0.0.0` (Any version satisfies) * `1.x` := `>=1.0.0 <2.0.0-0` (Matching major version) * `1.2.x` := `>=1.2.0 <1.3.0-0` (Matching major and minor versions) A partial version range is treated as an X-Range, so the special character is in fact optional. * `""` (empty string) := `*` := `>=0.0.0` * `1` := `1.x.x` := `>=1.0.0 <2.0.0-0` * `1.2` := `1.2.x` := `>=1.2.0 <1.3.0-0` #### Tilde Ranges `~1.2.3` `~1.2` `~1` Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not. * `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0-0` * `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0-0` (Same as `1.2.x`) * `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0-0` (Same as `1.x`) * `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0-0` * `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0-0` (Same as `0.2.x`) * `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0-0` (Same as `0.x`) * `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0-0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. #### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` Allows changes that do not modify the left-most non-zero element in the `[major, minor, patch]` tuple. In other words, this allows patch and minor updates for versions `1.0.0` and above, patch updates for versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. Many authors treat a `0.x` version as if the `x` were the major "breaking-change" indicator. Caret ranges are ideal when an author may make breaking changes between `0.2.4` and `0.3.0` releases, which is a common practice. However, it presumes that there will *not* be breaking changes between `0.2.4` and `0.2.5`. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices. 
* `^1.2.3` := `>=1.2.3 <2.0.0-0` * `^0.2.3` := `>=0.2.3 <0.3.0-0` * `^0.0.3` := `>=0.0.3 <0.0.4-0` * `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0-0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. * `^0.0.3-beta` := `>=0.0.3-beta <0.0.4-0` Note that prereleases in the `0.0.3` version *only* will be allowed, if they are greater than or equal to `beta`. So, `0.0.3-pr.2` would be allowed. When parsing caret ranges, a missing `patch` value desugars to the number `0`, but will allow flexibility within that value, even if the major and minor versions are both `0`. * `^1.2.x` := `>=1.2.0 <2.0.0-0` * `^0.0.x` := `>=0.0.0 <0.1.0-0` * `^0.0` := `>=0.0.0 <0.1.0-0` A missing `minor` and `patch` values will desugar to zero, but also allow flexibility within those values, even if the major version is zero. * `^1.x` := `>=1.0.0 <2.0.0-0` * `^0.x` := `>=0.0.0 <1.0.0-0` ### Range Grammar Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors: ```bnf range-set ::= range ( logical-or range ) * logical-or ::= ( ' ' ) * '||' ( ' ' ) * range ::= hyphen | simple ( ' ' simple ) * | '' hyphen ::= partial ' - ' partial simple ::= primitive | partial | tilde | caret primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )? xr ::= 'x' | 'X' | '*' | nr nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) * tilde ::= '~' partial caret ::= '^' partial qualifier ::= ( '-' pre )? ( '+' build )? pre ::= parts build ::= parts parts ::= part ( '.' part ) * part ::= nr | [-0-9A-Za-z]+ ``` ## Functions All methods and classes take a final `options` object argument. All options in this object are `false` by default. The options supported are: - `loose` Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the `options` argument is a boolean value instead of an object, it is interpreted to be the `loose` param. - `includePrerelease` Set to suppress the [default behavior](https://github.com/npm/node-semver#prerelease-tags) of excluding prerelease tagged versions from ranges unless they are explicitly opted into. Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse. * `valid(v)`: Return the parsed version, or null if it's not valid. * `inc(v, release)`: Return the version incremented by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if it's not valid * `premajor` in one call will bump the version up to the next major version and down to a prerelease of that major version. `preminor`, and `prepatch` work the same way. * If called from a non-prerelease version, the `prerelease` will work the same as `prepatch`. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it. * `prerelease(v)`: Returns an array of prerelease components, or null if none exist. Example: `prerelease('1.2.3-alpha.1') -> ['alpha', 1]` * `major(v)`: Return the major version number. * `minor(v)`: Return the minor version number. * `patch(v)`: Return the patch version number. * `intersects(r1, r2, loose)`: Return true if the two supplied ranges or comparators intersect. 
* `parse(v)`: Attempt to parse a string as a semantic version, returning either a `SemVer` object or `null`.

### Comparison

* `gt(v1, v2)`: `v1 > v2`
* `gte(v1, v2)`: `v1 >= v2`
* `lt(v1, v2)`: `v1 < v2`
* `lte(v1, v2)`: `v1 <= v2`
* `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent, even if they're not the exact same string. You already know how to compare strings.
* `neq(v1, v2)`: `v1 != v2` The opposite of `eq`.
* `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call the corresponding function above. `"==="` and `"!=="` do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.
* `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if `v2` is greater. Sorts in ascending order if passed to `Array.sort()`.
* `rcompare(v1, v2)`: The reverse of `compare`. Sorts an array of versions in descending order when passed to `Array.sort()`.
* `compareBuild(v1, v2)`: The same as `compare`, but considers `build` when two versions are equal. Sorts in ascending order if passed to `Array.sort()`.
* `diff(v1, v2)`: Returns the difference between two versions by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if the versions are the same.

### Comparators

* `intersects(comparator)`: Return true if the comparators intersect.

### Ranges

* `validRange(range)`: Return the valid range, or null if it's not valid.
* `satisfies(version, range)`: Return true if the version satisfies the range.
* `maxSatisfying(versions, range)`: Return the highest version in the list that satisfies the range, or `null` if none of them do.
* `minSatisfying(versions, range)`: Return the lowest version in the list that satisfies the range, or `null` if none of them do.
* `minVersion(range)`: Return the lowest version that can possibly match the given range.
* `gtr(version, range)`: Return `true` if the version is greater than all the versions possible in the range.
* `ltr(version, range)`: Return `true` if the version is less than all the versions possible in the range.
* `outside(version, range, hilo)`: Return true if the version is outside the bounds of the range in either the high or low direction. The `hilo` argument must be either the string `'>'` or `'<'`. (This is the function called by `gtr` and `ltr`.)
* `intersects(range)`: Return true if any of the range's comparators intersect.
* `simplifyRange(versions, range)`: Return a "simplified" range that matches the same items in the `versions` list as the range specified. Note that it does *not* guarantee that it would match the same versions in all cases, only for the set of versions provided. This is useful when generating ranges by joining together multiple versions with `||` programmatically, to provide the user with something a bit more ergonomic. If the provided range is shorter in string-length than the generated range, then that is returned.
* `subset(subRange, superRange)`: Return `true` if the `subRange` range is entirely contained by the `superRange` range.

Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, *or* satisfy a range!
For example, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9` until `2.0.0`, so the version `1.2.10` would not be greater than the range (because `2.0.1` satisfies, which is higher), nor less than the range (since `1.2.8` satisfies, which is lower), and it also does not satisfy the range. If you want to know if a version satisfies or does not satisfy a range, use the `satisfies(version, range)` function. ### Coercion * `coerce(version, options)`: Coerces a string to semver if possible This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., `1`, `1.2`, `1.2.3`) up to the max permitted length (256 characters). Longer versions are simply truncated (`4.6.3.9.2-alpha2` becomes `4.6.3`). All surrounding text is simply ignored (`v3.4 replaces v3.3.1` becomes `3.4.0`). Only text which lacks digits will fail coercion (`version one` is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (`10000000000000000.4.7.4` becomes `4.7.4`). The maximum value for any semver component is `Number.MAX_SAFE_INTEGER || (2**53 - 1)`; higher value components are invalid (`9999999999999999.4.7.4` is likely invalid). If the `options.rtl` flag is set, then `coerce` will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, `1.2.3.4` will return `2.3.4` in rtl mode, not `4.0.0`. `1.2.3/4` will return `4.0.0`, because the `4` is not a part of any other overlapping SemVer tuple. ### Clean * `clean(version)`: Clean a string to be a valid semver if possible This will return a cleaned and trimmed semver version. If the provided version is not valid a null will be returned. This does not work for ranges. ex. * `s.clean(' = v 2.1.5foo')`: `null` * `s.clean(' = v 2.1.5foo', { loose: true })`: `'2.1.5-foo'` * `s.clean(' = v 2.1.5-foo')`: `null` * `s.clean(' = v 2.1.5-foo', { loose: true })`: `'2.1.5-foo'` * `s.clean('=v2.1.5')`: `'2.1.5'` * `s.clean(' =v2.1.5')`: `2.1.5` * `s.clean(' 2.1.5 ')`: `'2.1.5'` * `s.clean('~1.0.0')`: `null` ## Exported Modules <!-- TODO: Make sure that all of these items are documented (classes aren't, eg), and then pull the module name into the documentation for that specific thing. --> You may pull in just the part of this semver utility that you need, if you are sensitive to packing and tree-shaking concerns. The main `require('semver')` export uses getter functions to lazily load the parts of the API that are used. 
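For example, a minimal sketch that loads only two of the deep entry points (the sample values are reused from the Usage and Coercion sections above):

```js
// Load just the pieces you need instead of the full 'semver' entry point.
const satisfies = require('semver/functions/satisfies');
const coerce = require('semver/functions/coerce');

satisfies('1.2.3', '1.x || >=2.5.0');       // true
coerce('v3.4 replaces v3.3.1').version;     // '3.4.0'
coerce('1.2.3.4', { rtl: true }).version;   // '2.3.4'
```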
The following modules are available: * `require('semver')` * `require('semver/classes')` * `require('semver/classes/comparator')` * `require('semver/classes/range')` * `require('semver/classes/semver')` * `require('semver/functions/clean')` * `require('semver/functions/cmp')` * `require('semver/functions/coerce')` * `require('semver/functions/compare')` * `require('semver/functions/compare-build')` * `require('semver/functions/compare-loose')` * `require('semver/functions/diff')` * `require('semver/functions/eq')` * `require('semver/functions/gt')` * `require('semver/functions/gte')` * `require('semver/functions/inc')` * `require('semver/functions/lt')` * `require('semver/functions/lte')` * `require('semver/functions/major')` * `require('semver/functions/minor')` * `require('semver/functions/neq')` * `require('semver/functions/parse')` * `require('semver/functions/patch')` * `require('semver/functions/prerelease')` * `require('semver/functions/rcompare')` * `require('semver/functions/rsort')` * `require('semver/functions/satisfies')` * `require('semver/functions/sort')` * `require('semver/functions/valid')` * `require('semver/ranges/gtr')` * `require('semver/ranges/intersects')` * `require('semver/ranges/ltr')` * `require('semver/ranges/max-satisfying')` * `require('semver/ranges/min-satisfying')` * `require('semver/ranges/min-version')` * `require('semver/ranges/outside')` * `require('semver/ranges/to-comparators')` * `require('semver/ranges/valid')` # Acorn-JSX [![Build Status](https://travis-ci.org/acornjs/acorn-jsx.svg?branch=master)](https://travis-ci.org/acornjs/acorn-jsx) [![NPM version](https://img.shields.io/npm/v/acorn-jsx.svg)](https://www.npmjs.org/package/acorn-jsx) This is plugin for [Acorn](http://marijnhaverbeke.nl/acorn/) - a tiny, fast JavaScript parser, written completely in JavaScript. It was created as an experimental alternative, faster [React.js JSX](http://facebook.github.io/react/docs/jsx-in-depth.html) parser. Later, it replaced the [official parser](https://github.com/facebookarchive/esprima) and these days is used by many prominent development tools. ## Transpiler Please note that this tool only parses source code to JSX AST, which is useful for various language tools and services. If you want to transpile your code to regular ES5-compliant JavaScript with source map, check out [Babel](https://babeljs.io/) and [Buble](https://buble.surge.sh/) transpilers which use `acorn-jsx` under the hood. ## Usage Requiring this module provides you with an Acorn plugin that you can use like this: ```javascript var acorn = require("acorn"); var jsx = require("acorn-jsx"); acorn.Parser.extend(jsx()).parse("my(<jsx/>, 'code');"); ``` Note that official spec doesn't support mix of XML namespaces and object-style access in tag names (#27) like in `<namespace:Object.Property />`, so it was deprecated in `acorn-jsx@3.0`. If you still want to opt-in to support of such constructions, you can pass the following option: ```javascript acorn.Parser.extend(jsx({ allowNamespacedObjects: true })) ``` Also, since most apps use pure React transformer, a new option was introduced that allows to prohibit namespaces completely: ```javascript acorn.Parser.extend(jsx({ allowNamespaces: false })) ``` Note that by default `allowNamespaces` is enabled for spec compliancy. ## License This plugin is issued under the [MIT license](./LICENSE). 
# json-schema-traverse Traverse JSON Schema passing each schema object to callback [![Build Status](https://travis-ci.org/epoberezkin/json-schema-traverse.svg?branch=master)](https://travis-ci.org/epoberezkin/json-schema-traverse) [![npm version](https://badge.fury.io/js/json-schema-traverse.svg)](https://www.npmjs.com/package/json-schema-traverse) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master) ## Install ``` npm install json-schema-traverse ``` ## Usage ```javascript const traverse = require('json-schema-traverse'); const schema = { properties: { foo: {type: 'string'}, bar: {type: 'integer'} } }; traverse(schema, {cb}); // cb is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // Or: traverse(schema, {cb: {pre, post}}); // pre is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // // post is called 3 times with: // 1. {type: 'string'} // 2. {type: 'integer'} // 3. root schema ``` Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved, they are passed as is. Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed. Callback is passed these parameters: - _schema_: the current schema object - _JSON pointer_: from the root schema to the current schema object - _root schema_: the schema passed to `traverse` object - _parent JSON pointer_: from the root schema to the parent schema object (see below) - _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.) - _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema - _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'` ## Traverse objects in all unknown keywords ```javascript const traverse = require('json-schema-traverse'); const schema = { mySchema: { minimum: 1, maximum: 2 } }; traverse(schema, {allKeys: true, cb}); // cb is called 2 times with: // 1. root schema // 2. mySchema ``` Without option `allKeys: true` callback will be called only with root schema. ## License [MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE) # minimatch A minimal matching utility. [![Build Status](https://secure.travis-ci.org/isaacs/minimatch.svg)](http://travis-ci.org/isaacs/minimatch) This is the matching library used internally by npm. It works by converting glob expressions into JavaScript `RegExp` objects. ## Usage ```javascript var minimatch = require("minimatch") minimatch("bar.foo", "*.foo") // true! minimatch("bar.foo", "*.bar") // false! minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy! ``` ## Features Supports these glob features: * Brace Expansion * Extended glob matching * "Globstar" `**` matching See: * `man sh` * `man bash` * `man 3 fnmatch` * `man 5 gitignore` ## Minimatch Class Create a minimatch object by instantiating the `minimatch.Minimatch` class. 
```javascript var Minimatch = require("minimatch").Minimatch var mm = new Minimatch(pattern, options) ``` ### Properties * `pattern` The original pattern the minimatch object represents. * `options` The options supplied to the constructor. * `set` A 2-dimensional array of regexp or string expressions. Each row in the array corresponds to a brace-expanded pattern. Each item in the row corresponds to a single path-part. For example, the pattern `{a,b/c}/d` would expand to a set of patterns like: [ [ a, d ] , [ b, c, d ] ] If a portion of the pattern doesn't have any "magic" in it (that is, it's something like `"foo"` rather than `fo*o?`), then it will be left as a string rather than converted to a regular expression. * `regexp` Created by the `makeRe` method. A single regular expression expressing the entire pattern. This is useful in cases where you wish to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled. * `negate` True if the pattern is negated. * `comment` True if the pattern is a comment. * `empty` True if the pattern is `""`. ### Methods * `makeRe` Generate the `regexp` member if necessary, and return it. Will return `false` if the pattern is invalid. * `match(fname)` Return true if the filename matches the pattern, or false otherwise. * `matchOne(fileArray, patternArray, partial)` Take a `/`-split filename, and match it against a single row in the `regExpSet`. This method is mainly for internal use, but is exposed so that it can be used by a glob-walker that needs to avoid excessive filesystem calls. All other methods are internal, and will be called as necessary. ### minimatch(path, pattern, options) Main export. Tests a path against the pattern using the options. ```javascript var isJS = minimatch(file, "*.js", { matchBase: true }) ``` ### minimatch.filter(pattern, options) Returns a function that tests its supplied argument, suitable for use with `Array.filter`. Example: ```javascript var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true})) ``` ### minimatch.match(list, pattern, options) Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself. ```javascript var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})) ``` ### minimatch.makeRe(pattern, options) Make a regular expression object from the pattern. ## Options All options are `false` by default. ### debug Dump a ton of stuff to stderr. ### nobrace Do not expand `{a,b}` and `{1..3}` brace sets. ### noglobstar Disable `**` matching against multiple folder names. ### dot Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot. Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot` is set. ### noext Disable "extglob" style patterns like `+(a|b)`. ### nocase Perform a case-insensitive match. ### nonull When a match is not found by `minimatch.match`, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches. ### matchBase If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. ### nocomment Suppress the behavior of treating `#` at the start of a pattern as a comment. ### nonegate Suppress the behavior of treating a leading `!` character as negation. 
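For example, a small sketch of how the `nocomment` and `nonegate` options change matching (the file names here are only illustrative):

```javascript
var minimatch = require("minimatch")

// Default behavior: a leading "!" negates the whole pattern.
minimatch("bar.foo", "!*.foo")  // false — "bar.foo" matches "*.foo", so the negation fails
minimatch("bar.txt", "!*.foo")  // true  — "bar.txt" does not match "*.foo"

// With nonegate, the "!" is matched as a literal character instead.
minimatch("!bar.foo", "!*.foo", { nonegate: true })  // true

// Similarly, nocomment stops a leading "#" from turning the pattern into a comment.
minimatch("#notes.txt", "#notes.txt")                       // false — pattern is treated as a comment
minimatch("#notes.txt", "#notes.txt", { nocomment: true })  // true
```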
### flipNegate Returns from negate expressions the same as if they were not negated. (Ie, true on a hit, false on a miss.) ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional. If the pattern starts with a `!` character, then it is negated. Set the `nonegate` flag to suppress this behavior, and treat leading `!` characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like `!(a|B)`. Multiple `!` characters at the start of a pattern will negate the pattern multiple times. If a pattern starts with `#`, then it is treated as a comment, and will not match anything. Use `\#` to match a literal `#` at the start of a line, or set the `nocomment` flag to suppress this behavior. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.1, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. If an escaped pattern has no matches, and the `nonull` flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, `minimatch.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. # emoji-regex [![Build status](https://travis-ci.org/mathiasbynens/emoji-regex.svg?branch=master)](https://travis-ci.org/mathiasbynens/emoji-regex) _emoji-regex_ offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard. This repository contains a script that generates this regular expression based on [the data from Unicode v12](https://github.com/mathiasbynens/unicode-12.0.0). Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard. ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install emoji-regex ``` In [Node.js](https://nodejs.org/): ```js const emojiRegex = require('emoji-regex'); // Note: because the regular expression has the global flag set, this module // exports a function that returns the regex rather than exporting the regular // expression itself, to make it impossible to (accidentally) mutate the // original regular expression. 
const text = ` \u{231A}: ⌚ default emoji presentation character (Emoji_Presentation) \u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji \u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base) \u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier `; const regex = emojiRegex(); let match; while (match = regex.exec(text)) { const emoji = match[0]; console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`); } ``` Console output: ``` Matched sequence ⌚ — code points: 1 Matched sequence ⌚ — code points: 1 Matched sequence ↔️ — code points: 2 Matched sequence ↔️ — code points: 2 Matched sequence 👩 — code points: 1 Matched sequence 👩 — code points: 1 Matched sequence 👩🏿 — code points: 2 Matched sequence 👩🏿 — code points: 2 ``` To match emoji in their textual representation as well (i.e. emoji that are not `Emoji_Presentation` symbols and that aren’t forced to render as emoji by a variation selector), `require` the other regex: ```js const emojiRegex = require('emoji-regex/text.js'); ``` Additionally, in environments which support ES2015 Unicode escapes, you may `require` ES2015-style versions of the regexes: ```js const emojiRegex = require('emoji-regex/es2015/index.js'); const emojiRegexText = require('emoji-regex/es2015/text.js'); ``` ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License _emoji-regex_ is available under the [MIT](https://mths.be/mit) license. Standard library ================ Standard library components for use with `tsc` (portable) and `asc` (assembly). Base configurations (.json) and definition files (.d.ts) are relevant to `tsc` only and not used by `asc`. <a name="table"></a> # Table > Produces a string that represents array data in a text table. [![Travis build status](http://img.shields.io/travis/gajus/table/master.svg?style=flat-square)](https://travis-ci.org/gajus/table) [![Coveralls](https://img.shields.io/coveralls/gajus/table.svg?style=flat-square)](https://coveralls.io/github/gajus/table) [![NPM version](http://img.shields.io/npm/v/table.svg?style=flat-square)](https://www.npmjs.org/package/table) [![Canonical Code Style](https://img.shields.io/badge/code%20style-canonical-blue.svg?style=flat-square)](https://github.com/gajus/canonical) [![Twitter Follow](https://img.shields.io/twitter/follow/kuizinas.svg?style=social&label=Follow)](https://twitter.com/kuizinas) * [Table](#table) * [Features](#table-features) * [Install](#table-install) * [Usage](#table-usage) * [API](#table-api) * [table](#table-api-table-1) * [createStream](#table-api-createstream) * [getBorderCharacters](#table-api-getbordercharacters) ![Demo of table displaying a list of missions to the Moon.](./.README/demo.png) <a name="table-features"></a> ## Features * Works with strings containing [fullwidth](https://en.wikipedia.org/wiki/Halfwidth_and_fullwidth_forms) characters. * Works with strings containing [ANSI escape codes](https://en.wikipedia.org/wiki/ANSI_escape_code). * Configurable border characters. * Configurable content alignment per column. * Configurable content padding per column. * Configurable column width. * Text wrapping. 
<a name="table-install"></a> ## Install ```bash npm install table ``` [![Buy Me A Coffee](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/gajus) [![Become a Patron](https://c5.patreon.com/external/logo/become_a_patron_button.png)](https://www.patreon.com/gajus) <a name="table-usage"></a> ## Usage ```js import { table } from 'table'; // Using commonjs? // const { table } = require('table'); const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; console.log(table(data)); ``` ``` ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════╧════╝ ``` <a name="table-api"></a> ## API <a name="table-api-table-1"></a> ### table Returns the string in the table format **Parameters:** - **_data_:** The data to display - Type: `any[][]` - Required: `true` - **_config_:** Table configuration - Type: `object` - Required: `false` <a name="table-api-table-1-config-border"></a> ##### config.border Type: `{ [type: string]: string }`\ Default: `honeywell` [template](#getbordercharacters) Custom borders. The keys are any of: - `topLeft`, `topRight`, `topBody`,`topJoin` - `bottomLeft`, `bottomRight`, `bottomBody`, `bottomJoin` - `joinLeft`, `joinRight`, `joinBody`, `joinJoin` - `bodyLeft`, `bodyRight`, `bodyJoin` - `headerJoin` ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { border: { topBody: `─`, topJoin: `┬`, topLeft: `┌`, topRight: `┐`, bottomBody: `─`, bottomJoin: `┴`, bottomLeft: `└`, bottomRight: `┘`, bodyLeft: `│`, bodyRight: `│`, bodyJoin: `│`, joinBody: `─`, joinLeft: `├`, joinRight: `┤`, joinJoin: `┼` } }; console.log(table(data, config)); ``` ``` ┌────┬────┬────┐ │ 0A │ 0B │ 0C │ ├────┼────┼────┤ │ 1A │ 1B │ 1C │ ├────┼────┼────┤ │ 2A │ 2B │ 2C │ └────┴────┴────┘ ``` <a name="table-api-table-1-config-drawverticalline"></a> ##### config.drawVerticalLine Type: `(lineIndex: number, columnCount: number) => boolean`\ Default: `() => true` It is used to tell whether to draw a vertical line. This callback is called for each vertical border of the table. If the table has `n` columns, then the `index` parameter is alternatively received all numbers in range `[0, n]` inclusively. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ['3A', '3B', '3C'], ['4A', '4B', '4C'] ]; const config = { drawVerticalLine: (lineIndex, columnCount) => { return lineIndex === 0 || lineIndex === columnCount; } }; console.log(table(data, config)); ``` ``` ╔════════════╗ ║ 0A 0B 0C ║ ╟────────────╢ ║ 1A 1B 1C ║ ╟────────────╢ ║ 2A 2B 2C ║ ╟────────────╢ ║ 3A 3B 3C ║ ╟────────────╢ ║ 4A 4B 4C ║ ╚════════════╝ ``` <a name="table-api-table-1-config-drawhorizontalline"></a> ##### config.drawHorizontalLine Type: `(lineIndex: number, rowCount: number) => boolean`\ Default: `() => true` It is used to tell whether to draw a horizontal line. This callback is called for each horizontal border of the table. If the table has `n` rows, then the `index` parameter is alternatively received all numbers in range `[0, n]` inclusively. If the table has `n` rows and contains the header, then the range will be `[0, n+1]` inclusively. 
```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ['3A', '3B', '3C'], ['4A', '4B', '4C'] ]; const config = { drawHorizontalLine: (lineIndex, rowCount) => { return lineIndex === 0 || lineIndex === 1 || lineIndex === rowCount - 1 || lineIndex === rowCount; } }; console.log(table(data, config)); ``` ``` ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ║ 2A │ 2B │ 2C ║ ║ 3A │ 3B │ 3C ║ ╟────┼────┼────╢ ║ 4A │ 4B │ 4C ║ ╚════╧════╧════╝ ``` <a name="table-api-table-1-config-singleline"></a> ##### config.singleLine Type: `boolean`\ Default: `false` If `true`, horizontal lines inside the table are not drawn. This option also overrides the `config.drawHorizontalLine` if specified. ```js const data = [ ['-rw-r--r--', '1', 'pandorym', 'staff', '1529', 'May 23 11:25', 'LICENSE'], ['-rw-r--r--', '1', 'pandorym', 'staff', '16327', 'May 23 11:58', 'README.md'], ['drwxr-xr-x', '76', 'pandorym', 'staff', '2432', 'May 23 12:02', 'dist'], ['drwxr-xr-x', '634', 'pandorym', 'staff', '20288', 'May 23 11:54', 'node_modules'], ['-rw-r--r--', '1,', 'pandorym', 'staff', '525688', 'May 23 11:52', 'package-lock.json'], ['-rw-r--r--@', '1', 'pandorym', 'staff', '2440', 'May 23 11:25', 'package.json'], ['drwxr-xr-x', '27', 'pandorym', 'staff', '864', 'May 23 11:25', 'src'], ['drwxr-xr-x', '20', 'pandorym', 'staff', '640', 'May 23 11:25', 'test'], ]; const config = { singleLine: true }; console.log(table(data, config)); ``` ``` ╔═════════════╤═════╤══════════╤═══════╤════════╤══════════════╤═══════════════════╗ ║ -rw-r--r-- │ 1 │ pandorym │ staff │ 1529 │ May 23 11:25 │ LICENSE ║ ║ -rw-r--r-- │ 1 │ pandorym │ staff │ 16327 │ May 23 11:58 │ README.md ║ ║ drwxr-xr-x │ 76 │ pandorym │ staff │ 2432 │ May 23 12:02 │ dist ║ ║ drwxr-xr-x │ 634 │ pandorym │ staff │ 20288 │ May 23 11:54 │ node_modules ║ ║ -rw-r--r-- │ 1, │ pandorym │ staff │ 525688 │ May 23 11:52 │ package-lock.json ║ ║ -rw-r--r--@ │ 1 │ pandorym │ staff │ 2440 │ May 23 11:25 │ package.json ║ ║ drwxr-xr-x │ 27 │ pandorym │ staff │ 864 │ May 23 11:25 │ src ║ ║ drwxr-xr-x │ 20 │ pandorym │ staff │ 640 │ May 23 11:25 │ test ║ ╚═════════════╧═════╧══════════╧═══════╧════════╧══════════════╧═══════════════════╝ ``` <a name="table-api-table-1-config-columns"></a> ##### config.columns Type: `Column[] | { [columnIndex: number]: Column }` Column specific configurations. <a name="table-api-table-1-config-columns-config-columns-width"></a> ###### config.columns[*].width Type: `number`\ Default: the maximum cell widths of the column Column width (excluding the paddings). 
```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { columns: { 1: { width: 10 } } }; console.log(table(data, config)); ``` ``` ╔════╤════════════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────────────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────────────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════════════╧════╝ ``` <a name="table-api-table-1-config-columns-config-columns-alignment"></a> ###### config.columns[*].alignment Type: `'center' | 'justify' | 'left' | 'right'`\ Default: `'left'` Cell content horizontal alignment ```js const data = [ ['0A', '0B', '0C', '0D 0E 0F'], ['1A', '1B', '1C', '1D 1E 1F'], ['2A', '2B', '2C', '2D 2E 2F'], ]; const config = { columnDefault: { width: 10, }, columns: [ { alignment: 'left' }, { alignment: 'center' }, { alignment: 'right' }, { alignment: 'justify' } ], }; console.log(table(data, config)); ``` ``` ╔════════════╤════════════╤════════════╤════════════╗ ║ 0A │ 0B │ 0C │ 0D 0E 0F ║ ╟────────────┼────────────┼────────────┼────────────╢ ║ 1A │ 1B │ 1C │ 1D 1E 1F ║ ╟────────────┼────────────┼────────────┼────────────╢ ║ 2A │ 2B │ 2C │ 2D 2E 2F ║ ╚════════════╧════════════╧════════════╧════════════╝ ``` <a name="table-api-table-1-config-columns-config-columns-verticalalignment"></a> ###### config.columns[*].verticalAlignment Type: `'top' | 'middle' | 'bottom'`\ Default: `'top'` Cell content vertical alignment ```js const data = [ ['A', 'B', 'C', 'DEF'], ]; const config = { columnDefault: { width: 1, }, columns: [ { verticalAlignment: 'top' }, { verticalAlignment: 'middle' }, { verticalAlignment: 'bottom' }, ], }; console.log(table(data, config)); ``` ``` ╔═══╤═══╤═══╤═══╗ ║ A │ │ │ D ║ ║ │ B │ │ E ║ ║ │ │ C │ F ║ ╚═══╧═══╧═══╧═══╝ ``` <a name="table-api-table-1-config-columns-config-columns-paddingleft"></a> ###### config.columns[*].paddingLeft Type: `number`\ Default: `1` The number of whitespaces used to pad the content on the left. <a name="table-api-table-1-config-columns-config-columns-paddingright"></a> ###### config.columns[*].paddingRight Type: `number`\ Default: `1` The number of whitespaces used to pad the content on the right. The `paddingLeft` and `paddingRight` options are not included in the column width. So a column with `width = 5`, `paddingLeft = 2` and `paddingRight = 2` will have a total width of `9`. ```js const data = [ ['0A', 'AABBCC', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { columns: [ { paddingLeft: 3 }, { width: 2, paddingRight: 3 } ] }; console.log(table(data, config)); ``` ``` ╔══════╤══════╤════╗ ║ 0A │ AA │ 0C ║ ║ │ BB │ ║ ║ │ CC │ ║ ╟──────┼──────┼────╢ ║ 1A │ 1B │ 1C ║ ╟──────┼──────┼────╢ ║ 2A │ 2B │ 2C ║ ╚══════╧══════╧════╝ ``` <a name="table-api-table-1-config-columns-config-columns-truncate"></a> ###### config.columns[*].truncate Type: `number`\ Default: `Infinity` The number of characters at which the content will be truncated. To handle content that overflows the container width, the `table` package implements [text wrapping](#config.columns[*].wrapWord). However, sometimes you may want to truncate content that is too long to be displayed in the table. ```js const data = [ ['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.'] ]; const config = { columns: [ { width: 20, truncate: 100 } ] }; console.log(table(data, config)); ``` ``` ╔══════════════════════╗ ║ Lorem ipsum dolor si ║ ║ t amet, consectetur ║ ║ adipiscing elit. 
Pha ║ ║ sellus pulvinar nibh ║ ║ sed mauris convall… ║ ╚══════════════════════╝ ``` <a name="table-api-table-1-config-columns-config-columns-wrapword"></a> ###### config.columns[*].wrapWord Type: `boolean`\ Default: `false` The `table` package implements auto text wrapping, i.e., text whose width is greater than the container width will be separated into multiple lines at the nearest space or one of the special characters: `\|/_.,;-`. When `wrapWord` is `false`: ```js const data = [ ['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.'] ]; const config = { columns: [ { width: 20 } ] }; console.log(table(data, config)); ``` ``` ╔══════════════════════╗ ║ Lorem ipsum dolor si ║ ║ t amet, consectetur ║ ║ adipiscing elit. Pha ║ ║ sellus pulvinar nibh ║ ║ sed mauris convallis ║ ║ dapibus. Nunc venena ║ ║ tis tempus nulla sit ║ ║ amet viverra. ║ ╚══════════════════════╝ ``` When `wrapWord` is `true`: ``` ╔══════════════════════╗ ║ Lorem ipsum dolor ║ ║ sit amet, ║ ║ consectetur ║ ║ adipiscing elit. ║ ║ Phasellus pulvinar ║ ║ nibh sed mauris ║ ║ convallis dapibus. ║ ║ Nunc venenatis ║ ║ tempus nulla sit ║ ║ amet viverra. ║ ╚══════════════════════╝ ``` <a name="table-api-table-1-config-columndefault"></a> ##### config.columnDefault Type: `Column`\ Default: `{}` The default configuration for all columns. Column-specific settings will overwrite the default values. <a name="table-api-table-1-config-header"></a> ##### config.header Type: `object` Header configuration. The header configuration inherits most of the column options, except: - `content` **{string}**: the header content. - `width`: calculated automatically based on the content width. - `alignment`: `center` by default. - `verticalAlignment`: not supported. - `config.border.topJoin` will be replaced by `config.border.topBody` for a prettier output. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ]; const config = { columnDefault: { width: 10, }, header: { alignment: 'center', content: 'THE HEADER\nThis is the table about something', }, } console.log(table(data, config)); ``` ``` ╔══════════════════════════════════════╗ ║ THE HEADER ║ ║ This is the table about something ║ ╟────────────┬────────────┬────────────╢ ║ 0A │ 0B │ 0C ║ ╟────────────┼────────────┼────────────╢ ║ 1A │ 1B │ 1C ║ ╟────────────┼────────────┼────────────╢ ║ 2A │ 2B │ 2C ║ ╚════════════╧════════════╧════════════╝ ``` <a name="table-api-createstream"></a> ### createStream The `table` package exports a `createStream` function used to draw a table and append rows. **Parameter:** - _**config:**_ the same as `table`'s, except `config.columnDefault.width` and `config.columnCount` must be provided. ```js import { createStream } from 'table'; const config = { columnDefault: { width: 50 }, columnCount: 1 }; const stream = createStream(config); setInterval(() => { stream.write([new Date()]); }, 500); ``` ![Streaming current date.](./.README/api/stream/streaming.gif) The `table` package uses ANSI escape codes to overwrite the output of the last line when a new row is printed. The underlying implementation is explained in this [Stack Overflow answer](http://stackoverflow.com/a/32938658/368691). Streaming supports all of the configuration properties and functionality of a static table (such as auto text wrapping, alignment and padding), e.g. 
```js import { createStream } from 'table'; import _ from 'lodash'; const config = { columnDefault: { width: 50 }, columnCount: 3, columns: [ { width: 10, alignment: 'right' }, { alignment: 'center' }, { width: 10 } ] }; const stream = createStream(config); let i = 0; setInterval(() => { let random; random = _.sample('abcdefghijklmnopqrstuvwxyz', _.random(1, 30)).join(''); stream.write([i++, new Date(), random]); }, 500); ``` ![Streaming random data.](./.README/api/stream/streaming-random.gif) <a name="table-api-getbordercharacters"></a> ### getBorderCharacters **Parameter:** - **_template_** - Type: `'honeywell' | 'norc' | 'ramac' | 'void'` - Required: `true` You can load one of the predefined border templates using `getBorderCharacters` function. ```js import { table, getBorderCharacters } from 'table'; const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { border: getBorderCharacters(`name of the template`) }; console.log(table(data, config)); ``` ``` # honeywell ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════╧════╝ # norc ┌────┬────┬────┐ │ 0A │ 0B │ 0C │ ├────┼────┼────┤ │ 1A │ 1B │ 1C │ ├────┼────┼────┤ │ 2A │ 2B │ 2C │ └────┴────┴────┘ # ramac (ASCII; for use in terminals that do not support Unicode characters) +----+----+----+ | 0A | 0B | 0C | |----|----|----| | 1A | 1B | 1C | |----|----|----| | 2A | 2B | 2C | +----+----+----+ # void (no borders; see "borderless table" section of the documentation) 0A 0B 0C 1A 1B 1C 2A 2B 2C ``` Raise [an issue](https://github.com/gajus/table/issues) if you'd like to contribute a new border template. <a name="table-api-getbordercharacters-borderless-table"></a> #### Borderless Table Simply using `void` border character template creates a table with a lot of unnecessary spacing. To create a more pleasant to the eye table, reset the padding and remove the joining rows, e.g. ```js const output = table(data, { border: getBorderCharacters('void'), columnDefault: { paddingLeft: 0, paddingRight: 1 }, drawHorizontalLine: () => false } ); console.log(output); ``` ``` 0A 0B 0C 1A 1B 1C 2A 2B 2C ``` The AssemblyScript Runtime ========================== The runtime provides the functionality necessary to dynamically allocate and deallocate memory of objects, arrays and buffers, as well as collect garbage that is no longer used. The current implementation is either a Two-Color Mark & Sweep (TCMS) garbage collector that must be called manually when the execution stack is unwound or an Incremental Tri-Color Mark & Sweep (ITCMS) garbage collector that is fully automated with a shadow stack, implemented on top of a Two-Level Segregate Fit (TLSF) memory manager. It's not designed to be the fastest of its kind, but intentionally focuses on simplicity and ease of integration until we can replace it with the real deal, i.e. Wasm GC. Interface --------- ### Garbage collector / `--exportRuntime` * **__new**(size: `usize`, id: `u32` = 0): `usize`<br /> Dynamically allocates a GC object of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. GC-allocated objects cannot be used with `__realloc` and `__free`. * **__pin**(ptr: `usize`): `usize`<br /> Pins the object pointed to by `ptr` externally so it and its directly reachable members and indirectly reachable objects do not become garbage collected. 
* **__unpin**(ptr: `usize`): `void`<br /> Unpins the object pointed to by `ptr` externally so it can become garbage collected. * **__collect**(): `void`<br /> Performs a full garbage collection. ### Internals * **__alloc**(size: `usize`): `usize`<br /> Dynamically allocates a chunk of memory of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. * **__realloc**(ptr: `usize`, size: `usize`): `usize`<br /> Dynamically changes the size of a chunk of memory, possibly moving it to a new address. * **__free**(ptr: `usize`): `void`<br /> Frees a dynamically allocated chunk of memory by its address. * **__renew**(ptr: `usize`, size: `usize`): `usize`<br /> Like `__realloc`, but for `__new`ed GC objects. * **__link**(parentPtr: `usize`, childPtr: `usize`, expectMultiple: `bool`): `void`<br /> Introduces a link from a parent object to a child object, i.e. upon `parent.field = child`. * **__visit**(ptr: `usize`, cookie: `u32`): `void`<br /> Concrete visitor implementation called during traversal. Cookie can be used to indicate one of multiple operations. * **__visit_globals**(cookie: `u32`): `void`<br /> Calls `__visit` on each global that is of a managed type. * **__visit_members**(ptr: `usize`, cookie: `u32`): `void`<br /> Calls `__visit` on each member of the object pointed to by `ptr`. * **__typeinfo**(id: `u32`): `RTTIFlags`<br /> Obtains the runtime type information for objects with the specified runtime id. Runtime type information is a set of flags indicating whether a type is managed, an array or similar, and what the relevant alignments are when creating an instance externally, etc. * **__instanceof**(ptr: `usize`, classId: `u32`): `bool`<br /> Tests if the object pointed to by `ptr` is an instance of the specified class id. ITCMS / `--runtime incremental` ----- The Incremental Tri-Color Mark & Sweep garbage collector maintains a separate shadow stack of managed values in the background to achieve full automation. Maintaining another stack introduces some overhead compared to the simpler Two-Color Mark & Sweep garbage collector, but makes it independent of whether the execution stack is unwound or not when it is invoked, so the garbage collector can run interleaved with the program. There are several constants one can experiment with to tweak ITCMS's automation: * `--use ASC_GC_GRANULARITY=1024`<br /> How often to interrupt. The default of 1024 means "interrupt each 1024 bytes allocated". * `--use ASC_GC_STEPFACTOR=200`<br /> How long to interrupt. The default of 200% means "run at double the speed of allocations". * `--use ASC_GC_IDLEFACTOR=200`<br /> How long to idle. The default of 200% means "wait for memory to double before kicking in again". * `--use ASC_GC_MARKCOST=1`<br /> How costly it is to mark one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. * `--use ASC_GC_SWEEPCOST=10`<br /> How costly it is to sweep one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. TCMS / `--runtime minimal` ---- If automation and low pause times aren't strictly necessary, using the Two-Color Mark & Sweep garbage collector instead by invoking collection manually at appropriate times when the execution stack is unwound may be more performant as it is simpler and has less overhead. The execution stack is typically unwound when invoking the collector externally, at a place that is not indirectly called from Wasm. 
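To make the exported interface above concrete, here is a minimal sketch of a JavaScript host driving a module compiled with `--exportRuntime` and collecting manually as described for TCMS. The file name, the import object and the use of the default class id are illustrative assumptions, not part of the runtime documentation:

```js
const fs = require('fs');

// Assumption: "module.wasm" was compiled with `--exportRuntime` (and, for the
// manual-collection flow shown here, `--runtime minimal`). The import object
// is a placeholder; real modules may require additional imports.
const imports = { env: { abort: () => { throw new Error('abort'); } } };

WebAssembly.instantiate(fs.readFileSync('module.wasm'), imports).then(({ instance }) => {
  const { __new, __pin, __unpin, __collect } = instance.exports;

  // Allocate a 12-byte managed object using the default class id.
  const ptr = __new(12, 0);

  // Pin it while the host holds on to it, so a collection cannot reclaim it.
  __pin(ptr);

  // ... call into the module's own exports, passing `ptr` ...

  // Unpin once the host is done with it, then collect at a point where the
  // Wasm execution stack is unwound (i.e. not from inside a Wasm call).
  __unpin(ptr);
  __collect();
});
```
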
STUB / `--runtime stub` ---- The stub is a maximally minimal runtime substitute, consisting of a simple and fast bump allocator with no means of freeing up memory again, except when freeing the respective most recently allocated object on top of the bump. Useful where memory is not a concern, and/or where it is sufficient to destroy the whole module including any potential garbage after execution. See also: [Garbage collection](https://www.assemblyscript.org/garbage-collection.html) # `asbuild` [![Stars](https://img.shields.io/github/stars/AssemblyScript/asbuild.svg?style=social&maxAge=3600&label=Star)](https://github.com/AssemblyScript/asbuild/stargazers) *A simple build tool for [AssemblyScript](https://assemblyscript.org) projects, similar to `cargo`, etc.* ## 🚩 Table of Contents - [Installing](#-installing) - [Usage](#-usage) - [`asb init`](#asb-init---create-an-empty-project) - [`asb test`](#asb-test---run-as-pect-tests) - [`asb fmt`](#asb-fmt---format-as-files-using-eslint) - [`asb run`](#asb-run---run-a-wasi-binary) - [`asb build`](#asb-build---compile-the-project-using-asc) - [Background](#-background) ## 🔧 Installing Install it globally ``` npm install -g asbuild ``` Or, locally as dev dependencies ``` npm install --save-dev asbuild ``` ## 💡 Usage ``` Build tool for AssemblyScript projects. Usage: asb [command] [options] Commands: asb Alias of build command, to maintain back-ward compatibility [default] asb build Compile a local package and all of its dependencies [aliases: compile, make] asb init [baseDir] Create a new AS package in an given directory asb test Run as-pect tests asb fmt [paths..] This utility formats current module using eslint. [aliases: format, lint] Options: --version Show version number [boolean] --help Show help [boolean] ``` ### `asb init` - Create an empty project ``` asb init [baseDir] Create a new AS package in an given directory Positionals: baseDir Create a sample AS project in this directory [string] [default: "."] Options: --version Show version number [boolean] --help Show help [boolean] --yes Skip the interactive prompt [boolean] [default: false] ``` ### `asb test` - Run as-pect tests ``` asb test Run as-pect tests USAGE: asb test [options] -- [aspect_options] Options: --version Show version number [boolean] --help Show help [boolean] --verbose, --vv Print out arguments passed to as-pect [boolean] [default: false] ``` ### `asb fmt` - Format AS files using ESlint ``` asb fmt [paths..] This utility formats current module using eslint. Positionals: paths Paths to format [array] [default: ["."]] Initialisation: --init Generates recommended eslint config for AS Projects [boolean] Miscellaneous --lint, --dry-run Tries to fix problems without saving the changes to the file system [boolean] [default: false] Options: --version Show version number [boolean] --help Show help ``` ### `asb run` - Run a WASI binary ``` asb run Run a WASI binary USAGE: asb run [options] [binary path] -- [binary options] Positionals: binary path to Wasm binary [string] [required] Options: --version Show version number [boolean] --help Show help [boolean] --preopen, -p comma separated list of directories to open. [default: "."] ``` ### `asb build` - Compile the project using asc ``` asb build Compile a local package and all of its dependencies USAGE: asb build [entry_file] [options] -- [asc_options] Options: --version Show version number [boolean] --help Show help [boolean] --baseDir, -d Base directory of project. 
[string] [default: "."] --config, -c Path to asconfig file [string] [default: "./asconfig.json"] --wat Output wat file to outDir [boolean] [default: false] --outDir Directory to place built binaries. Default "./build/<target>/" [string] --target Target for compilation [string] [default: "release"] --verbose Print out arguments passed to asc [boolean] [default: false] Examples: asb build Build release of 'assembly/index.ts' to build/release/packageName.wasm asb build --target release Build a release binary asb build -- --measure Pass argument to 'asc' ``` #### Defaults ##### Project structure ``` project/ package.json asconfig.json assembly/ index.ts build/ release/ project.wasm debug/ project.wasm ``` - If no entry file is passed and no `entry` field is in `asconfig.json`, `project/assembly/index.ts` is assumed. - `asconfig.json` allows for options for different compile targets, e.g. release, debug, etc. `asc` defaults to the release target. - The default build directory is `./build`, and artifacts are placed at `./build/<target>/packageName.wasm`. ##### Workspaces If a `workspaces` field is added to a top level `asconfig.json` file, then each path in the array is built and placed into the top level `outDir`. For example, `asconfig.json`: ```json { "workspaces": ["a", "b"] } ``` Running `asb` in the directory below will use the top level build directory to place all the binaries. ``` project/ package.json asconfig.json a/ asconfig.json assembly/ index.ts b/ asconfig.json assembly/ index.ts build/ release/ a.wasm b.wasm debug/ a.wasm b.wasm ``` To see an example in action, check out the [test workspace](./tests/build_test) ## 📖 Background Asbuild started as a wrapper around `asc` to provide an easier CLI interface and has now been extended to support other commands like `init`, `test` and `fmt`, just like `cargo`, to become a one-stop build tool for AS projects. ## 📜 License This library is provided under the open-source [MIT license](https://choosealicense.com/licenses/mit/). # has > Object.prototype.hasOwnProperty.call shortcut ## Installation ```sh npm install --save has ``` ## Usage ```js var has = require('has'); has({}, 'hasOwnProperty'); // false has(Object.prototype, 'hasOwnProperty'); // true ``` # Regular Expression Tokenizer Tokenizes strings that represent regular expressions. [![Build Status](https://secure.travis-ci.org/fent/ret.js.svg)](http://travis-ci.org/fent/ret.js) [![Dependency Status](https://david-dm.org/fent/ret.js.svg)](https://david-dm.org/fent/ret.js) [![codecov](https://codecov.io/gh/fent/ret.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/ret.js) # Usage ```js var ret = require('ret'); var tokens = ret(/foo|bar/.source); ``` `tokens` will contain the following object: ```js { "type": ret.types.ROOT, "options": [ [ { "type": ret.types.CHAR, "value": 102 }, { "type": ret.types.CHAR, "value": 111 }, { "type": ret.types.CHAR, "value": 111 } ], [ { "type": ret.types.CHAR, "value": 98 }, { "type": ret.types.CHAR, "value": 97 }, { "type": ret.types.CHAR, "value": 114 } ] ] } ``` # Token Types `ret.types` is a collection of the various token types exported by ret. ### ROOT Only used in the root of the regexp. This is needed due to the possibility of the root containing a pipe `|` character. In that case, the token will have an `options` key that will be an array of arrays of tokens. If not, it will contain a `stack` key that is an array of tokens. 
```js { "type": ret.types.ROOT, "stack": [token1, token2...], } ``` ```js { "type": ret.types.ROOT, "options" [ [token1, token2...], [othertoken1, othertoken2...] ... ], } ``` ### GROUP Groups contain tokens that are inside of a parenthesis. If the group begins with `?` followed by another character, it's a special type of group. A ':' tells the group not to be remembered when `exec` is used. '=' means the previous token matches only if followed by this group, and '!' means the previous token matches only if NOT followed. Like root, it can contain an `options` key instead of `stack` if there is a pipe. ```js { "type": ret.types.GROUP, "remember" true, "followedBy": false, "notFollowedBy": false, "stack": [token1, token2...], } ``` ```js { "type": ret.types.GROUP, "remember" true, "followedBy": false, "notFollowedBy": false, "options" [ [token1, token2...], [othertoken1, othertoken2...] ... ], } ``` ### POSITION `\b`, `\B`, `^`, and `$` specify positions in the regexp. ```js { "type": ret.types.POSITION, "value": "^", } ``` ### SET Contains a key `set` specifying what tokens are allowed and a key `not` specifying if the set should be negated. A set can contain other sets, ranges, and characters. ```js { "type": ret.types.SET, "set": [token1, token2...], "not": false, } ``` ### RANGE Used in set tokens to specify a character range. `from` and `to` are character codes. ```js { "type": ret.types.RANGE, "from": 97, "to": 122, } ``` ### REPETITION ```js { "type": ret.types.REPETITION, "min": 0, "max": Infinity, "value": token, } ``` ### REFERENCE References a group token. `value` is 1-9. ```js { "type": ret.types.REFERENCE, "value": 1, } ``` ### CHAR Represents a single character token. `value` is the character code. This might seem a bit cluttering instead of concatenating characters together. But since repetition tokens only repeat the last token and not the last clause like the pipe, it's simpler to do it this way. ```js { "type": ret.types.CHAR, "value": 123, } ``` ## Errors ret.js will throw errors if given a string with an invalid regular expression. All possible errors are * Invalid group. When a group with an immediate `?` character is followed by an invalid character. It can only be followed by `!`, `=`, or `:`. Example: `/(?_abc)/` * Nothing to repeat. Thrown when a repetitional token is used as the first token in the current clause, as in right in the beginning of the regexp or group, or right after a pipe. Example: `/foo|?bar/`, `/{1,3}foo|bar/`, `/foo(+bar)/` * Unmatched ). A group was not opened, but was closed. Example: `/hello)2u/` * Unterminated group. A group was not closed. Example: `/(1(23)4/` * Unterminated character class. A custom character set was not closed. Example: `/[abc/` # Install npm install ret # Tests Tests are written with [vows](http://vowsjs.org/) ```bash npm test ``` # License MIT # jsdiff [![Build Status](https://secure.travis-ci.org/kpdecker/jsdiff.svg)](http://travis-ci.org/kpdecker/jsdiff) [![Sauce Test Status](https://saucelabs.com/buildstatus/jsdiff)](https://saucelabs.com/u/jsdiff) A javascript text differencing implementation. Based on the algorithm proposed in ["An O(ND) Difference Algorithm and its Variations" (Myers, 1986)](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6927). ## Installation ```bash npm install diff --save ``` ## API * `Diff.diffChars(oldStr, newStr[, options])` - diffs two blocks of text, comparing character by character. Returns a list of change objects (See below). 
Options * `ignoreCase`: `true` to ignore casing difference. Defaults to `false`. * `Diff.diffWords(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, ignoring whitespace. Returns a list of change objects (See below). Options * `ignoreCase`: Same as in `diffChars`. * `Diff.diffWordsWithSpace(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, treating whitespace as significant. Returns a list of change objects (See below). * `Diff.diffLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line. Options * `ignoreWhitespace`: `true` to ignore leading and trailing whitespace. This is the same as `diffTrimmedLines` * `newlineIsToken`: `true` to treat newline characters as separate tokens. This allows for changes to the newline structure to occur independently of the line content and to be treated as such. In general this is the more human friendly form of `diffLines` and `diffLines` is better suited for patches and other computer friendly output. Returns a list of change objects (See below). * `Diff.diffTrimmedLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line, ignoring leading and trailing whitespace. Returns a list of change objects (See below). * `Diff.diffSentences(oldStr, newStr[, options])` - diffs two blocks of text, comparing sentence by sentence. Returns a list of change objects (See below). * `Diff.diffCss(oldStr, newStr[, options])` - diffs two blocks of text, comparing CSS tokens. Returns a list of change objects (See below). * `Diff.diffJson(oldObj, newObj[, options])` - diffs two JSON objects, comparing the fields defined on each. The order of fields, etc does not matter in this comparison. Returns a list of change objects (See below). * `Diff.diffArrays(oldArr, newArr[, options])` - diffs two arrays, comparing each item for strict equality (===). Options * `comparator`: `function(left, right)` for custom equality checks Returns a list of change objects (See below). * `Diff.createTwoFilesPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Parameters: * `oldFileName` : String to be output in the filename section of the patch for the removals * `newFileName` : String to be output in the filename section of the patch for the additions * `oldStr` : Original string value * `newStr` : New string value * `oldHeader` : Additional information to include in the old file header * `newHeader` : Additional information to include in the new file header * `options` : An object with options. Currently, only `context` is supported and describes how many lines of context should be included. * `Diff.createPatch(fileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Just like Diff.createTwoFilesPatch, but with oldFileName being equal to newFileName. * `Diff.structuredPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options)` - returns an object with an array of hunk objects. This method is similar to createTwoFilesPatch, but returns a data structure suitable for further processing. Parameters are the same as createTwoFilesPatch. 
The data structure returned may look like this: ```js { oldFileName: 'oldfile', newFileName: 'newfile', oldHeader: 'header1', newHeader: 'header2', hunks: [{ oldStart: 1, oldLines: 3, newStart: 1, newLines: 3, lines: [' line2', ' line3', '-line4', '+line5', '\\ No newline at end of file'], }] } ``` * `Diff.applyPatch(source, patch[, options])` - applies a unified diff patch. Returns a string containing the new version of the provided data. `patch` may be a string diff or the output from the `parsePatch` or `structuredPatch` methods. The optional `options` object may have the following keys: - `fuzzFactor`: Number of lines that are allowed to differ before rejecting a patch. Defaults to 0. - `compareLine(lineNumber, line, operation, patchContent)`: Callback used to compare two given lines to determine if they should be considered equal when patching. Defaults to strict equality but may be overridden to provide fuzzier comparison. Should return false if the lines should be rejected. * `Diff.applyPatches(patch, options)` - applies one or more patches. This method will iterate over the contents of the patch and apply it to data provided through callbacks. The general flow for each patch index is: - `options.loadFile(index, callback)` is called. The caller should then load the contents of the file and then pass that to the `callback(err, data)` callback. Passing an `err` will terminate further patch execution. - `options.patched(index, content, callback)` is called once the patch has been applied. `content` will be the return value from `applyPatch`. When it's ready, the caller should call the `callback(err)` callback. Passing an `err` will terminate further patch execution. Once all patches have been applied or an error occurs, the `options.complete(err)` callback is made. * `Diff.parsePatch(diffStr)` - Parses a patch into structured data. Returns a JSON object representation of a patch, suitable for use with the `applyPatch` method. This parses to the same structure returned by `Diff.structuredPatch`. * `convertChangesToXML(changes)` - converts a list of changes to a serialized XML format. All methods above which accept the optional `callback` method will run in sync mode when that parameter is omitted and in async mode when supplied. This allows for larger diffs without blocking the event loop. This may be passed either directly as the final parameter or as the `callback` field in the `options` object. ### Change Objects Many of the methods above return change objects. These objects consist of the following fields: * `value`: Text content * `added`: True if the value was inserted into the new string * `removed`: True if the value was removed from the old string Note that some cases may omit a particular flag field. Comparison on the flag fields should always be done in a truthy or falsy manner. ## Examples Basic example in Node ```js require('colors'); const Diff = require('diff'); const one = 'beep boop'; const other = 'beep boob blah'; const diff = Diff.diffChars(one, other); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 
'red' : 'grey'; process.stderr.write(part.value[color]); }); console.log(); ``` Running the above program should yield <img src="images/node_example.png" alt="Node Example"> Basic example in a web page ```html <pre id="display"></pre> <script src="diff.js"></script> <script> const one = 'beep boop', other = 'beep boob blah', color = ''; let span = null; const diff = Diff.diffChars(one, other), display = document.getElementById('display'), fragment = document.createDocumentFragment(); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; span = document.createElement('span'); span.style.color = color; span.appendChild(document .createTextNode(part.value)); fragment.appendChild(span); }); display.appendChild(fragment); </script> ``` Open the above .html file in a browser and you should see <img src="images/web_example.png" alt="Node Example"> **[Full online demo](http://kpdecker.github.com/jsdiff)** ## Compatibility [![Sauce Test Status](https://saucelabs.com/browser-matrix/jsdiff.svg)](https://saucelabs.com/u/jsdiff) jsdiff supports all ES3 environments with some known issues on IE8 and below. Under these browsers some diff algorithms such as word diff and others may fail due to lack of support for capturing groups in the `split` operation. ## License See [LICENSE](https://github.com/kpdecker/jsdiff/blob/master/LICENSE). # yargs-parser [![Build Status](https://travis-ci.org/yargs/yargs-parser.svg)](https://travis-ci.org/yargs/yargs-parser) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. <img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js var argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```sh node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js var argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```sh { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js var parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## API ### require('yargs-parser')(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. 
* `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```sh node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```sh node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? 
```sh node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```sh node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```sh node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```sh node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```sh node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```sh node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```sh node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```sh node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. ### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```sh node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```sh node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```sh node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```sh node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```sh node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```sh node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```sh node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```sh node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```sh node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```sh node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? 
_If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. _If disabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC <p align="center"> <img width="250" src="https://raw.githubusercontent.com/yargs/yargs/master/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> ![ci](https://github.com/yargs/yargs/workflows/ci/badge.svg) [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments: ``` mocha [spec..] Run tests with Mocha Commands mocha inspect [spec..] Run tests with Mocha [default] mocha init <path> create a client-side Mocha setup at <path> Rules & Behavior --allow-uncaught Allow uncaught errors to propagate [boolean] --async-only, -A Require all tests to use a callback (async) or return a Promise [boolean] ``` * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage ### Simple Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') const argv = yargs(hideBin(process.argv)).argv if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! 
``` ### Complex Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') yargs(hideBin(process.argv)) .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## Supported Platforms ### TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ### Deno As of `v16`, `yargs` supports [Deno](https://github.com/denoland/deno): ```typescript import yargs from 'https://deno.land/x/yargs/deno.ts' import { Arguments } from 'https://deno.land/x/yargs/deno-types.ts' yargs(Deno.args) .command('download <files...>', 'download a list of files', (yargs: any) => { return yargs.positional('files', { describe: 'a list of files to do something with' }) }, (argv: Arguments) => { console.info(argv) }) .strictCommands() .demandCommand(1) .argv ``` ### ESM As of `v16`,`yargs` supports ESM imports: ```js import yargs from 'yargs' import { hideBin } from 'yargs/helpers' yargs(hideBin(process.argv)) .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => { console.info(argv) }) .demandCommand(1) .argv ``` ### Usage in Browser See examples of using yargs in the browser in [docs](/docs/browser.md). ## Community Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). ## Documentation ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Bundling yargs](/docs/bundling.md) * [Contributing](/contributing.md) ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). 
[npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # fast-deep-equal The fastest deep equal with ES6 Map, Set and Typed arrays support. [![Build Status](https://travis-ci.org/epoberezkin/fast-deep-equal.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-deep-equal) [![npm](https://img.shields.io/npm/v/fast-deep-equal.svg)](https://www.npmjs.com/package/fast-deep-equal) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-deep-equal/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-deep-equal?branch=master) ## Install ```bash npm install fast-deep-equal ``` ## Features - ES5 compatible - works in node.js (8+) and browsers (IE9+) - checks equality of Date and RegExp objects by value. ES6 equal (`require('fast-deep-equal/es6')`) also supports: - Maps - Sets - Typed arrays ## Usage ```javascript var equal = require('fast-deep-equal'); console.log(equal({foo: 'bar'}, {foo: 'bar'})); // true ``` To support ES6 Maps, Sets and Typed arrays equality use: ```javascript var equal = require('fast-deep-equal/es6'); console.log(equal(Int16Array([1, 2]), Int16Array([1, 2]))); // true ``` To use with React (avoiding the traversal of React elements' _owner property that contains circular references and is not needed when comparing the elements - borrowed from [react-fast-compare](https://github.com/FormidableLabs/react-fast-compare)): ```javascript var equal = require('fast-deep-equal/react'); var equal = require('fast-deep-equal/es6/react'); ``` ## Performance benchmark Node.js v12.6.0: ``` fast-deep-equal x 261,950 ops/sec ±0.52% (89 runs sampled) fast-deep-equal/es6 x 212,991 ops/sec ±0.34% (92 runs sampled) fast-equals x 230,957 ops/sec ±0.83% (85 runs sampled) nano-equal x 187,995 ops/sec ±0.53% (88 runs sampled) shallow-equal-fuzzy x 138,302 ops/sec ±0.49% (90 runs sampled) underscore.isEqual x 74,423 ops/sec ±0.38% (89 runs sampled) lodash.isEqual x 36,637 ops/sec ±0.72% (90 runs sampled) deep-equal x 2,310 ops/sec ±0.37% (90 runs sampled) deep-eql x 35,312 ops/sec ±0.67% (91 runs sampled) ramda.equals x 12,054 ops/sec ±0.40% (91 runs sampled) util.isDeepStrictEqual x 46,440 ops/sec ±0.43% (90 runs sampled) assert.deepStrictEqual x 456 ops/sec ±0.71% (88 runs sampled) The fastest is fast-deep-equal ``` To run benchmark (requires node.js 6+): ```bash npm run benchmark ``` __Please note__: this benchmark runs against the available test cases. To choose the most performant library for your application, it is recommended to benchmark against your data and to NOT expect this benchmark to reflect the performance difference in your application. 
## Enterprise support fast-deep-equal package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-deep-equal?utm_source=npm-fast-deep-equal&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. ## License [MIT](https://github.com/epoberezkin/fast-deep-equal/blob/master/LICENSE) # set-blocking [![Build Status](https://travis-ci.org/yargs/set-blocking.svg)](https://travis-ci.org/yargs/set-blocking) [![NPM version](https://img.shields.io/npm/v/set-blocking.svg)](https://www.npmjs.com/package/set-blocking) [![Coverage Status](https://coveralls.io/repos/yargs/set-blocking/badge.svg?branch=)](https://coveralls.io/r/yargs/set-blocking?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) set blocking `stdio` and `stderr` ensuring that terminal output does not truncate. ```js const setBlocking = require('set-blocking') setBlocking(true) console.log(someLargeStringToOutput) ``` ## Historical Context/Word of Warning This was created as a shim to address the bug discussed in [node #6456](https://github.com/nodejs/node/issues/6456). This bug crops up on newer versions of Node.js (`0.12+`), truncating terminal output. You should be mindful of the side-effects caused by using `set-blocking`: * if your module sets blocking to `true`, it will effect other modules consuming your library. In [yargs](https://github.com/yargs/yargs/blob/master/yargs.js#L653) we only call `setBlocking(true)` once we already know we are about to call `process.exit(code)`. * this patch will not apply to subprocesses spawned with `isTTY = true`, this is the [default `spawn()` behavior](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options). ## License ISC # lru cache A cache object that deletes the least-recently-used items. [![Build Status](https://travis-ci.org/isaacs/node-lru-cache.svg?branch=master)](https://travis-ci.org/isaacs/node-lru-cache) [![Coverage Status](https://coveralls.io/repos/isaacs/node-lru-cache/badge.svg?service=github)](https://coveralls.io/github/isaacs/node-lru-cache) ## Installation: ```javascript npm install lru-cache --save ``` ## Usage: ```javascript var LRU = require("lru-cache") , options = { max: 500 , length: function (n, key) { return n * 2 + key.length } , dispose: function (key, n) { n.close() } , maxAge: 1000 * 60 * 60 } , cache = new LRU(options) , otherCache = new LRU(50) // sets just the max size cache.set("key", "value") cache.get("key") // "value" // non-string keys ARE fully supported // but note that it must be THE SAME object, not // just a JSON-equivalent object. var someObject = { a: 1 } cache.set(someObject, 'a value') // Object keys are not toString()-ed cache.set('[object Object]', 'a different value') assert.equal(cache.get(someObject), 'a value') // A similar object with same keys/values won't work, // because it's a different object identity assert.equal(cache.get({ a: 1 }), undefined) cache.reset() // empty the cache ``` If you put more stuff in it, then items will fall out. 
If you try to put an oversized thing in it, then it'll fall out right away. ## Options * `max` The maximum size of the cache, checked by applying the length function to all values in the cache. Not setting this is kind of silly, since that's the whole purpose of this lib, but it defaults to `Infinity`. Setting it to a non-number or negative number will throw a `TypeError`. Setting it to 0 makes it be `Infinity`. * `maxAge` Maximum age in ms. Items are not pro-actively pruned out as they age, but if you try to get an item that is too old, it'll drop it and return undefined instead of giving it to you. Setting this to a negative value will make everything seem old! Setting it to a non-number will throw a `TypeError`. * `length` Function that is used to calculate the length of stored items. If you're storing strings or buffers, then you probably want to do something like `function(n, key){return n.length}`. The default is `function(){return 1}`, which is fine if you want to store `max` like-sized things. The item is passed as the first argument, and the key is passed as the second argumnet. * `dispose` Function that is called on items when they are dropped from the cache. This can be handy if you want to close file descriptors or do other cleanup tasks when items are no longer accessible. Called with `key, value`. It's called *before* actually removing the item from the internal cache, so if you want to immediately put it back in, you'll have to do that in a `nextTick` or `setTimeout` callback or it won't do anything. * `stale` By default, if you set a `maxAge`, it'll only actually pull stale items out of the cache when you `get(key)`. (That is, it's not pre-emptively doing a `setTimeout` or anything.) If you set `stale:true`, it'll return the stale value before deleting it. If you don't set this, then it'll return `undefined` when you try to get a stale entry, as if it had already been deleted. * `noDisposeOnSet` By default, if you set a `dispose()` method, then it'll be called whenever a `set()` operation overwrites an existing key. If you set this option, `dispose()` will only be called when a key falls out of the cache, not when it is overwritten. * `updateAgeOnGet` When using time-expiring entries with `maxAge`, setting this to `true` will make each item's effective time update to the current time whenever it is retrieved from cache, causing it to not expire. (It can still fall out of cache based on recency of use, of course.) ## API * `set(key, value, maxAge)` * `get(key) => value` Both of these will update the "recently used"-ness of the key. They do what you think. `maxAge` is optional and overrides the cache `maxAge` option if provided. If the key is not found, `get()` will return `undefined`. The key and val can be any value. * `peek(key)` Returns the key value (or `undefined` if not found) without updating the "recently used"-ness of the key. (If you find yourself using this a lot, you *might* be using the wrong sort of data structure, but there are some use cases where it's handy.) * `del(key)` Deletes a key out of the cache. * `reset()` Clear the cache entirely, throwing away all values. * `has(key)` Check if a key is in the cache, without updating the recent-ness or deleting it for being stale. * `forEach(function(value,key,cache), [thisp])` Just like `Array.prototype.forEach`. Iterates over all the keys in the cache, in order of recent-ness. (Ie, more recently used items are iterated over first.) 
* `rforEach(function(value,key,cache), [thisp])` The same as `cache.forEach(...)` but items are iterated over in reverse order. (i.e., less recently used items are iterated over first.)
* `keys()` Return an array of the keys in the cache.
* `values()` Return an array of the values in the cache.
* `length` Return total length of objects in cache, taking into account the `length` option's function.
* `itemCount` Return total quantity of objects currently in cache. Note that `stale` (see options) items are returned as part of this item count.
* `dump()` Return an array of the cache entries ready for serialization and usage with `destinationCache.load(arr)`.
* `load(cacheEntriesArray)` Loads another cache entries array, obtained with `sourceCache.dump()`, into the cache. The destination cache is reset before loading new entries.
* `prune()` Manually iterates over the entire cache, proactively pruning old entries.

# eslint-visitor-keys

[![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys)
[![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys)
[![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys)
[![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys)

Constants and utilities about visitor keys to traverse AST.

## 💿 Installation

Use [npm] to install.

```bash
$ npm install eslint-visitor-keys
```

### Requirements

- [Node.js] 4.0.0 or later.

## 📖 Usage

```js
const evk = require("eslint-visitor-keys")
```

### evk.KEYS

> type: `{ [type: string]: string[] | undefined }`

Visitor keys. These keys are frozen.

This is an object. Its keys are the types of [ESTree] nodes, and its values are arrays of the property names which have child nodes.

For example:

```
console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"]
```

### evk.getKeys(node)

> type: `(node: object) => string[]`

Get the visitor keys of a given AST node. This is similar to `Object.keys(node)` of the ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`.

This will be used to traverse unknown nodes.

For example:

```
const node = {
    type: "AssignmentExpression",
    left: { type: "Identifier", name: "foo" },
    right: { type: "Literal", value: 0 }
}
console.log(evk.getKeys(node)) // → ["type", "left", "right"]
```

### evk.unionWith(additionalKeys)

> type: `(additionalKeys: object) => { [type: string]: string[] | undefined }`

Make the union set with `evk.KEYS` and the given keys.

- The order of keys: `additionalKeys` comes first, then `evk.KEYS` is concatenated after it.
- Duplicated keys are removed, keeping the first one.

For example:

```
console.log(evk.unionWith({
    MethodDefinition: ["decorators"]
})) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... }
```

## 📰 Change log

See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases).

## 🍻 Contributing

Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/).

### Development commands

- `npm test` runs tests and measures code coverage.
- `npm run lint` checks source code with ESLint.
- `npm run coverage` opens the code coverage report of the previous test with your default browser.
- `npm run release` publishes this package to the [npm] registry.
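As noted above, `getKeys` is intended for traversing unknown nodes. Here is a minimal, purely illustrative sketch of such a traversal; the `visit` helper is an assumption for this example and is not part of the package:

```js
const evk = require("eslint-visitor-keys");

// Purely illustrative depth-first walker (not part of this package):
// evk.getKeys tells us which properties may hold child nodes.
function visit(node, callback) {
    if (!node || typeof node.type !== "string") {
        return;
    }
    callback(node);
    for (const key of evk.getKeys(node)) {
        const child = node[key];
        if (Array.isArray(child)) {
            child.forEach(item => visit(item, callback));
        } else if (child && typeof child === "object") {
            visit(child, callback);
        }
    }
}

// Example: log every node type in a small hand-written AST.
visit({
    type: "AssignmentExpression",
    left: { type: "Identifier", name: "foo" },
    right: { type: "Literal", value: 0 }
}, node => console.log(node.type));
// → AssignmentExpression, Identifier, Literal
```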
[npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree # which Like the unix `which` utility. Finds the first instance of a specified executable in the PATH environment variable. Does not cache the results, so `hash -r` is not needed when the PATH changes. ## USAGE ```javascript var which = require('which') // async usage which('node', function (er, resolvedPath) { // er is returned if no "node" is found on the PATH // if it is found, then the absolute path to the exec is returned }) // or promise which('node').then(resolvedPath => { ... }).catch(er => { ... not found ... }) // sync usage // throws if not found var resolved = which.sync('node') // if nothrow option is used, returns null if not found resolved = which.sync('node', {nothrow: true}) // Pass options to override the PATH and PATHEXT environment vars. which('node', { path: someOtherPath }, function (er, resolved) { if (er) throw er console.log('found at %j', resolved) }) ``` ## CLI USAGE Same as the BSD `which(1)` binary. ``` usage: which [-as] program ... ``` ## OPTIONS You may pass an options object as the second argument. - `path`: Use instead of the `PATH` environment variable. - `pathExt`: Use instead of the `PATHEXT` environment variable. - `all`: Return all matches, instead of just the first one. Note that this means the function returns an array of strings instead of a single string. # universal-url [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Dependency Monitor][greenkeeper-image]][greenkeeper-url] > WHATWG [`URL`](https://developer.mozilla.org/en/docs/Web/API/URL) for Node & Browser. * For Node.js versions `>= 8`, the native implementation will be used. * For Node.js versions `< 8`, a [shim](https://npmjs.com/whatwg-url) will be used. * For web browsers without a native implementation, the same shim will be used. ## Installation [Node.js](http://nodejs.org/) `>= 6` is required. To install, type this at the command line: ```shell npm install universal-url ``` ## Usage ```js const {URL, URLSearchParams} = require('universal-url'); const url = new URL('http://domain/'); const params = new URLSearchParams('?param=value'); ``` Global shim: ```js require('universal-url').shim(); const url = new URL('http://domain/'); const params = new URLSearchParams('?param=value'); ``` ## Browserify/etc The bundled file size of this library can be large for a web browser. If this is a problem, try using [universal-url-lite](https://npmjs.com/universal-url-lite) in your build as an alias for this module. [npm-image]: https://img.shields.io/npm/v/universal-url.svg [npm-url]: https://npmjs.org/package/universal-url [travis-image]: https://img.shields.io/travis/stevenvachon/universal-url.svg [travis-url]: https://travis-ci.org/stevenvachon/universal-url [greenkeeper-image]: https://badges.greenkeeper.io/stevenvachon/universal-url.svg [greenkeeper-url]: https://greenkeeper.io/ # isarray `Array#isArray` for older browsers. 
[![build status](https://secure.travis-ci.org/juliangruber/isarray.svg)](http://travis-ci.org/juliangruber/isarray) [![downloads](https://img.shields.io/npm/dm/isarray.svg)](https://www.npmjs.org/package/isarray) [![browser support](https://ci.testling.com/juliangruber/isarray.png) ](https://ci.testling.com/juliangruber/isarray) ## Usage ```js var isArray = require('isarray'); console.log(isArray([])); // => true console.log(isArray({})); // => false ``` ## Installation With [npm](http://npmjs.org) do ```bash $ npm install isarray ``` Then bundle for the browser with [browserify](https://github.com/substack/browserify). With [component](http://component.io) do ```bash $ component install juliangruber/isarray ``` ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # cross-spawn [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Build status][appveyor-image]][appveyor-url] [![Coverage Status][codecov-image]][codecov-url] [![Dependency status][david-dm-image]][david-dm-url] [![Dev Dependency status][david-dm-dev-image]][david-dm-dev-url] [npm-url]:https://npmjs.org/package/cross-spawn [downloads-image]:https://img.shields.io/npm/dm/cross-spawn.svg [npm-image]:https://img.shields.io/npm/v/cross-spawn.svg [travis-url]:https://travis-ci.org/moxystudio/node-cross-spawn [travis-image]:https://img.shields.io/travis/moxystudio/node-cross-spawn/master.svg [appveyor-url]:https://ci.appveyor.com/project/satazor/node-cross-spawn [appveyor-image]:https://img.shields.io/appveyor/ci/satazor/node-cross-spawn/master.svg [codecov-url]:https://codecov.io/gh/moxystudio/node-cross-spawn [codecov-image]:https://img.shields.io/codecov/c/github/moxystudio/node-cross-spawn/master.svg [david-dm-url]:https://david-dm.org/moxystudio/node-cross-spawn [david-dm-image]:https://img.shields.io/david/moxystudio/node-cross-spawn.svg [david-dm-dev-url]:https://david-dm.org/moxystudio/node-cross-spawn?type=dev [david-dm-dev-image]:https://img.shields.io/david/dev/moxystudio/node-cross-spawn.svg A cross platform solution to node's spawn and spawnSync. 
## Installation Node.js version 8 and up: `$ npm install cross-spawn` Node.js version 7 and under: `$ npm install cross-spawn@6` ## Why Node has issues when using spawn on Windows: - It ignores [PATHEXT](https://github.com/joyent/node/issues/2318) - It does not support [shebangs](https://en.wikipedia.org/wiki/Shebang_(Unix)) - Has problems running commands with [spaces](https://github.com/nodejs/node/issues/7367) - Has problems running commands with posix relative paths (e.g.: `./my-folder/my-executable`) - Has an [issue](https://github.com/moxystudio/node-cross-spawn/issues/82) with command shims (files in `node_modules/.bin/`), where arguments with quotes and parenthesis would result in [invalid syntax error](https://github.com/moxystudio/node-cross-spawn/blob/e77b8f22a416db46b6196767bcd35601d7e11d54/test/index.test.js#L149) - No `options.shell` support on node `<v4.8` All these issues are handled correctly by `cross-spawn`. There are some known modules, such as [win-spawn](https://github.com/ForbesLindesay/win-spawn), that try to solve this but they are either broken or provide faulty escaping of shell arguments. ## Usage Exactly the same way as node's [`spawn`](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options) or [`spawnSync`](https://nodejs.org/api/child_process.html#child_process_child_process_spawnsync_command_args_options), so it's a drop in replacement. ```js const spawn = require('cross-spawn'); // Spawn NPM asynchronously const child = spawn('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' }); // Spawn NPM synchronously const result = spawn.sync('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' }); ``` ## Caveats ### Using `options.shell` as an alternative to `cross-spawn` Starting from node `v4.8`, `spawn` has a `shell` option that allows you run commands from within a shell. This new option solves the [PATHEXT](https://github.com/joyent/node/issues/2318) issue but: - It's not supported in node `<v4.8` - You must manually escape the command and arguments which is very error prone, specially when passing user input - There are a lot of other unresolved issues from the [Why](#why) section that you must take into account If you are using the `shell` option to spawn a command in a cross platform way, consider using `cross-spawn` instead. You have been warned. ### `options.shell` support While `cross-spawn` adds support for `options.shell` in node `<v4.8`, all of its enhancements are disabled. This mimics the Node.js behavior. More specifically, the command and its arguments will not be automatically escaped nor shebang support will be offered. This is by design because if you are using `options.shell` you are probably targeting a specific platform anyway and you don't want things to get into your way. ### Shebangs support While `cross-spawn` handles shebangs on Windows, its support is limited. More specifically, it just supports `#!/usr/bin/env <program>` where `<program>` must not contain any arguments. If you would like to have the shebang support improved, feel free to contribute via a pull-request. Remember to always test your code on Windows! ## Tests `$ npm test` `$ npm test -- --watch` during development ## License Released under the [MIT License](https://www.opensource.org/licenses/mit-license.php). # Acorn A tiny, fast JavaScript parser written in JavaScript. ## Community Acorn is open source software released under an [MIT license](https://github.com/acornjs/acorn/blob/master/acorn/LICENSE). 
You are welcome to [report bugs](https://github.com/acornjs/acorn/issues) or create pull requests on [github](https://github.com/acornjs/acorn). For questions and discussion, please use the [Tern discussion forum](https://discuss.ternjs.net). ## Installation The easiest way to install acorn is from [`npm`](https://www.npmjs.com/): ```sh npm install acorn ``` Alternately, you can download the source and build acorn yourself: ```sh git clone https://github.com/acornjs/acorn.git cd acorn npm install ``` ## Interface **parse**`(input, options)` is the main interface to the library. The `input` parameter is a string, `options` can be undefined or an object setting some of the options listed below. The return value will be an abstract syntax tree object as specified by the [ESTree spec](https://github.com/estree/estree). ```javascript let acorn = require("acorn"); console.log(acorn.parse("1 + 1")); ``` When encountering a syntax error, the parser will raise a `SyntaxError` object with a meaningful message. The error object will have a `pos` property that indicates the string offset at which the error occurred, and a `loc` object that contains a `{line, column}` object referring to that same position. Options can be provided by passing a second argument, which should be an object containing any of these fields: - **ecmaVersion**: Indicates the ECMAScript version to parse. Must be either 3, 5, 6 (2015), 7 (2016), 8 (2017), 9 (2018), 10 (2019) or 11 (2020, partial support). This influences support for strict mode, the set of reserved words, and support for new syntax features. Default is 10. **NOTE**: Only 'stage 4' (finalized) ECMAScript features are being implemented by Acorn. Other proposed new features can be implemented through plugins. - **sourceType**: Indicate the mode the code should be parsed in. Can be either `"script"` or `"module"`. This influences global strict mode and parsing of `import` and `export` declarations. **NOTE**: If set to `"module"`, then static `import` / `export` syntax will be valid, even if `ecmaVersion` is less than 6. - **onInsertedSemicolon**: If given a callback, that callback will be called whenever a missing semicolon is inserted by the parser. The callback will be given the character offset of the point where the semicolon is inserted as argument, and if `locations` is on, also a `{line, column}` object representing this position. - **onTrailingComma**: Like `onInsertedSemicolon`, but for trailing commas. - **allowReserved**: If `false`, using a reserved word will generate an error. Defaults to `true` for `ecmaVersion` 3, `false` for higher versions. When given the value `"never"`, reserved words and keywords can also not be used as property names (as in Internet Explorer's old parser). - **allowReturnOutsideFunction**: By default, a return statement at the top level raises an error. Set this to `true` to accept such code. - **allowImportExportEverywhere**: By default, `import` and `export` declarations can only appear at a program's top level. Setting this option to `true` allows them anywhere where a statement is allowed. - **allowAwaitOutsideFunction**: By default, `await` expressions can only appear inside `async` functions. Setting this option to `true` allows to have top-level `await` expressions. They are still not allowed in non-`async` functions, though. - **allowHashBang**: When this is enabled (off by default), if the code starts with the characters `#!` (as in a shellscript), the first line will be treated as a comment. 
- **locations**: When `true`, each node has a `loc` object attached with `start` and `end` subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form. Default is `false`. - **onToken**: If a function is passed for this option, each found token will be passed in same format as tokens returned from `tokenizer().getToken()`. If array is passed, each found token is pushed to it. Note that you are not allowed to call the parser from the callback—that will corrupt its internal state. - **onComment**: If a function is passed for this option, whenever a comment is encountered the function will be called with the following parameters: - `block`: `true` if the comment is a block comment, false if it is a line comment. - `text`: The content of the comment. - `start`: Character offset of the start of the comment. - `end`: Character offset of the end of the comment. When the `locations` options is on, the `{line, column}` locations of the comment’s start and end are passed as two additional parameters. If array is passed for this option, each found comment is pushed to it as object in Esprima format: ```javascript { "type": "Line" | "Block", "value": "comment text", "start": Number, "end": Number, // If `locations` option is on: "loc": { "start": {line: Number, column: Number} "end": {line: Number, column: Number} }, // If `ranges` option is on: "range": [Number, Number] } ``` Note that you are not allowed to call the parser from the callback—that will corrupt its internal state. - **ranges**: Nodes have their start and end characters offsets recorded in `start` and `end` properties (directly on the node, rather than the `loc` object, which holds line/column data. To also add a [semi-standardized](https://bugzilla.mozilla.org/show_bug.cgi?id=745678) `range` property holding a `[start, end]` array with the same numbers, set the `ranges` option to `true`. - **program**: It is possible to parse multiple files into a single AST by passing the tree produced by parsing the first file as the `program` option in subsequent parses. This will add the toplevel forms of the parsed file to the "Program" (top) node of an existing parse tree. - **sourceFile**: When the `locations` option is `true`, you can pass this option to add a `source` attribute in every node’s `loc` object. Note that the contents of this option are not examined or processed in any way; you are free to use whatever format you choose. - **directSourceFile**: Like `sourceFile`, but a `sourceFile` property will be added (regardless of the `location` option) directly to the nodes, rather than the `loc` object. - **preserveParens**: If this option is `true`, parenthesized expressions are represented by (non-standard) `ParenthesizedExpression` nodes that have a single `expression` property containing the expression inside parentheses. **parseExpressionAt**`(input, offset, options)` will parse a single expression in a string, and return its AST. It will not complain if there is more of the string left after the expression. **tokenizer**`(input, options)` returns an object with a `getToken` method that can be called repeatedly to get the next token, a `{start, end, type, value}` object (with added `loc` property when the `locations` option is enabled and `range` property when the `ranges` option is enabled). When the token's type is `tokTypes.eof`, you should stop calling the method, since it will keep returning that same token forever. 
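As a minimal sketch of driving the tokenizer by hand based on the description above (the sample source string and the `ecmaVersion` value are arbitrary choices for this example):

```javascript
let acorn = require("acorn");

// Call getToken() repeatedly until the end-of-file token type is returned.
let tokenizer = acorn.tokenizer("const answer = 42", {ecmaVersion: 11});
while (true) {
  let token = tokenizer.getToken();
  if (token.type === acorn.tokTypes.eof) break;
  console.log(token.type.label, token.value);
}
```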
In an ES6 environment, the returned result can also be used like any other protocol-compliant iterable:

```javascript
for (let token of acorn.tokenizer(str)) {
  // iterate over the tokens
}

// transform code to an array of tokens:
var tokens = [...acorn.tokenizer(str)];
```

**tokTypes** holds an object mapping names to the token type objects that end up in the `type` properties of tokens.

**getLineInfo**`(input, offset)` can be used to get a `{line, column}` object for a given program string and offset.

### The `Parser` class

Instances of the **`Parser`** class contain all the state and logic that drives a parse. It has static methods `parse`, `parseExpressionAt`, and `tokenizer` that match the top-level functions of the same name. When extending the parser with plugins, you need to call these methods on the extended version of the class. To extend a parser with plugins, use its static `extend` method.

```javascript
var acorn = require("acorn");
var jsx = require("acorn-jsx");
var JSXParser = acorn.Parser.extend(jsx());
JSXParser.parse("foo(<bar/>)");
```

The `extend` method takes any number of plugin values, and returns a new `Parser` class that includes the extra parser logic provided by the plugins.

## Command line interface

The `bin/acorn` utility can be used to parse a file from the command line. It accepts as arguments its input file and the following options:

- `--ecma3|--ecma5|--ecma6|--ecma7|--ecma8|--ecma9|--ecma10`: Sets the ECMAScript version to parse. Default is version 9.
- `--module`: Sets the parsing mode to `"module"`. It is set to `"script"` otherwise.
- `--locations`: Attaches a "loc" object to each node with "start" and "end" subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form.
- `--allow-hash-bang`: If the code starts with the characters `#!` (as in a shellscript), the first line will be treated as a comment.
- `--compact`: No whitespace is used in the AST output.
- `--silent`: Do not output the AST, just return the exit status.
- `--help`: Print the usage information and quit.

The utility spits out the syntax tree as JSON data.

## Existing plugins

- [`acorn-jsx`](https://github.com/RReverser/acorn-jsx): Parse [Facebook JSX syntax extensions](https://github.com/facebook/jsx)

Plugins for ECMAScript proposals:

- [`acorn-stage3`](https://github.com/acornjs/acorn-stage3): Parse most stage 3 proposals, bundling:
  - [`acorn-class-fields`](https://github.com/acornjs/acorn-class-fields): Parse [class fields proposal](https://github.com/tc39/proposal-class-fields)
  - [`acorn-import-meta`](https://github.com/acornjs/acorn-import-meta): Parse [import.meta proposal](https://github.com/tc39/proposal-import-meta)
  - [`acorn-private-methods`](https://github.com/acornjs/acorn-private-methods): Parse [private methods, getters and setters proposal](https://github.com/tc39/proposal-private-methods)

# fast-json-stable-stringify

Deterministic `JSON.stringify()` - a faster version of [@substack](https://github.com/substack)'s json-stable-stringify without [jsonify](https://github.com/substack/jsonify).

You can also pass in a custom comparison function.
[![Build Status](https://travis-ci.org/epoberezkin/fast-json-stable-stringify.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-json-stable-stringify) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-json-stable-stringify/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-json-stable-stringify?branch=master) # example ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; console.log(stringify(obj)); ``` output: ``` {"a":3,"b":[{"x":4,"y":5,"z":6},7],"c":8} ``` # methods ``` js var stringify = require('fast-json-stable-stringify') ``` ## var str = stringify(obj, opts) Return a deterministic stringified string `str` from the object `obj`. ## options ### cmp If `opts` is given, you can supply an `opts.cmp` to have a custom comparison function for object keys. Your function `opts.cmp` is called with these parameters: ``` js opts.cmp({ key: akey, value: avalue }, { key: bkey, value: bvalue }) ``` For example, to sort on the object key names in reverse order you could write: ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; var s = stringify(obj, function (a, b) { return a.key < b.key ? 1 : -1; }); console.log(s); ``` which results in the output string: ``` {"c":8,"b":[{"z":6,"y":5,"x":4},7],"a":3} ``` Or if you wanted to sort on the object values in reverse order, you could write: ``` var stringify = require('fast-json-stable-stringify'); var obj = { d: 6, c: 5, b: [{z:3,y:2,x:1},9], a: 10 }; var s = stringify(obj, function (a, b) { return a.value < b.value ? 1 : -1; }); console.log(s); ``` which outputs: ``` {"d":6,"c":5,"b":[{"z":3,"y":2,"x":1},9],"a":10} ``` ### cycles Pass `true` in `opts.cycles` to stringify circular property as `__cycle__` - the result will not be a valid JSON string in this case. TypeError will be thrown in case of circular object without this option. # install With [npm](https://npmjs.org) do: ``` npm install fast-json-stable-stringify ``` # benchmark To run benchmark (requires Node.js 6+): ``` node benchmark ``` Results: ``` fast-json-stable-stringify x 17,189 ops/sec ±1.43% (83 runs sampled) json-stable-stringify x 13,634 ops/sec ±1.39% (85 runs sampled) fast-stable-stringify x 20,212 ops/sec ±1.20% (84 runs sampled) faster-stable-stringify x 15,549 ops/sec ±1.12% (84 runs sampled) The fastest is fast-stable-stringify ``` ## Enterprise support fast-json-stable-stringify package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-json-stable-stringify?utm_source=npm-fast-json-stable-stringify&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. 
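As a final example, the `cycles` option described earlier can be sketched as follows; the serialized output shown in the comment is inferred from the description above:

``` js
var stringify = require('fast-json-stable-stringify');

var obj = { a: 1 };
obj.self = obj; // circular reference

// Without `cycles: true` this would throw a TypeError.
console.log(stringify(obj, { cycles: true }));
// → {"a":1,"self":"__cycle__"}  (no longer valid JSON for parsing back)
```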
# license

[MIT](https://github.com/epoberezkin/fast-json-stable-stringify/blob/master/LICENSE)

# eslint-utils

[![npm version](https://img.shields.io/npm/v/eslint-utils.svg)](https://www.npmjs.com/package/eslint-utils)
[![Downloads/month](https://img.shields.io/npm/dm/eslint-utils.svg)](http://www.npmtrends.com/eslint-utils)
[![Build Status](https://github.com/mysticatea/eslint-utils/workflows/CI/badge.svg)](https://github.com/mysticatea/eslint-utils/actions)
[![Coverage Status](https://codecov.io/gh/mysticatea/eslint-utils/branch/master/graph/badge.svg)](https://codecov.io/gh/mysticatea/eslint-utils)
[![Dependency Status](https://david-dm.org/mysticatea/eslint-utils.svg)](https://david-dm.org/mysticatea/eslint-utils)

## 🏁 Goal

This package provides utility functions and classes for making custom ESLint rules.

For example:

- [getStaticValue](https://eslint-utils.mysticatea.dev/api/ast-utils.html#getstaticvalue) evaluates static values on the AST.
- [ReferenceTracker](https://eslint-utils.mysticatea.dev/api/scope-utils.html#referencetracker-class) checks the members of modules/globals, handling assignments and destructuring.

## 📖 Usage

See [documentation](https://eslint-utils.mysticatea.dev/).

## 📰 Changelog

See [releases](https://github.com/mysticatea/eslint-utils/releases).

## ❤️ Contributing

Contributions are welcome! Please use GitHub's Issues/PRs.

### Development Tools

- `npm test` runs tests and measures coverage.
- `npm run clean` removes the coverage result of the `npm test` command.
- `npm run coverage` shows the coverage result of the last `npm test` command.
- `npm run lint` runs ESLint.
- `npm run watch` runs tests on each file change.

# minizlib

A fast zlib stream built on [minipass](http://npm.im/minipass) and Node.js's zlib binding.

This module was created to serve the needs of [node-tar](http://npm.im/tar) and [minipass-fetch](http://npm.im/minipass-fetch).

Brotli is supported in versions of node with a Brotli binding.

## How does this differ from the streams in `require('zlib')`?

First, there are no convenience methods to compress or decompress a buffer. If you want those, use the built-in `zlib` module. This is only streams. That being said, Minipass streams make it fairly easy to use as one-liners: `new zlib.Deflate().end(data).read()` will return the deflate-compressed result.

This module compresses and decompresses the data as fast as you feed it in. It is synchronous, and runs on the main process thread. Zlib and Brotli operations can be high CPU, but they're very fast, and doing it this way means much less bookkeeping and artificial deferral.

Node's built-in zlib streams are built on top of `stream.Transform`. They do the maximally safe thing with respect to consistent asynchrony, buffering, and backpressure.

See [Minipass](http://npm.im/minipass) for more on the differences between Node.js core streams and Minipass streams, and the convenience methods provided by that class.

## Classes

- Deflate
- Inflate
- Gzip
- Gunzip
- DeflateRaw
- InflateRaw
- Unzip
- BrotliCompress (Node v10 and higher)
- BrotliDecompress (Node v10 and higher)

## USAGE

```js
const zlib = require('minizlib')
const input = sourceOfCompressedData()
const decode = new zlib.BrotliDecompress()
const output = whereToWriteTheDecodedData()
input.pipe(decode).pipe(output)
```

## REPRODUCIBLE BUILDS

To create reproducible gzip compressed files across different operating systems, set `portable: true` in the options.
This causes minizlib to set the `OS` indicator in byte 9 of the extended gzip header to `0xFF` for 'unknown'. <h1 align="center">Enquirer</h1> <p align="center"> <a href="https://npmjs.org/package/enquirer"> <img src="https://img.shields.io/npm/v/enquirer.svg" alt="version"> </a> <a href="https://travis-ci.org/enquirer/enquirer"> <img src="https://img.shields.io/travis/enquirer/enquirer.svg" alt="travis"> </a> <a href="https://npmjs.org/package/enquirer"> <img src="https://img.shields.io/npm/dm/enquirer.svg" alt="downloads"> </a> </p> <br> <br> <p align="center"> <b>Stylish CLI prompts that are user-friendly, intuitive and easy to create.</b><br> <sub>>_ Prompts should be more like conversations than inquisitions▌</sub> </p> <br> <p align="center"> <sub>(Example shows Enquirer's <a href="#survey-prompt">Survey Prompt</a>)</a></sub> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/survey-prompt.gif" alt="Enquirer Survey Prompt" width="750"><br> <sub>The terminal in all examples is <a href="https://hyper.is/">Hyper</a>, theme is <a href="https://github.com/jonschlinkert/hyper-monokai-extended">hyper-monokai-extended</a>.</sub><br><br> <a href="#built-in-prompts"><strong>See more prompt examples</strong></a> </p> <br> <br> Created by [jonschlinkert](https://github.com/jonschlinkert) and [doowb](https://github.com/doowb), Enquirer is fast, easy to use, and lightweight enough for small projects, while also being powerful and customizable enough for the most advanced use cases. * **Fast** - [Loads in ~4ms](#-performance) (that's about _3-4 times faster than a [single frame of a HD movie](http://www.endmemo.com/sconvert/framespersecondframespermillisecond.php) at 60fps_) * **Lightweight** - Only one dependency, the excellent [ansi-colors](https://github.com/doowb/ansi-colors) by [Brian Woodward](https://github.com/doowb). * **Easy to implement** - Uses promises and async/await and sensible defaults to make prompts easy to create and implement. * **Easy to use** - Thrill your users with a better experience! Navigating around input and choices is a breeze. You can even create [quizzes](examples/fun/countdown.js), or [record](examples/fun/record.js) and [playback](examples/fun/play.js) key bindings to aid with tutorials and videos. * **Intuitive** - Keypress combos are available to simplify usage. * **Flexible** - All prompts can be used standalone or chained together. * **Stylish** - Easily override semantic styles and symbols for any part of the prompt. * **Extensible** - Easily create and use custom prompts by extending Enquirer's built-in [prompts](#-prompts). * **Pluggable** - Add advanced features to Enquirer using plugins. * **Validation** - Optionally validate user input with any prompt. * **Well tested** - All prompts are well-tested, and tests are easy to create without having to use brittle, hacky solutions to spy on prompts or "inject" values. * **Examples** - There are numerous [examples](examples) available to help you get started. If you like Enquirer, please consider starring or tweeting about this project to show your support. Thanks! <br> <p align="center"> <b>>_ Ready to start making prompts your users will love? ▌</b><br> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/heartbeat.gif" alt="Enquirer Select Prompt with heartbeat example" width="750"> </p> <br> <br> ## ❯ Getting started Get started with Enquirer, the most powerful and easy-to-use Node.js library for creating interactive CLI prompts. 
* [Install](#-install) * [Usage](#-usage) * [Enquirer](#-enquirer) * [Prompts](#-prompts) - [Built-in Prompts](#-prompts) - [Custom Prompts](#-custom-prompts) * [Key Bindings](#-key-bindings) * [Options](#-options) * [Release History](#-release-history) * [Performance](#-performance) * [About](#-about) <br> ## ❯ Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install enquirer --save ``` Install with [yarn](https://yarnpkg.com/en/): ```sh $ yarn add enquirer ``` <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/npm-install.gif" alt="Install Enquirer with NPM" width="750"> </p> _(Requires Node.js 8.6 or higher. Please let us know if you need support for an earlier version by creating an [issue](../../issues/new).)_ <br> ## ❯ Usage ### Single prompt The easiest way to get started with enquirer is to pass a [question object](#prompt-options) to the `prompt` method. ```js const { prompt } = require('enquirer'); const response = await prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); // { username: 'jonschlinkert' } ``` _(Examples with `await` need to be run inside an `async` function)_ ### Multiple prompts Pass an array of ["question" objects](#prompt-options) to run a series of prompts. ```js const response = await prompt([ { type: 'input', name: 'name', message: 'What is your name?' }, { type: 'input', name: 'username', message: 'What is your username?' } ]); console.log(response); // { name: 'Edward Chan', username: 'edwardmchan' } ``` ### Different ways to run enquirer #### 1. By importing the specific `built-in prompt` ```js const { Confirm } = require('enquirer'); const prompt = new Confirm({ name: 'question', message: 'Did you like enquirer?' }); prompt.run() .then(answer => console.log('Answer:', answer)); ``` #### 2. By passing the options to `prompt` ```js const { prompt } = require('enquirer'); prompt({ type: 'confirm', name: 'question', message: 'Did you like enquirer?' }) .then(answer => console.log('Answer:', answer)); ``` **Jump to**: [Getting Started](#-getting-started) · [Prompts](#-prompts) · [Options](#-options) · [Key Bindings](#-key-bindings) <br> ## ❯ Enquirer **Enquirer is a prompt runner** Add Enquirer to your JavaScript project with following line of code. ```js const Enquirer = require('enquirer'); ``` The main export of this library is the `Enquirer` class, which has methods and features designed to simplify running prompts. ```js const { prompt } = require('enquirer'); const question = [ { type: 'input', name: 'username', message: 'What is your username?' }, { type: 'password', name: 'password', message: 'What is your password?' } ]; let answers = await prompt(question); console.log(answers); ``` **Prompts control how values are rendered and returned** Each individual prompt is a class with special features and functionality for rendering the types of values you want to show users in the terminal, and subsequently returning the types of values you need to use in your application. **How can I customize prompts?** Below in this guide you will find information about creating [custom prompts](#-custom-prompts). For now, we'll focus on how to customize an existing prompt. All of the individual [prompt classes](#built-in-prompts) in this library are exposed as static properties on Enquirer. This allows them to be used directly without using `enquirer.prompt()`. Use this approach if you need to modify a prompt instance, or listen for events on the prompt. 
**Example** ```js const { Input } = require('enquirer'); const prompt = new Input({ name: 'username', message: 'What is your username?' }); prompt.run() .then(answer => console.log('Username:', answer)) .catch(console.error); ``` ### [Enquirer](index.js#L20) Create an instance of `Enquirer`. **Params** * `options` **{Object}**: (optional) Options to use with all prompts. * `answers` **{Object}**: (optional) Answers object to initialize with. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); ``` ### [register()](index.js#L42) Register a custom prompt type. **Params** * `type` **{String}** * `fn` **{Function|Prompt}**: `Prompt` class, or a function that returns a `Prompt` class. * `returns` **{Object}**: Returns the Enquirer instance **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); enquirer.register('customType', require('./custom-prompt')); ``` ### [prompt()](index.js#L78) Prompt function that takes a "question" object or array of question objects, and returns an object with responses from the user. **Params** * `questions` **{Array|Object}**: Options objects for one or more prompts to run. * `returns` **{Promise}**: Promise that returns an "answers" object with the user's responses. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); const response = await enquirer.prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); ``` ### [use()](index.js#L160) Use an enquirer plugin. **Params** * `plugin` **{Function}**: Plugin function that takes an instance of Enquirer. * `returns` **{Object}**: Returns the Enquirer instance. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); const plugin = enquirer => { // do stuff to enquire instance }; enquirer.use(plugin); ``` ### [Enquirer#prompt](index.js#L210) Prompt function that takes a "question" object or array of question objects, and returns an object with responses from the user. **Params** * `questions` **{Array|Object}**: Options objects for one or more prompts to run. * `returns` **{Promise}**: Promise that returns an "answers" object with the user's responses. **Example** ```js const { prompt } = require('enquirer'); const response = await prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); ``` <br> ## ❯ Prompts This section is about Enquirer's prompts: what they look like, how they work, how to run them, available options, and how to customize the prompts or create your own prompt concept. **Getting started with Enquirer's prompts** * [Prompt](#prompt) - The base `Prompt` class used by other prompts - [Prompt Options](#prompt-options) * [Built-in prompts](#built-in-prompts) * [Prompt Types](#prompt-types) - The base `Prompt` class used by other prompts * [Custom prompts](#%E2%9D%AF-custom-prompts) - Enquirer 2.0 introduced the concept of prompt "types", with the goal of making custom prompts easier than ever to create and use. ### Prompt The base `Prompt` class is used to create all other prompts. ```js const { Prompt } = require('enquirer'); class MyCustomPrompt extends Prompt {} ``` See the documentation for [creating custom prompts](#-custom-prompts) to learn more about how this works. 
#### Prompt Options Each prompt takes an options object (aka "question" object), that implements the following interface: ```js { // required type: string | function, name: string | function, message: string | function | async function, // optional skip: boolean | function | async function, initial: string | function | async function, format: function | async function, result: function | async function, validate: function | async function, } ``` Each property of the options object is described below: | **Property** | **Required?** | **Type** | **Description** | | ------------ | ------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `type` | yes | `string\|function` | Enquirer uses this value to determine the type of prompt to run, but it's optional when prompts are run directly. | | `name` | yes | `string\|function` | Used as the key for the answer on the returned values (answers) object. | | `message` | yes | `string\|function` | The message to display when the prompt is rendered in the terminal. | | `skip` | no | `boolean\|function` | If `true` it will not ask that prompt. | | `initial` | no | `string\|function` | The default value to return if the user does not supply a value. | | `format` | no | `function` | Function to format user input in the terminal. | | `result` | no | `function` | Function to format the final submitted value before it's returned. | | `validate` | no | `function` | Function to validate the submitted value before it's returned. This function may return a boolean or a string. If a string is returned it will be used as the validation error message. | **Example usage** ```js const { prompt } = require('enquirer'); const question = { type: 'input', name: 'username', message: 'What is your username?' }; prompt(question) .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` <br> ### Built-in prompts * [AutoComplete Prompt](#autocomplete-prompt) * [BasicAuth Prompt](#basicauth-prompt) * [Confirm Prompt](#confirm-prompt) * [Form Prompt](#form-prompt) * [Input Prompt](#input-prompt) * [Invisible Prompt](#invisible-prompt) * [List Prompt](#list-prompt) * [MultiSelect Prompt](#multiselect-prompt) * [Numeral Prompt](#numeral-prompt) * [Password Prompt](#password-prompt) * [Quiz Prompt](#quiz-prompt) * [Survey Prompt](#survey-prompt) * [Scale Prompt](#scale-prompt) * [Select Prompt](#select-prompt) * [Sort Prompt](#sort-prompt) * [Snippet Prompt](#snippet-prompt) * [Toggle Prompt](#toggle-prompt) ### AutoComplete Prompt Prompt that auto-completes as the user types, and returns the selected value as a string. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/autocomplete-prompt.gif" alt="Enquirer AutoComplete Prompt" width="750"> </p> **Example Usage** ```js const { AutoComplete } = require('enquirer'); const prompt = new AutoComplete({ name: 'flavor', message: 'Pick your favorite flavor', limit: 10, initial: 2, choices: [ 'Almond', 'Apple', 'Banana', 'Blackberry', 'Blueberry', 'Cherry', 'Chocolate', 'Cinnamon', 'Coconut', 'Cranberry', 'Grape', 'Nougat', 'Orange', 'Pear', 'Pineapple', 'Raspberry', 'Strawberry', 'Vanilla', 'Watermelon', 'Wintergreen' ] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **AutoComplete Options** | Option | Type | Default | Description | | ----------- | ---------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | | `highlight` | `function` | `dim` version of primary style | The color to use when "highlighting" characters in the list that match user input. | | `multiple` | `boolean` | `false` | Allow multiple choices to be selected. | | `suggest` | `function` | Greedy match, returns true if choice message contains input string. | Function that filters choices. Takes user input and a choices array, and returns a list of matching choices. | | `initial` | `number` | 0 | Preselected item in the list of choices. | | `footer` | `function` | None | Function that displays [footer text](https://github.com/enquirer/enquirer/blob/6c2819518a1e2ed284242a99a685655fbaabfa28/examples/autocomplete/option-footer.js#L10) | **Related prompts** * [Select](#select-prompt) * [MultiSelect](#multiselect-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### BasicAuth Prompt Prompt that asks for username and password to authenticate the user. The default implementation of `authenticate` function in `BasicAuth` prompt is to compare the username and password with the values supplied while running the prompt. The implementer is expected to override the `authenticate` function with a custom logic such as making an API request to a server to authenticate the username and password entered and expect a token back. <p align="center"> <img src="https://user-images.githubusercontent.com/13731210/61570485-7ffd9c00-aaaa-11e9-857a-d47dc7008284.gif" alt="Enquirer BasicAuth Prompt" width="750"> </p> **Example Usage** ```js const { BasicAuth } = require('enquirer'); const prompt = new BasicAuth({ name: 'password', message: 'Please enter your password', username: 'rajat-sr', password: '123', showPassword: true }); prompt .run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Confirm Prompt Prompt that returns `true` or `false`. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/confirm-prompt.gif" alt="Enquirer Confirm Prompt" width="750"> </p> **Example Usage** ```js const { Confirm } = require('enquirer'); const prompt = new Confirm({ name: 'question', message: 'Want to answer?' 
}); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Numeral](#numeral-prompt) * [Password](#password-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Form Prompt Prompt that allows the user to enter and submit multiple values on a single terminal screen. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/form-prompt.gif" alt="Enquirer Form Prompt" width="750"> </p> **Example Usage** ```js const { Form } = require('enquirer'); const prompt = new Form({ name: 'user', message: 'Please provide the following information:', choices: [ { name: 'firstname', message: 'First Name', initial: 'Jon' }, { name: 'lastname', message: 'Last Name', initial: 'Schlinkert' }, { name: 'username', message: 'GitHub username', initial: 'jonschlinkert' } ] }); prompt.run() .then(value => console.log('Answer:', value)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Input Prompt Prompt that takes user input and returns a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/input-prompt.gif" alt="Enquirer Input Prompt" width="750"> </p> **Example Usage** ```js const { Input } = require('enquirer'); const prompt = new Input({ message: 'What is your username?', initial: 'jonschlinkert' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.log); ``` You can use [data-store](https://github.com/jonschlinkert/data-store) to store [input history](https://github.com/enquirer/enquirer/blob/master/examples/input/option-history.js) that the user can cycle through (see [source](https://github.com/enquirer/enquirer/blob/8407dc3579123df5e6e20215078e33bb605b0c37/lib/prompts/input.js)). **Related prompts** * [Confirm](#confirm-prompt) * [Numeral](#numeral-prompt) * [Password](#password-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Invisible Prompt Prompt that takes user input, hides it from the terminal, and returns a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/invisible-prompt.gif" alt="Enquirer Invisible Prompt" width="750"> </p> **Example Usage** ```js const { Invisible } = require('enquirer'); const prompt = new Invisible({ name: 'secret', message: 'What is your secret?' }); prompt.run() .then(answer => console.log('Answer:', { secret: answer })) .catch(console.error); ``` **Related prompts** * [Password](#password-prompt) * [Input](#input-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### List Prompt Prompt that returns a list of values, created by splitting the user input. The default split character is `,` with optional trailing whitespace. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/list-prompt.gif" alt="Enquirer List Prompt" width="750"> </p> **Example Usage** ```js const { List } = require('enquirer'); const prompt = new List({ name: 'keywords', message: 'Type comma-separated keywords' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Sort](#sort-prompt) * [Select](#select-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### MultiSelect Prompt Prompt that allows the user to select multiple items from a list of options. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/multiselect-prompt.gif" alt="Enquirer MultiSelect Prompt" width="750"> </p> **Example Usage** ```js const { MultiSelect } = require('enquirer'); const prompt = new MultiSelect({ name: 'value', message: 'Pick your favorite colors', limit: 7, choices: [ { name: 'aqua', value: '#00ffff' }, { name: 'black', value: '#000000' }, { name: 'blue', value: '#0000ff' }, { name: 'fuchsia', value: '#ff00ff' }, { name: 'gray', value: '#808080' }, { name: 'green', value: '#008000' }, { name: 'lime', value: '#00ff00' }, { name: 'maroon', value: '#800000' }, { name: 'navy', value: '#000080' }, { name: 'olive', value: '#808000' }, { name: 'purple', value: '#800080' }, { name: 'red', value: '#ff0000' }, { name: 'silver', value: '#c0c0c0' }, { name: 'teal', value: '#008080' }, { name: 'white', value: '#ffffff' }, { name: 'yellow', value: '#ffff00' } ] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); // Answer: ['aqua', 'blue', 'fuchsia'] ``` **Example key-value pairs** Optionally, pass a `result` function and use the `.map` method to return an object of key-value pairs of the selected names and values: [example](./examples/multiselect/option-result.js) ```js const { MultiSelect } = require('enquirer'); const prompt = new MultiSelect({ name: 'value', message: 'Pick your favorite colors', limit: 7, choices: [ { name: 'aqua', value: '#00ffff' }, { name: 'black', value: '#000000' }, { name: 'blue', value: '#0000ff' }, { name: 'fuchsia', value: '#ff00ff' }, { name: 'gray', value: '#808080' }, { name: 'green', value: '#008000' }, { name: 'lime', value: '#00ff00' }, { name: 'maroon', value: '#800000' }, { name: 'navy', value: '#000080' }, { name: 'olive', value: '#808000' }, { name: 'purple', value: '#800080' }, { name: 'red', value: '#ff0000' }, { name: 'silver', value: '#c0c0c0' }, { name: 'teal', value: '#008080' }, { name: 'white', value: '#ffffff' }, { name: 'yellow', value: '#ffff00' } ], result(names) { return this.map(names); } }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); // Answer: { aqua: '#00ffff', blue: '#0000ff', fuchsia: '#ff00ff' } ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Numeral Prompt Prompt that takes a number as input. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/numeral-prompt.gif" alt="Enquirer Numeral Prompt" width="750"> </p> **Example Usage** ```js const { NumberPrompt } = require('enquirer'); const prompt = new NumberPrompt({ name: 'number', message: 'Please enter a number' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Confirm](#confirm-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Password Prompt Prompt that takes user input and masks it in the terminal. Also see the [invisible prompt](#invisible-prompt) <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/password-prompt.gif" alt="Enquirer Password Prompt" width="750"> </p> **Example Usage** ```js const { Password } = require('enquirer'); const prompt = new Password({ name: 'password', message: 'What is your password?' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Invisible](#invisible-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Quiz Prompt Prompt that allows the user to play multiple-choice quiz questions. <p align="center"> <img src="https://user-images.githubusercontent.com/13731210/61567561-891d4780-aa6f-11e9-9b09-3d504abd24ed.gif" alt="Enquirer Quiz Prompt" width="750"> </p> **Example Usage** ```js const { Quiz } = require('enquirer'); const prompt = new Quiz({ name: 'countries', message: 'How many countries are there in the world?', choices: ['165', '175', '185', '195', '205'], correctChoice: 3 }); prompt .run() .then(answer => { if (answer.correct) { console.log('Correct!'); } else { console.log(`Wrong! Correct answer is ${answer.correctAnswer}`); } }) .catch(console.error); ``` **Quiz Options** | Option | Type | Required | Description | | ----------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------------------ | | `choices` | `array` | Yes | The list of possible answers to the quiz question. | | `correctChoice`| `number` | Yes | Index of the correct choice from the `choices` array. | **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Survey Prompt Prompt that allows the user to provide feedback for a list of questions. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/survey-prompt.gif" alt="Enquirer Survey Prompt" width="750"> </p> **Example Usage** ```js const { Survey } = require('enquirer'); const prompt = new Survey({ name: 'experience', message: 'Please rate your experience', scale: [ { name: '1', message: 'Strongly Disagree' }, { name: '2', message: 'Disagree' }, { name: '3', message: 'Neutral' }, { name: '4', message: 'Agree' }, { name: '5', message: 'Strongly Agree' } ], margin: [0, 0, 2, 1], choices: [ { name: 'interface', message: 'The website has a friendly interface.' }, { name: 'navigation', message: 'The website is easy to navigate.' }, { name: 'images', message: 'The website usually has good images.' }, { name: 'upload', message: 'The website makes it easy to upload images.' }, { name: 'colors', message: 'The website has a pleasing color palette.' 
} ] }); prompt.run() .then(value => console.log('ANSWERS:', value)) .catch(console.error); ``` **Related prompts** * [Scale](#scale-prompt) * [Snippet](#snippet-prompt) * [Select](#select-prompt) *** ### Scale Prompt A more compact version of the [Survey prompt](#survey-prompt), the Scale prompt allows the user to quickly provide feedback using a [Likert Scale](https://en.wikipedia.org/wiki/Likert_scale). <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/scale-prompt.gif" alt="Enquirer Scale Prompt" width="750"> </p> **Example Usage** ```js const { Scale } = require('enquirer'); const prompt = new Scale({ name: 'experience', message: 'Please rate your experience', scale: [ { name: '1', message: 'Strongly Disagree' }, { name: '2', message: 'Disagree' }, { name: '3', message: 'Neutral' }, { name: '4', message: 'Agree' }, { name: '5', message: 'Strongly Agree' } ], margin: [0, 0, 2, 1], choices: [ { name: 'interface', message: 'The website has a friendly interface.', initial: 2 }, { name: 'navigation', message: 'The website is easy to navigate.', initial: 2 }, { name: 'images', message: 'The website usually has good images.', initial: 2 }, { name: 'upload', message: 'The website makes it easy to upload images.', initial: 2 }, { name: 'colors', message: 'The website has a pleasing color palette.', initial: 2 } ] }); prompt.run() .then(value => console.log('ANSWERS:', value)) .catch(console.error); ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Select Prompt Prompt that allows the user to select from a list of options. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/select-prompt.gif" alt="Enquirer Select Prompt" width="750"> </p> **Example Usage** ```js const { Select } = require('enquirer'); const prompt = new Select({ name: 'color', message: 'Pick a flavor', choices: ['apple', 'grape', 'watermelon', 'cherry', 'orange'] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [MultiSelect](#multiselect-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Sort Prompt Prompt that allows the user to sort items in a list. **Example** In this [example](https://github.com/enquirer/enquirer/raw/master/examples/sort/prompt.js), custom styling is applied to the returned values to make it easier to see what's happening. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/sort-prompt.gif" alt="Enquirer Sort Prompt" width="750"> </p> **Example Usage** ```js const colors = require('ansi-colors'); const { Sort } = require('enquirer'); const prompt = new Sort({ name: 'colors', message: 'Sort the colors in order of preference', hint: 'Top is best, bottom is worst', numbered: true, choices: ['red', 'white', 'green', 'cyan', 'yellow'].map(n => ({ name: n, message: colors[n](n) })) }); prompt.run() .then(function(answer = []) { console.log(answer); console.log('Your preferred order of colors is:'); console.log(answer.map(key => colors[key](key)).join('\n')); }) .catch(console.error); ``` **Related prompts** * [List](#list-prompt) * [Select](#select-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Snippet Prompt Prompt that allows the user to replace placeholders in a snippet of code or text. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/snippet-prompt.gif" alt="Prompts" width="750"> </p> **Example Usage** ```js const semver = require('semver'); const { Snippet } = require('enquirer'); const prompt = new Snippet({ name: 'username', message: 'Fill out the fields in package.json', required: true, fields: [ { name: 'author_name', message: 'Author Name' }, { name: 'version', validate(value, state, item, index) { if (item && item.name === 'version' && !semver.valid(value)) { return prompt.styles.danger('version should be a valid semver value'); } return true; } } ], template: `{ "name": "\${name}", "description": "\${description}", "version": "\${version}", "homepage": "https://github.com/\${username}/\${name}", "author": "\${author_name} (https://github.com/\${username})", "repository": "\${username}/\${name}", "license": "\${license:ISC}" } ` }); prompt.run() .then(answer => console.log('Answer:', answer.result)) .catch(console.error); ``` **Related prompts** * [Survey](#survey-prompt) * [AutoComplete](#autocomplete-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Toggle Prompt Prompt that allows the user to toggle between two values then returns `true` or `false`. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/toggle-prompt.gif" alt="Enquirer Toggle Prompt" width="750"> </p> **Example Usage** ```js const { Toggle } = require('enquirer'); const prompt = new Toggle({ message: 'Want to answer?', enabled: 'Yep', disabled: 'Nope' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Confirm](#confirm-prompt) * [Input](#input-prompt) * [Sort](#sort-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Prompt Types There are 5 (soon to be 6!) type classes: * [ArrayPrompt](#arrayprompt) - [Options](#options) - [Properties](#properties) - [Methods](#methods) - [Choices](#choices) - [Defining choices](#defining-choices) - [Choice properties](#choice-properties) - [Related prompts](#related-prompts) * [AuthPrompt](#authprompt) * [BooleanPrompt](#booleanprompt) * DatePrompt (Coming Soon!) * [NumberPrompt](#numberprompt) * [StringPrompt](#stringprompt) Each type is a low-level class that may be used as a starting point for creating higher level prompts. Continue reading to learn how. ### ArrayPrompt The `ArrayPrompt` class is used for creating prompts that display a list of choices in the terminal. 
For example, Enquirer uses this class as the basis for the [Select](#select-prompt) and [Survey](#survey-prompt) prompts.

#### Options

In addition to the [options](#options) available to all prompts, Array prompts also support the following options.

| **Option**  | **Required?** | **Type**         | **Description**                                                                                                          |
| ----------- | ------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------ |
| `autofocus` | `no`          | `string\|number` | The index or name of the choice that should have focus when the prompt loads. Only one choice may have focus at a time. |
| `stdin`     | `no`          | `stream`         | The input stream to use for emitting keypress events. Defaults to `process.stdin`.                                      |
| `stdout`    | `no`          | `stream`         | The output stream to use for writing the prompt to the terminal. Defaults to `process.stdout`.                          |

#### Properties

Array prompts have the following instance properties and getters.

| **Property name** | **Type** | **Description** |
| ----------------- | -------- | --------------- |
| `choices` | `array` | Array of choices that have been normalized from the choices passed on the prompt options. |
| `cursor` | `number` | Position of the cursor relative to the _user input (string)_. |
| `enabled` | `array` | Returns an array of enabled choices. |
| `focused` | `object` | The choice that currently has focus; equivalent to `prompt.choices[prompt.index]`. This is similar to the concept of focus in HTML and CSS. Focused choices are always visible (on-screen). When a list of choices is longer than the list of visible choices, and an off-screen choice is _focused_, the list will scroll to the focused choice and re-render. |
| `index` | `number` | Position of the pointer in the _visible list (array) of choices_. |
| `limit` | `number` | The number of choices to display on-screen. |
| `selected` | `array` | Either a list of enabled choices (when `options.multiple` is true) or the currently focused choice. |
| `visible` | `array` | The list of choices that are currently visible on-screen (at most `limit` choices). |

#### Methods

| **Method** | **Description** |
| ------------- | --------------- |
| `pointer()` | Returns the visual symbol used to identify the choice that currently has focus. The `❯` symbol is often used for this. The pointer is not always visible, as with the `autocomplete` prompt. |
| `indicator()` | Returns the visual symbol that indicates whether or not a choice is checked/enabled. |
| `focus()` | Sets focus on a choice, if it can be focused. |
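The following is a small usage sketch that is not part of the original documentation; it shows how the `autofocus`, `stdin`, and `stdout` options listed above might be passed to a prompt that extends `ArrayPrompt`, such as [Select](#select-prompt).

```js
const { Select } = require('enquirer');

// Select extends ArrayPrompt, so the ArrayPrompt options above apply here.
const prompt = new Select({
  name: 'color',
  message: 'Pick a color',
  choices: ['red', 'green', 'blue'],
  autofocus: 'green',      // give the "green" choice focus when the prompt loads
  stdin: process.stdin,    // shown explicitly, but these are already the defaults
  stdout: process.stdout
});

prompt.run()
  .then(answer => console.log('Answer:', answer))
  .catch(console.error);
```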
#### Choices

Array prompts support the `choices` option, which is the array of choices users will be able to select from when rendered in the terminal.

**Type**: `string|object`

**Example**

```js
const { prompt } = require('enquirer');

const questions = [{
  type: 'select',
  name: 'color',
  message: 'Favorite color?',
  initial: 1,
  choices: [
    { name: 'red', message: 'Red', value: '#ff0000' },     //<= choice object
    { name: 'green', message: 'Green', value: '#00ff00' }, //<= choice object
    { name: 'blue', message: 'Blue', value: '#0000ff' }    //<= choice object
  ]
}];

let answers = await prompt(questions);
console.log('Answer:', answers.color);
```

#### Defining choices

Whether defined as a string or object, choices are normalized to the following interface:

```js
{
  name: string;
  message: string | undefined;
  value: string | undefined;
  hint: string | undefined;
  disabled: boolean | string | undefined;
}
```

**Example**

```js
const question = {
  name: 'fruit',
  message: 'Favorite fruit?',
  choices: ['Apple', 'Orange', 'Raspberry']
};
```

Normalizes to the following when the prompt is run:

```js
const question = {
  name: 'fruit',
  message: 'Favorite fruit?',
  choices: [
    { name: 'Apple', message: 'Apple', value: 'Apple' },
    { name: 'Orange', message: 'Orange', value: 'Orange' },
    { name: 'Raspberry', message: 'Raspberry', value: 'Raspberry' }
  ]
};
```

#### Choice properties

The following properties are supported on `choice` objects.

| **Option** | **Type** | **Description** |
| ----------- | ----------------- | --------------- |
| `name` | `string` | The unique key to identify a choice. |
| `message` | `string` | The message to display in the terminal. `name` is used when this is undefined. |
| `value` | `string` | Value to associate with the choice. Useful for creating key-value pairs from user choices. `name` is used when this is undefined. |
| `choices` | `array` | Array of "child" choices. |
| `hint` | `string` | Help message to display next to a choice. |
| `role` | `string` | Determines how the choice will be displayed. Currently the only supported role is `separator`. Additional roles may be added in the future (like `heading`, etc). Please create a [feature request](../../issues/new) if you need other roles. |
| `enabled` | `boolean` | Enable a choice by default. This is only supported when `options.multiple` is true or on prompts that support multiple choices, like [MultiSelect](#multiselect-prompt). |
| `disabled` | `boolean\|string` | Disable a choice so that it cannot be selected. This value may either be `true`, `false`, or a message to display. |
| `indicator` | `string\|function` | Custom indicator to render for a choice (like a check or radio button). |

#### Related prompts

* [AutoComplete](#autocomplete-prompt)
* [Form](#form-prompt)
* [MultiSelect](#multiselect-prompt)
* [Select](#select-prompt)
* [Survey](#survey-prompt)

***

### AuthPrompt

The `AuthPrompt` class is used to create prompts that log a user in using any authentication method. For example, Enquirer uses this class as the basis for the [BasicAuth Prompt](#basicauth-prompt). You can also find prompt examples in the `examples/auth/` folder that use `AuthPrompt` to create an OAuth-based authentication prompt, a prompt that authenticates using a time-based OTP, and others.

`AuthPrompt` has a factory function that creates an instance of the `AuthPrompt` class. It expects an `authenticate` function as an argument, which overrides the `authenticate` method of the `AuthPrompt` class.
#### Methods

| **Method** | **Description** |
| ------------- | --------------- |
| `authenticate()` | Contains all of the authentication logic. This function should be overridden to implement custom authentication logic. The default `authenticate` function throws an error if no other function is provided. |

#### Choices

Auth prompt supports the `choices` option, which is similar to the choices used in the [Form Prompt](#form-prompt).

**Example**

```js
const { AuthPrompt } = require('enquirer');

function authenticate(value, state) {
  if (value.username === this.options.username && value.password === this.options.password) {
    return true;
  }
  return false;
}

const CustomAuthPrompt = AuthPrompt.create(authenticate);

const prompt = new CustomAuthPrompt({
  name: 'password',
  message: 'Please enter your password',
  username: 'rajat-sr',
  password: '1234567',
  choices: [
    { name: 'username', message: 'username' },
    { name: 'password', message: 'password' }
  ]
});

prompt
  .run()
  .then(answer => console.log('Authenticated?', answer))
  .catch(console.error);
```

#### Related prompts

* [BasicAuth Prompt](#basicauth-prompt)

***

### BooleanPrompt

The `BooleanPrompt` class is used for creating prompts that display and return a boolean value.

```js
const { BooleanPrompt } = require('enquirer');

const prompt = new BooleanPrompt({
  header: '========================',
  message: 'Do you love enquirer?',
  footer: '========================',
});

prompt.run()
  .then(answer => console.log('Selected:', answer))
  .catch(console.error);
```

**Returns**: `boolean`

***

### NumberPrompt

The `NumberPrompt` class is used for creating prompts that display and return a numerical value.

```js
const { NumberPrompt } = require('enquirer');

const prompt = new NumberPrompt({
  header: '************************',
  message: 'Input the Numbers:',
  footer: '************************',
});

prompt.run()
  .then(answer => console.log('Numbers are:', answer))
  .catch(console.error);
```

**Returns**: `string|number` (number, or number formatted as a string)

***

### StringPrompt

The `StringPrompt` class is used for creating prompts that display and return a string value.

```js
const { StringPrompt } = require('enquirer');

const prompt = new StringPrompt({
  header: '************************',
  message: 'Input the String:',
  footer: '************************'
});

prompt.run()
  .then(answer => console.log('String is:', answer))
  .catch(console.error);
```

**Returns**: `string`

<br>

## ❯ Custom prompts

With Enquirer 2.0, custom prompts are easier than ever to create and use.

**How do I create a custom prompt?**

Custom prompts are created by extending one of the following:

* Enquirer's `Prompt` class,
* one of the built-in [prompts](#-prompts), or
* the low-level [types](#-types).

<!-- Example: HaiKarate Custom Prompt -->

```js
const { Prompt } = require('enquirer');

class HaiKarate extends Prompt {
  constructor(options = {}) {
    super(options);
    this.value = options.initial || 0;
    this.cursorHide();
  }
  up() {
    this.value++;
    this.render();
  }
  down() {
    this.value--;
    this.render();
  }
  render() {
    this.clear(); // clear previously rendered prompt from the terminal
    this.write(`${this.state.message}: ${this.value}`);
  }
}

// Use the prompt by creating an instance of your custom prompt class.
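// NOTE (added, not part of the original example): Enquirer's base Prompt class
// dispatches keypress events to same-named methods on the prompt, so the up()
// and down() methods defined above are expected to run when the user presses
// the up/down arrow keys, incrementing or decrementing the value and re-rendering.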
const prompt = new HaiKarate({ message: 'How many sprays do you want?', initial: 10 }); prompt.run() .then(answer => console.log('Sprays:', answer)) .catch(console.error); ``` If you want to be able to specify your prompt by `type` so that it may be used alongside other prompts, you will need to first create an instance of `Enquirer`. ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); ``` Then use the `.register()` method to add your custom prompt. ```js enquirer.register('haikarate', HaiKarate); ``` Now you can do the following when defining "questions". ```js let spritzer = require('cologne-drone'); let answers = await enquirer.prompt([ { type: 'haikarate', name: 'cologne', message: 'How many sprays do you need?', initial: 10, async onSubmit(name, value) { await spritzer.activate(value); //<= activate drone return value; } } ]); ``` <br> ## ❯ Key Bindings ### All prompts These key combinations may be used with all prompts. | **command** | **description** | | -------------------------------- | -------------------------------------- | | <kbd>ctrl</kbd> + <kbd>c</kbd> | Cancel the prompt. | | <kbd>ctrl</kbd> + <kbd>g</kbd> | Reset the prompt to its initial state. | <br> ### Move cursor These combinations may be used on prompts that support user input (eg. [input prompt](#input-prompt), [password prompt](#password-prompt), and [invisible prompt](#invisible-prompt)). | **command** | **description** | | ------------------------------ | ---------------------------------------- | | <kbd>left</kbd> | Move the cursor back one character. | | <kbd>right</kbd> | Move the cursor forward one character. | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move cursor to the start of the line | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move cursor to the end of the line | | <kbd>ctrl</kbd> + <kbd>b</kbd> | Move cursor back one character | | <kbd>ctrl</kbd> + <kbd>f</kbd> | Move cursor forward one character | | <kbd>ctrl</kbd> + <kbd>x</kbd> | Toggle between first and cursor position | <br> ### Edit Input These key combinations may be used on prompts that support user input (eg. [input prompt](#input-prompt), [password prompt](#password-prompt), and [invisible prompt](#invisible-prompt)). | **command** | **description** | | ------------------------------ | ---------------------------------------- | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move cursor to the start of the line | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move cursor to the end of the line | | <kbd>ctrl</kbd> + <kbd>b</kbd> | Move cursor back one character | | <kbd>ctrl</kbd> + <kbd>f</kbd> | Move cursor forward one character | | <kbd>ctrl</kbd> + <kbd>x</kbd> | Toggle between first and cursor position | <br> | **command (Mac)** | **command (Windows)** | **description** | | ----------------------------------- | -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | | <kbd>delete</kbd> | <kbd>backspace</kbd> | Delete one character to the left. | | <kbd>fn</kbd> + <kbd>delete</kbd> | <kbd>delete</kbd> | Delete one character to the right. | | <kbd>option</kbd> + <kbd>up</kbd> | <kbd>alt</kbd> + <kbd>up</kbd> | Scroll to the previous item in history ([Input prompt](#input-prompt) only, when [history is enabled](examples/input/option-history.js)). 
| | <kbd>option</kbd> + <kbd>down</kbd> | <kbd>alt</kbd> + <kbd>down</kbd> | Scroll to the next item in history ([Input prompt](#input-prompt) only, when [history is enabled](examples/input/option-history.js)). | ### Select choices These key combinations may be used on prompts that support _multiple_ choices, such as the [multiselect prompt](#multiselect-prompt), or the [select prompt](#select-prompt) when the `multiple` options is true. | **command** | **description** | | ----------------- | -------------------------------------------------------------------------------------------------------------------- | | <kbd>space</kbd> | Toggle the currently selected choice when `options.multiple` is true. | | <kbd>number</kbd> | Move the pointer to the choice at the given index. Also toggles the selected choice when `options.multiple` is true. | | <kbd>a</kbd> | Toggle all choices to be enabled or disabled. | | <kbd>i</kbd> | Invert the current selection of choices. | | <kbd>g</kbd> | Toggle the current choice group. | <br> ### Hide/show choices | **command** | **description** | | ------------------------------- | ---------------------------------------------- | | <kbd>fn</kbd> + <kbd>up</kbd> | Decrease the number of visible choices by one. | | <kbd>fn</kbd> + <kbd>down</kbd> | Increase the number of visible choices by one. | <br> ### Move/lock Pointer | **command** | **description** | | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------- | | <kbd>number</kbd> | Move the pointer to the choice at the given index. Also toggles the selected choice when `options.multiple` is true. | | <kbd>up</kbd> | Move the pointer up. | | <kbd>down</kbd> | Move the pointer down. | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move the pointer to the first _visible_ choice. | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move the pointer to the last _visible_ choice. | | <kbd>shift</kbd> + <kbd>up</kbd> | Scroll up one choice without changing pointer position (locks the pointer while scrolling). | | <kbd>shift</kbd> + <kbd>down</kbd> | Scroll down one choice without changing pointer position (locks the pointer while scrolling). | <br> | **command (Mac)** | **command (Windows)** | **description** | | -------------------------------- | --------------------- | ---------------------------------------------------------- | | <kbd>fn</kbd> + <kbd>left</kbd> | <kbd>home</kbd> | Move the pointer to the first choice in the choices array. | | <kbd>fn</kbd> + <kbd>right</kbd> | <kbd>end</kbd> | Move the pointer to the last choice in the choices array. | <br> ## ❯ Release History Please see [CHANGELOG.md](CHANGELOG.md). ## ❯ Performance ### System specs MacBook Pro, Intel Core i7, 2.5 GHz, 16 GB. ### Load time Time it takes for the module to load the first time (average of 3 runs): ``` enquirer: 4.013ms inquirer: 286.717ms ``` <br> ## ❯ About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Todo We're currently working on documentation for the following items. Please star and watch the repository for updates! 
* [ ] Customizing symbols * [ ] Customizing styles (palette) * [ ] Customizing rendered input * [ ] Customizing returned values * [ ] Customizing key bindings * [ ] Question validation * [ ] Choice validation * [ ] Skipping questions * [ ] Async choices * [ ] Async timers: loaders, spinners and other animations * [ ] Links to examples </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` ```sh $ yarn && yarn test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> #### Contributors | **Commits** | **Contributor** | | --- | --- | | 283 | [jonschlinkert](https://github.com/jonschlinkert) | | 82 | [doowb](https://github.com/doowb) | | 32 | [rajat-sr](https://github.com/rajat-sr) | | 20 | [318097](https://github.com/318097) | | 15 | [g-plane](https://github.com/g-plane) | | 12 | [pixelass](https://github.com/pixelass) | | 5 | [adityavyas611](https://github.com/adityavyas611) | | 5 | [satotake](https://github.com/satotake) | | 3 | [tunnckoCore](https://github.com/tunnckoCore) | | 3 | [Ovyerus](https://github.com/Ovyerus) | | 3 | [sw-yx](https://github.com/sw-yx) | | 2 | [DanielRuf](https://github.com/DanielRuf) | | 2 | [GabeL7r](https://github.com/GabeL7r) | | 1 | [AlCalzone](https://github.com/AlCalzone) | | 1 | [hipstersmoothie](https://github.com/hipstersmoothie) | | 1 | [danieldelcore](https://github.com/danieldelcore) | | 1 | [ImgBotApp](https://github.com/ImgBotApp) | | 1 | [jsonkao](https://github.com/jsonkao) | | 1 | [knpwrs](https://github.com/knpwrs) | | 1 | [yeskunall](https://github.com/yeskunall) | | 1 | [mischah](https://github.com/mischah) | | 1 | [renarsvilnis](https://github.com/renarsvilnis) | | 1 | [sbugert](https://github.com/sbugert) | | 1 | [stephencweiss](https://github.com/stephencweiss) | | 1 | [skellock](https://github.com/skellock) | | 1 | [whxaxes](https://github.com/whxaxes) | #### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) #### Credit Thanks to [derhuerst](https://github.com/derhuerst), creator of prompt libraries such as [prompt-skeleton](https://github.com/derhuerst/prompt-skeleton), which influenced some of the concepts we used in our prompts. #### License Copyright © 2018-present, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). argparse ======== [![Build Status](https://secure.travis-ci.org/nodeca/argparse.svg?branch=master)](http://travis-ci.org/nodeca/argparse) [![NPM version](https://img.shields.io/npm/v/argparse.svg)](https://www.npmjs.org/package/argparse) CLI arguments parser for node.js. Javascript port of python's [argparse](http://docs.python.org/dev/library/argparse.html) module (original version 3.2). That's a full port, except some very rare options, recorded in issue tracker. **NB. Difference with original.** - Method names changed to camelCase. 
See [generated docs](http://nodeca.github.com/argparse/).
- Use `defaultValue` instead of `default`.
- Use `argparse.Const.REMAINDER` instead of `argparse.REMAINDER`, and similarly for constant values `OPTIONAL`, `ZERO_OR_MORE`, and `ONE_OR_MORE` (aliases for `nargs` values `'?'`, `'*'`, `'+'`, respectively), and `SUPPRESS`.

Example
=======

test.js file:

```javascript
#!/usr/bin/env node
'use strict';

var ArgumentParser = require('../lib/argparse').ArgumentParser;
var parser = new ArgumentParser({
  version: '0.0.1',
  addHelp: true,
  description: 'Argparse example'
});
parser.addArgument(
  [ '-f', '--foo' ],
  { help: 'foo bar' }
);
parser.addArgument(
  [ '-b', '--bar' ],
  { help: 'bar foo' }
);
parser.addArgument(
  '--baz',
  { help: 'baz bar' }
);
var args = parser.parseArgs();
console.dir(args);
```

Display help:

```
$ ./test.js -h
usage: test.js [-h] [-v] [-f FOO] [-b BAR] [--baz BAZ]

Argparse example

Optional arguments:
  -h, --help         Show this help message and exit.
  -v, --version      Show program's version number and exit.
  -f FOO, --foo FOO  foo bar
  -b BAR, --bar BAR  bar foo
  --baz BAZ          baz bar
```

Parse arguments:

```
$ ./test.js -f=3 --bar=4 --baz 5
{ foo: '3', bar: '4', baz: '5' }
```

More [examples](https://github.com/nodeca/argparse/tree/master/examples).

ArgumentParser objects
======================

```
new ArgumentParser({parameters hash});
```

Creates a new ArgumentParser object.

**Supported params:**

- ```description``` - Text to display before the argument help.
- ```epilog``` - Text to display after the argument help.
- ```addHelp``` - Add a -h/--help option to the parser. (default: true)
- ```argumentDefault``` - Set the global default value for arguments. (default: null)
- ```parents``` - A list of ArgumentParser objects whose arguments should also be included.
- ```prefixChars``` - The set of characters that prefix optional arguments. (default: '-')
- ```formatterClass``` - A class for customizing the help output.
- ```prog``` - The name of the program (default: `path.basename(process.argv[1])`)
- ```usage``` - The string describing the program usage (default: generated)
- ```conflictHandler``` - Usually unnecessary, defines strategy for resolving conflicting optionals.

**Not supported yet**

- ```fromfilePrefixChars``` - The set of characters that prefix files from which additional arguments should be read.

Details in [original ArgumentParser guide](http://docs.python.org/dev/library/argparse.html#argumentparser-objects)

addArgument() method
====================

```
ArgumentParser.addArgument(name or flag or [name] or [flags...], {options})
```

Defines how a single command-line argument should be parsed.

- ```name or flag or [name] or [flags...]``` - Either a positional name (e.g., `'foo'`), a single option (e.g., `'-f'` or `'--foo'`), an array of a single positional name (e.g., `['foo']`), or an array of options (e.g., `['-f', '--foo']`).

Options:

- ```action``` - The basic type of action to be taken when this argument is encountered at the command line.
- ```nargs``` - The number of command-line arguments that should be consumed.
- ```constant``` - A constant value required by some action and nargs selections.
- ```defaultValue``` - The value produced if the argument is absent from the command line.
- ```type``` - The type to which the command-line argument should be converted.
- ```choices``` - A container of the allowable values for the argument.
- ```required``` - Whether or not the command-line option may be omitted (optionals only).
- ```help``` - A brief description of what the argument does.
- ```metavar``` - A name for the argument in usage messages.
- ```dest``` - The name of the attribute to be added to the object returned by parseArgs().

Details in [original add_argument guide](http://docs.python.org/dev/library/argparse.html#the-add-argument-method)
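Below is a short sketch that is not part of the original README; it combines several of the options above in a single `addArgument()` call (the `--mode` flag and its values are made up for illustration).

```javascript
var ArgumentParser = require('argparse').ArgumentParser;
var parser = new ArgumentParser({ description: 'addArgument options example' });

// Hypothetical flag combining action, choices, defaultValue, dest, and help.
parser.addArgument(['-m', '--mode'], {
  action: 'store',              // store the value (the default action)
  choices: ['fast', 'slow'],    // only these values are allowed
  defaultValue: 'fast',         // used when --mode is absent
  dest: 'mode',                 // attribute name on the parsed result
  help: 'processing mode'
});

var args = parser.parseArgs(['--mode', 'slow']);
console.dir(args); // { mode: 'slow' }
```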
Action (some details)
================

ArgumentParser objects associate command-line arguments with actions. These actions can do just about anything with the command-line arguments associated with them, though most actions simply add an attribute to the object returned by parseArgs(). The action keyword argument specifies how the command-line arguments should be handled. The supported actions are:

- ```store``` - Just stores the argument's value. This is the default action.
- ```storeConst``` - Stores the value specified by the const keyword argument. (Note that the const keyword argument defaults to the rather unhelpful None.) The 'storeConst' action is most commonly used with optional arguments that specify some sort of flag.
- ```storeTrue``` and ```storeFalse``` - Store the values True and False, respectively. These are special cases of 'storeConst'.
- ```append``` - Stores a list, and appends each argument value to the list. This is useful to allow an option to be specified multiple times.
- ```appendConst``` - Stores a list, and appends the value specified by the const keyword argument to the list. (Note that the const keyword argument defaults to None.) The 'appendConst' action is typically used when multiple arguments need to store constants to the same list.
- ```count``` - Counts the number of times a keyword argument occurs. This is useful, for example, for increasing verbosity levels.
- ```help``` - Prints a complete help message for all the options in the current parser and then exits. By default a help action is automatically added to the parser. See ArgumentParser for details of how the output is created.
- ```version``` - Prints version information and exits. Expects a `version=` keyword argument in the addArgument() call.

Details in [original action guide](http://docs.python.org/dev/library/argparse.html#action)

Sub-commands
============

ArgumentParser.addSubparsers()

Many programs split their functionality into a number of sub-commands; for example, the svn program can invoke sub-commands like `svn checkout`, `svn update`, and `svn commit`. Splitting up functionality this way can be a particularly good idea when a program performs several different functions which require different kinds of command-line arguments. `ArgumentParser` supports the creation of such sub-commands with the `addSubparsers()` method. The `addSubparsers()` method is normally called with no arguments and returns a special action object. This object has a single method, `addParser()`, which takes a command name and any `ArgumentParser` constructor arguments, and returns an `ArgumentParser` object that can be modified as usual.
Example: sub_commands.js

```javascript
#!/usr/bin/env node
'use strict';

var ArgumentParser = require('../lib/argparse').ArgumentParser;
var parser = new ArgumentParser({
  version: '0.0.1',
  addHelp: true,
  description: 'Argparse examples: sub-commands',
});

var subparsers = parser.addSubparsers({
  title: 'subcommands',
  dest: "subcommand_name"
});

var bar = subparsers.addParser('c1', { addHelp: true });
bar.addArgument(
  [ '-f', '--foo' ],
  { action: 'store', help: 'foo3 bar3' }
);
var bar = subparsers.addParser(
  'c2',
  { aliases: ['co'], addHelp: true }
);
bar.addArgument(
  [ '-b', '--bar' ],
  { action: 'store', type: 'int', help: 'foo3 bar3' }
);

var args = parser.parseArgs();
console.dir(args);
```

Details in [original sub-commands guide](http://docs.python.org/dev/library/argparse.html#sub-commands)

Contributors
============

- [Eugene Shkuropat](https://github.com/shkuropat)
- [Paul Jacobson](https://github.com/hpaulj)

[others](https://github.com/nodeca/argparse/graphs/contributors)

License
=======

Copyright (c) 2012 [Vitaly Puzrin](https://github.com/puzrin). Released under the MIT license. See [LICENSE](https://github.com/nodeca/argparse/blob/master/LICENSE) for details.

# inherits

Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor).

This package exports the standard `inherits` from the node.js `util` module in a node environment, but also provides an alternative browser-friendly implementation through the [browser field](https://gist.github.com/shtylman/4339901). The alternative implementation is a literal copy of the standard one, located in a standalone module to avoid requiring `util`. It also has a shim for old browsers with no `Object.create` support.

While ensuring that you are using the standard `inherits` implementation in a node.js environment, it allows bundlers such as [browserify](https://github.com/substack/node-browserify) to avoid including the full `util` package in your client code when all you need is the `inherits` function. This is worthwhile because the browser shim for the `util` package is large and `inherits` is often the only function you need from it.

It's recommended to use this package instead of `require('util').inherits` for any code that may be used in the browser as well as in node.js.

## usage

```js
var inherits = require('inherits');
// then use exactly as the standard one
```

## note on version ~1.0

Version ~1.0 had a completely different motivation and is compatible neither with 2.0 nor with the standard node.js `inherits`. If you are using version ~1.0 and planning to switch to ~2.0, be careful:

* the new version uses `super_` instead of `super` for referencing the superclass
* the new version overwrites the current prototype while the old one preserves any existing fields on it

Shims used when bundling asc for browser usage.

# line-column

[![Build Status](https://travis-ci.org/io-monad/line-column.svg?branch=master)](https://travis-ci.org/io-monad/line-column) [![Coverage Status](https://coveralls.io/repos/github/io-monad/line-column/badge.svg?branch=master)](https://coveralls.io/github/io-monad/line-column?branch=master) [![npm version](https://badge.fury.io/js/line-column.svg)](https://badge.fury.io/js/line-column)

Node module to efficiently convert an index to/from line-column position in a string.

## Install

    npm install line-column

## Usage

### lineColumn(str, options = {})

Returns a `LineColumnFinder` instance for the given string `str`.
#### Options

| Key | Description | Default |
| ------- | ----------- | ------- |
| `origin` | The origin value of line number and column number | `1` |

### lineColumn(str, index)

This is just a shorthand for `lineColumn(str).fromIndex(index)`.

### LineColumnFinder#fromIndex(index)

Find line and column from an index in the string.

Parameters:

- `index` - `number` Index in the string. (0-origin)

Returns:

- `{ line: x, col: y }` Found line number and column number.
- `null` if the given index is out of range.

### LineColumnFinder#toIndex(line, column)

Find index from line and column in the string.

Parameters:

- `line` - `number` Line number in the string.
- `column` - `number` Column number in the string.

or

- `{ line: x, col: y }` - `Object` line and column numbers in the string.<br>A key name `column` can be used instead of `col`.

or

- `[ line, col ]` - `Array` line and column numbers in the string.

Returns:

- `number` Found index in the string.
- `-1` if the given line or column is out of range.

## Example

```js
var lineColumn = require("line-column");

var testString = [
  "ABCDEFG\n",        // line:0, index:0
  "HIJKLMNOPQRSTU\n", // line:1, index:8
  "VWXYZ\n",          // line:2, index:23
  "日本語の文字\n",    // line:3, index:29
  "English words"     // line:4, index:36
].join("");           // length:49

lineColumn(testString).fromIndex(3)   // { line: 1, col: 4 }
lineColumn(testString).fromIndex(33)  // { line: 4, col: 5 }
lineColumn(testString).toIndex(1, 4)  // 3
lineColumn(testString).toIndex(4, 5)  // 33

// Shorthand of .fromIndex (compatible with find-line-column)
lineColumn(testString, 33)  // { line: 4, col: 5 }

// Object or Array is also acceptable
lineColumn(testString).toIndex({ line: 4, col: 5 })     // 33
lineColumn(testString).toIndex({ line: 4, column: 5 })  // 33
lineColumn(testString).toIndex([4, 5])                  // 33

// You can cache the finder for the same string. This is much more efficient (see the benchmark below).
var finder = lineColumn(testString);
finder.fromIndex(33)  // { line: 4, col: 5 }
finder.toIndex(4, 5)  // 33

// For 0-origin line and column numbers
var zeroOrigin = lineColumn(testString, { origin: 0 });
zeroOrigin.fromIndex(33)  // { line: 3, col: 4 }
zeroOrigin.toIndex(3, 4)  // 33
```

## Testing

    npm test

## Benchmark

The popular package [find-line-column](https://www.npmjs.com/package/find-line-column) provides the same "index to line-column" feature. Here is some benchmarking on `line-column` vs `find-line-column`. You can run this benchmark by `npm run benchmark`. See [benchmark/](benchmark/) for the source code.

```
long text + line-column (not cached)  x     72,989 ops/sec ±0.83% (89 runs sampled)
long text + line-column (cached)      x 13,074,242 ops/sec ±0.32% (89 runs sampled)
long text + find-line-column         x     33,887 ops/sec ±0.54% (84 runs sampled)
short text + line-column (not cached) x  1,636,766 ops/sec ±0.77% (82 runs sampled)
short text + line-column (cached)     x 21,699,686 ops/sec ±1.04% (82 runs sampled)
short text + find-line-column        x    382,145 ops/sec ±1.04% (85 runs sampled)
```

As you might have noticed, even the non-cached version of `line-column` is 2x-4x faster than `find-line-column`, and the cached version of `line-column` is a remarkable 50x-380x faster.

## Contributing

1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -am 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5.
Submit a pull request :D ## License MIT (See LICENSE) # y18n [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js const __ = require('y18n')().__; console.log(__('my awesome string %s', 'foo')); ``` output: `my awesome string foo` _using tagged template literals_ ```js const __ = require('y18n')().__; const str = 'foo'; console.log(__`my awesome string ${str}`); ``` output: `my awesome string foo` _pluralization support:_ ```js const __n = require('y18n')().__n; console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')); ``` output: `2 fishes foo` ## Deno Example As of `v5` `y18n` supports [Deno](https://github.com/denoland/deno): ```typescript import y18n from "https://deno.land/x/y18n/deno.ts"; const __ = y18n({ locale: 'pirate', directory: './test/locales' }).__ console.info(__`Hi, ${'Ben'} ${'Coe'}!`) ``` You will need to run with `--allow-read` to load alternative locales. ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## License ISC [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard # color-convert [![Build Status](https://travis-ci.org/Qix-/color-convert.svg?branch=master)](https://travis-ci.org/Qix-/color-convert) Color-convert is a color conversion library for JavaScript and node. 
It converts all ways between `rgb`, `hsl`, `hsv`, `hwb`, `cmyk`, `ansi`, `ansi16`, `hex` strings, and CSS `keyword`s (will round to closest): ```js var convert = require('color-convert'); convert.rgb.hsl(140, 200, 100); // [96, 48, 59] convert.keyword.rgb('blue'); // [0, 0, 255] var rgbChannels = convert.rgb.channels; // 3 var cmykChannels = convert.cmyk.channels; // 4 var ansiChannels = convert.ansi16.channels; // 1 ``` # Install ```console $ npm install color-convert ``` # API Simply get the property of the _from_ and _to_ conversion that you're looking for. All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on `.raw` to the function. All 'from' functions have a hidden property called `.channels` that indicates the number of channels the function expects (not including alpha). ```js var convert = require('color-convert'); // Hex to LAB convert.hex.lab('DEADBF'); // [ 76, 21, -2 ] convert.hex.lab.raw('DEADBF'); // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ] // RGB to CMYK convert.rgb.cmyk(167, 255, 4); // [ 35, 0, 98, 0 ] convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ] ``` ### Arrays All functions that accept multiple arguments also support passing an array. Note that this does **not** apply to functions that convert from a color that only requires one value (e.g. `keyword`, `ansi256`, `hex`, etc.) ```js var convert = require('color-convert'); convert.rgb.hex(123, 45, 67); // '7B2D43' convert.rgb.hex([123, 45, 67]); // '7B2D43' ``` ## Routing Conversions that don't have an _explicitly_ defined conversion (in [conversions.js](conversions.js)), but can be converted by means of sub-conversions (e.g. XYZ -> **RGB** -> CMYK), are automatically routed together. This allows just about any color model supported by `color-convert` to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> **LAB** -> **XYZ** -> **RGB** -> Hex). Keep in mind that extensive conversions _may_ result in a loss of precision, and exist only to be complete. For a list of "direct" (single-step) conversions, see [conversions.js](conversions.js). # Contribute If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request. # License Copyright &copy; 2011-2016, Heather Arthur and Josh Junon. Licensed under the [MIT License](LICENSE). These files are compiled dot templates from dot folder. Do NOT edit them directly, edit the templates and run `npm run build` from main ajv folder. 
# safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. 
### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. 
Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. 
This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. 
These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. 
Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) # isexe Minimal module to check if a file is executable, and a normal file. Uses `fs.stat` and tests against the `PATHEXT` environment variable on Windows. ## USAGE ```javascript var isexe = require('isexe') isexe('some-file-name', function (err, isExe) { if (err) { console.error('probably file does not exist or something', err) } else if (isExe) { console.error('this thing can be run') } else { console.error('cannot be run') } }) // same thing but synchronous, throws errors var isExe = isexe.sync('some-file-name') // treat errors as just "not executable" isexe('maybe-missing-file', { ignoreErrors: true }, callback) var isExe = isexe.sync('maybe-missing-file', { ignoreErrors: true }) ``` ## API ### `isexe(path, [options], [callback])` Check if the path is executable. If no callback provided, and a global `Promise` object is available, then a Promise will be returned. Will raise whatever errors may be raised by `fs.stat`, unless `options.ignoreErrors` is set to true. ### `isexe.sync(path, [options])` Same as `isexe` but returns the value and throws any errors raised. ### Options * `ignoreErrors` Treat all errors as "no, this is not executable", but don't raise them. * `uid` Number to use as the user id * `gid` Number to use as the group id * `pathExt` List of path extensions to use instead of `PATHEXT` environment variable on Windows. ## Follow Redirects Drop-in replacement for Nodes `http` and `https` that automatically follows redirects. 
[![npm version](https://img.shields.io/npm/v/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) [![Build Status](https://travis-ci.org/follow-redirects/follow-redirects.svg?branch=master)](https://travis-ci.org/follow-redirects/follow-redirects) [![Coverage Status](https://coveralls.io/repos/follow-redirects/follow-redirects/badge.svg?branch=master)](https://coveralls.io/r/follow-redirects/follow-redirects?branch=master) [![Dependency Status](https://david-dm.org/follow-redirects/follow-redirects.svg)](https://david-dm.org/follow-redirects/follow-redirects) [![npm downloads](https://img.shields.io/npm/dm/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) `follow-redirects` provides [request](https://nodejs.org/api/http.html#http_http_request_options_callback) and [get](https://nodejs.org/api/http.html#http_http_get_options_callback) methods that behave identically to those found on the native [http](https://nodejs.org/api/http.html#http_http_request_options_callback) and [https](https://nodejs.org/api/https.html#https_https_request_options_callback) modules, with the exception that they will seamlessly follow redirects. ```javascript var http = require('follow-redirects').http; var https = require('follow-redirects').https; http.get('http://bit.ly/900913', function (response) { response.on('data', function (chunk) { console.log(chunk); }); }).on('error', function (err) { console.error(err); }); ``` You can inspect the final redirected URL through the `responseUrl` property on the `response`. If no redirection happened, `responseUrl` is the original request URL. ```javascript https.request({ host: 'bitly.com', path: '/UHfDGO', }, function (response) { console.log(response.responseUrl); // 'http://duckduckgo.com/robots.txt' }); ``` ## Options ### Global options Global options are set directly on the `follow-redirects` module: ```javascript var followRedirects = require('follow-redirects'); followRedirects.maxRedirects = 10; followRedirects.maxBodyLength = 20 * 1024 * 1024; // 20 MB ``` The following global options are supported: - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. ### Per-request options Per-request options are set by passing an `options` object: ```javascript var url = require('url'); var followRedirects = require('follow-redirects'); var options = url.parse('http://bit.ly/900913'); options.maxRedirects = 10; http.request(options); ``` In addition to the [standard HTTP](https://nodejs.org/api/http.html#http_http_request_options_callback) and [HTTPS options](https://nodejs.org/api/https.html#https_https_request_options_callback), the following per-request options are supported: - `followRedirects` (default: `true`) – whether redirects should be followed. - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. - `agents` (default: `undefined`) – sets the `agent` option per protocol, since HTTP and HTTPS use different agents. Example value: `{ http: new http.Agent(), https: new https.Agent() }` - `trackRedirects` (default: `false`) – whether to store the redirected response details into the `redirects` array on the response object. 
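As a quick sketch of how the per-request options above combine in practice (reusing the shortened URL from the earlier example), enabling `trackRedirects` exposes the visited responses on `response.redirects`:

```javascript
var url = require('url');
var http = require('follow-redirects').http;

// Start from standard http options and mix in follow-redirects options.
var options = url.parse('http://bit.ly/900913');
options.maxRedirects = 5;
options.trackRedirects = true;

http.get(options, function (response) {
  console.log('Final URL:', response.responseUrl);
  console.log('Redirects recorded:', response.redirects.length);
}).on('error', function (err) {
  console.error(err);
});
```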
### Advanced usage

By default, `follow-redirects` will use the Node.js default implementations of [`http`](https://nodejs.org/api/http.html) and [`https`](https://nodejs.org/api/https.html). To enable features such as caching and/or intermediate request tracking, you might instead want to wrap `follow-redirects` around custom protocol implementations:

```javascript
var followRedirects = require('follow-redirects').wrap({
  http: require('your-custom-http'),
  https: require('your-custom-https'),
});
```

Such custom protocols only need an implementation of the `request` method.

## Browserify Usage

Due to the way `XMLHttpRequest` works, the `browserify` versions of `http` and `https` already follow redirects. If you are *only* targeting the browser, then this library has little value for you.

If you want to write cross-platform code for Node and the browser, `follow-redirects` provides a great solution for making the native Node modules behave the same as they do in browserified builds. To avoid bundling unnecessary code, you should tell browserify to swap out `follow-redirects` with the standard modules when bundling. To make this easier, you need to change how you require the modules:

```javascript
var http = require('follow-redirects/http');
var https = require('follow-redirects/https');
```

You can then replace `follow-redirects` in your browserify configuration like so:

```javascript
"browser": {
  "follow-redirects/http" : "http",
  "follow-redirects/https" : "https"
}
```

The `browserify-http` module has not kept pace with Node development, and no longer behaves identically to the native module when running in the browser. If you are experiencing problems, you may want to check out [browserify-http-2](https://www.npmjs.com/package/http-browserify-2). It is more actively maintained and attempts to address a few of the shortcomings of `browserify-http`. In that case, your browserify config should look something like this:

```javascript
"browser": {
  "follow-redirects/http" : "browserify-http-2/http",
  "follow-redirects/https" : "browserify-http-2/https"
}
```

## Contributing

Pull Requests are always welcome. Please [file an issue](https://github.com/follow-redirects/follow-redirects/issues) detailing your proposal before you invest your valuable time. Additional features and bug fixes should be accompanied by tests. You can run the test suite locally with a simple `npm test` command.

## Debug Logging

`follow-redirects` uses the excellent [debug](https://www.npmjs.com/package/debug) for logging. To turn on logging, set the environment variable `DEBUG=follow-redirects` for debug output from just this module. When running the test suite, it is sometimes advantageous to set `DEBUG=*` to see output from the express server as well.

## Authors

- Olivier Lalonde (olalonde@gmail.com)
- James Talmage (james@talmage.io)
- [Ruben Verborgh](https://ruben.verborgh.org/)

## License

[MIT License](https://github.com/follow-redirects/follow-redirects/blob/master/LICENSE)

## Timezone support

In order to provide support for timezones, without relying on the JavaScript host or any other time-zone aware environment, this library makes use of the IANA Timezone Database directly: https://www.iana.org/time-zones

The database files are parsed by the scripts in this folder, which emit AssemblyScript code that is used to process the various rules at runtime.
# binary-install

Install .tar.gz binary applications via npm

## Usage

This library provides a single class `Binary` that takes a download url and some optional arguments. You **must** provide either `name` or `installDirectory` when creating your `Binary`.

| option           | description                                    |
| ---------------- | ---------------------------------------------- |
| name             | The name of your binary                        |
| installDirectory | A path to the directory to install the binary  |

If an `installDirectory` is not provided, the binary will be installed at your OS-specific config directory. On macOS it defaults to `~/Library/Preferences/${name}-nodejs`

After your `Binary` has been created, you can run `.install()` to install the binary, and `.run()` to run it.

### Example

This is meant to be used as a library - create your `Binary` with your desired options, then call `.install()` in the `postinstall` of your `package.json`, `.run()` in the `bin` section of your `package.json`, and `.uninstall()` in the `preuninstall` section of your `package.json`. See [this example project](/example) to see how to create an npm package that installs and runs a binary using the GitHub releases API.

# csv-stringify

[![Build Status](https://api.travis-ci.org/adaltas/node-csv-stringify.svg)](https://travis-ci.org/#!/adaltas/node-csv-stringify) [![NPM](https://img.shields.io/npm/dm/csv-stringify)](https://www.npmjs.com/package/csv-stringify) [![NPM](https://img.shields.io/npm/v/csv-stringify)](https://www.npmjs.com/package/csv-stringify)

This package is a stringifier converting records into a CSV text and implementing the Node.js [`stream.Transform` API](https://nodejs.org/api/stream.html). It also provides simpler synchronous and callback-based APIs for convenience. It is both extremely easy to use and powerful. It was first released in 2010 and is tested against big data sets by a large community.

## Documentation

* [Project homepage](http://csv.js.org/stringify/)
* [API](http://csv.js.org/stringify/api/)
* [Options](http://csv.js.org/stringify/options/)
* [Examples](http://csv.js.org/stringify/examples/)

## Main features

* Follows the Node.js streaming API
* Simplicity with the optional callback API
* Support for custom formatters, delimiters, quotes, escape characters and header
* Supports big datasets
* Complete test coverage and samples for inspiration
* Only 1 external dependency
* To be used conjointly with `csv-generate`, `csv-parse` and `stream-transform`
* MIT License

## Usage

The module is built on the Node.js Stream API. For the sake of simplicity, a simple callback API is also provided. To give you a quick look, here's an example of the callback API:

```javascript
const stringify = require('csv-stringify')
const assert = require('assert')
// import stringify from 'csv-stringify'
// import assert from 'assert/strict'

const input = [ [ '1', '2', '3', '4' ], [ 'a', 'b', 'c', 'd' ] ]
stringify(input, function(err, output) {
  const expected = '1,2,3,4\na,b,c,d\n'
  assert.strictEqual(output, expected, `output.should.eql ${expected}`)
  console.log("Passed.", output)
})
```

## Development

Tests are executed with mocha. To install it, run `npm install` followed by `npm test`. It will install mocha and its dependencies in your project "node_modules" directory and run the test suite. The tests run against the CoffeeScript source files. To generate the JavaScript files, run `npm run build`.

The test suite is run online with [Travis](https://travis-ci.org/#!/adaltas/node-csv-stringify).
See the [Travis definition file](https://github.com/adaltas/node-csv-stringify/blob/master/.travis.yml) to view the tested Node.js version.

## Contributors

* David Worms: <https://github.com/wdavidw>

[csv_home]: https://github.com/adaltas/node-csv
[stream_transform]: http://nodejs.org/api/stream.html#stream_class_stream_transform
[examples]: http://csv.js.org/stringify/examples/
[csv]: https://github.com/adaltas/node-csv

<table><thead>
  <tr>
    <th>Linux</th>
    <th>OS X</th>
    <th>Windows</th>
    <th>Coverage</th>
    <th>Downloads</th>
  </tr>
</thead><tbody><tr>
  <td colspan="2" align="center">
    <a href="https://travis-ci.org/kaelzhang/node-ignore">
    <img src="https://travis-ci.org/kaelzhang/node-ignore.svg?branch=master" alt="Build Status" /></a>
  </td>
  <td align="center">
    <a href="https://ci.appveyor.com/project/kaelzhang/node-ignore">
    <img src="https://ci.appveyor.com/api/projects/status/github/kaelzhang/node-ignore?branch=master&svg=true" alt="Windows Build Status" /></a>
  </td>
  <td align="center">
    <a href="https://codecov.io/gh/kaelzhang/node-ignore">
    <img src="https://codecov.io/gh/kaelzhang/node-ignore/branch/master/graph/badge.svg" alt="Coverage Status" /></a>
  </td>
  <td align="center">
    <a href="https://www.npmjs.org/package/ignore">
    <img src="http://img.shields.io/npm/dm/ignore.svg" alt="npm module downloads per month" /></a>
  </td>
</tr></tbody></table>

# ignore

`ignore` is a manager, filter and parser which is implemented in pure JavaScript according to the .gitignore [spec](http://git-scm.com/docs/gitignore).

Pay attention that [`minimatch`](https://www.npmjs.org/package/minimatch) does not work in the gitignore way. To filter filenames according to a .gitignore file, I recommend this module.

##### Tested on

- Linux + Node: `0.8` - `7.x`
- Windows + Node: `0.10` - `7.x`, node < `0.10` is not tested due to the lack of support of appveyor.

Actually, `ignore` does not rely on any specific version of Node.

Since `4.0.0`, `ignore` will no longer support `node < 6` by default; to use it in node < 6, `require('ignore/legacy')`. For details, see [CHANGELOG](https://github.com/kaelzhang/node-ignore/blob/master/CHANGELOG.md).

## Table Of Main Contents

- [Usage](#usage)
- [`Pathname` Conventions](#pathname-conventions)
- [Guide for 2.x -> 3.x](#upgrade-2x---3x)
- [Guide for 3.x -> 4.x](#upgrade-3x---4x)
- See Also:
  - [`glob-gitignore`](https://www.npmjs.com/package/glob-gitignore) matches files using patterns and filters them according to gitignore rules.

## Usage

```js
import ignore from 'ignore'
const ig = ignore().add(['.abc/*', '!.abc/d/'])
```

### Filter the given paths

```js
const paths = [
  '.abc/a.js',    // filtered out
  '.abc/d/e.js'   // included
]

ig.filter(paths)        // ['.abc/d/e.js']
ig.ignores('.abc/a.js') // true
```

### As the filter function

```js
paths.filter(ig.createFilter()); // ['.abc/d/e.js']
```

### Win32 paths will be handled

```js
ig.filter(['.abc\\a.js', '.abc\\d\\e.js'])
// if the code above runs on windows, the result will be
// ['.abc\\d\\e.js']
```

## Why another ignore?

- `ignore` is a standalone module, and is much simpler so that it can easily work with other programs, unlike [isaacs](https://npmjs.org/~isaacs)'s [fstream-ignore](https://npmjs.org/package/fstream-ignore) which must work with the modules of the fstream family.

- `ignore` only contains utility methods to filter paths according to the specified ignore rules, so
  - `ignore` never tries to find out the ignore rules by traversing directories or fetching them from git configurations.
  - `ignore` doesn't care about sub-modules of git projects.
- Exactly according to [gitignore man page](http://git-scm.com/docs/gitignore), fixes some known matching issues of fstream-ignore, such as: - '`/*.js`' should only match '`a.js`', but not '`abc/a.js`'. - '`**/foo`' should match '`foo`' anywhere. - Prevent re-including a file if a parent directory of that file is excluded. - Handle trailing whitespaces: - `'a '`(one space) should not match `'a '`(two spaces). - `'a \ '` matches `'a '` - All test cases are verified with the result of `git check-ignore`. # Methods ## .add(pattern: string | Ignore): this ## .add(patterns: Array<string | Ignore>): this - **pattern** `String | Ignore` An ignore pattern string, or the `Ignore` instance - **patterns** `Array<String | Ignore>` Array of ignore patterns. Adds a rule or several rules to the current manager. Returns `this` Notice that a line starting with `'#'`(hash) is treated as a comment. Put a backslash (`'\'`) in front of the first hash for patterns that begin with a hash, if you want to ignore a file with a hash at the beginning of the filename. ```js ignore().add('#abc').ignores('#abc') // false ignore().add('\#abc').ignores('#abc') // true ``` `pattern` could either be a line of ignore pattern or a string of multiple ignore patterns, which means we could just `ignore().add()` the content of a ignore file: ```js ignore() .add(fs.readFileSync(filenameOfGitignore).toString()) .filter(filenames) ``` `pattern` could also be an `ignore` instance, so that we could easily inherit the rules of another `Ignore` instance. ## <strike>.addIgnoreFile(path)</strike> REMOVED in `3.x` for now. To upgrade `ignore@2.x` up to `3.x`, use ```js import fs from 'fs' if (fs.existsSync(filename)) { ignore().add(fs.readFileSync(filename).toString()) } ``` instead. ## .filter(paths: Array<Pathname>): Array<Pathname> ```ts type Pathname = string ``` Filters the given array of pathnames, and returns the filtered array. - **paths** `Array.<Pathname>` The array of `pathname`s to be filtered. ### `Pathname` Conventions: #### 1. `Pathname` should be a `path.relative()`d pathname `Pathname` should be a string that have been `path.join()`ed, or the return value of `path.relative()` to the current directory. ```js // WRONG ig.ignores('./abc') // WRONG, for it will never happen. // If the gitignore rule locates at the root directory, // `'/abc'` should be changed to `'abc'`. // ``` // path.relative('/', '/abc') -> 'abc' // ``` ig.ignores('/abc') // Right ig.ignores('abc') // Right ig.ignores(path.join('./abc')) // path.join('./abc') -> 'abc' ``` In other words, each `Pathname` here should be a relative path to the directory of the gitignore rules. Suppose the dir structure is: ``` /path/to/your/repo |-- a | |-- a.js | |-- .b | |-- .c |-- .DS_store ``` Then the `paths` might be like this: ```js [ 'a/a.js' '.b', '.c/.DS_store' ] ``` Usually, you could use [`glob`](http://npmjs.org/package/glob) with `option.mark = true` to fetch the structure of the current directory: ```js import glob from 'glob' glob('**', { // Adds a / character to directory matches. mark: true }, (err, files) => { if (err) { return console.error(err) } let filtered = ignore().add(patterns).filter(files) console.log(filtered) }) ``` #### 2. 
filenames and dirnames `node-ignore` does NO `fs.stat` during path matching, so for the example below: ```js ig.add('config/') // `ig` does NOT know if 'config' is a normal file, directory or something ig.ignores('config') // And it returns `false` ig.ignores('config/') // returns `true` ``` Specially for people who develop some library based on `node-ignore`, it is important to understand that. ## .ignores(pathname: Pathname): boolean > new in 3.2.0 Returns `Boolean` whether `pathname` should be ignored. ```js ig.ignores('.abc/a.js') // true ``` ## .createFilter() Creates a filter function which could filter an array of paths with `Array.prototype.filter`. Returns `function(path)` the filter function. ## `options.ignorecase` since 4.0.0 Similar as the `core.ignorecase` option of [git-config](https://git-scm.com/docs/git-config), `node-ignore` will be case insensitive if `options.ignorecase` is set to `true` (default value), otherwise case sensitive. ```js const ig = ignore({ ignorecase: false }) ig.add('*.png') ig.ignores('*.PNG') // false ``` **** # Upgrade Guide ## Upgrade 2.x -> 3.x - All `options` of 2.x are unnecessary and removed, so just remove them. - `ignore()` instance is no longer an [`EventEmitter`](nodejs.org/api/events.html), and all events are unnecessary and removed. - `.addIgnoreFile()` is removed, see the [.addIgnoreFile](#addignorefilepath) section for details. ## Upgrade 3.x -> 4.x Since `4.0.0`, `ignore` will no longer support node < 6, to use `ignore` in node < 6: ```js var ignore = require('ignore/legacy') ``` **** # Collaborators - [@whitecolor](https://github.com/whitecolor) *Alex* - [@SamyPesse](https://github.com/SamyPesse) *Samy Pessé* - [@azproduction](https://github.com/azproduction) *Mikhail Davydov* - [@TrySound](https://github.com/TrySound) *Bogdan Chadkin* - [@JanMattner](https://github.com/JanMattner) *Jan Mattner* - [@ntwb](https://github.com/ntwb) *Stephen Edgar* - [@kasperisager](https://github.com/kasperisager) *Kasper Isager* - [@sandersn](https://github.com/sandersn) *Nathan Shively-Sanders* # brace-expansion [Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html), as known from sh/bash, in JavaScript. 
[![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion) [![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion) [![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/) [![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion) ## Example ```js var expand = require('brace-expansion'); expand('file-{a,b,c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('-v{,,}') // => ['-v', '-v', '-v'] expand('file{0..2}.jpg') // => ['file0.jpg', 'file1.jpg', 'file2.jpg'] expand('file-{a..c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('file{2..0}.jpg') // => ['file2.jpg', 'file1.jpg', 'file0.jpg'] expand('file{0..4..2}.jpg') // => ['file0.jpg', 'file2.jpg', 'file4.jpg'] expand('file-{a..e..2}.jpg') // => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg'] expand('file{00..10..5}.jpg') // => ['file00.jpg', 'file05.jpg', 'file10.jpg'] expand('{{A..C},{a..c}}') // => ['A', 'B', 'C', 'a', 'b', 'c'] expand('ppp{,config,oe{,conf}}') // => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf'] ``` ## API ```js var expand = require('brace-expansion'); ``` ### var expanded = expand(str) Return an array of all possible and valid expansions of `str`. If none are found, `[str]` is returned. Valid expansions are: ```js /^(.*,)+(.+)?$/ // {a,b,...} ``` A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` A numeric sequence from `x` to `y` inclusive, with optional increment. If `x` or `y` start with a leading `0`, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` An alphabetic sequence from `x` to `y` inclusive, with optional increment. `x` and `y` must be exactly one character, and if given, `incr` must be a number. For compatibility reasons, the string `${` is not eligible for brace expansion. ## Installation With [npm](https://npmjs.org) do: ```bash npm install brace-expansion ``` ## Contributors - [Julian Gruber](https://github.com/juliangruber) - [Isaac Z. Schlueter](https://github.com/isaacs) ## Sponsors This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)! Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)! ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. iMurmurHash.js ============== An incremental implementation of the MurmurHash3 (32-bit) hashing algorithm for JavaScript based on [Gary Court's implementation](https://github.com/garycourt/murmurhash-js) with [kazuyukitanimura's modifications](https://github.com/kazuyukitanimura/murmurhash-js). This version works significantly faster than the non-incremental version if you need to hash many small strings into a single hash, since string concatenation (to build the single string to pass the non-incremental version) is fairly costly. In one case tested, using the incremental version was about 50% faster than concatenating 5-10 strings and then hashing. Installation ------------ To use iMurmurHash in the browser, [download the latest version](https://raw.github.com/jensyt/imurmurhash-js/master/imurmurhash.min.js) and include it as a script on your site. ```html <script type="text/javascript" src="/scripts/imurmurhash.min.js"></script> <script> // Your code here, access iMurmurHash using the global object MurmurHash3 </script> ``` --- To use iMurmurHash in Node.js, install the module using NPM: ```bash npm install imurmurhash ``` Then simply include it in your scripts: ```javascript MurmurHash3 = require('imurmurhash'); ``` Quick Example ------------- ```javascript // Create the initial hash var hashState = MurmurHash3('string'); // Incrementally add text hashState.hash('more strings'); hashState.hash('even more strings'); // All calls can be chained if desired hashState.hash('and').hash('some').hash('more'); // Get a result hashState.result(); // returns 0xe4ccfe6b ``` Functions --------- ### MurmurHash3 ([string], [seed]) Get a hash state object, optionally initialized with the given _string_ and _seed_. _Seed_ must be a positive integer if provided. Calling this function without the `new` keyword will return a cached state object that has been reset. This is safe to use as long as the object is only used from a single thread and no other hashes are created while operating on this one. If this constraint cannot be met, you can use `new` to create a new state object. For example: ```javascript // Use the cached object, calling the function again will return the same // object (but reset, so the current state would be lost) hashState = MurmurHash3(); ... // Create a new object that can be safely used however you wish. Calling the // function again will simply return a new state object, and no state loss // will occur, at the cost of creating more objects. hashState = new MurmurHash3(); ``` Both methods can be mixed however you like if you have different use cases. --- ### MurmurHash3.prototype.hash (string) Incrementally add _string_ to the hash. This can be called as many times as you want for the hash state object, including after a call to `result()`. Returns `this` so calls can be chained. --- ### MurmurHash3.prototype.result () Get the result of the hash as a 32-bit positive integer. This performs the tail and finalizer portions of the algorithm, but does not store the result in the state object. 
This means that it is perfectly safe to get results and then continue adding strings via `hash`. ```javascript // Do the whole string at once MurmurHash3('this is a test string').result(); // 0x70529328 // Do part of the string, get a result, then the other part var m = MurmurHash3('this is a'); m.result(); // 0xbfc4f834 m.hash(' test string').result(); // 0x70529328 (same as above) ``` --- ### MurmurHash3.prototype.reset ([seed]) Reset the state object for reuse, optionally using the given _seed_ (defaults to 0 like the constructor). Returns `this` so calls can be chained. --- License (MIT) ------------- Copyright (c) 2013 Gary Court, Jens Taylor Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # flat-cache > A stupidly simple key/value storage using files to persist the data [![NPM Version](http://img.shields.io/npm/v/flat-cache.svg?style=flat)](https://npmjs.org/package/flat-cache) [![Build Status](https://api.travis-ci.org/royriojas/flat-cache.svg?branch=master)](https://travis-ci.org/royriojas/flat-cache) ## install ```bash npm i --save flat-cache ``` ## Usage ```js var flatCache = require('flat-cache') // loads the cache, if one does not exists for the given // Id a new one will be prepared to be created var cache = flatCache.load('cacheId'); // sets a key on the cache cache.setKey('key', { foo: 'var' }); // get a key from the cache cache.getKey('key') // { foo: 'var' } // fetch the entire persisted object cache.all() // { 'key': { foo: 'var' } } // remove a key cache.removeKey('key'); // removes a key from the cache // save it to disk cache.save(); // very important, if you don't save no changes will be persisted. // cache.save( true /* noPrune */) // can be used to prevent the removal of non visited keys // loads the cache from a given directory, if one does // not exists for the given Id a new one will be prepared to be created var cache = flatCache.load('cacheId', path.resolve('./path/to/folder')); // The following methods are useful to clear the cache // delete a given cache flatCache.clearCacheById('cacheId') // removes the cacheId document if one exists. // delete all cache flatCache.clearAll(); // remove the cache directory ``` ## Motivation for this module I needed a super simple and dumb **in-memory cache** with optional disk persistance in order to make a script that will beutify files with `esformatter` only execute on the files that were changed since the last run. To make that possible we need to store the `fileSize` and `modificationTime` of the files. So a simple `key/value` storage was needed and Bam! 
this module was born. ## Important notes - If no directory is especified when the `load` method is called, a folder named `.cache` will be created inside the module directory when `cache.save` is called. If you're committing your `node_modules` to any vcs, you might want to ignore the default `.cache` folder, or specify a custom directory. - The values set on the keys of the cache should be `stringify-able` ones, meaning no circular references - All the changes to the cache state are done to memory - I could have used a timer or `Object.observe` to deliver the changes to disk, but I wanted to keep this module intentionally dumb and simple - Non visited keys are removed when `cache.save()` is called. If this is not desired, you can pass `true` to the save call like: `cache.save( true /* noPrune */ )`. ## License MIT ## Changelog [changelog](./changelog.md) # ansi-colors [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/ansi-colors.svg?style=flat)](https://www.npmjs.com/package/ansi-colors) [![NPM monthly downloads](https://img.shields.io/npm/dm/ansi-colors.svg?style=flat)](https://npmjs.org/package/ansi-colors) [![NPM total downloads](https://img.shields.io/npm/dt/ansi-colors.svg?style=flat)](https://npmjs.org/package/ansi-colors) [![Linux Build Status](https://img.shields.io/travis/doowb/ansi-colors.svg?style=flat&label=Travis)](https://travis-ci.org/doowb/ansi-colors) > Easily add ANSI colors to your text and symbols in the terminal. A faster drop-in replacement for chalk, kleur and turbocolor (without the dependencies and rendering bugs). Please consider following this project's author, [Brian Woodward](https://github.com/doowb), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save ansi-colors ``` ![image](https://user-images.githubusercontent.com/383994/39635445-8a98a3a6-4f8b-11e8-89c1-068c45d4fff8.png) ## Why use this? ansi-colors is _the fastest Node.js library for terminal styling_. A more performant drop-in replacement for chalk, with no dependencies. * _Blazing fast_ - Fastest terminal styling library in node.js, 10-20x faster than chalk! * _Drop-in replacement_ for [chalk](https://github.com/chalk/chalk). * _No dependencies_ (Chalk has 7 dependencies in its tree!) * _Safe_ - Does not modify the `String.prototype` like [colors](https://github.com/Marak/colors.js). * Supports [nested colors](#nested-colors), **and does not have the [nested styling bug](#nested-styling-bug) that is present in [colorette](https://github.com/jorgebucaran/colorette), [chalk](https://github.com/chalk/chalk), and [kleur](https://github.com/lukeed/kleur)**. * Supports [chained colors](#chained-colors). * [Toggle color support](#toggle-color-support) on or off. 
## Usage

```js
const c = require('ansi-colors');

console.log(c.red('This is a red string!'));
console.log(c.green('This is a green string!'));
console.log(c.cyan('This is a cyan string!'));
console.log(c.yellow('This is a yellow string!'));
```

![image](https://user-images.githubusercontent.com/383994/39653848-a38e67da-4fc0-11e8-89ae-98c65ebe9dcf.png)

## Chained colors

```js
console.log(c.bold.red('this is a bold red message'));
console.log(c.bold.yellow.italic('this is a bold yellow italicized message'));
console.log(c.green.bold.underline('this is a bold green underlined message'));
```

![image](https://user-images.githubusercontent.com/383994/39635780-7617246a-4f8c-11e8-89e9-05216cc54e38.png)

## Nested colors

```js
console.log(c.yellow(`foo ${c.red.bold('red')} bar ${c.cyan('cyan')} baz`));
```

![image](https://user-images.githubusercontent.com/383994/39635817-8ed93d44-4f8c-11e8-8afd-8c3ea35f5fbe.png)

### Nested styling bug

`ansi-colors` does not have the nested styling bug found in [colorette](https://github.com/jorgebucaran/colorette), [chalk](https://github.com/chalk/chalk), and [kleur](https://github.com/lukeed/kleur).

```js
const { bold, red } = require('ansi-colors');
console.log(bold(`foo ${red.dim('bar')} baz`));

const colorette = require('colorette');
console.log(colorette.bold(`foo ${colorette.red(colorette.dim('bar'))} baz`));

const kleur = require('kleur');
console.log(kleur.bold(`foo ${kleur.red.dim('bar')} baz`));

const chalk = require('chalk');
console.log(chalk.bold(`foo ${chalk.red.dim('bar')} baz`));
```

**Results in the following** (sans icons and labels)

![image](https://user-images.githubusercontent.com/383994/47280326-d2ee0580-d5a3-11e8-9611-ea6010f0a253.png)

## Toggle color support

Easily enable/disable colors.

```js
const c = require('ansi-colors');

// disable colors manually
c.enabled = false;

// or use a library to automatically detect support
c.enabled = require('color-support').hasBasic;

console.log(c.red('I will only be colored red if the terminal supports colors'));
```

## Strip ANSI codes

Use the `.unstyle` method to strip ANSI codes from a string.

```js
console.log(c.unstyle(c.blue.bold('foo bar baz')));
//=> 'foo bar baz'
```

## Available styles

**Note** that bright and bright-background colors are not always supported.

| Colors  | Background Colors | Bright Colors | Bright Background Colors |
| ------- | ----------------- | ------------- | ------------------------ |
| black   | bgBlack           | blackBright   | bgBlackBright            |
| red     | bgRed             | redBright     | bgRedBright              |
| green   | bgGreen           | greenBright   | bgGreenBright            |
| yellow  | bgYellow          | yellowBright  | bgYellowBright           |
| blue    | bgBlue            | blueBright    | bgBlueBright             |
| magenta | bgMagenta         | magentaBright | bgMagentaBright          |
| cyan    | bgCyan            | cyanBright    | bgCyanBright             |
| white   | bgWhite           | whiteBright   | bgWhiteBright            |
| gray    |                   |               |                          |
| grey    |                   |               |                          |

_(`gray` is the U.S. spelling, `grey` is more commonly used in Canada and the U.K.)_

### Style modifiers

* dim
* **bold**
* hidden
* _italic_
* underline
* inverse
* ~~strikethrough~~
* reset

## Aliases

Create custom aliases for styles.

```js
const colors = require('ansi-colors');

colors.alias('primary', colors.yellow);
colors.alias('secondary', colors.bold);

console.log(colors.primary.secondary('Foo'));
```

## Themes

A theme is an object of custom aliases.
```js const colors = require('ansi-colors'); colors.theme({ danger: colors.red, dark: colors.dim.gray, disabled: colors.gray, em: colors.italic, heading: colors.bold.underline, info: colors.cyan, muted: colors.dim, primary: colors.blue, strong: colors.bold, success: colors.green, underline: colors.underline, warning: colors.yellow }); // Now, we can use our custom styles alongside the built-in styles! console.log(colors.danger.strong.em('Error!')); console.log(colors.warning('Heads up!')); console.log(colors.info('Did you know...')); console.log(colors.success.bold('It worked!')); ``` ## Performance **Libraries tested** * ansi-colors v3.0.4 * chalk v2.4.1 ### Mac > MacBook Pro, Intel Core i7, 2.3 GHz, 16 GB. **Load time** Time it takes to load the first time `require()` is called: * ansi-colors - `1.915ms` * chalk - `12.437ms` **Benchmarks** ``` # All Colors ansi-colors x 173,851 ops/sec ±0.42% (91 runs sampled) chalk x 9,944 ops/sec ±2.53% (81 runs sampled))) # Chained colors ansi-colors x 20,791 ops/sec ±0.60% (88 runs sampled) chalk x 2,111 ops/sec ±2.34% (83 runs sampled) # Nested colors ansi-colors x 59,304 ops/sec ±0.98% (92 runs sampled) chalk x 4,590 ops/sec ±2.08% (82 runs sampled) ``` ### Windows > Windows 10, Intel Core i7-7700k CPU @ 4.2 GHz, 32 GB **Load time** Time it takes to load the first time `require()` is called: * ansi-colors - `1.494ms` * chalk - `11.523ms` **Benchmarks** ``` # All Colors ansi-colors x 193,088 ops/sec ±0.51% (95 runs sampled)) chalk x 9,612 ops/sec ±3.31% (77 runs sampled))) # Chained colors ansi-colors x 26,093 ops/sec ±1.13% (94 runs sampled) chalk x 2,267 ops/sec ±2.88% (80 runs sampled)) # Nested colors ansi-colors x 67,747 ops/sec ±0.49% (93 runs sampled) chalk x 4,446 ops/sec ±3.01% (82 runs sampled)) ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [ansi-wrap](https://www.npmjs.com/package/ansi-wrap): Create ansi colors by passing the open and close codes. | [homepage](https://github.com/jonschlinkert/ansi-wrap "Create ansi colors by passing the open and close codes.") * [strip-color](https://www.npmjs.com/package/strip-color): Strip ANSI color codes from a string. No dependencies. | [homepage](https://github.com/jonschlinkert/strip-color "Strip ANSI color codes from a string. 
No dependencies.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 48 | [jonschlinkert](https://github.com/jonschlinkert) | | 42 | [doowb](https://github.com/doowb) | | 6 | [lukeed](https://github.com/lukeed) | | 2 | [Silic0nS0ldier](https://github.com/Silic0nS0ldier) | | 1 | [dwieeb](https://github.com/dwieeb) | | 1 | [jorgebucaran](https://github.com/jorgebucaran) | | 1 | [madhavarshney](https://github.com/madhavarshney) | | 1 | [chapterjason](https://github.com/chapterjason) | ### Author **Brian Woodward** * [GitHub Profile](https://github.com/doowb) * [Twitter Profile](https://twitter.com/doowb) * [LinkedIn Profile](https://linkedin.com/in/woodwardbrian) ### License Copyright © 2019, [Brian Woodward](https://github.com/doowb). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on July 01, 2019._ Like `chown -R`. Takes the same arguments as `fs.chown()` # y18n [![Build Status][travis-image]][travis-url] [![Coverage Status][coveralls-image]][coveralls-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js var __ = require('y18n').__ console.log(__('my awesome string %s', 'foo')) ``` output: `my awesome string foo` _using tagged template literals_ ```js var __ = require('y18n').__ var str = 'foo' console.log(__`my awesome string ${str}`) ``` output: `my awesome string foo` _pluralization support:_ ```js var __n = require('y18n').__n console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')) ``` output: `2 fishes foo` ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. 
## License ISC [travis-url]: https://travis-ci.org/yargs/y18n [travis-image]: https://img.shields.io/travis/yargs/y18n.svg [coveralls-url]: https://coveralls.io/github/yargs/y18n [coveralls-image]: https://img.shields.io/coveralls/yargs/y18n.svg [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard <p align="center"> <img width="250" src="/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> [![Build Status][travis-image]][travis-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description : Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments. > <img width="400" src="/screen.png"> * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage : ### Simple Example ```javascript #!/usr/bin/env node const {argv} = require('yargs') if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node require('yargs') // eslint-disable-line .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ## Webpack See usage examples of yargs with webpack in [docs](/docs/webpack.md). ## Community : Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). 
## Documentation : ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Contributing](/contributing.md) [travis-url]: https://travis-ci.org/yargs/yargs [travis-image]: https://img.shields.io/travis/yargs/yargs/master.svg [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc Overview [![Build Status](https://travis-ci.org/lydell/js-tokens.svg?branch=master)](https://travis-ci.org/lydell/js-tokens) ======== A regex that tokenizes JavaScript. ```js var jsTokens = require("js-tokens").default var jsString = "var foo=opts.foo;\n..." jsString.match(jsTokens) // ["var", " ", "foo", "=", "opts", ".", "foo", ";", "\n", ...] ``` Installation ============ `npm install js-tokens` ```js import jsTokens from "js-tokens" // or: var jsTokens = require("js-tokens").default ``` Usage ===== ### `jsTokens` ### A regex with the `g` flag that matches JavaScript tokens. The regex _always_ matches, even invalid JavaScript and the empty string. The next match is always directly after the previous. ### `var token = matchToToken(match)` ### ```js import {matchToToken} from "js-tokens" // or: var matchToToken = require("js-tokens").matchToToken ``` Takes a `match` returned by `jsTokens.exec(string)`, and returns a `{type: String, value: String}` object. The following types are available: - string - comment - regex - number - name - punctuator - whitespace - invalid Multi-line comments and strings also have a `closed` property indicating if the token was closed or not (see below). Comments and strings both come in several flavors. To distinguish them, check if the token starts with `//`, `/*`, `'`, `"` or `` ` ``. Names are ECMAScript IdentifierNames, that is, including both identifiers and keywords. You may use [is-keyword-js] to tell them apart. Whitespace includes both line terminators and other whitespace. [is-keyword-js]: https://github.com/crissdev/is-keyword-js ECMAScript support ================== The intention is to always support the latest ECMAScript version whose feature set has been finalized. If adding support for a newer version requires changes, a new version with a major verion bump will be released. Currently, ECMAScript 2018 is supported. Invalid code handling ===================== Unterminated strings are still matched as strings. 
JavaScript strings cannot contain (unescaped) newlines, so unterminated strings simply end at the end of the line. Unterminated template strings can contain unescaped newlines, though, so they go on to the end of input. Unterminated multi-line comments are also still matched as comments. They simply go on to the end of the input. Unterminated regex literals are likely matched as division and whatever is inside the regex. Invalid ASCII characters have their own capturing group. Invalid non-ASCII characters are treated as names, to simplify the matching of names (except unicode spaces which are treated as whitespace). Note: See also the [ES2018](#es2018) section. Regex literals may contain invalid regex syntax. They are still matched as regex literals. They may also contain repeated regex flags, to keep the regex simple. Strings may contain invalid escape sequences. Limitations =========== Tokenizing JavaScript using regexes—in fact, _one single regex_—won’t be perfect. But that’s not the point either. You may compare jsTokens with [esprima] by using `esprima-compare.js`. See `npm run esprima-compare`! [esprima]: http://esprima.org/ ### Template string interpolation ### Template strings are matched as single tokens, from the starting `` ` `` to the ending `` ` ``, including interpolations (whose tokens are not matched individually). Matching template string interpolations requires recursive balancing of `{` and `}`—something that JavaScript regexes cannot do. Only one level of nesting is supported. ### Division and regex literals collision ### Consider this example: ```js var g = 9.82 var number = bar / 2/g var regex = / 2/g ``` A human can easily understand that in the `number` line we’re dealing with division, and in the `regex` line we’re dealing with a regex literal. How come? Because humans can look at the whole code to put the `/` characters in context. A JavaScript regex cannot. It only sees forwards. (Well, ES2018 regexes can also look backwards. See the [ES2018](#es2018) section). When the `jsTokens` regex scans throught the above, it will see the following at the end of both the `number` and `regex` rows: ```js / 2/g ``` It is then impossible to know if that is a regex literal, or part of an expression dealing with division. Here is a similar case: ```js foo /= 2/g foo(/= 2/g) ``` The first line divides the `foo` variable with `2/g`. The second line calls the `foo` function with the regex literal `/= 2/g`. Again, since `jsTokens` only sees forwards, it cannot tell the two cases apart. There are some cases where we _can_ tell division and regex literals apart, though. First off, we have the simple cases where there’s only one slash in the line: ```js var foo = 2/g foo /= 2 ``` Regex literals cannot contain newlines, so the above cases are correctly identified as division. Things are only problematic when there are more than one non-comment slash in a single line. Secondly, not every character is a valid regex flag. ```js var number = bar / 2/e ``` The above example is also correctly identified as division, because `e` is not a valid regex flag. I initially wanted to future-proof by allowing `[a-zA-Z]*` (any letter) as flags, but it is not worth it since it increases the amount of ambigous cases. So only the standard `g`, `m`, `i`, `y` and `u` flags are allowed. This means that the above example will be identified as division as long as you don’t rename the `e` variable to some permutation of `gmiyus` 1 to 6 characters long. Lastly, we can look _forward_ for information. 
- If the token following what looks like a regex literal is not valid after a regex literal, but is valid in a division expression, then the regex literal is treated as division instead. For example, a flagless regex cannot be followed by a string, number or name, but all of those three can be the denominator of a division. - Generally, if what looks like a regex literal is followed by an operator, the regex literal is treated as division instead. This is because regexes are seldom used with operators (such as `+`, `*`, `&&` and `==`), but division could likely be part of such an expression. Please consult the regex source and the test cases for precise information on when regex or division is matched (should you need to know). In short, you could sum it up as: If the end of a statement looks like a regex literal (even if it isn’t), it will be treated as one. Otherwise it should work as expected (if you write sane code). ### ES2018 ### ES2018 added some nice regex improvements to the language. - [Unicode property escapes] should allow telling names and invalid non-ASCII characters apart without blowing up the regex size. - [Lookbehind assertions] should allow telling division and regex literals apart in more cases. - [Named capture groups] might simplify some things. These things would be nice to do, but are not critical. They probably have to wait until the oldest maintained Node.js LTS release supports those features. [Unicode property escapes]: http://2ality.com/2017/07/regexp-unicode-property-escapes.html [Lookbehind assertions]: http://2ality.com/2017/05/regexp-lookbehind-assertions.html [Named capture groups]: http://2ality.com/2017/05/regexp-named-capture-groups.html License ======= [MIT](LICENSE). # lodash.truncate v4.4.2 The [lodash](https://lodash.com/) method `_.truncate` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.truncate ``` In Node.js: ```js var truncate = require('lodash.truncate'); ``` See the [documentation](https://lodash.com/docs#truncate) or [package source](https://github.com/lodash/lodash/blob/4.4.2-npm-packages/lodash.truncate) for more details. # <img src="./logo.png" alt="bn.js" width="160" height="160" /> > BigNum in pure javascript [![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js) ## Install `npm install --save bn.js` ## Usage ```js const BN = require('bn.js'); var a = new BN('dead', 16); var b = new BN('101010', 2); var res = a.add(b); console.log(res.toString(10)); // 57047 ``` **Note**: decimals are not supported in this library. ## Notation ### Prefixes There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name: * `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs * `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like `mod()`. In such cases, if the result is negative, the modulo is added to the result to make it positive ### Postfixes * `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported. * `rn` - both argument and return value of the function are plain JavaScript Numbers. Decimals are not supported.
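To make the naming scheme concrete, here is a small runnable sketch (the specific values are arbitrary and chosen only for illustration):

```js
const BN = require('bn.js');

const a = new BN(1000);
const b = new BN(3);

a.iadd(b);                                    // `i` prefix: in-place, `a` is now 1003
console.log(a.toString(10));                  // "1003"

console.log(a.addn(7).toString(10));          // `n` postfix: plain JS Number argument -> "1010"
console.log(new BN(-5).umod(b).toString(10)); // `u` prefix: result is always positive -> "1"
```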
### Examples * `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a` * `a.umod(b)` - reduce `a` modulo `b`, returning positive value * `a.iushln(13)` - shift bits of `a` left by 13 ## Instructions Prefixes/postfixes are put in parens at the end of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian). ### Utilities * `a.clone()` - clone number * `a.toString(base, length)` - convert to base-string and pad with zeroes * `a.toNumber()` - convert to JavaScript Number (limited to 53 bits) * `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`) * `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding * `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array` * `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)` * `a.bitLength()` - get number of bits occupied * `a.zeroBits()` - return number of less-significant consecutive zero bits (example: `1010000` has 4 zero bits) * `a.byteLength()` - return number of bytes occupied * `a.isNeg()` - true if the number is negative * `a.isEven()` - no comments * `a.isOdd()` - no comments * `a.isZero()` - no comments * `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`) * `a.lt(b)` - `a` less than `b` (`n`) * `a.lte(b)` - `a` less than or equals `b` (`n`) * `a.gt(b)` - `a` greater than `b` (`n`) * `a.gte(b)` - `a` greater than or equals `b` (`n`) * `a.eq(b)` - `a` equals `b` (`n`) * `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width * `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width * `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance * `BN.max(a, b)` - return `a` if `a` is bigger than `b` * `BN.min(a, b)` - return `a` if `a` is less than `b` ### Arithmetics * `a.neg()` - negate sign (`i`) * `a.abs()` - absolute value (`i`) * `a.add(b)` - addition (`i`, `n`, `in`) * `a.sub(b)` - subtraction (`i`, `n`, `in`) * `a.mul(b)` - multiply (`i`, `n`, `in`) * `a.sqr()` - square (`i`) * `a.pow(b)` - raise `a` to the power of `b` * `a.div(b)` - divide (`divn`, `idivn`) * `a.mod(b)` - reduce (`u`, `n`) (but no `umodn`) * `a.divmod(b)` - quotient and modulus obtained by dividing * `a.divRound(b)` - rounded division ### Bit operations * `a.or(b)` - or (`i`, `u`, `iu`) * `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future) * `a.xor(b)` - xor (`i`, `u`, `iu`) * `a.setn(b, value)` - set specified bit to `value` * `a.shln(b)` - shift left (`i`, `u`, `iu`) * `a.shrn(b)` - shift right (`i`, `u`, `iu`) * `a.testn(b)` - test if specified bit is set * `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`) * `a.bincn(b)` - add `1 << b` to the number * `a.notn(w)` - not (for the width specified by `w`) (`i`) ### Reduction * `a.gcd(b)` - GCD * `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., gcd: ... }`) * `a.invm(b)` - inverse `a` modulo `b` ## Fast reduction When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using a special algorithm for a [Mersenne Prime][1].
### Reduction context To enable these tricks, one should create a reduction context: ```js var red = BN.red(num); ``` where `num` is just a BN instance. Or: ```js var red = BN.red(primeName); ``` Where `primeName` is one of these [Mersenne Primes][1]: * `'k256'` * `'p224'` * `'p192'` * `'p25519'` Or: ```js var red = BN.mont(num); ``` To reduce numbers with [Montgomery trick][0]. `.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`. ### Converting numbers Before performing anything in a reduction context, numbers should be converted to it. Usually, this means that one should: * Convert inputs to reduced ones * Operate on them in reduction context * Convert outputs back from the reduction context Here is how one may convert numbers to `red`: ```js var redA = a.toRed(red); ``` Where `red` is a reduction context created using the instructions above. Here is how to convert them back: ```js var a = redA.fromRed(); ``` ### Red instructions Most of the instructions from the very start of this readme have their counterparts in red context: * `a.redAdd(b)`, `a.redIAdd(b)` * `a.redSub(b)`, `a.redISub(b)` * `a.redShl(num)` * `a.redMul(b)`, `a.redIMul(b)` * `a.redSqr()`, `a.redISqr()` * `a.redSqrt()` - square root modulo reduction context's prime * `a.redInvm()` - modular inverse of the number * `a.redNeg()` * `a.redPow(b)` - modular exponentiation ### Number Size Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers. ## LICENSE This software is licensed under the MIT License. [0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication [1]: https://en.wikipedia.org/wiki/Mersenne_prime # function-bind <!-- [![build status][travis-svg]][travis-url] [![NPM version][npm-badge-svg]][npm-url] [![Coverage Status][5]][6] [![gemnasium Dependency Status][7]][8] [![Dependency status][deps-svg]][deps-url] [![Dev Dependency status][dev-deps-svg]][dev-deps-url] --> <!-- [![browser support][11]][12] --> Implementation of function.prototype.bind ## Example I mainly do this for unit tests I run on phantomjs. PhantomJS does not have Function.prototype.bind :( ```js Function.prototype.bind = require("function-bind") ``` ## Installation `npm install function-bind` ## Contributors - Raynos ## MIT Licensed [travis-svg]: https://travis-ci.org/Raynos/function-bind.svg [travis-url]: https://travis-ci.org/Raynos/function-bind [npm-badge-svg]: https://badge.fury.io/js/function-bind.svg [npm-url]: https://npmjs.org/package/function-bind [5]: https://coveralls.io/repos/Raynos/function-bind/badge.png [6]: https://coveralls.io/r/Raynos/function-bind [7]: https://gemnasium.com/Raynos/function-bind.png [8]: https://gemnasium.com/Raynos/function-bind [deps-svg]: https://david-dm.org/Raynos/function-bind.svg [deps-url]: https://david-dm.org/Raynos/function-bind [dev-deps-svg]: https://david-dm.org/Raynos/function-bind/dev-status.svg [dev-deps-url]: https://david-dm.org/Raynos/function-bind#info=devDependencies [11]: https://ci.testling.com/Raynos/function-bind.png [12]: https://ci.testling.com/Raynos/function-bind # fs.realpath A backwards-compatible fs.realpath for Node v6 and above In Node v6, the JavaScript implementation of fs.realpath was replaced with a faster (but less resilient) native implementation. That raises new and platform-specific errors and cannot handle long or excessively symlink-looping paths. This module handles those cases by detecting the new errors and falling back to the JavaScript implementation.
On versions of Node prior to v6, it has no effect. ## USAGE ```js var rp = require('fs.realpath') // async version rp.realpath(someLongAndLoopingPath, function (er, real) { // the ELOOP was handled, but it was a bit slower }) // sync version var real = rp.realpathSync(someLongAndLoopingPath) // monkeypatch at your own risk! // This replaces the fs.realpath/fs.realpathSync builtins rp.monkeypatch() // un-do the monkeypatching rp.unmonkeypatch() ``` ## Test Strategy - tests are copied from the [polyfill implementation](https://github.com/tc39/proposal-temporal/tree/main/polyfill/test) - tests should be removed if they relate to features that do not make sense for TS/AS, i.e. tests that validate the shape of an object do not make sense in a language with compile-time type checking - tests that fail because a feature has not been implemented yet should be left as failures. ESQuery is a library for querying the AST output by Esprima for patterns of syntax using a CSS style selector system. Check out the demo: [demo](https://estools.github.io/esquery/) The following selectors are supported: * AST node type: `ForStatement` * [wildcard](http://dev.w3.org/csswg/selectors4/#universal-selector): `*` * [attribute existence](http://dev.w3.org/csswg/selectors4/#attribute-selectors): `[attr]` * [attribute value](http://dev.w3.org/csswg/selectors4/#attribute-selectors): `[attr="foo"]` or `[attr=123]` * attribute regex: `[attr=/foo.*/]` or (with flags) `[attr=/foo.*/is]` * attribute conditions: `[attr!="foo"]`, `[attr>2]`, `[attr<3]`, `[attr>=2]`, or `[attr<=3]` * nested attribute: `[attr.level2="foo"]` * field: `FunctionDeclaration > Identifier.id` * [First](http://dev.w3.org/csswg/selectors4/#the-first-child-pseudo) or [last](http://dev.w3.org/csswg/selectors4/#the-last-child-pseudo) child: `:first-child` or `:last-child` * [nth-child](http://dev.w3.org/csswg/selectors4/#the-nth-child-pseudo) (no ax+b support): `:nth-child(2)` * [nth-last-child](http://dev.w3.org/csswg/selectors4/#the-nth-last-child-pseudo) (no ax+b support): `:nth-last-child(1)` * [descendant](http://dev.w3.org/csswg/selectors4/#descendant-combinators): `ancestor descendant` * [child](http://dev.w3.org/csswg/selectors4/#child-combinators): `parent > child` * [following sibling](http://dev.w3.org/csswg/selectors4/#general-sibling-combinators): `node ~ sibling` * [adjacent sibling](http://dev.w3.org/csswg/selectors4/#adjacent-sibling-combinators): `node + adjacent` * [negation](http://dev.w3.org/csswg/selectors4/#negation-pseudo): `:not(ForStatement)` * [has](https://drafts.csswg.org/selectors-4/#has-pseudo): `:has(ForStatement)` * [matches-any](http://dev.w3.org/csswg/selectors4/#matches): `:matches([attr] > :first-child, :last-child)` * [subject indicator](http://dev.w3.org/csswg/selectors4/#subject): `!IfStatement > [name="foo"]` * class of AST node: `:statement`, `:expression`, `:declaration`, `:function`, or `:pattern` [![Build Status](https://travis-ci.org/estools/esquery.png?branch=master)](https://travis-ci.org/estools/esquery) ### Estraverse [![Build Status](https://secure.travis-ci.org/estools/estraverse.svg)](http://travis-ci.org/estools/estraverse) Estraverse ([estraverse](http://github.com/estools/estraverse)) is [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) traversal functions from [esmangle project](http://github.com/estools/esmangle). ### Documentation You can find usage docs at [wiki page](https://github.com/estools/estraverse/wiki/Usage). 
### Example Usage The following code will output all variables declared at the root of a file. ```javascript estraverse.traverse(ast, { enter: function (node, parent) { if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration') return estraverse.VisitorOption.Skip; }, leave: function (node, parent) { if (node.type == 'VariableDeclarator') console.log(node.id.name); } }); ``` We can use `this.skip`, `this.remove` and `this.break` functions instead of using Skip, Remove and Break. ```javascript estraverse.traverse(ast, { enter: function (node) { this.break(); } }); ``` And estraverse provides `estraverse.replace` function. When returning node from `enter`/`leave`, current node is replaced with it. ```javascript result = estraverse.replace(tree, { enter: function (node) { // Replace it with replaced. if (node.type === 'Literal') return replaced; } }); ``` By passing `visitor.keys` mapping, we can extend estraverse traversing functionality. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Extending the existing traversing rules. keys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } }); ``` By passing `visitor.fallback` option, we can control the behavior when encountering unknown nodes. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Iterating the child **nodes** of unknown nodes. fallback: 'iteration' }); ``` When `visitor.fallback` is a function, we can determine which keys to visit on each node. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Skip the `argument` property of each node fallback: function(node) { return Object.keys(node).filter(function(key) { return key !== 'argument'; }); } }); ``` ### License Copyright (C) 2012-2016 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# [nearley](http://nearley.js.org) ↗️ [![JS.ORG](https://img.shields.io/badge/js.org-nearley-ffb400.svg?style=flat-square)](http://js.org) [![npm version](https://badge.fury.io/js/nearley.svg)](https://badge.fury.io/js/nearley) nearley is a simple, fast and powerful parsing toolkit. It consists of: 1. [A powerful, modular DSL for describing languages](https://nearley.js.org/docs/grammar) 2. [An efficient, lightweight Earley parser](https://nearley.js.org/docs/parser) 3. [Loads of tools, editor plug-ins, and other goodies!](https://nearley.js.org/docs/tooling) nearley is a **streaming** parser with support for catching **errors** gracefully and providing _all_ parsings for **ambiguous** grammars. It is compatible with a variety of **lexers** (we recommend [moo](http://github.com/tjvr/moo)). It comes with tools for creating **tests**, **railroad diagrams** and **fuzzers** from your grammars, and has support for a variety of editors and platforms. It works in both node and the browser. Unlike most other parser generators, nearley can handle *any* grammar you can define in BNF (and more!). In particular, while most existing JS parsers such as PEGjs and Jison choke on certain grammars (e.g. [left recursive ones](http://en.wikipedia.org/wiki/Left_recursion)), nearley handles them easily and efficiently by using the [Earley parsing algorithm](https://en.wikipedia.org/wiki/Earley_parser). nearley is used by a wide variety of projects: - [artificial intelligence](https://github.com/ChalmersGU-AI-course/shrdlite-course-project) and - [computational linguistics](https://wiki.eecs.yorku.ca/course_archive/2014-15/W/6339/useful_handouts) classes at universities; - [file format parsers](https://github.com/raymond-h/node-dmi); - [data-driven markup languages](https://github.com/idyll-lang/idyll-compiler); - [compilers for real-world programming languages](https://github.com/sizigi/lp5562); - and nearley itself!
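For a quick feel of the API, here is a minimal sketch that uses a grammar already compiled with `nearleyc`; the `./grammar.js` module and the input string are placeholders for your own grammar and input:

```js
const nearley = require("nearley");
const grammar = require("./grammar.js"); // hypothetical output of `nearleyc grammar.ne -o grammar.js`

const parser = new nearley.Parser(nearley.Grammar.fromCompiled(grammar));
parser.feed("1+2*3");        // feed() can be called repeatedly on chunks (streaming)
console.log(parser.results); // array of parse results; more than one entry means the input was ambiguous
```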
The nearley compiler is bootstrapped. nearley is an npm [staff pick](https://www.npmjs.com/package/npm-collection-staff-picks). ## Documentation Please visit our website https://nearley.js.org to get started! You will find a tutorial, detailed reference documents, and links to several real-world examples to get inspired. ## Contributing Please read [this document](.github/CONTRIBUTING.md) *before* working on nearley. If you are interested in contributing but unsure where to start, take a look at the issues labeled "up for grabs" on the issue tracker, or message a maintainer (@kach or @tjvr on Github). nearley is MIT licensed. A big thanks to Nathan Dinsmore for teaching me how to Earley, Aria Stewart for helping structure nearley into a mature module, and Robin Windels for bootstrapping the grammar. Additionally, Jacob Edelman wrote an experimental JavaScript parser with nearley and contributed ideas for EBNF support. Joshua T. Corbin refactored the compiler to be much, much prettier. Bojidar Marinov implemented postprocessors-in-other-languages. Shachar Itzhaky fixed a subtle bug with nullables. ## Citing nearley If you are citing nearley in academic work, please use the following BibTeX entry. ```bibtex @misc{nearley, author = "Kartik Chandra and Tim Radvan", title = "{nearley}: a parsing toolkit for {JavaScript}", year = {2014}, doi = {10.5281/zenodo.3897993}, url = {https://github.com/kach/nearley} } ``` # Web IDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [Web IDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). The goal is that you should be able to write code like ```js "use strict"; const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a Web IDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different Web IDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the Web IDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the Web IDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). Each method also accepts a second, optional, parameter for miscellaneous options. For conversion methods that throw errors, a string option `{ context }` may be provided to provide more information in the error message. (For example, `conversions["float"](NaN, { context: "Argument 1 of Interface's operation" })` will throw an error with message `"Argument 1 of Interface's operation is not a finite floating-point value."`) Specific conversions may also accept other options, the details of which can be found below. 
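As a rough illustration of the behavior described above (the exact values follow the Web IDL conversion rules):

```js
const conversions = require("webidl-conversions");

conversions["boolean"]("");         // false
conversions["unsigned long"](3.7);  // 3
conversions["unsigned long"](-1);   // 4294967295 (wraps modulo 2^32)
conversions["DOMString"](42);       // "42"

// With options: enforceRange makes out-of-range values throw instead of wrapping
conversions["unsigned long"](-1, { enforceRange: true, context: "Argument 1" }); // throws TypeError
```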
## Conversions implemented Conversions for all of the basic types from the Web IDL specification are implemented: - [`any`](https://heycam.github.io/webidl/#es-any) - [`void`](https://heycam.github.io/webidl/#es-void) - [`boolean`](https://heycam.github.io/webidl/#es-boolean) - [Integer types](https://heycam.github.io/webidl/#es-integer-types), which can additionally be provided the boolean options `{ clamp, enforceRange }` as a second parameter - [`float`](https://heycam.github.io/webidl/#es-float), [`unrestricted float`](https://heycam.github.io/webidl/#es-unrestricted-float) - [`double`](https://heycam.github.io/webidl/#es-double), [`unrestricted double`](https://heycam.github.io/webidl/#es-unrestricted-double) - [`DOMString`](https://heycam.github.io/webidl/#es-DOMString), which can additionally be provided the boolean option `{ treatNullAsEmptyString }` as a second parameter - [`ByteString`](https://heycam.github.io/webidl/#es-ByteString), [`USVString`](https://heycam.github.io/webidl/#es-USVString) - [`object`](https://heycam.github.io/webidl/#es-object) - [`Error`](https://heycam.github.io/webidl/#es-Error) - [Buffer source types](https://heycam.github.io/webidl/#es-buffer-source-types) Additionally, for convenience, the following derived type definitions are implemented: - [`ArrayBufferView`](https://heycam.github.io/webidl/#ArrayBufferView) - [`BufferSource`](https://heycam.github.io/webidl/#BufferSource) - [`DOMTimeStamp`](https://heycam.github.io/webidl/#DOMTimeStamp) - [`Function`](https://heycam.github.io/webidl/#Function) - [`VoidFunction`](https://heycam.github.io/webidl/#VoidFunction) (although it will not censor the return type) Derived types, such as nullable types, promise types, sequences, records, etc. are not handled by this library. You may wish to investigate the [webidl2js](https://github.com/jsdom/webidl2js) project. ### A note on the `long long` types The `long long` and `unsigned long long` Web IDL types can hold values that cannot be stored in JavaScript numbers, so the conversion is imperfect. For example, converting the JavaScript number `18446744073709552000` to a Web IDL `long long` is supposed to produce the Web IDL value `-18446744073709551232`. Since we are representing our Web IDL values in JavaScript, we can't represent `-18446744073709551232`, so instead the best we can do is produce `-18446744073709552000` as the output. This library actually doesn't even get that far. Producing those results would require doing accurate modular arithmetic on 64-bit intermediate values, but JavaScript does not make this easy. We could pull in a big-integer library as a dependency, but in lieu of that, we have for now decided to just produce inaccurate results if you pass in numbers that are not strictly between `Number.MIN_SAFE_INTEGER` and `Number.MAX_SAFE_INTEGER`. ## Background What's actually going on here, conceptually, is pretty weird. Let's try to explain. Web IDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on Web IDL values, i.e. instances of Web IDL types. For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a Web IDL value of [Web IDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules.
Separately from its type system, Web IDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given Web IDL operation, how does that get converted into a Web IDL value? For example, a JavaScript `true` passed in the position of a Web IDL `boolean` argument becomes a Web IDL `true`. But, a JavaScript `true` passed in the position of a [Web IDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a Web IDL `1`. And so on. Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the Web IDL algorithms, they don't actually use Web IDL values, since those aren't "real" outside of specs. Instead, implementations apply the Web IDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting Web IDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is payoff of Web IDL, in a nutshell. And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given Web IDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ Web IDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ Web IDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a Web IDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't use this Seriously, why would you ever use this? You really shouldn't. Web IDL is … strange, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from Web IDL. In general, your JavaScript should not be trying to become more like Web IDL; if anything, we should fix Web IDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in Web IDL. Its main consumer is the [jsdom](https://github.com/tmpvar/jsdom) project. # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Specification conformance whatwg-url is currently up to date with the URL spec up to commit [7ae1c69](https://github.com/whatwg/url/commit/7ae1c691c96f0d82fafa24c33aa1e8df9ffbf2bc). For `file:` URLs, whose [origin is left unspecified](https://url.spec.whatwg.org/#concept-url-origin), whatwg-url chooses to use a new opaque origin (which serializes to `"null"`). 
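For orientation, here is a minimal usage sketch; the exports used below (`URL`, `parseURL`, `serializeURL`) are described in the API section that follows:

```js
const { URL, parseURL, serializeURL } = require("whatwg-url");

// Spec-conformant URL class, usable standalone
const url = new URL("/path?x=1", "https://example.com");
console.log(url.href); // "https://example.com/path?x=1"

// Low-level API: parse to a URL record, then serialize it back
const record = parseURL("https://user@example.com:8080/a/b?c#d");
console.log(serializeURL(record)); // "https://user@example.com:8080/a/b?c#d"
```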
## API ### The `URL` and `URLSearchParams` classes The main API is provided by the [`URL`](https://url.spec.whatwg.org/#url-class) and [`URLSearchParams`](https://url.spec.whatwg.org/#interface-urlsearchparams) exports, which follows the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use these. ### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They mostly operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. - [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/origin.html#ascii-serialisation-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` - [Percent decode](https://url.spec.whatwg.org/#percent-decode): `percentDecode(buffer)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - 
[`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - [`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - [`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by `null`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ `null`. ## Development instructions First, install [Node.js](https://nodejs.org/). Then, fetch the dependencies of whatwg-url, by running from this directory: npm install To run tests: npm test To generate a coverage report: npm run coverage To build and run the live viewer: npm run build npm run build-live-viewer Serve the contents of the `live-viewer` directory using any web server. ## Supporting whatwg-url The jsdom project (including whatwg-url) is a community-driven project maintained by a team of [volunteers](https://github.com/orgs/jsdom/people). You could support us by: - [Getting professional support for whatwg-url](https://tidelift.com/subscription/pkg/npm-whatwg-url?utm_source=npm-whatwg-url&utm_medium=referral&utm_campaign=readme) as part of a Tidelift subscription. Tidelift helps making open source sustainable for us while giving teams assurances for maintenance, licensing, and security. - Contributing directly to the project. assemblyscript-json # assemblyscript-json ## Table of contents ### Namespaces - [JSON](modules/json.md) ### Classes - [DecoderState](classes/decoderstate.md) - [JSONDecoder](classes/jsondecoder.md) - [JSONEncoder](classes/jsonencoder.md) - [JSONHandler](classes/jsonhandler.md) - [ThrowingJSONHandler](classes/throwingjsonhandler.md) ## assemblyscript-temporal An implementation of temporal within AssemblyScript, with an initial focus on non-timezone-aware classes and functionality. ### Why? AssemblyScript has minimal `Date` support, however, the JS Date API itself is terrible and people tend not to use it that often. As a result libraries like moment / luxon have become staple replacements. However, there is now a [relatively mature TC39 proposal](https://github.com/tc39/proposal-temporal) that adds greatly improved date support to JS. The goal of this project is to implement Temporal for AssemblyScript. ### Usage This library currently supports the following types: #### `PlainDateTime` A `PlainDateTime` represents a calendar date and wall-clock time that does not carry time zone information, e.g. December 7th, 1995 at 3:00 PM (in the Gregorian calendar). 
For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaindatetime.html); this implementation follows the specification as closely as possible. You can create a `PlainDateTime` from individual components, a string or an object literal: ```javascript datetime = new PlainDateTime(1976, 11, 18, 15, 23, 30, 123, 456, 789); datetime.year; // 1976; datetime.month; // 11; // ... datetime.nanosecond; // 789; datetime = PlainDateTime.from("1976-11-18T12:34:56"); datetime.toString(); // "1976-11-18T12:34:56" datetime = PlainDateTime.from({ year: 1966, month: 3, day: 3 }); datetime.toString(); // "1966-03-03T00:00:00" ``` There are various ways you can manipulate a date: ```javascript // use 'with' to copy a date but with various property values overridden datetime = new PlainDateTime(1976, 11, 18, 15, 23, 30, 123, 456, 789); datetime.with({ year: 2019 }).toString(); // "2019-11-18T15:23:30.123456789" // use 'add' or 'subtract' to add / subtract a duration datetime = PlainDateTime.from("2020-01-12T15:00"); datetime.add({ months: 1 }).toString(); // "2020-02-12T15:00:00" // add / subtract support Duration objects or object literals datetime.add(new Duration(1)).toString(); // "2021-01-12T15:00:00" ``` You can compare dates and check for equality ```javascript dt1 = PlainDateTime.from("1976-11-18"); dt2 = PlainDateTime.from("2019-10-29"); PlainDateTime.compare(dt1, dt1); // 0 PlainDateTime.compare(dt1, dt2); // -1 dt1.equals(dt1); // true ``` Currently `PlainDateTime` only supports the ISO 8601 (Gregorian) calendar. #### `PlainDate` A `PlainDate` object represents a calendar date that is not associated with a particular time or time zone, e.g. August 24th, 2006. For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaindate.html); this implementation follows the specification as closely as possible. The `PlainDate` API is almost identical to `PlainDateTime`, so see above for API usage examples. #### `PlainTime` A `PlainTime` object represents a wall-clock time that is not associated with a particular date or time zone, e.g. 7:39 PM. For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaintime.html); this implementation follows the specification as closely as possible. The `PlainTime` API is almost identical to `PlainDateTime`, so see above for API usage examples. #### `PlainMonthDay` A date without a year component. This is useful to express things like "Bastille Day is on the 14th of July". For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plainmonthday.html); this implementation follows the specification as closely as possible. ```javascript const monthDay = PlainMonthDay.from({ month: 7, day: 14 }); // => 07-14 const date = monthDay.toPlainDate({ year: 2030 }); // => 2030-07-14 date.dayOfWeek; // => 7 ``` The `PlainMonthDay` API is almost identical to `PlainDateTime`, so see above for more API usage examples. #### `PlainYearMonth` A date without a day component. This is useful to express things like "the October 2020 meeting". For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plainyearmonth.html); this implementation follows the specification as closely as possible. The `PlainYearMonth` API is almost identical to `PlainDateTime`, so see above for API usage examples.
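For completeness, here is a small sketch following the upstream Temporal proposal's `PlainYearMonth` API; the implementation aims to track the proposal, so the individual methods shown here are assumptions rather than guarantees of this library's current coverage:

```javascript
const yearMonth = PlainYearMonth.from("2020-10");  // => 2020-10
yearMonth.daysInMonth;                             // => 31
const date = yearMonth.toPlainDate({ day: 15 });   // => 2020-10-15
```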
#### `now` The `now` object has several methods which give information about the current time and date. ```javascript dateTime = now.plainDateTimeISO(); dateTime.toString(); // 2021-04-01T12:05:47.357 ``` ## Contributing This project is open source, MIT licensed and your contributions are very much welcome. There is a [brief document that outlines implementation progress and priorities](./development.md). bs58 ==== [![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58) JavaScript component to compute base 58 encoding. This encoding is typically used for crypto currencies such as Bitcoin. **Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library. Install ------- npm i --save bs58 API --- ### encode(input) `input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`. **example**: ```js const bs58 = require('bs58') const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex') const address = bs58.encode(bytes) console.log(address) // => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS ``` ### decode(input) `input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html). **example**: ```js const bs58 = require('bs58') const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS' const bytes = bs58.decode(address) console.log(bytes.toString('hex')) // => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187 ``` Hack / Test ----------- Uses JavaScript standard style. Read more: [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Credits ------- - [Mike Hearn](https://github.com/mikehearn) for original Java implementation - [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript - [Stephan Pair](https://github.com/gasteve) for buffer improvements - [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib - [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency License ------- MIT # near-sdk-core This package contains a convenient interface for interacting with NEAR's host runtime. To see the functions that are provided by the host node see [`env.ts`](./assembly/env/env.ts). ### esutils [![Build Status](https://secure.travis-ci.org/estools/esutils.svg)](http://travis-ci.org/estools/esutils) esutils ([esutils](http://github.com/estools/esutils)) is a utility box for ECMAScript language tools. ### API ### ast #### ast.isExpression(node) Returns true if `node` is an Expression as defined in ECMA262 edition 5.1 section [11](https://es5.github.io/#x11). #### ast.isStatement(node) Returns true if `node` is a Statement as defined in ECMA262 edition 5.1 section [12](https://es5.github.io/#x12). #### ast.isIterationStatement(node) Returns true if `node` is an IterationStatement as defined in ECMA262 edition 5.1 section [12.6](https://es5.github.io/#x12.6). #### ast.isSourceElement(node) Returns true if `node` is a SourceElement as defined in ECMA262 edition 5.1 section [14](https://es5.github.io/#x14). #### ast.trailingStatement(node) Returns `Statement?` if `node` has trailing `Statement`. ```js if (cond) consequent; ``` Given this `IfStatement`, it returns the `consequent;` statement. #### ast.isProblematicIfStatement(node) Returns true if `node` is a problematic IfStatement.
If `node` is a problematic `IfStatement`, `node` cannot be represented as an one on one JavaScript code. ```js { type: 'IfStatement', consequent: { type: 'WithStatement', body: { type: 'IfStatement', consequent: {type: 'EmptyStatement'} } }, alternate: {type: 'EmptyStatement'} } ``` The above node cannot be represented as a JavaScript code, since the top level `else` alternate belongs to an inner `IfStatement`. ### code #### code.isDecimalDigit(code) Return true if provided code is decimal digit. #### code.isHexDigit(code) Return true if provided code is hexadecimal digit. #### code.isOctalDigit(code) Return true if provided code is octal digit. #### code.isWhiteSpace(code) Return true if provided code is white space. White space characters are formally defined in ECMA262. #### code.isLineTerminator(code) Return true if provided code is line terminator. Line terminator characters are formally defined in ECMA262. #### code.isIdentifierStart(code) Return true if provided code can be the first character of ECMA262 Identifier. They are formally defined in ECMA262. #### code.isIdentifierPart(code) Return true if provided code can be the trailing character of ECMA262 Identifier. They are formally defined in ECMA262. ### keyword #### keyword.isKeywordES5(id, strict) Returns `true` if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 sections [7.6.1.1](http://es5.github.io/#x7.6.1.1) and [7.6.1.2](http://es5.github.io/#x7.6.1.2), respectively. If the `strict` flag is truthy, this function additionally checks whether `id` is a Keyword or Future Reserved Word under strict mode. #### keyword.isKeywordES6(id, strict) Returns `true` if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 sections [11.6.2.1](http://ecma-international.org/ecma-262/6.0/#sec-keywords) and [11.6.2.2](http://ecma-international.org/ecma-262/6.0/#sec-future-reserved-words), respectively. If the `strict` flag is truthy, this function additionally checks whether `id` is a Keyword or Future Reserved Word under strict mode. #### keyword.isReservedWordES5(id, strict) Returns `true` if provided identifier string is a Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 section [7.6.1](http://es5.github.io/#x7.6.1). If the `strict` flag is truthy, this function additionally checks whether `id` is a Reserved Word under strict mode. #### keyword.isReservedWordES6(id, strict) Returns `true` if provided identifier string is a Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 section [11.6.2](http://ecma-international.org/ecma-262/6.0/#sec-reserved-words). If the `strict` flag is truthy, this function additionally checks whether `id` is a Reserved Word under strict mode. #### keyword.isRestrictedWord(id) Returns `true` if provided identifier string is one of `eval` or `arguments`. They are restricted in strict mode code throughout ECMA262 edition 5.1 and in ECMA262 edition 6 section [12.1.1](http://ecma-international.org/ecma-262/6.0/#sec-identifiers-static-semantics-early-errors). #### keyword.isIdentifierNameES5(id) Return true if provided identifier string is an IdentifierName as specified in ECMA262 edition 5.1 section [7.6](https://es5.github.io/#x7.6). 
#### keyword.isIdentifierNameES6(id) Return true if provided identifier string is an IdentifierName as specified in ECMA262 edition 6 section [11.6](http://ecma-international.org/ecma-262/6.0/#sec-names-and-keywords). #### keyword.isIdentifierES5(id, strict) Return true if provided identifier string is an Identifier as specified in ECMA262 edition 5.1 section [7.6](https://es5.github.io/#x7.6). If the `strict` flag is truthy, this function additionally checks whether `id` is an Identifier under strict mode. #### keyword.isIdentifierES6(id, strict) Return true if provided identifier string is an Identifier as specified in ECMA262 edition 6 section [12.1](http://ecma-international.org/ecma-262/6.0/#sec-identifiers). If the `strict` flag is truthy, this function additionally checks whether `id` is an Identifier under strict mode. ### License Copyright (C) 2013 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # file-entry-cache > Super simple cache for file metadata, useful for processes that work on a given series of files and that only need to repeat the job on the changed ones since the previous run of the process [![NPM Version](http://img.shields.io/npm/v/file-entry-cache.svg?style=flat)](https://npmjs.org/package/file-entry-cache) [![Build Status](http://img.shields.io/travis/royriojas/file-entry-cache.svg?style=flat)](https://travis-ci.org/royriojas/file-entry-cache) ## install ```bash npm i --save file-entry-cache ``` ## Usage The module exposes two functions `create` and `createFromFile`. ## `create(cacheName, [directory, useCheckSum])` - **cacheName**: the name of the cache to be created - **directory**: Optional. The directory to load the cache from - **useCheckSum**: Whether to use an md5 checksum to verify if the file changed. If false the default will be to use the mtime and size of the file. ## `createFromFile(pathToCache, [useCheckSum])` - **pathToCache**: the path to the cache file (this combines the cache name and directory) - **useCheckSum**: Whether to use an md5 checksum to verify if the file changed. If false the default will be to use the mtime and size of the file.
```js // loads the cache; if one does not exist for the given // Id a new one will be prepared to be created var fileEntryCache = require('file-entry-cache'); var cache = fileEntryCache.create('testCache'); var files = expand('../fixtures/*.txt'); // the first time this method is called, it will return all the files var oFiles = cache.getUpdatedFiles(files); // this will persist the cache to disk, checking each file's stats and // updating the meta attributes `size` and `mtime`. // custom fields could also be added to the meta object and will be persisted // in order to retrieve them later cache.reconcile(); // use this if you want the non visited file entries to be kept in the cache // for more than one execution // // cache.reconcile( true /* noPrune */) // on a second run var cache2 = fileEntryCache.create('testCache'); // will now return only the files that were modified, or none // if no files were modified prior to the execution of this function var oFiles = cache2.getUpdatedFiles(files); // if you want to prevent a file from being considered non modified // something useful if a file failed some sort of validation // you can then remove the entry from the cache doing cache.removeEntry('path/to/file'); // path to file should be the same path of the file received on `getUpdatedFiles` // that will effectively make the file appear again as modified until the validation is passed. In that // case you should not remove it from the cache // if you need all the files, so you can determine what to do with the changed ones // you can call var oFiles = cache.normalizeEntries(files); // oFiles will be an array of objects like the following entry = { key: 'some/name/file', // the path to the file changed: true, // if the file was changed since previous run meta: { size: 3242, // the size of the file mtime: 231231231, // the modification time of the file data: {} // some extra field stored for this file (useful to save the result of a transformation on the file) } } ``` ## Motivation for this module I needed a super simple and dumb **in-memory cache** with optional disk persistence (write-back cache) in order to make a script that beautifies files with `esformatter` execute only on the files that were changed since the last run. In doing so the process of beautifying files was reduced from several seconds to a small fraction of a second. This module uses [flat-cache](https://www.npmjs.com/package/flat-cache), a super simple `key/value` cache storage with optional file persistence. The main idea is to read the files when the task begins, apply the transforms required, and if the process succeeds, store the new state of the files. The next time, a request to this module for `getChangedFiles` will return only the files that were modified, making the process end faster. This module could also be used by processes that modify the files applying a transform; in that case the result of the transform could be stored in the `meta` field of the entries. Anything added to the meta field will be persisted. Those processes won't need to call `getChangedFiles`; they will instead call `normalizeEntries`, which will return the entries with a `changed` field that can be used to determine if the file was changed or not. If it was not changed, the transformed stored data could be used instead of actually applying the transformation, saving time when only a few files changed. In the worst case scenario all the files will be processed. In the best case scenario only a few of them will be processed.
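If you need full control over where the cache file lives, `createFromFile` (described above) can be used in place of `create`; a small sketch, where the cache path is an arbitrary example and `files` is an array of file paths as in the example above:

```js
var fileEntryCache = require('file-entry-cache');

// pass the explicit cache file path, and use md5 checksums
// instead of mtime/size to detect changes
var cache = fileEntryCache.createFromFile('/tmp/.my-tool-cache', true);

var changed = cache.getUpdatedFiles(files); // `files`: array of file paths
// ... do work on `changed` ...
cache.reconcile();
```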
## Important notes

- The values set on the `meta` attribute of the entries should be `stringify-able` ones if possible; flat-cache uses `circular-json` to try to persist circular structures, but this should be considered experimental. The best results are always obtained with non-circular values.
- All changes to the cache state are made in memory first and only persisted after calling `reconcile`.

## License

MIT

### Estraverse

[![Build Status](https://secure.travis-ci.org/estools/estraverse.svg)](http://travis-ci.org/estools/estraverse)

Estraverse ([estraverse](http://github.com/estools/estraverse)) provides [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) traversal functions from the [esmangle project](http://github.com/estools/esmangle).

### Documentation

You can find usage docs at the [wiki page](https://github.com/estools/estraverse/wiki/Usage).

### Example Usage

The following code will output all variables declared at the root of a file.

```javascript
estraverse.traverse(ast, {
    enter: function (node, parent) {
        if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration')
            return estraverse.VisitorOption.Skip;
    },
    leave: function (node, parent) {
        if (node.type == 'VariableDeclarator')
            console.log(node.id.name);
    }
});
```

We can use the `this.skip`, `this.remove` and `this.break` functions instead of using Skip, Remove and Break.

```javascript
estraverse.traverse(ast, {
    enter: function (node) {
        this.break();
    }
});
```

estraverse also provides the `estraverse.replace` function. When a node is returned from `enter`/`leave`, the current node is replaced with it.

```javascript
result = estraverse.replace(tree, {
    enter: function (node) {
        // Replace it with replaced.
        if (node.type === 'Literal')
            return replaced;
    }
});
```

By passing a `visitor.keys` mapping, we can extend estraverse's traversing functionality.

```javascript
// This tree contains a user-defined `TestExpression` node.
var tree = {
    type: 'TestExpression',

    // This 'argument' is the property containing the other **node**.
    argument: {
        type: 'Literal',
        value: 20
    },

    // This 'extended' is the property not containing the other **node**.
    extended: true
};
estraverse.traverse(tree, {
    enter: function (node) { },

    // Extending the existing traversing rules.
    keys: {
        // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ]
        TestExpression: ['argument']
    }
});
```

By passing the `visitor.fallback` option, we can control the behavior when encountering unknown nodes.

```javascript
// This tree contains a user-defined `TestExpression` node.
var tree = {
    type: 'TestExpression',

    // This 'argument' is the property containing the other **node**.
    argument: {
        type: 'Literal',
        value: 20
    },

    // This 'extended' is the property not containing the other **node**.
    extended: true
};
estraverse.traverse(tree, {
    enter: function (node) { },

    // Iterating the child **nodes** of unknown nodes.
    fallback: 'iteration'
});
```

When `visitor.fallback` is a function, we can determine which keys to visit on each node.

```javascript
// This tree contains a user-defined `TestExpression` node.
var tree = {
    type: 'TestExpression',

    // This 'argument' is the property containing the other **node**.
    argument: {
        type: 'Literal',
        value: 20
    },

    // This 'extended' is the property not containing the other **node**.
    extended: true
};
estraverse.traverse(tree, {
    enter: function (node) { },

    // Skip the `argument` property of each node
    fallback: function(node) {
        return Object.keys(node).filter(function(key) {
            return key !== 'argument';
        });
    }
});
```

### License

Copyright (C) 2012-2016 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

# AssemblyScript Rtrace

A tiny utility to sanitize the AssemblyScript runtime. Records allocations and frees performed by the runtime and emits an error if something is off. Also checks for leaks.

Instructions
------------

Compile your module that uses the full or half runtime with `--use ASC_RTRACE=1 --explicitStart` and include an instance of this module as the import named `rtrace`.

```js
const rtrace = new Rtrace({
  onerror(err, info) {
    // handle error
  },
  oninfo(msg) {
    // print message, optional
  },
  getMemory() {
    // obtain the module's memory,
    // e.g. with --explicitStart:
    return instance.exports.memory;
  }
});

const { module, instance } = await WebAssembly.instantiate(...,
  rtrace.install({ ...imports... })
);
instance.exports._start();

...

if (rtrace.active) {
  let leakCount = rtrace.check();
  if (leakCount) {
    // handle error
  }
}
```

Note that references in globals which are not cleared before collection is performed appear as leaks, including their inner members. A TypedArray, for example, would leak itself and its backing ArrayBuffer in this case. This is perfectly normal, and clearing all globals avoids it.

binaryen.js
===========

**binaryen.js** is a port of [Binaryen](https://github.com/WebAssembly/binaryen) to the Web, allowing you to generate [WebAssembly](https://webassembly.org) using a JavaScript API.
<a href="https://github.com/AssemblyScript/binaryen.js/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/binaryen.js/Build/master?label=build&logo=github" alt="Build status" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen.svg?label=latest&color=007acc&logo=npm" alt="npm version" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen/nightly.svg?label=nightly&color=007acc&logo=npm" alt="npm nightly version" /></a> Usage ----- ``` $> npm install binaryen ``` ```js var binaryen = require("binaryen"); // Create a module with a single function var myModule = new binaryen.Module(); myModule.addFunction("add", binaryen.createType([ binaryen.i32, binaryen.i32 ]), binaryen.i32, [ binaryen.i32 ], myModule.block(null, [ myModule.local.set(2, myModule.i32.add( myModule.local.get(0, binaryen.i32), myModule.local.get(1, binaryen.i32) ) ), myModule.return( myModule.local.get(2, binaryen.i32) ) ]) ); myModule.addFunctionExport("add", "add"); // Optimize the module using default passes and levels myModule.optimize(); // Validate the module if (!myModule.validate()) throw new Error("validation error"); // Generate text format and binary var textData = myModule.emitText(); var wasmData = myModule.emitBinary(); // Example usage with the WebAssembly API var compiled = new WebAssembly.Module(wasmData); var instance = new WebAssembly.Instance(compiled, {}); console.log(instance.exports.add(41, 1)); ``` The buildbot also publishes nightly versions once a day if there have been changes. The latest nightly can be installed through ``` $> npm install binaryen@nightly ``` or you can use one of the [previous versions](https://github.com/AssemblyScript/binaryen.js/tags) instead if necessary. ### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/AssemblyScript/binaryen.js@VERSION/index.js` * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/binaryen@VERSION/index.js` * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/binaryen@VERSION/index.js` Replace `VERSION` with a [specific version](https://github.com/AssemblyScript/binaryen.js/releases) or omit it (not recommended in production) to use master/latest. API --- **Please note** that the Binaryen API is evolving fast and that definitions and documentation provided by the package tend to get out of sync despite our best efforts. It's a bot after all. If you rely on binaryen.js and spot an issue, please consider sending a PR our way by updating [index.d.ts](./index.d.ts) and [README.md](./README.md) to reflect the [current API](https://github.com/WebAssembly/binaryen/blob/master/src/js/binaryen.js-post.js). 
<!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ### Contents - [Types](#types) - [Module construction](#module-construction) - [Module manipulation](#module-manipulation) - [Module validation](#module-validation) - [Module optimization](#module-optimization) - [Module creation](#module-creation) - [Expression construction](#expression-construction) - [Control flow](#control-flow) - [Variable accesses](#variable-accesses) - [Integer operations](#integer-operations) - [Floating point operations](#floating-point-operations) - [Datatype conversions](#datatype-conversions) - [Function calls](#function-calls) - [Linear memory accesses](#linear-memory-accesses) - [Host operations](#host-operations) - [Vector operations 🦄](#vector-operations-) - [Atomic memory accesses 🦄](#atomic-memory-accesses-) - [Atomic read-modify-write operations 🦄](#atomic-read-modify-write-operations-) - [Atomic wait and notify operations 🦄](#atomic-wait-and-notify-operations-) - [Sign extension operations 🦄](#sign-extension-operations-) - [Multi-value operations 🦄](#multi-value-operations-) - [Exception handling operations 🦄](#exception-handling-operations-) - [Reference types operations 🦄](#reference-types-operations-) - [Expression manipulation](#expression-manipulation) - [Relooper](#relooper) - [Source maps](#source-maps) - [Debugging](#debugging) <!-- END doctoc generated TOC please keep comment here to allow auto update --> [Future features](http://webassembly.org/docs/future-features/) 🦄 might not be supported by all runtimes. ### Types * **none**: `Type`<br /> The none type, e.g., `void`. * **i32**: `Type`<br /> 32-bit integer type. * **i64**: `Type`<br /> 64-bit integer type. * **f32**: `Type`<br /> 32-bit float type. * **f64**: `Type`<br /> 64-bit float (double) type. * **v128**: `Type`<br /> 128-bit vector type. 🦄 * **funcref**: `Type`<br /> A function reference. 🦄 * **anyref**: `Type`<br /> Any host reference. 🦄 * **nullref**: `Type`<br /> A null reference. 🦄 * **exnref**: `Type`<br /> An exception reference. 🦄 * **unreachable**: `Type`<br /> Special type indicating unreachable code when obtaining information about an expression. * **auto**: `Type`<br /> Special type used in **Module#block** exclusively. Lets the API figure out a block's result type automatically. * **createType**(types: `Type[]`): `Type`<br /> Creates a multi-value type from an array of types. * **expandType**(type: `Type`): `Type[]`<br /> Expands a multi-value type to an array of types. ### Module construction * new **Module**()<br /> Constructs a new module. * **parseText**(text: `string`): `Module`<br /> Creates a module from Binaryen's s-expression text format (not official stack-style text format). * **readBinary**(data: `Uint8Array`): `Module`<br /> Creates a module from binary data. ### Module manipulation * Module#**addFunction**(name: `string`, params: `Type`, results: `Type`, vars: `Type[]`, body: `ExpressionRef`): `FunctionRef`<br /> Adds a function. `vars` indicate additional locals, in the given order. * Module#**getFunction**(name: `string`): `FunctionRef`<br /> Gets a function, by name, * Module#**removeFunction**(name: `string`): `void`<br /> Removes a function, by name. * Module#**getNumFunctions**(): `number`<br /> Gets the number of functions within the module. * Module#**getFunctionByIndex**(index: `number`): `FunctionRef`<br /> Gets the function at the specified index. 
* Module#**addFunctionImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, params: `Type`, results: `Type`): `void`<br /> Adds a function import. * Module#**addTableImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a table import. There's just one table for now, using name `"0"`. * Module#**addMemoryImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a memory import. There's just one memory for now, using name `"0"`. * Module#**addGlobalImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, globalType: `Type`): `void`<br /> Adds a global variable import. Imported globals must be immutable. * Module#**addFunctionExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a function export. * Module#**addTableExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a table export. There's just one table for now, using name `"0"`. * Module#**addMemoryExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a memory export. There's just one memory for now, using name `"0"`. * Module#**addGlobalExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a global variable export. Exported globals must be immutable. * Module#**getNumExports**(): `number`<br /> Gets the number of exports witin the module. * Module#**getExportByIndex**(index: `number`): `ExportRef`<br /> Gets the export at the specified index. * Module#**removeExport**(externalName: `string`): `void`<br /> Removes an export, by external name. * Module#**addGlobal**(name: `string`, type: `Type`, mutable: `number`, value: `ExpressionRef`): `GlobalRef`<br /> Adds a global instance variable. * Module#**getGlobal**(name: `string`): `GlobalRef`<br /> Gets a global, by name, * Module#**removeGlobal**(name: `string`): `void`<br /> Removes a global, by name. * Module#**setFunctionTable**(initial: `number`, maximum: `number`, funcs: `string[]`, offset?: `ExpressionRef`): `void`<br /> Sets the contents of the function table. There's just one table for now, using name `"0"`. * Module#**getFunctionTable**(): `{ imported: boolean, segments: TableElement[] }`<br /> Gets the contents of the function table. * TableElement#**offset**: `ExpressionRef` * TableElement#**names**: `string[]` * Module#**setMemory**(initial: `number`, maximum: `number`, exportName: `string | null`, segments: `MemorySegment[]`, flags?: `number[]`, shared?: `boolean`): `void`<br /> Sets the memory. There's just one memory for now, using name `"0"`. Providing `exportName` also creates a memory export. * MemorySegment#**offset**: `ExpressionRef` * MemorySegment#**data**: `Uint8Array` * MemorySegment#**passive**: `boolean` * Module#**getNumMemorySegments**(): `number`<br /> Gets the number of memory segments within the module. * Module#**getMemorySegmentInfoByIndex**(index: `number`): `MemorySegmentInfo`<br /> Gets information about the memory segment at the specified index. * MemorySegmentInfo#**offset**: `number` * MemorySegmentInfo#**data**: `Uint8Array` * MemorySegmentInfo#**passive**: `boolean` * Module#**setStart**(start: `FunctionRef`): `void`<br /> Sets the start function. * Module#**getFeatures**(): `Features`<br /> Gets the WebAssembly features enabled for this module. Note that the return value may be a bitmask indicating multiple features. 
Possible feature flags are: * Features.**MVP**: `Features` * Features.**Atomics**: `Features` * Features.**BulkMemory**: `Features` * Features.**MutableGlobals**: `Features` * Features.**NontrappingFPToInt**: `Features` * Features.**SignExt**: `Features` * Features.**SIMD128**: `Features` * Features.**ExceptionHandling**: `Features` * Features.**TailCall**: `Features` * Features.**ReferenceTypes**: `Features` * Features.**Multivalue**: `Features` * Features.**All**: `Features` * Module#**setFeatures**(features: `Features`): `void`<br /> Sets the WebAssembly features enabled for this module. * Module#**addCustomSection**(name: `string`, contents: `Uint8Array`): `void`<br /> Adds a custom section to the binary. * Module#**autoDrop**(): `void`<br /> Enables automatic insertion of `drop` operations where needed. Lets you not worry about dropping when creating your code. * **getFunctionInfo**(ftype: `FunctionRef`: `FunctionInfo`<br /> Obtains information about a function. * FunctionInfo#**name**: `string` * FunctionInfo#**module**: `string | null` (if imported) * FunctionInfo#**base**: `string | null` (if imported) * FunctionInfo#**params**: `Type` * FunctionInfo#**results**: `Type` * FunctionInfo#**vars**: `Type` * FunctionInfo#**body**: `ExpressionRef` * **getGlobalInfo**(global: `GlobalRef`): `GlobalInfo`<br /> Obtains information about a global. * GlobalInfo#**name**: `string` * GlobalInfo#**module**: `string | null` (if imported) * GlobalInfo#**base**: `string | null` (if imported) * GlobalInfo#**type**: `Type` * GlobalInfo#**mutable**: `boolean` * GlobalInfo#**init**: `ExpressionRef` * **getExportInfo**(export_: `ExportRef`): `ExportInfo`<br /> Obtains information about an export. * ExportInfo#**kind**: `ExternalKind` * ExportInfo#**name**: `string` * ExportInfo#**value**: `string` Possible `ExternalKind` values are: * **ExternalFunction**: `ExternalKind` * **ExternalTable**: `ExternalKind` * **ExternalMemory**: `ExternalKind` * **ExternalGlobal**: `ExternalKind` * **ExternalEvent**: `ExternalKind` * **getEventInfo**(event: `EventRef`): `EventInfo`<br /> Obtains information about an event. * EventInfo#**name**: `string` * EventInfo#**module**: `string | null` (if imported) * EventInfo#**base**: `string | null` (if imported) * EventInfo#**attribute**: `number` * EventInfo#**params**: `Type` * EventInfo#**results**: `Type` * **getSideEffects**(expr: `ExpressionRef`, features: `FeatureFlags`): `SideEffects`<br /> Gets the side effects of the specified expression. * SideEffects.**None**: `SideEffects` * SideEffects.**Branches**: `SideEffects` * SideEffects.**Calls**: `SideEffects` * SideEffects.**ReadsLocal**: `SideEffects` * SideEffects.**WritesLocal**: `SideEffects` * SideEffects.**ReadsGlobal**: `SideEffects` * SideEffects.**WritesGlobal**: `SideEffects` * SideEffects.**ReadsMemory**: `SideEffects` * SideEffects.**WritesMemory**: `SideEffects` * SideEffects.**ImplicitTrap**: `SideEffects` * SideEffects.**IsAtomic**: `SideEffects` * SideEffects.**Throws**: `SideEffects` * SideEffects.**Any**: `SideEffects` ### Module validation * Module#**validate**(): `boolean`<br /> Validates the module. Returns `true` if valid, otherwise prints validation errors and returns `false`. ### Module optimization * Module#**optimize**(): `void`<br /> Optimizes the module using the default optimization passes. * Module#**optimizeFunction**(func: `FunctionRef | string`): `void`<br /> Optimizes a single function using the default optimization passes. 
* Module#**runPasses**(passes: `string[]`): `void`<br /> Runs the specified passes on the module. * Module#**runPassesOnFunction**(func: `FunctionRef | string`, passes: `string[]`): `void`<br /> Runs the specified passes on a single function. * **getOptimizeLevel**(): `number`<br /> Gets the currently set optimize level. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **setOptimizeLevel**(level: `number`): `void`<br /> Sets the optimization level to use. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **getShrinkLevel**(): `number`<br /> Gets the currently set shrink level. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **setShrinkLevel**(level: `number`): `void`<br /> Sets the shrink level to use. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **getDebugInfo**(): `boolean`<br /> Gets whether generating debug information is currently enabled or not. * **setDebugInfo**(on: `boolean`): `void`<br /> Enables or disables debug information in emitted binaries. * **getLowMemoryUnused**(): `boolean`<br /> Gets whether the low 1K of memory can be considered unused when optimizing. * **setLowMemoryUnused**(on: `boolean`): `void`<br /> Enables or disables whether the low 1K of memory can be considered unused when optimizing. * **getPassArgument**(key: `string`): `string | null`<br /> Gets the value of the specified arbitrary pass argument. * **setPassArgument**(key: `string`, value: `string | null`): `void`<br /> Sets the value of the specified arbitrary pass argument. Removes the respective argument if `value` is `null`. * **clearPassArguments**(): `void`<br /> Clears all arbitrary pass arguments. * **getAlwaysInlineMaxSize**(): `number`<br /> Gets the function size at which we always inline. * **setAlwaysInlineMaxSize**(size: `number`): `void`<br /> Sets the function size at which we always inline. * **getFlexibleInlineMaxSize**(): `number`<br /> Gets the function size which we inline when functions are lightweight. * **setFlexibleInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when functions are lightweight. * **getOneCallerInlineMaxSize**(): `number`<br /> Gets the function size which we inline when there is only one caller. * **setOneCallerInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when there is only one caller. ### Module creation * Module#**emitBinary**(): `Uint8Array`<br /> Returns the module in binary format. * Module#**emitBinary**(sourceMapUrl: `string | null`): `BinaryWithSourceMap`<br /> Returns the module in binary format with its source map. If `sourceMapUrl` is `null`, source map generation is skipped. * BinaryWithSourceMap#**binary**: `Uint8Array` * BinaryWithSourceMap#**sourceMap**: `string | null` * Module#**emitText**(): `string`<br /> Returns the module in Binaryen's s-expression text format (not official stack-style text format). * Module#**emitAsmjs**(): `string`<br /> Returns the [asm.js](http://asmjs.org/) representation of the module. * Module#**dispose**(): `void`<br /> Releases the resources held by the module once it isn't needed anymore. ### Expression construction #### [Control flow](http://webassembly.org/docs/semantics/#control-constructs-and-instructions) * Module#**block**(label: `string | null`, children: `ExpressionRef[]`, resultType?: `Type`): `ExpressionRef`<br /> Creates a block. `resultType` defaults to `none`. 
* Module#**if**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse?: `ExpressionRef`): `ExpressionRef`<br /> Creates an if or if/else combination. * Module#**loop**(label: `string | null`, body: `ExpressionRef`): `ExpressionRef`<br /> Creates a loop. * Module#**br**(label: `string`, condition?: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a branch (br) to a label. * Module#**switch**(labels: `string[]`, defaultLabel: `string`, condition: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a switch (br_table). * Module#**nop**(): `ExpressionRef`<br /> Creates a no-operation (nop) instruction. * Module#**return**(value?: `ExpressionRef`): `ExpressionRef` Creates a return. * Module#**unreachable**(): `ExpressionRef`<br /> Creates an [unreachable](http://webassembly.org/docs/semantics/#unreachable) instruction that will always trap. * Module#**drop**(value: `ExpressionRef`): `ExpressionRef`<br /> Creates a [drop](http://webassembly.org/docs/semantics/#type-parametric-operators) of a value. * Module#**select**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse: `ExpressionRef`, type?: `Type`): `ExpressionRef`<br /> Creates a [select](http://webassembly.org/docs/semantics/#type-parametric-operators) of one of two values. #### [Variable accesses](http://webassembly.org/docs/semantics/#local-variables) * Module#**local.get**(index: `number`, type: `Type`): `ExpressionRef`<br /> Creates a local.get for the local at the specified index. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**local.set**(index: `number`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a local.set for the local at the specified index. * Module#**local.tee**(index: `number`, value: `ExpressionRef`, type: `Type`): `ExpressionRef`<br /> Creates a local.tee for the local at the specified index. A tee differs from a set in that the value remains on the stack. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**global.get**(name: `string`, type: `Type`): `ExpressionRef`<br /> Creates a global.get for the global with the specified name. Note that we must specify the type here as we may not have created the global being accessed yet. * Module#**global.set**(name: `string`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a global.set for the global with the specified name. 
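As a brief, non-authoritative sketch of the variable accessors above (the global name `counter` and the function `inc` are just illustrations, not part of the API):

```js
var binaryen = require("binaryen");
var m = new binaryen.Module();

// declare a mutable i32 global initialized to 0
m.addGlobal("counter", binaryen.i32, true, m.i32.const(0));

// a function that increments the global and yields its new value;
// note the explicit type on global.get, since the global may not
// have been created yet when the expression is built
m.addFunction("inc", binaryen.none, binaryen.i32, [],
  m.block(null, [
    m.global.set("counter",
      m.i32.add(m.global.get("counter", binaryen.i32), m.i32.const(1))
    ),
    m.global.get("counter", binaryen.i32)
  ], binaryen.i32)
);
m.addFunctionExport("inc", "inc");

if (!m.validate()) throw new Error("validation error");
console.log(m.emitText());
```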
#### [Integer operations](http://webassembly.org/docs/semantics/#32-bit-integer-operators) * Module#i32.**const**(value: `number`): `ExpressionRef` * Module#i32.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i64.**const**(low: `number`, high: `number`): `ExpressionRef` * Module#i64.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` 
* Module#i64.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Floating point operations](http://webassembly.org/docs/semantics/#floating-point-operators) * Module#f32.**const**(value: `number`): `ExpressionRef` * Module#f32.**const_bits**(value: `number`): `ExpressionRef` * Module#f32.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#f64.**const**(value: `number`): `ExpressionRef` * Module#f64.**const_bits**(value: `number`): `ExpressionRef` * Module#f64.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**ceil**(value: `ExpressionRef`): `ExpressionRef` * 
Module#f64.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Datatype conversions](http://webassembly.org/docs/semantics/#datatype-conversions-truncations-reinterpretations-promotions-and-demotions) * Module#i32.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**wrap**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**demote**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**promote**(value: `ExpressionRef`): `ExpressionRef` #### [Function calls](http://webassembly.org/docs/semantics/#calls) * Module#**call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef` Creates a call to a function. 
Note that we must specify the return type here as we may not have created the function being called yet. * Module#**return_call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef`<br /> Like **call**, but creates a tail-call. 🦄 * Module#**call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Similar to **call**, but calls indirectly, i.e., via a function pointer, so an expression replaces the name as the called value. * Module#**return_call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Like **call_indirect**, but creates a tail-call. 🦄 #### [Linear memory accesses](http://webassembly.org/docs/semantics/#linear-memory-accesses) * Module#i32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> > * Module#i64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store32**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Host operations](http://webassembly.org/docs/semantics/#resizing) * Module#**memory.size**(): `ExpressionRef` * Module#**memory.grow**(value: `number`): `ExpressionRef` #### [Vector 
operations](https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md) 🦄 * Module#v128.**const**(bytes: `Uint8Array`): `ExpressionRef` * Module#v128.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#v128.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#v128.**not**(value: `ExpressionRef`): `ExpressionRef` * Module#v128.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**andnot**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**bitselect**(left: `ExpressionRef`, right: `ExpressionRef`, cond: `ExpressionRef`): `ExpressionRef` > * Module#i8x16.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_u**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i16x8.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_u**(value: 
`ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**dot_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): 
`ExpressionRef` * Module#i64x2.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#f32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**lt**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#v8x16.**shuffle**(left: `ExpressionRef`, right: `ExpressionRef`, mask: `Uint8Array`): `ExpressionRef` * Module#v8x16.**swizzle**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v8x16.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v16x8.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v32x4.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v64x2.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` #### [Atomic memory accesses](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#atomic-memory-accesses) 🦄 * Module#i32.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load32_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store32**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` 
#### [Atomic read-modify-write operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#read-modify-write) 🦄 * Module#i32.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.add**(offset: 
`number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` #### [Atomic wait and notify operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#wait-and-notify-operators) 🦄 * Module#i32.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#**atomic.notify**(ptr: `ExpressionRef`, notifyCount: `ExpressionRef`): `ExpressionRef` * Module#**atomic.fence**(): `ExpressionRef` #### [Sign extension operations](https://github.com/WebAssembly/sign-extension-ops/blob/master/proposals/sign-extension-ops/Overview.md) 🦄 * Module#i32.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend32_s**(value: `ExpressionRef`): `ExpressionRef` #### [Multi-value 
operations](https://github.com/WebAssembly/multi-value/blob/master/proposals/multi-value/Overview.md) 🦄 Note that these are pseudo instructions enabling Binaryen to reason about multiple values on the stack. * Module#**push**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**pop**(): `ExpressionRef` * Module#i64.**pop**(): `ExpressionRef` * Module#f32.**pop**(): `ExpressionRef` * Module#f64.**pop**(): `ExpressionRef` * Module#v128.**pop**(): `ExpressionRef` * Module#funcref.**pop**(): `ExpressionRef` * Module#anyref.**pop**(): `ExpressionRef` * Module#nullref.**pop**(): `ExpressionRef` * Module#exnref.**pop**(): `ExpressionRef` * Module#tuple.**make**(elements: `ExpressionRef[]`): `ExpressionRef` * Module#tuple.**extract**(tuple: `ExpressionRef`, index: `number`): `ExpressionRef` #### [Exception handling operations](https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md) 🦄 * Module#**try**(body: `ExpressionRef`, catchBody: `ExpressionRef`): `ExpressionRef` * Module#**throw**(event: `string`, operands: `ExpressionRef[]`): `ExpressionRef` * Module#**rethrow**(exnref: `ExpressionRef`): `ExpressionRef` * Module#**br_on_exn**(label: `string`, event: `string`, exnref: `ExpressionRef`): `ExpressionRef` > * Module#**addEvent**(name: `string`, attribute: `number`, params: `Type`, results: `Type`): `Event` * Module#**getEvent**(name: `string`): `Event` * Module#**removeEvent**(name: `string`): `void` * Module#**addEventImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, attribute: `number`, params: `Type`, results: `Type`): `void` * Module#**addEventExport**(internalName: `string`, externalName: `string`): `ExportRef` #### [Reference types operations](https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md) 🦄 * Module#ref.**null**(): `ExpressionRef` * Module#ref.**is_null**(value: `ExpressionRef`): `ExpressionRef` * Module#ref.**func**(name: `string`): `ExpressionRef` ### Expression manipulation * **getExpressionId**(expr: `ExpressionRef`): `ExpressionId`<br /> Gets the id (kind) of the specified expression. 
Possible values are: * **InvalidId**: `ExpressionId` * **BlockId**: `ExpressionId` * **IfId**: `ExpressionId` * **LoopId**: `ExpressionId` * **BreakId**: `ExpressionId` * **SwitchId**: `ExpressionId` * **CallId**: `ExpressionId` * **CallIndirectId**: `ExpressionId` * **LocalGetId**: `ExpressionId` * **LocalSetId**: `ExpressionId` * **GlobalGetId**: `ExpressionId` * **GlobalSetId**: `ExpressionId` * **LoadId**: `ExpressionId` * **StoreId**: `ExpressionId` * **ConstId**: `ExpressionId` * **UnaryId**: `ExpressionId` * **BinaryId**: `ExpressionId` * **SelectId**: `ExpressionId` * **DropId**: `ExpressionId` * **ReturnId**: `ExpressionId` * **HostId**: `ExpressionId` * **NopId**: `ExpressionId` * **UnreachableId**: `ExpressionId` * **AtomicCmpxchgId**: `ExpressionId` * **AtomicRMWId**: `ExpressionId` * **AtomicWaitId**: `ExpressionId` * **AtomicNotifyId**: `ExpressionId` * **AtomicFenceId**: `ExpressionId` * **SIMDExtractId**: `ExpressionId` * **SIMDReplaceId**: `ExpressionId` * **SIMDShuffleId**: `ExpressionId` * **SIMDTernaryId**: `ExpressionId` * **SIMDShiftId**: `ExpressionId` * **SIMDLoadId**: `ExpressionId` * **MemoryInitId**: `ExpressionId` * **DataDropId**: `ExpressionId` * **MemoryCopyId**: `ExpressionId` * **MemoryFillId**: `ExpressionId` * **RefNullId**: `ExpressionId` * **RefIsNullId**: `ExpressionId` * **RefFuncId**: `ExpressionId` * **TryId**: `ExpressionId` * **ThrowId**: `ExpressionId` * **RethrowId**: `ExpressionId` * **BrOnExnId**: `ExpressionId` * **PushId**: `ExpressionId` * **PopId**: `ExpressionId` * **getExpressionType**(expr: `ExpressionRef`): `Type`<br /> Gets the type of the specified expression. * **getExpressionInfo**(expr: `ExpressionRef`): `ExpressionInfo`<br /> Obtains information about an expression, always including: * Info#**id**: `ExpressionId` * Info#**type**: `Type` Additional properties depend on the expression's `id` and are usually equivalent to the respective parameters when creating such an expression: * BlockInfo#**name**: `string` * BlockInfo#**children**: `ExpressionRef[]` > * IfInfo#**condition**: `ExpressionRef` * IfInfo#**ifTrue**: `ExpressionRef` * IfInfo#**ifFalse**: `ExpressionRef | null` > * LoopInfo#**name**: `string` * LoopInfo#**body**: `ExpressionRef` > * BreakInfo#**name**: `string` * BreakInfo#**condition**: `ExpressionRef | null` * BreakInfo#**value**: `ExpressionRef | null` > * SwitchInfo#**names**: `string[]` * SwitchInfo#**defaultName**: `string | null` * SwitchInfo#**condition**: `ExpressionRef` * SwitchInfo#**value**: `ExpressionRef | null` > * CallInfo#**target**: `string` * CallInfo#**operands**: `ExpressionRef[]` > * CallImportInfo#**target**: `string` * CallImportInfo#**operands**: `ExpressionRef[]` > * CallIndirectInfo#**target**: `ExpressionRef` * CallIndirectInfo#**operands**: `ExpressionRef[]` > * LocalGetInfo#**index**: `number` > * LocalSetInfo#**isTee**: `boolean` * LocalSetInfo#**index**: `number` * LocalSetInfo#**value**: `ExpressionRef` > * GlobalGetInfo#**name**: `string` > * GlobalSetInfo#**name**: `string` * GlobalSetInfo#**value**: `ExpressionRef` > * LoadInfo#**isAtomic**: `boolean` * LoadInfo#**isSigned**: `boolean` * LoadInfo#**offset**: `number` * LoadInfo#**bytes**: `number` * LoadInfo#**align**: `number` * LoadInfo#**ptr**: `ExpressionRef` > * StoreInfo#**isAtomic**: `boolean` * StoreInfo#**offset**: `number` * StoreInfo#**bytes**: `number` * StoreInfo#**align**: `number` * StoreInfo#**ptr**: `ExpressionRef` * StoreInfo#**value**: `ExpressionRef` > * ConstInfo#**value**: `number | { low: number, high: number 
}` > * UnaryInfo#**op**: `number` * UnaryInfo#**value**: `ExpressionRef` > * BinaryInfo#**op**: `number` * BinaryInfo#**left**: `ExpressionRef` * BinaryInfo#**right**: `ExpressionRef` > * SelectInfo#**ifTrue**: `ExpressionRef` * SelectInfo#**ifFalse**: `ExpressionRef` * SelectInfo#**condition**: `ExpressionRef` > * DropInfo#**value**: `ExpressionRef` > * ReturnInfo#**value**: `ExpressionRef | null` > * NopInfo > * UnreachableInfo > * HostInfo#**op**: `number` * HostInfo#**nameOperand**: `string | null` * HostInfo#**operands**: `ExpressionRef[]` > * AtomicRMWInfo#**op**: `number` * AtomicRMWInfo#**bytes**: `number` * AtomicRMWInfo#**offset**: `number` * AtomicRMWInfo#**ptr**: `ExpressionRef` * AtomicRMWInfo#**value**: `ExpressionRef` > * AtomicCmpxchgInfo#**bytes**: `number` * AtomicCmpxchgInfo#**offset**: `number` * AtomicCmpxchgInfo#**ptr**: `ExpressionRef` * AtomicCmpxchgInfo#**expected**: `ExpressionRef` * AtomicCmpxchgInfo#**replacement**: `ExpressionRef` > * AtomicWaitInfo#**ptr**: `ExpressionRef` * AtomicWaitInfo#**expected**: `ExpressionRef` * AtomicWaitInfo#**timeout**: `ExpressionRef` * AtomicWaitInfo#**expectedType**: `Type` > * AtomicNotifyInfo#**ptr**: `ExpressionRef` * AtomicNotifyInfo#**notifyCount**: `ExpressionRef` > * AtomicFenceInfo > * SIMDExtractInfo#**op**: `Op` * SIMDExtractInfo#**vec**: `ExpressionRef` * SIMDExtractInfo#**index**: `ExpressionRef` > * SIMDReplaceInfo#**op**: `Op` * SIMDReplaceInfo#**vec**: `ExpressionRef` * SIMDReplaceInfo#**index**: `ExpressionRef` * SIMDReplaceInfo#**value**: `ExpressionRef` > * SIMDShuffleInfo#**left**: `ExpressionRef` * SIMDShuffleInfo#**right**: `ExpressionRef` * SIMDShuffleInfo#**mask**: `Uint8Array` > * SIMDTernaryInfo#**op**: `Op` * SIMDTernaryInfo#**a**: `ExpressionRef` * SIMDTernaryInfo#**b**: `ExpressionRef` * SIMDTernaryInfo#**c**: `ExpressionRef` > * SIMDShiftInfo#**op**: `Op` * SIMDShiftInfo#**vec**: `ExpressionRef` * SIMDShiftInfo#**shift**: `ExpressionRef` > * SIMDLoadInfo#**op**: `Op` * SIMDLoadInfo#**offset**: `number` * SIMDLoadInfo#**align**: `number` * SIMDLoadInfo#**ptr**: `ExpressionRef` > * MemoryInitInfo#**segment**: `number` * MemoryInitInfo#**dest**: `ExpressionRef` * MemoryInitInfo#**offset**: `ExpressionRef` * MemoryInitInfo#**size**: `ExpressionRef` > * MemoryDropInfo#**segment**: `number` > * MemoryCopyInfo#**dest**: `ExpressionRef` * MemoryCopyInfo#**source**: `ExpressionRef` * MemoryCopyInfo#**size**: `ExpressionRef` > * MemoryFillInfo#**dest**: `ExpressionRef` * MemoryFillInfo#**value**: `ExpressionRef` * MemoryFillInfo#**size**: `ExpressionRef` > * TryInfo#**body**: `ExpressionRef` * TryInfo#**catchBody**: `ExpressionRef` > * RefNullInfo > * RefIsNullInfo#**value**: `ExpressionRef` > * RefFuncInfo#**func**: `string` > * ThrowInfo#**event**: `string` * ThrowInfo#**operands**: `ExpressionRef[]` > * RethrowInfo#**exnref**: `ExpressionRef` > * BrOnExnInfo#**name**: `string` * BrOnExnInfo#**event**: `string` * BrOnExnInfo#**exnref**: `ExpressionRef` > * PopInfo > * PushInfo#**value**: `ExpressionRef` * **emitText**(expression: `ExpressionRef`): `string`<br /> Emits the expression in Binaryen's s-expression text format (not official stack-style text format). * **copyExpression**(expression: `ExpressionRef`): `ExpressionRef`<br /> Creates a deep copy of an expression. ### Relooper * new **Relooper**()<br /> Constructs a relooper instance. This lets you provide an arbitrary CFG, and the relooper will structure it for WebAssembly. 
* Relooper#**addBlock**(code: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block to the CFG, containing the provided code as its body. * Relooper#**addBranch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, condition: `ExpressionRef`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block to another block, with a condition (or nothing, if this is the default branch to take from the origin - each block must have one such branch), and optional code to execute on the branch (useful for phis). * Relooper#**addBlockWithSwitch**(code: `ExpressionRef`, condition: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block, which ends with a switch/br_table, with provided code and condition (that determines where we go in the switch). * Relooper#**addBranchForSwitch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, indexes: `number[]`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block ending in a switch, to another block, using an array of indexes that determine where to go, and optional code to execute on the branch. * Relooper#**renderAndDispose**(entry: `RelooperBlockRef`, labelHelper: `number`, module: `Module`): `ExpressionRef`<br /> Renders and cleans up the Relooper instance. Call this after you have created all the blocks and branches, giving it the entry block (where control flow begins), a label helper variable (an index of a local we can use, necessary for irreducible control flow), and the module. This returns an expression - normal WebAssembly code - that you can use normally anywhere. ### Source maps * Module#**addDebugInfoFileName**(filename: `string`): `number`<br /> Adds a debug info file name to the module and returns its index. * Module#**getDebugInfoFileName**(index: `number`): `string | null` <br /> Gets the name of the debug info file at the specified index. * Module#**setDebugLocation**(func: `FunctionRef`, expr: `ExpressionRef`, fileIndex: `number`, lineNumber: `number`, columnNumber: `number`): `void`<br /> Sets the debug location of the specified `ExpressionRef` within the specified `FunctionRef`. ### Debugging * Module#**interpret**(): `void`<br /> Runs the module in the interpreter, calling the start function. # rechoir [![Build Status](https://secure.travis-ci.org/tkellen/js-rechoir.png)](http://travis-ci.org/tkellen/js-rechoir) > Require any supported file as a node module. [![NPM](https://nodei.co/npm/rechoir.png)](https://nodei.co/npm/rechoir/) ## What is it? This module, in conjunction with [interpret]-like objects, can register any file type the npm ecosystem has a module loader for. This library is a dependency of [Liftoff]. ## API ### prepare(config, filepath, requireFrom) Look for a module loader associated with the provided file and attempt to require it. If necessary, run any setup required to inject it into [require.extensions](http://nodejs.org/api/globals.html#globals_require_extensions). `config` An [interpret]-like configuration object. `filepath` A file whose type you'd like to register a module loader for. `requireFrom` An optional path to start searching for the module required to load the requested file. Defaults to the directory of `filepath`. If calling this method is successful (i.e. it doesn't throw), you can now require files of the type you requested natively. An error with a `failures` property will be thrown if the module loader(s) configured for a given extension cannot be registered. If a loader is already registered, this will simply return `true`. 
**Note:** While rechoir will automatically load and register transpilers like `coffee-script`, you must provide a local installation. The transpilers are **not** bundled with this module. #### Usage ```js const config = require('interpret').extensions; const rechoir = require('rechoir'); rechoir.prepare(config, './test/fixtures/test.coffee'); rechoir.prepare(config, './test/fixtures/test.csv'); rechoir.prepare(config, './test/fixtures/test.toml'); console.log(require('./test/fixtures/test.coffee')); console.log(require('./test/fixtures/test.csv')); console.log(require('./test/fixtures/test.toml')); ``` [interpret]: http://github.com/tkellen/js-interpret [Liftoff]: http://github.com/tkellen/js-liftoff # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows note On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Note that PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Then, run the program to be debugged as usual. 
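Namespaces can also be toggled from code instead of the environment. A small sketch using the package's `enable` helper together with the `enabled` property described further below (the namespace names are just the examples from above):

```js
const debug = require('debug');

// Same comma/space-delimited syntax as the DEBUG environment variable.
debug.enable('http,worker:*');

const httpLog = debug('http');
console.log(httpLog.enabled); // true
httpLog('enabled at runtime');
```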
## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see how much time is spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well; the "+NNNms" will show you how much time was spent between calls. <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debugger you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", and "connect:session". Instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS` | Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. | | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. 
See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. 
[[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img 
src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # Near Bindings Generator Transforms the Assembyscript AST to serialize exported functions and add `encode` and `decode` functions for generating and parsing JSON strings. ## Using via CLI After installling, `npm install nearprotocol/near-bindgen-as`, it can be added to the cli arguments of the assemblyscript compiler you must add the following: ```bash asc <file> --transform near-bindgen-as ... 
``` This module also adds a binary `near-asc` which adds the default arguments required to build near contracts as well as the transformer. ```bash near-asc <input file> <output file> ``` ## Using a script to compile Another way is to add a file such as `asconfig.js`, for example: ```js const compile = require("near-bindgen-as/compiler").compile; compile("assembly/index.ts", // input file "out/index.wasm", // output file [ // "-O1", // Optional arguments "--debug", "--measure" ], // Prints out the final cli arguments passed to compiler. {verbose: true} ); ``` It can then be built with `node asconfig.js`. There is an example of this in the test directory. # get-caller-file [![Build Status](https://travis-ci.org/stefanpenner/get-caller-file.svg?branch=master)](https://travis-ci.org/stefanpenner/get-caller-file) [![Build status](https://ci.appveyor.com/api/projects/status/ol2q94g1932cy14a/branch/master?svg=true)](https://ci.appveyor.com/project/embercli/get-caller-file/branch/master) This is a utility which allows a function to figure out from which file it was invoked. It does so by inspecting v8's stack trace at the time it is invoked. Inspired by http://stackoverflow.com/questions/13227489 *note: this relies on Node/V8 specific APIs; as such, other runtimes may not work* ## Installation ```bash yarn add get-caller-file ``` ## Usage Given: ```js // ./foo.js const getCallerFile = require('get-caller-file'); module.exports = function() { return getCallerFile(); // figures out who called it }; ``` ```js // index.js const foo = require('./foo'); foo() // => /full/path/to/this/file/index.js ``` ## Options: * `getCallerFile(position = 2)`: where `position` is the stack frame whose fileName we want. # v8-compile-cache [![Build Status](https://travis-ci.org/zertosh/v8-compile-cache.svg?branch=master)](https://travis-ci.org/zertosh/v8-compile-cache) `v8-compile-cache` attaches a `require` hook to use [V8's code cache](https://v8project.blogspot.com/2015/07/code-caching.html) to speed up instantiation time. The "code cache" is the work of parsing and compiling done by V8. The ability to tap into V8 to produce/consume this cache was introduced in [Node v5.7.0](https://nodejs.org/en/blog/release/v5.7.0/). ## Usage 1. Add the dependency: ```sh $ npm install --save v8-compile-cache ``` 2. Then, in your entry module add: ```js require('v8-compile-cache'); ``` **Requiring `v8-compile-cache` in Node <5.7.0 is a noop – but you need at least Node 4.0.0 to support the ES2015 syntax used by `v8-compile-cache`.** ## Options Set the environment variable `DISABLE_V8_COMPILE_CACHE=1` to disable the cache. The cache directory is defined by the environment variable `V8_COMPILE_CACHE_CACHE_DIR` and defaults to `<os.tmpdir()>/v8-compile-cache-<V8_VERSION>`. ## Internals Cache files are suffixed `.BLOB` and `.MAP` corresponding to the entry module that required `v8-compile-cache`. The cache is _entry module specific_ because it is faster to load the entire code cache into memory at once than it is to read it from disk on a file-by-file basis. ## Benchmarks See https://github.com/zertosh/v8-compile-cache/tree/master/bench. 
**Load Times:** | Module | Without Cache | With Cache | | ---------------- | -------------:| ----------:| | `babel-core` | `218ms` | `185ms` | | `yarn` | `153ms` | `113ms` | | `yarn` (bundled) | `228ms` | `105ms` | _^ Includes the overhead of loading the cache itself._ ## Acknowledgements * `FileSystemBlobStore` and `NativeCompileCache` are based on Atom's implementation of their v8 compile cache: - https://github.com/atom/atom/blob/b0d7a8a/src/file-system-blob-store.js - https://github.com/atom/atom/blob/b0d7a8a/src/native-compile-cache.js * `mkdirpSync` is based on: - https://github.com/substack/node-mkdirp/blob/f2003bb/index.js#L55-L98 # lodash.sortby v4.7.0 The [lodash](https://lodash.com/) method `_.sortBy` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.sortby ``` In Node.js: ```js var sortBy = require('lodash.sortby'); ``` See the [documentation](https://lodash.com/docs#sortBy) or [package source](https://github.com/lodash/lodash/blob/4.7.0-npm-packages/lodash.sortby) for more details. # prelude.ls [![Build Status](https://travis-ci.org/gkz/prelude-ls.png?branch=master)](https://travis-ci.org/gkz/prelude-ls) is a functionally oriented utility library. It is powerful and flexible. Almost all of its functions are curried. It is written in, and is the recommended base library for, <a href="http://livescript.net">LiveScript</a>. See **[the prelude.ls site](http://preludels.com)** for examples, a reference, and more. You can install via npm: `npm install prelude-ls` ### Development `make test` to test `make build` to build `lib` from `src` `make build-browser` to build browser versions # eslint-visitor-keys [![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys) Constants and utilities about visitor keys to traverse an AST. ## 💿 Installation Use [npm] to install. ```bash $ npm install eslint-visitor-keys ``` ### Requirements - [Node.js] 10.0.0 or later. ## 📖 Usage ```js const evk = require("eslint-visitor-keys") ``` ### evk.KEYS > type: `{ [type: string]: string[] | undefined }` Visitor keys. These keys are frozen. This is an object whose keys are the types of [ESTree] nodes. Their values are an array of property names which have child nodes. For example: ``` console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"] ``` ### evk.getKeys(node) > type: `(node: object) => string[]` Get the visitor keys of a given AST node. This is similar to `Object.keys(node)` of ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`. This will be used to traverse unknown nodes. For example: ``` const node = { type: "AssignmentExpression", left: { type: "Identifier", name: "foo" }, right: { type: "Literal", value: 0 } } console.log(evk.getKeys(node)) // → ["type", "left", "right"] ``` ### evk.unionWith(additionalKeys) > type: `(additionalKeys: object) => { [type: string]: string[] | undefined }` Make the union set with `evk.KEYS` and the given keys. 
- The order of keys is: `additionalKeys` first, then `evk.KEYS` concatenated after that. - It removes duplicated keys, keeping the first one. For example: ``` console.log(evk.unionWith({ MethodDefinition: ["decorators"] })) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... } ``` ## 📰 Change log See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases). ## 🍻 Contributing Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/). ### Development commands - `npm test` runs tests and measures code coverage. - `npm run lint` checks source code with ESLint. - `npm run coverage` opens the code coverage report of the previous test with your default browser. - `npm run release` publishes this package to the [npm] registry. [npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree # Visitor utilities for AssemblyScript Compiler transformers ## Example ### List Fields The transformer: ```ts import { ClassDeclaration, FieldDeclaration, MethodDeclaration, } from "../../as"; import { ClassDecorator, registerDecorator } from "../decorator"; import { toString } from "../utils"; class ListMembers extends ClassDecorator { visitFieldDeclaration(node: FieldDeclaration): void { if (!node.name) console.log(toString(node) + "\n"); const name = toString(node.name); const _type = toString(node.type!); this.stdout.write(name + ": " + _type + "\n"); } visitMethodDeclaration(node: MethodDeclaration): void { const name = toString(node.name); if (name == "constructor") { return; } const sig = toString(node.signature); this.stdout.write(name + ": " + sig + "\n"); } visitClassDeclaration(node: ClassDeclaration): void { this.visit(node.members); } get name(): string { return "list"; } } export = registerDecorator(new ListMembers()); ``` assembly/foo.ts: ```ts @list class Foo { a: u8; b: bool; i: i32; } ``` And then compile with the `--transform` flag: ``` asc assembly/foo.ts --transform ./dist/examples/list --noEmit ``` Which prints the following to the console: ``` a: u8 b: bool i: i32 ``` # ts-mixer [version-badge]: https://badgen.net/npm/v/ts-mixer [version-link]: https://npmjs.com/package/ts-mixer [build-badge]: https://img.shields.io/github/workflow/status/tannerntannern/ts-mixer/ts-mixer%20CI [build-link]: https://github.com/tannerntannern/ts-mixer/actions [ts-versions]: https://badgen.net/badge/icon/3.8,3.9,4.0,4.1,4.2?icon=typescript&label&list=| [node-versions]: https://badgen.net/badge/node/10%2C12%2C14/blue/?list=| [![npm version][version-badge]][version-link] [![github actions][build-badge]][build-link] [![TS Versions][ts-versions]][build-link] [![Node.js Versions][node-versions]][build-link] [![Minified Size](https://badgen.net/bundlephobia/min/ts-mixer)](https://bundlephobia.com/result?p=ts-mixer) [![Conventional Commits](https://badgen.net/badge/conventional%20commits/1.0.0/yellow)](https://conventionalcommits.org) ## Overview `ts-mixer` brings mixins to TypeScript. "Mixins" to `ts-mixer` are just classes, so you already know how to write them, and you can probably mix classes from your favorite library without trouble. The mixin problem is more nuanced than it appears. I've seen countless code snippets that work for certain situations, but fail in others. `ts-mixer` tries to take the best from all these solutions while accounting for the situations you might not have considered. 
[Quick start guide](#quick-start) ### Features * mixes plain classes * mixes classes that extend other classes * mixes classes that were mixed with `ts-mixer` * supports static properties * supports protected/private properties (the popular function-that-returns-a-class solution does not) * mixes abstract classes (with caveats [[1](#caveats)]) * mixes generic classes (with caveats [[2](#caveats)]) * supports class, method, and property decorators (with caveats [[3, 6](#caveats)]) * mostly supports the complexity presented by constructor functions (with caveats [[4](#caveats)]) * comes with an `instanceof`-like replacement (with caveats [[5, 6](#caveats)]) * [multiple mixing strategies](#settings) (ES6 proxies vs hard copy) ### Caveats 1. Mixing abstract classes requires a bit of a hack that may break in future versions of TypeScript. See [mixing abstract classes](#mixing-abstract-classes) below. 2. Mixing generic classes requires a more cumbersome notation, but it's still possible. See [mixing generic classes](#mixing-generic-classes) below. 3. Using decorators in mixed classes also requires a more cumbersome notation. See [mixing with decorators](#mixing-with-decorators) below. 4. ES6 made it impossible to use `.apply(...)` on class constructors (or any means of calling them without `new`), which makes it impossible for `ts-mixer` to pass the proper `this` to your constructors. This may or may not be an issue for your code, but there are options to work around it. See [dealing with constructors](#dealing-with-constructors) below. 5. `ts-mixer` does not support `instanceof` for mixins, but it does offer a replacement. See the [hasMixin function](#hasmixin) for more details. 6. Certain features (specifically, `@decorator` and `hasMixin`) make use of ES6 `Map`s, which means you must either use ES6+ or polyfill `Map` to use them. If you don't need these features, you should be fine without. ## Quick Start ### Installation ``` $ npm install ts-mixer ``` or if you prefer [Yarn](https://yarnpkg.com): ``` $ yarn add ts-mixer ``` ### Basic Example ```typescript import { Mixin } from 'ts-mixer'; class Foo { protected makeFoo() { return 'foo'; } } class Bar { protected makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { public makeFooBar() { return this.makeFoo() + this.makeBar(); } } const fooBar = new FooBar(); console.log(fooBar.makeFooBar()); // "foobar" ``` ## Special Cases ### Mixing Abstract Classes Abstract classes, by definition, cannot be constructed, which means they cannot take on the type, `new(...args) => any`, and by extension, are incompatible with `ts-mixer`. BUT, you can "trick" TypeScript into giving you all the benefits of an abstract class without making it technically abstract. The trick is just some strategic `// @ts-ignore`'s: ```typescript import { Mixin } from 'ts-mixer'; // note that Foo is not marked as an abstract class class Foo { // @ts-ignore: "Abstract methods can only appear within an abstract class" public abstract makeFoo(): string; } class Bar { public makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { // we still get all the benefits of abstract classes here, because TypeScript // will still complain if this method isn't implemented public makeFoo() { return 'foo'; } } ``` Do note that while this does work quite well, it is a bit of a hack and I can't promise that it will continue to work in future TypeScript versions. 
### Mixing Generic Classes Frustratingly, it is _impossible_ for generic parameters to be referenced in base class expressions. No matter what, you will eventually run into `Base class expressions cannot reference class type parameters.` The way to get around this is to leverage [declaration merging](https://www.typescriptlang.org/docs/handbook/declaration-merging.html), and a slightly different mixing function from ts-mixer: `mix`. It works exactly like `Mixin`, except it's a decorator, which means it doesn't affect the type information of the class being decorated. See it in action below: ```typescript import { mix } from 'ts-mixer'; class Foo<T> { public fooMethod(input: T): T { return input; } } class Bar<T> { public barMethod(input: T): T { return input; } } interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { } @mix(Foo, Bar) class FooBar<T1, T2> { public fooBarMethod(input1: T1, input2: T2) { return [this.fooMethod(input1), this.barMethod(input2)]; } } ``` Key takeaways from this example: * `interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { }` makes sure `FooBar` has the typing we want, thanks to declaration merging * `@mix(Foo, Bar)` wires things up "on the JavaScript side", since the interface declaration has nothing to do with runtime behavior. * The reason we have to use the `mix` decorator is that the typing produced by `Mixin(Foo, Bar)` would conflict with the typing of the interface. `mix` has no effect "on the TypeScript side," thus avoiding type conflicts. ### Mixing with Decorators Popular libraries such as [class-validator](https://github.com/typestack/class-validator) and [TypeORM](https://github.com/typeorm/typeorm) use decorators to add functionality. Unfortunately, `ts-mixer` has no way of knowing what these libraries do with the decorators behind the scenes. So if you want these decorators to be "inherited" with classes you plan to mix, you first have to wrap them with a special `decorate` function exported by `ts-mixer`. Here's an example using `class-validator`: ```typescript import { IsBoolean, IsIn, validate } from 'class-validator'; import { Mixin, decorate } from 'ts-mixer'; class Disposable { @decorate(IsBoolean()) // instead of @IsBoolean() isDisposed: boolean = false; } class Statusable { @decorate(IsIn(['red', 'green'])) // instead of @IsIn(['red', 'green']) status: string = 'green'; } class ExtendedObject extends Mixin(Disposable, Statusable) {} const extendedObject = new ExtendedObject(); extendedObject.status = 'blue'; validate(extendedObject).then(errors => { console.log(errors); }); ``` ### Dealing with Constructors As mentioned in the [caveats section](#caveats), ES6 disallowed calling constructor functions without `new`. This means that the only way for `ts-mixer` to mix instance properties is to instantiate each base class separately, then copy the instance properties into a common object. The consequence of this is that constructors mixed by `ts-mixer` will _not_ receive the proper `this`. **This very well may not be an issue for you!** It only means that your constructors need to be "mostly pure" in terms of how they handle `this`. Specifically, your constructors cannot produce [side effects](https://en.wikipedia.org/wiki/Side_effect_%28computer_science%29) involving `this`, _other than adding properties to `this`_ (the most common side effect in JavaScript constructors). 
If you simply cannot eliminate `this` side effects from your constructor, there is a workaround available: `ts-mixer` will automatically forward constructor parameters to a predesignated init function (`settings.initFunction`) if it's present on the class. Unlike constructors, functions can be called with an arbitrary `this`, so this predesignated init function _will_ have the proper `this`. Here's a basic example: ```typescript import { Mixin, settings } from 'ts-mixer'; settings.initFunction = 'init'; class Person { public static allPeople: Set<Person> = new Set(); protected init() { Person.allPeople.add(this); } } type PartyAffiliation = 'democrat' | 'republican'; class PoliticalParticipant { public static democrats: Set<PoliticalParticipant> = new Set(); public static republicans: Set<PoliticalParticipant> = new Set(); public party: PartyAffiliation; // note that these same args will also be passed to init function public constructor(party: PartyAffiliation) { this.party = party; } protected init(party: PartyAffiliation) { if (party === 'democrat') PoliticalParticipant.democrats.add(this); else PoliticalParticipant.republicans.add(this); } } class Voter extends Mixin(Person, PoliticalParticipant) {} const v1 = new Voter('democrat'); const v2 = new Voter('democrat'); const v3 = new Voter('republican'); const v4 = new Voter('republican'); ``` Note the above `.add(this)` statements. These would not work as expected if they were placed in the constructor instead, since `this` is not the same between the constructor and `init`, as explained above. ## Other Features ### hasMixin As mentioned above, `ts-mixer` does not support `instanceof` for mixins. While it is possible to implement [custom `instanceof` behavior](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/hasInstance), this library does not do so because it would require modifying the source classes, which is deliberately avoided. You can fill this missing functionality with `hasMixin(instance, mixinClass)` instead. See the below example: ```typescript import { Mixin, hasMixin } from 'ts-mixer'; class Foo {} class Bar {} class FooBar extends Mixin(Foo, Bar) {} const instance = new FooBar(); // doesn't work with instanceof... console.log(instance instanceof FooBar) // true console.log(instance instanceof Foo) // false console.log(instance instanceof Bar) // false // but everything works nicely with hasMixin! console.log(hasMixin(instance, FooBar)) // true console.log(hasMixin(instance, Foo)) // true console.log(hasMixin(instance, Bar)) // true ``` `hasMixin(instance, mixinClass)` will work anywhere that `instance instanceof mixinClass` works. Additionally, like `instanceof`, you get the same [type narrowing benefits](https://www.typescriptlang.org/docs/handbook/advanced-types.html#instanceof-type-guards): ```typescript if (hasMixin(instance, Foo)) { // inferred type of instance is "Foo" } if (hasMixin(instance, Bar)) { // inferred type of instance of "Bar" } ``` ## Settings ts-mixer has multiple strategies for mixing classes which can be configured by modifying `settings` from ts-mixer. For example: ```typescript import { settings, Mixin } from 'ts-mixer'; settings.prototypeStrategy = 'proxy'; // then use `Mixin` as normal... ``` ### `settings.prototypeStrategy` * Determines how ts-mixer will mix class prototypes together * Possible values: - `'copy'` (default) - Copies all methods from the classes being mixed into a new prototype object. 
(This will include all methods up the prototype chains as well.) This is the default for ES5 compatibility, but it has the downside of stale references. For example, if you mix `Foo` and `Bar` to make `FooBar`, then redefine a method on `Foo`, `FooBar` will not have the latest methods from `Foo`. If this is not a concern for you, `'copy'` is the best value for this setting. - `'proxy'` - Uses an ES6 Proxy to "soft mix" prototypes. Unlike `'copy'`, updates to the base classes _will_ be reflected in the mixed class, which may be desirable. The downside is that method access is not as performant, nor is it ES5 compatible. ### `settings.staticsStrategy` * Determines how static properties are inherited * Possible values: - `'copy'` (default) - Simply copies all properties (minus `prototype`) from the base classes/constructor functions onto the mixed class. Like `settings.prototypeStrategy = 'copy'`, this strategy also suffers from stale references, but shouldn't be a concern if you don't redefine static methods after mixing. - `'proxy'` - Similar to `settings.prototypeStrategy`, proxy's static method access to base classes. Has the same benefits/downsides. ### `settings.initFunction` * If set, `ts-mixer` will automatically call the function with this name upon construction * Possible values: - `null` (default) - disables the behavior - a string - function name to call upon construction * Read more about why you would want this in [dealing with constructors](#dealing-with-constructors) ### `settings.decoratorInheritance` * Determines how decorators are inherited from classes passed to `Mixin(...)` * Possible values: - `'deep'` (default) - Deeply inherits decorators from all given classes and their ancestors - `'direct'` - Only inherits decorators defined directly on the given classes - `'none'` - Skips decorator inheritance # Author Tanner Nielsen <tannerntannern@gmail.com> * Website - [tannernielsen.com](http://tannernielsen.com) * Github - [tannerntannern](https://github.com/tannerntannern) A JSON with color names and its values. Based on http://dev.w3.org/csswg/css-color/#named-colors. [![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/) ```js var colors = require('color-name'); colors.red //[255,0,0] ``` <a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a> # is-core-module <sup>[![Version Badge][2]][1]</sup> [![github actions][actions-image]][actions-url] [![coverage][codecov-image]][codecov-url] [![dependency status][5]][6] [![dev dependency status][7]][8] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][11]][1] Is this specifier a node.js core module? Optionally provide a node version to check; defaults to the current node version. 
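For instance, a brief hedged sketch of the optional version argument mentioned above (the module and version strings are only illustrative):

```js
var isCore = require('is-core-module');

// Check against a specific node version instead of the one currently running.
isCore('worker_threads', '8.0.0');  // false - not a core module in node 8
isCore('worker_threads', '14.0.0'); // true
```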
## Example

```js
var isCore = require('is-core-module');
var assert = require('assert');
assert(isCore('fs'));
assert(!isCore('butts'));
```

## Tests
Clone the repo, `npm install`, and run `npm test`

[1]: https://npmjs.org/package/is-core-module
[2]: https://versionbadg.es/inspect-js/is-core-module.svg
[5]: https://david-dm.org/inspect-js/is-core-module.svg
[6]: https://david-dm.org/inspect-js/is-core-module
[7]: https://david-dm.org/inspect-js/is-core-module/dev-status.svg
[8]: https://david-dm.org/inspect-js/is-core-module#info=devDependencies
[11]: https://nodei.co/npm/is-core-module.png?downloads=true&stars=true
[license-image]: https://img.shields.io/npm/l/is-core-module.svg
[license-url]: LICENSE
[downloads-image]: https://img.shields.io/npm/dm/is-core-module.svg
[downloads-url]: https://npm-stat.com/charts.html?package=is-core-module
[codecov-image]: https://codecov.io/gh/inspect-js/is-core-module/branch/main/graphs/badge.svg
[codecov-url]: https://app.codecov.io/gh/inspect-js/is-core-module/
[actions-image]: https://img.shields.io/endpoint?url=https://github-actions-badge-u3jn4tfpocch.runkit.sh/inspect-js/is-core-module
[actions-url]: https://github.com/inspect-js/is-core-module/actions

functional-red-black-tree
=========================
A [fully persistent](http://en.wikipedia.org/wiki/Persistent_data_structure) [red-black tree](http://en.wikipedia.org/wiki/Red%E2%80%93black_tree) written 100% in JavaScript. Works both in node.js and in the browser via [browserify](http://browserify.org/).

Functional (or fully persistent) data structures allow for non-destructive updates. So if you insert an element into the tree, it returns a new tree with the inserted element rather than destructively updating the existing tree in place. Doing this requires using extra memory, and if one were naive it could cost as much as reallocating the entire tree. Instead, this data structure saves some memory by recycling references to previously allocated subtrees. This requires only O(log(n)) additional memory per update instead of a full O(n) copy.

Some advantages of this are that it is possible to apply insertions and removals to the tree while still iterating over previous versions of the tree. Functional and persistent data structures can also be useful in many geometric algorithms like point location within triangulations or ray queries, and can be used to analyze the history of executing various algorithms. This added power, though, comes at a cost, since it is generally a bit slower to use a functional data structure than an imperative version. However, if your application needs this behavior then you may consider using this module.
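To make the non-destructive-update behavior concrete, here is a short sketch (assuming the package is installed as shown in the next section):

```javascript
var createTree = require("functional-red-black-tree")

var t1 = createTree()
var t2 = t1.insert("a", 1)

// The original tree is left untouched; every version remains usable.
console.log(t1.length)   // 0
console.log(t2.length)   // 1
console.log(t2.get("a")) // 1
```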
# Install

    npm install functional-red-black-tree

# Example

Here is an example of some basic usage:

```javascript
//Load the library
var createTree = require("functional-red-black-tree")

//Create a tree
var t1 = createTree()

//Insert some items into the tree
var t2 = t1.insert(1, "foo")
var t3 = t2.insert(2, "bar")

//Remove something
var t4 = t3.remove(1)
```

# API

```javascript
var createTree = require("functional-red-black-tree")
```

## Overview

- [Tree methods](#tree-methods)
  - [`var tree = createTree([compare])`](#var-tree-=-createtreecompare)
  - [`tree.keys`](#treekeys)
  - [`tree.values`](#treevalues)
  - [`tree.length`](#treelength)
  - [`tree.get(key)`](#treegetkey)
  - [`tree.insert(key, value)`](#treeinsertkey-value)
  - [`tree.remove(key)`](#treeremovekey)
  - [`tree.find(key)`](#treefindkey)
  - [`tree.ge(key)`](#treegekey)
  - [`tree.gt(key)`](#treegtkey)
  - [`tree.lt(key)`](#treeltkey)
  - [`tree.le(key)`](#treelekey)
  - [`tree.at(position)`](#treeatposition)
  - [`tree.begin`](#treebegin)
  - [`tree.end`](#treeend)
  - [`tree.forEach(visitor(key,value)[, lo[, hi]])`](#treeforEachvisitorkeyvalue-lo-hi)
  - [`tree.root`](#treeroot)
- [Node properties](#node-properties)
  - [`node.key`](#nodekey)
  - [`node.value`](#nodevalue)
  - [`node.left`](#nodeleft)
  - [`node.right`](#noderight)
- [Iterator methods](#iterator-methods)
  - [`iter.key`](#iterkey)
  - [`iter.value`](#itervalue)
  - [`iter.node`](#iternode)
  - [`iter.tree`](#itertree)
  - [`iter.index`](#iterindex)
  - [`iter.valid`](#itervalid)
  - [`iter.clone()`](#iterclone)
  - [`iter.remove()`](#iterremove)
  - [`iter.update(value)`](#iterupdatevalue)
  - [`iter.next()`](#iternext)
  - [`iter.prev()`](#iterprev)
  - [`iter.hasNext`](#iterhasnext)
  - [`iter.hasPrev`](#iterhasprev)

## Tree methods

### `var tree = createTree([compare])`
Creates an empty functional tree

* `compare` is an optional comparison function, same semantics as array.sort()

**Returns** An empty tree ordered by `compare`

### `tree.keys`
A sorted array of all the keys in the tree

### `tree.values`
An array of all the values in the tree

### `tree.length`
The number of items in the tree

### `tree.get(key)`
Retrieves the value associated to the given key

* `key` is the key of the item to look up

**Returns** The value of the first node associated to `key`

### `tree.insert(key, value)`
Creates a new tree with the new pair inserted.

* `key` is the key of the item to insert
* `value` is the value of the item to insert

**Returns** A new tree with `key` and `value` inserted

### `tree.remove(key)`
Removes the first item with `key` in the tree

* `key` is the key of the item to remove

**Returns** A new tree with the given item removed if it exists

### `tree.find(key)`
Returns an iterator pointing to the first item in the tree with `key`, otherwise `null`.

### `tree.ge(key)`
Finds the first item in the tree whose key is `>= key`

* `key` is the key to search for

**Returns** An iterator at the given element.
### `tree.gt(key)`
Finds the first item in the tree whose key is `> key`

* `key` is the key to search for

**Returns** An iterator at the given element

### `tree.lt(key)`
Finds the last item in the tree whose key is `< key`

* `key` is the key to search for

**Returns** An iterator at the given element

### `tree.le(key)`
Finds the last item in the tree whose key is `<= key`

* `key` is the key to search for

**Returns** An iterator at the given element

### `tree.at(position)`
Finds an iterator starting at the given element

* `position` is the index at which the iterator gets created

**Returns** An iterator starting at position

### `tree.begin`
An iterator pointing to the first element in the tree

### `tree.end`
An iterator pointing to the last element in the tree

### `tree.forEach(visitor(key,value)[, lo[, hi]])`
Walks a visitor function over the nodes of the tree in order.

* `visitor(key,value)` is a callback that gets executed on each node. If a truthy value is returned from the visitor, then iteration is stopped.
* `lo` is an optional start of the range to visit (inclusive)
* `hi` is an optional end of the range to visit (non-inclusive)

**Returns** The last value returned by the callback

### `tree.root`
Returns the root node of the tree

## Node properties

Each node of the tree has the following properties:

### `node.key`
The key associated to the node

### `node.value`
The value associated to the node

### `node.left`
The left subtree of the node

### `node.right`
The right subtree of the node

## Iterator methods

### `iter.key`
The key of the item referenced by the iterator

### `iter.value`
The value of the item referenced by the iterator

### `iter.node`
The value of the node at the iterator's current position. `null` if the iterator is not valid.

### `iter.tree`
The tree associated to the iterator

### `iter.index`
Returns the position of this iterator in the sequence.

### `iter.valid`
Checks if the iterator is valid

### `iter.clone()`
Makes a copy of the iterator

### `iter.remove()`
Removes the item at the position of the iterator

**Returns** A new binary search tree with `iter`'s item removed

### `iter.update(value)`
Updates the value of the node in the tree at this iterator

**Returns** A new binary search tree with the corresponding node updated

### `iter.next()`
Advances the iterator to the next position

### `iter.prev()`
Moves the iterator backward one element

### `iter.hasNext`
If true, then the iterator is not at the end of the sequence

### `iter.hasPrev`
If true, then the iterator is not at the beginning of the sequence

# Credits
(c) 2013 Mikola Lysenko. MIT License

# axios // adapters

The modules under `adapters/` are modules that handle dispatching a request and settling a returned `Promise` once a response is received.
## Example ```js var settle = require('./../core/settle'); module.exports = function myAdapter(config) { // At this point: // - config has been merged with defaults // - request transformers have already run // - request interceptors have already run // Make the request using config provided // Upon response settle the Promise return new Promise(function(resolve, reject) { var response = { data: responseData, status: request.status, statusText: request.statusText, headers: responseHeaders, config: config, request: request }; settle(resolve, reject, response); // From here: // - response transformers will run // - response interceptors will run }); } ``` # cliui [![Build Status](https://travis-ci.org/yargs/cliui.svg)](https://travis-ci.org/yargs/cliui) [![Coverage Status](https://coveralls.io/repos/yargs/cliui/badge.svg?branch=)](https://coveralls.io/r/yargs/cliui?branch=) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) easily create complex multi-column command-line-interfaces. ## Example ```js var ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 2, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. # assemblyscript-json ![npm version](https://img.shields.io/npm/v/assemblyscript-json) ![npm downloads per month](https://img.shields.io/npm/dm/assemblyscript-json) JSON encoder / decoder for AssemblyScript. Special thanks to https://github.com/MaxGraey/bignum.wasm for basic unit testing infra for AssemblyScript. 
## Installation `assemblyscript-json` is available as a [npm package](https://www.npmjs.com/package/assemblyscript-json). You can install `assemblyscript-json` in your AssemblyScript project by running: `npm install --save assemblyscript-json` ## Usage ### Parsing JSON ```typescript import { JSON } from "assemblyscript-json"; // Parse an object using the JSON object let jsonObj: JSON.Obj = <JSON.Obj>(JSON.parse('{"hello": "world", "value": 24}')); // We can then use the .getX functions to read from the object if you know it's type // This will return the appropriate JSON.X value if the key exists, or null if the key does not exist let worldOrNull: JSON.Str | null = jsonObj.getString("hello"); // This will return a JSON.Str or null if (worldOrNull != null) { // use .valueOf() to turn the high level JSON.Str type into a string let world: string = worldOrNull.valueOf(); } let numOrNull: JSON.Num | null = jsonObj.getNum("value"); if (numOrNull != null) { // use .valueOf() to turn the high level JSON.Num type into a f64 let value: f64 = numOrNull.valueOf(); } // If you don't know the value type, get the parent JSON.Value let valueOrNull: JSON.Value | null = jsonObj.getValue("hello"); if (valueOrNull != null) { let value = <JSON.Value>valueOrNull; // Next we could figure out what type we are if(value.isString) { // value.isString would be true, so we can cast to a string let innerString = (<JSON.Str>value).valueOf(); let jsonString = (<JSON.Str>value).stringify(); // Do something with string value } } ``` ### Encoding JSON ```typescript import { JSONEncoder } from "assemblyscript-json"; // Create encoder let encoder = new JSONEncoder(); // Construct necessary object encoder.pushObject("obj"); encoder.setInteger("int", 10); encoder.setString("str", ""); encoder.popObject(); // Get serialized data let json: Uint8Array = encoder.serialize(); // Or get serialized data as string let jsonString: string = encoder.stringify(); assert(jsonString, '"obj": {"int": 10, "str": ""}'); // True! ``` ### Custom JSON Deserializers ```typescript import { JSONDecoder, JSONHandler } from "assemblyscript-json"; // Events need to be received by custom object extending JSONHandler. // NOTE: All methods are optional to implement. class MyJSONEventsHandler extends JSONHandler { setString(name: string, value: string): void { // Handle field } setBoolean(name: string, value: bool): void { // Handle field } setNull(name: string): void { // Handle field } setInteger(name: string, value: i64): void { // Handle field } setFloat(name: string, value: f64): void { // Handle field } pushArray(name: string): bool { // Handle array start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popArray(): void { // Handle array end } pushObject(name: string): bool { // Handle object start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popObject(): void { // Handle object end } } // Create decoder let decoder = new JSONDecoder<MyJSONEventsHandler>(new MyJSONEventsHandler()); // Create a byte buffer of our JSON. NOTE: Deserializers work on UTF8 string buffers. 
let jsonString = '{"hello": "world"}'; let jsonBuffer = Uint8Array.wrap(String.UTF8.encode(jsonString)); // Parse JSON decoder.deserialize(jsonBuffer); // This will send events to MyJSONEventsHandler ``` Feel free to look through the [tests](https://github.com/nearprotocol/assemblyscript-json/tree/master/assembly/__tests__) for more usage examples. ## Reference Documentation Reference API Documentation can be found in the [docs directory](./docs). ## License [MIT](./LICENSE) # fs-minipass Filesystem streams based on [minipass](http://npm.im/minipass). 4 classes are exported: - ReadStream - ReadStreamSync - WriteStream - WriteStreamSync When using `ReadStreamSync`, all of the data is made available immediately upon consuming the stream. Nothing is buffered in memory when the stream is constructed. If the stream is piped to a writer, then it will synchronously `read()` and emit data into the writer as fast as the writer can consume it. (That is, it will respect backpressure.) If you call `stream.read()` then it will read the entire file and return the contents. When using `WriteStreamSync`, every write is flushed to the file synchronously. If your writes all come in a single tick, then it'll write it all out in a single tick. It's as synchronous as you are. The async versions work much like their node builtin counterparts, with the exception of introducing significantly less Stream machinery overhead. ## USAGE It's just streams, you pipe them or read() them or write() to them. ```js const fsm = require('fs-minipass') const readStream = new fsm.ReadStream('file.txt') const writeStream = new fsm.WriteStream('output.txt') writeStream.write('some file header or whatever\n') readStream.pipe(writeStream) ``` ## ReadStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `readSize` The size of reads to do, defaults to 16MB - `size` The size of the file, if known. Prevents zero-byte read() call at the end. - `autoClose` Set to `false` to prevent the file descriptor from being closed when the file is done being read. ## WriteStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `mode` The mode to create the file with. Defaults to `0o666`. - `start` The position in the file to start reading. If not specified, then the file will start writing at position zero, and be truncated by default. - `autoClose` Set to `false` to prevent the file descriptor from being closed when the stream is ended. - `flags` Flags to use when opening the file. Irrelevant if `fd` is passed in, since file won't be opened in that case. Defaults to `'a'` if a `pos` is specified, or `'w'` otherwise. # require-main-filename [![Build Status](https://travis-ci.org/yargs/require-main-filename.png)](https://travis-ci.org/yargs/require-main-filename) [![Coverage Status](https://coveralls.io/repos/yargs/require-main-filename/badge.svg?branch=master)](https://coveralls.io/r/yargs/require-main-filename?branch=master) [![NPM version](https://img.shields.io/npm/v/require-main-filename.svg)](https://www.npmjs.com/package/require-main-filename) `require.main.filename` is great for figuring out the entry point for the current application. 
This can be combined with a module like [pkg-conf](https://www.npmjs.com/package/pkg-conf) to, _as if by magic_, load top-level configuration. Unfortunately, `require.main.filename` sometimes fails when an application is executed with an alternative process manager, e.g., [iisnode](https://github.com/tjanczuk/iisnode). `require-main-filename` is a shim that addresses this problem. ## Usage ```js var main = require('require-main-filename')() // use main as an alternative to require.main.filename. ``` ## License ISC # levn [![Build Status](https://travis-ci.org/gkz/levn.png)](https://travis-ci.org/gkz/levn) <a name="levn" /> __Light ECMAScript (JavaScript) Value Notation__ Levn is a library which allows you to parse a string into a JavaScript value based on an expected type. It is meant for short amounts of human entered data (eg. config files, command line arguments). Levn aims to concisely describe JavaScript values in text, and allow for the extraction and validation of those values. Levn uses [type-check](https://github.com/gkz/type-check) for its type format, and to validate the results. MIT license. Version 0.4.1. __How is this different than JSON?__ levn is meant to be written by humans only, is (due to the previous point) much more concise, can be validated against supplied types, has regex and date literals, and can easily be extended with custom types. On the other hand, it is probably slower and thus less efficient at transporting large amounts of data, which is fine since this is not its purpose. npm install levn For updates on levn, [follow me on twitter](https://twitter.com/gkzahariev). ## Quick Examples ```js var parse = require('levn').parse; parse('Number', '2'); // 2 parse('String', '2'); // '2' parse('String', 'levn'); // 'levn' parse('String', 'a b'); // 'a b' parse('Boolean', 'true'); // true parse('Date', '#2011-11-11#'); // (Date object) parse('Date', '2011-11-11'); // (Date object) parse('RegExp', '/[a-z]/gi'); // /[a-z]/gi parse('RegExp', 're'); // /re/ parse('Int', '2'); // 2 parse('Number | String', 'str'); // 'str' parse('Number | String', '2'); // 2 parse('[Number]', '[1,2,3]'); // [1,2,3] parse('(String, Boolean)', '(hi, false)'); // ['hi', false] parse('{a: String, b: Number}', '{a: str, b: 2}'); // {a: 'str', b: 2} // at the top level, you can ommit surrounding delimiters parse('[Number]', '1,2,3'); // [1,2,3] parse('(String, Boolean)', 'hi, false'); // ['hi', false] parse('{a: String, b: Number}', 'a: str, b: 2'); // {a: 'str', b: 2} // wildcard - auto choose type parse('*', '[hi,(null,[42]),{k: true}]'); // ['hi', [null, [42]], {k: true}] ``` ## Usage `require('levn');` returns an object that exposes three properties. `VERSION` is the current version of the library as a string. `parse` and `parsedTypeParse` are functions. ```js // parse(type, input, options); parse('[Number]', '1,2,3'); // [1, 2, 3] // parsedTypeParse(parsedType, input, options); var parsedType = require('type-check').parseType('[Number]'); parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3] ``` ### parse(type, input, options) `parse` casts the string `input` into a JavaScript value according to the specified `type` in the [type format](https://github.com/gkz/type-check#type-format) (and taking account the optional `options`) and returns the resulting JavaScript value. 
##### arguments
* type - `String` - the type written in the [type format](https://github.com/gkz/type-check#type-format) to check against
* input - `String` - the value written in the [levn format](#levn-format)
* options - `Maybe Object` - an optional parameter specifying additional [options](#options)

##### returns
`*` - the resulting JavaScript value

##### example
```js
parse('[Number]', '1,2,3'); // [1, 2, 3]
```

### parsedTypeParse(parsedType, input, options)

`parsedTypeParse` casts the string `input` into a JavaScript value according to the specified `type` which has already been parsed (taking into account the optional `options`) and returns the resulting JavaScript value. You can parse a type using the [type-check](https://github.com/gkz/type-check) library's `parseType` function.

##### arguments
* type - `Object` - the type in the parsed type format to check against
* input - `String` - the value written in the [levn format](#levn-format)
* options - `Maybe Object` - an optional parameter specifying additional [options](#options)

##### returns
`*` - the resulting JavaScript value

##### example
```js
var parsedType = require('type-check').parseType('[Number]');
parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3]
```

## Levn Format

Levn can use the type information you provide to choose the appropriate value to produce from the input. For the same input, it will choose a different output value depending on the type provided. For example, `parse('Number', '2')` will produce the number `2`, but `parse('String', '2')` will produce the string `"2"`. If you do not provide type information, and simply use `*`, levn will parse the input according to the unambiguous "explicit" mode, which we will now detail - you can also set the `explicit` option to true manually in the [options](#options).

* `"string"`, `'string'` are parsed as a String, eg. `"a msg"` is `"a msg"`
* `#date#` is parsed as a Date, eg. `#2011-11-11#` is `new Date('2011-11-11')`
* `/regexp/flags` is parsed as a RegExp, eg. `/re/gi` is `/re/gi`
* `undefined`, `null`, `NaN`, `true`, and `false` are all their JavaScript equivalents
* `[element1, element2, etc]` is an Array, and the casting procedure is recursively applied to each element. Eg. `[1,2,3]` is `[1,2,3]`.
* `(element1, element2, etc)` is a tuple, and the casting procedure is recursively applied to each element. Eg. `(1, a)` is `(1, a)` (is `[1, 'a']`).
* `{key1: val1, key2: val2, ...}` is an Object, and the casting procedure is recursively applied to each property. Eg. `{a: 1, b: 2}` is `{a: 1, b: 2}`.
* Any text which does not fall under the above, and which does not contain special characters (`[``]``(``)``{``}``:``,`) is a string, eg. `$12- blah` is `"$12- blah"`.

If you do provide type information, you can make your input more concise as the program already has some information about what it expects. Please see the [type format](https://github.com/gkz/type-check#type-format) section of [type-check](https://github.com/gkz/type-check) for more information about how to specify types. There are some rules about what levn can do with the information:

* If a String is expected, and only a String, all characters of the input (including any special ones) will become part of the output. Eg. `[({})]` is `"[({})]"`, and `"hi"` is `'"hi"'`.
* If a Date is expected, the surrounding `#` can be omitted from date literals. Eg. `2011-11-11` is `new Date('2011-11-11')`.
* If a RegExp is expected, no flags need to be specified, and if the regex does not use any of the special characters, the opening and closing `/` can be omitted - this will have the effect of setting the source of the regex to the input. Eg. `regex` is `/regex/`.
* If an Array is expected, and it is the root node (at the top level), the opening `[` and closing `]` can be omitted. Eg. `1,2,3` is `[1,2,3]`.
* If a tuple is expected, and it is the root node (at the top level), the opening `(` and closing `)` can be omitted. Eg. `1, a` is `(1, a)` (is `[1, 'a']`).
* If an Object is expected, and it is the root node (at the top level), the opening `{` and closing `}` can be omitted. Eg. `a: 1, b: 2` is `{a: 1, b: 2}`.

If you list multiple types (eg. `Number | String`), it will first attempt to cast to the first type and then validate - if the validation fails it will move on to the next type and so forth, left to right. You must be careful as some types will succeed with any input, such as String. Thus put String at the end of your list. In non-explicit mode, Date and RegExp will succeed with a large variety of input - also be careful with these and list them near the end if not last in your list.

Whitespace between special characters and elements is inconsequential.

## Options

Options is an object. It is an optional parameter to the `parse` and `parsedTypeParse` functions.

### Explicit

A `Boolean`. By default it is `false`.

__Example:__

```js
parse('RegExp', 're', {explicit: false}); // /re/
parse('RegExp', 're', {explicit: true}); // Error: ... does not type check...
parse('RegExp | String', 're', {explicit: true}); // 're'
```

`explicit` sets whether to be in explicit mode or not. Using `*` automatically activates explicit mode. For more information, read the [levn format](#levn-format) section.

### customTypes

An `Object`. Empty `{}` by default.

__Example:__

```js
var options = {
  customTypes: {
    Even: {
      typeOf: 'Number',
      validate: function (x) { return x % 2 === 0; },
      cast: function (x) { return {type: 'Just', value: parseInt(x)}; }
    }
  }
}
parse('Even', '2', options); // 2
parse('Even', '3', options); // Error: Value: "3" does not type check...
```

__Another Example:__

```js
function Person(name, age){
  this.name = name;
  this.age = age;
}
var options = {
  customTypes: {
    Person: {
      typeOf: 'Object',
      validate: function (x) { return x instanceof Person; },
      cast: function (value, options, typesCast) {
        var name, age;
        if ({}.toString.call(value).slice(8, -1) !== 'Object') {
          return {type: 'Nothing'};
        }
        name = typesCast(value.name, [{type: 'String'}], options);
        age = typesCast(value.age, [{type: 'Number'}], options);
        return {type: 'Just', value: new Person(name, age)};
      }
    }
  }
}
parse('Person', '{name: Laura, age: 25}', options); // Person {name: 'Laura', age: 25}
```

`customTypes` is an object whose keys are the name of the types, and whose values are an object with three properties, `typeOf`, `validate`, and `cast`. For more information about `typeOf` and `validate`, please see the [custom types](https://github.com/gkz/type-check#custom-types) section of type-check.

`cast` is a function which receives three arguments: the value under question, options, and the typesCast function. In `cast`, attempt to cast the value into the specified type. If you are successful, return an object in the format `{type: 'Just', value: CAST-VALUE}`; if you know it won't work, return `{type: 'Nothing'}`. You can use the `typesCast` function to cast any child values. Remember to pass `options` to it.
In your function you can also check for `options.explicit` and act accordingly. ## Technical About `levn` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It uses [type-check](https://github.com/gkz/type-check) to both parse types and validate values. It also uses the [prelude.ls](http://preludels.com/) library. <img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/images/ajv_logo.png"> # Ajv: Another JSON Schema Validator The fastest JSON Schema validator for Node.js and browser. Supports draft-04/06/07. [![Build Status](https://travis-ci.org/ajv-validator/ajv.svg?branch=master)](https://travis-ci.org/ajv-validator/ajv) [![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv) [![npm (beta)](https://img.shields.io/npm/v/ajv/beta)](https://www.npmjs.com/package/ajv/v/7.0.0-beta.0) [![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv) [![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master) [![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv) [![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin) ## Ajv v7 beta is released [Ajv version 7.0.0-beta.0](https://github.com/ajv-validator/ajv/tree/v7-beta) is released with these changes: - to reduce the mistakes in JSON schemas and unexpected validation results, [strict mode](./docs/strict-mode.md) is added - it prohibits ignored or ambiguous JSON Schema elements. - to make code injection from untrusted schemas impossible, [code generation](./docs/codegen.md) is fully re-written to be safe. - to simplify Ajv extensions, the new keyword API that is used by pre-defined keywords is available to user-defined keywords - it is much easier to define any keywords now, especially with subschemas. - schemas are compiled to ES6 code (ES5 code generation is supported with an option). - to improve reliability and maintainability the code is migrated to TypeScript. **Please note**: - the support for JSON-Schema draft-04 is removed - if you have schemas using "id" attributes you have to replace them with "\$id" (or continue using version 6 that will be supported until 02/28/2021). - all formats are separated to ajv-formats package - they have to be explicitely added if you use them. See [release notes](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0) for the details. To install the new version: ```bash npm install ajv@beta ``` See [Getting started with v7](https://github.com/ajv-validator/ajv/tree/v7-beta#usage) for code example. ## Mozilla MOSS grant and OpenJS Foundation [<img src="https://www.poberezkin.com/images/mozilla.png" width="240" height="68">](https://www.mozilla.org/en-US/moss/) &nbsp;&nbsp;&nbsp; [<img src="https://www.poberezkin.com/images/openjs.png" width="220" height="68">](https://openjsf.org/blog/2020/08/14/ajv-joins-openjs-foundation-as-an-incubation-project/) Ajv has been awarded a grant from Mozilla’s [Open Source Support (MOSS) program](https://www.mozilla.org/en-US/moss/) in the “Foundational Technology” track! It will sponsor the development of Ajv support of [JSON Schema version 2019-09](https://tools.ietf.org/html/draft-handrews-json-schema-02) and of [JSON Type Definition](https://tools.ietf.org/html/draft-ucarion-json-type-definition-04). 
Ajv also joined [OpenJS Foundation](https://openjsf.org/) – having this support will help ensure the longevity and stability of Ajv for all its users. This [blog post](https://www.poberezkin.com/posts/2020-08-14-ajv-json-validator-mozilla-open-source-grant-openjs-foundation.html) has more details. I am looking for the long term maintainers of Ajv – working with [ReadySet](https://www.thereadyset.co/), also sponsored by Mozilla, to establish clear guidelines for the role of a "maintainer" and the contribution standards, and to encourage a wider, more inclusive, contribution from the community. ## Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin) Since I asked to support Ajv development 40 people and 6 organizations contributed via GitHub and OpenCollective - this support helped receiving the MOSS grant! Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released. Please sponsor Ajv via: - [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it) - [Ajv Open Collective️](https://opencollective.com/ajv) Thank you. #### Open Collective sponsors <a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a> <a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a> ## Using version 6 [JSON Schema draft-07](http://json-schema.org/latest/json-schema-validation.html) is published. [Ajv version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0) that supports draft-07 is released. It may require either migrating your schemas or updating your code (to continue using draft-04 and v5 schemas, draft-06 schemas will be supported without changes). 
__Please note__: To use Ajv with draft-06 schemas you need to explicitly add the meta-schema to the validator instance: ```javascript ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-06.json')); ``` To use Ajv with draft-04 schemas in addition to explicitly adding meta-schema you also need to use option schemaId: ```javascript var ajv = new Ajv({schemaId: 'id'}); // If you want to use both draft-04 and draft-06/07 schemas: // var ajv = new Ajv({schemaId: 'auto'}); ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-04.json')); ``` ## Contents - [Performance](#performance) - [Features](#features) - [Getting started](#getting-started) - [Frequently Asked Questions](https://github.com/ajv-validator/ajv/blob/master/FAQ.md) - [Using in browser](#using-in-browser) - [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp) - [Command line interface](#command-line-interface) - Validation - [Keywords](#validation-keywords) - [Annotation keywords](#annotation-keywords) - [Formats](#formats) - [Combining schemas with $ref](#ref) - [$data reference](#data-reference) - NEW: [$merge and $patch keywords](#merge-and-patch-keywords) - [Defining custom keywords](#defining-custom-keywords) - [Asynchronous schema compilation](#asynchronous-schema-compilation) - [Asynchronous validation](#asynchronous-validation) - [Security considerations](#security-considerations) - [Security contact](#security-contact) - [Untrusted schemas](#untrusted-schemas) - [Circular references in objects](#circular-references-in-javascript-objects) - [Trusted schemas](#security-risks-of-trusted-schemas) - [ReDoS attack](#redos-attack) - Modifying data during validation - [Filtering data](#filtering-data) - [Assigning defaults](#assigning-defaults) - [Coercing data types](#coercing-data-types) - API - [Methods](#api) - [Options](#options) - [Validation errors](#validation-errors) - [Plugins](#plugins) - [Related packages](#related-packages) - [Some packages using Ajv](#some-packages-using-ajv) - [Tests, Contributing, Changes history](#tests) - [Support, Code of conduct, License](#open-source-software-support) ## Performance Ajv generates code using [doT templates](https://github.com/olado/doT) to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization. 
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=32,4,1&chs=600x416&chxl=-1:|djv|ajv|json-schema-validator-generator|jsen|is-my-json-valid|themis|z-schema|jsck|skeemas|json-schema-library|tv4&chd=t:100,98,72.1,66.8,50.1,15.1,6.1,3.8,1.2,0.7,0.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements full JSON Schema [draft-06/07](http://json-schema.org/) and draft-04 standards: - all validation keywords (see [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md)) - full support of remote refs (remote schemas have to be added with `addSchema` or compiled to be available) - support of circular references between schemas - correct string lengths for strings with unicode pairs (can be turned off) - [formats](#formats) defined by JSON Schema draft-07 standard and custom formats (can be turned off) - [validates schemas against meta-schema](#api-validateschema) - supports [browsers](#using-in-browser) and Node.js 0.10-14.x - [asynchronous loading](#asynchronous-schema-compilation) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](#options) - [error messages with parameters](#validation-errors) describing error reasons to allow creating custom error messages - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [filtering data](#filtering-data) from additional properties - [assigning defaults](#assigning-defaults) to missing properties and items - [coercing data](#coercing-data-types) to the types specified in `type` keywords - [custom keywords](#defining-custom-keywords) - draft-06/07 keywords `const`, `contains`, `propertyNames` and `if/then/else` - draft-06 boolean schemas (`true`/`false` as a schema to always pass/fail). - keywords `switch`, `patternRequired`, `formatMaximum` / `formatMinimum` and `formatExclusiveMaximum` / `formatExclusiveMinimum` from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [$data reference](#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](#asynchronous-validation) of custom formats and keywords ## Install ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://tonicdev.com/npm/ajv The fastest validation call: ```javascript // Node.js require: var Ajv = require('ajv'); // or ESM/TypeScript import import Ajv from 'ajv'; var ajv = new Ajv(); // options can be passed, e.g. {allErrors: true} var validate = ajv.compile(schema); var valid = validate(data); if (!valid) console.log(validate.errors); ``` or with less code ```javascript // ... var valid = ajv.validate(schema, data); if (!valid) console.log(ajv.errors); // ... 
``` or ```javascript // ... var valid = ajv.addSchema(schema, 'mySchema') .validate('mySchema', data); if (!valid) console.log(ajv.errorsText()); // ... ``` See [API](#api) and [Options](#options) for more details. Ajv compiles schemas to functions and caches them in all cases (using schema serialized with [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) or a custom function as a key), so that the next time the same schema is used (not necessarily the same object instance) it won't be compiled again. The best performance is achieved when using compiled functions returned by `compile` or `getSchema` methods (there is no additional function call). __Please note__: every time a validation function or `ajv.validate` are called `errors` property is overwritten. You need to copy `errors` array reference to another variable if you want to use it later (e.g., in the callback). See [Validation errors](#validation-errors) __Note for TypeScript users__: `ajv` provides its own TypeScript declarations out of the box, so you don't need to install the deprecated `@types/ajv` module. ## Using in browser You can require Ajv directly from the code you browserify - in this case Ajv will be a part of your bundle. If you need to use Ajv in several bundles you can create a separate UMD bundle using `npm run bundle` script (thanks to [siddo420](https://github.com/siddo420)). Then you need to load Ajv in the browser: ```html <script src="ajv.min.js"></script> ``` This bundle can be used with different module systems; it creates global `Ajv` if no module system is found. The browser bundle is available on [cdnjs](https://cdnjs.com/libraries/ajv). Ajv is tested with these browsers: [![Sauce Test Status](https://saucelabs.com/browser-matrix/epoberezkin.svg)](https://saucelabs.com/u/epoberezkin) __Please note__: some frameworks, e.g. Dojo, may redefine global require in such way that is not compatible with CommonJS module format. In such case Ajv bundle has to be loaded before the framework and then you can use global Ajv (see issue [#234](https://github.com/ajv-validator/ajv/issues/234)). ### Ajv and Content Security Policies (CSP) If you're using Ajv to compile a schema (the typical use) in a browser document that is loaded with a Content Security Policy (CSP), that policy will require a `script-src` directive that includes the value `'unsafe-eval'`. :warning: NOTE, however, that `unsafe-eval` is NOT recommended in a secure CSP[[1]](https://developer.chrome.com/extensions/contentSecurityPolicy#relaxing-eval), as it has the potential to open the document to cross-site scripting (XSS) attacks. In order to make use of Ajv without easing your CSP, you can [pre-compile a schema using the CLI](https://github.com/ajv-validator/ajv-cli#compile-schemas). This will transpile the schema JSON into a JavaScript file that exports a `validate` function that works simlarly to a schema compiled at runtime. Note that pre-compilation of schemas is performed using [ajv-pack](https://github.com/ajv-validator/ajv-pack) and there are [some limitations to the schema features it can compile](https://github.com/ajv-validator/ajv-pack#limitations). A successfully pre-compiled schema is equivalent to the same schema compiled at runtime. ## Command line interface CLI is available as a separate npm package [ajv-cli](https://github.com/ajv-validator/ajv-cli). 
It supports: - compiling JSON Schemas to test their validity - BETA: generating standalone module exporting a validation function to be used without Ajv (using [ajv-pack](https://github.com/ajv-validator/ajv-pack)) - migrate schemas to draft-07 (using [json-schema-migrate](https://github.com/epoberezkin/json-schema-migrate)) - validating data file(s) against JSON Schema - testing expected validity of data against JSON Schema - referenced schemas - custom meta-schemas - files in JSON, JSON5, YAML, and JavaScript format - all Ajv options - reporting changes in data after validation in [JSON-patch](https://tools.ietf.org/html/rfc6902) format ## Validation keywords Ajv supports all validation keywords from draft-07 of JSON Schema standard: - [type](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#type) - [for numbers](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-numbers) - maximum, minimum, exclusiveMaximum, exclusiveMinimum, multipleOf - [for strings](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-strings) - maxLength, minLength, pattern, format - [for arrays](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-arrays) - maxItems, minItems, uniqueItems, items, additionalItems, [contains](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#contains) - [for objects](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-objects) - maxProperties, minProperties, required, properties, patternProperties, additionalProperties, dependencies, [propertyNames](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#propertynames) - [for all types](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-all-types) - enum, [const](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#const) - [compound keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#compound-keywords) - not, oneOf, anyOf, allOf, [if/then/else](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#ifthenelse) With [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package Ajv also supports validation keywords from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) for JSON Schema standard: - [patternRequired](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#patternrequired-proposed) - like `required` but with patterns that some property should match. - [formatMaximum, formatMinimum, formatExclusiveMaximum, formatExclusiveMinimum](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#formatmaximum--formatminimum-and-exclusiveformatmaximum--exclusiveformatminimum-proposed) - setting limits for date, time, etc. See [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md) for more details. ## Annotation keywords JSON Schema specification defines several annotation keywords that describe schema itself but do not perform any validation. - `title` and `description`: information about the data represented by that schema - `$comment` (NEW in draft-07): information for developers. With option `$comment` Ajv logs or passes the comment string to the user-supplied function. See [Options](#options). - `default`: a default value of the data instance, see [Assigning defaults](#assigning-defaults). - `examples` (NEW in draft-06): an array of data instances. Ajv does not check the validity of these instances against the schema. 
- `readOnly` and `writeOnly` (NEW in draft-07): marks data-instance as read-only or write-only in relation to the source of the data (database, api, etc.). - `contentEncoding`: [RFC 2045](https://tools.ietf.org/html/rfc2045#section-6.1 ), e.g., "base64". - `contentMediaType`: [RFC 2046](https://tools.ietf.org/html/rfc2046), e.g., "image/png". __Please note__: Ajv does not implement validation of the keywords `examples`, `contentEncoding` and `contentMediaType` but it reserves them. If you want to create a plugin that implements some of them, it should remove these keywords from the instance. ## Formats Ajv implements formats defined by JSON Schema specification and several other formats. It is recommended NOT to use "format" keyword implementations with untrusted data, as they use potentially unsafe regular expressions - see [ReDoS attack](#redos-attack). __Please note__: if you need to use "format" keyword to validate untrusted data, you MUST assess their suitability and safety for your validation scenarios. The following formats are implemented for string validation with "format" keyword: - _date_: full-date according to [RFC3339](http://tools.ietf.org/html/rfc3339#section-5.6). - _time_: time with optional time-zone. - _date-time_: date-time from the same source (time-zone is mandatory). `date`, `time` and `date-time` validate ranges in `full` mode and only regexp in `fast` mode (see [options](#options)). - _uri_: full URI. - _uri-reference_: URI reference, including full and relative URIs. - _uri-template_: URI template according to [RFC6570](https://tools.ietf.org/html/rfc6570) - _url_ (deprecated): [URL record](https://url.spec.whatwg.org/#concept-url). - _email_: email address. - _hostname_: host name according to [RFC1034](http://tools.ietf.org/html/rfc1034#section-3.5). - _ipv4_: IP address v4. - _ipv6_: IP address v6. - _regex_: tests whether a string is a valid regular expression by passing it to RegExp constructor. - _uuid_: Universally Unique IDentifier according to [RFC4122](http://tools.ietf.org/html/rfc4122). - _json-pointer_: JSON-pointer according to [RFC6901](https://tools.ietf.org/html/rfc6901). - _relative-json-pointer_: relative JSON-pointer according to [this draft](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00). __Please note__: JSON Schema draft-07 also defines formats `iri`, `iri-reference`, `idn-hostname` and `idn-email` for URLs, hostnames and emails with international characters. Ajv does not implement these formats. If you create Ajv plugin that implements them please make a PR to mention this plugin here. There are two modes of format validation: `fast` and `full`. This mode affects formats `date`, `time`, `date-time`, `uri`, `uri-reference`, and `email`. See [Options](#options) for details. You can add additional formats and replace any of the formats above using [addFormat](#api-addformat) method. The option `unknownFormats` allows changing the default behaviour when an unknown format is encountered. In this case Ajv can either fail schema compilation (default) or ignore it (default in versions before 5.0.0). You also can allow specific format(s) that will be ignored. See [Options](#options) for details. You can find regular expressions used for format validation and the sources that were used in [formats.js](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js). 
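As a short sketch of the `addFormat` method mentioned above (the format name and pattern are illustrative):

```javascript
var ajv = new Ajv();

// A custom format can be defined with a RegExp (functions and objects also work).
ajv.addFormat('hex-color', /^#[0-9a-f]{6}$/i);

ajv.validate({type: 'string', format: 'hex-color'}, '#ff8800'); // true
ajv.validate({type: 'string', format: 'hex-color'}, 'orange');  // false
```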
## <a name="ref"></a>Combining schemas with $ref You can structure your validation logic across multiple schema files and have schemas reference each other using `$ref` keyword. Example: ```javascript var schema = { "$id": "http://example.com/schemas/schema.json", "type": "object", "properties": { "foo": { "$ref": "defs.json#/definitions/int" }, "bar": { "$ref": "defs.json#/definitions/str" } } }; var defsSchema = { "$id": "http://example.com/schemas/defs.json", "definitions": { "int": { "type": "integer" }, "str": { "type": "string" } } }; ``` Now to compile your schema you can either pass all schemas to Ajv instance: ```javascript var ajv = new Ajv({schemas: [schema, defsSchema]}); var validate = ajv.getSchema('http://example.com/schemas/schema.json'); ``` or use `addSchema` method: ```javascript var ajv = new Ajv; var validate = ajv.addSchema(defsSchema) .compile(schema); ``` See [Options](#options) and [addSchema](#api) method. __Please note__: - `$ref` is resolved as the uri-reference using schema $id as the base URI (see the example). - References can be recursive (and mutually recursive) to implement the schemas for different data structures (such as linked lists, trees, graphs, etc.). - You don't have to host your schema files at the URIs that you use as schema $id. These URIs are only used to identify the schemas, and according to JSON Schema specification validators should not expect to be able to download the schemas from these URIs. - The actual location of the schema file in the file system is not used. - You can pass the identifier of the schema as the second parameter of `addSchema` method or as a property name in `schemas` option. This identifier can be used instead of (or in addition to) schema $id. - You cannot have the same $id (or the schema identifier) used for more than one schema - the exception will be thrown. - You can implement dynamic resolution of the referenced schemas using `compileAsync` method. In this way you can store schemas in any system (files, web, database, etc.) and reference them without explicitly adding to Ajv instance. See [Asynchronous schema compilation](#asynchronous-schema-compilation). ## $data reference With `$data` option you can use values from the validated data as the values for the schema keywords. See [proposal](https://github.com/json-schema-org/json-schema-spec/issues/51) for more information about how it works. `$data` reference is supported in the keywords: const, enum, format, maximum/minimum, exclusiveMaximum / exclusiveMinimum, maxLength / minLength, maxItems / minItems, maxProperties / minProperties, formatMaximum / formatMinimum, formatExclusiveMaximum / formatExclusiveMinimum, multipleOf, pattern, required, uniqueItems. The value of "$data" should be a [JSON-pointer](https://tools.ietf.org/html/rfc6901) to the data (the root is always the top level data object, even if the $data reference is inside a referenced subschema) or a [relative JSON-pointer](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00) (it is relative to the current point in data; if the $data reference is inside a referenced subschema it cannot point to the data outside of the root level for this subschema). Examples. 
This schema requires that the value in property `smaller` is less than or equal to the value in the property `larger`: ```javascript var ajv = new Ajv({$data: true}); var schema = { "properties": { "smaller": { "type": "number", "maximum": { "$data": "1/larger" } }, "larger": { "type": "number" } } }; var validData = { smaller: 5, larger: 7 }; ajv.validate(schema, validData); // true ``` This schema requires that the properties have the same format as their field names: ```javascript var schema = { "additionalProperties": { "type": "string", "format": { "$data": "0#" } } }; var validData = { 'date-time': '1963-06-19T08:30:06.283185Z', email: 'joe.bloggs@example.com' } ``` `$data` reference is resolved safely - it won't throw even if some property is undefined. If `$data` resolves to `undefined` the validation succeeds (with the exception of the `const` keyword). If `$data` resolves to an incorrect type (e.g. not "number" for the `maximum` keyword) the validation fails. ## $merge and $patch keywords With the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) you can use the keywords `$merge` and `$patch` that allow extending JSON Schemas with patches using formats [JSON Merge Patch (RFC 7396)](https://tools.ietf.org/html/rfc7396) and [JSON Patch (RFC 6902)](https://tools.ietf.org/html/rfc6902). To add keywords `$merge` and `$patch` to the Ajv instance use this code: ```javascript require('ajv-merge-patch')(ajv); ``` Examples. Using `$merge`: ```json { "$merge": { "source": { "type": "object", "properties": { "p": { "type": "string" } }, "additionalProperties": false }, "with": { "properties": { "q": { "type": "number" } } } } } ``` Using `$patch`: ```json { "$patch": { "source": { "type": "object", "properties": { "p": { "type": "string" } }, "additionalProperties": false }, "with": [ { "op": "add", "path": "/properties/q", "value": { "type": "number" } } ] } } ``` The schemas above are equivalent to this schema: ```json { "type": "object", "properties": { "p": { "type": "string" }, "q": { "type": "number" } }, "additionalProperties": false } ``` The properties `source` and `with` in the keywords `$merge` and `$patch` can use absolute or relative `$ref` to point to other schemas previously added to the Ajv instance or to the fragments of the current schema. See the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) for more information. ## Defining custom keywords The advantages of using custom keywords are: - allow creating validation scenarios that cannot be expressed using JSON Schema - simplify your schemas - help to bring a bigger part of the validation logic to your schemas - make your schemas more expressive, less verbose and closer to your application domain - implement custom data processors that modify your data (`modifying` option MUST be used in keyword definition) and/or create side effects while the data is being validated If a keyword is used only for side-effects and its validation result is pre-defined, use option `valid: true/false` in keyword definition to simplify both generated code (no error handling in case of `valid: true`) and your keyword functions (no need to return any validation result). The concerns you have to be aware of when extending the JSON Schema standard with custom keywords are the portability and understanding of your schemas. You will have to support these custom keywords on other platforms and properly document them so that everybody can understand them in your schemas.
You can define custom keywords with [addKeyword](#api-addkeyword) method. Keywords are defined on the `ajv` instance level - new instances will not have previously defined keywords. Ajv allows defining keywords with: - validation function - compilation function - macro function - inline compilation function that should return code (as string) that will be inlined in the currently compiled schema. Example. `range` and `exclusiveRange` keywords using compiled schema: ```javascript ajv.addKeyword('range', { type: 'number', compile: function (sch, parentSchema) { var min = sch[0]; var max = sch[1]; return parentSchema.exclusiveRange === true ? function (data) { return data > min && data < max; } : function (data) { return data >= min && data <= max; } } }); var schema = { "range": [2, 4], "exclusiveRange": true }; var validate = ajv.compile(schema); console.log(validate(2.01)); // true console.log(validate(3.99)); // true console.log(validate(2)); // false console.log(validate(4)); // false ``` Several custom keywords (typeof, instanceof, range and propertyNames) are defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - they can be used for your schemas and as a starting point for your own custom keywords. See [Defining custom keywords](https://github.com/ajv-validator/ajv/blob/master/CUSTOM.md) for more details. ## Asynchronous schema compilation During asynchronous compilation remote references are loaded using supplied function. See `compileAsync` [method](#api-compileAsync) and `loadSchema` [option](#options). Example: ```javascript var ajv = new Ajv({ loadSchema: loadSchema }); ajv.compileAsync(schema).then(function (validate) { var valid = validate(data); // ... }); function loadSchema(uri) { return request.json(uri).then(function (res) { if (res.statusCode >= 400) throw new Error('Loading error: ' + res.statusCode); return res.body; }); } ``` __Please note__: [Option](#options) `missingRefs` should NOT be set to `"ignore"` or `"fail"` for asynchronous compilation to work. ## Asynchronous validation Example in Node.js REPL: https://tonicdev.com/esp/ajv-asynchronous-validation You can define custom formats and keywords that perform validation asynchronously by accessing database or some other service. You should add `async: true` in the keyword or format definition (see [addFormat](#api-addformat), [addKeyword](#api-addkeyword) and [Defining custom keywords](#defining-custom-keywords)). If your schema uses asynchronous formats/keywords or refers to some schema that contains them it should have `"$async": true` keyword so that Ajv can compile it correctly. If asynchronous format/keyword or reference to asynchronous schema is used in the schema without `$async` keyword Ajv will throw an exception during schema compilation. __Please note__: all asynchronous subschemas that are referenced from the current or other schemas should have `"$async": true` keyword as well, otherwise the schema compilation will fail. Validation function for an asynchronous custom format/keyword should return a promise that resolves with `true` or `false` (or rejects with `new Ajv.ValidationError(errors)` if you want to return custom errors from the keyword function). Ajv compiles asynchronous schemas to [es7 async functions](http://tc39.github.io/ecmascript-asyncawait/) that can optionally be transpiled with [nodent](https://github.com/MatAtBread/nodent). Async functions are supported in Node.js 7+ and all modern browsers. 
You can also supply any other transpiler as a function via `processCode` option. See [Options](#options). The compiled validation function has `$async: true` property (if the schema is asynchronous), so you can differentiate these functions if you are using both synchronous and asynchronous schemas. Validation result will be a promise that resolves with validated data or rejects with an exception `Ajv.ValidationError` that contains the array of validation errors in `errors` property. Example: ```javascript var ajv = new Ajv; // require('ajv-async')(ajv); ajv.addKeyword('idExists', { async: true, type: 'number', validate: checkIdExists }); function checkIdExists(schema, data) { return knex(schema.table) .select('id') .where('id', data) .then(function (rows) { return !!rows.length; // true if record is found }); } var schema = { "$async": true, "properties": { "userId": { "type": "integer", "idExists": { "table": "users" } }, "postId": { "type": "integer", "idExists": { "table": "posts" } } } }; var validate = ajv.compile(schema); validate({ userId: 1, postId: 19 }) .then(function (data) { console.log('Data is valid', data); // { userId: 1, postId: 19 } }) .catch(function (err) { if (!(err instanceof Ajv.ValidationError)) throw err; // data is invalid console.log('Validation errors:', err.errors); }); ``` ### Using transpilers with asynchronous validation functions. [ajv-async](https://github.com/ajv-validator/ajv-async) uses [nodent](https://github.com/MatAtBread/nodent) to transpile async functions. To use another transpiler you should separately install it (or load its bundle in the browser). #### Using nodent ```javascript var ajv = new Ajv; require('ajv-async')(ajv); // in the browser if you want to load ajv-async bundle separately you can: // window.ajvAsync(ajv); var validate = ajv.compile(schema); // transpiled es7 async function validate(data).then(successFunc).catch(errorFunc); ``` #### Using other transpilers ```javascript var ajv = new Ajv({ processCode: transpileFunc }); var validate = ajv.compile(schema); // transpiled es7 async function validate(data).then(successFunc).catch(errorFunc); ``` See [Options](#options). ## Security considerations JSON Schema, if properly used, can replace data sanitisation. It doesn't replace other API security considerations. It also introduces additional security aspects to consider. ##### Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ##### Untrusted schemas Ajv treats JSON schemas as trusted as your application code. This security model is based on the most common use case, when the schemas are static and bundled together with the application. If your schemas are received from untrusted sources (or generated from untrusted data) there are several scenarios you need to prevent: - compiling schemas can cause stack overflow (if they are too deep) - compiling schemas can be slow (e.g. [#557](https://github.com/ajv-validator/ajv/issues/557)) - validating certain data can be slow It is difficult to predict all the scenarios, but at the very least it may help to limit the size of untrusted schemas (e.g. limit JSON string length) and also the maximum schema object depth (that can be high for relatively small JSON strings). You also may want to mitigate slow regular expressions in `pattern` and `patternProperties` keywords. 
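As an example of such mitigations, here is a minimal sketch of pre-checks before compiling an untrusted schema; the limits and the `schemaDepth` helper are illustrative assumptions, not part of Ajv:

```javascript
// Illustrative limits - tune them for your own threat model.
var MAX_SCHEMA_LENGTH = 10000; // limit on the JSON string length
var MAX_SCHEMA_DEPTH = 20;     // limit on object nesting depth

// hypothetical helper: computes the nesting depth of a parsed schema object
function schemaDepth(node) {
  if (node === null || typeof node !== 'object') return 0;
  var max = 0;
  for (var key in node) {
    var d = schemaDepth(node[key]);
    if (d > max) max = d;
  }
  return max + 1;
}

// reject oversized or overly deep schemas before handing them to ajv.compile
function compileUntrusted(ajv, schemaJson) {
  if (schemaJson.length > MAX_SCHEMA_LENGTH) throw new Error('untrusted schema is too large');
  var schema = JSON.parse(schemaJson);
  if (schemaDepth(schema) > MAX_SCHEMA_DEPTH) throw new Error('untrusted schema is too deep');
  return ajv.compile(schema); // may still throw if the schema is invalid
}
```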
Regardless of the measures you take, using untrusted schemas increases security risks. ##### Circular references in JavaScript objects Ajv does not support schemas and validated data that have circular references in objects. See [issue #802](https://github.com/ajv-validator/ajv/issues/802). An attempt to compile such schemas or validate such data would cause stack overflow (or will not complete in case of asynchronous validation). Depending on the parser you use, untrusted data can lead to circular references. ##### Security risks of trusted schemas Some keywords in JSON Schemas can lead to very slow validation for certain data. These keywords include (but may not be limited to): - `pattern` and `format` for large strings - in some cases using `maxLength` can help mitigate it, but certain regular expressions can lead to exponential validation time even with relatively short strings (see [ReDoS attack](#redos-attack)). - `patternProperties` for large property names - use `propertyNames` to mitigate, but some regular expressions can have exponential evaluation time as well. - `uniqueItems` for large non-scalar arrays - use `maxItems` to mitigate __Please note__: The suggestions above to prevent slow validation would only work if you do NOT use `allErrors: true` in production code (using it would continue validation after validation errors). You can validate your JSON schemas against [this meta-schema](https://github.com/ajv-validator/ajv/blob/master/lib/refs/json-schema-secure.json) to check that these recommendations are followed: ```javascript const isSchemaSecure = ajv.compile(require('ajv/lib/refs/json-schema-secure.json')); const schema1 = {format: 'email'}; isSchemaSecure(schema1); // false const schema2 = {format: 'email', maxLength: MAX_LENGTH}; isSchemaSecure(schema2); // true ``` __Please note__: following all these recommendations is not a guarantee that validation of untrusted data is safe - it can still lead to some undesirable results. ##### Content Security Policies (CSP) See [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp) ## ReDoS attack Certain regular expressions can lead to exponential evaluation time even with relatively short strings. Please assess the regular expressions you use in the schemas for their vulnerability to this attack - see [safe-regex](https://github.com/substack/safe-regex), for example. __Please note__: some formats that Ajv implements use [regular expressions](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js) that can be vulnerable to a ReDoS attack, so if you use Ajv to validate data from untrusted sources __it is strongly recommended__ to consider the following: - making an assessment of "format" implementations in Ajv. - using `format: 'fast'` option that simplifies some of the regular expressions (although it does not guarantee that they are safe). - replacing format implementations provided by Ajv with your own implementations of "format" keyword that either use different regular expressions or another approach to format validation. Please see [addFormat](#api-addformat) method. - disabling format validation by ignoring "format" keyword with option `format: false` Whatever mitigation you choose, please treat all formats provided by Ajv as potentially unsafe and make your own assessment of their suitability for your validation scenarios. ## Filtering data With [option `removeAdditional`](#options) (added by [andyscott](https://github.com/andyscott)) you can filter data during the validation.
This option modifies original data. Example: ```javascript var ajv = new Ajv({ removeAdditional: true }); var schema = { "additionalProperties": false, "properties": { "foo": { "type": "number" }, "bar": { "additionalProperties": { "type": "number" }, "properties": { "baz": { "type": "string" } } } } } var data = { "foo": 0, "additional1": 1, // will be removed; `additionalProperties` == false "bar": { "baz": "abc", "additional2": 2 // will NOT be removed; `additionalProperties` != false }, } var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": 0, "bar": { "baz": "abc", "additional2": 2 } } ``` If the `removeAdditional` option in the example above were `"all"` then both `additional1` and `additional2` properties would have been removed. If the option were `"failing"` then property `additional1` would have been removed regardless of its value and property `additional2` would have been removed only if its value were failing the schema in the inner `additionalProperties` (so in the example above it would have stayed because it passes the schema, but any non-number would have been removed). __Please note__: If you use `removeAdditional` option with `additionalProperties` keyword inside `anyOf`/`oneOf` keywords your validation can fail with this schema, for example: ```json { "type": "object", "oneOf": [ { "properties": { "foo": { "type": "string" } }, "required": [ "foo" ], "additionalProperties": false }, { "properties": { "bar": { "type": "integer" } }, "required": [ "bar" ], "additionalProperties": false } ] } ``` The intention of the schema above is to allow objects with either the string property "foo" or the integer property "bar", but not with both and not with any other properties. With the option `removeAdditional: true` the validation will pass for the object `{ "foo": "abc"}` but will fail for the object `{"bar": 1}`. It happens because while the first subschema in `oneOf` is validated, the property `bar` is removed because it is an additional property according to the standard (because it is not included in `properties` keyword in the same schema). While this behaviour is unexpected (issues [#129](https://github.com/ajv-validator/ajv/issues/129), [#134](https://github.com/ajv-validator/ajv/issues/134)), it is correct. To have the expected behaviour (both objects are allowed and additional properties are removed) the schema has to be refactored in this way: ```json { "type": "object", "properties": { "foo": { "type": "string" }, "bar": { "type": "integer" } }, "additionalProperties": false, "oneOf": [ { "required": [ "foo" ] }, { "required": [ "bar" ] } ] } ``` The schema above is also more efficient - it will compile into a faster function. ## Assigning defaults With [option `useDefaults`](#options) Ajv will assign values from `default` keyword in the schemas of `properties` and `items` (when it is the array of schemas) to the missing properties and items. With the option value `"empty"` properties and items equal to `null` or `""` (empty string) will be considered missing and assigned defaults. This option modifies original data. __Please note__: the default value is inserted in the generated validation code as a literal, so the value inserted in the data will be the deep clone of the default in the schema.
Example 1 (`default` in `properties`): ```javascript var ajv = new Ajv({ useDefaults: true }); var schema = { "type": "object", "properties": { "foo": { "type": "number" }, "bar": { "type": "string", "default": "baz" } }, "required": [ "foo", "bar" ] }; var data = { "foo": 1 }; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": 1, "bar": "baz" } ``` Example 2 (`default` in `items`): ```javascript var schema = { "type": "array", "items": [ { "type": "number" }, { "type": "string", "default": "foo" } ] } var data = [ 1 ]; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // [ 1, "foo" ] ``` `default` keywords in other cases are ignored: - not in `properties` or `items` subschemas - in schemas inside `anyOf`, `oneOf` and `not` (see [#42](https://github.com/ajv-validator/ajv/issues/42)) - in `if` subschema of `switch` keyword - in schemas generated by custom macro keywords The [`strictDefaults` option](#options) customizes Ajv's behavior for the defaults that Ajv ignores (`true` raises an error, and `"log"` outputs a warning). ## Coercing data types When you are validating user inputs, all your data properties are usually strings. The option `coerceTypes` allows you to have your data types coerced to the types specified in your schema `type` keywords, both to pass the validation and to use the correctly typed data afterwards. This option modifies original data. __Please note__: if you pass a scalar value to the validating function its type will be coerced and it will pass the validation, but the value of the variable you pass won't be updated because scalars are passed by value. Example 1: ```javascript var ajv = new Ajv({ coerceTypes: true }); var schema = { "type": "object", "properties": { "foo": { "type": "number" }, "bar": { "type": "boolean" } }, "required": [ "foo", "bar" ] }; var data = { "foo": "1", "bar": "false" }; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": 1, "bar": false } ``` Example 2 (array coercions): ```javascript var ajv = new Ajv({ coerceTypes: 'array' }); var schema = { "properties": { "foo": { "type": "array", "items": { "type": "number" } }, "bar": { "type": "boolean" } } }; var data = { "foo": "1", "bar": ["false"] }; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": [1], "bar": false } ``` The coercion rules, as you can see from the example, are different from JavaScript both to validate user input as expected and to have the coercion reversible (to correctly validate cases where different types are defined in subschemas of "anyOf" and other compound keywords). See [Coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md) for details. ## API ##### new Ajv(Object options) -&gt; Object Create Ajv instance. ##### .compile(Object schema) -&gt; Function&lt;Object data&gt; Generate validating function and cache the compiled schema for future use. Validating function returns a boolean value. This function has properties `errors` and `schema`. Errors encountered during the last validation are assigned to `errors` property (it is assigned `null` if there were no errors). `schema` property contains the reference to the original schema. The schema passed to this method will be validated against meta-schema unless `validateSchema` option is false. If schema is invalid, an error will be thrown. See [options](#options).
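For example, a short sketch of the compile-and-validate workflow described above (the schema and data are illustrative):

```javascript
var Ajv = require('ajv');
var ajv = new Ajv({ allErrors: true });

// compile once, reuse the returned validating function
var validate = ajv.compile({
  type: 'object',
  properties: { age: { type: 'integer', minimum: 0 } },
  required: ['age']
});

if (!validate({ age: -1 })) {
  // errors from the last call are assigned to the validating function
  console.log(validate.errors);
  console.log(ajv.errorsText(validate.errors)); // e.g. "data.age should be >= 0"
}
```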
##### <a name="api-compileAsync"></a>.compileAsync(Object schema [, Boolean meta] [, Function callback]) -&gt; Promise Asynchronous version of `compile` method that loads missing remote schemas using asynchronous function in `options.loadSchema`. This function returns a Promise that resolves to a validation function. An optional callback passed to `compileAsync` will be called with 2 parameters: error (or null) and validating function. The returned promise will reject (and the callback will be called with an error) when: - missing schema can't be loaded (`loadSchema` returns a Promise that rejects). - a schema containing a missing reference is loaded, but the reference cannot be resolved. - schema (or some loaded/referenced schema) is invalid. The function compiles schema and loads the first missing schema (or meta-schema) until all missing schemas are loaded. You can asynchronously compile meta-schema by passing `true` as the second parameter. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### .validate(Object schema|String key|String ref, data) -&gt; Boolean Validate data using passed schema (it will be compiled and cached). Instead of the schema you can use the key that was previously passed to `addSchema`, the schema id if it was present in the schema or any previously resolved reference. Validation errors will be available in the `errors` property of Ajv instance (`null` if there were no errors). __Please note__: every time this method is called the errors are overwritten so you need to copy them to another variable if you want to use them later. If the schema is asynchronous (has `$async` keyword on the top level) this method returns a Promise. See [Asynchronous validation](#asynchronous-validation). ##### .addSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Add schema(s) to validator instance. This method does not compile schemas (but it still validates them). Because of that dependencies can be added in any order and circular dependencies are supported. It also prevents unnecessary compilation of schemas that are containers for other schemas but not used as a whole. Array of schemas can be passed (schemas should have ids), the second parameter will be ignored. Key can be passed that can be used to reference the schema and will be used as the schema id if there is no id inside the schema. If the key is not passed, the schema id will be used as the key. Once the schema is added, it (and all the references inside it) can be referenced in other schemas and used to validate data. Although `addSchema` does not compile schemas, explicit compilation is not required - the schema will be compiled when it is used first time. By default the schema is validated against meta-schema before it is added, and if the schema does not pass validation the exception is thrown. This behaviour is controlled by `validateSchema` option. __Please note__: Ajv uses the [method chaining syntax](https://en.wikipedia.org/wiki/Method_chaining) for all methods with the prefix `add*` and `remove*`. This allows you to do nice things like the following. ```javascript var validate = new Ajv().addSchema(schema).addFormat(name, regex).getSchema(uri); ``` ##### .addMetaSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Adds meta schema(s) that can be used to validate other schemas. 
That function should be used instead of `addSchema` because there may be instance options that would compile a meta schema incorrectly (at the moment it is `removeAdditional` option). There is no need to explicitly add draft-07 meta schema (http://json-schema.org/draft-07/schema) - it is added by default, unless option `meta` is set to `false`. You only need to use it if you have a changed meta-schema that you want to use to validate your schemas. See `validateSchema`. ##### <a name="api-validateschema"></a>.validateSchema(Object schema) -&gt; Boolean Validates schema. This method should be used to validate schemas rather than `validate` due to the inconsistency of `uri` format in JSON Schema standard. By default this method is called automatically when the schema is added, so you rarely need to use it directly. If schema doesn't have `$schema` property, it is validated against draft 6 meta-schema (option `meta` should not be false). If schema has `$schema` property, then the schema with this id (that should be previously added) is used to validate passed schema. Errors will be available at `ajv.errors`. ##### .getSchema(String key) -&gt; Function&lt;Object data&gt; Retrieve compiled schema previously added with `addSchema` by the key passed to `addSchema` or by its full reference (id). The returned validating function has `schema` property with the reference to the original schema. ##### .removeSchema([Object schema|String key|String ref|RegExp pattern]) -&gt; Ajv Remove added/cached schema. Even if schema is referenced by other schemas it can be safely removed as dependent schemas have local references. Schema can be removed using: - key passed to `addSchema` - its full reference (id) - RegExp that should match schema id or key (meta-schemas won't be removed) - actual schema object that will be stable-stringified to remove schema from cache If no parameter is passed all schemas but meta-schemas will be removed and the cache will be cleared. ##### <a name="api-addformat"></a>.addFormat(String name, String|RegExp|Function|Object format) -&gt; Ajv Add custom format to validate strings or numbers. It can also be used to replace pre-defined formats for Ajv instance. Strings are converted to RegExp. Function should return validation result as `true` or `false`. If object is passed it should have properties `validate`, `compare`, `async` and `type`: - _validate_: a string, RegExp or a function as described above. - _compare_: an optional comparison function that accepts two strings and compares them according to the format meaning. This function is used with keywords `formatMaximum`/`formatMinimum` (defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package). It should return `1` if the first value is bigger than the second value, `-1` if it is smaller and `0` if it is equal. - _async_: an optional `true` value if `validate` is an asynchronous function; in this case it should return a promise that resolves with a value `true` or `false`. - _type_: an optional type of data that the format applies to. It can be `"string"` (default) or `"number"` (see https://github.com/ajv-validator/ajv/issues/291#issuecomment-259923858). If the type of data is different, the validation will pass. Custom formats can also be added via `formats` option. ##### <a name="api-addkeyword"></a>.addKeyword(String keyword, Object definition) -&gt; Ajv Add custom validation keyword to Ajv instance.
Keyword should be different from all standard JSON Schema keywords and different from previously defined keywords. There is no way to redefine keywords or to remove keyword definition from the instance. Keyword must start with a letter, `_` or `$`, and may continue with letters, numbers, `_`, `$`, or `-`. It is recommended to use an application-specific prefix for keywords to avoid current and future name collisions. Example Keywords: - `"xyz-example"`: valid, and uses prefix for the xyz project to avoid name collisions. - `"example"`: valid, but not recommended as it could collide with future versions of JSON Schema etc. - `"3-example"`: invalid as numbers are not allowed to be the first character in a keyword Keyword definition is an object with the following properties: - _type_: optional string or array of strings with data type(s) that the keyword applies to. If not present, the keyword will apply to all types. - _validate_: validating function - _compile_: compiling function - _macro_: macro function - _inline_: compiling function that returns code (as string) - _schema_: an optional `false` value used with "validate" keyword to not pass schema - _metaSchema_: an optional meta-schema for keyword schema - _dependencies_: an optional list of properties that must be present in the parent schema - it will be checked during schema compilation - _modifying_: `true` MUST be passed if keyword modifies data - _statements_: `true` can be passed in case inline keyword generates statements (as opposed to expression) - _valid_: pass `true`/`false` to pre-define validation result, the result returned from validation function will be ignored. This option cannot be used with macro keywords. - _$data_: an optional `true` value to support [$data reference](#data-reference) as the value of custom keyword. The reference will be resolved at validation time. If the keyword has meta-schema it would be extended to allow $data and it will be used to validate the resolved value. Supporting $data reference requires that keyword has validating function (as the only option or in addition to compile, macro or inline function). - _async_: an optional `true` value if the validation function is asynchronous (whether it is compiled or passed in _validate_ property); in this case it should return a promise that resolves with a value `true` or `false`. This option is ignored in case of "macro" and "inline" keywords. - _errors_: an optional boolean or string `"full"` indicating whether keyword returns errors. If this property is not set Ajv will determine if the errors were set in case of failed validation. _compile_, _macro_ and _inline_ are mutually exclusive, only one should be used at a time. _validate_ can be used separately or in addition to them to support $data reference. __Please note__: If the keyword is validating data type that is different from the type(s) in its definition, the validation function will not be called (and expanded macro will not be used), so there is no need to check for data type inside validation function or inside schema returned by macro function (unless you want to enforce a specific type and for some reason do not want to use a separate `type` keyword for that). In the same way as standard keywords work, if the keyword does not apply to the data type being validated, the validation of this keyword will succeed. See [Defining custom keywords](#defining-custom-keywords) for more details. 
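To tie the definition properties above together, here is a minimal sketch of a macro keyword registered with `addKeyword`; the `xyz-evenRange` name and its expansion into standard keywords are illustrative assumptions, not a built-in keyword:

```javascript
var Ajv = require('ajv');
var ajv = new Ajv();

ajv.addKeyword('xyz-evenRange', {
  type: 'number',
  // metaSchema validates the keyword value itself: a [min, max] pair of numbers
  metaSchema: { type: 'array', items: { type: 'number' }, minItems: 2, maxItems: 2 },
  macro: function (schema) {
    // the macro expands into standard keywords that Ajv validates in place of the custom one
    return { minimum: schema[0], maximum: schema[1], multipleOf: 2 };
  }
});

var validate = ajv.compile({ 'xyz-evenRange': [2, 10] });
console.log(validate(4));  // true
console.log(validate(5));  // false - not a multiple of 2
console.log(validate(12)); // false - above the maximum
```

A macro keyword like this keeps the schema short while reusing Ajv's standard keyword implementations and error reporting.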
##### .getKeyword(String keyword) -&gt; Object|Boolean Returns custom keyword definition, `true` for pre-defined keywords and `false` if the keyword is unknown. ##### .removeKeyword(String keyword) -&gt; Ajv Removes custom or pre-defined keyword so you can redefine them. While this method can be used to extend pre-defined keywords, it can also be used to completely change their meaning - it may lead to unexpected results. __Please note__: schemas compiled before the keyword is removed will continue to work without changes. To recompile schemas use `removeSchema` method and compile them again. ##### .errorsText([Array&lt;Object&gt; errors [, Object options]]) -&gt; String Returns the text with all errors in a String. Options can have properties `separator` (string used to separate errors, ", " by default) and `dataVar` (the variable name that dataPaths are prefixed with, "data" by default). ## Options Defaults: ```javascript { // validation and reporting options: $data: false, allErrors: false, verbose: false, $comment: false, // NEW in Ajv version 6.0 jsonPointers: false, uniqueItems: true, unicode: true, nullable: false, format: 'fast', formats: {}, unknownFormats: true, schemas: {}, logger: undefined, // referenced schema options: schemaId: '$id', missingRefs: true, extendRefs: 'ignore', // recommended 'fail' loadSchema: undefined, // function(uri: string): Promise {} // options to modify validated data: removeAdditional: false, useDefaults: false, coerceTypes: false, // strict mode options strictDefaults: false, strictKeywords: false, strictNumbers: false, // asynchronous validation options: transpile: undefined, // requires ajv-async package // advanced options: meta: true, validateSchema: true, addUsedSchema: true, inlineRefs: true, passContext: false, loopRequired: Infinity, ownProperties: false, multipleOfPrecision: false, errorDataPath: 'object', // deprecated messages: true, sourceCode: false, processCode: undefined, // function (str: string, schema: object): string {} cache: new Cache, serialize: undefined } ``` ##### Validation and reporting options - _$data_: support [$data references](#data-reference). Draft 6 meta-schema that is added by default will be extended to allow them. If you want to use another meta-schema you need to use $dataMetaSchema method to add support for $data reference. See [API](#api). - _allErrors_: check all rules collecting all errors. Default is to return after the first error. - _verbose_: include the reference to the part of the schema (`schema` and `parentSchema`) and validated data in errors (false by default). - _$comment_ (NEW in Ajv version 6.0): log or pass the value of `$comment` keyword to a function. Option values: - `false` (default): ignore $comment keyword. - `true`: log the keyword value to console. - function: pass the keyword value, its schema path and root schema to the specified function - _jsonPointers_: set `dataPath` property of errors using [JSON Pointers](https://tools.ietf.org/html/rfc6901) instead of JavaScript property access notation. - _uniqueItems_: validate `uniqueItems` keyword (true by default). - _unicode_: calculate correct length of strings with unicode pairs (true by default). Pass `false` to use `.length` of strings that is faster, but gives "incorrect" lengths of strings with unicode pairs - each unicode pair is counted as two characters. - _nullable_: support keyword "nullable" from [Open API 3 specification](https://swagger.io/docs/specification/data-models/data-types/). - _format_: formats validation mode. 
Option values: - `"fast"` (default) - simplified and fast validation (see [Formats](#formats) for details of which formats are available and affected by this option). - `"full"` - more restrictive and slow validation. E.g., 25:00:00 and 2015/14/33 will be invalid time and date in 'full' mode but it will be valid in 'fast' mode. - `false` - ignore all format keywords. - _formats_: an object with custom formats. Keys and values will be passed to `addFormat` method. - _keywords_: an object with custom keywords. Keys and values will be passed to `addKeyword` method. - _unknownFormats_: handling of unknown formats. Option values: - `true` (default) - if an unknown format is encountered the exception is thrown during schema compilation. If `format` keyword value is [$data reference](#data-reference) and it is unknown the validation will fail. - `[String]` - an array of unknown format names that will be ignored. This option can be used to allow usage of third party schemas with format(s) for which you don't have definitions, but still fail if another unknown format is used. If `format` keyword value is [$data reference](#data-reference) and it is not in this array the validation will fail. - `"ignore"` - to log warning during schema compilation and always pass validation (the default behaviour in versions before 5.0.0). This option is not recommended, as it allows to mistype format name and it won't be validated without any error message. This behaviour is required by JSON Schema specification. - _schemas_: an array or object of schemas that will be added to the instance. In case you pass the array the schemas must have IDs in them. When the object is passed the method `addSchema(value, key)` will be called for each schema in this object. - _logger_: sets the logging method. Default is the global `console` object that should have methods `log`, `warn` and `error`. See [Error logging](#error-logging). Option values: - custom logger - it should have methods `log`, `warn` and `error`. If any of these methods is missing an exception will be thrown. - `false` - logging is disabled. ##### Referenced schema options - _schemaId_: this option defines which keywords are used as schema URI. Option value: - `"$id"` (default) - only use `$id` keyword as schema URI (as specified in JSON Schema draft-06/07), ignore `id` keyword (if it is present a warning will be logged). - `"id"` - only use `id` keyword as schema URI (as specified in JSON Schema draft-04), ignore `$id` keyword (if it is present a warning will be logged). - `"auto"` - use both `$id` and `id` keywords as schema URI. If both are present (in the same schema object) and different the exception will be thrown during schema compilation. - _missingRefs_: handling of missing referenced schemas. Option values: - `true` (default) - if the reference cannot be resolved during compilation the exception is thrown. The thrown error has properties `missingRef` (with hash fragment) and `missingSchema` (without it). Both properties are resolved relative to the current base id (usually schema id, unless it was substituted). - `"ignore"` - to log error during compilation and always pass validation. - `"fail"` - to log error and successfully compile schema but fail validation if this rule is checked. - _extendRefs_: validation of other keywords when `$ref` is present in the schema. 
Option values: - `"ignore"` (default) - when `$ref` is used other keywords are ignored (as per [JSON Reference](https://tools.ietf.org/html/draft-pbryan-zyp-json-ref-03#section-3) standard). A warning will be logged during the schema compilation. - `"fail"` (recommended) - if other validation keywords are used together with `$ref` the exception will be thrown when the schema is compiled. This option is recommended to make sure schema has no keywords that are ignored, which can be confusing. - `true` - validate all keywords in the schemas with `$ref` (the default behaviour in versions before 5.0.0). - _loadSchema_: asynchronous function that will be used to load remote schemas when `compileAsync` [method](#api-compileAsync) is used and some reference is missing (option `missingRefs` should NOT be 'fail' or 'ignore'). This function should accept remote schema uri as a parameter and return a Promise that resolves to a schema. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### Options to modify validated data - _removeAdditional_: remove additional properties - see example in [Filtering data](#filtering-data). This option is not used if schema is added with `addMetaSchema` method. Option values: - `false` (default) - not to remove additional properties - `"all"` - all additional properties are removed, regardless of `additionalProperties` keyword in schema (and no validation is made for them). - `true` - only additional properties with `additionalProperties` keyword equal to `false` are removed. - `"failing"` - additional properties that fail schema validation will be removed (where `additionalProperties` keyword is `false` or schema). - _useDefaults_: replace missing or undefined properties and items with the values from corresponding `default` keywords. Default behaviour is to ignore `default` keywords. This option is not used if schema is added with `addMetaSchema` method. See examples in [Assigning defaults](#assigning-defaults). Option values: - `false` (default) - do not use defaults - `true` - insert defaults by value (object literal is used). - `"empty"` - in addition to missing or undefined, use defaults for properties and items that are equal to `null` or `""` (an empty string). - `"shared"` (deprecated) - insert defaults by reference. If the default is an object, it will be shared by all instances of validated data. If you modify the inserted default in the validated data, it will be modified in the schema as well. - _coerceTypes_: change data type of data to match `type` keyword. See the example in [Coercing data types](#coercing-data-types) and [coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md). Option values: - `false` (default) - no type coercion. - `true` - coerce scalar data types. - `"array"` - in addition to coercions between scalar types, coerce scalar data to an array with one element and vice versa (as required by the schema). ##### Strict mode options - _strictDefaults_: report ignored `default` keywords in schemas. Option values: - `false` (default) - ignored defaults are not reported - `true` - if an ignored default is present, throw an error - `"log"` - if an ignored default is present, log warning - _strictKeywords_: report unknown keywords in schemas. 
Option values: - `false` (default) - unknown keywords are not reported - `true` - if an unknown keyword is present, throw an error - `"log"` - if an unknown keyword is present, log warning - _strictNumbers_: validate numbers strictly, failing validation for NaN and Infinity. Option values: - `false` (default) - NaN or Infinity will pass validation for numeric types - `true` - NaN or Infinity will not pass validation for numeric types ##### Asynchronous validation options - _transpile_: Requires [ajv-async](https://github.com/ajv-validator/ajv-async) package. It determines whether Ajv transpiles compiled asynchronous validation function. Option values: - `undefined` (default) - transpile with [nodent](https://github.com/MatAtBread/nodent) if async functions are not supported. - `true` - always transpile with nodent. - `false` - do not transpile; if async functions are not supported an exception will be thrown. ##### Advanced options - _meta_: add [meta-schema](http://json-schema.org/documentation.html) so it can be used by other schemas (true by default). If an object is passed, it will be used as the default meta-schema for schemas that have no `$schema` keyword. This default meta-schema MUST have `$schema` keyword. - _validateSchema_: validate added/compiled schemas against meta-schema (true by default). `$schema` property in the schema can be http://json-schema.org/draft-07/schema or absent (draft-07 meta-schema will be used) or can be a reference to the schema previously added with `addMetaSchema` method. Option values: - `true` (default) - if the validation fails, throw the exception. - `"log"` - if the validation fails, log error. - `false` - skip schema validation. - _addUsedSchema_: by default methods `compile` and `validate` add schemas to the instance if they have `$id` (or `id`) property that doesn't start with "#". If `$id` is present and it is not unique the exception will be thrown. Set this option to `false` to skip adding schemas to the instance and the `$id` uniqueness check when these methods are used. This option does not affect `addSchema` method. - _inlineRefs_: Affects compilation of referenced schemas. Option values: - `true` (default) - the referenced schemas that don't have refs in them are inlined, regardless of their size - that substantially improves performance at the cost of the bigger size of compiled schema functions. - `false` - to not inline referenced schemas (they will be compiled as separate functions). - integer number - to limit the maximum number of keywords of the schema that will be inlined. - _passContext_: pass validation context to custom keyword functions. If this option is `true` and you pass some context to the compiled validation function with `validate.call(context, data)`, the `context` will be available as `this` in your custom keywords. By default `this` is Ajv instance. - _loopRequired_: by default `required` keyword is compiled into a single expression (or a sequence of statements in `allErrors` mode). In case of a very large number of properties in this keyword it may result in a very big validation function. Pass integer to set the number of properties above which `required` keyword will be validated in a loop - smaller validation function size but also worse performance. - _ownProperties_: by default Ajv iterates over all enumerable object properties; when this option is `true` only own enumerable object properties (i.e. found directly on the object rather than on its prototype) are iterated. Contributed by @mbroadst. 
- _multipleOfPrecision_: by default `multipleOf` keyword is validated by comparing the result of division with parseInt() of that result. It works for dividers that are bigger than 1. For small dividers such as 0.01 the result of the division is usually not integer (even when it should be integer, see issue [#84](https://github.com/ajv-validator/ajv/issues/84)). If you need to use fractional dividers set this option to some positive integer N to have `multipleOf` validated using this formula: `Math.abs(Math.round(division) - division) < 1e-N` (it is slower but allows for float arithmetics deviations). - _errorDataPath_ (deprecated): set `dataPath` to point to 'object' (default) or to 'property' when validating keywords `required`, `additionalProperties` and `dependencies`. - _messages_: Include human-readable messages in errors. `true` by default. `false` can be passed when custom messages are used (e.g. with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n)). - _sourceCode_: add `sourceCode` property to validating function (for debugging; this code can be different from the result of toString call). - _processCode_: an optional function to process generated code before it is passed to Function constructor. It can be used to either beautify (the validating function is generated without line-breaks) or to transpile code. Starting from version 5.0.0 this option replaced options: - `beautify` that formatted the generated function using [js-beautify](https://github.com/beautify-web/js-beautify). If you want to beautify the generated code pass a function calling `require('js-beautify').js_beautify` as `processCode: code => js_beautify(code)`. - `transpile` that transpiled asynchronous validation function. You can still use `transpile` option with [ajv-async](https://github.com/ajv-validator/ajv-async) package. See [Asynchronous validation](#asynchronous-validation) for more information. - _cache_: an optional instance of cache to store compiled schemas using stable-stringified schema as a key. For example, set-associative cache [sacjs](https://github.com/epoberezkin/sacjs) can be used. If not passed then a simple hash is used which is good enough for the common use case (a limited number of statically defined schemas). Cache should have methods `put(key, value)`, `get(key)`, `del(key)` and `clear()`. - _serialize_: an optional function to serialize schema to cache key. Pass `false` to use schema itself as a key (e.g., if WeakMap used as a cache). By default [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) is used. ## Validation errors In case of validation failure, Ajv assigns the array of errors to `errors` property of validation function (or to `errors` property of Ajv instance when `validate` or `validateSchema` methods were called). In case of [asynchronous validation](#asynchronous-validation), the returned promise is rejected with exception `Ajv.ValidationError` that has `errors` property. ### Error objects Each error is an object with the following properties: - _keyword_: validation keyword. - _dataPath_: the path to the part of the data that was validated. By default `dataPath` uses JavaScript property access notation (e.g., `".prop[1].subProp"`). When the option `jsonPointers` is true (see [Options](#options)) `dataPath` will be set using JSON pointer standard (e.g., `"/prop/1/subProp"`). - _schemaPath_: the path (JSON-pointer as a URI fragment) to the schema of the keyword that failed validation. 
- _params_: the object with the additional information about the error that can be used to create custom error messages (e.g., using [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package). See below for parameters set by all keywords. - _message_: the standard error message (can be excluded with option `messages` set to false). - _schema_: the schema of the keyword (added with `verbose` option). - _parentSchema_: the schema containing the keyword (added with `verbose` option) - _data_: the data validated by the keyword (added with `verbose` option). __Please note__: `propertyNames` keyword schema validation errors have an additional property `propertyName`, `dataPath` points to the object. After schema validation for each property name, if it is invalid an additional error is added with the property `keyword` equal to `"propertyNames"`. ### Error parameters Properties of `params` object in errors depend on the keyword that failed validation. - `maxItems`, `minItems`, `maxLength`, `minLength`, `maxProperties`, `minProperties` - property `limit` (number, the schema of the keyword). - `additionalItems` - property `limit` (the maximum number of allowed items in case when `items` keyword is an array of schemas and `additionalItems` is false). - `additionalProperties` - property `additionalProperty` (the property not used in `properties` and `patternProperties` keywords). - `dependencies` - properties: - `property` (dependent property), - `missingProperty` (required missing dependency - only the first one is reported currently) - `deps` (required dependencies, comma separated list as a string), - `depsCount` (the number of required dependencies). - `format` - property `format` (the schema of the keyword). - `maximum`, `minimum` - properties: - `limit` (number, the schema of the keyword), - `exclusive` (boolean, the schema of `exclusiveMaximum` or `exclusiveMinimum`), - `comparison` (string, comparison operation to compare the data to the limit, with the data on the left and the limit on the right; can be "<", "<=", ">", ">=") - `multipleOf` - property `multipleOf` (the schema of the keyword) - `pattern` - property `pattern` (the schema of the keyword) - `required` - property `missingProperty` (required property that is missing). - `propertyNames` - property `propertyName` (an invalid property name). - `patternRequired` (in ajv-keywords) - property `missingPattern` (required pattern that did not match any property). - `type` - property `type` (required type(s), a string, can be a comma-separated list) - `uniqueItems` - properties `i` and `j` (indices of duplicate items). - `const` - property `allowedValue` pointing to the value (the schema of the keyword). - `enum` - property `allowedValues` pointing to the array of values (the schema of the keyword). - `$ref` - property `ref` with the referenced schema URI. - `oneOf` - property `passingSchemas` (array of indices of passing schemas, null if no schema passes). - custom keywords (in case keyword definition doesn't create errors) - property `keyword` (the keyword name). ### Error logging Using the `logger` option when initializing Ajv will allow you to define custom logging. Here you can build upon the existing logging. The use of other logging packages is supported as long as the package or its associated wrapper exposes the required methods. If any of the required methods are missing an exception will be thrown.
- **Required Methods**: `log`, `warn`, `error` ```javascript var otherLogger = new OtherLogger(); var ajv = new Ajv({ logger: { log: console.log.bind(console), warn: function warn() { otherLogger.logWarn.apply(otherLogger, arguments); }, error: function error() { otherLogger.logError.apply(otherLogger, arguments); console.error.apply(console, arguments); } } }); ``` ## Plugins Ajv can be extended with plugins that add custom keywords, formats or functions to process generated code. When such plugin is published as npm package it is recommended that it follows these conventions: - it exports a function - this function accepts ajv instance as the first parameter and returns the same instance to allow chaining - this function can accept an optional configuration as the second parameter If you have published a useful plugin please submit a PR to add it to the next section. ## Related packages - [ajv-async](https://github.com/ajv-validator/ajv-async) - plugin to configure async validation mode - [ajv-bsontype](https://github.com/BoLaMN/ajv-bsontype) - plugin to validate mongodb's bsonType formats - [ajv-cli](https://github.com/jessedc/ajv-cli) - command line interface - [ajv-errors](https://github.com/ajv-validator/ajv-errors) - plugin for custom error messages - [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) - internationalised error messages - [ajv-istanbul](https://github.com/ajv-validator/ajv-istanbul) - plugin to instrument generated validation code to measure test coverage of your schemas - [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) - plugin with custom validation keywords (select, typeof, etc.) - [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) - plugin with keywords $merge and $patch - [ajv-pack](https://github.com/ajv-validator/ajv-pack) - produces a compact module exporting validation functions - [ajv-formats-draft2019](https://github.com/luzlab/ajv-formats-draft2019) - format validators for draft2019 that aren't already included in ajv (ie. `idn-hostname`, `idn-email`, `iri`, `iri-reference` and `duration`). ## Some packages using Ajv - [webpack](https://github.com/webpack/webpack) - a module bundler. 
Its main purpose is to bundle JavaScript files for usage in a browser - [jsonscript-js](https://github.com/JSONScript/jsonscript-js) - the interpreter for [JSONScript](http://www.jsonscript.org) - scripted processing of existing endpoints and services - [osprey-method-handler](https://github.com/mulesoft-labs/osprey-method-handler) - Express middleware for validating requests and responses based on a RAML method object, used in [osprey](https://github.com/mulesoft/osprey) - validating API proxy generated from a RAML definition - [har-validator](https://github.com/ahmadnassri/har-validator) - HTTP Archive (HAR) validator - [jsoneditor](https://github.com/josdejong/jsoneditor) - a web-based tool to view, edit, format, and validate JSON http://jsoneditoronline.org - [JSON Schema Lint](https://github.com/nickcmaynard/jsonschemalint) - a web tool to validate JSON/YAML document against a single JSON Schema http://jsonschemalint.com - [objection](https://github.com/vincit/objection.js) - SQL-friendly ORM for Node.js - [table](https://github.com/gajus/table) - formats data into a string table - [ripple-lib](https://github.com/ripple/ripple-lib) - a JavaScript API for interacting with [Ripple](https://ripple.com) in Node.js and the browser - [restbase](https://github.com/wikimedia/restbase) - distributed storage with REST API & dispatcher for backend services built to provide a low-latency & high-throughput API for Wikipedia / Wikimedia content - [hippie-swagger](https://github.com/CacheControl/hippie-swagger) - [Hippie](https://github.com/vesln/hippie) wrapper that provides end to end API testing with swagger validation - [react-form-controlled](https://github.com/seeden/react-form-controlled) - React controlled form components with validation - [rabbitmq-schema](https://github.com/tjmehta/rabbitmq-schema) - a schema definition module for RabbitMQ graphs and messages - [@query/schema](https://www.npmjs.com/package/@query/schema) - stream filtering with a URI-safe query syntax parsing to JSON Schema - [chai-ajv-json-schema](https://github.com/peon374/chai-ajv-json-schema) - chai plugin to us JSON Schema with expect in mocha tests - [grunt-jsonschema-ajv](https://github.com/SignpostMarv/grunt-jsonschema-ajv) - Grunt plugin for validating files against JSON Schema - [extract-text-webpack-plugin](https://github.com/webpack-contrib/extract-text-webpack-plugin) - extract text from bundle into a file - [electron-builder](https://github.com/electron-userland/electron-builder) - a solution to package and build a ready for distribution Electron app - [addons-linter](https://github.com/mozilla/addons-linter) - Mozilla Add-ons Linter - [gh-pages-generator](https://github.com/epoberezkin/gh-pages-generator) - multi-page site generator converting markdown files to GitHub pages - [ESLint](https://github.com/eslint/eslint) - the pluggable linting utility for JavaScript and JSX ## Tests ``` npm install git submodule update --init npm test ``` ## Contributing All validation functions are generated using doT templates in [dot](https://github.com/ajv-validator/ajv/tree/master/lib/dot) folder. Templates are precompiled so doT is not a run-time dependency. `npm run build` - compiles templates to [dotjs](https://github.com/ajv-validator/ajv/tree/master/lib/dotjs) folder. 
`npm run watch` - automatically compiles templates when files in dot folder change Please see [Contributing guidelines](https://github.com/ajv-validator/ajv/blob/master/CONTRIBUTING.md) ## Changes history See https://github.com/ajv-validator/ajv/releases __Please note__: [Changes in version 7.0.0-beta](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0) [Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0). ## Code of conduct Please review and follow the [Code of conduct](https://github.com/ajv-validator/ajv/blob/master/CODE_OF_CONDUCT.md). Please report any unacceptable behaviour to ajv.validator@gmail.com - it will be reviewed by the project team. ## Open-source software support Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides a centralised support to open-source software users, in addition to the support provided by software maintainers. ## License [MIT](https://github.com/ajv-validator/ajv/blob/master/LICENSE) long.js ======= A Long class for representing a 64 bit two's-complement integer value derived from the [Closure Library](https://github.com/google/closure-library) for stand-alone use and extended with unsigned support. [![Build Status](https://travis-ci.org/dcodeIO/long.js.svg)](https://travis-ci.org/dcodeIO/long.js) Background ---------- As of [ECMA-262 5th Edition](http://ecma262-5.com/ELS5_HTML.htm#Section_8.5), "all the positive and negative integers whose magnitude is no greater than 2<sup>53</sup> are representable in the Number type", which is "representing the doubleprecision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic". The [maximum safe integer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) in JavaScript is 2<sup>53</sup>-1. Example: 2<sup>64</sup>-1 is 1844674407370955**1615** but in JavaScript it evaluates to 1844674407370955**2000**. Furthermore, bitwise operators in JavaScript "deal only with integers in the range −2<sup>31</sup> through 2<sup>31</sup>−1, inclusive, or in the range 0 through 2<sup>32</sup>−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2<sup>32</sup> integer values." In some use cases, however, it is required to be able to reliably work with and perform bitwise operations on the full 64 bits. This is where long.js comes into play. Usage ----- The class is compatible with CommonJS and AMD loaders and is exposed globally as `Long` if neither is available. ```javascript var Long = require("long"); var longVal = new Long(0xFFFFFFFF, 0x7FFFFFFF); console.log(longVal.toString()); ... ``` API --- ### Constructor * new **Long**(low: `number`, high: `number`, unsigned?: `boolean`)<br /> Constructs a 64 bit two's-complement integer, given its low and high 32 bit values as *signed* integers. See the from* functions below for more convenient ways of constructing Longs. ### Fields * Long#**low**: `number`<br /> The low 32 bits as a signed value. * Long#**high**: `number`<br /> The high 32 bits as a signed value. * Long#**unsigned**: `boolean`<br /> Whether unsigned or not. ### Constants * Long.**ZERO**: `Long`<br /> Signed zero. * Long.**ONE**: `Long`<br /> Signed one. * Long.**NEG_ONE**: `Long`<br /> Signed negative one. * Long.**UZERO**: `Long`<br /> Unsigned zero. * Long.**UONE**: `Long`<br /> Unsigned one. 
* Long.**MAX_VALUE**: `Long`<br /> Maximum signed value. * Long.**MIN_VALUE**: `Long`<br /> Minimum signed value. * Long.**MAX_UNSIGNED_VALUE**: `Long`<br /> Maximum unsigned value. ### Utility * Long.**isLong**(obj: `*`): `boolean`<br /> Tests if the specified object is a Long. * Long.**fromBits**(lowBits: `number`, highBits: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the 64 bit integer that comes by concatenating the given low and high bits. Each is assumed to use 32 bits. * Long.**fromBytes**(bytes: `number[]`, unsigned?: `boolean`, le?: `boolean`): `Long`<br /> Creates a Long from its byte representation. * Long.**fromBytesLE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its little endian byte representation. * Long.**fromBytesBE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its big endian byte representation. * Long.**fromInt**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given 32 bit integer value. * Long.**fromNumber**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given value, provided that it is a finite number. Otherwise, zero is returned. * Long.**fromString**(str: `string`, unsigned?: `boolean`, radix?: `number`)<br /> Long.**fromString**(str: `string`, radix: `number`)<br /> Returns a Long representation of the given string, written using the specified radix. * Long.**fromValue**(val: `*`, unsigned?: `boolean`): `Long`<br /> Converts the specified value to a Long using the appropriate from* function for its type. ### Methods * Long#**add**(addend: `Long | number | string`): `Long`<br /> Returns the sum of this and the specified Long. * Long#**and**(other: `Long | number | string`): `Long`<br /> Returns the bitwise AND of this Long and the specified. * Long#**compare**/**comp**(other: `Long | number | string`): `number`<br /> Compares this Long's value with the specified's. Returns `0` if they are the same, `1` if the this is greater and `-1` if the given one is greater. * Long#**divide**/**div**(divisor: `Long | number | string`): `Long`<br /> Returns this Long divided by the specified. * Long#**equals**/**eq**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value equals the specified's. * Long#**getHighBits**(): `number`<br /> Gets the high 32 bits as a signed integer. * Long#**getHighBitsUnsigned**(): `number`<br /> Gets the high 32 bits as an unsigned integer. * Long#**getLowBits**(): `number`<br /> Gets the low 32 bits as a signed integer. * Long#**getLowBitsUnsigned**(): `number`<br /> Gets the low 32 bits as an unsigned integer. * Long#**getNumBitsAbs**(): `number`<br /> Gets the number of bits needed to represent the absolute value of this Long. * Long#**greaterThan**/**gt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than the specified's. * Long#**greaterThanOrEqual**/**gte**/**ge**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than or equal the specified's. * Long#**isEven**(): `boolean`<br /> Tests if this Long's value is even. * Long#**isNegative**(): `boolean`<br /> Tests if this Long's value is negative. * Long#**isOdd**(): `boolean`<br /> Tests if this Long's value is odd. * Long#**isPositive**(): `boolean`<br /> Tests if this Long's value is positive. * Long#**isZero**/**eqz**(): `boolean`<br /> Tests if this Long's value equals zero. 
* Long#**lessThan**/**lt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than the specified's. * Long#**lessThanOrEqual**/**lte**/**le**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than or equal the specified's. * Long#**modulo**/**mod**/**rem**(divisor: `Long | number | string`): `Long`<br /> Returns this Long modulo the specified. * Long#**multiply**/**mul**(multiplier: `Long | number | string`): `Long`<br /> Returns the product of this and the specified Long. * Long#**negate**/**neg**(): `Long`<br /> Negates this Long's value. * Long#**not**(): `Long`<br /> Returns the bitwise NOT of this Long. * Long#**notEquals**/**neq**/**ne**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value differs from the specified's. * Long#**or**(other: `Long | number | string`): `Long`<br /> Returns the bitwise OR of this Long and the specified. * Long#**shiftLeft**/**shl**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits shifted to the left by the given amount. * Long#**shiftRight**/**shr**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits arithmetically shifted to the right by the given amount. * Long#**shiftRightUnsigned**/**shru**/**shr_u**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits logically shifted to the right by the given amount. * Long#**subtract**/**sub**(subtrahend: `Long | number | string`): `Long`<br /> Returns the difference of this and the specified Long. * Long#**toBytes**(le?: `boolean`): `number[]`<br /> Converts this Long to its byte representation. * Long#**toBytesLE**(): `number[]`<br /> Converts this Long to its little endian byte representation. * Long#**toBytesBE**(): `number[]`<br /> Converts this Long to its big endian byte representation. * Long#**toInt**(): `number`<br /> Converts the Long to a 32 bit integer, assuming it is a 32 bit integer. * Long#**toNumber**(): `number`<br /> Converts the Long to a the nearest floating-point representation of this value (double, 53 bit mantissa). * Long#**toSigned**(): `Long`<br /> Converts this Long to signed. * Long#**toString**(radix?: `number`): `string`<br /> Converts the Long to a string written in the specified radix. * Long#**toUnsigned**(): `Long`<br /> Converts this Long to unsigned. * Long#**xor**(other: `Long | number | string`): `Long`<br /> Returns the bitwise XOR of this Long and the given one. Building -------- To build an UMD bundle to `dist/long.js`, run: ``` $> npm install $> npm run build ``` Running the [tests](./tests): ``` $> npm test ``` # assemblyscript-regex A regex engine for AssemblyScript. [AssemblyScript](https://www.assemblyscript.org/) is a new language, based on TypeScript, that runs on WebAssembly. AssemblyScript has a lightweight standard library, but lacks support for Regular Expression. The project fills that gap! This project exposes an API that mirrors the JavaScript [RegExp](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp) class: ```javascript const regex = new RegExp("fo*", "g"); const str = "table football, foul"; let match: Match | null = regex.exec(str); while (match != null) { // first iteration // match.index = 6 // match.matches[0] = "foo" // second iteration // match.index = 16 // match.matches[0] = "fo" match = regex.exec(str); } ``` ## Project status The initial focus of this implementation has been feature support and functionality over performance. 
It currently supports a sufficient number of regex features to be considered useful, including most character classes, common assertions, groups, alternations, capturing groups and quantifiers. The next phase of development will focus on more extensive testing and performance. The project currently has reasonable unit test coverage, focused on positive and negative test cases on a per-feature basis. It also includes a more exhaustive test suite with test cases borrowed from another regex library. ### Feature support Based on the classification within the [MDN cheatsheet](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions/Cheatsheet): **Character sets** - [x] . - [x] \d - [x] \D - [x] \w - [x] \W - [x] \s - [x] \S - [x] \t - [x] \r - [x] \n - [x] \v - [x] \f - [ ] [\b] - [ ] \0 - [ ] \cX - [x] \xhh - [x] \uhhhh - [ ] \u{hhhh} or \u{hhhhh} - [x] \ **Assertions** - [x] ^ - [x] $ - [ ] \b - [ ] \B **Other assertions** - [ ] x(?=y) Lookahead assertion - [ ] x(?!y) Negative lookahead assertion - [ ] (?<=y)x Lookbehind assertion - [ ] (?<!y)x Negative lookbehind assertion **Groups and ranges** - [x] x|y - [x] [xyz][a-c] - [x] [^xyz][^a-c] - [x] (x) capturing group - [ ] \n back reference - [ ] (?<Name>x) named capturing group - [x] (?:x) Non-capturing group **Quantifiers** - [x] x\* - [x] x+ - [x] x? - [x] x{n} - [x] x{n,} - [x] x{n,m} - [ ] x\*? / x+? / ... **RegExp** - [x] global - [ ] sticky - [x] case insensitive - [x] multiline - [x] dotAll - [ ] unicode ### Development This project is open source, MIT licensed, and your contributions are very much welcome. To get started, check out the repository and install dependencies: ``` $ npm install ``` A few general points about the tools and processes this project uses: - This project uses prettier for code formatting and eslint to provide additional syntactic checks. These are both run on `npm test` and as part of the CI build. - The unit tests are executed using [as-pect](https://github.com/jtenner/as-pect) - a native AssemblyScript test runner - The specification tests are within the `spec` folder. The `npm run test:generate` target transforms these tests into as-pect tests which execute as part of the standard build / test cycle - In order to support improved debugging, you can execute this library as TypeScript (rather than WebAssembly), via the `npm run tsrun` target. # balanced-match Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well!
[![build status](https://secure.travis-ci.org/juliangruber/balanced-match.svg)](http://travis-ci.org/juliangruber/balanced-match) [![downloads](https://img.shields.io/npm/dm/balanced-match.svg)](https://www.npmjs.org/package/balanced-match) [![testling badge](https://ci.testling.com/juliangruber/balanced-match.png)](https://ci.testling.com/juliangruber/balanced-match) ## Example Get the first matching pair of braces: ```js var balanced = require('balanced-match'); console.log(balanced('{', '}', 'pre{in{nested}}post')); console.log(balanced('{', '}', 'pre{first}between{second}post')); console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post')); ``` The matches are: ```bash $ node example.js { start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' } { start: 3, end: 9, pre: 'pre', body: 'first', post: 'between{second}post' } { start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' } ``` ## API ### var m = balanced(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an object with those keys: * **start** the index of the first match of `a` * **end** the index of the matching `b` * **pre** the preamble, `a` and `b` not included * **body** the match, `a` and `b` not included * **post** the postscript, `a` and `b` not included If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`. ### var r = balanced.range(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an array with indexes: `[ <a index>, <b index> ]`. If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`. ## Installation With [npm](https://npmjs.org) do: ```bash npm install balanced-match ``` ## Security contact information To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
[Build]: http://img.shields.io/travis/litejs/natural-compare-lite.png [Coverage]: http://img.shields.io/coveralls/litejs/natural-compare-lite.png [1]: https://travis-ci.org/litejs/natural-compare-lite [2]: https://coveralls.io/r/litejs/natural-compare-lite [npm package]: https://npmjs.org/package/natural-compare-lite [GitHub repo]: https://github.com/litejs/natural-compare-lite @version 1.4.0 @date 2015-10-26 @stability 3 - Stable Natural Compare &ndash; [![Build][]][1] [![Coverage][]][2] =============== Compare strings containing a mix of letters and numbers in the way a human being would in sort order. This is described as a "natural ordering". ```text Standard sorting: Natural order sorting: img1.png img1.png img10.png img2.png img12.png img10.png img2.png img12.png ``` String.naturalCompare returns a number indicating whether a reference string comes before, comes after, or is the same as the given string in sort order. Use it with the built-in sort() function. ### Installation - In browser ```html <script src=min.natural-compare.js></script> ``` - In node.js: `npm install natural-compare-lite` ```javascript require("natural-compare-lite") ``` ### Usage ```javascript // Simple case sensitive example var a = ["z1.doc", "z10.doc", "z17.doc", "z2.doc", "z23.doc", "z3.doc"]; a.sort(String.naturalCompare); // ["z1.doc", "z2.doc", "z3.doc", "z10.doc", "z17.doc", "z23.doc"] // Use wrapper function for case insensitivity a.sort(function(a, b){ return String.naturalCompare(a.toLowerCase(), b.toLowerCase()); }) // In most cases we want to sort an array of objects var a = [ {"street":"350 5th Ave", "room":"A-1021"} , {"street":"350 5th Ave", "room":"A-21046-b"} ]; // sort by street, then by room a.sort(function(a, b){ return String.naturalCompare(a.street, b.street) || String.naturalCompare(a.room, b.room); }) // When text transformation is needed (e.g. toLowerCase()), // it is best for performance to keep // the transformed key in the object. // There is no need to do the text transformation // on each comparison when sorting. var a = [ {"make":"Audi", "model":"A6"} , {"make":"Kia", "model":"Rio"} ]; // sort by make, then by model a.map(function(car){ car.sort_key = (car.make + " " + car.model).toLowerCase(); }) a.sort(function(a, b){ return String.naturalCompare(a.sort_key, b.sort_key); }) ``` - Works well with dates in ISO format, e.g. "Rev 2012-07-26.doc". ### Custom alphabet It is possible to configure a custom alphabet to achieve a desired order.
```javascript // Estonian alphabet String.alphabet = "ABDEFGHIJKLMNOPRSŠZŽTUVÕÄÖÜXYabdefghijklmnoprsšzžtuvõäöüxy" ["t", "z", "x", "õ"].sort(String.naturalCompare) // ["z", "t", "õ", "x"] // Russian alphabet String.alphabet = "АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдеёжзийклмнопрстуфхцчшщъыьэюя" ["Ё", "А", "Б"].sort(String.naturalCompare) // ["А", "Б", "Ё"] ``` External links -------------- - [GitHub repo][https://github.com/litejs/natural-compare-lite] - [jsperf test](http://jsperf.com/natural-sort-2/12) Licence ------- Copyright (c) 2012-2015 Lauri Rooden &lt;lauri@rooden.ee&gt; [The MIT License](http://lauri.rooden.ee/mit-license.txt) # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows command prompt notes ##### CMD On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Example: ```cmd set DEBUG=* & node app.js ``` ##### PowerShell (VS Code default) PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Example: ```cmd $env:DEBUG='app';node app.js ``` Then, run the program to be debugged as usual. 
npm script example: ```js "windowsDebug": "@powershell -Command $env:DEBUG='*';node app.js", ``` ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see when the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well, the "+NNNms" will show you how much time was spent between calls. <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debuggers you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. | | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. 
| __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Extend You can simply extend debugger ```js const log = require('debug')('auth'); //creates new debug instance with extended namespace const logSign = log.extend('sign'); const logLogin = log.extend('login'); log('hello'); // auth hello logSign('hello'); //auth:sign hello logLogin('hello'); //auth:login hello ``` ## Set dynamically You can also enable debug dynamically by calling the `enable()` method : ```js let debug = require('debug'); console.log(1, debug.enabled('test')); debug.enable('test'); console.log(2, debug.enabled('test')); debug.disable(); console.log(3, debug.enabled('test')); ``` print : ``` 1 false 2 true 3 false ``` Usage : `enable(namespaces)` `namespaces` can include modes separated by a colon and wildcards. 
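For instance, here is a small illustrative sketch of enabling a wildcard pattern with an exclusion at runtime (the `worker:*` namespaces are only placeholders, not part of the library):

```js
let debug = require('debug');

// enable every "worker:" namespace except "worker:b"
// (same pattern syntax as the DEBUG environment variable)
debug.enable('worker:*,-worker:b');

console.log(debug.enabled('worker:a')); // true
console.log(debug.enabled('worker:b')); // false
```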
Note that calling `enable()` completely overrides previously set DEBUG variable : ``` $ DEBUG=foo node -e 'var dbg = require("debug"); dbg.enable("bar"); console.log(dbg.enabled("foo"))' => false ``` `disable()` Will disable all namespaces. The functions returns the namespaces currently enabled (and skipped). This can be useful if you want to disable debugging temporarily without knowing what was enabled to begin with. For example: ```js let debug = require('debug'); debug.enable('foo:*,-foo:bar'); let namespaces = debug.disable(); debug.enable(namespaces); ``` Note: There is no guarantee that the string will be identical to the initial enable string, but semantically they will be identical. ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. [[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img 
src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. 
[[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. [![npm version](https://img.shields.io/npm/v/espree.svg)](https://www.npmjs.com/package/espree) [![Build Status](https://travis-ci.org/eslint/espree.svg?branch=master)](https://travis-ci.org/eslint/espree) [![npm downloads](https://img.shields.io/npm/dm/espree.svg)](https://www.npmjs.com/package/espree) [![Bountysource](https://www.bountysource.com/badge/tracker?tracker_id=9348450)](https://www.bountysource.com/trackers/9348450-eslint?utm_source=9348450&utm_medium=shield&utm_campaign=TRACKER_BADGE) # Espree Espree started out as a fork of [Esprima](http://esprima.org) v1.2.2, the last stable published released of Esprima before work on ECMAScript 6 began. Espree is now built on top of [Acorn](https://github.com/ternjs/acorn), which has a modular architecture that allows extension of core functionality. The goal of Espree is to produce output that is similar to Esprima with a similar API so that it can be used in place of Esprima. ## Usage Install: ``` npm i espree ``` And in your Node.js code: ```javascript const espree = require("espree"); const ast = espree.parse(code); ``` ## API ### `parse()` `parse` parses the given code and returns a abstract syntax tree (AST). It takes two parameters. - `code` [string]() - the code which needs to be parsed. - `options (Optional)` [Object]() - read more about this [here](#options). 
```javascript const espree = require("espree"); const ast = espree.parse(code, options); ``` **Example :** ```js const ast = espree.parse('let foo = "bar"', { ecmaVersion: 6 }); console.log(ast); ``` <details><summary>Output</summary> <p> ``` Node { type: 'Program', start: 0, end: 15, body: [ Node { type: 'VariableDeclaration', start: 0, end: 15, declarations: [Array], kind: 'let' } ], sourceType: 'script' } ``` </p> </details> ### `tokenize()` `tokenize` returns the tokens of a given code. It takes two parameters. - `code` [string]() - the code which needs to be parsed. - `options (Optional)` [Object]() - read more about this [here](#options). Even if `options` is empty or undefined or `options.tokens` is `false`, it assigns it to `true` in order to get the `tokens` array **Example :** ```js const tokens = espree.tokenize('let foo = "bar"', { ecmaVersion: 6 }); console.log(tokens); ``` <details><summary>Output</summary> <p> ``` Token { type: 'Keyword', value: 'let', start: 0, end: 3 }, Token { type: 'Identifier', value: 'foo', start: 4, end: 7 }, Token { type: 'Punctuator', value: '=', start: 8, end: 9 }, Token { type: 'String', value: '"bar"', start: 10, end: 15 } ``` </p> </details> ### `version` Returns the current `espree` version ### `VisitorKeys` Returns all visitor keys for traversing the AST from [eslint-visitor-keys](https://github.com/eslint/eslint-visitor-keys) ### `latestEcmaVersion` Returns the latest ECMAScript supported by `espree` ### `supportedEcmaVersions` Returns an array of all supported ECMAScript versions ## Options ```js const options = { // attach range information to each node range: false, // attach line/column location information to each node loc: false, // create a top-level comments array containing all comments comment: false, // create a top-level tokens array containing all tokens tokens: false, // Set to 3, 5 (default), 6, 7, 8, 9, 10, 11, or 12 to specify the version of ECMAScript syntax you want to use. // You can also set to 2015 (same as 6), 2016 (same as 7), 2017 (same as 8), 2018 (same as 9), 2019 (same as 10), 2020 (same as 11), or 2021 (same as 12) to use the year-based naming. ecmaVersion: 5, // specify which type of script you're parsing ("script" or "module") sourceType: "script", // specify additional language features ecmaFeatures: { // enable JSX parsing jsx: false, // enable return in global scope globalReturn: false, // enable implied strict mode (if ecmaVersion >= 5) impliedStrict: false } } ``` ## Esprima Compatibility Going Forward The primary goal is to produce the exact same AST structure and tokens as Esprima, and that takes precedence over anything else. (The AST structure being the [ESTree](https://github.com/estree/estree) API with JSX extensions.) Separate from that, Espree may deviate from what Esprima outputs in terms of where and how comments are attached, as well as what additional information is available on AST nodes. That is to say, Espree may add more things to the AST nodes than Esprima does but the overall AST structure produced will be the same. Espree may also deviate from Esprima in the interface it exposes. ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/espree/issues). 
Espree is licensed under a permissive BSD 2-clause license. ## Security Policy We work hard to ensure that Espree is safe for everyone and that security issues are addressed quickly and responsibly. Read the full [security policy](https://github.com/eslint/.github/blob/master/SECURITY.md). ## Build Commands * `npm test` - run all linting and tests * `npm run lint` - run all linting * `npm run browserify` - creates a version of Espree that is usable in a browser ## Differences from Espree 2.x * The `tokenize()` method does not use `ecmaFeatures`. Any string will be tokenized completely based on ECMAScript 6 semantics. * Trailing whitespace is no longer counted as part of a node. * `let` and `const` declarations are no longer parsed by default. You must opt-in by using an `ecmaVersion` newer than `5` or setting `sourceType` to `module`. * The `esparse` and `esvalidate` binary scripts have been removed. * There is no `tolerant` option. We will investigate adding this back in the future. ## Known Incompatibilities In an effort to help those wanting to transition from other parsers to Espree, the following is a list of noteworthy incompatibilities with other parsers. These are known differences that we do not intend to change. ### Esprima 1.2.2 * Esprima counts trailing whitespace as part of each AST node while Espree does not. In Espree, the end of a node is where the last token occurs. * Espree does not parse `let` and `const` declarations by default. * Error messages returned for parsing errors are different. * There are two additional properties on every node and token: `start` and `end`. These represent the same data as `range` and are used internally by Acorn. ### Esprima 2.x * Esprima 2.x uses a different comment attachment algorithm that results in some comments being added in different places than Espree. The algorithm Espree uses is the same one used in Esprima 1.2.2. ## Frequently Asked Questions ### Why another parser? [ESLint](http://eslint.org) had been relying on Esprima as its parser from the beginning. While that was fine when the JavaScript language was evolving slowly, the pace of development increased dramatically and Esprima had fallen behind. ESLint, like many other tools reliant on Esprima, has been unable to use new JavaScript language features until Esprima updates, and that caused our users frustration. We decided the only way for us to move forward was to create our own parser, bringing us in line with JSHint and JSLint, and allowing us to keep implementing new features as we need them. We chose to fork Esprima instead of starting from scratch in order to move as quickly as possible with a compatible API. With Espree 2.0.0, we are no longer a fork of Esprima but rather a translation layer between Acorn and Esprima syntax. This allows us to put work back into a community-supported parser (Acorn) that is continuing to grow and evolve while maintaining an Esprima-compatible parser for those utilities still built on Esprima. ### Have you tried working with Esprima? Yes. Since the start of ESLint, we've regularly filed bugs and feature requests with Esprima and will continue to do so. However, there are some different philosophies around how the projects work that need to be worked through. The initial goal was to have Espree track Esprima and eventually merge the two back together, but we ultimately decided that building on top of Acorn was a better choice due to Acorn's plugin support. ### Why don't you just use Acorn?
Acorn is a great JavaScript parser that produces an AST that is compatible with Esprima. Unfortunately, ESLint relies on more than just the AST to do its job. It relies on Esprima's tokens and comment attachment features to get a complete picture of the source code. We investigated switching to Acorn, but the inconsistencies between Esprima and Acorn created too much work for a project like ESLint. We are building on top of Acorn, however, so that we can contribute back and help make Acorn even better. ### What ECMAScript features do you support? Espree supports all ECMAScript 2020 features and partially supports ECMAScript 2021 features. Because ECMAScript 2021 is still under development, we are implementing features as they are finalized. Currently, Espree supports: * [Logical Assignment Operators](https://github.com/tc39/proposal-logical-assignment) * [Numeric Separators](https://github.com/tc39/proposal-numeric-separator) See [finished-proposals.md](https://github.com/tc39/proposals/blob/master/finished-proposals.md) to know what features are finalized. ### How do you determine which experimental features to support? In general, we do not support experimental JavaScript features. We may make exceptions from time to time depending on the maturity of the features. JS-YAML - YAML 1.2 parser / writer for JavaScript ================================================= [![Build Status](https://travis-ci.org/nodeca/js-yaml.svg?branch=master)](https://travis-ci.org/nodeca/js-yaml) [![NPM version](https://img.shields.io/npm/v/js-yaml.svg)](https://www.npmjs.org/package/js-yaml) __[Online Demo](http://nodeca.github.com/js-yaml/)__ This is an implementation of [YAML](http://yaml.org/), a human-friendly data serialization language. Started as a [PyYAML](http://pyyaml.org/) port, it was completely rewritten from scratch. Now it's very fast, and supports the 1.2 spec. Installation ------------ ### YAML module for node.js ``` npm install js-yaml ``` ### CLI executable If you want to inspect your YAML files from the CLI, install js-yaml globally: ``` npm install -g js-yaml ``` #### Usage ``` usage: js-yaml [-h] [-v] [-c] [-t] file Positional arguments: file File with YAML document(s) Optional arguments: -h, --help Show this help message and exit. -v, --version Show program's version number and exit. -c, --compact Display errors in compact mode -t, --trace Show stack trace on error ``` ### Bundled YAML library for browsers ``` html <!-- esprima required only for !!js/function --> <script src="esprima.js"></script> <script src="js-yaml.min.js"></script> <script type="text/javascript"> var doc = jsyaml.load('greeting: hello\nname: world'); </script> ``` Browser support was done mostly for the online demo. If you find any errors - feel free to send pull requests with fixes. Also note that IE and other old browsers need [es5-shims](https://github.com/kriskowal/es5-shim) to operate. Notes: 1. We have no resources to support the browserified version. Don't expect it to be well tested. Don't expect fast fixes if something goes wrong there. 2. `!!js/function` in the browser bundle will not work by default. If you really need it - load the `esprima` parser first (via AMD or directly). 3. `!!bin` in the browser will return `Array`, because browsers do not support node.js `Buffer` and adding Buffer shims is completely useless in practice. API --- Here we cover the most 'useful' methods.
If you need advanced details (creating your own tags), see [wiki](https://github.com/nodeca/js-yaml/wiki) and [examples](https://github.com/nodeca/js-yaml/tree/master/examples) for more info. ``` javascript const yaml = require('js-yaml'); const fs = require('fs'); // Get document, or throw exception on error try { const doc = yaml.safeLoad(fs.readFileSync('/home/ixti/example.yml', 'utf8')); console.log(doc); } catch (e) { console.log(e); } ``` ### safeLoad (string [ , options ]) **Recommended loading way.** Parses `string` as single YAML document. Returns either a plain object, a string or `undefined`, or throws `YAMLException` on error. By default, does not support regexps, functions and undefined. This method is safe for untrusted data. options: - `filename` _(default: null)_ - string to be used as a file path in error/warning messages. - `onWarning` _(default: null)_ - function to call on warning messages. Loader will call this function with an instance of `YAMLException` for each warning. - `schema` _(default: `DEFAULT_SAFE_SCHEMA`)_ - specifies a schema to use. - `FAILSAFE_SCHEMA` - only strings, arrays and plain objects: http://www.yaml.org/spec/1.2/spec.html#id2802346 - `JSON_SCHEMA` - all JSON-supported types: http://www.yaml.org/spec/1.2/spec.html#id2803231 - `CORE_SCHEMA` - same as `JSON_SCHEMA`: http://www.yaml.org/spec/1.2/spec.html#id2804923 - `DEFAULT_SAFE_SCHEMA` - all supported YAML types, without unsafe ones (`!!js/undefined`, `!!js/regexp` and `!!js/function`): http://yaml.org/type/ - `DEFAULT_FULL_SCHEMA` - all supported YAML types. - `json` _(default: false)_ - compatibility with JSON.parse behaviour. If true, then duplicate keys in a mapping will override values rather than throwing an error. NOTE: This function **does not** understand multi-document sources, it throws exception on those. NOTE: JS-YAML **does not** support schema-specific tag resolution restrictions. So, the JSON schema is not as strictly defined in the YAML specification. It allows numbers in any notation, use `Null` and `NULL` as `null`, etc. The core schema also has no such restrictions. It allows binary notation for integers. ### load (string [ , options ]) **Use with care with untrusted sources**. The same as `safeLoad()` but uses `DEFAULT_FULL_SCHEMA` by default - adds some JavaScript-specific types: `!!js/function`, `!!js/regexp` and `!!js/undefined`. For untrusted sources, you must additionally validate object structure to avoid injections: ``` javascript const untrusted_code = '"toString": !<tag:yaml.org,2002:js/function> "function (){very_evil_thing();}"'; // I'm just converting that string, what could possibly go wrong? require('js-yaml').load(untrusted_code) + '' ``` ### safeLoadAll (string [, iterator] [, options ]) Same as `safeLoad()`, but understands multi-document sources. Applies `iterator` to each document if specified, or returns array of documents. ``` javascript const yaml = require('js-yaml'); yaml.safeLoadAll(data, function (doc) { console.log(doc); }); ``` ### loadAll (string [, iterator] [ , options ]) Same as `safeLoadAll()` but uses `DEFAULT_FULL_SCHEMA` by default. ### safeDump (object [ , options ]) Serializes `object` as a YAML document. Uses `DEFAULT_SAFE_SCHEMA`, so it will throw an exception if you try to dump regexps or functions. However, you can disable exceptions by setting the `skipInvalid` option to `true`. options: - `indent` _(default: 2)_ - indentation width to use (in spaces). 
- `noArrayIndent` _(default: false)_ - when true, will not add an indentation level to array elements - `skipInvalid` _(default: false)_ - do not throw on invalid types (like a function in the safe schema) and skip pairs and single values with such types. - `flowLevel` _(default: -1)_ - specifies the level of nesting at which to switch from block to flow style for collections. -1 means block style everywhere - `styles` - "tag" => "style" map. Each tag may have its own set of styles. - `schema` _(default: `DEFAULT_SAFE_SCHEMA`)_ - specifies a schema to use. - `sortKeys` _(default: `false`)_ - if `true`, sort keys when dumping YAML. If a function, use the function to sort the keys. - `lineWidth` _(default: `80`)_ - set max line width. - `noRefs` _(default: `false`)_ - if `true`, don't convert duplicate objects into references - `noCompatMode` _(default: `false`)_ - if `true`, don't try to be compatible with older YAML versions. Currently: don't quote "yes", "no" and so on, as required for YAML 1.1 - `condenseFlow` _(default: `false`)_ - if `true`, flow sequences will be condensed, omitting the space between `a, b` (e.g. `'[a,b]'`) and omitting the space between `key: value` while quoting the key (e.g. `'{"a":b}'`). Can be useful when using YAML for pretty URL query params, as spaces are %-encoded. The following table shows the available styles (e.g. "canonical", "binary", ...) for each tag (e.g. !!null, !!int, ...). The YAML output is shown on the right side after `=>` (default setting) or `->`: ``` none !!null "canonical" -> "~" "lowercase" => "null" "uppercase" -> "NULL" "camelcase" -> "Null" !!int "binary" -> "0b1", "0b101010", "0b1110001111010" "octal" -> "01", "052", "016172" "decimal" => "1", "42", "7290" "hexadecimal" -> "0x1", "0x2A", "0x1C7A" !!bool "lowercase" => "true", "false" "uppercase" -> "TRUE", "FALSE" "camelcase" -> "True", "False" !!float "lowercase" => ".nan", '.inf' "uppercase" -> ".NAN", '.INF' "camelcase" -> ".NaN", '.Inf' ``` Example: ``` javascript safeDump (object, { 'styles': { '!!null': 'canonical' // dump null as ~ }, 'sortKeys': true // sort object keys }); ``` ### dump (object [ , options ]) Same as `safeDump()` but without limits (uses `DEFAULT_FULL_SCHEMA` by default). Supported YAML types -------------------- The list of standard YAML tags and the corresponding JavaScript types. See also [YAML tag discussion](http://pyyaml.org/wiki/YAMLTagDiscussion) and [YAML types repository](http://yaml.org/type/). ``` !!null '' # null !!bool 'yes' # bool !!int '3...' # number !!float '3.14...' # number !!binary '...base64...' # buffer !!timestamp 'YYYY-...' # date !!omap [ ... ] # array of key-value pairs !!pairs [ ... ] # array of array pairs !!set { ... } # array of objects with given keys and null values !!str '...' # string !!seq [ ... ] # array !!map { ... } # object ``` **JavaScript-specific tags** ``` !!js/regexp /pattern/gim # RegExp !!js/undefined '' # Undefined !!js/function 'function () {...}' # Function ``` Caveats ------- Note that when you use arrays or objects as keys in JS-YAML, JS does not allow objects or arrays as keys and stringifies them (by calling their `toString()` method) at the moment they are added. ``` yaml --- ? [ foo, bar ] : - baz ? { foo: bar } : - baz - baz ``` ``` javascript { "foo,bar": ["baz"], "[object Object]": ["baz", "baz"] } ``` Also, reading properties of implicit block mapping keys is not supported yet, so the following YAML document cannot be loaded.
``` yaml &anchor foo: foo: bar *anchor: duplicate key baz: bat *anchor: duplicate key ``` js-yaml for enterprise ---------------------- Available as part of the Tidelift Subscription The maintainers of js-yaml and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. [Learn more.](https://tidelift.com/subscription/pkg/npm-js-yaml?utm_source=npm-js-yaml&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) Railroad-diagram Generator ========================== This is a small js library for generating railroad diagrams (like what [JSON.org](http://json.org) uses) using SVG. Railroad diagrams are a way of visually representing a grammar in a form that is more readable than using regular expressions or BNF. I think (though I haven't given it a lot of thought yet) that if it's easy to write a context-free grammar for the language, the corresponding railroad diagram will be easy as well. There are several railroad-diagram generators out there, but none of them had the visual appeal I wanted. [Here's an example of how they look!](http://www.xanthir.com/etc/railroad-diagrams/example.html) And [here's an online generator for you to play with and get SVG code from!](http://www.xanthir.com/etc/railroad-diagrams/generator.html) The library now exists in a Python port as well! See the information further down. Details ------- To use the library, just include the js and css files, and then call the Diagram() function. Its arguments are the components of the diagram (Diagram is a special form of Sequence). An alternative to Diagram() is ComplexDiagram() which is used to describe a complex type diagram. Components are either leaves or containers. The leaves: * Terminal(text) or a bare string - represents literal text * NonTerminal(text) - represents an instruction or another production * Comment(text) - a comment * Skip() - an empty line The containers: * Sequence(children) - like simple concatenation in a regex * Choice(index, children) - like | in a regex. The index argument specifies which child is the "normal" choice and should go in the middle * Optional(child, skip) - like ? in a regex. A shorthand for `Choice(1, [Skip(), child])`. If the optional `skip` parameter has the value `"skip"`, it instead puts the Skip() in the straight-line path, for when the "normal" behavior is to omit the item. * OneOrMore(child, repeat) - like + in a regex. The 'repeat' argument is optional, and specifies something that must go between the repetitions. * ZeroOrMore(child, repeat, skip) - like * in a regex. A shorthand for `Optional(OneOrMore(child, repeat))`. The optional `skip` parameter is identical to Optional(). For convenience, each component can be called with or without `new`. If called without `new`, the container components become n-ary; that is, you can say either `new Sequence([A, B])` or just `Sequence(A,B)`. After constructing a Diagram, call `.format(...padding)` on it, specifying 0-4 padding values (just like CSS) for some additional "breathing space" around the diagram (the paddings default to 20px). The result can either be `.toString()`'d for the markup, or `.toSVG()`'d for an `<svg>` element, which can then be immediately inserted to the document. 
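For example, a minimal sketch of how these pieces fit together might look like the following (the grammar is invented purely for illustration, and it assumes the library's js and css files are already included as described above):

```js
// Hypothetical grammar:  value ::= "null" | "true" | "false" | number ("," number)*
var diagram = Diagram(
  Choice(0,
    Terminal('null'),
    Terminal('true'),
    Terminal('false'),
    OneOrMore(NonTerminal('number'), Terminal(','))
  )
).format(20); // 20px of padding on every side

var markup = diagram.toString();             // SVG markup as a string
document.body.appendChild(diagram.toSVG());  // or a live <svg> element
```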
As a convenience, Diagram also has an `.addTo(element)` method, which immediately converts it to SVG and appends it to the referenced element with default paddings. `element` defaults to `document.body`. Options ------- There are a few options you can tweak, at the bottom of the file. Just tweak either until the diagram looks like what you want. You can also change the CSS file - feel free to tweak to your heart's content. Note, though, that if you change the text sizes in the CSS, you'll have to go adjust the metrics for the leaf nodes as well. * VERTICAL_SEPARATION - sets the minimum amount of vertical separation between two items. Note that the stroke width isn't counted when computing the separation; this shouldn't be relevant unless you have a very small separation or very large stroke width. * ARC_RADIUS - the radius of the arcs used in the branching containers like Choice. This has a relatively large effect on the size of non-trivial diagrams. Both tight and loose values look good, depending on what you're going for. * DIAGRAM_CLASS - the class set on the root `<svg>` element of each diagram, for use in the CSS stylesheet. * STROKE_ODD_PIXEL_LENGTH - the default stylesheet uses odd pixel lengths for 'stroke'. Due to rasterization artifacts, they look best when the item has been translated half a pixel in both directions. If you change the styling to use a stroke with even pixel lengths, you'll want to set this variable to `false`. * INTERNAL_ALIGNMENT - when some branches of a container are narrower than others, this determines how they're aligned in the extra space. Defaults to "center", but can be set to "left" or "right". Caveats ------- At this early stage, the generator is feature-complete and works as intended, but still has several TODOs: * The font-sizes are hard-coded right now, and the font handling in general is very dumb - I'm just guessing at some metrics that are probably "good enough" rather than measuring things properly. Python Port ----------- In addition to the canonical JS version, the library now exists as a Python library as well. Using it is basically identical. The config variables are globals in the file, and so may be adjusted either manually or via tweaking from inside your program. The main difference from the JS port is how you extract the string from the Diagram. You'll find a `writeSvg(writerFunc)` method on `Diagram`, which takes a callback of one argument and passes it the string form of the diagram. For example, it can be used like `Diagram(...).writeSvg(sys.stdout.write)` to write to stdout. **Note**: the callback will be called multiple times as it builds up the string, not just once with the whole thing. If you need it all at once, consider something like a `StringIO` as an easy way to collect it into a single string. License ------- This document and all associated files in the github project are licensed under [CC0](http://creativecommons.org/publicdomain/zero/1.0/) ![](http://i.creativecommons.org/p/zero/1.0/80x15.png). This means you can reuse, remix, or otherwise appropriate this project for your own use **without restriction**. (The actual legal meaning can be found at the above link.) Don't ask me for permission to use any part of this project, **just use it**. I would appreciate attribution, but that is not required by the license. 
<p align="center"> <a href="http://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # interpret [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] A dictionary of file extensions and associated module loaders. ## What is it This is used by [Liftoff](http://github.com/tkellen/node-liftoff) to automatically require dependencies for configuration files, and by [rechoir](http://github.com/tkellen/node-rechoir) for registering module loaders. ## API ### extensions Map file types to modules which provide a [require.extensions] loader. ```js { '.babel.js': [ { module: '@babel/register', register: function(hook) { // register on .js extension due to https://github.com/joyent/node/blob/v0.12.0/lib/module.js#L353 // which only captures the final extension (.babel.js -> .js) hook({ extensions: '.js' }); }, }, { module: 'babel-register', register: function(hook) { hook({ extensions: '.js' }); }, }, { module: 'babel-core/register', register: function(hook) { hook({ extensions: '.js' }); }, }, { module: 'babel/register', register: function(hook) { hook({ extensions: '.js' }); }, }, ], '.babel.ts': [ { module: '@babel/register', register: function(hook) { hook({ extensions: '.ts' }); }, }, ], '.buble.js': 'buble/register', '.cirru': 'cirru-script/lib/register', '.cjsx': 'node-cjsx/register', '.co': 'coco', '.coffee': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'], '.coffee.md': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'], '.csv': 'require-csv', '.eg': 'earlgrey/register', '.esm.js': { module: 'esm', register: function(hook) { // register on .js extension due to https://github.com/joyent/node/blob/v0.12.0/lib/module.js#L353 // which only captures the final extension (.babel.js -> .js) var esmLoader = hook(module); require.extensions['.js'] = esmLoader('module')._extensions['.js']; }, }, '.iced': ['iced-coffee-script/register', 'iced-coffee-script'], '.iced.md': 'iced-coffee-script/register', '.ini': 'require-ini', '.js': null, '.json': null, '.json5': 'json5/lib/require', '.jsx': [ { module: '@babel/register', register: function(hook) { hook({ extensions: '.jsx' }); }, }, { module: 'babel-register', register: function(hook) { hook({ extensions: '.jsx' }); }, }, { module: 'babel-core/register', register: function(hook) { hook({ extensions: '.jsx' }); }, }, { module: 'babel/register', register: function(hook) { hook({ extensions: '.jsx' }); }, }, { module: 'node-jsx', register: function(hook) { hook.install({ extension: '.jsx', harmony: true }); }, }, ], '.litcoffee': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'], '.liticed': 'iced-coffee-script/register', '.ls': ['livescript', 'LiveScript'], '.mjs': '/absolute/path/to/interpret/mjs-stub.js', '.node': null, '.toml': { module: 'toml-require', register: function(hook) { hook.install(); }, }, '.ts': [ 'ts-node/register', 'typescript-node/register', 'typescript-register', 'typescript-require', 'sucrase/register/ts', { module: '@babel/register', register: function(hook) { hook({ extensions: '.ts' }); }, }, ], '.tsx': [ 'ts-node/register', 'typescript-node/register', 'sucrase/register', { module: '@babel/register', register: function(hook) { 
hook({ extensions: '.tsx' }); }, }, ], '.wisp': 'wisp/engine/node', '.xml': 'require-xml', '.yaml': 'require-yaml', '.yml': 'require-yaml', } ``` ### jsVariants Same as above, but only include the extensions which are javascript variants. ## How to use it Consumers should use the exported `extensions` or `jsVariants` object to determine which module should be loaded for a given extension. If a matching extension is found, consumers should do the following: 1. If the value is null, do nothing. 2. If the value is a string, try to require it. 3. If the value is an object, try to require the `module` property. If successful, the `register` property (a function) should be called with the module passed as the first argument. 4. If the value is an array, iterate over it, attempting step #2 or #3 until one of the attempts does not throw. [require.extensions]: http://nodejs.org/api/globals.html#globals_require_extensions [downloads-image]: http://img.shields.io/npm/dm/interpret.svg [npm-url]: https://www.npmjs.com/package/interpret [npm-image]: http://img.shields.io/npm/v/interpret.svg [travis-url]: https://travis-ci.org/gulpjs/interpret [travis-image]: http://img.shields.io/travis/gulpjs/interpret.svg?label=travis-ci [appveyor-url]: https://ci.appveyor.com/project/gulpjs/interpret [appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/interpret.svg?label=appveyor [coveralls-url]: https://coveralls.io/r/gulpjs/interpret [coveralls-image]: http://img.shields.io/coveralls/gulpjs/interpret/master.svg [gitter-url]: https://gitter.im/gulpjs/gulp [gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg # tr46.js > An implementation of the [Unicode TR46 specification](http://unicode.org/reports/tr46/). ## Installation [Node.js](http://nodejs.org) `>= 6` is required. To install, type this at the command line: ```shell npm install tr46 ``` ## API ### `toASCII(domainName[, options])` Converts a string of Unicode symbols to a case-folded Punycode string of ASCII symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`processingOption`](#processingOption) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) * [`verifyDNSLength`](#verifyDNSLength) ### `toUnicode(domainName[, options])` Converts a case-folded Punycode string of ASCII symbols to a string of Unicode symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) ## Options ### `checkBidi` Type: `Boolean` Default value: `false` When set to `true`, any bi-directional text within the input will be checked for validation. ### `checkHyphens` Type: `Boolean` Default value: `false` When set to `true`, the positions of any hyphen characters within the input will be checked for validation. ### `checkJoiners` Type: `Boolean` Default value: `false` When set to `true`, any word joiner characters within the input will be checked for validation. ### `processingOption` Type: `String` Default value: `"nontransitional"` When set to `"transitional"`, symbols within the input will be validated according to the older IDNA2003 protocol. When set to `"nontransitional"`, the current IDNA2008 protocol will be used. ### `useSTD3ASCIIRules` Type: `Boolean` Default value: `false` When set to `true`, input will be validated according to [STD3 Rules](http://unicode.org/reports/tr46/#STD3_Rules). 
### `verifyDNSLength` Type: `Boolean` Default value: `false` When set to `true`, the length of each DNS label within the input will be checked for validation. # which-module > Find the module object for something that was require()d [![Build Status](https://travis-ci.org/nexdrew/which-module.svg?branch=master)](https://travis-ci.org/nexdrew/which-module) [![Coverage Status](https://coveralls.io/repos/github/nexdrew/which-module/badge.svg?branch=master)](https://coveralls.io/github/nexdrew/which-module?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) Find the `module` object in `require.cache` for something that was `require()`d or `import`ed - essentially a reverse `require()` lookup. Useful for libs that want to e.g. lookup a filename for a module or submodule that it did not `require()` itself. ## Install and Usage ``` npm install --save which-module ``` ```js const whichModule = require('which-module') console.log(whichModule(require('something'))) // Module { // id: '/path/to/project/node_modules/something/index.js', // exports: [Function], // parent: ..., // filename: '/path/to/project/node_modules/something/index.js', // loaded: true, // children: [], // paths: [ '/path/to/project/node_modules/something/node_modules', // '/path/to/project/node_modules', // '/path/to/node_modules', // '/path/node_modules', // '/node_modules' ] } ``` ## API ### `whichModule(exported)` Return the [`module` object](https://nodejs.org/api/modules.html#modules_the_module_object), if any, that represents the given argument in the `require.cache`. `exported` can be anything that was previously `require()`d or `import`ed as a module, submodule, or dependency - which means `exported` is identical to the `module.exports` returned by this method. If `exported` did not come from the `exports` of a `module` in `require.cache`, then this method returns `null`. ## License ISC © Contributors [![npm version](https://img.shields.io/npm/v/eslint.svg)](https://www.npmjs.com/package/eslint) [![Downloads](https://img.shields.io/npm/dm/eslint.svg)](https://www.npmjs.com/package/eslint) [![Build Status](https://github.com/eslint/eslint/workflows/CI/badge.svg)](https://github.com/eslint/eslint/actions) [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint.svg?type=shield)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint?ref=badge_shield) <br /> [![Open Collective Backers](https://img.shields.io/opencollective/backers/eslint)](https://opencollective.com/eslint) [![Open Collective Sponsors](https://img.shields.io/opencollective/sponsors/eslint)](https://opencollective.com/eslint) [![Follow us on Twitter](https://img.shields.io/twitter/follow/geteslint?label=Follow&style=social)](https://twitter.com/intent/user?screen_name=geteslint) # ESLint [Website](https://eslint.org) | [Configuring](https://eslint.org/docs/user-guide/configuring) | [Rules](https://eslint.org/docs/rules/) | [Contributing](https://eslint.org/docs/developer-guide/contributing) | [Reporting Bugs](https://eslint.org/docs/developer-guide/contributing/reporting-bugs) | [Code of Conduct](https://eslint.org/conduct) | [Twitter](https://twitter.com/geteslint) | [Mailing List](https://groups.google.com/group/eslint) | [Chat Room](https://eslint.org/chat) ESLint is a tool for identifying and reporting on patterns found in ECMAScript/JavaScript code. 
In many ways, it is similar to JSLint and JSHint with a few exceptions: * ESLint uses [Espree](https://github.com/eslint/espree) for JavaScript parsing. * ESLint uses an AST to evaluate patterns in code. * ESLint is completely pluggable, every single rule is a plugin and you can add more at runtime. ## Table of Contents 1. [Installation and Usage](#installation-and-usage) 2. [Configuration](#configuration) 3. [Code of Conduct](#code-of-conduct) 4. [Filing Issues](#filing-issues) 5. [Frequently Asked Questions](#faq) 6. [Releases](#releases) 7. [Security Policy](#security-policy) 8. [Semantic Versioning Policy](#semantic-versioning-policy) 9. [Stylistic Rule Updates](#stylistic-rule-updates) 10. [License](#license) 11. [Team](#team) 12. [Sponsors](#sponsors) 13. [Technology Sponsors](#technology-sponsors) ## <a name="installation-and-usage"></a>Installation and Usage Prerequisites: [Node.js](https://nodejs.org/) (`^10.12.0`, or `>=12.0.0`) built with SSL support. (If you are using an official Node.js distribution, SSL is always built in.) You can install ESLint using npm: ``` $ npm install eslint --save-dev ``` You should then set up a configuration file: ``` $ ./node_modules/.bin/eslint --init ``` After that, you can run ESLint on any file or directory like this: ``` $ ./node_modules/.bin/eslint yourfile.js ``` ## <a name="configuration"></a>Configuration After running `eslint --init`, you'll have a `.eslintrc` file in your directory. In it, you'll see some rules configured like this: ```json { "rules": { "semi": ["error", "always"], "quotes": ["error", "double"] } } ``` The names `"semi"` and `"quotes"` are the names of [rules](https://eslint.org/docs/rules) in ESLint. The first value is the error level of the rule and can be one of these values: * `"off"` or `0` - turn the rule off * `"warn"` or `1` - turn the rule on as a warning (doesn't affect exit code) * `"error"` or `2` - turn the rule on as an error (exit code will be 1) The three error levels allow you fine-grained control over how ESLint applies rules (for more configuration options and details, see the [configuration docs](https://eslint.org/docs/user-guide/configuring)). ## <a name="code-of-conduct"></a>Code of Conduct ESLint adheres to the [JS Foundation Code of Conduct](https://eslint.org/conduct). ## <a name="filing-issues"></a>Filing Issues Before filing an issue, please be sure to read the guidelines for what you're reporting: * [Bug Report](https://eslint.org/docs/developer-guide/contributing/reporting-bugs) * [Propose a New Rule](https://eslint.org/docs/developer-guide/contributing/new-rules) * [Proposing a Rule Change](https://eslint.org/docs/developer-guide/contributing/rule-changes) * [Request a Change](https://eslint.org/docs/developer-guide/contributing/changes) ## <a name="faq"></a>Frequently Asked Questions ### I'm using JSCS, should I migrate to ESLint? Yes. [JSCS has reached end of life](https://eslint.org/blog/2016/07/jscs-end-of-life) and is no longer supported. We have prepared a [migration guide](https://eslint.org/docs/user-guide/migrating-from-jscs) to help you convert your JSCS settings to an ESLint configuration. We are now at or near 100% compatibility with JSCS. If you try ESLint and believe we are not yet compatible with a JSCS rule/configuration, please create an issue (mentioning that it is a JSCS compatibility issue) and we will evaluate it as per our normal process. ### Does Prettier replace ESLint? 
No, ESLint does both traditional linting (looking for problematic patterns) and style checking (enforcement of conventions). You can use ESLint for everything, or you can combine both using Prettier to format your code and ESLint to catch possible errors. ### Why can't ESLint find my plugins? * Make sure your plugins (and ESLint) are both in your project's `package.json` as devDependencies (or dependencies, if your project uses ESLint at runtime). * Make sure you have run `npm install` and all your dependencies are installed. * Make sure your plugins' peerDependencies have been installed as well. You can use `npm view eslint-plugin-myplugin peerDependencies` to see what peer dependencies `eslint-plugin-myplugin` has. ### Does ESLint support JSX? Yes, ESLint natively supports parsing JSX syntax (this must be enabled in [configuration](https://eslint.org/docs/user-guide/configuring)). Please note that supporting JSX syntax *is not* the same as supporting React. React applies specific semantics to JSX syntax that ESLint doesn't recognize. We recommend using [eslint-plugin-react](https://www.npmjs.com/package/eslint-plugin-react) if you are using React and want React semantics. ### What ECMAScript versions does ESLint support? ESLint has full support for ECMAScript 3, 5 (default), 2015, 2016, 2017, 2018, 2019, and 2020. You can set your desired ECMAScript syntax (and other settings, like global variables or your target environments) through [configuration](https://eslint.org/docs/user-guide/configuring). ### What about experimental features? ESLint's parser only officially supports the latest final ECMAScript standard. We will make changes to core rules in order to avoid crashes on stage 3 ECMAScript syntax proposals (as long as they are implemented using the correct experimental ESTree syntax). We may make changes to core rules to better work with language extensions (such as JSX, Flow, and TypeScript) on a case-by-case basis. In other cases (including if rules need to warn on more or fewer cases due to new syntax, rather than just not crashing), we recommend you use other parsers and/or rule plugins. If you are using Babel, you can use the [babel-eslint](https://github.com/babel/babel-eslint) parser and [eslint-plugin-babel](https://github.com/babel/eslint-plugin-babel) to use any option available in Babel. Once a language feature has been adopted into the ECMAScript standard (stage 4 according to the [TC39 process](https://tc39.github.io/process-document/)), we will accept issues and pull requests related to the new feature, subject to our [contributing guidelines](https://eslint.org/docs/developer-guide/contributing). Until then, please use the appropriate parser and plugin(s) for your experimental feature. ### Where to ask for help? Join our [Mailing List](https://groups.google.com/group/eslint) or [Chatroom](https://eslint.org/chat). ### Why doesn't ESLint lock dependency versions? Lock files like `package-lock.json` are helpful for deployed applications. They ensure that dependencies are consistent between environments and across deployments. Packages like `eslint` that get published to the npm registry do not include lock files. `npm install eslint` as a user will respect version constraints in ESLint's `package.json`. ESLint and its dependencies will be included in the user's lock file if one exists, but ESLint's own lock file would not be used. 
We intentionally don't lock dependency versions so that we have the latest compatible dependency versions in development and CI that our users get when installing ESLint in a project. The Twilio blog has a [deeper dive](https://www.twilio.com/blog/lockfiles-nodejs) to learn more. ## <a name="releases"></a>Releases We have scheduled releases every two weeks on Friday or Saturday. You can follow a [release issue](https://github.com/eslint/eslint/issues?q=is%3Aopen+is%3Aissue+label%3Arelease) for updates about the scheduling of any particular release. ## <a name="security-policy"></a>Security Policy ESLint takes security seriously. We work hard to ensure that ESLint is safe for everyone and that security issues are addressed quickly and responsibly. Read the full [security policy](https://github.com/eslint/.github/blob/master/SECURITY.md). ## <a name="semantic-versioning-policy"></a>Semantic Versioning Policy ESLint follows [semantic versioning](https://semver.org). However, due to the nature of ESLint as a code quality tool, it's not always clear when a minor or major version bump occurs. To help clarify this for everyone, we've defined the following semantic versioning policy for ESLint: * Patch release (intended to not break your lint build) * A bug fix in a rule that results in ESLint reporting fewer linting errors. * A bug fix to the CLI or core (including formatters). * Improvements to documentation. * Non-user-facing changes such as refactoring code, adding, deleting, or modifying tests, and increasing test coverage. * Re-releasing after a failed release (i.e., publishing a release that doesn't work for anyone). * Minor release (might break your lint build) * A bug fix in a rule that results in ESLint reporting more linting errors. * A new rule is created. * A new option to an existing rule that does not result in ESLint reporting more linting errors by default. * A new addition to an existing rule to support a newly-added language feature (within the last 12 months) that will result in ESLint reporting more linting errors by default. * An existing rule is deprecated. * A new CLI capability is created. * New capabilities to the public API are added (new classes, new methods, new arguments to existing methods, etc.). * A new formatter is created. * `eslint:recommended` is updated and will result in strictly fewer linting errors (e.g., rule removals). * Major release (likely to break your lint build) * `eslint:recommended` is updated and may result in new linting errors (e.g., rule additions, most rule option updates). * A new option to an existing rule that results in ESLint reporting more linting errors by default. * An existing formatter is removed. * Part of the public API is removed or changed in an incompatible way. The public API includes: * Rule schemas * Configuration schema * Command-line options * Node.js API * Rule, formatter, parser, plugin APIs According to our policy, any minor update may report more linting errors than the previous release (ex: from a bug fix). As such, we recommend using the tilde (`~`) in `package.json` e.g. `"eslint": "~3.1.0"` to guarantee the results of your builds. ## <a name="stylistic-rule-updates"></a>Stylistic Rule Updates Stylistic rules are frozen according to [our policy](https://eslint.org/blog/2020/05/changes-to-rules-policies) on how we evaluate new rules and rule changes. This means: * **Bug fixes**: We will still fix bugs in stylistic rules. 
* **New ECMAScript features**: We will also make sure stylistic rules are compatible with new ECMAScript features. * **New options**: We will **not** add any new options to stylistic rules unless an option is the only way to fix a bug or support a newly-added ECMAScript feature. ## <a name="license"></a>License [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint.svg?type=large)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint?ref=badge_large) ## <a name="team"></a>Team These folks keep the project moving and are resources for help. <!-- NOTE: This section is autogenerated. Do not manually edit.--> <!--teamstart--> ### Technical Steering Committee (TSC) The people who manage releases, review feature requests, and meet regularly to ensure ESLint is properly maintained. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/nzakas"> <img src="https://github.com/nzakas.png?s=75" width="75" height="75"><br /> Nicholas C. Zakas </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/btmills"> <img src="https://github.com/btmills.png?s=75" width="75" height="75"><br /> Brandon Mills </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/mdjermanovic"> <img src="https://github.com/mdjermanovic.png?s=75" width="75" height="75"><br /> Milos Djermanovic </a> </td></tr></tbody></table> ### Reviewers The people who review and implement new features. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/mysticatea"> <img src="https://github.com/mysticatea.png?s=75" width="75" height="75"><br /> Toru Nagashima </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/aladdin-add"> <img src="https://github.com/aladdin-add.png?s=75" width="75" height="75"><br /> 薛定谔的猫 </a> </td></tr></tbody></table> ### Committers The people who review and fix bugs and help triage issues. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/brettz9"> <img src="https://github.com/brettz9.png?s=75" width="75" height="75"><br /> Brett Zamir </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/bmish"> <img src="https://github.com/bmish.png?s=75" width="75" height="75"><br /> Bryan Mishkin </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/g-plane"> <img src="https://github.com/g-plane.png?s=75" width="75" height="75"><br /> Pig Fang </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/anikethsaha"> <img src="https://github.com/anikethsaha.png?s=75" width="75" height="75"><br /> Anix </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/yeonjuan"> <img src="https://github.com/yeonjuan.png?s=75" width="75" height="75"><br /> YeonJuan </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/snitin315"> <img src="https://github.com/snitin315.png?s=75" width="75" height="75"><br /> Nitin Kumar </a> </td></tr></tbody></table> <!--teamend--> ## <a name="sponsors"></a>Sponsors The following companies, organizations, and individuals support ESLint's ongoing maintenance and development. [Become a Sponsor](https://opencollective.com/eslint) to get your logo on our README and website. <!-- NOTE: This section is autogenerated. 
Do not manually edit.--> <!--sponsorsstart--> <h3>Platinum Sponsors</h3> <p><a href="https://automattic.com"><img src="https://images.opencollective.com/photomatt/d0ef3e1/logo.png" alt="Automattic" height="undefined"></a></p><h3>Gold Sponsors</h3> <p><a href="https://nx.dev"><img src="https://images.opencollective.com/nx/0efbe42/logo.png" alt="Nx (by Nrwl)" height="96"></a> <a href="https://google.com/chrome"><img src="https://images.opencollective.com/chrome/dc55bd4/logo.png" alt="Chrome's Web Framework & Tools Performance Fund" height="96"></a> <a href="https://www.salesforce.com"><img src="https://images.opencollective.com/salesforce/ca8f997/logo.png" alt="Salesforce" height="96"></a> <a href="https://www.airbnb.com/"><img src="https://images.opencollective.com/airbnb/d327d66/logo.png" alt="Airbnb" height="96"></a> <a href="https://coinbase.com"><img src="https://avatars.githubusercontent.com/u/1885080?v=4" alt="Coinbase" height="96"></a> <a href="https://substack.com/"><img src="https://avatars.githubusercontent.com/u/53023767?v=4" alt="Substack" height="96"></a></p><h3>Silver Sponsors</h3> <p><a href="https://retool.com/"><img src="https://images.opencollective.com/retool/98ea68e/logo.png" alt="Retool" height="64"></a> <a href="https://liftoff.io/"><img src="https://images.opencollective.com/liftoff/5c4fa84/logo.png" alt="Liftoff" height="64"></a></p><h3>Bronze Sponsors</h3> <p><a href="https://www.crosswordsolver.org/anagram-solver/"><img src="https://images.opencollective.com/anagram-solver/2666271/logo.png" alt="Anagram Solver" height="32"></a> <a href="null"><img src="https://images.opencollective.com/bugsnag-stability-monitoring/c2cef36/logo.png" alt="Bugsnag Stability Monitoring" height="32"></a> <a href="https://mixpanel.com"><img src="https://images.opencollective.com/mixpanel/cd682f7/logo.png" alt="Mixpanel" height="32"></a> <a href="https://www.vpsserver.com"><img src="https://images.opencollective.com/vpsservercom/logo.png" alt="VPS Server" height="32"></a> <a href="https://icons8.com"><img src="https://images.opencollective.com/icons8/7fa1641/logo.png" alt="Icons8: free icons, photos, illustrations, and music" height="32"></a> <a href="https://discord.com"><img src="https://images.opencollective.com/discordapp/f9645d9/logo.png" alt="Discord" height="32"></a> <a href="https://themeisle.com"><img src="https://images.opencollective.com/themeisle/d5592fe/logo.png" alt="ThemeIsle" height="32"></a> <a href="https://www.firesticktricks.com"><img src="https://images.opencollective.com/fire-stick-tricks/b8fbe2c/logo.png" alt="Fire Stick Tricks" height="32"></a> <a href="https://www.practiceignition.com"><img src="https://avatars.githubusercontent.com/u/5753491?v=4" alt="Practice Ignition" height="32"></a></p> <!--sponsorsend--> ## <a name="technology-sponsors"></a>Technology Sponsors * Site search ([eslint.org](https://eslint.org)) is sponsored by [Algolia](https://www.algolia.com) * Hosting for ([eslint.org](https://eslint.org)) is sponsored by [Netlify](https://www.netlify.com) * Password management is sponsored by [1Password](https://www.1password.com) # yallist Yet Another Linked List There are many doubly-linked list implementations like it, but this one is mine. For when an array would be too big, and a Map can't be iterated in reverse order. 
[![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist) ## basic usage ```javascript var yallist = require('yallist') var myList = yallist.create([1, 2, 3]) myList.push('foo') myList.unshift('bar') // of course pop() and shift() are there, too console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo'] myList.forEach(function (k) { // walk the list head to tail }) myList.forEachReverse(function (k, index, list) { // walk the list tail to head }) var myDoubledList = myList.map(function (k) { return k + k }) // now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo'] // mapReverse is also a thing var myDoubledListReverse = myList.mapReverse(function (k) { return k + k }) // ['foofoo', 6, 4, 2, 'barbar'] var reduced = myList.reduce(function (set, entry) { set += entry return set }, 'start') console.log(reduced) // 'startfoo123bar' ``` ## api The whole API is considered "public". Functions with the same name as an Array method work more or less the same way. There's reverse versions of most things because that's the point. ### Yallist Default export, the class that holds and manages a list. Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list. The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots. ### Yallist.create(..) Alias for Yallist function. Some people like factories. #### yallist.head The first node in the list #### yallist.tail The last node in the list #### yallist.length The number of nodes in the list. (Change this at your peril. It is not magic like Array length.) #### yallist.toArray() Convert the list to an array. #### yallist.forEach(fn, [thisp]) Call a function on each item in the list. #### yallist.forEachReverse(fn, [thisp]) Call a function on each item in the list, in reverse order. #### yallist.get(n) Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array. #### yallist.getReverse(n) Get the data at position `n`, counting from the tail. #### yallist.map(fn, thisp) Create a new Yallist with the result of calling the function on each item. #### yallist.mapReverse(fn, thisp) Same as `map`, but in reverse. #### yallist.pop() Get the data from the list tail, and remove the tail from the list. #### yallist.push(item, ...) Insert one or more items to the tail of the list. #### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. #### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. 
#### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) [![NPM version][npm-image]][npm-url] [![build status][travis-image]][travis-url] [![Test coverage][coveralls-image]][coveralls-url] [![Downloads][downloads-image]][downloads-url] [![Join the chat at https://gitter.im/eslint/doctrine](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/eslint/doctrine?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) # Doctrine Doctrine is a [JSDoc](http://usejsdoc.org) parser that parses documentation comments from JavaScript (you need to pass in the comment, not a whole JavaScript file). ## Installation You can install Doctrine using [npm](https://npmjs.com): ``` $ npm install doctrine --save-dev ``` Doctrine can also be used in web browsers using [Browserify](http://browserify.org). ## Usage Require doctrine inside of your JavaScript: ```js var doctrine = require("doctrine"); ``` ### parse() The primary method is `parse()`, which accepts two arguments: the JSDoc comment to parse and an optional options object. The available options are: * `unwrap` - set to `true` to delete the leading `/**`, any `*` that begins a line, and the trailing `*/` from the source text. Default: `false`. * `tags` - an array of tags to return. When specified, Doctrine returns only tags in this array. For example, if `tags` is `["param"]`, then only `@param` tags will be returned. Default: `null`. * `recoverable` - set to `true` to keep parsing even when syntax errors occur. Default: `false`. * `sloppy` - set to `true` to allow optional parameters to be specified in brackets (`@param {string} [foo]`). Default: `false`. * `lineNumbers` - set to `true` to add `lineNumber` to each node, specifying the line on which the node is found in the source. Default: `false`. * `range` - set to `true` to add `range` to each node, specifying the start and end index of the node in the original comment. Default: `false`. Here's a simple example: ```js var ast = doctrine.parse( [ "/**", " * This function comment is parsed by doctrine", " * @param {{ok:String}} userName", "*/" ].join('\n'), { unwrap: true }); ``` This example returns the following AST: { "description": "This function comment is parsed by doctrine", "tags": [ { "title": "param", "description": null, "type": { "type": "RecordType", "fields": [ { "type": "FieldType", "key": "ok", "value": { "type": "NameExpression", "name": "String" } } ] }, "name": "userName" } ] } See the [demo page](http://eslint.org/doctrine/demo/) more detail. ## Team These folks keep the project moving and are resources for help: * Nicholas C. 
Zakas ([@nzakas](https://github.com/nzakas)) - project lead * Yusuke Suzuki ([@constellation](https://github.com/constellation)) - reviewer ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/doctrine/issues). ## Frequently Asked Questions ### Can I pass a whole JavaScript file to Doctrine? No. Doctrine can only parse JSDoc comments, so you'll need to pass just the JSDoc comment to Doctrine for it to work. ### License #### doctrine Copyright JS Foundation and other contributors, https://js.foundation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. #### esprima Some functions are derived from esprima Copyright (C) 2012, 2011 [Ariya Hidayat](http://ariya.ofilabs.com/about) (twitter: [@ariyahidayat](http://twitter.com/ariyahidayat)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #### closure-compiler Some extensions are derived from closure-compiler Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ ### Where to ask for help?
Join our [Chatroom](https://gitter.im/eslint/doctrine) [npm-image]: https://img.shields.io/npm/v/doctrine.svg?style=flat-square [npm-url]: https://www.npmjs.com/package/doctrine [travis-image]: https://img.shields.io/travis/eslint/doctrine/master.svg?style=flat-square [travis-url]: https://travis-ci.org/eslint/doctrine [coveralls-image]: https://img.shields.io/coveralls/eslint/doctrine/master.svg?style=flat-square [coveralls-url]: https://coveralls.io/r/eslint/doctrine?branch=master [downloads-image]: http://img.shields.io/npm/dm/doctrine.svg?style=flat-square [downloads-url]: https://www.npmjs.com/package/doctrine # eslint-visitor-keys [![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys) Constants and utilities for the visitor keys used to traverse an AST. ## 💿 Installation Use [npm] to install. ```bash $ npm install eslint-visitor-keys ``` ### Requirements - [Node.js] 4.0.0 or later. ## 📖 Usage ```js const evk = require("eslint-visitor-keys") ``` ### evk.KEYS > type: `{ [type: string]: string[] | undefined }` Visitor keys. These keys are frozen. This is an object whose keys are the types of [ESTree] nodes and whose values are arrays of the property names which contain child nodes. For example: ``` console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"] ``` ### evk.getKeys(node) > type: `(node: object) => string[]` Get the visitor keys of a given AST node. This is similar to `Object.keys(node)` from the ES standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`. This is used to traverse unknown nodes. For example: ``` const node = { type: "AssignmentExpression", left: { type: "Identifier", name: "foo" }, right: { type: "Literal", value: 0 } } console.log(evk.getKeys(node)) // → ["type", "left", "right"] ``` ### evk.unionWith(additionalKeys) > type: `(additionalKeys: object) => { [type: string]: string[] | undefined }` Make the union of `evk.KEYS` and the given keys. - The keys from `additionalKeys` come first, then `evk.KEYS` is concatenated after them. - Duplicate keys are removed, keeping the first occurrence. For example: ``` console.log(evk.unionWith({ MethodDefinition: ["decorators"] })) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... } ``` ## 📰 Change log See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases). ## 🍻 Contributing Welcome. See the [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/). ### Development commands - `npm test` runs tests and measures code coverage. - `npm run lint` checks the source code with ESLint. - `npm run coverage` opens the code coverage report from the previous test run in your default browser. - `npm run release` publishes this package to the [npm] registry.
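As a closing illustration of how the API above fits together, here is a small sketch (not taken from the package's documentation) of a generic traversal that prefers `evk.KEYS` for known node types and falls back to `evk.getKeys` for unknown ones:

```js
const evk = require("eslint-visitor-keys");

// Visit every node in an ESTree-style AST.
function traverse(node, visit) {
    visit(node);
    // Known node types come from evk.KEYS; anything else falls back to getKeys().
    const keys = evk.KEYS[node.type] || evk.getKeys(node);
    for (const key of keys) {
        const child = node[key];
        const children = Array.isArray(child) ? child : [child];
        for (const c of children) {
            if (c && typeof c.type === "string") {
                traverse(c, visit);
            }
        }
    }
}

traverse({
    type: "AssignmentExpression",
    operator: "=",
    left: { type: "Identifier", name: "foo" },
    right: { type: "Literal", value: 0 }
}, n => console.log(n.type));
// → AssignmentExpression, Identifier, Literal
```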
[npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree VotingApp Smart Contract ================== A [smart contract] written in [AssemblyScript] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install [Node.js] ≥ 12 Exploring The Code ================== 1. The main smart contract code lives in `assembly/index.ts`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard AssemblyScript tests using [as-pect]. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [AssemblyScript]: https://www.assemblyscript.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [as-pect]: https://www.npmjs.com/package/@as-pect/cli # Optionator <a name="optionator" /> Optionator is a JavaScript/Node.js option parsing and help generation library used by [eslint](http://eslint.org), [Grasp](http://graspjs.com), [LiveScript](http://livescript.net), [esmangle](https://github.com/estools/esmangle), [escodegen](https://github.com/estools/escodegen), and [many more](https://www.npmjs.com/browse/depended/optionator). For an online demo, check out the [Grasp online demo](http://www.graspjs.com/#demo). [About](#about) &middot; [Usage](#usage) &middot; [Settings Format](#settings-format) &middot; [Argument Format](#argument-format) ## Why? The problem with other option parsers, such as `yargs` or `minimist`, is that they just accept all input, valid or not. With Optionator, if you mistype an option, it will give you an error (with a suggestion for what you meant). If you give the wrong type of argument for an option, it will give you an error rather than supplying the wrong input to your application. $ cmd --halp Invalid option '--halp' - perhaps you meant '--help'? $ cmd --count str Invalid value for option 'count' - expected type Int, received value: str. Other helpful features include reformatting the help text based on the size of the console, so that it fits even if the console is narrow, and accepting not just an array (e.g. `process.argv`), but a string or object as well, making things like testing much easier. ## About Optionator uses [type-check](https://github.com/gkz/type-check) and [levn](https://github.com/gkz/levn) behind the scenes to cast and verify input according to the specified types. MIT license. Version 0.9.1 npm install optionator For updates on Optionator, [follow me on twitter](https://twitter.com/gkzahariev). Optionator is a Node.js module, but can be used in the browser as well if packed with webpack/browserify. ## Usage `require('optionator');` returns a function. It has one property, `VERSION`, the current version of the library as a string. This function is called with an object specifying your options and other information; see the [settings format section](#settings-format). This in turn returns an object with four properties, `parse`, `parseArgv`, `generateHelp`, and `generateHelpForOption`, which are all functions.
```js var optionator = require('optionator')({ prepend: 'Usage: cmd [options]', append: 'Version 1.0.0', options: [{ option: 'help', alias: 'h', type: 'Boolean', description: 'displays help' }, { option: 'count', alias: 'c', type: 'Int', description: 'number of things', example: 'cmd --count 2' }] }); var options = optionator.parseArgv(process.argv); if (options.help) { console.log(optionator.generateHelp()); } ... ``` ### parse(input, parseOptions) `parse` processes the `input` according to your settings, and returns an object with the results. ##### arguments * input - `[String] | Object | String` - the input you wish to parse * parseOptions - `{slice: Int}` - all options optional - `slice` specifies how much to slice away from the beginning if the input is an array or string - by default `0` for string, `2` for array (works with `process.argv`) ##### returns `Object` - the parsed options, each key is a camelCase version of the option name (specified in dash-case), and each value is the processed value for that option. Positional values are in an array under the `_` key. ##### example ```js parse(['node', 't.js', '--count', '2', 'positional']); // {count: 2, _: ['positional']} parse('--count 2 positional'); // {count: 2, _: ['positional']} parse({count: 2, _:['positional']}); // {count: 2, _: ['positional']} ``` ### parseArgv(input) `parseArgv` works exactly like `parse`, but only for array input, and it slices off the first two elements. ##### arguments * input - `[String]` - the input you wish to parse ##### returns See the "returns" section of "parse" ##### example ```js parseArgv(process.argv); ``` ### generateHelp(helpOptions) `generateHelp` produces help text based on your settings. ##### arguments * helpOptions - `{showHidden: Boolean, interpolate: Object}` - all options optional - `showHidden` specifies whether to show options with `hidden: true` specified, by default it is `false` - `interpolate` specifies data to be interpolated in the `prepend` and `append` text, `{{key}}` is the format - e.g. `generateHelp({interpolate:{version: '0.4.2'}})` will change this `append` text: `Version {{version}}` to `Version 0.4.2` ##### returns `String` - the generated help text ##### example ```js generateHelp(); /* "Usage: cmd [options] positional -h, --help displays help -c, --count Int number of things Version 1.0.0 "*/ ``` ### generateHelpForOption(optionName) `generateHelpForOption` produces expanded help text for the option specified by `optionName`. If an `example` was specified for the option, it will be displayed, and if a `longDescription` was specified, it will display that instead of the `description`. ##### arguments * optionName - `String` - the name of the option to display ##### returns `String` - the generated help text for the option ##### example ```js generateHelpForOption('count'); /* "-c, --count Int description: number of things example: cmd --count 2 "*/ ``` ## Settings Format When you `require('optionator')`, you get a function that takes in a settings object.
This object has the type: { prepend: String, append: String, options: [{heading: String} | { option: String, alias: [String] | String, type: String, enum: [String], default: String, restPositional: Boolean, required: Boolean, overrideRequired: Boolean, dependsOn: [String] | String, concatRepeatedArrays: Boolean | (Boolean, Object), mergeRepeatedObjects: Boolean, description: String, longDescription: String, example: [String] | String }], helpStyle: { aliasSeparator: String, typeSeparator: String, descriptionSeparator: String, initialIndent: Int, secondaryIndent: Int, maxPadFactor: Number }, mutuallyExclusive: [[String | [String]]], concatRepeatedArrays: Boolean | (Boolean, Object), // deprecated, set in defaults object mergeRepeatedObjects: Boolean, // deprecated, set in defaults object positionalAnywhere: Boolean, typeAliases: Object, defaults: Object } All of the properties are optional (the `Maybe` has been excluded for brevities sake), except for having either `heading: String` or `option: String` in each object in the `options` array. ### Top Level Properties * `prepend` is an optional string to be placed before the options in the help text * `append` is an optional string to be placed after the options in the help text * `options` is a required array specifying your options and headings, the options and headings will be displayed in the order specified * `helpStyle` is an optional object which enables you to change the default appearance of some aspects of the help text * `mutuallyExclusive` is an optional array of arrays of either strings or arrays of strings. The top level array is a list of rules, each rule is a list of elements - each element can be either a string (the name of an option), or a list of strings (a group of option names) - there will be an error if more than one element is present * `concatRepeatedArrays` see description under the "Option Properties" heading - use at the top level is deprecated, if you want to set this for all options, use the `defaults` property * `mergeRepeatedObjects` see description under the "Option Properties" heading - use at the top level is deprecated, if you want to set this for all options, use the `defaults` property * `positionalAnywhere` is an optional boolean (defaults to `true`) - when `true` it allows positional arguments anywhere, when `false`, all arguments after the first positional one are taken to be positional as well, even if they look like a flag. For example, with `positionalAnywhere: false`, the arguments `--flag --boom 12 --crack` would have two positional arguments: `12` and `--crack` * `typeAliases` is an optional object, it allows you to set aliases for types, eg. `{Path: 'String'}` would allow you to use the type `Path` as an alias for the type `String` * `defaults` is an optional object following the option properties format, which specifies default values for all options. A default will be overridden if manually set. 
For example, you can do `default: { type: "String" }` to set the default type of all options to `String`, and then override that default in an individual option by setting the `type` property #### Heading Properties * `heading` a required string, the name of the heading #### Option Properties * `option` the required name of the option - use dash-case, without the leading dashes * `alias` is an optional string or array of strings which specify any aliases for the option * `type` is a required string in the [type check](https://github.com/gkz/type-check) [format](https://github.com/gkz/type-check#type-format), this will be used to cast the inputted value and validate it * `enum` is an optional array of strings, each string will be parsed by [levn](https://github.com/gkz/levn) - the argument value must be one of the resulting values - each potential value must validate against the specified `type` * `default` is a optional string, which will be parsed by [levn](https://github.com/gkz/levn) and used as the default value if none is set - the value must validate against the specified `type` * `restPositional` is an optional boolean - if set to `true`, everything after the option will be taken to be a positional argument, even if it looks like a named argument * `required` is an optional boolean - if set to `true`, the option parsing will fail if the option is not defined * `overrideRequired` is a optional boolean - if set to `true` and the option is used, and there is another option which is required but not set, it will override the need for the required option and there will be no error - this is useful if you have required options and want to use `--help` or `--version` flags * `concatRepeatedArrays` is an optional boolean or tuple with boolean and options object (defaults to `false`) - when set to `true` and an option contains an array value and is repeated, the subsequent values for the flag will be appended rather than overwriting the original value - eg. option `g` of type `[String]`: `-g a -g b -g c,d` will result in `['a','b','c','d']` You can supply an options object by giving the following value: `[true, options]`. The one currently supported option is `oneValuePerFlag`, this only allows one array value per flag. This is useful if your potential values contain a comma. * `mergeRepeatedObjects` is an optional boolean (defaults to `false`) - when set to `true` and an option contains an object value and is repeated, the subsequent values for the flag will be merged rather than overwriting the original value - eg. 
option `g` of type `Object`: `-g a:1 -g b:2 -g c:3,d:4` will result in `{a: 1, b: 2, c: 3, d: 4}` * `dependsOn` is an optional string or array of strings - if simply a string (the name of another option), it will make sure that that other option is set, if an array of strings, depending on whether `'and'` or `'or'` is first, it will either check whether all (`['and', 'option-a', 'option-b']`), or at least one (`['or', 'option-a', 'option-b']`) other options are set * `description` is an optional string, which will be displayed next to the option in the help text * `longDescription` is an optional string, it will be displayed instead of the `description` when `generateHelpForOption` is used * `example` is an optional string or array of strings with example(s) for the option - these will be displayed when `generateHelpForOption` is used #### Help Style Properties * `aliasSeparator` is an optional string, separates multiple names from each other - default: ' ,' * `typeSeparator` is an optional string, separates the type from the names - default: ' ' * `descriptionSeparator` is an optional string , separates the description from the padded name and type - default: ' ' * `initialIndent` is an optional int - the amount of indent for options - default: 2 * `secondaryIndent` is an optional int - the amount of indent if wrapped fully (in addition to the initial indent) - default: 4 * `maxPadFactor` is an optional number - affects the default level of padding for the names/type, it is multiplied by the average of the length of the names/type - default: 1.5 ## Argument Format At the highest level there are two types of arguments: named, and positional. Name arguments of any length are prefixed with `--` (eg. `--go`), and those of one character may be prefixed with either `--` or `-` (eg. `-g`). There are two types of named arguments: boolean flags (eg. `--problemo`, `-p`) which take no value and result in a `true` if they are present, the falsey `undefined` if they are not present, or `false` if present and explicitly prefixed with `no` (eg. `--no-problemo`). Named arguments with values (eg. `--tseries 800`, `-t 800`) are the other type. If the option has a type `Boolean` it will automatically be made into a boolean flag. Any other type results in a named argument that takes a value. For more information about how to properly set types to get the value you want, take a look at the [type check](https://github.com/gkz/type-check) and [levn](https://github.com/gkz/levn) pages. You can group single character arguments that use a single `-`, however all except the last must be boolean flags (which take no value). The last may be a boolean flag, or an argument which takes a value - eg. `-ba 2` is equivalent to `-b -a 2`. Positional arguments are all those values which do not fall under the above - they can be anywhere, not just at the end. For example, in `cmd -b one -a 2 two` where `b` is a boolean flag, and `a` has the type `Number`, there are two positional arguments, `one` and `two`. Everything after an `--` is positional, even if it looks like a named argument. You may optionally use `=` to separate option names from values, for example: `--count=2`. If you specify the option `NUM`, then any argument using a single `-` followed by a number will be valid and will set the value of `NUM`. Eg. `-2` will be parsed into `NUM: 2`. If duplicate named arguments are present, the last one will be taken. 
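To make the argument format concrete, here is a small hypothetical sketch (the `verbose` and `count` options and the file name are made up for illustration, not taken from the Optionator docs) showing equivalent ways of passing the same flags to `parse`:

```js
var optionator = require('optionator')({
    options: [{
        option: 'verbose',
        alias: 'v',
        type: 'Boolean',
        description: 'verbose output'
    }, {
        option: 'count',
        alias: 'c',
        type: 'Int',
        description: 'number of things'
    }]
});

// boolean flag, named argument with a value, and a positional argument
optionator.parse('--verbose --count 2 input.txt');
// => {verbose: true, count: 2, _: ['input.txt']}

// grouped short flags: all but the last must be boolean flags
optionator.parse('-vc 2 input.txt');
// => {verbose: true, count: 2, _: ['input.txt']}

// `=` separator and the `no-` prefix for boolean flags
optionator.parse('--no-verbose --count=3');
// => {verbose: false, count: 3, _: []}
```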
## Technical About `optionator` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It uses [levn](https://github.com/gkz/levn) to cast arguments to their specified type, and uses [type-check](https://github.com/gkz/type-check) to validate values. It also uses the [prelude.ls](http://preludels.com/) library. # inflight Add callbacks to requests in flight to avoid async duplication ## USAGE ```javascript var inflight = require('inflight') // some request that does some stuff function req(key, callback) { // key is any random string. like a url or filename or whatever. // // will return either a falsey value, indicating that the // request for this key is already in flight, or a new callback // which when called will call all callbacks passed to inflight // with the same key callback = inflight(key, callback) // If we got a falsey value back, then there's already a req going if (!callback) return // this is where you'd fetch the url or whatever // callback is also once()-ified, so it can safely be assigned // to multiple events etc. First call wins. setTimeout(function() { callback(null, key) }, 100) } // only assigns a single setTimeout // when it dings, all cbs get called req('foo', cb1) req('foo', cb2) req('foo', cb3) req('foo', cb4) ``` # type-check [![Build Status](https://travis-ci.org/gkz/type-check.png?branch=master)](https://travis-ci.org/gkz/type-check) <a name="type-check" /> `type-check` is a library which allows you to check the types of JavaScript values at runtime with a Haskell-like type syntax. It is great for checking external input, for testing, or even for adding a bit of safety to your internal code. It is a major component of [levn](https://github.com/gkz/levn). MIT license. Version 0.4.0. Check out the [demo](http://gkz.github.io/type-check/). For updates on `type-check`, [follow me on twitter](https://twitter.com/gkzahariev).
npm install type-check ## Quick Examples ```js // Basic types: var typeCheck = require('type-check').typeCheck; typeCheck('Number', 1); // true typeCheck('Number', 'str'); // false typeCheck('Error', new Error); // true typeCheck('Undefined', undefined); // true // Comment typeCheck('count::Number', 1); // true // One type OR another type: typeCheck('Number | String', 2); // true typeCheck('Number | String', 'str'); // true // Wildcard, matches all types: typeCheck('*', 2) // true // Array, all elements of a single type: typeCheck('[Number]', [1, 2, 3]); // true typeCheck('[Number]', [1, 'str', 3]); // false // Tuples, or fixed length arrays with elements of different types: typeCheck('(String, Number)', ['str', 2]); // true typeCheck('(String, Number)', ['str']); // false typeCheck('(String, Number)', ['str', 2, 5]); // false // Object properties: typeCheck('{x: Number, y: Boolean}', {x: 2, y: false}); // true typeCheck('{x: Number, y: Boolean}', {x: 2}); // false typeCheck('{x: Number, y: Maybe Boolean}', {x: 2}); // true typeCheck('{x: Number, y: Boolean}', {x: 2, y: false, z: 3}); // false typeCheck('{x: Number, y: Boolean, ...}', {x: 2, y: false, z: 3}); // true // A particular type AND object properties: typeCheck('RegExp{source: String, ...}', /re/i); // true typeCheck('RegExp{source: String, ...}', {source: 're'}); // false // Custom types: var opt = {customTypes: {Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; }}}}; typeCheck('Even', 2, opt); // true // Nested: var type = '{a: (String, [Number], {y: Array, ...}), b: Error{message: String, ...}}' typeCheck(type, {a: ['hi', [1, 2, 3], {y: [1, 'ms']}], b: new Error('oh no')}); // true ``` Check out the [type syntax format](#syntax) and [guide](#guide). ## Usage `require('type-check');` returns an object that exposes four properties. `VERSION` is the current version of the library as a string. `typeCheck`, `parseType`, and `parsedTypeCheck` are functions. ```js // typeCheck(type, input, options); typeCheck('Number', 2); // true // parseType(type); var parsedType = parseType('Number'); // object // parsedTypeCheck(parsedType, input, options); parsedTypeCheck(parsedType, 2); // true ``` ### typeCheck(type, input, options) `typeCheck` checks a JavaScript value `input` against `type` written in the [type format](#type-format) (and taking account the optional `options`) and returns whether the `input` matches the `type`. ##### arguments * type - `String` - the type written in the [type format](#type-format) which to check against * input - `*` - any JavaScript value, which is to be checked against the type * options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types) ##### returns `Boolean` - whether the input matches the type ##### example ```js typeCheck('Number', 2); // true ``` ### parseType(type) `parseType` parses string `type` written in the [type format](#type-format) into an object representing the parsed type. ##### arguments * type - `String` - the type written in the [type format](#type-format) which to parse ##### returns `Object` - an object in the parsed type format representing the parsed type ##### example ```js parseType('Number'); // [{type: 'Number'}] ``` ### parsedTypeCheck(parsedType, input, options) `parsedTypeCheck` checks a JavaScript value `input` against parsed `type` in the parsed type format (and taking account the optional `options`) and returns whether the `input` matches the `type`. 
Use this in conjunction with `parseType` if you are going to use a type more than once. ##### arguments * type - `Object` - the type in the parsed type format which to check against * input - `*` - any JavaScript value, which is to be checked against the type * options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types) ##### returns `Boolean` - whether the input matches the type ##### example ```js parsedTypeCheck([{type: 'Number'}], 2); // true var parsedType = parseType('String'); parsedTypeCheck(parsedType, 'str'); // true ``` <a name="type-format" /> ## Type Format ### Syntax White space is ignored. The root node is a __Types__. * __Identifier__ = `[\$\w]+` - a group of any lower or upper case letters, numbers, underscores, or dollar signs - eg. `String` * __Type__ = an `Identifier`, an `Identifier` followed by a `Structure`, just a `Structure`, or a wildcard `*` - eg. `String`, `Object{x: Number}`, `{x: Number}`, `Array{0: String, 1: Boolean, length: Number}`, `*` * __Types__ = optionally a comment (an `Identifier` followed by a `::`), optionally the identifier `Maybe`, one or more `Type`, separated by `|` - eg. `Number`, `String | Date`, `Maybe Number`, `Maybe Boolean | String` * __Structure__ = `Fields`, or a `Tuple`, or an `Array` - eg. `{x: Number}`, `(String, Number)`, `[Date]` * __Fields__ = a `{`, followed one or more `Field` separated by a comma `,` (trailing comma `,` is permitted), optionally an `...` (always preceded by a comma `,`), followed by a `}` - eg. `{x: Number, y: String}`, `{k: Function, ...}` * __Field__ = an `Identifier`, followed by a colon `:`, followed by `Types` - eg. `x: Date | String`, `y: Boolean` * __Tuple__ = a `(`, followed by one or more `Types` separated by a comma `,` (trailing comma `,` is permitted), followed by a `)` - eg `(Date)`, `(Number, Date)` * __Array__ = a `[` followed by exactly one `Types` followed by a `]` - eg. `[Boolean]`, `[Boolean | Null]` ### Guide `type-check` uses `Object.toString` to find out the basic type of a value. Specifically, ```js {}.toString.call(VALUE).slice(8, -1) {}.toString.call(true).slice(8, -1) // 'Boolean' ``` A basic type, eg. `Number`, uses this check. This is much more versatile than using `typeof` - for example, with `document`, `typeof` produces `'object'` which isn't that useful, and our technique produces `'HTMLDocument'`. You may check for multiple types by separating types with a `|`. The checker proceeds from left to right, and passes if the value is any of the types - eg. `String | Boolean` first checks if the value is a string, and then if it is a boolean. If it is none of those, then it returns false. Adding a `Maybe` in front of a list of multiple types is the same as also checking for `Null` and `Undefined` - eg. `Maybe String` is equivalent to `Undefined | Null | String`. You may add a comment to remind you of what the type is for by following an identifier with a `::` before a type (or multiple types). The comment is simply thrown out. The wildcard `*` matches all types. There are three types of structures for checking the contents of a value: 'fields', 'tuple', and 'array'. If used by itself, a 'fields' structure will pass with any type of object as long as it is an instance of `Object` and the properties pass - this allows for duck typing - eg. `{x: Boolean}`. To check if the properties pass, and the value is of a certain type, you can specify the type - eg. `Error{message: String}`. 
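As a brief, hypothetical illustration of the 'fields' structure described above (the particular values checked here are made up, but the expected results follow the rules in this guide):

```js
var typeCheck = require('type-check').typeCheck;

// duck typing: any object whose `x` property is a boolean
typeCheck('{x: Boolean}', {x: true});                       // true
typeCheck('{x: Boolean, ...}', {x: true, label: 'extra'});  // true (extra props allowed via `...`)

// properties must pass AND the value must be of the given type
typeCheck('Error{message: String}', new Error('oops'));     // true
typeCheck('Error{message: String}', {message: 'oops'});     // false (plain object, not an Error)
```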
If you want to make a field optional, you can simply use `Maybe` - eg. `{x: Boolean, y: Maybe String}` will still pass if `y` is undefined (or null). If you don't care if the value has properties beyond what you have specified, you can use the 'etc' operator `...` - eg. `{x: Boolean, ...}` will match an object with an `x` property that is a boolean, and with zero or more other properties. For an array, you must specify one or more types (separated by `|`) - it will pass for something of any length as long as each element passes the types provided - eg. `[Number]`, `[Number | String]`. A tuple checks for a fixed number of elements, each of a potentially different type. Each element is separated by a comma - eg. `(String, Number)`. An array and tuple structure check that the value is of type `Array` by default, but if another type is specified, they will check for that instead - eg. `Int32Array[Number]`. You can use the wildcard `*` to search for any type at all. Check out the [type precedence](https://github.com/zaboco/type-precedence) library for type-check. ## Options Options is an object. It is an optional parameter to the `typeCheck` and `parsedTypeCheck` functions. The only current option is `customTypes`. <a name="custom-types" /> ### Custom Types __Example:__ ```js var options = { customTypes: { Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; } } } }; typeCheck('Even', 2, options); // true typeCheck('Even', 3, options); // false ``` `customTypes` allows you to set up custom types for validation. The value of this is an object. The keys of the object are the types you will be matching. Each value of the object will be an object having a `typeOf` property - a string, and `validate` property - a function. The `typeOf` property is the type the value should be (optional - if not set only `validate` will be used), and `validate` is a function which should return true if the value is of that type. `validate` receives one parameter, which is the value that we are checking. ## Technical About `type-check` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It also uses the [prelude.ls](http://preludels.com/) library. discontinuous-range =================== ``` DiscontinuousRange(1, 10).subtract(4, 6); // [ 1-3, 7-10 ] ``` [![Build Status](https://travis-ci.org/dtudury/discontinuous-range.png)](https://travis-ci.org/dtudury/discontinuous-range) this is a pretty simple module, but it exists to service another project so this'll be pretty lacking documentation. reading the test to see how this works may help. otherwise, here's an example that I think pretty much sums it up ###Example ``` var all_numbers = new DiscontinuousRange(1, 100); var bad_numbers = DiscontinuousRange(13).add(8).add(60,80); var good_numbers = all_numbers.clone().subtract(bad_numbers); console.log(good_numbers.toString()); //[ 1-7, 9-12, 14-59, 81-100 ] var random_good_number = good_numbers.index(Math.floor(Math.random() * good_numbers.length)); ``` # lodash.merge v4.6.2 The [Lodash](https://lodash.com/) method `_.merge` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.merge ``` In Node.js: ```js var merge = require('lodash.merge'); ``` See the [documentation](https://lodash.com/docs#merge) or [package source](https://github.com/lodash/lodash/blob/4.6.2-npm-packages/lodash.merge) for more details. 
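For convenience, a small usage sketch of the deep-merge behavior (mirroring the example in the Lodash documentation for `_.merge`):

```js
var merge = require('lodash.merge');

var object = { a: [{ b: 2 }, { d: 4 }] };
var other  = { a: [{ c: 3 }, { e: 5 }] };

// recursively merges own enumerable properties; array elements are merged by index
merge(object, other);
// => { a: [{ b: 2, c: 3 }, { d: 4, e: 5 }] }
```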
# ESLint Scope ESLint Scope is the [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) scope analyzer used in ESLint. It is a fork of [escope](http://github.com/estools/escope). ## Usage Install: ``` npm i eslint-scope --save ``` Example: ```js var eslintScope = require('eslint-scope'); var espree = require('espree'); var estraverse = require('estraverse'); var ast = espree.parse(code); var scopeManager = eslintScope.analyze(ast); var currentScope = scopeManager.acquire(ast); // global scope estraverse.traverse(ast, { enter: function(node, parent) { // do stuff if (/Function/.test(node.type)) { currentScope = scopeManager.acquire(node); // get current function scope } }, leave: function(node, parent) { if (/Function/.test(node.type)) { currentScope = currentScope.upper; // set to parent scope } // do stuff } }); ``` ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/eslint-scope/issues). ## Build Commands * `npm test` - run all linting and tests * `npm run lint` - run all linting ## License ESLint Scope is licensed under a permissive BSD 2-clause license. # hasurl [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] > Determine whether Node.js' native [WHATWG `URL`](https://nodejs.org/api/url.html#url_the_whatwg_url_api) implementation is available. ## Installation [Node.js](http://nodejs.org/) `>= 4` is required. To install, type this at the command line: ```shell npm install hasurl ``` ## Usage ```js const hasURL = require('hasurl'); if (hasURL()) { // supported } else { // fallback } ``` [npm-image]: https://img.shields.io/npm/v/hasurl.svg [npm-url]: https://npmjs.org/package/hasurl [travis-image]: https://img.shields.io/travis/stevenvachon/hasurl.svg [travis-url]: https://travis-ci.org/stevenvachon/hasurl # axios // helpers The modules found in `helpers/` should be generic modules that are _not_ specific to the domain logic of axios. These modules could theoretically be published to npm on their own and consumed by other modules or apps. Some examples of generic modules are things like: - Browser polyfills - Managing cookies - Parsing HTTP headers # Punycode.js [![Build status](https://travis-ci.org/bestiejs/punycode.js.svg?branch=master)](https://travis-ci.org/bestiejs/punycode.js) [![Code coverage status](http://img.shields.io/codecov/c/github/bestiejs/punycode.js.svg)](https://codecov.io/gh/bestiejs/punycode.js) [![Dependency status](https://gemnasium.com/bestiejs/punycode.js.svg)](https://gemnasium.com/bestiejs/punycode.js) Punycode.js is a robust Punycode converter that fully complies to [RFC 3492](https://tools.ietf.org/html/rfc3492) and [RFC 5891](https://tools.ietf.org/html/rfc5891). This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm: * [The C example code from RFC 3492](https://tools.ietf.org/html/rfc3492#appendix-C) * [`punycode.c` by _Markus W. 
Scherer_ (IBM)](http://opensource.apple.com/source/ICU/ICU-400.42/icuSources/common/punycode.c) * [`punycode.c` by _Ben Noordhuis_](https://github.com/bnoordhuis/punycode/blob/master/punycode.c) * [JavaScript implementation by _some_](http://stackoverflow.com/questions/183485/can-anyone-recommend-a-good-free-javascript-for-punycode-to-unicode-conversion/301287#301287) * [`punycode.js` by _Ben Noordhuis_](https://github.com/joyent/node/blob/426298c8c1c0d5b5224ac3658c41e7c2a3fe9377/lib/punycode.js) (note: [not fully compliant](https://github.com/joyent/node/issues/2072)) This project was [bundled](https://github.com/joyent/node/blob/master/lib/punycode.js) with Node.js from [v0.6.2+](https://github.com/joyent/node/compare/975f1930b1...61e796decc) until [v7](https://github.com/nodejs/node/pull/7941) (soft-deprecated). The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see [v1.4.1](https://github.com/bestiejs/punycode.js/releases/tag/v1.4.1). ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install punycode --save ``` In [Node.js](https://nodejs.org/): ```js const punycode = require('punycode'); ``` ## API ### `punycode.decode(string)` Converts a Punycode string of ASCII symbols to a string of Unicode symbols. ```js // decode domain name parts punycode.decode('maana-pta'); // 'mañana' punycode.decode('--dqo34k'); // '☃-⌘' ``` ### `punycode.encode(string)` Converts a string of Unicode symbols to a Punycode string of ASCII symbols. ```js // encode domain name parts punycode.encode('mañana'); // 'maana-pta' punycode.encode('☃-⌘'); // '--dqo34k' ``` ### `punycode.toUnicode(input)` Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode. ```js // decode domain names punycode.toUnicode('xn--maana-pta.com'); // → 'mañana.com' punycode.toUnicode('xn----dqo34k.com'); // → '☃-⌘.com' // decode email addresses punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'); // → 'джумла@джpумлатест.bрфa' ``` ### `punycode.toASCII(input)` Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII. ```js // encode domain names punycode.toASCII('mañana.com'); // → 'xn--maana-pta.com' punycode.toASCII('☃-⌘.com'); // → 'xn----dqo34k.com' // encode email addresses punycode.toASCII('джумла@джpумлатест.bрфa'); // → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq' ``` ### `punycode.ucs2` #### `punycode.ucs2.decode(string)` Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16. ```js punycode.ucs2.decode('abc'); // → [0x61, 0x62, 0x63] // surrogate pair for U+1D306 TETRAGRAM FOR CENTRE: punycode.ucs2.decode('\uD834\uDF06'); // → [0x1D306] ``` #### `punycode.ucs2.encode(codePoints)` Creates a string based on an array of numeric code point values. 
```js punycode.ucs2.encode([0x61, 0x62, 0x63]); // → 'abc' punycode.ucs2.encode([0x1D306]); // → '\uD834\uDF06' ``` ### `punycode.version` A string representing the current Punycode.js version number. ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License Punycode.js is available under the [MIT](https://mths.be/mit) license. # is-glob [![NPM version](https://img.shields.io/npm/v/is-glob.svg?style=flat)](https://www.npmjs.com/package/is-glob) [![NPM monthly downloads](https://img.shields.io/npm/dm/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![NPM total downloads](https://img.shields.io/npm/dt/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![Build Status](https://img.shields.io/github/workflow/status/micromatch/is-glob/dev)](https://github.com/micromatch/is-glob/actions) > Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a better user experience. Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-glob ``` You might also be interested in [is-valid-glob](https://github.com/jonschlinkert/is-valid-glob) and [has-glob](https://github.com/jonschlinkert/has-glob). ## Usage ```js var isGlob = require('is-glob'); ``` ### Default behavior **True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js'); isGlob('*.js'); isGlob('**/abc.js'); isGlob('abc/*.js'); isGlob('abc/(aaa|bbb).js'); isGlob('abc/[a-z].js'); isGlob('abc/{a,b}.js'); //=> true ``` Extglobs ```js isGlob('abc/@(a).js'); isGlob('abc/!(a).js'); isGlob('abc/+(a).js'); isGlob('abc/*(a).js'); isGlob('abc/?(a).js'); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('abc/\\@(a).js'); isGlob('abc/\\!(a).js'); isGlob('abc/\\+(a).js'); isGlob('abc/\\*(a).js'); isGlob('abc/\\?(a).js'); isGlob('\\!foo.js'); isGlob('\\*.js'); isGlob('\\*\\*/abc.js'); isGlob('abc/\\*.js'); isGlob('abc/\\(aaa|bbb).js'); isGlob('abc/\\[a-z].js'); isGlob('abc/\\{a,b}.js'); //=> false ``` Patterns that do not have glob patterns return `false`: ```js isGlob('abc.js'); isGlob('abc/def/ghi.js'); isGlob('foo.js'); isGlob('abc/@.js'); isGlob('abc/+.js'); isGlob('abc/?.js'); isGlob(); isGlob(null); //=> false ``` Arrays are also `false` (If you want to check if an array has a glob pattern, use [has-glob](https://github.com/jonschlinkert/has-glob)): ```js isGlob(['**/*.js']); isGlob(['foo.js']); //=> false ``` ### Option strict When `options.strict === false` the behavior is less strict in determining if a pattern is a glob. Meaning that some patterns that would return `false` may return `true`. This is done so that matching libraries like [micromatch](https://github.com/micromatch/micromatch) have a chance at determining if the pattern is a glob or not. 
**True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js', {strict: false}); isGlob('*.js', {strict: false}); isGlob('**/abc.js', {strict: false}); isGlob('abc/*.js', {strict: false}); isGlob('abc/(aaa|bbb).js', {strict: false}); isGlob('abc/[a-z].js', {strict: false}); isGlob('abc/{a,b}.js', {strict: false}); //=> true ``` Extglobs ```js isGlob('abc/@(a).js', {strict: false}); isGlob('abc/!(a).js', {strict: false}); isGlob('abc/+(a).js', {strict: false}); isGlob('abc/*(a).js', {strict: false}); isGlob('abc/?(a).js', {strict: false}); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('\\!foo.js', {strict: false}); isGlob('\\*.js', {strict: false}); isGlob('\\*\\*/abc.js', {strict: false}); isGlob('abc/\\*.js', {strict: false}); isGlob('abc/\\(aaa|bbb).js', {strict: false}); isGlob('abc/\\[a-z].js', {strict: false}); isGlob('abc/\\{a,b}.js', {strict: false}); //=> false ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [assemble](https://www.npmjs.com/package/assemble): Get the rocks out of your socks! Assemble makes you fast at creating web projects… [more](https://github.com/assemble/assemble) | [homepage](https://github.com/assemble/assemble "Get the rocks out of your socks! Assemble makes you fast at creating web projects. Assemble is used by thousands of projects for rapid prototyping, creating themes, scaffolds, boilerplates, e-books, UI components, API documentation, blogs, building websit") * [base](https://www.npmjs.com/package/base): Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks | [homepage](https://github.com/node-base/base "Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks") * [update](https://www.npmjs.com/package/update): Be scalable! Update is a new, open source developer framework and CLI for automating updates… [more](https://github.com/update/update) | [homepage](https://github.com/update/update "Be scalable! Update is a new, open source developer framework and CLI for automating updates of any kind in code projects.") * [verb](https://www.npmjs.com/package/verb): Documentation generator for GitHub projects. Verb is extremely powerful, easy to use, and is used… [more](https://github.com/verbose/verb) | [homepage](https://github.com/verbose/verb "Documentation generator for GitHub projects. 
Verb is extremely powerful, easy to use, and is used on hundreds of projects of all sizes to generate everything from API docs to readmes.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 47 | [jonschlinkert](https://github.com/jonschlinkert) | | 5 | [doowb](https://github.com/doowb) | | 1 | [phated](https://github.com/phated) | | 1 | [danhper](https://github.com/danhper) | | 1 | [paulmillr](https://github.com/paulmillr) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on March 27, 2019._ ### Esrecurse [![Build Status](https://travis-ci.org/estools/esrecurse.svg?branch=master)](https://travis-ci.org/estools/esrecurse) Esrecurse ([esrecurse](https://github.com/estools/esrecurse)) is [ECMAScript](https://www.ecma-international.org/publications/standards/Ecma-262.htm) recursive traversing functionality. ### Example Usage The following code will output all variables declared at the root of a file. ```javascript esrecurse.visit(ast, { XXXStatement: function (node) { this.visit(node.left); // do something... this.visit(node.right); } }); ``` We can use `Visitor` instance. ```javascript var visitor = new esrecurse.Visitor({ XXXStatement: function (node) { this.visit(node.left); // do something... this.visit(node.right); } }); visitor.visit(ast); ``` We can inherit `Visitor` instance easily. ```javascript class Derived extends esrecurse.Visitor { constructor() { super(null); } XXXStatement(node) { } } ``` ```javascript function DerivedVisitor() { esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */); } util.inherits(DerivedVisitor, esrecurse.Visitor); DerivedVisitor.prototype.XXXStatement = function (node) { this.visit(node.left); // do something... this.visit(node.right); }; ``` And you can invoke default visiting operation inside custom visit operation. ```javascript function DerivedVisitor() { esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */); } util.inherits(DerivedVisitor, esrecurse.Visitor); DerivedVisitor.prototype.XXXStatement = function (node) { // do something... this.visitChildren(node); }; ``` The `childVisitorKeys` option does customize the behaviour of `this.visitChildren(node)`. We can use user-defined node types. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { // Extending the existing traversing rules. childVisitorKeys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } } ); ``` We can use the `fallback` option as well. If the `fallback` option is `"iteration"`, `esrecurse` would visit all enumerable properties of unknown nodes. Please note circular references cause the stack overflow. AST might have circular references in additional properties for some purpose (e.g. `node.parent`). 
```javascript esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { fallback: 'iteration' } ); ``` If the `fallback` option is a function, `esrecurse` calls this function to determine the enumerable properties of unknown nodes. Please note circular references cause the stack overflow. AST might have circular references in additional properties for some purpose (e.g. `node.parent`). ```javascript esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { fallback: function (node) { return Object.keys(node).filter(function(key) { return key !== 'argument' }); } } ); ``` ### License Copyright (C) 2014 [Yusuke Suzuki](https://github.com/Constellation) (twitter: [@Constellation](https://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
jeremiah66_NFT-surety
CONTRACT cargo.toml src lib.rs tests sim main.rs PLAN.md Package-lock.json README.md near dev dev account.env shared test test.near.json shared-test-staging test.near.json package.json src config.js main.js test-setup.js test.js
Counter example in Rust ================================= ``` near keys contract.spread.testnet - if you delete the key here, the contract becomes locked and can no longer be changed export CONTRACT_NAME=contract.spread.testnet echo $CONTRACT_NAME near view contract.spread.testnet get_num '{}' near call contract.spread.testnet increment '{}' --accountId spread.testnet --amount 2 near view contract.spread.testnet get_users '{}' near dev-deploy -f out/main.wasm - deploy a new contract so you don't have to do a migration export CONTRACT_NAME=dev-1644433678195-58699302756227 near view dev-1644433678195-58699302756227 get_users '{}' near call dev-1644433678195-58699302756227 new '{}' --accountId spread.testnet near call dev-1644433678195-58699302756227 make_new_insurance '{"contract_address": "contract_addressABC", "nft_id": "nft_id123", "image_hash": "HGJGJFGHF"}' --accountId spread.testnet --amount 1.5 near call dev-1644433678195-58699302756227 new '{}' --accountId spread.testnet near view dev-1644875835620-48624210077592 get_insurance_data '{"contract_address": "contract_addressABC", "nft_id": "nft_id123"}' --accountId spread.testnet near view dev-1644875835620-48624210077592 get_hash_image_nft '{"contract_address": "contract_addressABC", "nft_id": "nft_id123"}' --accountId spread.testnet ``` [![Open in Gitpod!](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/near-examples/rust-counter) <!-- MAGIC COMMENT: DO NOT DELETE! Everything above this line is hidden on NEAR Examples page --> ## Description This contract implements a simple counter backed by storage on the blockchain. The contract in `contract/src/lib.rs` provides methods to increment/decrement the counter, get its current value, or reset it. The plus and minus buttons increase and decrease the value correspondingly. When the L button is toggled, a little light turns on, just for fun. The RS button is for reset. The LE and RE buttons let the robot wink at you. ## To Run Open in the Gitpod link above or clone the repository. ``` git clone https://github.com/near-examples/rust-counter ``` ## Setup [Or skip to Login if in Gitpod](#login) Install dependencies: ``` yarn ``` If you don't have `Rust` installed, complete the following 3 steps: 1) Install Rustup by running: ``` curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` ([Taken from official installation guide](https://www.rust-lang.org/tools/install)) 2) Configure your current shell by running: ``` source $HOME/.cargo/env ``` 3) Add wasm target to your toolchain by running: ``` rustup target add wasm32-unknown-unknown ``` Next, make sure you have `near-cli` by running: ``` near --version ``` If you need to install `near-cli`: ``` npm install near-cli -g ``` ## Login If you do not have a NEAR account, please create one with [NEAR Wallet](https://wallet.testnet.near.org). In the project root, login with `near-cli` by following the instructions after this command: ``` near login ``` Modify the top of `src/config.js`, changing the `CONTRACT_NAME` to be the NEAR account that was just used to log in. ```javascript … const CONTRACT_NAME = 'YOUR_ACCOUNT_NAME_HERE'; /* TODO: fill this in! */ … ``` Start the example!
``` yarn start ``` ## To Test ``` cd contract cargo test -- --nocapture ``` ## To Explore - `contract/src/lib.rs` for the contract code - `src/index.html` for the front-end HTML - `src/main.js` for the JavaScript front-end code and how to integrate contracts - `src/test.js` for the JS tests for the contract ## To Build the Documentation ``` cd contract cargo doc --no-deps --open ``` ## sample https://1234-nearexample-rustcounter-dw48kfk1bl1.ws-eu47.gitpod.io/
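For a rough idea of how a front-end like `src/main.js` talks to this contract, here is a minimal Node.js sketch using `near-api-js`. It is only an illustration, assuming `near-api-js` is installed, that `src/config.js` exports the usual `getConfig(env)` helper used by the NEAR examples, and that the contract exposes the `get_num`/`increment`/`decrement`/`reset` methods described above; the account name is a placeholder.

```js
const nearAPI = require('near-api-js');
const getConfig = require('./src/config'); // assumed to export getConfig(env)

async function main() {
  const nearConfig = getConfig('development');

  // reuse the credentials saved locally by `near login`
  const keyStore = new nearAPI.keyStores.UnencryptedFileSystemKeyStore(
    `${process.env.HOME}/.near-credentials`
  );

  const near = await nearAPI.connect({ ...nearConfig, keyStore });
  const account = await near.account('YOUR_ACCOUNT_NAME_HERE');

  const counter = new nearAPI.Contract(account, nearConfig.contractName, {
    viewMethods: ['get_num'],
    changeMethods: ['increment', 'decrement', 'reset'],
  });

  console.log('current value:', await counter.get_num());
  await counter.increment();
  console.log('after increment:', await counter.get_num());
}

main().catch(console.error);
```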
MetacraftDAO_build-token-contract
Cargo.toml README.md build.sh dev_deploy.sh src lib.rs test.sh
# Commands Use `./build.sh` to build. [Optional] If you want to deploy a new dev contract, delete the neardev/ folder. Then use `source ./dev_deploy.sh` to deploy the dev contract. Example initialization: `near call $CONTRACT_NAME new '{"owner_id": "katesona6.testnet", "total_supply": "10000000000000"}' --accountId $CONTRACT_NAME` to initialize the dev contract.
makerst_sample--nearly-neighbors
Cargo.toml README.md _config.yml as-pect.config.js asconfig.json package.json simulation Cargo.toml src factory.rs lib.rs project.rs proposal.rs src as-pect.d.ts as_types.d.ts factory __tests__ index.unit.spec.ts asconfig.json assembly index.ts project __tests__ index.unit.spec.ts asconfig.json assembly index.ts proposal __tests__ index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts
# Nearly Neighbors A family of smart contracts developed for NEAR Protocol to enable crowd-sourced civic development. Think Kickstarter for neighborhood projects. ## Concept The contracts provided here enable users to propose neighborhood development projects and crowd-source funding for them. Think of it like Kickstarter, but instead of funding your roommate's sister's math rock band, you'd propose and fund projects like a new local park, grocery store, or community center. And the whole thing is powered by the NEAR protocol, so identity and financial tools are built in. ### Example Story For the sake of this explanation, we'll assume three users: Alice, Bob, and Carol. 1. Alice notices that there isn't a good grocery store in her neighborhood, so she creates a new [proposal](#proposal) and sets a target funding goal of 10 NEAR tokens. 2. Bob lives nearby and also would like to have fresh produce, so he pledges 5 NEAR tokens to Alice's proposal with a geographic radius of 1km from his home. 3. Carol lives farther away, but she would still like to have a grocery store even if it is a longer walk, so she pledges another 5 NEAR to Alice's proposal with an allowed radius of 5km. 4. Now that the proposal is _fully funded_, it is transformed into a [project](#project). A new project account is created, and Bob and Carol's pledged NEAR tokens are transferred over. This project's geographic location is set to the area of overlap between Bob and Carol's specified radii. 5. Alice, as the project owner, now has access to the project funds to hire a contractor and build her grocery store! - [Getting Started](#getting-started) - [Installation](#installation) - [Commands](#commands) - [Who This Is For](#who-this-is-for) - [UI Wireframes](#ui-wireframes) - [File Structure](#file-structure) - [Contracts](#contracts) - [Proposal](#proposal) - [Project](#project) - [Factory](#factory) - [Deploying](#deploying) - [Contributing](#contributing) - [Future Development](#future-development) - [Key Contributors](#key-contributors) --- ## Getting Started This repository is an example of a **dApp seed** project. **dApp seed** projects provide a stable foundation for developers to build a distributed application on top of. This includes: - One or more [smart contracts](https://docs.near.org/docs/roles/developer/contracts/intro) - [Unit tests](https://docs.near.org/docs/roles/developer/contracts/test-contracts#unit-tests) and [simulation tests](https://docs.near.org/docs/roles/developer/contracts/test-contracts#simulation-tests) for the contract(s) - Wireframes and/or mockups for a potential dApp UI - Utilities for building, testing, and deploying contracts (facilitated by the [NEAR CLI](https://docs.near.org/docs/development/near-cli)) ### Installation 1. clone this repo 2. run `yarn install` (or `npm install`) 3. run `yarn build` (or `npm run build`) 4. run `yarn test` (or `npm run test`) 5. explore the contents of `src/` See below for more convenience scripts ... ### Commands **Compile source to WebAssembly** ```sh yarn build # asb --target debug yarn build:release # asb ``` **Run unit tests** ```sh yarn test:unit # asp --verbose --nologo -f unit.spec ``` **Run simulation tests** These tests can be run from within VSCode (or any Rust-compatible IDE) or from the command line. 
_NOTE: Rust is required_ ```sh yarn test:simulate # yarn build:release && cargo test -- --nocapture ``` **Run all tests** ```sh yarn test # yarn test:unit && test:simulate ``` ### Who This Is For - Novice/intermediate Web3 devs looking for projects to practice on - Developers new to the NEAR Protocol looking for a learning sandbox - NEAR developers looking for inspiration ## UI Wireframes More wireframes can be found in the `wireframes/` folder. Here are some examples showing how we envision the basic user interface elements. **Create a Proposal** ![create-proposal](wireframes/create_proposal.png) **Supporting a Proposal** ![support-project-proposal](wireframes/support_proposal_modal.png) **Map of Projects** ![project-map](wireframes/project_map.png) ## File Structure This contract is designed to be self-contained and so may be extracted into your own projects and used as a starting point. If you do decide to use this code, please pay close attention to all top level files including: - NodeJS artifacts - `package.json`: JavaScript project dependencies and several useful scripts - AssemblyScript artifacts - `asconfig.json`: AssemblyScript project (and per contract) configuration including workspace configuration - `as-pect.config.js`: as-pect unit testing dependency - `src/tsconfig.json`: load TypeScript types - `src/as_types.ts`: AssemblyScript types header file - `src/as-pect.d.ts`: as-pect unit testing types header file - Rust artifacts - `Cargo.toml`: Rust project dependencies and configuration - `Cargo.lock`: version-locked list of Rust project dependencies The core file structure: ``` nearly-neighbors ├── README.md <-- this file ├── build <-- compiled contracts (WASM) │   ├── debug │   └── release ├── simulation │   ├── Cargo.toml <-- simulation test config │   └── src <-- simulation tests │   ├── factory.rs │   ├── lib.rs │   ├── project.rs │   └── proposal.rs ├── src │   ├── factory <-- factory contract with: │   │   ├── asconfig.json │   │   ├── assembly <-- source code │   │   │   └── index.ts │   │   └── __tests__ <-- unit tests │   │   └── index.unit.spec.ts │   ├── project <-- project contract with: │   │   ├── asconfig.json │   │   ├── assembly <-- source code │   │   │   └── index.ts │   │   └── __tests__ <-- unit tests │   │   └── index.unit.spec.ts │   ├── proposal <-- proposal contract with: │   │   ├── asconfig.json │   │   ├── assembly <-- source code │   │   │   └── index.ts │   │   └── __tests__ <-- unit tests │   │   └── index.unit.spec.ts │   └── utils.ts └── wireframes <-- wireframe images ``` ## Contracts There are three contracts that make up this project. By breaking out the logic into multiple contracts, we are employing NEAR development best practices which will make the code more secure (through rigorous testing of separated concerns) and robust (enabling complex features through [cross-contract calls](https://docs.near.org/docs/tutorials/how-to-write-contracts-that-talk-to-each-other)). ### Proposal The proposal contract represents a user's proposed idea for a development project. Proposals are created by users (mediated by the [factory](#factory)) and hold data like: - Project details (what, where, why) - Funding parameters (target amount, minimum pledge, due date) The proposal accepts funding from _supporters_. If proposals are fully funded by their due date, then they are closed and converted to a [project](#project) (with all funds transferred to the new project's account). 
If proposals do not meet their funding goals, then they are closed and all funds are returned to the supporters. ### Project The project contract represents a fully-funded proposal. It is managed by a _project owner_, who is authorized to access the project's NEAR tokens so that they can put those funds into use by actually executing on the real-world project. Projects are created automatically by the [factory](#factory) from a fully-funded [proposal](#proposal). Projects maintain a reference to their original proposal for proper record-keeping. Projects track their own real-world progress by reporting on key stats like: - Amount of funds used - % progress towards completion ### Factory The factory is a behind-the-scenes contract which takes care of the creation and setup of [proposals](#proposal) and [projects](#project). Instead of human users creating proposal and project contracts directly, they instead send requests to the factory which handles the necessary tasks for them. This is a pattern you'll see frequently in NEAR (and other blockchain) development: designating a contract with the responsibility for managing the lifecycle of other contracts. It helps abstract out the routine tasks of contract initialization and setup, limiting tedious user interactions and thus avoiding potential for user error. ## Deploying TODO: Add referral to resources for deploying ## Contributing There are two main ways you can contribute to this project: 1. **Build off of it**: we made this so that developers like you can build dApps more quickly and easily. Try building out a Web3 app on top of the provided [contracts](#contracts), using the wireframes as your guide. 2. **Enhance this dApp seed**: if you find a bug or an opportunity to enhance this repository, please submit an [issue](https://github.com/Learn-NEAR/nearly-neighbors/issues) and/or open a [pull request](https://github.com/Learn-NEAR/nearly-neighbors/pulls). Interested in creating your own **dApp seed** and earning rewards for your efforts? Learn more: [TODO: ADD LINK / MORE COPY]. ### Future Development Some ideas for future feature development: - Heatmaps showing the concentration of funding in particular geographic areas - Notifications for proposal/project owners and supporters - Algorithm for identifying ideal locations for a project, weighting the locations specified by supporters with their funding amount (i.e. more funding == more likely to use specified location) ### Key Contributors - [Sherif Abushadi - @amgando](https://github.com/amgando) - [Tanner Welsh - @tannerwelsh](https://github.com/tannerwelsh)
near_wasi-stub
.github ISSUE_TEMPLATE BOUNTY.yml README.md build.sh run.sh wasi-stub.c
# WASI-STUB This tool takes a wasm file and replaces all `wasi_snapshot_preview1` imports with (stub) functions defined in the same module. This is useful when executing wasm in a sandbox environment where WASI is not available. ## Build First build binaryen with `cmake . && make -j`. Then: ``` ./build.sh ``` ## Use ``` ./run.sh file.wasm ``` The tool rewrites file.wasm in place.
phoenix-token_pnx_token
Cargo.toml README.md node_modules .package-lock.json base-x LICENSE.md README.md package.json src index.d.ts index.js bn.js CHANGELOG.md README.md lib bn.js package.json borsh .eslintrc.yml .travis.yml LICENSE-MIT.txt README.md borsh-ts .eslintrc.yml index.ts test .eslintrc.yml fuzz borsh-roundtrip.js transaction-example enums.d.ts enums.js key_pair.d.ts key_pair.js serialize.d.ts serialize.js signer.d.ts signer.js transaction.d.ts transaction.js serialize.test.js lib index.d.ts index.js package.json tsconfig.json bs58 CHANGELOG.md README.md index.js package.json capability Array.prototype.forEach.js Array.prototype.map.js Error.captureStackTrace.js Error.prototype.stack.js Function.prototype.bind.js Object.create.js Object.defineProperties.js Object.defineProperty.js Object.prototype.hasOwnProperty.js README.md arguments.callee.caller.js es5.js index.js lib CapabilityDetector.js definitions.js index.js package.json strict mode.js depd History.md Readme.md index.js lib browser index.js package.json error-polyfill README.md index.js lib index.js non-v8 Frame.js FrameStringParser.js FrameStringSource.js index.js prepareStackTrace.js unsupported.js v8.js package.json http-errors HISTORY.md README.md index.js node_modules depd History.md Readme.md index.js lib browser index.js compat callsite-tostring.js event-listener-count.js index.js package.json package.json inherits README.md inherits.js inherits_browser.js package.json js-sha256 CHANGELOG.md LICENSE.txt README.md build sha256.min.js index.d.ts package.json src sha256.js mustache CHANGELOG.md README.md mustache.js mustache.min.js package.json near-api-js README.md browser-exports.js dist near-api-js.js near-api-js.min.js lib account.d.ts account.js account_creator.d.ts account_creator.js account_multisig.d.ts account_multisig.js browser-connect.d.ts browser-connect.js browser-index.d.ts browser-index.js common-index.d.ts common-index.js connect.d.ts connect.js connection.d.ts connection.js constants.d.ts constants.js contract.d.ts contract.js generated rpc_error_schema.json index.d.ts index.js key_stores browser-index.d.ts browser-index.js browser_local_storage_key_store.d.ts browser_local_storage_key_store.js in_memory_key_store.d.ts in_memory_key_store.js index.d.ts index.js keystore.d.ts keystore.js merge_key_store.d.ts merge_key_store.js unencrypted_file_system_keystore.d.ts unencrypted_file_system_keystore.js near.d.ts near.js providers index.d.ts index.js json-rpc-provider.d.ts json-rpc-provider.js provider.d.ts provider.js res error_messages.d.ts error_messages.json signer.d.ts signer.js transaction.d.ts transaction.js utils enums.d.ts enums.js errors.d.ts errors.js exponential-backoff.d.ts exponential-backoff.js format.d.ts format.js index.d.ts index.js key_pair.d.ts key_pair.js network.d.ts network.js rpc_errors.d.ts rpc_errors.js serialize.d.ts serialize.js setup-node-fetch.d.ts setup-node-fetch.js web.d.ts web.js validators.d.ts validators.js wallet-account.d.ts wallet-account.js package.json node-fetch LICENSE.md README.md browser.js lib index.es.js index.js package.json o3 README.md index.js lib Class.js abstractMethod.js index.js package.json safe-buffer README.md index.d.ts index.js package.json setprototypeof README.md index.d.ts index.js package.json test index.js statuses HISTORY.md README.md codes.json index.js package.json text-encoding-utf-8 LICENSE.md README.md lib encoding.js encoding.lib.js package.json src encoding.js polyfill.js toidentifier HISTORY.md README.md index.js package.json tr46 index.js lib 
mappingTable.json package.json tweetnacl AUTHORS.md CHANGELOG.md PULL_REQUEST_TEMPLATE.md README.md nacl-fast.js nacl-fast.min.js nacl.d.ts nacl.js nacl.min.js package.json u3 README.md index.js lib cache.js eachCombination.js index.js package.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js public-api.js url-state-machine.js utils.js package.json | package-lock.json package.json src lib.rs target .rustc_info.json debug .fingerprint Inflector-d2148e4a37c14682 lib-inflector.json ahash-d65fcef89a4d232e lib-ahash.json ahash-e48ca6696c531141 lib-ahash.json aho-corasick-4814d8e10bca2401 lib-aho_corasick.json aho-corasick-a0937382ff8ed835 lib-aho_corasick.json autocfg-e3e3fa18282d92e3 lib-autocfg.json base64-1d1fa0a77822fd50 lib-base64.json base64-33f73ae567a16c19 lib-base64.json block-buffer-926e958c9ca14572 lib-block-buffer.json block-buffer-9a86c169c28aa746 lib-block-buffer.json block-padding-835a1d826697cd1a lib-block-padding.json block-padding-88cdfec2d062b79f lib-block-padding.json borsh-172f55070885e3ed lib-borsh.json borsh-cbe247b820451179 lib-borsh.json borsh-derive-38597ce61522308d lib-borsh-derive.json borsh-derive-internal-27359216778bfbb1 lib-borsh-derive-internal.json borsh-schema-derive-internal-40d07ef60f46a592 lib-borsh-schema-derive-internal.json bs58-7c02f0245922d07d lib-bs58.json bs58-edb79d29363168f0 lib-bs58.json byteorder-a8009d40ace471f8 lib-byteorder.json byteorder-f49457a38753d563 lib-byteorder.json cfg-if-5b9b3cb1bf3d00dc lib-cfg-if.json cfg-if-b5167c67a271ba5c lib-cfg-if.json cfg-if-c3580a45fb38904f lib-cfg-if.json cfg-if-d79d4230b67dff33 lib-cfg-if.json convert_case-82ddf0742915dbc2 lib-convert_case.json cpufeatures-73ca1b41945b5e9c lib-cpufeatures.json cpufeatures-976ecce42899b7ae lib-cpufeatures.json derive_more-31536f792cc1bba9 lib-derive_more.json digest-26920ea2be9bdf8d lib-digest.json digest-8a7e21b8ab925d62 lib-digest.json generic-array-555cf76d40225b69 lib-generic_array.json generic-array-8f511ee942ee3faf run-build-script-build-script-build.json generic-array-9f420af43724f360 lib-generic_array.json generic-array-fdd115df1579780f build-script-build-script-build.json hashbrown-8a40706889e5a369 lib-hashbrown.json hashbrown-a0751d02fb4b8484 lib-hashbrown.json hashbrown-b300a7d0fe31bfa6 lib-hashbrown.json hex-1370f99b551c04ee lib-hex.json hex-c1bceeff1683446c lib-hex.json indexmap-161c5a672f633d51 lib-indexmap.json indexmap-4d0734029afd1012 run-build-script-build-script-build.json indexmap-a1e5c78203d7ae4d build-script-build-script-build.json itoa-4d5d7f7e186c0f94 lib-itoa.json itoa-cfa4024c9bdc9940 lib-itoa.json keccak-4adafa28f298be01 lib-keccak.json keccak-95902e7fa835e531 lib-keccak.json lazy_static-54f0ec464d8b1ca3 lib-lazy_static.json lazy_static-bc2f47adaad103e9 lib-lazy_static.json libc-54efb337ac82c463 lib-libc.json libc-653d85e13f7f56c5 run-build-script-build-script-build.json libc-af5ccdb0c4b1e8a8 lib-libc.json libc-d6efd6de7f1db008 build-script-build-script-build.json memchr-58968f99e2942a99 run-build-script-build-script-build.json memchr-5af045cbff601e05 lib-memchr.json memchr-7a85b35aceb91ef0 lib-memchr.json memchr-e91501c3225c1377 build-script-build-script-build.json memory_units-4514568d388b0353 lib-memory_units.json memory_units-f62e7862f8968c91 lib-memory_units.json near-contract-standards-daacb699874e00c0 lib-near-contract-standards.json near-contract-standards-ffcf86d46bf1a703 lib-near-contract-standards.json near-primitives-core-7e48627dcedfdc6b 
lib-near-primitives-core.json near-primitives-core-cf32b04a1fa152e9 lib-near-primitives-core.json near-rpc-error-core-e23c66902120cdaf lib-near-rpc-error-core.json near-rpc-error-macro-f58fcad0bb65f9d6 lib-near-rpc-error-macro.json near-runtime-utils-473bca9262479918 lib-near-runtime-utils.json near-runtime-utils-b256469e1d458d8e lib-near-runtime-utils.json near-sdk-63e96e6dcb49f60f lib-near-sdk.json near-sdk-core-b78606d0d7cf3770 lib-near-sdk-core.json near-sdk-f9618951547e1100 lib-near-sdk.json near-sdk-macros-9ba44f85c3a43d3f lib-near-sdk-macros.json near-vm-errors-8557c2c03348f9d2 lib-near-vm-errors.json near-vm-errors-c8f0a7136695982d lib-near-vm-errors.json near-vm-logic-6a65c77594853982 lib-near-vm-logic.json near-vm-logic-d24ef7ae728ddd5f lib-near-vm-logic.json num-bigint-101aef2716dad5ee lib-num-bigint.json num-bigint-938bc37a47123fc3 build-script-build-script-build.json num-bigint-a4a25dceb60a6853 lib-num-bigint.json num-bigint-ec107ef8a4cf9153 run-build-script-build-script-build.json num-integer-41fd79b62aab9f0b run-build-script-build-script-build.json num-integer-5a0bb3acc13cb585 build-script-build-script-build.json num-integer-608e849467a16cb2 lib-num-integer.json num-integer-c593d2c37ca0a010 lib-num-integer.json num-rational-986f111fafd2978a run-build-script-build-script-build.json num-rational-bdced73fb01c8e5f lib-num-rational.json num-rational-c5ce5f8f936c3d71 lib-num-rational.json num-rational-dd2d1147d204ab3b build-script-build-script-build.json num-traits-3c47a0ba9679ff1b lib-num-traits.json num-traits-3d83e71a22558e2d build-script-build-script-build.json num-traits-90303ab66257ddd1 lib-num-traits.json num-traits-90d8d8116c45a755 run-build-script-build-script-build.json opaque-debug-0d7e2328cebd6cca lib-opaque-debug.json opaque-debug-535f570a2b9e895f lib-opaque-debug.json phoenix_token-02c79373934f323f lib-phoenix_token.json phoenix_token-469568d40aed30dc lib-phoenix_token.json phoenix_token-6d7d4aeffaf854a7 test-bin-phoenix_token.json phoenix_token-6d83260fa6cba77f test-lib-phoenix_token.json phoenix_token-72bba978fd8007db test-lib-phoenix_token.json phoenix_token-8274cb401ccf7929 lib-phoenix_token.json phoenix_token-83948a66b60dd734 test-lib-phoenix_token.json phoenix_token-b39b433484447e6e test-bin-phoenix_token.json phoenix_token-d84cd6229607f3b3 bin-phoenix_token.json proc-macro-crate-3698f2278bc06923 lib-proc-macro-crate.json proc-macro2-0687e631b58b584a run-build-script-build-script-build.json proc-macro2-a43938dac13af6ab build-script-build-script-build.json proc-macro2-f8441f330c6f5955 lib-proc-macro2.json quote-0241397e45631a95 lib-quote.json regex-567d3bb5e25ff96b lib-regex.json regex-d0fa6d6ad1ccfd8c lib-regex.json regex-syntax-218be71617600397 lib-regex-syntax.json regex-syntax-256a0851c3b5d35e lib-regex-syntax.json ryu-050e098a2dda45ef lib-ryu.json ryu-64132e495642f4c7 lib-ryu.json serde-264b2ef6a13b611e lib-serde.json serde-30b2c95a2593f43b lib-serde.json serde-b102d1ea1a7f59e2 build-script-build-script-build.json serde-e0bd6cf9ad2bdf0f run-build-script-build-script-build.json serde_derive-0bb40d09e1e36c41 build-script-build-script-build.json serde_derive-2d8841a8192883ce run-build-script-build-script-build.json serde_derive-6b1a8a3ea7e1055b lib-serde_derive.json serde_json-03d831d8946844db lib-serde_json.json serde_json-6f6dd21a11d94b84 run-build-script-build-script-build.json serde_json-787a23e99b6ef226 build-script-build-script-build.json serde_json-9a66588637712747 lib-serde_json.json serde_json-cb513bd4d27b117e build-script-build-script-build.json 
serde_json-eb6013eef5903132 run-build-script-build-script-build.json serde_json-ff5640a399c19887 lib-serde_json.json sha2-b47be6867c72bc4d lib-sha2.json sha2-b94118c7d0a64ef5 lib-sha2.json sha3-566c0976f168a288 lib-sha3.json sha3-6e27d347bc0a5009 lib-sha3.json syn-6bde7c3493323213 run-build-script-build-script-build.json syn-9af385bdbff15c52 lib-syn.json syn-d6cbdee0883f2c5c build-script-build-script-build.json toml-0d6f4ee19cc46595 lib-toml.json typenum-30844e8a3a841e5c lib-typenum.json typenum-35a6aa278c267896 lib-typenum.json typenum-c5f92657527e1b97 build-script-build-script-main.json typenum-e8acf6f6539df927 run-build-script-build-script-main.json unicode-xid-49139481c474d2cb lib-unicode-xid.json version_check-69ef4aa67c5ca43a lib-version_check.json wee_alloc-2a043c5a3b82fa5a run-build-script-build-script-build.json wee_alloc-3c5de2d60bf3159f lib-wee_alloc.json wee_alloc-57a318be28717cee build-script-build-script-build.json wee_alloc-bb0fb8a695a7ee51 lib-wee_alloc.json build num-bigint-ec107ef8a4cf9153 out radix_bases.rs typenum-e8acf6f6539df927 out consts.rs op.rs tests.rs wee_alloc-2a043c5a3b82fa5a out wee_alloc_static_array_backend_size_bytes.txt release .fingerprint Inflector-7ef32d70afaac09f lib-inflector.json autocfg-063ee82a28e313a5 lib-autocfg.json borsh-derive-ce2ce45afe7dae76 lib-borsh-derive.json borsh-derive-internal-65baf608b9651b7d lib-borsh-derive-internal.json borsh-schema-derive-internal-8b3ff5bafefac407 lib-borsh-schema-derive-internal.json convert_case-eb727afdf64078c8 lib-convert_case.json derive_more-b91d31e275b067d2 lib-derive_more.json generic-array-9a874abd13932cba build-script-build-script-build.json hashbrown-d9aadeafccd6bb9b lib-hashbrown.json indexmap-6d475ca9463402fa run-build-script-build-script-build.json indexmap-8857ac678bb5b23e build-script-build-script-build.json indexmap-fcc43699fbdeb348 lib-indexmap.json itoa-3382ef129a5f06d6 lib-itoa.json memchr-c552c9c807dc2ab3 build-script-build-script-build.json near-rpc-error-core-96afc390c4263cff lib-near-rpc-error-core.json near-rpc-error-macro-8d8e6a95df5a8bf1 lib-near-rpc-error-macro.json near-sdk-core-cb29fd0d119c8746 lib-near-sdk-core.json near-sdk-macros-7ba890a9e924b0c6 lib-near-sdk-macros.json num-bigint-64434fbed5478adb build-script-build-script-build.json num-integer-36b5945a971f6f8c build-script-build-script-build.json num-rational-ed7f61b64a593150 build-script-build-script-build.json num-traits-0fe77a0b25a7a2bd build-script-build-script-build.json proc-macro-crate-d35a63b7a7a18d04 lib-proc-macro-crate.json proc-macro2-14808c021c0427f8 lib-proc-macro2.json proc-macro2-1af71dc6ccc05f4c build-script-build-script-build.json proc-macro2-315136781c052803 run-build-script-build-script-build.json quote-478c7a2a00e07a41 lib-quote.json ryu-094f6d1456fb40b7 lib-ryu.json serde-32e2a799259f70f5 run-build-script-build-script-build.json serde-46c9393e3ca17d32 build-script-build-script-build.json serde-895e60e02424cdae lib-serde.json serde_derive-3e768a35a971a387 build-script-build-script-build.json serde_derive-4c44dba5ad7e9678 run-build-script-build-script-build.json serde_derive-88790b70bd592490 lib-serde_derive.json serde_json-3bb725b520e6ecdf run-build-script-build-script-build.json serde_json-605e50a200bef0fa lib-serde_json.json serde_json-82585ccecce718db build-script-build-script-build.json serde_json-aa37d93633ae9636 build-script-build-script-build.json syn-1ac3ec2b01300caf run-build-script-build-script-build.json syn-4f74f4309bbdd437 lib-syn.json syn-5aa48816ae344832 build-script-build-script-build.json 
toml-95031f9fd80e2b29 lib-toml.json typenum-99aae5e1e60938da build-script-build-script-main.json unicode-xid-07e6b47a0058b970 lib-unicode-xid.json version_check-cb15d5121a73e237 lib-version_check.json wee_alloc-9bedca6040460c03 build-script-build-script-build.json rls .rustc_info.json debug .fingerprint Inflector-d2148e4a37c14682 lib-inflector.json ahash-d65fcef89a4d232e lib-ahash.json aho-corasick-4814d8e10bca2401 lib-aho_corasick.json autocfg-e3e3fa18282d92e3 lib-autocfg.json base64-1d1fa0a77822fd50 lib-base64.json block-buffer-9a86c169c28aa746 lib-block-buffer.json block-padding-88cdfec2d062b79f lib-block-padding.json borsh-172f55070885e3ed lib-borsh.json borsh-derive-38597ce61522308d lib-borsh-derive.json borsh-derive-internal-27359216778bfbb1 lib-borsh-derive-internal.json borsh-schema-derive-internal-40d07ef60f46a592 lib-borsh-schema-derive-internal.json bs58-7c02f0245922d07d lib-bs58.json byteorder-f49457a38753d563 lib-byteorder.json cfg-if-5b9b3cb1bf3d00dc lib-cfg-if.json cfg-if-b5167c67a271ba5c lib-cfg-if.json convert_case-82ddf0742915dbc2 lib-convert_case.json cpufeatures-976ecce42899b7ae lib-cpufeatures.json derive_more-31536f792cc1bba9 lib-derive_more.json digest-26920ea2be9bdf8d lib-digest.json generic-array-555cf76d40225b69 lib-generic_array.json generic-array-8f511ee942ee3faf run-build-script-build-script-build.json generic-array-fdd115df1579780f build-script-build-script-build.json hashbrown-a0751d02fb4b8484 lib-hashbrown.json hashbrown-b300a7d0fe31bfa6 lib-hashbrown.json hex-1370f99b551c04ee lib-hex.json indexmap-161c5a672f633d51 lib-indexmap.json indexmap-4d0734029afd1012 run-build-script-build-script-build.json indexmap-a1e5c78203d7ae4d build-script-build-script-build.json itoa-4d5d7f7e186c0f94 lib-itoa.json itoa-cfa4024c9bdc9940 lib-itoa.json keccak-95902e7fa835e531 lib-keccak.json lazy_static-bc2f47adaad103e9 lib-lazy_static.json libc-54efb337ac82c463 lib-libc.json libc-653d85e13f7f56c5 run-build-script-build-script-build.json libc-d6efd6de7f1db008 build-script-build-script-build.json memchr-58968f99e2942a99 run-build-script-build-script-build.json memchr-5af045cbff601e05 lib-memchr.json memchr-e91501c3225c1377 build-script-build-script-build.json memory_units-f62e7862f8968c91 lib-memory_units.json near-contract-standards-ffcf86d46bf1a703 lib-near-contract-standards.json near-primitives-core-7e48627dcedfdc6b lib-near-primitives-core.json near-rpc-error-core-e23c66902120cdaf lib-near-rpc-error-core.json near-rpc-error-macro-f58fcad0bb65f9d6 lib-near-rpc-error-macro.json near-runtime-utils-473bca9262479918 lib-near-runtime-utils.json near-sdk-core-b78606d0d7cf3770 lib-near-sdk-core.json near-sdk-f9618951547e1100 lib-near-sdk.json near-sdk-macros-9ba44f85c3a43d3f lib-near-sdk-macros.json near-vm-errors-8557c2c03348f9d2 lib-near-vm-errors.json near-vm-logic-6a65c77594853982 lib-near-vm-logic.json num-bigint-938bc37a47123fc3 build-script-build-script-build.json num-bigint-a4a25dceb60a6853 lib-num-bigint.json num-bigint-ec107ef8a4cf9153 run-build-script-build-script-build.json num-integer-41fd79b62aab9f0b run-build-script-build-script-build.json num-integer-5a0bb3acc13cb585 build-script-build-script-build.json num-integer-c593d2c37ca0a010 lib-num-integer.json num-rational-986f111fafd2978a run-build-script-build-script-build.json num-rational-bdced73fb01c8e5f lib-num-rational.json num-rational-dd2d1147d204ab3b build-script-build-script-build.json num-traits-3d83e71a22558e2d build-script-build-script-build.json num-traits-90303ab66257ddd1 lib-num-traits.json 
num-traits-90d8d8116c45a755 run-build-script-build-script-build.json opaque-debug-535f570a2b9e895f lib-opaque-debug.json phoenix_token-02c79373934f323f lib-phoenix_token.json phoenix_token-6d7d4aeffaf854a7 test-bin-phoenix_token.json phoenix_token-83948a66b60dd734 test-lib-phoenix_token.json phoenix_token-d84cd6229607f3b3 bin-phoenix_token.json proc-macro-crate-3698f2278bc06923 lib-proc-macro-crate.json proc-macro2-0687e631b58b584a run-build-script-build-script-build.json proc-macro2-a43938dac13af6ab build-script-build-script-build.json proc-macro2-f8441f330c6f5955 lib-proc-macro2.json quote-0241397e45631a95 lib-quote.json regex-567d3bb5e25ff96b lib-regex.json regex-syntax-256a0851c3b5d35e lib-regex-syntax.json ryu-050e098a2dda45ef lib-ryu.json ryu-64132e495642f4c7 lib-ryu.json serde-264b2ef6a13b611e lib-serde.json serde-30b2c95a2593f43b lib-serde.json serde-b102d1ea1a7f59e2 build-script-build-script-build.json serde-e0bd6cf9ad2bdf0f run-build-script-build-script-build.json serde_derive-0bb40d09e1e36c41 build-script-build-script-build.json serde_derive-2d8841a8192883ce run-build-script-build-script-build.json serde_derive-6b1a8a3ea7e1055b lib-serde_derive.json serde_json-03d831d8946844db lib-serde_json.json serde_json-6f6dd21a11d94b84 run-build-script-build-script-build.json serde_json-787a23e99b6ef226 build-script-build-script-build.json serde_json-9a66588637712747 lib-serde_json.json serde_json-cb513bd4d27b117e build-script-build-script-build.json serde_json-eb6013eef5903132 run-build-script-build-script-build.json sha2-b94118c7d0a64ef5 lib-sha2.json sha3-6e27d347bc0a5009 lib-sha3.json syn-6bde7c3493323213 run-build-script-build-script-build.json syn-9af385bdbff15c52 lib-syn.json syn-d6cbdee0883f2c5c build-script-build-script-build.json toml-0d6f4ee19cc46595 lib-toml.json typenum-30844e8a3a841e5c lib-typenum.json typenum-c5f92657527e1b97 build-script-build-script-main.json typenum-e8acf6f6539df927 run-build-script-build-script-main.json unicode-xid-49139481c474d2cb lib-unicode-xid.json version_check-69ef4aa67c5ca43a lib-version_check.json wee_alloc-2a043c5a3b82fa5a run-build-script-build-script-build.json wee_alloc-3c5de2d60bf3159f lib-wee_alloc.json wee_alloc-57a318be28717cee build-script-build-script-build.json build generic-array-fdd115df1579780f save-analysis build_script_build-fdd115df1579780f.json indexmap-a1e5c78203d7ae4d save-analysis build_script_build-a1e5c78203d7ae4d.json libc-d6efd6de7f1db008 save-analysis build_script_build-d6efd6de7f1db008.json memchr-e91501c3225c1377 save-analysis build_script_build-e91501c3225c1377.json num-bigint-938bc37a47123fc3 save-analysis build_script_build-938bc37a47123fc3.json num-bigint-ec107ef8a4cf9153 out radix_bases.rs num-integer-5a0bb3acc13cb585 save-analysis build_script_build-5a0bb3acc13cb585.json num-rational-dd2d1147d204ab3b save-analysis build_script_build-dd2d1147d204ab3b.json num-traits-3d83e71a22558e2d save-analysis build_script_build-3d83e71a22558e2d.json proc-macro2-a43938dac13af6ab save-analysis build_script_build-a43938dac13af6ab.json serde-b102d1ea1a7f59e2 save-analysis build_script_build-b102d1ea1a7f59e2.json serde_derive-0bb40d09e1e36c41 save-analysis build_script_build-0bb40d09e1e36c41.json serde_json-787a23e99b6ef226 save-analysis build_script_build-787a23e99b6ef226.json serde_json-cb513bd4d27b117e save-analysis build_script_build-cb513bd4d27b117e.json syn-d6cbdee0883f2c5c save-analysis build_script_build-d6cbdee0883f2c5c.json typenum-c5f92657527e1b97 save-analysis build_script_main-c5f92657527e1b97.json 
typenum-e8acf6f6539df927 out consts.rs op.rs tests.rs wee_alloc-2a043c5a3b82fa5a out wee_alloc_static_array_backend_size_bytes.txt wee_alloc-57a318be28717cee save-analysis build_script_build-57a318be28717cee.json deps save-analysis libahash-d65fcef89a4d232e.json libaho_corasick-4814d8e10bca2401.json libautocfg-e3e3fa18282d92e3.json libbase64-1d1fa0a77822fd50.json libblock_buffer-9a86c169c28aa746.json libblock_padding-88cdfec2d062b79f.json libborsh-172f55070885e3ed.json libbs58-7c02f0245922d07d.json libbyteorder-f49457a38753d563.json libderive_more-31536f792cc1bba9.json libdigest-26920ea2be9bdf8d.json libgeneric_array-555cf76d40225b69.json libhashbrown-b300a7d0fe31bfa6.json libhex-1370f99b551c04ee.json libindexmap-161c5a672f633d51.json libinflector-d2148e4a37c14682.json libitoa-cfa4024c9bdc9940.json libkeccak-95902e7fa835e531.json liblazy_static-bc2f47adaad103e9.json libmemchr-5af045cbff601e05.json libmemory_units-f62e7862f8968c91.json libnear_contract_standards-ffcf86d46bf1a703.json libnear_primitives_core-7e48627dcedfdc6b.json libnear_rpc_error_core-e23c66902120cdaf.json libnear_rpc_error_macro-f58fcad0bb65f9d6.json libnear_runtime_utils-473bca9262479918.json libnear_sdk-f9618951547e1100.json libnear_vm_errors-8557c2c03348f9d2.json libnear_vm_logic-6a65c77594853982.json libnum_bigint-a4a25dceb60a6853.json libnum_integer-c593d2c37ca0a010.json libnum_rational-bdced73fb01c8e5f.json libnum_traits-90303ab66257ddd1.json libopaque_debug-535f570a2b9e895f.json libphoenix_token-02c79373934f323f.json libproc_macro2-f8441f330c6f5955.json libproc_macro_crate-3698f2278bc06923.json libquote-0241397e45631a95.json libregex-567d3bb5e25ff96b.json libryu-050e098a2dda45ef.json libryu-64132e495642f4c7.json libsha2-b94118c7d0a64ef5.json libsha3-6e27d347bc0a5009.json libtoml-0d6f4ee19cc46595.json libunicode_xid-49139481c474d2cb.json libversion_check-69ef4aa67c5ca43a.json libwee_alloc-3c5de2d60bf3159f.json phoenix_token-83948a66b60dd734.json wasm32-unknown-unknown release .fingerprint ahash-f7cf4fa38a3fe7ac lib-ahash.json aho-corasick-4a2deca6c9d6e8d7 lib-aho_corasick.json base64-d9a8e5be8074f405 lib-base64.json block-buffer-7460282723dcfcb9 lib-block-buffer.json block-padding-838bf39b113201cb lib-block-padding.json borsh-e3b845d30258ccf9 lib-borsh.json bs58-fb8c4396ed2add6b lib-bs58.json byteorder-b4509dfe817643d2 lib-byteorder.json cfg-if-581a25a5acfbc7c2 lib-cfg-if.json cfg-if-d96c7cfb71e39939 lib-cfg-if.json digest-7a0e666c57fa34e6 lib-digest.json generic-array-0c1546d850513fd6 lib-generic_array.json generic-array-bd1b60a2c458ffed run-build-script-build-script-build.json hashbrown-89d3525e20464ba1 lib-hashbrown.json hex-209af3d672d5dafc lib-hex.json itoa-16e93f7218c37459 lib-itoa.json keccak-3c68828eb1306c36 lib-keccak.json lazy_static-250d9b7bc6d38c4c lib-lazy_static.json memchr-3288b1ea61cc18c9 lib-memchr.json memchr-4a314dcb4356c013 run-build-script-build-script-build.json memory_units-a7996f65bfa6a771 lib-memory_units.json near-contract-standards-9de7fb8f19b7f3d1 lib-near-contract-standards.json near-primitives-core-12f9ae50d11dd579 lib-near-primitives-core.json near-runtime-utils-bc03a9444fab4bc0 lib-near-runtime-utils.json near-sdk-f4c476c21dbe2bb6 lib-near-sdk.json near-vm-errors-cfa2db1b15d970b7 lib-near-vm-errors.json near-vm-logic-122b96f9569266ab lib-near-vm-logic.json num-bigint-26a098bcb87debb2 run-build-script-build-script-build.json num-bigint-76fac97ba6a41e2c lib-num-bigint.json num-integer-2860c1bab4f8d95f run-build-script-build-script-build.json num-integer-b9077f327002dc3c 
lib-num-integer.json num-rational-7e1b3330e3fa93b8 lib-num-rational.json num-rational-fcefa60d862bc991 run-build-script-build-script-build.json num-traits-60bd151c241c246c run-build-script-build-script-build.json num-traits-b3af09d0925d7440 lib-num-traits.json opaque-debug-8265b81f32182cc7 lib-opaque-debug.json phoenix_token-8274cb401ccf7929 lib-phoenix_token.json regex-473645421583abbf lib-regex.json regex-syntax-9c869f621bdd564e lib-regex-syntax.json ryu-8974a2c0ace42834 lib-ryu.json serde-23ce7ee901f74f97 run-build-script-build-script-build.json serde-2797b94f2c31d73e lib-serde.json serde_json-0fcb9402df71b79e run-build-script-build-script-build.json serde_json-c7b59ef02c928b52 lib-serde_json.json sha2-ebc981c96fcab093 lib-sha2.json sha3-fa94fd034dad119a lib-sha3.json typenum-72aaf0e7bb570c67 run-build-script-build-script-main.json typenum-fba795c8bbd94c2f lib-typenum.json wee_alloc-0514bf2b7f5806ee run-build-script-build-script-build.json wee_alloc-49cf625686a89275 lib-wee_alloc.json build num-bigint-26a098bcb87debb2 out radix_bases.rs typenum-72aaf0e7bb570c67 out consts.rs op.rs tests.rs wee_alloc-0514bf2b7f5806ee out wee_alloc_static_array_backend_size_bytes.txt |
# u3 - Utility Functions This lib contains utility functions for e3, dataflower and other projects. ## Documentation ### Installation ```bash npm install u3 ``` ```bash bower install u3 ``` #### Usage In this documentation I used the lib as follows: ```js var u3 = require("u3"), cache = u3.cache, eachCombination = u3.eachCombination; ``` ### Function wrappers #### cache The `cache(fn)` function caches the fn results, so on subsequent calls it will return the result of the first call. You can use different arguments, but they won't affect the return value. ```js var a = cache(function fn(x, y, z){ return x + y + z; }); console.log(a(1, 2, 3)); // 6 console.log(a()); // 6 console.log(a()); // 6 ``` It is possible to cache a value too. ```js var a = cache(1 + 2 + 3); console.log(a()); // 6 console.log(a()); // 6 console.log(a()); // 6 ``` ### Math #### eachCombination The `eachCombination(alternativesByDimension, callback)` calls the `callback(a,b,c,...)` on each combination of the `alternatives[a[],b[],c[],...]`. ```js eachCombination([ [1, 2, 3], ["a", "b"] ], console.log); /* 1, "a" 1, "b" 2, "a" 2, "b" 3, "a" 3, "b" */ ``` You can use any dimension and number of alternatives. In the current example we used 2 dimensions. For the first dimension we used 3 alternatives: `[1, 2, 3]`, and for the second dimension we used 2 alternatives: `["a", "b"]`. ## License MIT - 2016 Jánszky László Lajos # base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. **WARNING:** This module is **NOT RFC3548** compliant; it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. ## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base. Base | Alphabet ------------- | ------------- 2 | `01` 8 | `01234567` 11 | `0123456789a` 16 | `0123456789abcdef` 32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ` 32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32) 36 | `0123456789abcdefghijklmnopqrstuvwxyz` 58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz` 62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ` 64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/` 67 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~` ## How it works It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. Then, every leading zero in the input (not significant as a number) is encoded as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base.
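To make the leading-zero behaviour concrete, here is a minimal sketch reusing the Base58 setup from the example above; the input bytes are chosen purely for illustration:

```js
var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
var bs58 = require('base-x')(BASE58)

// The two leading zero bytes are not significant as a number, so each one
// is emitted as the leader character '1' (the first letter of the alphabet),
// while 0xff is long-divided into base58 digits ('5Q').
var encoded = bs58.encode(Buffer.from([0x00, 0x00, 0xff]))
console.log(encoded) // => '115Q'
console.log(bs58.decode(encoded).length) // => 3 (both zero bytes are restored)
```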
For example, a base58 alphabet packs roughly 5.858 bits per character. This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character. While unusual, this does mean that no padding is required and it works for bases like 43. ## LICENSE [MIT](LICENSE) A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets. # <img src="./logo.png" alt="bn.js" width="160" height="160" /> > BigNum in pure javascript [![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js) ## Install `npm install --save bn.js` ## Usage ```js const BN = require('bn.js'); var a = new BN('dead', 16); var b = new BN('101010', 2); var res = a.add(b); console.log(res.toString(10)); // 57047 ``` **Note**: decimals are not supported in this library. ## Notation ### Prefixes There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name: * `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs * `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. The second case applies to reduction operations like `mod()`. In such cases, if the result is negative, the modulo will be added to the result to make it positive ### Postfixes * `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported. * `rn` - both argument and return value of the function are plain JavaScript Numbers. Decimals are not supported. ### Examples * `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a` * `a.umod(b)` - reduce `a` modulo `b`, returning positive value * `a.iushln(13)` - shift bits of `a` left by 13 ## Instructions Prefixes/postfixes are put in parens at the end of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian). ### Utilities * `a.clone()` - clone number * `a.toString(base, length)` - convert to base-string and pad with zeroes * `a.toNumber()` - convert to JavaScript Number (limited to 53 bits) * `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`) * `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding * `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array` * `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available).
For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)` * `a.bitLength()` - get number of bits occupied * `a.zeroBits()` - return number of less-significant consecutive zero bits (example: `1010000` has 4 zero bits) * `a.byteLength()` - return number of bytes occupied * `a.isNeg()` - true if the number is negative * `a.isEven()` - no comments * `a.isOdd()` - no comments * `a.isZero()` - no comments * `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`) * `a.lt(b)` - `a` less than `b` (`n`) * `a.lte(b)` - `a` less than or equals `b` (`n`) * `a.gt(b)` - `a` greater than `b` (`n`) * `a.gte(b)` - `a` greater than or equals `b` (`n`) * `a.eq(b)` - `a` equals `b` (`n`) * `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width * `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width * `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance * `BN.max(a, b)` - return `a` if `a` is bigger than `b` * `BN.min(a, b)` - return `a` if `a` is less than `b` ### Arithmetics * `a.neg()` - negate sign (`i`) * `a.abs()` - absolute value (`i`) * `a.add(b)` - addition (`i`, `n`, `in`) * `a.sub(b)` - subtraction (`i`, `n`, `in`) * `a.mul(b)` - multiply (`i`, `n`, `in`) * `a.sqr()` - square (`i`) * `a.pow(b)` - raise `a` to the power of `b` * `a.div(b)` - divide (`divn`, `idivn`) * `a.mod(b)` - reduce (`u`, `n`) (but no `umodn`) * `a.divmod(b)` - quotient and modulus obtained by dividing * `a.divRound(b)` - rounded division ### Bit operations * `a.or(b)` - or (`i`, `u`, `iu`) * `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future) * `a.xor(b)` - xor (`i`, `u`, `iu`) * `a.setn(b, value)` - set specified bit to `value` * `a.shln(b)` - shift left (`i`, `u`, `iu`) * `a.shrn(b)` - shift right (`i`, `u`, `iu`) * `a.testn(b)` - test if specified bit is set * `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`) * `a.bincn(b)` - add `1 << b` to the number * `a.notn(w)` - not (for the width specified by `w`) (`i`) ### Reduction * `a.gcd(b)` - GCD * `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., gcd: ... }`) * `a.invm(b)` - inverse of `a` modulo `b` ## Fast reduction When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using a special algorithm for a [Mersenne Prime][1]. ### Reduction context To enable these tricks one should create a reduction context: ```js var red = BN.red(num); ``` where `num` is just a BN instance. Or: ```js var red = BN.red(primeName); ``` Where `primeName` is either of these [Mersenne Primes][1]: * `'k256'` * `'p224'` * `'p192'` * `'p25519'` Or: ```js var red = BN.mont(num); ``` To reduce numbers with the [Montgomery trick][0]. `.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`. ### Converting numbers Before performing anything in a reduction context, numbers should be converted to it.
Usually, this means that one should: * Convert inputs to reduced ones * Operate on them in reduction context * Convert outputs back from the reduction context Here is how one may convert numbers to `red`: ```js var redA = a.toRed(red); ``` Where `red` is a reduction context created using the instructions above. Here is how to convert them back: ```js var a = redA.fromRed(); ``` ### Red instructions Most of the instructions from the very start of this readme have their counterparts in red context: * `a.redAdd(b)`, `a.redIAdd(b)` * `a.redSub(b)`, `a.redISub(b)` * `a.redShl(num)` * `a.redMul(b)`, `a.redIMul(b)` * `a.redSqr()`, `a.redISqr()` * `a.redSqrt()` - square root modulo reduction context's prime * `a.redInvm()` - modular inverse of the number * `a.redNeg()` * `a.redPow(b)` - modular exponentiation ### Number Size Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers. ## LICENSE This software is licensed under the MIT License. [0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication [1]: https://en.wikipedia.org/wiki/Mersenne_prime TweetNaCl.js ============ Port of [TweetNaCl](http://tweetnacl.cr.yp.to) / [NaCl](http://nacl.cr.yp.to/) to JavaScript for modern browsers and Node.js. Public domain. [![Build Status](https://travis-ci.org/dchest/tweetnacl-js.svg?branch=master) ](https://travis-ci.org/dchest/tweetnacl-js) Demo: <https://dchest.github.io/tweetnacl-js/> Documentation ============= * [Overview](#overview) * [Audits](#audits) * [Installation](#installation) * [Examples](#examples) * [Usage](#usage) * [Public-key authenticated encryption (box)](#public-key-authenticated-encryption-box) * [Secret-key authenticated encryption (secretbox)](#secret-key-authenticated-encryption-secretbox) * [Scalar multiplication](#scalar-multiplication) * [Signatures](#signatures) * [Hashing](#hashing) * [Random bytes generation](#random-bytes-generation) * [Constant-time comparison](#constant-time-comparison) * [System requirements](#system-requirements) * [Development and testing](#development-and-testing) * [Benchmarks](#benchmarks) * [Contributors](#contributors) * [Who uses it](#who-uses-it) Overview -------- The primary goal of this project is to produce a translation of TweetNaCl to JavaScript which is as close as possible to the original C implementation, plus a thin layer of idiomatic high-level API on top of it. There are two versions; you can use either of them: * `nacl.js` is the port of TweetNaCl with minimum differences from the original + high-level API. * `nacl-fast.js` is like `nacl.js`, but with some functions replaced with faster versions. (Used by default when importing NPM package.) Audits ------ TweetNaCl.js has been audited by [Cure53](https://cure53.de/) in January-February 2017 (audit was sponsored by [Deletype](https://deletype.com)): > The overall outcome of this audit signals a particularly positive assessment > for TweetNaCl-js, as the testing team was unable to find any security > problems in the library. It has to be noted that this is an exceptionally > rare result of a source code audit for any project and must be seen as a true > testament to a development proceeding with security at its core. > > To reiterate, the TweetNaCl-js project, the source code was found to be > bug-free at this point. > > [...] > > In sum, the testing team is happy to recommend the TweetNaCl-js project as > likely one of the safer and more secure cryptographic tools among its > competition.
[Read full audit report](https://cure53.de/tweetnacl.pdf) Installation ------------ You can install TweetNaCl.js via a package manager: [Yarn](https://yarnpkg.com/): $ yarn add tweetnacl [NPM](https://www.npmjs.org/): $ npm install tweetnacl or [download source code](https://github.com/dchest/tweetnacl-js/releases). Examples -------- You can find usage examples in our [wiki](https://github.com/dchest/tweetnacl-js/wiki/Examples). Usage ----- All API functions accept and return bytes as `Uint8Array`s. If you need to encode or decode strings, use functions from <https://github.com/dchest/tweetnacl-util-js> or one of the more robust codec packages. In Node.js v4 and later `Buffer` objects are backed by `Uint8Array`s, so you can freely pass them to TweetNaCl.js functions as arguments. The returned objects are still `Uint8Array`s, so if you need `Buffer`s, you'll have to convert them manually; make sure to convert using copying: `Buffer.from(array)` (or `new Buffer(array)` in Node.js v4 or earlier), instead of sharing: `Buffer.from(array.buffer)` (or `new Buffer(array.buffer)` Node 4 or earlier), because some functions return subarrays of their buffers. ### Public-key authenticated encryption (box) Implements *x25519-xsalsa20-poly1305*. #### nacl.box.keyPair() Generates a new random key pair for box and returns it as an object with `publicKey` and `secretKey` members: { publicKey: ..., // Uint8Array with 32-byte public key secretKey: ... // Uint8Array with 32-byte secret key } #### nacl.box.keyPair.fromSecretKey(secretKey) Returns a key pair for box with public key corresponding to the given secret key. #### nacl.box(message, nonce, theirPublicKey, mySecretKey) Encrypts and authenticates message using peer's public key, our secret key, and the given nonce, which must be unique for each distinct message for a key pair. Returns an encrypted and authenticated message, which is `nacl.box.overheadLength` longer than the original message. #### nacl.box.open(box, nonce, theirPublicKey, mySecretKey) Authenticates and decrypts the given box with peer's public key, our secret key, and the given nonce. Returns the original message, or `null` if authentication fails. #### nacl.box.before(theirPublicKey, mySecretKey) Returns a precomputed shared key which can be used in `nacl.box.after` and `nacl.box.open.after`. #### nacl.box.after(message, nonce, sharedKey) Same as `nacl.box`, but uses a shared key precomputed with `nacl.box.before`. #### nacl.box.open.after(box, nonce, sharedKey) Same as `nacl.box.open`, but uses a shared key precomputed with `nacl.box.before`. #### Constants ##### nacl.box.publicKeyLength = 32 Length of public key in bytes. ##### nacl.box.secretKeyLength = 32 Length of secret key in bytes. ##### nacl.box.sharedKeyLength = 32 Length of precomputed shared key in bytes. ##### nacl.box.nonceLength = 24 Length of nonce in bytes. ##### nacl.box.overheadLength = 16 Length of overhead added to box compared to original message. ### Secret-key authenticated encryption (secretbox) Implements *xsalsa20-poly1305*. #### nacl.secretbox(message, nonce, key) Encrypts and authenticates message using the key and the nonce. The nonce must be unique for each distinct message for this key. Returns an encrypted and authenticated message, which is `nacl.secretbox.overheadLength` longer than the original message. #### nacl.secretbox.open(box, nonce, key) Authenticates and decrypts the given secret box using the key and the nonce. Returns the original message, or `null` if authentication fails. 
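As a quick illustration of the secretbox API described above, here is a minimal round-trip sketch; the message text is made up, the key and nonce come from `nacl.randomBytes`, and the standard `TextEncoder`/`TextDecoder` globals are assumed for string conversion:

```js
const nacl = require('tweetnacl')

// Sizes come from the documented constants: 32-byte key, 24-byte nonce.
const key = nacl.randomBytes(nacl.secretbox.keyLength)
const nonce = nacl.randomBytes(nacl.secretbox.nonceLength) // must be unique per message

const message = new TextEncoder().encode('hello secretbox')
const box = nacl.secretbox(message, nonce, key)        // encrypt + authenticate
const opened = nacl.secretbox.open(box, nonce, key)    // null if authentication fails

console.log(box.length - message.length)              // 16 (overheadLength)
console.log(new TextDecoder().decode(opened))          // 'hello secretbox'
```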
#### Constants ##### nacl.secretbox.keyLength = 32 Length of key in bytes. ##### nacl.secretbox.nonceLength = 24 Length of nonce in bytes. ##### nacl.secretbox.overheadLength = 16 Length of overhead added to secret box compared to original message. ### Scalar multiplication Implements *x25519*. #### nacl.scalarMult(n, p) Multiplies an integer `n` by a group element `p` and returns the resulting group element. #### nacl.scalarMult.base(n) Multiplies an integer `n` by a standard group element and returns the resulting group element. #### Constants ##### nacl.scalarMult.scalarLength = 32 Length of scalar in bytes. ##### nacl.scalarMult.groupElementLength = 32 Length of group element in bytes. ### Signatures Implements [ed25519](http://ed25519.cr.yp.to). #### nacl.sign.keyPair() Generates new random key pair for signing and returns it as an object with `publicKey` and `secretKey` members: { publicKey: ..., // Uint8Array with 32-byte public key secretKey: ... // Uint8Array with 64-byte secret key } #### nacl.sign.keyPair.fromSecretKey(secretKey) Returns a signing key pair with public key corresponding to the given 64-byte secret key. The secret key must have been generated by `nacl.sign.keyPair` or `nacl.sign.keyPair.fromSeed`. #### nacl.sign.keyPair.fromSeed(seed) Returns a new signing key pair generated deterministically from a 32-byte seed. The seed must contain enough entropy to be secure. This method is not recommended for general use: instead, use `nacl.sign.keyPair` to generate a new key pair from a random seed. #### nacl.sign(message, secretKey) Signs the message using the secret key and returns a signed message. #### nacl.sign.open(signedMessage, publicKey) Verifies the signed message and returns the message without signature. Returns `null` if verification failed. #### nacl.sign.detached(message, secretKey) Signs the message using the secret key and returns a signature. #### nacl.sign.detached.verify(message, signature, publicKey) Verifies the signature for the message and returns `true` if verification succeeded or `false` if it failed. #### Constants ##### nacl.sign.publicKeyLength = 32 Length of signing public key in bytes. ##### nacl.sign.secretKeyLength = 64 Length of signing secret key in bytes. ##### nacl.sign.seedLength = 32 Length of seed for `nacl.sign.keyPair.fromSeed` in bytes. ##### nacl.sign.signatureLength = 64 Length of signature in bytes. ### Hashing Implements *SHA-512*. #### nacl.hash(message) Returns SHA-512 hash of the message. #### Constants ##### nacl.hash.hashLength = 64 Length of hash in bytes. ### Random bytes generation #### nacl.randomBytes(length) Returns a `Uint8Array` of the given length containing random bytes of cryptographic quality. **Implementation note** TweetNaCl.js uses the following methods to generate random bytes, depending on the platform it runs on: * `window.crypto.getRandomValues` (WebCrypto standard) * `window.msCrypto.getRandomValues` (Internet Explorer 11) * `crypto.randomBytes` (Node.js) If the platform doesn't provide a suitable PRNG, the following functions, which require random numbers, will throw exception: * `nacl.randomBytes` * `nacl.box.keyPair` * `nacl.sign.keyPair` Other functions are deterministic and will continue working. If a platform you are targeting doesn't implement secure random number generator, but you somehow have a cryptographically-strong source of entropy (not `Math.random`!), and you know what you are doing, you can plug it into TweetNaCl.js like this: nacl.setPRNG(function(x, n) { // ... 
copy n random bytes into x ... }); Note that `nacl.setPRNG` *completely replaces* internal random byte generator with the one provided. ### Constant-time comparison #### nacl.verify(x, y) Compares `x` and `y` in constant time and returns `true` if their lengths are non-zero and equal, and their contents are equal. Returns `false` if either of the arguments has zero length, or arguments have different lengths, or their contents differ. System requirements ------------------- TweetNaCl.js supports modern browsers that have a cryptographically secure pseudorandom number generator and typed arrays, including the latest versions of: * Chrome * Firefox * Safari (Mac, iOS) * Internet Explorer 11 Other systems: * Node.js Development and testing ------------------------ Install NPM modules needed for development: $ npm install To build minified versions: $ npm run build Tests use minified version, so make sure to rebuild it every time you change `nacl.js` or `nacl-fast.js`. ### Testing To run tests in Node.js: $ npm run test-node By default all tests described here work on `nacl.min.js`. To test other versions, set environment variable `NACL_SRC` to the file name you want to test. For example, the following command will test fast minified version: $ NACL_SRC=nacl-fast.min.js npm run test-node To run full suite of tests in Node.js, including comparing outputs of JavaScript port to outputs of the original C version: $ npm run test-node-all To prepare tests for browsers: $ npm run build-test-browser and then open `test/browser/test.html` (or `test/browser/test-fast.html`) to run them. To run tests in both Node and Electron: $ npm test ### Benchmarking To run benchmarks in Node.js: $ npm run bench $ NACL_SRC=nacl-fast.min.js npm run bench To run benchmarks in a browser, open `test/benchmark/bench.html` (or `test/benchmark/bench-fast.html`). Benchmarks ---------- For reference, here are benchmarks from MacBook Pro (Retina, 13-inch, Mid 2014) laptop with 2.6 GHz Intel Core i5 CPU (Intel) in Chrome 53/OS X and Xiaomi Redmi Note 3 smartphone with 1.8 GHz Qualcomm Snapdragon 650 64-bit CPU (ARM) in Chrome 52/Android: | | nacl.js Intel | nacl-fast.js Intel | nacl.js ARM | nacl-fast.js ARM | | ------------- |:-------------:|:-------------------:|:-------------:|:-----------------:| | salsa20 | 1.3 MB/s | 128 MB/s | 0.4 MB/s | 43 MB/s | | poly1305 | 13 MB/s | 171 MB/s | 4 MB/s | 52 MB/s | | hash | 4 MB/s | 34 MB/s | 0.9 MB/s | 12 MB/s | | secretbox 1K | 1113 op/s | 57583 op/s | 334 op/s | 14227 op/s | | box 1K | 145 op/s | 718 op/s | 37 op/s | 368 op/s | | scalarMult | 171 op/s | 733 op/s | 56 op/s | 380 op/s | | sign | 77 op/s | 200 op/s | 20 op/s | 61 op/s | | sign.open | 39 op/s | 102 op/s | 11 op/s | 31 op/s | (You can run benchmarks on your devices by clicking on the links at the bottom of the [home page](https://tweetnacl.js.org)). In short, with *nacl-fast.js* and 1024-byte messages you can expect to encrypt and authenticate more than 57000 messages per second on a typical laptop or more than 14000 messages per second on a $170 smartphone, sign about 200 and verify 100 messages per second on a laptop or 60 and 30 messages per second on a smartphone, per CPU core (with Web Workers you can do these operations in parallel), which is good enough for most applications. Contributors ------------ See AUTHORS.md file. 
Third-party libraries based on TweetNaCl.js ------------------------------------------- * [forward-secrecy](https://github.com/alax/forward-secrecy) — Axolotl ratchet implementation * [nacl-stream](https://github.com/dchest/nacl-stream-js) - streaming encryption * [tweetnacl-auth-js](https://github.com/dchest/tweetnacl-auth-js) — implementation of [`crypto_auth`](http://nacl.cr.yp.to/auth.html) * [tweetnacl-sealed-box](https://github.com/whs/tweetnacl-sealed-box) — implementation of [`sealed boxes`](https://download.libsodium.org/doc/public-key_cryptography/sealed_boxes.html) * [chloride](https://github.com/dominictarr/chloride) - unified API for various NaCl modules Who uses it ----------- Some notable users of TweetNaCl.js: * [GitHub](https://github.com) * [MEGA](https://github.com/meganz/webclient) * [Stellar](https://www.stellar.org/) * [miniLock](https://github.com/kaepora/miniLock) # capability.js - javascript environment capability detection [![Build Status](https://travis-ci.org/inf3rno/capability.png?branch=master)](https://travis-ci.org/inf3rno/capability) The capability.js library provides capability detection for different javascript environments. ## Documentation This documentation is not complete yet. ### Installation ```bash npm install capability ``` ```bash bower install capability ``` #### Environment compatibility The lib requires only basic javascript features, so it will run in every js environment. #### Requirements If you want to use the lib in a browser, you'll need a node module loader, e.g. browserify, webpack, etc... #### Usage In this documentation I used the lib as follows: ```js var capability = require("capability"); ``` ### Capabilities API #### Defining a capability You can define a capability by using the `define(name, test)` function. ```js capability.define("Object.create", function () { return Object.create; }); ``` The `name` parameter should contain the identifier of the capability and the `test` parameter should contain a function which can detect the capability. If the capability is supported by the environment, then `test()` should return `true`, otherwise it should return `false`. You don't have to convert the return value into a `Boolean`, the library will do that for you, so you won't have memory leaks because of this. #### Testing a capability The `test(name)` function returns a `Boolean` indicating whether the capability is supported by the current environment. ```js console.log(capability.test("Object.create")); // true - in recent environments // false - in pre ES5 environments without Object.create ``` You can use `capability(name)` instead of `capability.test(name)` if you want shorter code for optional requirements. #### Checking a capability The `check(name)` function will throw an Error when the capability is not supported by the current environment. ```js capability.check("Object.create"); // this will throw an Error in pre ES5 environments without Object.create ``` #### Checking capability with require and modules It is possible to check the environment with `require()` by adding a module which calls the `check(name)` function. For the capability definitions in this lib I added such modules for each definition, so you can do for example `require("capability/es5")`. Of course you can do fun stuff if you want, e.g. you can call multiple `check`s from a single `requirements.js` file in your lib, etc...
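Putting the pieces above together, here is a small sketch that relies only on the documented `define`/`test`/`check` API; the `Promise` capability name and its test are made up for illustration and are not one of the built-in definitions:

```js
var capability = require("capability");

// Define a capability with a test function that is truthy when supported.
capability.define("Promise", function () {
    return typeof Promise !== "undefined";
});

// Branch on an optional capability...
if (capability.test("Promise"))
    console.log("native promises are available");

// ...or fail fast when a hard requirement is missing.
capability.check("Promise"); // throws an Error in environments without Promise
```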
### Definitions Currently the following definitions are supported by the lib: - strict mode - `arguments.callee.caller` - es5 - `Array.prototype.forEach` - `Array.prototype.map` - `Function.prototype.bind` - `Object.create` - `Object.defineProperties` - `Object.defineProperty` - `Object.prototype.hasOwnProperty` - `Error.captureStackTrace` - `Error.prototype.stack` ## License MIT - 2016 Jánszky László Lajos # Ozone - Javascript Class Framework [![Build Status](https://travis-ci.org/inf3rno/o3.png?branch=master)](https://travis-ci.org/inf3rno/o3) The Ozone class framework contains enhanced class support to ease the development of object-oriented javascript applications in an ES5 environment. Another alternative for getting better class support is to use ES6 classes and compilers like Babel, Traceur or TypeScript until native ES6 support arrives. ## Documentation ### Installation ```bash npm install o3 ``` ```bash bower install o3 ``` #### Environment compatibility The framework passed the tests on - node v4.2 and v5.x - chrome 51.0 - firefox 47.0 and 48.0 - internet explorer 11.0 - phantomjs 2.1 using npm scripts under win7 x64. I wasn't able to test the framework on Opera since the Karma launcher is buggy, so I decided not to support Opera. I used [Yadda](https://github.com/acuminous/yadda) to write BDD tests. I used [Karma](https://github.com/karma-runner/karma) with [Browserify](https://github.com/substack/node-browserify) to test the framework in browsers. On pre-ES5 environments there will be bugs in the Class module due to pre-ES5 enumeration and the lack of some ES5 methods, so pre-ES5 environments are not supported. #### Requirements An ES5 capable environment is required with - `Object.create` - ES5 compatible property enumeration: `Object.defineProperty`, `Object.getOwnPropertyDescriptor`, `Object.prototype.hasOwnProperty`, etc. - `Array.prototype.forEach` #### Usage In this documentation I used the framework as follows: ```js var o3 = require("o3"), Class = o3.Class; ``` ### Inheritance #### Inheriting from native classes (from the Error class in these examples) You can extend native classes by calling the Class() function. ```js var UserError = Class(Error, { prototype: { message: "blah", constructor: function UserError() { Error.captureStackTrace(this, this.constructor); } } }); ``` An alternative is to call Class.extend() with the Ancestor as the context. The Class() function uses this in the background. ```js var UserError = Class.extend.call(Error, { prototype: { message: "blah", constructor: function UserError() { Error.captureStackTrace(this, this.constructor); } } }); ``` #### Inheriting from custom classes You can use Class.extend() with any other class, not just with native classes. ```js var Ancestor = Class(Object, { prototype: { a: 1, b: 2 } }); var Descendant = Class.extend.call(Ancestor, { prototype: { c: 3 } }); ``` Or you can simply add it as a static method, so you don't have to pass context any time you want to use it. The only drawback is that this static method will be inherited as well. ```js var Ancestor = Class(Object, { extend: Class.extend, prototype: { a: 1, b: 2 } }); var Descendant = Ancestor.extend({ prototype: { c: 3 } }); ``` #### Inheriting from the Class class You can inherit the extend() method and other utility methods from the Class class. Probably this is the simplest solution if you need the Class API and you don't need to inherit from special native classes like Error.
```js var Ancestor = Class.extend({ prototype: { a: 1, b: 2 } }); var Descendant = Ancestor.extend({ prototype: { c: 3 } }); ``` #### Inheritance with clone and merge The static extend() method uses the clone() and merge() utility methods to inherit from the ancestor and add properties from the config. ```js var MyClass = Class.clone.call(Object, function MyClass(){ // ... }); Class.merge.call(MyClass, { prototype: { x: 1, y: 2 } }); ``` Or with utility methods. ```js var MyClass = Class.clone(function MyClass() { // ... }).merge({ prototype: { x: 1, y: 2 } }); ``` #### Inheritance with clone and absorb You can fill in missing properties with the usage of absorb. ```js var MyClass = Class(SomeAncestor, {...}); Class.absorb.call(MyClass, Class); MyClass.merge({...}); ``` For example if you don't have Class methods and your class already has an ancestor, then you can use absorb() to add Class methods. #### Abstract classes Using abstract classes with instantiation verification won't be implemented in this lib, however we provide an `abstractMethod`, which you can put to not implemented parts of your abstract class. ```js var AbstractA = Class({ prototype: { doA: function (){ // ... var b = this.getB(); // ... // do something with b // ... }, getB: abstractMethod } }); var AB1 = Class(AbstractA, { prototype: { getB: function (){ return new B1(); } } }); var ab1 = new AB1(); ``` I strongly support the composition over inheritance principle and I think you should use dependency injection instead of abstract classes. ```js var A = Class({ prototype: { init: function (b){ this.b = b; }, doA: function (){ // ... // do something with this.b // ... } } }); var b = new B1(); var ab1 = new A(b); ``` ### Constructors #### Using a custom constructor You can pass your custom constructor as a config option by creating the class. ```js var MyClass = Class(Object, { prototype: { constructor: function () { // ... } } }); ``` #### Using a custom factory to create the constructor Or you can pass a static factory method to create your custom constructor. ```js var MyClass = Class(Object, { factory: function () { return function () { // ... } } }); ``` #### Using an inherited factory to create the constructor By inheritance the constructors of the descendant classes will be automatically created as well. ```js var Ancestor = Class(Object, { factory: function () { return function () { // ... } } }); var Descendant = Class(Ancestor, {}); ``` #### Using the default factory to create the constructor You don't need to pass anything if you need a noop function as constructor. The Class.factory() will create a noop constructor by default. ```js var MyClass = Class(Object, {}); ``` In fact you don't need to pass any arguments to the Class function if you need an empty class inheriting from the Object native class. ```js var MyClass = Class(); ``` The default factory calls the build() and init() methods if they are given. ```js var MyClass = Class({ prototype: { build: function (options) { console.log("build", options); }, init: function (options) { console.log("init", options); } } }); var my = new MyClass({a: 1, b: 2}); // build {a: 1, b: 2} // init {a: 1, b: 2} var my2 = my.clone({c: 3}); // build {c: 3} var MyClass2 = MyClass.extend({}, [{d: 4}]); // build {d: 4} ``` ### Instantiation #### Creating new instance with the new operator Ofc. you can create a new instance in the javascript way. 
```js var MyClass = Class(); var my = new MyClass(); ``` #### Creating a new instance with the static newInstance method If you want to pass an array of arguments then you can do it the following way. ```js var MyClass = Class.extend({ prototype: { constructor: function () { for (var i in arguments) console.log(arguments[i]); } } }); var my = MyClass.newInstance.apply(MyClass, ["a", "b", "c"]); // a // b // c ``` #### Creating new instance with clone You can create a new instance by cloning the prototype of the class. ```js var MyClass = Class(); var my = Class.prototype.clone.call(MyClass.prototype); ``` Or you can inherit the utility methods to make this easier. ```js var MyClass = Class.extend(); var my = MyClass.prototype.clone(); ``` Just be aware that by default cloning calls only the `build()` method, so the `init()` method won't be called by the new instance. #### Cloning instances You can clone an existing instance with the clone method. ```js var MyClass = Class.extend(); var my = MyClass.prototype.clone(); var my2 = my.clone(); ``` Be aware that this is prototypal inheritance with Object.create(), so the inherited properties won't be enumerable. The clone() method calls the build() method on the new instance if it is given. #### Using clone in the constructor You can use the same behavior both by cloning and by creating a new instance using the constructor ```js var MyClass = Class.extend({ lastIndex: 0, prototype: { index: undefined, constructor: function MyClass() { return MyClass.prototype.clone(); }, clone: function () { var instance = Class.prototype.clone.call(this); instance.index = ++MyClass.lastIndex; return instance; } } }); var my1 = new MyClass(); var my2 = MyClass.prototype.clone(); var my3 = my1.clone(); var my4 = my2.clone(); ``` Be aware that this way the constructor will drop the instance created with the `new` operator. Be aware that the clone() method is used by inheritance, so creating the prototype of a descendant class will use the clone() method as well. ```js var Descendant = MyClass.clone(function Descendant() { return Descendant.prototype.clone(); }); var my5 = Descendant.prototype; var my6 = new Descendant(); // ... ``` #### Using absorb(), merge() or inheritance to set the defaults values on properties You can use absorb() to set default values after configuration. ```js var MyClass = Class.extend({ prototype: { constructor: function (config) { var theDefaults = { // ... }; this.merge(config); this.absorb(theDefaults); } } }); ``` You can use merge() to set default values before configuration. ```js var MyClass = Class.extend({ prototype: { constructor: function (config) { var theDefaults = { // ... }; this.merge(theDefaults); this.merge(config); } } }); ``` You can use inheritance to set default values on class level. ```js var MyClass = Class.extend({ prototype: { aProperty: defaultValue, // ... constructor: function (config) { this.merge(config); } } }); ``` ## License MIT - 2015 Jánszky László Lajos # http-errors [![NPM Version][npm-version-image]][npm-url] [![NPM Downloads][npm-downloads-image]][node-url] [![Node.js Version][node-image]][node-url] [![Build Status][ci-image]][ci-url] [![Test Coverage][coveralls-image]][coveralls-url] Create HTTP errors for Express, Koa, Connect, etc. with ease. ## Install This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). 
Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): ```bash $ npm install http-errors ``` ## Example ```js var createError = require('http-errors') var express = require('express') var app = express() app.use(function (req, res, next) { if (!req.user) return next(createError(401, 'Please login to view this page.')) next() }) ``` ## API This is the current API, currently extracted from Koa and subject to change. ### Error Properties - `expose` - can be used to signal if `message` should be sent to the client, defaulting to `false` when `status` >= 500 - `headers` - can be an object of header names to values to be sent to the client, defaulting to `undefined`. When defined, the key names should all be lower-cased - `message` - the traditional error message, which should be kept short and all single line - `status` - the status code of the error, mirroring `statusCode` for general compatibility - `statusCode` - the status code of the error, defaulting to `500` ### createError([status], [message], [properties]) Create a new error object with the given message `msg`. The error object inherits from `createError.HttpError`. ```js var err = createError(404, 'This video does not exist!') ``` - `status: 500` - the status code as a number - `message` - the message of the error, defaulting to node's text for that status code. - `properties` - custom properties to attach to the object ### createError([status], [error], [properties]) Extend the given `error` object with `createError.HttpError` properties. This will not alter the inheritance of the given `error` object, and the modified `error` object is the return value. <!-- eslint-disable no-redeclare --> ```js fs.readFile('foo.txt', function (err, buf) { if (err) { if (err.code === 'ENOENT') { var httpError = createError(404, err, { expose: false }) } else { var httpError = createError(500, err) } } }) ``` - `status` - the status code as a number - `error` - the error object to extend - `properties` - custom properties to attach to the object ### createError.isHttpError(val) Determine if the provided `val` is an `HttpError`. This will return `true` if the error inherits from the `HttpError` constructor of this module or matches the "duck type" for an error this module creates. All outputs from the `createError` factory will return `true` for this function, including if an non-`HttpError` was passed into the factory. ### new createError\[code || name\](\[msg]\)) Create a new error object with the given message `msg`. The error object inherits from `createError.HttpError`. ```js var err = new createError.NotFound() ``` - `code` - the status code as a number - `name` - the name of the error as a "bumpy case", i.e. `NotFound` or `InternalServerError`. 
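The `createError.isHttpError(val)` helper above has no example in this README, so here is a minimal sketch of one possible use in an Express-style error handler (the handler and the fallback message are illustrative, not part of this module's API):

```js
var createError = require('http-errors')
var express = require('express')

var app = express()

// Centralized error handler: HttpErrors carry their own status code and
// an `expose` flag saying whether the message is safe to show the client.
app.use(function (err, req, res, next) {
  if (createError.isHttpError(err)) {
    res.status(err.statusCode).send(err.expose ? err.message : 'Internal Server Error')
  } else {
    res.status(500).send('Internal Server Error')
  }
})
```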
#### List of all constructors |Status Code|Constructor Name | |-----------|-----------------------------| |400 |BadRequest | |401 |Unauthorized | |402 |PaymentRequired | |403 |Forbidden | |404 |NotFound | |405 |MethodNotAllowed | |406 |NotAcceptable | |407 |ProxyAuthenticationRequired | |408 |RequestTimeout | |409 |Conflict | |410 |Gone | |411 |LengthRequired | |412 |PreconditionFailed | |413 |PayloadTooLarge | |414 |URITooLong | |415 |UnsupportedMediaType | |416 |RangeNotSatisfiable | |417 |ExpectationFailed | |418 |ImATeapot | |421 |MisdirectedRequest | |422 |UnprocessableEntity | |423 |Locked | |424 |FailedDependency | |425 |UnorderedCollection | |426 |UpgradeRequired | |428 |PreconditionRequired | |429 |TooManyRequests | |431 |RequestHeaderFieldsTooLarge | |451 |UnavailableForLegalReasons | |500 |InternalServerError | |501 |NotImplemented | |502 |BadGateway | |503 |ServiceUnavailable | |504 |GatewayTimeout | |505 |HTTPVersionNotSupported | |506 |VariantAlsoNegotiates | |507 |InsufficientStorage | |508 |LoopDetected | |509 |BandwidthLimitExceeded | |510 |NotExtended | |511 |NetworkAuthenticationRequired| ## License [MIT](LICENSE) [ci-image]: https://badgen.net/github/checks/jshttp/http-errors/master?label=ci [ci-url]: https://github.com/jshttp/http-errors/actions?query=workflow%3Aci [coveralls-image]: https://badgen.net/coveralls/c/github/jshttp/http-errors/master [coveralls-url]: https://coveralls.io/r/jshttp/http-errors?branch=master [node-image]: https://badgen.net/npm/node/http-errors [node-url]: https://nodejs.org/en/download [npm-downloads-image]: https://badgen.net/npm/dm/http-errors [npm-url]: https://npmjs.org/package/http-errors [npm-version-image]: https://badgen.net/npm/v/http-errors [travis-image]: https://badgen.net/travis/jshttp/http-errors/master [travis-url]: https://travis-ci.org/jshttp/http-errors # near-api-js [![Build Status](https://travis-ci.com/near/near-api-js.svg?branch=master)](https://travis-ci.com/near/near-api-js) [![Gitpod Ready-to-Code](https://img.shields.io/badge/Gitpod-Ready--to--Code-blue?logo=gitpod)](https://gitpod.io/#https://github.com/near/near-api-js) A JavaScript/TypeScript library for development of DApps on the NEAR platform # Documentation [Read the TypeDoc API documentation](https://near.github.io/near-api-js/) --- # Examples ## [Quick Reference](https://github.com/near/near-api-js/blob/master/examples/quick-reference.md) _(Cheat sheet / quick reference)_ ## [Cookbook](https://github.com/near/near-api-js/blob/master/examples/cookbook/README.md) _(Common use cases / more complex examples)_ --- # Contribute to this library 1. Install dependencies yarn 2. Run continuous build with: yarn build -- -w # Publish Prepare `dist` version by running: yarn dist When publishing to npm use [np](https://github.com/sindresorhus/np). --- # Integration Test Start the node by following instructions from [nearcore](https://github.com/nearprotocol/nearcore), then yarn test Tests use sample contract from `near-hello` npm package, see https://github.com/nearprotocol/near-hello # Update error schema Follow next steps: 1. [Change hash for the commit with errors in the nearcore](https://github.com/near/near-api-js/blob/master/gen_error_types.js#L7-L9) 2. Fetch new schema: `node fetch_error_schema.js` 3. `yarn build` to update `lib/**.js` files # License This repository is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See [LICENSE](LICENSE) and [LICENSE-APACHE](LICENSE-APACHE) for details. 
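To complement the quick-reference and cookbook links above, here is a minimal read-only connection sketch for near-api-js. It assumes the public testnet RPC endpoint and the `connect`, `keyStores`, and `Account#state` APIs described in the TypeDoc documentation; treat it as a sketch rather than canonical usage:

```js
const { connect, keyStores } = require('near-api-js');

async function main() {
  // Read-only connection to testnet; an in-memory key store is enough
  // because no transactions are signed in this example.
  const near = await connect({
    networkId: 'testnet',
    nodeUrl: 'https://rpc.testnet.near.org',
    keyStore: new keyStores.InMemoryKeyStore(),
  });

  const account = await near.account('example.testnet');
  console.log(await account.state()); // balance, storage usage, code hash, ...
}

main().catch(console.error);
```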
# Statuses [![NPM Version][npm-image]][npm-url] [![NPM Downloads][downloads-image]][downloads-url] [![Node.js Version][node-version-image]][node-version-url] [![Build Status][travis-image]][travis-url] [![Test Coverage][coveralls-image]][coveralls-url] HTTP status utility for node. This module provides a list of status codes and messages sourced from a few different projects: * The [IANA Status Code Registry](https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml) * The [Node.js project](https://nodejs.org/) * The [NGINX project](https://www.nginx.com/) * The [Apache HTTP Server project](https://httpd.apache.org/) ## Installation This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): ```sh $ npm install statuses ``` ## API <!-- eslint-disable no-unused-vars --> ```js var status = require('statuses') ``` ### var code = status(Integer || String) If `Integer` or `String` is a valid HTTP code or status message, then the appropriate `code` will be returned. Otherwise, an error will be thrown. <!-- eslint-disable no-undef --> ```js status(403) // => 403 status('403') // => 403 status('forbidden') // => 403 status('Forbidden') // => 403 status(306) // throws, as it's not supported by node.js ``` ### status.STATUS_CODES Returns an object which maps status codes to status messages, in the same format as the [Node.js http module](https://nodejs.org/dist/latest/docs/api/http.html#http_http_status_codes). ### status.codes Returns an array of all the status codes as `Integer`s. ### var msg = status[code] Map of `code` to `status message`. `undefined` for invalid `code`s. <!-- eslint-disable no-undef, no-unused-expressions --> ```js status[404] // => 'Not Found' ``` ### var code = status[msg] Map of `status message` to `code`. `msg` can either be title-cased or lower-cased. `undefined` for invalid `status message`s. <!-- eslint-disable no-undef, no-unused-expressions --> ```js status['not found'] // => 404 status['Not Found'] // => 404 ``` ### status.redirect[code] Returns `true` if a status code is a valid redirect status. <!-- eslint-disable no-undef, no-unused-expressions --> ```js status.redirect[200] // => undefined status.redirect[301] // => true ``` ### status.empty[code] Returns `true` if a status code expects an empty body. <!-- eslint-disable no-undef, no-unused-expressions --> ```js status.empty[200] // => undefined status.empty[204] // => true status.empty[304] // => true ``` ### status.retry[code] Returns `true` if you should retry the request.
<!-- eslint-disable no-undef, no-unused-expressions --> ```js status.retry[501] // => undefined status.retry[503] // => true ``` [npm-image]: https://img.shields.io/npm/v/statuses.svg [npm-url]: https://npmjs.org/package/statuses [node-version-image]: https://img.shields.io/node/v/statuses.svg [node-version-url]: https://nodejs.org/en/download [travis-image]: https://img.shields.io/travis/jshttp/statuses.svg [travis-url]: https://travis-ci.org/jshttp/statuses [coveralls-image]: https://img.shields.io/coveralls/jshttp/statuses.svg [coveralls-url]: https://coveralls.io/r/jshttp/statuses?branch=master [downloads-image]: https://img.shields.io/npm/dm/statuses.svg [downloads-url]: https://npmjs.org/package/statuses # Polyfill for `Object.setPrototypeOf` [![NPM Version](https://img.shields.io/npm/v/setprototypeof.svg)](https://npmjs.org/package/setprototypeof) [![NPM Downloads](https://img.shields.io/npm/dm/setprototypeof.svg)](https://npmjs.org/package/setprototypeof) [![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg)](https://github.com/standard/standard) A simple cross-platform implementation for setting the prototype of an instantiated object. Supports all modern browsers and at least back to IE8. ## Usage: ``` $ npm install --save setprototypeof ``` ```javascript var setPrototypeOf = require('setprototypeof') var obj = {} setPrototypeOf(obj, { foo: function () { return 'bar' } }) obj.foo() // bar ``` TypeScript is also supported: ```typescript import setPrototypeOf from 'setprototypeof' ``` # pnx_token pnx_token # toidentifier [![NPM Version][npm-image]][npm-url] [![NPM Downloads][downloads-image]][downloads-url] [![Build Status][github-actions-ci-image]][github-actions-ci-url] [![Test Coverage][codecov-image]][codecov-url] > Convert a string of words to a JavaScript identifier ## Install This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): ```bash $ npm install toidentifier ``` ## Example ```js var toIdentifier = require('toidentifier') console.log(toIdentifier('Bad Request')) // => "BadRequest" ``` ## API This CommonJS module exports a single default function: `toIdentifier`. ### toIdentifier(string) Given a string as the argument, it will be transformed according to the following rules and the new string will be returned: 1. Split into words separated by space characters (`0x20`). 2. Upper case the first character of each word. 3. Join the words together with no separator. 4. Remove all non-word (`[0-9a-z_]`) characters.
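As a hedged illustration of the rules above (the exact output assumes the space-splitting and non-word stripping behavior just described):

```js
var toIdentifier = require('toidentifier')

// Each word's first character is upper-cased, the words are joined,
// and the apostrophe (a non-word character) is stripped out.
console.log(toIdentifier("I'm a Teapot"))
// => "ImATeapot"
```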
## License [MIT](LICENSE) [codecov-image]: https://img.shields.io/codecov/c/github/component/toidentifier.svg [codecov-url]: https://codecov.io/gh/component/toidentifier [downloads-image]: https://img.shields.io/npm/dm/toidentifier.svg [downloads-url]: https://npmjs.org/package/toidentifier [github-actions-ci-image]: https://img.shields.io/github/workflow/status/component/toidentifier/ci/master?label=ci [github-actions-ci-url]: https://github.com/component/toidentifier?query=workflow%3Aci [npm-image]: https://img.shields.io/npm/v/toidentifier.svg [npm-url]: https://npmjs.org/package/toidentifier ## [npm]: https://www.npmjs.com/ [yarn]: https://yarnpkg.com/ # Borsh JS [![Project license](https://img.shields.io/badge/license-Apache2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Project license](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT) [![Discord](https://img.shields.io/discord/490367152054992913?label=discord)](https://discord.gg/Vyp7ETM) [![Travis status](https://travis-ci.com/near/borsh.svg?branch=master)](https://travis-ci.com/near/borsh-js) [![NPM version](https://img.shields.io/npm/v/borsh.svg?style=flat-square)](https://npmjs.com/borsh) [![Size on NPM](https://img.shields.io/bundlephobia/minzip/borsh.svg?style=flat-square)](https://npmjs.com/borsh) **Borsh JS** is an implementation of the [Borsh] binary serialization format for JavaScript and TypeScript projects. Borsh stands for _Binary Object Representation Serializer for Hashing_. It is meant to be used in security-critical projects as it prioritizes consistency, safety, speed, and comes with a strict specification. ## Examples ### Serializing an object ```javascript const value = new Test({ x: 255, y: 20, z: '123', q: [1, 2, 3] }); const schema = new Map([[Test, { kind: 'struct', fields: [['x', 'u8'], ['y', 'u64'], ['z', 'string'], ['q', [3]]] }]]); const buffer = borsh.serialize(schema, value); ``` ### Deserializing an object ```javascript const newValue = borsh.deserialize(schema, Test, buffer); ``` ## Type Mappings | Borsh | TypeScript | |-----------------------|----------------| | `u8` integer | `number` | | `u16` integer | `number` | | `u32` integer | `number` | | `u64` integer | `BN` | | `u128` integer | `BN` | | `u256` integer | `BN` | | `u512` integer | `BN` | | `f32` float | N/A | | `f64` float | N/A | | fixed-size byte array | `Uint8Array` | | UTF-8 string | `string` | | option | `null` or type | | map | N/A | | set | N/A | | structs | `any` | ## Contributing Install dependencies: ```bash yarn install ``` Continuously build with: ```bash yarn dev ``` Run tests: ```bash yarn test ``` Run linter ```bash yarn lint ``` ## Publish Prepare `dist` version by running: ```bash yarn build ``` When publishing to npm use [np](https://github.com/sindresorhus/np). # License This repository is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See [LICENSE-MIT](LICENSE-MIT.txt) and [LICENSE-APACHE](LICENSE-APACHE) for details. [Borsh]: https://borsh.io # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Current Status whatwg-url is currently up to date with the URL spec up to commit [a62223](https://github.com/whatwg/url/commit/a622235308342c9adc7fc2fd1659ff059f7d5e2a). 
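Before the API details below, a short standalone usage sketch, assuming the spec-compliant `URL` export described in the next section behaves like the browser's `URL` class:

```js
const { URL } = require("whatwg-url");

// Relative input is resolved against a base URL, exactly as in browsers;
// dot segments in the path are normalized away.
const url = new URL("/a/../docs?lang=en#intro", "https://example.com:8080");

console.log(url.href);     // "https://example.com:8080/docs?lang=en#intro"
console.log(url.pathname); // "/docs"
console.log(url.port);     // "8080"
```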
## API ### The `URL` Constructor The main API is the [`URL`](https://url.spec.whatwg.org/#url) export, which follows the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use this. ### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. - [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/browsers.html#serialization-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - 
[`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - [`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by the string `"failure"`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ the string `"failure"`. node-fetch ========== [![npm version][npm-image]][npm-url] [![build status][travis-image]][travis-url] [![coverage status][codecov-image]][codecov-url] [![install size][install-size-image]][install-size-url] [![Discord][discord-image]][discord-url] A light-weight module that brings `window.fetch` to Node.js (We are looking for [v2 maintainers and collaborators](https://github.com/bitinn/node-fetch/issues/567)) [![Backers][opencollective-image]][opencollective-url] <!-- TOC --> - [Motivation](#motivation) - [Features](#features) - [Difference from client-side fetch](#difference-from-client-side-fetch) - [Installation](#installation) - [Loading and configuring the module](#loading-and-configuring-the-module) - [Common Usage](#common-usage) - [Plain text or HTML](#plain-text-or-html) - [JSON](#json) - [Simple Post](#simple-post) - [Post with JSON](#post-with-json) - [Post with form parameters](#post-with-form-parameters) - [Handling exceptions](#handling-exceptions) - [Handling client and server errors](#handling-client-and-server-errors) - [Advanced Usage](#advanced-usage) - [Streams](#streams) - [Buffer](#buffer) - [Accessing Headers and other Meta data](#accessing-headers-and-other-meta-data) - [Extract Set-Cookie Header](#extract-set-cookie-header) - [Post data using a file stream](#post-data-using-a-file-stream) - [Post with form-data (detect multipart)](#post-with-form-data-detect-multipart) - [Request cancellation with AbortSignal](#request-cancellation-with-abortsignal) - [API](#api) - [fetch(url[, options])](#fetchurl-options) - [Options](#options) - [Class: Request](#class-request) - [Class: Response](#class-response) - [Class: Headers](#class-headers) - [Interface: Body](#interface-body) - [Class: FetchError](#class-fetcherror) - [License](#license) - [Acknowledgement](#acknowledgement) <!-- /TOC --> ## Motivation Instead of implementing `XMLHttpRequest` in Node.js to run browser-specific [Fetch polyfill](https://github.com/github/fetch), why not go from native `http` to `fetch` API directly? Hence, `node-fetch`, minimal code for a `window.fetch` compatible API on Node.js runtime. See Matt Andrews' [isomorphic-fetch](https://github.com/matthew-andrews/isomorphic-fetch) or Leonardo Quixada's [cross-fetch](https://github.com/lquixada/cross-fetch) for isomorphic usage (exports `node-fetch` for server-side, `whatwg-fetch` for client-side). 
## Features - Stay consistent with `window.fetch` API. - Make conscious trade-off when following [WHATWG fetch spec][whatwg-fetch] and [stream spec](https://streams.spec.whatwg.org/) implementation details, document known differences. - Use native promise but allow substituting it with [insert your favorite promise library]. - Use native Node streams for body on both request and response. - Decode content encoding (gzip/deflate) properly and convert string output (such as `res.text()` and `res.json()`) to UTF-8 automatically. - Useful extensions such as timeout, redirect limit, response size limit, [explicit errors](ERROR-HANDLING.md) for troubleshooting. ## Difference from client-side fetch - See [Known Differences](LIMITS.md) for details. - If you happen to use a missing feature that `window.fetch` offers, feel free to open an issue. - Pull requests are welcomed too! ## Installation Current stable release (`2.x`) ```sh $ npm install node-fetch ``` ## Loading and configuring the module We suggest you load the module via `require` until the stabilization of ES modules in node: ```js const fetch = require('node-fetch'); ``` If you are using a Promise library other than native, set it through `fetch.Promise`: ```js const Bluebird = require('bluebird'); fetch.Promise = Bluebird; ``` ## Common Usage NOTE: The documentation below is up-to-date with `2.x` releases; see the [`1.x` readme](https://github.com/bitinn/node-fetch/blob/1.x/README.md), [changelog](https://github.com/bitinn/node-fetch/blob/1.x/CHANGELOG.md) and [2.x upgrade guide](UPGRADE-GUIDE.md) for the differences. #### Plain text or HTML ```js fetch('https://github.com/') .then(res => res.text()) .then(body => console.log(body)); ``` #### JSON ```js fetch('https://api.github.com/users/github') .then(res => res.json()) .then(json => console.log(json)); ``` #### Simple Post ```js fetch('https://httpbin.org/post', { method: 'POST', body: 'a=1' }) .then(res => res.json()) // expecting a json response .then(json => console.log(json)); ``` #### Post with JSON ```js const body = { a: 1 }; fetch('https://httpbin.org/post', { method: 'post', body: JSON.stringify(body), headers: { 'Content-Type': 'application/json' }, }) .then(res => res.json()) .then(json => console.log(json)); ``` #### Post with form parameters `URLSearchParams` is available in Node.js as of v7.5.0. See [official documentation](https://nodejs.org/api/url.html#url_class_urlsearchparams) for more usage methods. NOTE: The `Content-Type` header is only set automatically to `x-www-form-urlencoded` when an instance of `URLSearchParams` is given as such: ```js const { URLSearchParams } = require('url'); const params = new URLSearchParams(); params.append('a', 1); fetch('https://httpbin.org/post', { method: 'POST', body: params }) .then(res => res.json()) .then(json => console.log(json)); ``` #### Handling exceptions NOTE: 3xx-5xx responses are *NOT* exceptions and should be handled in `then()`; see the next section for more information. Adding a catch to the fetch promise chain will catch *all* exceptions, such as errors originating from node core libraries, network errors and operational errors, which are instances of FetchError. See the [error handling document](ERROR-HANDLING.md) for more details. 
```js fetch('https://domain.invalid/') .catch(err => console.error(err)); ``` #### Handling client and server errors It is common to create a helper function to check that the response contains no client (4xx) or server (5xx) error responses: ```js function checkStatus(res) { if (res.ok) { // res.status >= 200 && res.status < 300 return res; } else { throw MyCustomError(res.statusText); } } fetch('https://httpbin.org/status/400') .then(checkStatus) .then(res => console.log('will not get here...')) ``` ## Advanced Usage #### Streams The "Node.js way" is to use streams when possible: ```js fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png') .then(res => { const dest = fs.createWriteStream('./octocat.png'); res.body.pipe(dest); }); ``` #### Buffer If you prefer to cache binary data in full, use buffer(). (NOTE: `buffer()` is a `node-fetch`-only API) ```js const fileType = require('file-type'); fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png') .then(res => res.buffer()) .then(buffer => fileType(buffer)) .then(type => { /* ... */ }); ``` #### Accessing Headers and other Meta data ```js fetch('https://github.com/') .then(res => { console.log(res.ok); console.log(res.status); console.log(res.statusText); console.log(res.headers.raw()); console.log(res.headers.get('content-type')); }); ``` #### Extract Set-Cookie Header Unlike browsers, you can access raw `Set-Cookie` headers manually using `Headers.raw()`. This is a `node-fetch` only API. ```js fetch(url).then(res => { // returns an array of values, instead of a string of comma-separated values console.log(res.headers.raw()['set-cookie']); }); ``` #### Post data using a file stream ```js const { createReadStream } = require('fs'); const stream = createReadStream('input.txt'); fetch('https://httpbin.org/post', { method: 'POST', body: stream }) .then(res => res.json()) .then(json => console.log(json)); ``` #### Post with form-data (detect multipart) ```js const FormData = require('form-data'); const form = new FormData(); form.append('a', 1); fetch('https://httpbin.org/post', { method: 'POST', body: form }) .then(res => res.json()) .then(json => console.log(json)); // OR, using custom headers // NOTE: getHeaders() is non-standard API const form = new FormData(); form.append('a', 1); const options = { method: 'POST', body: form, headers: form.getHeaders() } fetch('https://httpbin.org/post', options) .then(res => res.json()) .then(json => console.log(json)); ``` #### Request cancellation with AbortSignal > NOTE: You may cancel streamed requests only on Node >= v8.0.0 You may cancel requests with `AbortController`. A suggested implementation is [`abort-controller`](https://www.npmjs.com/package/abort-controller). An example of timing out a request after 150ms could be achieved as the following: ```js import AbortController from 'abort-controller'; const controller = new AbortController(); const timeout = setTimeout( () => { controller.abort(); }, 150, ); fetch(url, { signal: controller.signal }) .then(res => res.json()) .then( data => { useData(data) }, err => { if (err.name === 'AbortError') { // request was aborted } }, ) .finally(() => { clearTimeout(timeout); }); ``` See [test cases](https://github.com/bitinn/node-fetch/blob/master/test/test.js) for more examples. 
## API ### fetch(url[, options]) - `url` A string representing the URL for fetching - `options` [Options](#fetch-options) for the HTTP(S) request - Returns: <code>Promise&lt;[Response](#class-response)&gt;</code> Perform an HTTP(S) fetch. `url` should be an absolute url, such as `https://example.com/`. A path-relative URL (`/file/under/root`) or protocol-relative URL (`//can-be-http-or-https.com/`) will result in a rejected `Promise`. <a id="fetch-options"></a> ### Options The default values are shown after each option key. ```js { // These properties are part of the Fetch Standard method: 'GET', headers: {}, // request headers. format is the identical to that accepted by the Headers constructor (see below) body: null, // request body. can be null, a string, a Buffer, a Blob, or a Node.js Readable stream redirect: 'follow', // set to `manual` to extract redirect headers, `error` to reject redirect signal: null, // pass an instance of AbortSignal to optionally abort requests // The following properties are node-fetch extensions follow: 20, // maximum redirect count. 0 to not follow redirect timeout: 0, // req/res timeout in ms, it resets on redirect. 0 to disable (OS limit applies). Signal is recommended instead. compress: true, // support gzip/deflate content encoding. false to disable size: 0, // maximum response body size in bytes. 0 to disable agent: null // http(s).Agent instance or function that returns an instance (see below) } ``` ##### Default Headers If no values are set, the following request headers will be sent automatically: Header | Value ------------------- | -------------------------------------------------------- `Accept-Encoding` | `gzip,deflate` _(when `options.compress === true`)_ `Accept` | `*/*` `Connection` | `close` _(when no `options.agent` is present)_ `Content-Length` | _(automatically calculated, if possible)_ `Transfer-Encoding` | `chunked` _(when `req.body` is a stream)_ `User-Agent` | `node-fetch/1.0 (+https://github.com/bitinn/node-fetch)` Note: when `body` is a `Stream`, `Content-Length` is not set automatically. ##### Custom Agent The `agent` option allows you to specify networking related options which are out of the scope of Fetch, including and not limited to the following: - Support self-signed certificate - Use only IPv4 or IPv6 - Custom DNS Lookup See [`http.Agent`](https://nodejs.org/api/http.html#http_new_agent_options) for more information. In addition, the `agent` option accepts a function that returns `http`(s)`.Agent` instance given current [URL](https://nodejs.org/api/url.html), this is useful during a redirection chain across HTTP and HTTPS protocol. ```js const httpAgent = new http.Agent({ keepAlive: true }); const httpsAgent = new https.Agent({ keepAlive: true }); const options = { agent: function (_parsedURL) { if (_parsedURL.protocol == 'http:') { return httpAgent; } else { return httpsAgent; } } } ``` <a id="class-request"></a> ### Class: Request An HTTP(S) request containing information about URL, method, headers, and the body. This class implements the [Body](#iface-body) interface. Due to the nature of Node.js, the following properties are not implemented at this moment: - `type` - `destination` - `referrer` - `referrerPolicy` - `mode` - `credentials` - `cache` - `integrity` - `keepalive` The following node-fetch extension properties are provided: - `follow` - `compress` - `counter` - `agent` See [options](#fetch-options) for exact meaning of these extensions. 
#### new Request(input[, options]) <small>*(spec-compliant)*</small> - `input` A string representing a URL, or another `Request` (which will be cloned) - `options` [Options][#fetch-options] for the HTTP(S) request Constructs a new `Request` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Request/Request). In most cases, directly `fetch(url, options)` is simpler than creating a `Request` object. <a id="class-response"></a> ### Class: Response An HTTP(S) response. This class implements the [Body](#iface-body) interface. The following properties are not implemented in node-fetch at this moment: - `Response.error()` - `Response.redirect()` - `type` - `trailer` #### new Response([body[, options]]) <small>*(spec-compliant)*</small> - `body` A `String` or [`Readable` stream][node-readable] - `options` A [`ResponseInit`][response-init] options dictionary Constructs a new `Response` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Response/Response). Because Node.js does not implement service workers (for which this class was designed), one rarely has to construct a `Response` directly. #### response.ok <small>*(spec-compliant)*</small> Convenience property representing if the request ended normally. Will evaluate to true if the response status was greater than or equal to 200 but smaller than 300. #### response.redirected <small>*(spec-compliant)*</small> Convenience property representing if the request has been redirected at least once. Will evaluate to true if the internal redirect counter is greater than 0. <a id="class-headers"></a> ### Class: Headers This class allows manipulating and iterating over a set of HTTP headers. All methods specified in the [Fetch Standard][whatwg-fetch] are implemented. #### new Headers([init]) <small>*(spec-compliant)*</small> - `init` Optional argument to pre-fill the `Headers` object Construct a new `Headers` object. `init` can be either `null`, a `Headers` object, an key-value map object or any iterable object. ```js // Example adapted from https://fetch.spec.whatwg.org/#example-headers-class const meta = { 'Content-Type': 'text/xml', 'Breaking-Bad': '<3' }; const headers = new Headers(meta); // The above is equivalent to const meta = [ [ 'Content-Type', 'text/xml' ], [ 'Breaking-Bad', '<3' ] ]; const headers = new Headers(meta); // You can in fact use any iterable objects, like a Map or even another Headers const meta = new Map(); meta.set('Content-Type', 'text/xml'); meta.set('Breaking-Bad', '<3'); const headers = new Headers(meta); const copyOfHeaders = new Headers(headers); ``` <a id="iface-body"></a> ### Interface: Body `Body` is an abstract interface with methods that are applicable to both `Request` and `Response` classes. The following methods are not yet implemented in node-fetch at this moment: - `formData()` #### body.body <small>*(deviation from spec)*</small> * Node.js [`Readable` stream][node-readable] Data are encapsulated in the `Body` object. Note that while the [Fetch Standard][whatwg-fetch] requires the property to always be a WHATWG `ReadableStream`, in node-fetch it is a Node.js [`Readable` stream][node-readable]. #### body.bodyUsed <small>*(spec-compliant)*</small> * `Boolean` A boolean property for if this body has been consumed. Per the specs, a consumed body cannot be used again. 
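A short hedged illustration of the consumed-body rule (the URL is a placeholder):

```js
const fetch = require('node-fetch');

fetch('https://example.com/')
  .then(res => {
    console.log(res.bodyUsed); // false — nothing has been read yet
    return res.text().then(body => {
      console.log(res.bodyUsed); // true — the body has been consumed
      return res.text();         // rejects: a consumed body cannot be read again
    });
  })
  .catch(err => console.error(err.message));
```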
#### body.arrayBuffer() #### body.blob() #### body.json() #### body.text() <small>*(spec-compliant)*</small> * Returns: <code>Promise</code> Consume the body and return a promise that will resolve to one of these formats. #### body.buffer() <small>*(node-fetch extension)*</small> * Returns: <code>Promise&lt;Buffer&gt;</code> Consume the body and return a promise that will resolve to a Buffer. #### body.textConverted() <small>*(node-fetch extension)*</small> * Returns: <code>Promise&lt;String&gt;</code> Identical to `body.text()`, except instead of always converting to UTF-8, encoding sniffing will be performed and text converted to UTF-8 if possible. (This API requires an optional dependency of the npm package [encoding](https://www.npmjs.com/package/encoding), which you need to install manually. `webpack` users may see [a warning message](https://github.com/bitinn/node-fetch/issues/412#issuecomment-379007792) due to this optional dependency.) <a id="class-fetcherror"></a> ### Class: FetchError <small>*(node-fetch extension)*</small> An operational error in the fetching process. See [ERROR-HANDLING.md][] for more info. <a id="class-aborterror"></a> ### Class: AbortError <small>*(node-fetch extension)*</small> An Error thrown when the request is aborted in response to an `AbortSignal`'s `abort` event. It has a `name` property of `AbortError`. See [ERROR-HANDLING.MD][] for more info. ## Acknowledgement Thanks to [github/fetch](https://github.com/github/fetch) for providing a solid implementation reference. `node-fetch` v1 was maintained by [@bitinn](https://github.com/bitinn); v2 was maintained by [@TimothyGu](https://github.com/timothygu), [@bitinn](https://github.com/bitinn) and [@jimmywarting](https://github.com/jimmywarting); v2 readme is written by [@jkantr](https://github.com/jkantr). ## License MIT [npm-image]: https://flat.badgen.net/npm/v/node-fetch [npm-url]: https://www.npmjs.com/package/node-fetch [travis-image]: https://flat.badgen.net/travis/bitinn/node-fetch [travis-url]: https://travis-ci.org/bitinn/node-fetch [codecov-image]: https://flat.badgen.net/codecov/c/github/bitinn/node-fetch/master [codecov-url]: https://codecov.io/gh/bitinn/node-fetch [install-size-image]: https://flat.badgen.net/packagephobia/install/node-fetch [install-size-url]: https://packagephobia.now.sh/result?p=node-fetch [discord-image]: https://img.shields.io/discord/619915844268326952?color=%237289DA&label=Discord&style=flat-square [discord-url]: https://discord.gg/Zxbndcm [opencollective-image]: https://opencollective.com/node-fetch/backers.svg [opencollective-url]: https://opencollective.com/node-fetch [whatwg-fetch]: https://fetch.spec.whatwg.org/ [response-init]: https://fetch.spec.whatwg.org/#responseinit [node-readable]: https://nodejs.org/api/stream.html#stream_readable_streams [mdn-headers]: https://developer.mozilla.org/en-US/docs/Web/API/Headers [LIMITS.md]: https://github.com/bitinn/node-fetch/blob/master/LIMITS.md [ERROR-HANDLING.md]: https://github.com/bitinn/node-fetch/blob/master/ERROR-HANDLING.md [UPGRADE-GUIDE.md]: https://github.com/bitinn/node-fetch/blob/master/UPGRADE-GUIDE.md text-encoding-utf-8 ============== This is a **partial** polyfill for the [Encoding Living Standard](https://encoding.spec.whatwg.org/) API for the Web, allowing encoding and decoding of textual data to and from Typed Array buffers for binary data in JavaScript. This is fork of [text-encoding](https://github.com/inexorabletash/text-encoding) that **only** support **UTF-8**. Basic examples and tests are included. 
### Install ### There are a few ways you can get the `text-encoding-utf-8` library. #### Node #### `text-encoding-utf-8` is on `npm`. Simply run: ```js npm install text-encoding-utf-8 ``` Or add it to your `package.json` dependencies. ### HTML Page Usage ### ```html <script src="encoding.js"></script> ``` ### API Overview ### Basic Usage ```js var uint8array = TextEncoder(encoding).encode(string); var string = TextDecoder(encoding).decode(uint8array); ``` Streaming Decode ```js var string = "", decoder = TextDecoder(encoding), buffer; while (buffer = next_chunk()) { string += decoder.decode(buffer, {stream:true}); } string += decoder.decode(); // finish the stream ``` ### Encodings ### Only `utf-8` and `UTF-8` are supported. ### Non-Standard Behavior ### Only `utf-8` and `UTF-8` are supported. ### Motivation Binary size matters, especially on a mobile phone. Safari on iOS does not support TextDecoder or TextEncoder. # safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. 
```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. 
A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). 
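A small hedged sketch of that drop-in behavior, assuming standard `Buffer` statics such as `concat` and `isBuffer` pass through unchanged:

```js
var Buffer = require('safe-buffer').Buffer

var safe = Buffer.alloc(4, 0x21)        // explicitly filled with '!' bytes
var from = Buffer.from('data', 'utf8')  // explicit conversion from a string

// Everything else behaves like the built-in Buffer.
console.log(Buffer.concat([from, safe]).toString()) // 'data!!!!'
console.log(Buffer.isBuffer(safe))                  // true
```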
## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). 
The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. 
If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) # js-sha256 [![Build Status](https://travis-ci.org/emn178/js-sha256.svg?branch=master)](https://travis-ci.org/emn178/js-sha256) [![Coverage Status](https://coveralls.io/repos/emn178/js-sha256/badge.svg?branch=master)](https://coveralls.io/r/emn178/js-sha256?branch=master) [![CDNJS](https://img.shields.io/cdnjs/v/js-sha256.svg)](https://cdnjs.com/libraries/js-sha256/) [![NPM](https://nodei.co/npm/js-sha256.png?stars&downloads)](https://nodei.co/npm/js-sha256/) A simple SHA-256 / SHA-224 hash function for JavaScript supports UTF-8 encoding. 
## Demo [SHA256 Online](http://emn178.github.io/online-tools/sha256.html) [SHA224 Online](http://emn178.github.io/online-tools/sha224.html) ## Download [Compress](https://raw.github.com/emn178/js-sha256/master/build/sha256.min.js) [Uncompress](https://raw.github.com/emn178/js-sha256/master/src/sha256.js) ## Installation You can also install js-sha256 by using Bower. bower install js-sha256 For node.js, you can use this command to install: npm install js-sha256 ## Usage You could use like this: ```JavaScript sha256('Message to hash'); sha224('Message to hash'); var hash = sha256.create(); hash.update('Message to hash'); hash.hex(); var hash2 = sha256.update('Message to hash'); hash2.update('Message2 to hash'); hash2.array(); // HMAC sha256.hmac('key', 'Message to hash'); sha224.hmac('key', 'Message to hash'); var hash = sha256.hmac.create('key'); hash.update('Message to hash'); hash.hex(); var hash2 = sha256.hmac.update('key', 'Message to hash'); hash2.update('Message2 to hash'); hash2.array(); ``` If you use node.js, you should require the module first: ```JavaScript var sha256 = require('js-sha256'); ``` or ```JavaScript var sha256 = require('js-sha256').sha256; var sha224 = require('js-sha256').sha224; ``` It supports AMD: ```JavaScript require(['your/path/sha256.js'], function(sha256) { // ... }); ``` or TypeScript ```TypeScript import { sha256, sha224 } from 'js-sha256'; ``` ## Example ```JavaScript sha256(''); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256('The quick brown fox jumps over the lazy dog'); // d7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592 sha256('The quick brown fox jumps over the lazy dog.'); // ef537f25c895bfa782526529a9b63d97aa631564d5d789c2b765448c8635fb6c sha224(''); // d14a028c2a3a2bc9476102bb288234c415a2b01f828ea62ac5b3e42f sha224('The quick brown fox jumps over the lazy dog'); // 730e109bd7a8a32b1cb9d9a09aa2325d2430587ddbc0c38bad911525 sha224('The quick brown fox jumps over the lazy dog.'); // 619cba8e8e05826e9b8c519c0a5c68f4fb653e8a3d8aa04bb2c8cd4c // It also supports UTF-8 encoding sha256('中文'); // 72726d8818f693066ceb69afa364218b692e62ea92b385782363780f47529c21 sha224('中文'); // dfbab71afdf54388af4d55f8bd3de8c9b15e0eb916bf9125f4a959d4 // It also supports byte `Array`, `Uint8Array`, `ArrayBuffer` input sha256([]); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256(new Uint8Array([211, 212])); // 182889f925ae4e5cc37118ded6ed87f7bdc7cab5ec5e78faef2e50048999473f // Different output sha256(''); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256.hex(''); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256.array(''); // [227, 176, 196, 66, 152, 252, 28, 20, 154, 251, 244, 200, 153, 111, 185, 36, 39, 174, 65, 228, 100, 155, 147, 76, 164, 149, 153, 27, 120, 82, 184, 85] sha256.digest(''); // [227, 176, 196, 66, 152, 252, 28, 20, 154, 251, 244, 200, 153, 111, 185, 36, 39, 174, 65, 228, 100, 155, 147, 76, 164, 149, 153, 27, 120, 82, 184, 85] sha256.arrayBuffer(''); // ArrayBuffer ``` ## License The project is released under the [MIT license](http://www.opensource.org/licenses/MIT). ## Contact The project's website is located at https://github.com/emn178/js-sha256 Author: Chen, Yi-Cyuan (emn178@gmail.com) # WebIDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [WebIDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). 
The goal is that you should be able to write code like ```js const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a WebIDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different WebIDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the WebIDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the WebIDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). ## Status All of the numeric types are implemented (float being implemented as double) and some others are as well - check the source for all of them. This list will grow over time in service of the [HTML as Custom Elements](https://github.com/dglazkov/html-as-custom-elements) project, but in the meantime, pull requests welcome! I'm not sure yet what the strategy will be for modifiers, e.g. [`[Clamp]`](http://heycam.github.io/webidl/#Clamp). Maybe something like `conversions["unsigned long"](x, { clamp: true })`? We'll see. We might also want to extend the API to give better error messages, e.g. "Argument 1 of HTMLMediaElement.fastSeek is not a finite floating-point value" instead of "Argument is not a finite floating-point value." This would require passing in more information to the conversion functions than we currently do. ## Background What's actually going on here, conceptually, is pretty weird. Let's try to explain. WebIDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on WebIDL values, i.e. instances of WebIDL types. For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a WebIDL value of [WebIDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules. Separately from its type system, WebIDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given WebIDL operation, how does that get converted into a WebIDL value? For example, a JavaScript `true` passed in the position of a WebIDL `boolean` argument becomes a WebIDL `true`. But, a JavaScript `true` passed in the position of a [WebIDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a WebIDL `1`. And so on. Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the WebIDL algorithms, they don't actually use WebIDL values, since those aren't "real" outside of specs. Instead, implementations apply the WebIDL binding rules in such a way as to convert incoming JavaScript values into C++ values. 
For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting WebIDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is payoff of WebIDL, in a nutshell. And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given WebIDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ WebIDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ WebIDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a WebIDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't Use This Seriously, why would you ever use this? You really shouldn't. WebIDL is … not great, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from WebIDL. In general, your JavaScript should not be trying to become more like WebIDL; if anything, we should fix WebIDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in WebIDL. # Javascript Error Polyfill [![Build Status](https://travis-ci.org/inf3rno/error-polyfill.png?branch=master)](https://travis-ci.org/inf3rno/error-polyfill) Implementing the [V8 Stack Trace API](https://github.com/v8/v8/wiki/Stack-Trace-API) in non-V8 environments as much as possible ## Installation ```bash npm install error-polyfill ``` ```bash bower install error-polyfill ``` ### Environment compatibility Tested on the following environments: Windows 7 - **Node.js** 9.6 - **Chrome** 64.0 - **Firefox** 58.0 - **Internet Explorer** 10.0, 11.0 - **PhantomJS** 2.1 - **Opera** 51.0 Travis - **Node.js** 8, 9 - **Chrome** - **Firefox** - **PhantomJS** The polyfill might work on other environments too due to its adaptive design. I use [Karma](https://github.com/karma-runner/karma) with [Browserify](https://github.com/substack/node-browserify) to test the framework in browsers. ### Requirements ES5 support is required, without that the lib throws an Error and stops working. The ES5 features are tested by the [capability](https://github.com/inf3rno/capability) lib run time. Classes are created by the [o3](https://github.com/inf3rno/o3) lib. Utility functions are implemented in the [u3](https://github.com/inf3rno/u3) lib. ## API documentation ### Usage In this documentation I used the framework as follows: ```js require("error-polyfill"); // <- your code here ``` It is recommended to require the polyfill in your main script. ### Getting a past stack trace with `Error.getStackTrace` This static method is not part of the V8 Stack Trace API, but it is recommended to **use `Error.getStackTrace(throwable)` instead of `throwable.stack`** to get the stack trace of Error instances! Explanation: By non-V8 environments we cannot replace the default stack generation algorithm, so we need a workaround to generate the stack when somebody tries to access it. 
So the original stack string will be parsed and the result will be properly formatted by accessing the stack using the `Error.getStackTrace` method. Arguments and return values: - The `throwable` argument should be an `Error` (descendant) instance, but it can be an `Object` instance as well. - The return value is the generated `stack` of the `throwable` argument. Example: ```js try { theNotDefinedFunction(); } catch (error) { console.log(Error.getStackTrace(error)); // ReferenceError: theNotDefinedFunction is not defined // at ... // ... } ``` ### Capturing the present stack trace with `Error.captureStackTrace` The `Error.captureStackTrace(throwable [, terminator])` sets the present stack above the `terminator` on the `throwable`. Arguments and return values: - The `throwable` argument should be an instance of an `Error` descendant, but it can be an `Object` instance as well. It is recommended to use `Error` descendant instances instead of inline objects, because we can recognize them by type e.g. `error instanceof UserError`. - The optional `terminator` argument should be a `Function`. Only the calls before this function will be reported in the stack, so without a `terminator` argument, the last call in the stack will be the call of the `Error.captureStackTrace`. - There is no return value, the `stack` will be set on the `throwable` so you will be able to access it using `Error.getStackTrace`. The format of the stack depends on the `Error.prepareStackTrace` implementation. Example: ```js var UserError = function (message){ this.name = "UserError"; this.message = message; Error.captureStackTrace(this, this.constructor); }; UserError.prototype = Object.create(Error.prototype); function codeSmells(){ throw new UserError("What's going on?!"); } codeSmells(); // UserError: What's going on?! // at codeSmells (myModule.js:23:1) // ... ``` Limitations: By the current implementation the `terminator` can be only the `Error.captureStackTrace` caller function. This will change soon, but in certain conditions, e.g. by using strict mode (`"use strict";`) it is not possible to access the information necessary to implement this feature. You will get an empty `frames` array and a `warning` in the `Error.prepareStackTrace` when the stack parser meets with such conditions. ### Formatting the stack trace with `Error.prepareStackTrace` The `Error.prepareStackTrace(throwable, frames [, warnings])` formats the stack `frames` and returns the `stack` value for `Error.captureStackTrace` or `Error.getStackTrace`. The native implementation returns a stack string, but you can override that by setting a new function value. Arguments and return values: - The `throwable` argument is an `Error` or `Object` instance coming from the `Error.captureStackTrace` or from the creation of a new `Error` instance. Be aware that in some environments you need to throw that instance to get a parsable stack. Without that you will get only a `warning` by trying to access the stack with `Error.getStackTrace`. - The `frames` argument is an array of `Frame` instances. Each `frame` represents a function call in the stack. You can use these frames to build a stack string. To access information about individual frames you can use the following methods. - `frame.toString()` - Returns the string representation of the frame, e.g. `codeSmells (myModule.js:23:1)`. - `frame.getThis()` - **Cannot be supported.** Returns the context of the call, only V8 environments support this natively. 
- `frame.getTypeName()` - **Not implemented yet.** Returns the type name of the context, by the global namespace it is `Window` in Chrome. - `frame.getFunction()` - Returns the called function or `undefined` by strict mode. - `frame.getFunctionName()` - **Not implemented yet.** Returns the name of the called function. - `frame.getMethodName()` - **Not implemented yet.** Returns the method name of the called function is a method of an object. - `frame.getFileName()` - **Not implemented yet.** Returns the file name where the function was called. - `frame.getLineNumber()` - **Not implemented yet.** Returns at which line the function was called in the file. - `frame.getColumnNumber()` - **Not implemented yet.** Returns at which column the function was called in the file. This information is not always available. - `frame.getEvalOrigin()` - **Not implemented yet.** Returns the original of an `eval` call. - `frame.isTopLevel()` - **Not implemented yet.** Returns whether the function was called from the top level. - `frame.isEval()` - **Not implemented yet.** Returns whether the called function was `eval`. - `frame.isNative()` - **Not implemented yet.** Returns whether the called function was native. - `frame.isConstructor()` - **Not implemented yet.** Returns whether the called function was a constructor. - The optional `warnings` argument contains warning messages coming from the stack parser. It is not part of the V8 Stack Trace API. - The return value will be the stack you can access with `Error.getStackTrace(throwable)`. If it is an object, it is recommended to add a `toString` method, so you will be able to read it in the console. Example: ```js Error.prepareStackTrace = function (throwable, frames, warnings) { var string = ""; string += throwable.name || "Error"; string += ": " + (throwable.message || ""); if (warnings instanceof Array) for (var warningIndex in warnings) { var warning = warnings[warningIndex]; string += "\n # " + warning; } for (var frameIndex in frames) { var frame = frames[frameIndex]; string += "\n at " + frame.toString(); } return string; }; ``` ### Stack trace size limits with `Error.stackTraceLimit` **Not implemented yet.** You can set size limits on the stack trace, so you won't have any problems because of too long stack traces. Example: ```js Error.stackTraceLimit = 10; ``` ### Handling uncaught errors and rejections **Not implemented yet.** ## Differences between environments and modes Since there is no Stack Trace API standard, every browsers solves this problem differently. I try to document what I've found about these differences as detailed as possible, so it will be easier to follow the code. 
Overriding the `error.stack` property with custom Stack instances - by Node.js and Chrome the `Error.prepareStackTrace()` can override every `error.stack` automatically right by creation - by Firefox, Internet Explorer and Opera you cannot automatically override every `error.stack` by native errors - by PhantomJS you cannot override the `error.stack` property of native errors, it is not configurable Capturing the current stack trace - by Node.js, Chrome, Firefox and Opera the stack property is added by instantiating a native error - by Node.js and Chrome the stack creation is lazy loaded and cached, so the `Error.prepareStackTrace()` is called only by the first access - by Node.js and Chrome the current stack can be added to any object with `Error.captureStackTrace()` - by Internet Explorer the stack is created by throwing a native error - by PhantomJS the stack is created by throwing any object, but not a primitive Accessing the stack - by Node.js, Chrome, Firefox, Internet Explorer, Opera and PhantomJS you can use the `error.stack` property - by old Opera you have to use the `error.stacktrace` property to get the stack Prefixes and postfixes on the stack string - by Node.js, Chrome, Internet Explorer and Opera you have the `error.name` and the `error.message` in a `{name}: {message}` format at the beginning of the stack string - by Firefox and PhantomJS the stack string does not contain the `error.name` and the `error.message` - by Firefox you have an empty line at the end of the stack string Accessing the stack frames array - by Node.js and Chrome you can access the frame objects directly by overriding the `Error.prepareStackTrace()` - by Firefox, Internet Explorer, PhantomJS, and Opera you need to parse the stack string in order to get the frames The structure of the frame string - by Node.js and Chrome - the frame string of calling a function from a module: `thirdFn (http://localhost/myModule.js:45:29)` - the frame strings contain an ` at ` prefix, which is not present by the `frame.toString()` output, so it is added by the `stack.toString()` - by Firefox - the frame string of calling a function from a module: `thirdFn@http://localhost/myModule.js:45:29` - by Internet Explorer - the frame string of calling a function from a module: ` at thirdFn (http://localhost/myModule.js:45:29)` - by PhantomJS - the frame string of calling a function from a module: `thirdFn@http://localhost/myModule.js:45:29` - by Opera - the frame string of calling a function from a module: ` at thirdFn (http://localhost/myModule.js:45)` Accessing information by individual frames - by Node.js and Chrome the `frame.getThis()` and the `frame.getFunction()` returns `undefined` by frames originate from [strict mode](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode) code - by Firefox, Internet Explorer, PhantomJS, and Opera the context of the function calls is not accessible, so the `frame.getThis()` cannot be implemented - by Firefox, Internet Explorer, PhantomJS, and Opera functions are not accessible with `arguments.callee.caller` by frames originate from strict mode, so by these frames `frame.getFunction()` can return only `undefined` (this is consistent with V8 behavior) ## License MIT - 2016 Jánszky László Lajos # mustache.js - Logic-less {{mustache}} templates with JavaScript > What could be more logical awesome than no logic at all? 
[![Build Status](https://travis-ci.org/janl/mustache.js.svg?branch=master)](https://travis-ci.org/janl/mustache.js) [mustache.js](http://github.com/janl/mustache.js) is a zero-dependency implementation of the [mustache](http://mustache.github.com/) template system in JavaScript. [Mustache](http://mustache.github.com/) is a logic-less template syntax. It can be used for HTML, config files, source code - anything. It works by expanding tags in a template using values provided in a hash or object. We call it "logic-less" because there are no if statements, else clauses, or for loops. Instead there are only tags. Some tags are replaced with a value, some nothing, and others a series of values. For a language-agnostic overview of mustache's template syntax, see the `mustache(5)` [manpage](http://mustache.github.com/mustache.5.html). ## Where to use mustache.js? You can use mustache.js to render mustache templates anywhere you can use JavaScript. This includes web browsers, server-side environments such as [Node.js](http://nodejs.org/), and [CouchDB](http://couchdb.apache.org/) views. mustache.js ships with support for the [CommonJS](http://www.commonjs.org/) module API, the [Asynchronous Module Definition](https://github.com/amdjs/amdjs-api/wiki/AMD) API (AMD) and [ECMAScript modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules). In addition to being a package to be used programmatically, you can use it as a [command line tool](#command-line-tool). And this will be your templates after you use Mustache: !['stache](https://cloud.githubusercontent.com/assets/288977/8779228/a3cf700e-2f02-11e5-869a-300312fb7a00.gif) ## Install You can get Mustache via [npm](http://npmjs.com). ```bash $ npm install mustache --save ``` ## Usage Below is a quick example how to use mustache.js: ```js var view = { title: "Joe", calc: function () { return 2 + 4; } }; var output = Mustache.render("{{title}} spends {{calc}}", view); ``` In this example, the `Mustache.render` function takes two parameters: 1) the [mustache](http://mustache.github.com/) template and 2) a `view` object that contains the data and code needed to render the template. ## Templates A [mustache](http://mustache.github.com/) template is a string that contains any number of mustache tags. Tags are indicated by the double mustaches that surround them. `{{person}}` is a tag, as is `{{#person}}`. In both examples we refer to `person` as the tag's key. There are several types of tags available in mustache.js, described below. There are several techniques that can be used to load templates and hand them to mustache.js, here are two of them: #### Include Templates If you need a template for a dynamic part in a static website, you can consider including the template in the static HTML file to avoid loading templates separately. Here's a small example: ```js // file: render.js function renderHello() { var template = document.getElementById('template').innerHTML; var rendered = Mustache.render(template, { name: 'Luke' }); document.getElementById('target').innerHTML = rendered; } ``` ```html <html> <body onload="renderHello()"> <div id="target">Loading...</div> <script id="template" type="x-tmpl-mustache"> Hello {{ name }}! </script> <script src="https://unpkg.com/mustache@latest"></script> <script src="render.js"></script> </body> </html> ``` #### Load External Templates If your templates reside in individual files, you can load them asynchronously and render them when they arrive. 
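One approach is a plain `XMLHttpRequest` (a minimal sketch; it assumes the same `template.mustache` file and `target` element as the surrounding examples):

```js
function renderHello() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'template.mustache', true);
  xhr.onload = function () {
    // Render the template once it has been downloaded
    var rendered = Mustache.render(xhr.responseText, { name: 'Luke' });
    document.getElementById('target').innerHTML = rendered;
  };
  xhr.send();
}
```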
Another example using [fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch): ```js function renderHello() { fetch('template.mustache') .then((response) => response.text()) .then((template) => { var rendered = Mustache.render(template, { name: 'Luke' }); document.getElementById('target').innerHTML = rendered; }); } ``` ### Variables The most basic tag type is a simple variable. A `{{name}}` tag renders the value of the `name` key in the current context. If there is no such key, nothing is rendered. All variables are HTML-escaped by default. If you want to render unescaped HTML, use the triple mustache: `{{{name}}}`. You can also use `&` to unescape a variable. If you'd like to change HTML-escaping behavior globally (for example, to template non-HTML formats), you can override Mustache's escape function. For example, to disable all escaping: `Mustache.escape = function(text) {return text;};`. If you want `{{name}}` _not_ to be interpreted as a mustache tag, but rather to appear exactly as `{{name}}` in the output, you must change and then restore the default delimiter. See the [Custom Delimiters](#custom-delimiters) section for more information. View: ```json { "name": "Chris", "company": "<b>GitHub</b>" } ``` Template: ``` * {{name}} * {{age}} * {{company}} * {{{company}}} * {{&company}} {{=<% %>=}} * {{company}} <%={{ }}=%> ``` Output: ```html * Chris * * &lt;b&gt;GitHub&lt;/b&gt; * <b>GitHub</b> * <b>GitHub</b> * {{company}} ``` JavaScript's dot notation may be used to access keys that are properties of objects in a view. View: ```json { "name": { "first": "Michael", "last": "Jackson" }, "age": "RIP" } ``` Template: ```html * {{name.first}} {{name.last}} * {{age}} ``` Output: ```html * Michael Jackson * RIP ``` ### Sections Sections render blocks of text zero or more times, depending on the value of the key in the current context. A section begins with a pound and ends with a slash. That is, `{{#person}}` begins a `person` section, while `{{/person}}` ends it. The text between the two tags is referred to as that section's "block". The behavior of the section is determined by the value of the key. #### False Values or Empty Lists If the `person` key does not exist, or exists and has a value of `null`, `undefined`, `false`, `0`, or `NaN`, or is an empty string or an empty list, the block will not be rendered. View: ```json { "person": false } ``` Template: ```html Shown. {{#person}} Never shown! {{/person}} ``` Output: ```html Shown. ``` #### Non-Empty Lists If the `person` key exists and is not `null`, `undefined`, or `false`, and is not an empty list the block will be rendered one or more times. When the value is a list, the block is rendered once for each item in the list. The context of the block is set to the current item in the list for each iteration. In this way we can loop over collections. View: ```json { "stooges": [ { "name": "Moe" }, { "name": "Larry" }, { "name": "Curly" } ] } ``` Template: ```html {{#stooges}} <b>{{name}}</b> {{/stooges}} ``` Output: ```html <b>Moe</b> <b>Larry</b> <b>Curly</b> ``` When looping over an array of strings, a `.` can be used to refer to the current item in the list. View: ```json { "musketeers": ["Athos", "Aramis", "Porthos", "D'Artagnan"] } ``` Template: ```html {{#musketeers}} * {{.}} {{/musketeers}} ``` Output: ```html * Athos * Aramis * Porthos * D'Artagnan ``` If the value of a section variable is a function, it will be called in the context of the current item in the list on each iteration. 
View: ```js { "beatles": [ { "firstName": "John", "lastName": "Lennon" }, { "firstName": "Paul", "lastName": "McCartney" }, { "firstName": "George", "lastName": "Harrison" }, { "firstName": "Ringo", "lastName": "Starr" } ], "name": function () { return this.firstName + " " + this.lastName; } } ``` Template: ```html {{#beatles}} * {{name}} {{/beatles}} ``` Output: ```html * John Lennon * Paul McCartney * George Harrison * Ringo Starr ``` #### Functions If the value of a section key is a function, it is called with the section's literal block of text, un-rendered, as its first argument. The second argument is a special rendering function that uses the current view as its view argument. It is called in the context of the current view object. View: ```js { "name": "Tater", "bold": function () { return function (text, render) { return "<b>" + render(text) + "</b>"; } } } ``` Template: ```html {{#bold}}Hi {{name}}.{{/bold}} ``` Output: ```html <b>Hi Tater.</b> ``` ### Inverted Sections An inverted section opens with `{{^section}}` instead of `{{#section}}`. The block of an inverted section is rendered only if the value of that section's tag is `null`, `undefined`, `false`, *falsy* or an empty list. View: ```json { "repos": [] } ``` Template: ```html {{#repos}}<b>{{name}}</b>{{/repos}} {{^repos}}No repos :({{/repos}} ``` Output: ```html No repos :( ``` ### Comments Comments begin with a bang and are ignored. The following template: ```html <h1>Today{{! ignore me }}.</h1> ``` Will render as follows: ```html <h1>Today.</h1> ``` Comments may contain newlines. ### Partials Partials begin with a greater than sign, like {{> box}}. Partials are rendered at runtime (as opposed to compile time), so recursive partials are possible. Just avoid infinite loops. They also inherit the calling context. Whereas in ERB you may have this: ```html+erb <%= partial :next_more, :start => start, :size => size %> ``` Mustache requires only this: ```html {{> next_more}} ``` Why? Because the `next_more.mustache` file will inherit the `size` and `start` variables from the calling context. In this way you may want to think of partials as includes, imports, template expansion, nested templates, or subtemplates, even though those aren't literally the case here. For example, this template and partial: base.mustache: <h2>Names</h2> {{#names}} {{> user}} {{/names}} user.mustache: <strong>{{name}}</strong> Can be thought of as a single, expanded template: ```html <h2>Names</h2> {{#names}} <strong>{{name}}</strong> {{/names}} ``` In mustache.js an object of partials may be passed as the third argument to `Mustache.render`. The object should be keyed by the name of the partial, and its value should be the partial text. ```js Mustache.render(template, view, { user: userTemplate }); ``` ### Custom Delimiters Custom delimiters can be used in place of `{{` and `}}` by setting the new values in JavaScript or in templates. #### Setting in JavaScript The `Mustache.tags` property holds an array consisting of the opening and closing tag values. 
Set custom values by passing a new array of tags to `render()`, which gets honored over the default values, or by overriding the `Mustache.tags` property itself: ```js var customTags = [ '<%', '%>' ]; ``` ##### Pass Value into Render Method ```js Mustache.render(template, view, {}, customTags); ``` ##### Override Tags Property ```js Mustache.tags = customTags; // Subsequent parse() and render() calls will use customTags ``` #### Setting in Templates Set Delimiter tags start with an equals sign and change the tag delimiters from `{{` and `}}` to custom strings. Consider the following contrived example: ```html+erb * {{ default_tags }} {{=<% %>=}} * <% erb_style_tags %> <%={{ }}=%> * {{ default_tags_again }} ``` Here we have a list with three items. The first item uses the default tag style, the second uses ERB style as defined by the Set Delimiter tag, and the third returns to the default style after yet another Set Delimiter declaration. According to [ctemplates](https://htmlpreview.github.io/?https://raw.githubusercontent.com/OlafvdSpek/ctemplate/master/doc/howto.html), this "is useful for languages like TeX, where double-braces may occur in the text and are awkward to use for markup." Custom delimiters may not contain whitespace or the equals sign. ## Pre-parsing and Caching Templates By default, when mustache.js first parses a template it keeps the full parsed token tree in a cache. The next time it sees that same template it skips the parsing step and renders the template much more quickly. If you'd like, you can do this ahead of time using `mustache.parse`. ```js Mustache.parse(template); // Then, sometime later. Mustache.render(template, view); ``` ## Command line tool mustache.js is shipped with a Node.js based command line tool. It might be installed as a global tool on your computer to render a mustache template of some kind ```bash $ npm install -g mustache $ mustache dataView.json myTemplate.mustache > output.html ``` also supports stdin. ```bash $ cat dataView.json | mustache - myTemplate.mustache > output.html ``` or as a package.json `devDependency` in a build process maybe? ```bash $ npm install mustache --save-dev ``` ```json { "scripts": { "build": "mustache dataView.json myTemplate.mustache > public/output.html" } } ``` ```bash $ npm run build ``` The command line tool is basically a wrapper around `Mustache.render` so you get all the features. If your templates use partials you should pass paths to partials using `-p` flag: ```bash $ mustache -p path/to/partial1.mustache -p path/to/partial2.mustache dataView.json myTemplate.mustache ``` ## Plugins for JavaScript Libraries mustache.js may be built specifically for several different client libraries, including the following: - [jQuery](http://jquery.com/) - [MooTools](http://mootools.net/) - [Dojo](http://www.dojotoolkit.org/) - [YUI](http://developer.yahoo.com/yui/) - [qooxdoo](http://qooxdoo.org/) These may be built using [Rake](http://rake.rubyforge.org/) and one of the following commands: ```bash $ rake jquery $ rake mootools $ rake dojo $ rake yui3 $ rake qooxdoo ``` ## TypeScript Since the source code of this package is written in JavaScript, we follow the [TypeScript publishing docs](https://www.typescriptlang.org/docs/handbook/declaration-files/publishing.html) preferred approach by having type definitions available via [@types/mustache](https://www.npmjs.com/package/@types/mustache). ## Testing In order to run the tests you'll need to install [Node.js](http://nodejs.org/). 
You also need to install the sub module containing [Mustache specifications](http://github.com/mustache/spec) in the project root. ```bash $ git submodule init $ git submodule update ``` Install dependencies. ```bash $ npm install ``` Then run the tests. ```bash $ npm test ``` The test suite consists of both unit and integration tests. If a template isn't rendering correctly for you, you can make a test for it by doing the following: 1. Create a template file named `mytest.mustache` in the `test/_files` directory. Replace `mytest` with the name of your test. 2. Create a corresponding view file named `mytest.js` in the same directory. This file should contain a JavaScript object literal enclosed in parentheses. See any of the other view files for an example. 3. Create a file with the expected output in `mytest.txt` in the same directory. Then, you can run the test with: ```bash $ TEST=mytest npm run test-render ``` ### Browser tests Browser tests are not included in `npm test` as they run for too long, although they are ran automatically on Travis when merged into master. Run browser tests locally in any browser: ```bash $ npm run test-browser-local ``` then point your browser to `http://localhost:8080/__zuul` ## Who uses mustache.js? An updated list of mustache.js users is kept [on the Github wiki](https://github.com/janl/mustache.js/wiki/Beard-Competition). Add yourself or your company if you use mustache.js! ## Contributing mustache.js is a mature project, but it continues to actively invite maintainers. You can help out a high-profile project that is used in a lot of places on the web. No big commitment required, if all you do is review a single [Pull Request](https://github.com/janl/mustache.js/pulls), you are a maintainer. And a hero. ### Your First Contribution - review a [Pull Request](https://github.com/janl/mustache.js/pulls) - fix an [Issue](https://github.com/janl/mustache.js/issues) - update the [documentation](https://github.com/janl/mustache.js#usage) - make a website - write a tutorial ## Thanks mustache.js wouldn't kick ass if it weren't for these fine souls: * Chris Wanstrath / defunkt * Alexander Lang / langalex * Sebastian Cohnen / tisba * J Chris Anderson / jchris * Tom Robinson / tlrobinson * Aaron Quint / quirkey * Douglas Crockford * Nikita Vasilyev / NV * Elise Wood / glytch * Damien Mathieu / dmathieu * Jakub Kuźma / qoobaa * Will Leinweber / will * dpree * Jason Smith / jhs * Aaron Gibralter / agibralter * Ross Boucher / boucher * Matt Sanford / mzsanford * Ben Cherry / bcherry * Michael Jackson / mjackson * Phillip Johnsen / phillipj * David da Silva Contín / dasilvacontin Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor). This package exports standard `inherits` from node.js `util` module in node environment, but also provides alternative browser-friendly implementation through [browser field](https://gist.github.com/shtylman/4339901). Alternative implementation is a literal copy of standard one located in standalone module to avoid requiring of `util`. It also has a shim for old browsers with no `Object.create` support. While keeping you sure you are using standard `inherits` implementation in node.js environment, it allows bundlers such as [browserify](https://github.com/substack/node-browserify) to not include full `util` package to your client code if all you need is just `inherits` function. 
It is worth it because the browser shim for the `util` package is large and `inherits` is often the only function you need from it.

It's recommended to use this package instead of `require('util').inherits` for any code that might run not only in node.js but also in the browser.

## usage

```js
var inherits = require('inherits');
// then use exactly as the standard one
```

## note on version ~1.0

Version ~1.0 had completely different motivation and is compatible neither with 2.0 nor with the standard node.js `inherits`.

If you are using version ~1.0 and planning to switch to ~2.0, be careful:

* the new version uses `super_` instead of `super` for referencing the superclass
* the new version overwrites the current prototype while the old one preserves any existing fields on it

bs58
====

[![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58)

JavaScript component to compute base 58 encoding. This encoding is typically used for cryptocurrencies such as Bitcoin.

**Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library.

Install
-------

    npm i --save bs58

API
---

### encode(input)

`input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`.

**example**:

```js
const bs58 = require('bs58')

const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex')
const address = bs58.encode(bytes)
console.log(address)
// => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS
```

### decode(input)

`input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html).

**example**:

```js
const bs58 = require('bs58')

const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS'
const bytes = bs58.decode(address)
console.log(bytes.toString('hex'))
// => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187
```

Hack / Test
-----------

Uses JavaScript standard style. Read more:

[![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard)

Credits
-------

- [Mike Hearn](https://github.com/mikehearn) for original Java implementation
- [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript
- [Stephan Pair](https://github.com/gasteve) for buffer improvements
- [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib
- [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency

License
-------

MIT
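Putting `encode` and `decode` together, here is a quick round-trip check (a minimal usage sketch, not from the original README):

```js
const bs58 = require('bs58')

// Encode some bytes, decode the result, and confirm the round trip is lossless
const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex')
const address = bs58.encode(bytes)
const decoded = bs58.decode(address)

console.log(decoded.equals(bytes)) // => true
```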
NEARFoundation_transaction-tracking-app-original
.vscode settings.json README.md backend data benchmarkTheSqlQueries.ts cleanTheSql.sh csvToJson.ts defineTransactionHashesInSql.ts seedLocalDatabase.sh tableDefinitions.sql updateLocalSeedFile.sh docker-compose.yml dropActionsAndTasksAndTypes.ts jest.config.ts package.json src helpers TxTypes 2FA - Remove 2FA.sql 2FA - Set up 2FA.sql DAOs - Add proposal to DAO.sql DAOs - Approve DAO proposal.sql DAOs - Create DAO.sql DAOs - Receive funds from DAO.sql DAOs - Send funds to DAO.sql DeFi - Activate Farm.sql DeFi - Add single-stake Ref.sql DeFi - Add to liquidity pool.sql DeFi - Claim reward from farm.sql DeFi - Deactivate Farm.sql DeFi - Deposit.sql DeFi - Regular 2-pair pool Ref Swap.sql DeFi - Remove from liquidity pool.sql DeFi - Remove single-stake Ref.sql DeFi - Withdraw reward from farm.sql Fungible tokens - Receive token.sql Fungible tokens - Send token.sql Generate account & claim name.sql Linkdrops - Create account and claim linkdrop.sql Linkdrops - Send single linkdrop.sql Lockups - Cliff.sql Lockups - Linear release.sql Multisend - Deposit & send from wallet.sql Multisend - Deposit to app balance.sql Multisend - Send from app balance.sql Multisig - Confirm and execute request.sql Non-Fungible tokens - Buy & Receive NFT.sql Non-Fungible tokens - Send NFT.sql Rainbow bridge - Send NEAR from NEAR to Aurora.sql Rainbow bridge - Send NEAR from NEAR to Ethereum mainnet.sql Receive NEAR.sql Send NEAR.sql Staking - Finalize withdraw from validator.sql Staking - Initiate unstake release from validator.sql Staking - Stake with validator.sql WRAP NEAR.sql addDefaultTypesTx.ts config.ts errors.ts formatAmount.ts getCurrency.test.ts getCurrency.ts nearConnection.ts syncedCron.ts updateTransactions.test.ts updateTransactions.ts index.ts models PoolsCurrencies.ts TxActions.ts TxTasks.ts TxTypes.ts routes collector.routes.ts services addTasks.ts deleteAccountData.ts getAccounts.ts getTransactions.ts getTypes.ts test_helpers internal defineTransactionHashesInSql.ts jsonToCsv.ts testData.sql updateTestData.ts updateTestData.sh tsconfig.eslint.json tsconfig.json docker-compose.yml docker backend build_container.sh build_tag_push.sh build_tag_push_ecr.sh push_container.sh run_container.sh tag_container.sh frontend build_container.sh frontend.js package.json push_container.sh run_container.sh tag_container.sh frontend package.json src __mocks__ fileMock.js assets logo-black.svg logo-white.svg helpers config.ts csv.ts errors.ts localStorage.ts transactions.ts utils.ts index.html wallet login index.html tsconfig.eslint.json tsconfig.json imagedefinitions-us-east-1-dev-transactionsbackend.json imagedefinitions-us-east-1-dev-transactionsfrontend.json jest.config.ts package.json shared config.ts helpers datetime.test.ts datetime.ts logging.ts precision.test.ts precision.ts statusCodes.ts strings.ts package.json tsconfig.eslint.json tsconfig.json types csvToJson.d.ts index.d.ts jsonToCsv.ts tsconfig.eslint.json tsconfig.json
# 🛑 IMPORTANT!

Please see the newer repo at https://github.com/NEARFoundation/transaction-tracking-app instead of this repo, which is legacy.

# NEAR Transaction Tracker App (also known as "Transactions Accounting Report")

Transaction Tracker App (TTA) produces a report that helps teams across the ecosystem to see a simplified view of all transactions over a certain period (e.g. the Finance/Legal/Operations team uses it to reconcile their transactions and stay compliant).

## What it does

Ledgers like https://explorer.near.org don't always provide a simple view of when money changes hands (i.e. NEAR tokens or fungible tokens from one NEAR account to another).

TTA allows you to specify a NEAR mainnet account ID and see a table of all transactions involving the transfer of NEAR tokens or fungible tokens into or out of that account. You can export the table as CSV.

When you specify one or more NEAR account IDs, those account IDs get saved to your browser's localStorage. Additionally, the server starts downloading all transactions (from the private indexer) for those account IDs, processes them, and saves the data into TTA's Mongo database, which is what powers the table you see in your browser. The downloads can take a while (because the tables are huge), and a cron job keeps track of their progress.

---

# Overview

- The frontend is a React app in the "frontend" folder.
- `/frontend/src/index.html` is a great place to start exploring. Note that it loads in `/frontend/src/index.tsx`, where you can learn how the frontend connects to the NEAR blockchain.
- The backend is an Express app (with cron jobs and a Mongo database) in the "backend" folder.
- The backend relies on a private [clone](https://github.com/near/near-indexer-for-explorer/) of the [NEAR Explorer](https://explorer.near.org) indexer, a large PostgreSQL database (certain tables are ~1 TB). We use our own clone of NEAR Explorer (on a bare metal Hetzner server) instead of using the public credentials of the actual NEAR Explorer because the complicated queries take too long and time out.
- There is also a folder called "shared" for code that both apps use.
- Tests use [jest](https://jestjs.io/docs/getting-started#using-typescript). You can run them via `yarn test`.

---

# Getting Started

To run this project locally (as an operations engineer, or a developer looking to deliver code):

- Get a Nix-based development environment (Linux, Mac, WSL).
- Get and set up Docker for your environment.
- Clone this repo and get into the project.
- Run `docker-compose up`
- The app will be available at: http://localhost:8085/ (frontend) and http://localhost:8086/ (backend)

To run this project locally (as a developer running a fully local development environment):

1. Make sure you've installed [Node.js](https://nodejs.org/en/download/package-manager/) ≥ 18. `nvm use 18`.
1. Install and start [Mongo](https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-os-x/) using the instructions in its own section below.
1. `cp frontend/.env.development frontend/.env.development.local && cp backend/.env.development backend/.env.development.local`
1. Edit the values for each of those local env files.
   If you set REACT_APP_ALLOW_DELETING_FROM_DATABASE to "true" in `frontend/.env.development.local` and ALLOW_DELETING_FROM_DATABASE to "true" in `backend/.env.development.local`, you will see a button in the frontend that allows you to delete records from the database, which is useful when you are manually testing whether transaction processing is working after editing the SQL queries.
   - Similarly, if you ever want to nuke your local Mongo cache, you can run `yarn drop_actions_and_tasks_and_types`.
1. Install PostgreSQL:

```bash
brew install postgresql
brew services start postgresql
psql postgres
\du
CREATE ROLE testuser WITH LOGIN PASSWORD 'secret';
ALTER ROLE testuser CREATEDB;
CREATE ROLE dev WITH LOGIN PASSWORD 'public';
ALTER ROLE dev CREATEDB;
\q
psql postgres -U testuser
CREATE DATABASE tta_test_db;
GRANT ALL PRIVILEGES ON DATABASE tta_test_db TO testuser;
\list
\q
psql postgres -U dev
CREATE DATABASE local_explorer;
GRANT ALL PRIVILEGES ON DATABASE local_explorer TO dev;
\list
\q
```

1. Install dependencies for frontend and backend: `yarn install_all`
1. Seed the local dev database via `yarn seed`.
1. (optional) `POSTGRESQL_CONNECTION_STRING=___ ./backend/test_helpers/updateTestData.sh` (where `___` is the mainnet Postgres credentials string)
1. `yarn test`
1. Start the backend: `yarn backend_dev`
1. In a second terminal, start the frontend: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`). TODO: Check whether https://www.npmjs.com/package/concurrently would help.
1. Visit http://localhost:1234/ in the browser.

Go ahead and play with the app and the code. As you make frontend code changes, the app will automatically reload.

## Setting up Mongo and MongoDB Shell

```
brew tap mongodb/brew
brew update -v
brew install mongodb-community@6.0
brew services start mongodb-community@6.0
brew install mongosh
mongosh
use admin
show databases
db.createUser( { user: "MongoTestDbUser", pwd: "MongoTestDbSecretPhrase", roles: [ { role: "readWrite", db: "test" } ] } )
db.createUser( { user: "MongoDbUser", pwd: "MongoDbSecretPhrase", roles: [ { role: "userAdminAnyDatabase", db: "admin" } ] } )
show users
exit
```

https://medium.com/@haxzie/getting-started-with-mongodb-setting-up-admin-and-user-accounts-4fdd33687741 was useful.

# Tests

`*.test.ts` files live right next to whatever file they are testing. `backend/src/helpers/updateTransactions.test.ts` contains tests about the complicated SQL queries that process each of the ~40 transaction types.

To test that `updateTransactions` works correctly, first ensure that `backend/test_helpers/expectedOutput.csv` contains the values that you want. (Ideally we will have more than 1 row per transaction type.)

Then run `yarn update_test_data`. This command will download all of the real-world data from the mainnet indexer Postgres database into SQL files that the automated tests will use when seeding your local database (the mock indexer). (See https://stackoverflow.com/a/20909045/ for how the update_test_data script works.) To avoid downloading terabytes of data from the remote database (private indexer), the scripts look in `expectedOutput.csv` to see exactly which transaction hashes matter.

The output of `updateTestData.sh` is `backend/test_helpers/internal/testData.sql`, which is how tests seed the local PostgreSQL test database (mock indexer).

The inputs for `backend/src/helpers/updateTransactions.test.ts` come from `expectedOutput.csv` (its transaction hashes and account IDs), and of course so do the expected outputs.
Then run `yarn test` to run the tests. # Updating seed files for local database and test database 1. Visit https://docs.google.com/spreadsheets/d/1g3yymiHP2QJqwLrJdwma8gu-J92XCbZR-DDpObmu_2o/edit#gid=726351924 - The FLO team has decided that this is the official list of transaction types. 1. Sort the rows by "keep" descending, "phase" ascending, "txType" ascending. - (We can't try to skip this step by creating a sorted "filter view" in Google Sheets because the "Download" step doesn't honor filter views.) 1. For rows where "keep" === 1, search for occurrences of "E+". If any numeric values are using this kind of (exponential) notation, you need to correct the cell (write out the full number instead). - You might need to prepend the value with a single quote ('). 1. File > Download > .ods (choose a temporary folder somewhere). - (We need this extra step before CSV because Google Sheets doesn't save the double-quotes correctly.) 1. Open the .ods file in LibreOffice. 1. Delete the rows where "keep" is not "1". 1. Delete these columns: "keep", "errors", "phase". 1. File > Save As > Text CSV (.csv) 1. Choose to save to `backend/test_helpers/expectedOutput.csv` 1. Check the boxes for "Save cell content as shown" and "Quote all text cells" # TODO: Explain how to get permissions. # Other Helpful Docs - [react](https://reactjs.org/) - [create-near-app](https://github.com/near/create-near-app) - [near accounts](https://docs.near.org/docs/concepts/account) - [near wallet](https://wallet.testnet.near.org/) - [near-cli](https://github.com/near/near-cli) - [gh-pages](https://github.com/tschaub/gh-pages)
OSTOLEX-Technologies_room-contract
.gitpod.yml .idea modules.xml vcs.xml README.md contract Cargo.toml README.md build.sh deploy.sh src account.rs enumerable.rs lib.rs storage_tracker.rs integration-tests Cargo.toml src tests.rs package-lock.json package.json
# room-contract

# Hello NEAR Contract

The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network.

```rust
use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::{log, near_bindgen};

const DEFAULT_GREETING: &str = "Hello";

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize)]
pub struct Contract {
    greeting: String,
}

impl Default for Contract {
    fn default() -> Self {
        Self { greeting: DEFAULT_GREETING.to_string() }
    }
}

#[near_bindgen]
impl Contract {
    // Public: Returns the stored greeting, defaulting to 'Hello'
    pub fn get_greeting(&self) -> String {
        return self.greeting.clone();
    }

    // Public: Takes a greeting, such as 'howdy', and records it
    pub fn set_greeting(&mut self, greeting: String) {
        // Record a log permanently to the blockchain!
        log!("Saving greeting {}", greeting);
        self.greeting = greeting;
    }
}
```

<br />

# Quickstart

1. Make sure you have installed [rust](https://www.rust-lang.org/).
2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup)

<br />

## 1. Build and Deploy the Contract

You can automatically compile and deploy the contract to the NEAR testnet by running:

```bash
./deploy.sh
```

Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed:

```bash
cat ./neardev/dev-account
# e.g. dev-1659899566943-21539992274727
```

<br />

## 2. Retrieve the Greeting

`get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**!

```bash
# Use near-cli to get the greeting
near view <dev-account> get_greeting
```

<br />

## 3. Store a New Greeting

`set_greeting` changes the contract's state, which makes it a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction.

```bash
# Use near-cli to set a new greeting
near call <dev-account> set_greeting '{"greeting":"howdy"}' --accountId <dev-account>
```

**Tip:** If you would like to call `set_greeting` using your own account, first log in to NEAR using:

```bash
# Use near-cli to log in to your NEAR account
near login
```

and then use the logged account to sign the transaction: `--accountId <your-account>`.
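For example, assuming your own account is `alice.testnet` (a placeholder name) and the contract still lives at the dev account from `neardev/dev-account`, the call would look like this:

```bash
# Hypothetical account names: replace <dev-account> and alice.testnet with your own
near call <dev-account> set_greeting '{"greeting":"howdy"}' --accountId alice.testnet
```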
hwnprsd_near-scaffold
.gitpod.yml README.md contract Cargo.toml README.md src lib.rs frontend App.js __mocks__ fileMock.js assets css global.css img logo-black.svg logo-white.svg js near config.js utils.js index.html index.js integration-tests package.json src config.ts main.ava.ts package.json
near-blank-project ================== This [React] app was initialized with [create-near-app]. Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `yarn install` 3. Run the local development server: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. Exploring The Code ================== 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/assets/js/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `yarn run test`. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contract gets deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `yarn install`, but for best ergonomics you may want to install it globally: yarn global add near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx`. Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Step 3: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. builds & deploys smart contract to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[React]: https://reactjs.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/docs/concepts/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages near-blank-project Smart Contract ================== A [smart contract] written in [Rust] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install Rust with [correct target] Exploring The Code ================== 1. The main smart contract code lives in `src/lib.rs`. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [Rust]: https://www.rust-lang.org/ [create-near-app]: https://github.com/near/create-near-app [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html
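The contract README above points out that the `./test` script runs `cargo` with the `--nocapture` flag so that debug output is visible. A tiny, generic Rust illustration of why that matters (not taken from this contract):

```rust
// Generic illustration of the --nocapture flag: cargo hides stdout from passing
// tests by default, so this println! is only shown when tests run with
// `cargo test -- --nocapture` (which is what the ./test script does here).
#[cfg(test)]
mod tests {
    #[test]
    fn debug_output_is_visible_with_nocapture() {
        let answer = 2 + 2;
        println!("debug info: answer = {}", answer);
        assert_eq!(answer, 4);
    }
}
```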
Ouch-Metaverse-Dao_quickjs-rust-near
.github workflows main.yml Cargo.toml README.md build.rs buildanddeploy.sh e2e e2e.test.js music.test.js musicscript.js meta-dce.json package.json src jslib.rs lib.rs tests musicscript.rs testenv.rs viewaccesscontrol.rs wasimock.rs web4.rs web4 types.rs webappbundle.rs test.sh web4 app.html.js audio audio.js audiobuffertowav.js audioworkletprocessor.js index.html main.js rollup.config.js style.css.js synth.js ui progressbar.js webassemblymusic musictemplate.js musictemplate.test.js sample_music.js sample_music2.js
Rust WebAssembly smart contract for NEAR with JavaScript runtime ================================================================ This is a proof of concept that embeds QuickJS alongside https://github.com/near/near-sdk-rs so that custom JavaScript code can be executed inside a smart contract written in Rust. First of all, have a look at the videos where I present the project: https://www.youtube.com/watch?v=JBZEr__pid0&list=PLv5wm4YuO4IwVNrSsYxeqKrtQZYRML03Z The QuickJS runtime is compiled from https://github.com/petersalomonsen/quickjs-wasm-near The contract has the following functions: - `run_script` accepts JavaScript as text and compiles it on the fly. - `run_bytecode` runs JS pre-compiled into the QuickJS bytecode format. Send the pre-compiled bytecode as a base64 string. See https://github.com/petersalomonsen/quickjs-wasm-near/blob/master/web/compiler/compile.spec.js for examples of compiling JS to QuickJS bytecode. - `submit_script` submits and stores JavaScript for running later. - `run_script_for_account` runs the script stored by an account and returns the integer value returned by the script. - `run_script_for_account_no_return` runs the script stored by an account and does not return anything unless the script calls `env.value_return`. For building and deploying the contract, have a look at [buildanddeploy.sh](./buildanddeploy.sh). # Calling the deployed contract Here are some examples from a deployment to the testnet account `dev-1650299983789-21350249865305`. Test running JavaScript as text: ``` near call dev-1650299983789-21350249865305 --accountId=psalomo.testnet run_script '{"script": "(function() {return 5*33+22;})();" }' ``` Test running bytecode (which is compiled from `JSON.parse('{"a": 222}').a+3`): ``` near call dev-1650299983789-21350249865305 --accountId=psalomo.testnet run_bytecode '{"bytecodebase64": "AgQKcGFyc2UUeyJhIjogMjIyfQJhGDxldmFsc291cmNlPg4ABgCgAQABAAMAABsBogEAAAA4mwAAAELeAAAABN8AAAAkAQBB4AAAALidzSjCAwEA" }' ``` # Testing Trying to run tests with `wasm32` targets will not work out of the box. As you will see from running the command below, it fails when trying to run the compiled test file: `cargo test --target=wasm32-wasi` You can, however, run the Wasm file it produces with a WebAssembly runtime like [wasmtime](https://wasmtime.dev), [wasmer](https://wasmer.io/) or [wasm3](https://github.com/wasm3/wasm3/). Have a look at [test.sh](./test.sh) and try running it; it outputs results just like normal Rust tests. # Web4 and a WebAssembly Music showcase The web application in the [web4](./web4) folder is a vanilla JS Web Component application for uploading music written in JavaScript, playing it back, and accepting JSON parameters for configuring the playback. It also contains functionality for exporting to WAV. See the video playlist above for a demo. The music to be played back is fetched in a view method call, and to control who can access this view method, the JSON parameters payload is signed using the caller's private key. The contract then verifies the signature against the caller's public key, which was stored in a transaction before the view method call. The web application is packaged into a single HTML file using rollup, and the final bundle is embedded into the Rust sources encoded as a base64 string. 
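The functions listed above store per-account scripts and hand them to the embedded QuickJS runtime. Purely as an illustration of the storage half (a hypothetical sketch assuming near-sdk 4.x; this is not the repository's implementation, which also compiles and executes the stored scripts), such an interface might be laid out like this:

```rust
// Hypothetical sketch of the per-account script storage described above.
// NOT this repository's code: the real contract also hands the stored source
// (or QuickJS bytecode) to the embedded QuickJS runtime and executes it.
use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::collections::LookupMap;
use near_sdk::{env, near_bindgen, AccountId};

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize)]
pub struct ScriptStore {
    scripts: LookupMap<AccountId, String>,
}

impl Default for ScriptStore {
    fn default() -> Self {
        Self { scripts: LookupMap::new(b"s".to_vec()) }
    }
}

#[near_bindgen]
impl ScriptStore {
    /// Store JavaScript source under the calling account (cf. `submit_script`).
    pub fn submit_script(&mut self, script: String) {
        self.scripts.insert(&env::signer_account_id(), &script);
    }

    /// Fetch the script stored by `account_id`. The real contract would execute
    /// it with QuickJS and return the script's result instead of the source.
    pub fn script_for_account(&self, account_id: AccountId) -> Option<String> {
        self.scripts.get(&account_id)
    }
}
```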
# TODO - **DONE** Implement (mock) WASI methods in a linkable library so that WAT file does not have to be edited manually - **DONE** Integration/Unit testing support for Wasm32 target ( which is not supported with near-sdk-rs, see https://github.com/near/near-sdk-rs/issues/467 ) - **DONE** Running tests - **DONE** Displaying errors (needs a panic hook) - **DONE** Minimum NEAR mock env - Local simulation in browser/node Wasm runtime with mocked NEAR env in JavaScript - **DONE** End to End tests (testnet) - **DONE** Expose some NEAR environment functions to JS runtime - **DONE** `env.value_return` - **DONE** `env.input` (no need to load into register first) - **DONE** `env.signer_account_id` (no need to load into register first) - **DONE** Web4 hosting showcase - NFT implementation configurable with JavaScript - Implement Web interface for copying base64 encoded bytecode to clipboard (in https://github.com/petersalomonsen/quickjs-wasm-near)
farukbaktas_Near_Web3
README.md as-pect.config.js asconfig.json package.json scripts 1.dev-deploy.sh 2.use-contract.sh 3.cleanup.sh README.md src as_types.d.ts simple __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts singleton __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts
# `near-sdk-as` Starter Kit This is a good project to use as a starting point for your AssemblyScript project. ## Samples This repository includes a complete project structure for AssemblyScript contracts targeting the NEAR platform. The example here is very basic. It's a simple contract demonstrating the following concepts: - a single contract - the difference between `view` vs. `change` methods - basic contract storage There are 2 AssemblyScript contracts in this project, each in their own folder: - **simple** in the `src/simple` folder - **singleton** in the `src/singleton` folder ### Simple We say that an AssemblyScript contract is written in the "simple style" when the `index.ts` file (the contract entry point) includes a series of exported functions. In this case, all exported functions become public contract methods. ```ts // return the string 'hello world' export function helloWorld(): string {} // read the given key from account (contract) storage export function read(key: string): string {} // write the given value at the given key to account (contract) storage export function write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} ``` ### Singleton We say that an AssemblyScript contract is written in the "singleton style" when the `index.ts` file (the contract entry point) has a single exported class (the name of the class doesn't matter) that is decorated with `@nearBindgen`. In this case, all methods on the class become public contract methods unless marked `private`. Also, all instance variables are stored as a serialized instance of the class under a special storage key named `STATE`. AssemblyScript uses JSON for storage serialization (as opposed to Rust contracts which use a custom binary serialization format called borsh). ```ts @nearBindgen export class Contract { // return the string 'hello world' helloWorld(): string {} // read the given key from account (contract) storage read(key: string): string {} // write the given value at the given key to account (contract) storage @mutateState() write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} } ``` ## Usage ### Getting started (see below for video recordings of each of the following steps) INSTALL `NEAR CLI` first like this: `npm i -g near-cli` 1. clone this repo to a local folder 2. run `yarn` 3. run `./scripts/1.dev-deploy.sh` 3. run `./scripts/2.use-contract.sh` 4. run `./scripts/2.use-contract.sh` (yes, run it to see changes) 5. run `./scripts/3.cleanup.sh` ### Videos **`1.dev-deploy.sh`** This video shows the build and deployment of the contract. [![asciicast](https://asciinema.org/a/409575.svg)](https://asciinema.org/a/409575) **`2.use-contract.sh`** This video shows contract methods being called. You should run the script twice to see the effect it has on contract state. [![asciicast](https://asciinema.org/a/409577.svg)](https://asciinema.org/a/409577) **`3.cleanup.sh`** This video shows the cleanup script running. Make sure you add the `BENEFICIARY` environment variable. The script will remind you if you forget. 
```sh export BENEFICIARY=<your-account-here> # this account receives contract account balance ``` [![asciicast](https://asciinema.org/a/409580.svg)](https://asciinema.org/a/409580) ### Other documentation - See `./scripts/README.md` for documentation about the scripts - Watch this video where Willem Wyndham walks us through refactoring a simple example of a NEAR smart contract written in AssemblyScript https://youtu.be/QP7aveSqRPo ``` There are 2 "styles" of implementing AssemblyScript NEAR contracts: - the contract interface can either be a collection of exported functions - or the contract interface can be the methods of a an exported class We call the second style "Singleton" because there is only one instance of the class which is serialized to the blockchain storage. Rust contracts written for NEAR do this by default with the contract struct. 0:00 noise (to cut) 0:10 Welcome 0:59 Create project starting with "npm init" 2:20 Customize the project for AssemblyScript development 9:25 Import the Counter example and get unit tests passing 18:30 Adapt the Counter example to a Singleton style contract 21:49 Refactoring unit tests to access the new methods 24:45 Review and summary ``` ## The file system ```sh ├── README.md # this file ├── as-pect.config.js # configuration for as-pect (AssemblyScript unit testing) ├── asconfig.json # configuration for AssemblyScript compiler (supports multiple contracts) ├── package.json # NodeJS project manifest ├── scripts │   ├── 1.dev-deploy.sh # helper: build and deploy contracts │   ├── 2.use-contract.sh # helper: call methods on ContractPromise │   ├── 3.cleanup.sh # helper: delete build and deploy artifacts │   └── README.md # documentation for helper scripts ├── src │   ├── as_types.d.ts # AssemblyScript headers for type hints │   ├── simple # Contract 1: "Simple example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 1 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 1 │   ├── singleton # Contract 2: "Singleton-style example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 2 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 2 │   ├── tsconfig.json # Typescript configuration │   └── utils.ts # common contract utility functions └── yarn.lock # project manifest version lock ``` You may clone this repo to get started OR create everything from scratch. Please note that, in order to create the AssemblyScript and tests folder structure, you may use the command `asp --init` which will create the following folders and files: ``` ./assembly/ ./assembly/tests/ ./assembly/tests/example.spec.ts ./assembly/tests/as-pect.d.ts ``` ## Setting up your terminal The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. 
They use the following setup: ```sh # set your terminal up to have 2 windows, A and B like this: ┌─────────────────────────────────┬─────────────────────────────────┐ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ └─────────────────────────────────┴─────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _helper scripts_ ```sh 1.dev-deploy.sh # helper: build and deploy contracts 2.use-contract.sh # helper: call methods on ContractPromise 3.cleanup.sh # helper: delete build and deploy artifacts ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands, take a look at [explainshell.com](https://explainshell.com) ### macOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linux-watch-command#191068)
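For comparison with the "singleton style" described earlier in this README: in Rust the singleton pattern is the default, with one contract struct holding the state. A minimal sketch (illustrative only; the contracts in this project are AssemblyScript, and these method names simply mirror the example above):

```rust
// Illustrative Rust counterpart to the AssemblyScript singleton example above.
// Not part of this project -- shown only to make the comparison with Rust's
// default contract-struct pattern concrete.
use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::{env, near_bindgen};

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize, Default)]
pub struct Contract {}

#[near_bindgen]
impl Contract {
    /// view method: return the string 'hello world'
    pub fn hello_world(&self) -> String {
        "hello world".to_string()
    }

    /// view method: read the given key from contract storage
    pub fn read(&self, key: String) -> Option<String> {
        env::storage_read(key.as_bytes()).map(|v| String::from_utf8_lossy(&v).into_owned())
    }

    /// change method: write the given value at the given key to contract storage
    pub fn write(&mut self, key: String, value: String) {
        env::storage_write(key.as_bytes(), value.as_bytes());
    }
}
```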
mcshel_near_guestbook
.eslintrc.yml .github dependabot.yml workflows deploy.yml tests.yml .gitpod.yml .travis.yml README-Gitpod.md README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts guestbook.spec.ts as_types.d.ts main.ts model.ts tsconfig.json babel.config.js neardev shared-test-staging test.near.json shared-test test.near.json package.json src App.js config.js index.html index.js tests integration App-integration.test.js ui App-ui.test.js
Guest Book ========== [![Build Status](https://travis-ci.com/near-examples/guest-book.svg?branch=master)](https://travis-ci.com/near-examples/guest-book) [![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/near-examples/guest-book) <!-- MAGIC COMMENT: DO NOT DELETE! Everything above this line is hidden on NEAR Examples page --> Sign in with [NEAR] and add a message to the guest book! A starter app built with an [AssemblyScript] backend and a [React] frontend. Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you have Node.js ≥ 12 installed (https://nodejs.org), then use it to install [yarn]: `npm install --global yarn` (or just `npm i -g yarn`) 2. Run the local development server: `yarn && yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Running `yarn dev` will tell you the URL you can visit in your browser to see the app. Exploring The Code ================== 1. The backend code lives in the `/assembly` folder. This code gets deployed to the NEAR blockchain when you run `yarn deploy:contract`. This sort of code-that-runs-on-a-blockchain is called a "smart contract" – [learn more about NEAR smart contracts][smart contract docs]. 2. The frontend code lives in the `/src` folder. [/src/index.html](/src/index.html) is a great place to start exploring. Note that it loads in `/src/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and backend. The backend code gets tested with the [asp] command for running the backend AssemblyScript tests, and [jest] for running frontend tests. You can run both of these at once with `yarn test`. Both contract and client-side code will auto-reload as you change source files. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contracts get deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli -------------------------- You need near-cli installed globally. Here's how: npm install --global near-cli This will give you the `near` [CLI] tool. Ensure that it's installed with: near --version Step 1: Create an account for the contract ------------------------------------------ Visit [NEAR Wallet] and make a new account. You'll be deploying these smart contracts to this new account. Now authorize NEAR CLI for this new account, and follow the instructions it gives you: near login Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'your-account-here!' Step 3: change remote URL if you cloned this repo ------------------------- Unless you forked this repository you will need to change the remote URL to a repo that you have commit access to. This will allow auto deployment to GitHub Pages from the command line. 1) go to GitHub and create a new repository for this project 2) open your terminal and in the root of this project enter the following: $ `git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY.git` Step 4: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. 
builds & deploys smart contracts to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. [NEAR]: https://near.org/ [yarn]: https://yarnpkg.com/ [AssemblyScript]: https://www.assemblyscript.org/introduction.html [React]: https://reactjs.org [smart contract docs]: https://docs.near.org/docs/develop/contracts/overview [asp]: https://www.npmjs.com/package/@as-pect/cli [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/docs/concepts/account [NEAR Wallet]: https://wallet.near.org [near-cli]: https://github.com/near/near-cli [CLI]: https://www.w3schools.com/whatis/whatis_cli.asp [create-near-app]: https://github.com/near/create-near-app [gh-pages]: https://github.com/tschaub/gh-pages
nearprotocol__archived_todomvc
.travis.yml app-spec.md bower.json bower_components bootstrap .bower.json Gruntfile.js README.md bower.json dist css bootstrap-theme.css bootstrap-theme.min.css bootstrap.css bootstrap.min.css fonts glyphicons-halflings-regular.svg js bootstrap.js bootstrap.min.js fonts glyphicons-halflings-regular.svg grunt bs-glyphicons-data-generator.js bs-lessdoc-parser.js bs-raw-files-generator.js sauce_browsers.yml js affix.js alert.js button.js carousel.js collapse.js dropdown.js modal.js popover.js scrollspy.js tab.js tooltip.js transition.js package.json font-roboto .bower.json README.md bower.json roboto.html iron-a11y-keys-behavior .bower.json README.md bower.json demo index.html x-key-aware.html index.html iron-a11y-keys-behavior.html test basic-test.html index.html iron-behaviors .bower.json .travis.yml CONTRIBUTING.md README.md bower.json demo index.html simple-button.html index.html iron-button-state.html iron-control-state.html test active-state.html disabled-state.html focused-state.html index.html test-elements.html iron-checked-element-behavior .bower.json README.md bower.json demo index.html simple-checkbox.html index.html iron-checked-element-behavior.html test basic.html index.html simple-checkbox.html iron-flex-layout .bower.json .travis.yml CONTRIBUTING.md README.md bower.json classes iron-flex-layout.html iron-shadow-flex-layout.html demo demo-snippet.html index.html index.html iron-flex-layout.html iron-form-element-behavior .bower.json README.md bower.json demo index.html simple-element.html simple-form.html index.html iron-form-element-behavior.html iron-icon .bower.json README.md bower.json demo async.html index.html hero.svg index.html iron-icon.html test index.html iron-icon.html iron-icons .bower.json README.md av-icons.html bower.json communication-icons.html demo index.html device-icons.html editor-icons.html hardware-icons.html hero.svg image-icons.html index.html iron-icons.html maps-icons.html notification-icons.html social-icons.html iron-iconset-svg .bower.json .travis.yml CONTRIBUTING.md README.md bower.json demo index.html svg-sample-icons.html index.html iron-iconset-svg.html test index.html iron-iconset-svg.html iron-menu-behavior .bower.json README.md bower.json demo index.html simple-menu.html simple-menubar.html index.html iron-menu-behavior.html iron-menubar-behavior.html test index.html iron-menu-behavior.html iron-menubar-behavior.html test-menu.html test-menubar.html iron-meta .bower.json .travis.yml CONTRIBUTING.md README.md bower.json demo index.html hero.svg index.html iron-meta.html test basic.html index.html iron-meta.html iron-pages .bower.json README.md bower.json demo index.html hero.svg index.html iron-pages.html test attr-for-selected.html basic.html index.html iron-resizable-behavior .bower.json README.md bower.json demo index.html src x-app.html index.html iron-resizable-behavior.html test basic.html index.html iron-resizable-behavior.html test-elements.html iron-selector .bower.json .travis.yml README.md bower.json demo index.html index.html iron-multi-selectable.html iron-selectable.html iron-selection.html iron-selector.html test activate-event.html basic.html content-element.html content.html excluded-local-names.html index.html multi.html next-previous.html selected-attribute.html template-repeat.html iron-validatable-behavior .bower.json README.md bower.json demo cats-only.html index.html validatable-input.html index.html iron-validatable-behavior.html test index.html iron-validatable-behavior.html test-validatable.html jquery .bower.json 
MIT-LICENSE.txt bower.json dist jquery.js jquery.min.js src ajax.js ajax jsonp.js load.js parseJSON.js parseXML.js script.js var nonce.js rquery.js xhr.js attributes.js attributes attr.js classes.js prop.js support.js val.js callbacks.js core.js core access.js init.js parseHTML.js ready.js var rsingleTag.js css.js css addGetHookIf.js curCSS.js defaultDisplay.js hiddenVisibleSelectors.js support.js swap.js var cssExpand.js getStyles.js isHidden.js rmargin.js rnumnonpx.js data.js data Data.js accepts.js var data_priv.js data_user.js deferred.js deprecated.js dimensions.js effects.js effects Tween.js animatedSelector.js event.js event ajax.js alias.js support.js exports amd.js global.js intro.js jquery.js manipulation.js manipulation _evalUrl.js support.js var rcheckableType.js offset.js outro.js queue.js queue delay.js selector-native.js selector-sizzle.js selector.js serialize.js sizzle dist sizzle.js sizzle.min.js traversing.js traversing findFilter.js var rneedsContext.js var arr.js class2type.js concat.js hasOwn.js indexOf.js pnum.js push.js rnotwhite.js slice.js strundefined.js support.js toString.js wrap.js paper-behaviors .bower.json README.md bower.json demo index.html paper-button.html paper-radio-button.html index.html paper-button-behavior.html paper-checked-element-behavior.html paper-inky-focus-behavior.html paper-ripple-behavior.html test index.html paper-button-behavior.html paper-checked-element-behavior.html paper-radio-button-behavior.html paper-ripple-behavior.html test-button.html test-radio-button.html paper-icon-button .bower.json README.md bower.json demo index.html index.html paper-icon-button.html test a11y.html basic.html index.html paper-ripple .bower.json README.md bower.json demo index.html hero.svg index.html paper-ripple.html test index.html paper-ripple.html paper-styles .bower.json README.md bower.json classes global.html shadow-layout.html shadow.html typography.html color.html default-theme.html demo-pages.html demo.css demo index.html index.html paper-styles-classes.html paper-styles.html shadow.html typography.html paper-tabs .bower.json .travis.yml CONTRIBUTING.md README.md bower.json demo index.html paper-tabs-demo-styles.html tabs-with-content-example.html hero.svg index.html paper-tab.html paper-tabs-icons.html paper-tabs.html test attr-for-selected.html basic.html index.html polymer .bower.json LICENSE.txt bower.json polymer-micro.html polymer-mini.html polymer.html prefixfree .bower.json bower.json prefixfree.min.js webcomponentsjs .bower.json CustomElements.js CustomElements.min.js HTMLImports.js HTMLImports.min.js MutationObserver.js MutationObserver.min.js README.md ShadowDOM.js ShadowDOM.min.js bower.json package.json webcomponents-lite.js webcomponents-lite.min.js webcomponents.js webcomponents.min.js changelog.md circle.yml code-of-conduct.md codestyle.md contributing.md cypress.json cypress fixtures example.json integration spec.js plugins index.js support commands.js defaults.js index.js examples angular-dart node_modules todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md web index.html packages browser dart.js angular2 app app.html app.js app.ts bootstrap.js bootstrap.ts services store.js store.ts index.html node_modules angular2 bundles angular2-polyfills.js rxjs bundles Rx.js systemjs dist system.src.js todomvc-app-css index.css todomvc-common base.css package.json readme.md tsconfig.json angular2_es2015 app components app app.component.js app.template.html index.js todo-footer todo-footer.component.js 
todo-footer.template.html todo-header todo-header.component.js todo-header.template.html todo-item todo-item.component.js todo-item.template.html todo-list todo-list.component.js todo-list.template.html main.js main.module.js models todo.model.js pipes index.js trim trim.pipe.js routes.js services todo-store.service.js index.html node_modules todomvc-app-css index.css todomvc-common base.css package.json readme.md webpack.config.js angularjs index.html js app.js controllers todoCtrl.js directives todoEscape.js todoFocus.js services todoStorage.js node_modules angular-resource angular-resource.js angular-route angular-route.js angular angular.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md test config karma.conf.js unit directivesSpec.js todoCtrlSpec.js angularjs_require index.html js app.js controllers todo.js directives todoEscape.js todoFocus.js main.js services todoStorage.js node_modules angular angular.js requirejs require.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md test gruntfile.js package.json unit directives todoEscapeSpec.js todoFocusSpec.js aurelia app.js behaviors focus.js config.js dist app.html app.js aurelia.js main.js todo-item.js todos.html todos.js index.html jspm_packages npm todomvc-app-css@2.0.4 index.css todomvc-common@1.0.2 base.css base.js system-polyfills.js system.js main.js readme.md todo-item.js todos.js backbone index.html js app.js collections todos.js models todo.js routers router.js views app-view.js todo-view.js node_modules backbone.localstorage backbone.localStorage.js backbone backbone.js jquery dist jquery.js todomvc-app-css index.css todomvc-common base.css base.js underscore underscore.js package.json readme.md backbone_marionette css app.css index.html js TodoMVC.Application.js TodoMVC.FilterState.js TodoMVC.Layout.js TodoMVC.Router.js TodoMVC.TodoList.Views.js TodoMVC.Todos.js TodoMVC.js node_modules backbone.localstorage backbone.localStorage.js backbone.marionette lib backbone.marionette.js backbone.radio build backbone.radio.js backbone backbone.js jquery dist jquery.js todomvc-app-css index.css todomvc-common base.css base.js underscore underscore.js package.json readme.md backbone_require index.html js collections todos.js common.js main.js models todo.js routers router.js templates stats.html todos.html views app.js todos.js node_modules backbone.localstorage backbone.localStorage.js backbone backbone.js jquery dist jquery.js requirejs-text text.js requirejs require.js todomvc-app-css index.css todomvc-common base.css base.js underscore underscore.js package.json readme.md binding-scala index.html js src main scala com thoughtworks todo Main.scala node_modules todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md canjs index.html js app.js components todo-app.js models todo.js node_modules canjs-localstorage can.localstorage.js canjs can.jquery.js jquery dist jquery.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md canjs_require index.html js app.js components todo-app.js models todo.js node_modules canjs-localstorage can.localstorage.js canjs amd can.js can component.js compute.js construct.js control.js list.js map.js model.js observe.js route.js util array each.js batch.js bind.js can.js event.js inserted.js jquery.js library.js string.js string deparam.js view.js view bindings.js elements.js live.js mustache.js node_lists.js render.js scanner.js scope.js jquery dist jquery.js requirejs require.js 
todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md closure index.html js app.js compiled.js todomvc model ToDoItem.js ToDoItemStore.js view ClearCompletedControlRenderer.js ItemCountControlRenderer.js ToDoItemControl.js ToDoItemControlRenderer.js ToDoListContainer.js ToDoListContainerRenderer.js node_modules todomvc-app-css index.css todomvc-common base.css base.js package.json plovr.json readme.md cujo TODO.md app controller.js controls strings.js structure.css template.html create cleanTodo.js generateMetadata.js strings.js template.html validateTodo.js footer strings.js template.html list setCompletedClass.js strings.js structure.css template.html main.js run.js bower.json bower_components cola Collection.js Model.js SortedMap.js adapter Array.js LocalStorage.js Object.js ObjectToArray.js Query.js makeJoined.js makeTransformed.js makeTransformedProperties.js adapterResolver.js cola.js collectionAdapterResolver.js comparator byProperty.js compose.js naturalOrder.js reverse.js dom adapter Node.js NodeList.js bindingHandler.js classList.js events.js form.js formElementFinder.js guess.js has.js enqueue.js hub Base.js eventProcessor.js identifier default.js property.js network strategy base.js changeEvent.js collectThenDeliver.js compose.js default.js defaultModel.js minimal.js syncAfterJoin.js syncDataDirectly.js syncModel.js targetFirstItem.js validate.js objectAdapterResolver.js projection assign.js inherit.js relational hashJoin.js propertiesKey.js strategy leftOuterJoin.js transform compose.js configure.js createEnum.js expression.js validation composeValidators.js form formValidationHandler.js curl src curl.js curl debug.js domReady.js loader cjsm11.js plugin README.md _fetchText.js async.js css.js domReady.js i18n.js js.js json.js link.js style.js text.js shim dojo16.js ssjs.js tdd runner.js undefine.js meld aspect cache.js memoize.js trace.js meld.js poly all.js array.js date.js es5-strict.js es5.js function.js json.js lib _base.js object.js poly.js setImmediate.js strict.js string.js support json3.js xhr.js todomvc-common base.css base.js when apply.js callbacks.js cancelable.js debug.js delay.js function.js guard.js keys.js node function.js parallel.js pipeline.js poll.js sequence.js timed.js timeout.js unfold.js unfold list.js when.js wire aop.js builder cram.js connect.js debug.js dojo data.js dijit.js dom.js events.js on.js pubsub.js store.js dom.js dom render.js transform cardinality.js mapClasses.js mapTokenList.js replaceClasses.js toggleClasses.js domReady.js jquery dom.js on.js ui.js lib ComponentFactory.js Container.js WireContext.js WireProxy.js advice.js array.js connection.js context.js dom base.js functional.js graph DirectedGraph.js formatCycles.js tarjan.js trackInflightRefs.js instantiate.js invoker.js lifecycle.js loader.js object.js plugin-base dom.js on.js plugin basePlugin.js defaultPlugins.js priority.js registry.js wirePlugin.js resolver.js scope.js on.js sizzle.js wire.js index.html readme.md dijon index.html js app.js config.js lib dijon-0.5.3.js models TodosModel.js services LocalStorageService.js utils Utils.js views FooterView.js TodoFormView.js TodoListView.js node_modules handlebars dist handlebars.js jquery dist jquery.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md dojo index.html js lib dojo dojo.js main.js todo computed.js empty.js store TodoLocalStorage.js widgets CSSToggleWidget.js Todo.js TodoEnter.js TodoEscape.js TodoFocus.js Todos.js node_modules todomvc-app-css index.css 
todomvc-common base.css base.js package.json profiles todomvc.profile.js readme.md test RouterMock.js all.js allBrowser.js allNode.js handleCleaner.js intern.js todo computed.js store TodoLocalStorage.js widgets CSSToggleWidget.js Todo.js TodoEscape.js TodoFocus.js Todos.js templates CSSToggleWidget.html TodoFocus.html duel package.json pom.xml readme.md src main webapp WEB-INF web.xml css base.css index.css js lib duel.js todos.js todos controller.js model.js staticapp.json www cdn 3aba0c24fc2dfb2ef530691bf611672891b75c6d.js b12c1274056c76efb21a375280fdd622eb22b845.css index.html elm build elm.js elm-package.json index.html node_modules todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md emberjs README.md index.html todomvc .eslintrc.js .template-lintrc.js .travis.yml app app.js components todo-item.js todo-list.js controllers active.js application.js completed.js helpers gt.js pluralize.js index.html resolver.js router.js routes application.js services repo.js styles app.css config environment.js optional-features.json targets.js dist assets todomvc-3fe090b4fc6a4e94b886378993e1e388.js todomvc-d41d8cd98f00b204e9800998ecf8427e.css vendor-50b76499fe0b46b3444173854527334b.css vendor-73b1903a6c4e658b1d74cccae9ff6977.js index.html robots.txt ember-cli-build.js package-lock.json package.json public robots.txt testem.js tests index.html test-helper.js emberjs_require index.html readme.md enyo_backbone enyo enyo.js loader.js source boot boot.js package.js ready.js package.js index.html js apps app.js package.js collections TaskCollection.js package.js controllers NotepadController.js Routes.js package.js models TaskModel.js package.js package.js start.js views FooterView.js NotepadFooterView.js NotepadHeaderView.js NotepadMainView.js NotepadView.js WindowView.js package.js lib app.js enyo.js package.js node_modules todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md exoskeleton index.html js app.js collections todos.js models todo.js routers router.js views app-view.js todo-view.js node_modules backbone.localstorage backbone.localStorage.js backbone.nativeview backbone.nativeview.js exoskeleton exoskeleton.js microtemplates index.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md firebase-angular index.html js app.js controllers todoCtrl.js directives todoBlur.js todoEscape.js todoFocus.js node_modules angular angular.js angularfire dist angularfire.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md gwt css app.css gwttodo 0176BFA5D0194FC6FB9562B368F6A8FA.cache.js 694169309C8362F4AD96846150CDA3B1.cache.js BCF576FAD7F67F33B82C55CCB4D63DF9.cache.js D6D931BA9B534E30A73C8FD93F9AAC9A.cache.js D87BBFA5B364F29D90EA3EF922379E77.cache.js gwttodo.nocache.js index.html node_modules todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md src com todo GwtToDo.gwt.xml client GwtToDo.java TextBoxWithPlaceholder.java ToDoCell.java ToDoItem.java ToDoPresenter.java ToDoRouting.java ToDoView.java ToDoView.ui.xml events ToDoEvent.java ToDoRemovedEvent.java ToDoUpdatedEvent.java jquery css app.css index.html js app.js node_modules director build director.js handlebars dist handlebars.js jquery dist jquery.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md js_of_ocaml build.sh index.html js todomvc.js node_modules todomvc-app-css index.css package.json readme.md todomvc-common base.css base.js package.json readme.md package.json readme.md 
jsblocks index.html js app.js node_modules blocks blocks.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md knockback index.html js collections todos.js models todo.js viewmodels app.js todo.js node_modules backbone.localstorage backbone.localStorage.js backbone backbone.js jquery dist jquery.js knockback knockback.js knockout build output knockout-latest.debug.js todomvc-app-css index.css todomvc-common base.css base.js underscore underscore.js package.json readme.md knockoutjs index.html js app.js node_modules director build director.js knockout build output knockout-latest.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md knockoutjs_require index.html js config global.js extends handlers.js main.js models Todo.js viewmodels todo.js node_modules knockout build output knockout-latest.js requirejs require.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md kotlin-react .idea kotlinc.xml libraries KotlinJavaScript.xml kotlin_extensions.xml kotlin_react.xml kotlin_react_dom.xml kotlinx_html_js.xml markdown-navigator.xml markdown-navigator profiles_settings.xml modules.xml runConfigurations Debug_in_Chrome.xml npm_start.xml uiDesigner.xml vcs.xml README.md app.css asset-manifest.json base.css build.sh index.css index.html languages language-en_US.json manifest.json package.json public app.css base.css index.css index.html languages language-en_US.json manifest.json src app App.css static css main.13b13b75.css main.d41d8cd9.css js main.37adc507.js main.4240f14f.js main.9c3ce066.js main.a24e7468.js main.e307ed0b.js main.ef12774d.js lavaca_require index.html js app app.js boot.js models TodosCollection.js net TodosController.js ui views CollectionView.js TodoItemView.js TodosCollectionView.js TodosView.js libs lavaca.js require-dust.js templates todo-item.html todos.html node_modules dustjs-helpers dist dust-helpers.js dustjs-linkedin dist dust-full.js jquery dist jquery.js mout src array.js array append.js collect.js combine.js compact.js contains.js difference.js every.js filter.js find.js findIndex.js flatten.js forEach.js indexOf.js insert.js intersection.js invoke.js join.js lastIndexOf.js map.js max.js min.js pick.js pluck.js range.js reduce.js reduceRight.js reject.js remove.js removeAll.js shuffle.js some.js sort.js split.js toLookup.js union.js unique.js xor.js zip.js collection.js collection contains.js every.js filter.js find.js forEach.js make_.js map.js max.js min.js pluck.js reduce.js reject.js size.js some.js date.js date dayOfTheYear.js diff.js i18n de-DE.js en-US.js pt-BR.js i18n_.js isLeapYear.js isSame.js parseIso.js startOf.js strftime.js timezoneAbbr.js timezoneOffset.js totalDaysInMonth.js totalDaysInYear.js weekOfTheYear.js function.js function bind.js compose.js debounce.js func.js makeIterator_.js partial.js prop.js series.js throttle.js timeout.js times.js index.js lang.js lang clone.js createObject.js ctorApply.js deepClone.js defaults.js inheritPrototype.js is.js isArguments.js isArray.js isBoolean.js isDate.js isEmpty.js isFinite.js isFunction.js isInteger.js isKind.js isNaN.js isNull.js isNumber.js isObject.js isPlainObject.js isRegExp.js isString.js isUndefined.js isnt.js kindOf.js toArray.js toNumber.js toString.js math.js math ceil.js clamp.js countSteps.js floor.js inRange.js isNear.js lerp.js loop.js map.js norm.js round.js number.js number MAX_INT.js MAX_UINT.js MIN_INT.js abbreviate.js currencyFormat.js enforcePrecision.js pad.js rol.js ror.js sign.js toInt.js 
toUInt.js toUInt31.js object.js object bindAll.js contains.js deepEquals.js deepFillIn.js deepMatches.js deepMixIn.js equals.js every.js fillIn.js filter.js find.js forIn.js forOwn.js functions.js get.js has.js hasOwn.js keys.js map.js matches.js max.js merge.js min.js mixIn.js namespace.js pick.js pluck.js reduce.js reject.js set.js size.js some.js unset.js values.js queryString.js queryString contains.js decode.js encode.js getParam.js getQuery.js parse.js setParam.js random.js random choice.js guid.js rand.js randBit.js randHex.js randInt.js randSign.js random.js string.js string WHITE_SPACES.js camelCase.js contains.js crop.js endsWith.js escapeHtml.js escapeRegExp.js escapeUnicode.js hyphenate.js insert.js interpolate.js lowerCase.js lpad.js ltrim.js makePath.js normalizeLineBreaks.js pascalCase.js properCase.js removeNonASCII.js removeNonWord.js repeat.js replace.js replaceAccents.js rpad.js rtrim.js sentenceCase.js slugify.js startsWith.js stripHtmlTags.js trim.js truncate.js typecast.js unCamelCase.js underscore.js unescapeHtml.js unescapeUnicode.js unhyphenate.js upperCase.js time.js time convert.js now.js parseMs.js toTimeString.js requirejs require.js todomvc-app-css index.css todomvc-common base.css base.js package.json mithril index.html js app.js controllers todo.js models storage.js todo.js views footer-view.js main-view.js node_modules mithril mithril.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md polymer CONTRIBUTING.md README.md bower.json bower_components flatiron-director director director.min.js flatiron-director.html iron-localstorage iron-localstorage.html iron-selector iron-multi-selectable.html iron-selectable.html iron-selection.html iron-selector.html polymer polymer-micro.html polymer-mini.html polymer.html todomvc-app-css index.css todomvc-common base.css base.js webcomponentsjs webcomponents-lite.min.js elements elements.build.html elements.build.js elements.html td-input.html td-item.html td-model.html td-todos.html index.html package.json ractive css app.css index.html js app.js persistence.js routes.js node_modules director build director.js ractive build Ractive.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md react-alt index.html js actions todoActions.js alt.js stores todoStore.js utils.js node_modules alt dist alt.js director build director.js react dist JSXTransformer.js react-with-addons.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md react-backbone index.html js todo.js todos.js node_modules backbone.localstorage backbone.localStorage.js backbone backbone.js classnames index.js jquery dist jquery.js react dist JSXTransformer.js react.js todomvc-app-css index.css todomvc-common base.css base.js underscore underscore.js package.json readme.md react index.html js todoModel.js utils.js node_modules classnames index.js director build director.js react dist JSXTransformer.js react-with-addons.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md reagent index.html node_modules todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md riotjs index.html js app.js store.js todo.html node_modules riot riot+compiler.min.js riot.min.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md scalajs-react generated todomvc-jsdeps.js todomvc-jsdeps.min.js todomvc-launcher.js todomvc-opt.js index.html node_modules todomvc-app-css index.css todomvc-common base.css base.js 
package.json readme.md src main scala todomvc Footer.scala Main.scala Storage.scala TodoItem.scala TodoList.scala TodoModel.scala types.scala typescript-angular index.html js Application.js Application.ts _all.ts controllers TodoCtrl.ts directives TodoBlur.ts TodoEscape.ts TodoFocus.ts interfaces ITodoScope.ts ITodoStorage.ts libs angular angular.d.ts jquery jquery.d.ts models TodoItem.ts services TodoStorage.ts node_modules todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md typescript-backbone index.html js app.js app.ts node_modules backbone.localstorage backbone.localStorage.js backbone backbone.js jquery dist jquery.js lodash index.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md typescript-react index.html js app.js bundle.js constants.js constants.ts footer.js interfaces.d.ts todoItem.js todoModel.js todoModel.ts tsconfig.json utils.js utils.ts node_modules classnames index.js director build director.js react dist JSXTransformer.js react-with-addons.js todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md tsd.json typings classnames classnames.d.ts react react-global.d.ts react.d.ts tsd.d.ts vanilla-es6 README.md dist bundle.js index.html node_modules todomvc-app-css index.css todomvc-common base.css base.js package.json src app.js controller.js helpers.js item.js store.js template.js view.js vanillajs index.html js app.js controller.js helpers.js model.js store.js template.js view.js node_modules todomvc-app-css index.css todomvc-common base.css base.js package.json readme.md test ControllerSpec.js SpecRunner.html vue index.html js app.js routes.js store.js node_modules director build director.js todomvc-app-css index.css todomvc-common base.css base.js vue dist vue.js package.json readme.md Example with simple pluralization rules for en locale Example with offset ko.templateSources.domElement ko.templateSources.anonymousTemplate Computed Properties Handle Persistence Event Handlers The Life-Cycle of a Composite Component Welcome to debugging React gulpfile.js index.html learn.html learn.json learn.template.json license.md media logo.svg symbol.svg package-lock.json package.json readme.md server.js site-assets learn.js lodash.custom.js logo.svg main.css main.js tasks Gruntfile.js package.json test-runner.sh tests README.md allTests.js cya.js excluded.js framework-path-lookup.js knownIssues.js memory-exceptions.json memory.js package.json page.js pageLaxMode.js run.sh test.js testOperations.js Test framework issues tooling package.json run.sh
iron-icons ========= ## Building Running `update-icons.sh` will checkout [material-design-icons](https://github.com/google/material-design-icons), reduce the fileset to 24px svgs, and compile the iconsets. # font-roboto # iron-menu-behavior `Polymer.IronMenuBehavior` implements accessible menu behavior. iron-pages ========== `iron-pages` is used to select one of its children to show. One use is to cycle through a list of children "pages". Example: ```html <iron-pages selected="0"> <div>One</div> <div>Two</div> <div>Three</div> </iron-pages> <script> document.addEventListener('click', function(e) { var pages = document.querySelector('iron-pages'); pages.selectNext(); }); </script> ``` iron-icon ========= The `iron-icon` element displays an icon. By default an icon renders as a 24px square. Example using src: ```html <iron-icon src="star.png"></iron-icon> ``` Example setting size to 32px x 32px: ```html <iron-icon class="big" src="big_star.png"></iron-icon> <style> .big { height: 32px; width: 32px; } </style> ``` The iron elements include several sets of icons. To use the default set of icons, import `iron-icons.html` and use the `icon` attribute to specify an icon: ```html <!-- import default iconset and iron-icon --> <link rel="import" href="/components/iron-icons/iron-icons.html"> <iron-icon icon="menu"></iron-icon> ``` To use a different built-in set of icons, import `iron-icons/<iconset>-icons.html`, and specify the icon as `<iconset>:<icon>`. For example: ```html <!-- import communication iconset and iron-icon --> <link rel="import" href="/components/iron-icons/communication-icons.html"> <iron-icon icon="communication:email"></iron-icon> ``` You can also create custom icon sets of bitmap or SVG icons. Example of using an icon named `cherry` from a custom iconset with the ID `fruit`: ```html <iron-icon icon="fruit:cherry"></iron-icon> ``` See [iron-iconset](#iron-iconset) and [iron-iconset-svg](#iron-iconset-svg) for more information about how to create a custom iconset. See [iron-icons](http://www.polymer-project.org/components/iron-icons/demo.html) for the default set of icons. <!--- This README is automatically generated from the comments in these files: paper-tab.html paper-tabs.html Edit those files, and our readme bot will duplicate them over here! Edit this file, and the bot will squash your changes :) --> [![Build Status](https://travis-ci.org/PolymerElements/paper-tabs.svg?branch=master)](https://travis-ci.org/PolymerElements/paper-tabs) _[Demo and API Docs](https://elements.polymer-project.org/elements/paper-tabs)_ ##&lt;paper-tabs&gt; Material design: [Tabs](https://www.google.com/design/spec/components/tabs.html) `paper-tabs` makes it easy to explore and switch between different views or functional aspects of an app, or to browse categorized data sets. Use `selected` property to get or set the selected tab. Example: <paper-tabs selected="0"> <paper-tab>TAB 1</paper-tab> <paper-tab>TAB 2</paper-tab> <paper-tab>TAB 3</paper-tab> </paper-tabs> See <a href="#paper-tab">paper-tab</a> for more information about `paper-tab`. A common usage for `paper-tabs` is to use it along with `iron-pages` to switch between different views. <paper-tabs selected="{{selected}}"> <paper-tab>Tab 1</paper-tab> <paper-tab>Tab 2</paper-tab> <paper-tab>Tab 3</paper-tab> </paper-tabs> <iron-pages selected="{{selected}}"> <div>Page 1</div> <div>Page 2</div> <div>Page 3</div> </iron-pages> To use links in tabs, add `link` attribute to `paper-tab` and put an `<a>` element in `paper-tab`. 
Example: <paper-tabs selected="0"> <paper-tab link> <a href="#link1" class="horizontal center-center layout">TAB ONE</a> </paper-tab> <paper-tab link> <a href="#link2" class="horizontal center-center layout">TAB TWO</a> </paper-tab> <paper-tab link> <a href="#link3" class="horizontal center-center layout">TAB THREE</a> </paper-tab> </paper-tabs> ### Styling The following custom properties and mixins are available for styling: Custom property | Description | Default ----------------|-------------|---------- `--paper-tabs-selection-bar-color` | Color for the selection bar | `--paper-yellow-a100` `--paper-tabs` | Mixin applied to the tabs | `{}` ##&lt;paper-tab&gt; `paper-tab` is styled to look like a tab. It should be used in conjunction with `paper-tabs`. Example: <paper-tabs selected="0"> <paper-tab>TAB 1</paper-tab> <paper-tab>TAB 2</paper-tab> <paper-tab>TAB 3</paper-tab> </paper-tabs> ### Styling The following custom properties and mixins are available for styling: Custom property | Description | Default ----------------|-------------|---------- `--paper-tab-ink` | Ink color | `--paper-yellow-a100` `--paper-tab` | Mixin applied to the tab | `{}` `--paper-tab-content` | Mixin applied to the tab content | `{}` `--paper-tab-content-unselected` | Mixin applied to the tab content when the tab is not selected | `{}` paper-behaviors =============== These are common behaviors used across `paper-*` elements. # iron-validatable-behavior Implements an element validated with Polymer.IronValidatorBehavior # Vanilla ES6 (ES2015) • [TodoMVC](http://todomvc.com) > A port of the [Vanilla JS Example](http://todomvc.com/examples/vanillajs/), but translated into ES6, also known as ES2015. ## Learning ES6 - [ES6 Features](https://github.com/lukehoban/es6features) - [Learning Resources](https://github.com/ericdouglas/ES6-Learning) - [Babel's ES6 Guide](https://babeljs.io/docs/learn-es2015/) - [Babel Compiler](https://babeljs.io/) ## Installation To get started with this example, navigate into the example folder and install the NPM modules. ```bash cd todomvc/examples/vanilla-es6 npm install ``` ## Compiling ES6 to ES5 After NPM modules have been installed, use the pre-defined Babel script to convert the `src` files. Browserify is also used so that `module.exports` and `require()` can be run in your browser. ```bash npm run compile ``` ## Support - [Twitter](http://twitter.com/lukeed05) *Let us [know](https://github.com/tastejs/todomvc/issues) if you discover anything worth sharing.* ## Implementation Uses [Google Closure Compiler](https://developers.google.com/closure/compiler/) to compile ES6 code to ES5, which is then readable by all browsers. ## Credit Created by [Luke Edwards](http://www.lukeed.com) Refactored by [Aaron Muir Hamilton](https://github.com/xorgy) # iron-checked-element-behavior Implements an element that has a checked attribute and can be added to a form This project was bootstrapped with [Create React Kotlin App](https://github.com/JetBrains/create-react-kotlin-app). Below you will find some useful information on how to work with this application.<br> We're still working on this guide and you can find its most recent version [here](https://github.com/JetBrains/create-react-kotlin-app/blob/master/packages/react-scripts/template/README.md). ## Sending Feedback We are always open to [your feedback](https://youtrack.jetbrains.com/issues/CRKA). 
## Folder Structure After creation, your project should look like this: ``` my-app/ README.md node_modules/ package.json .gitignore public/ favicon.ico index.html manifest.json src/ app/ App.css App.kt index/ index.css index.kt logo/ kotlin.svg Logo.css Logo.kt react.svg ticker/ Ticker.kt ``` For the project to build, **these files must exist with exact filenames**: * `public/index.html` is the page template; You can delete or rename the other files. You may create subdirectories inside `src`. For faster rebuilds, only files inside `src` are processed by Webpack.<br> You need to **put any Kotlin and CSS files inside `src`**, or Webpack won’t see them. Only files inside `public` can be used from `public/index.html`.<br> Read instructions below for using assets from JavaScript and HTML. You can, however, create more top-level directories.<br> They will not be included in the production build so you can use them for things like documentation. ## Available Scripts Once the installation is done, you can run some commands inside the project folder: ### `npm start` or `yarn start` Runs the app in development mode.<br> Open [http://localhost:3000](http://localhost:3000) to view it in the browser. The page will reload automatically when you make edits.<br> You will see build errors and lint warnings in the console. ### `npm run build` or `yarn build` Builds the app for production to the `build` folder.<br> It ensures that React is bundled in production mode and the build is optimized for best performance. The build is minified and the filenames include hashes for cache management. Your app is ready to be deployed. ### `npm run eject` **Note: this is a one-way operation. Once you `eject`, you can’t go back!** If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Running `npm run eject` copies all configuration files and transitive dependencies (webpack, Kotlin Compiler, etc) right into your project so you have full control over them. Commands like `npm start` and `npm run build` will still work, but they will point to the copied scripts so you can tweak them. At this point, you’re on your own. ## Debugging the App You can debug the running app right in IntelliJ IDEA Ultimate using its built-in JavaScript debugger. The IDE will run a new instance of Chrome and attach a debugger to it. Start your app by running `npm start`. Put the breakpoints in your Kotlin code. Then select `Debug in Chrome` from the list of run/debug configurations on the top-right and click the green debug icon or press `^D` on macOS or `F9` on Windows and Linux to start debugging. Currently, debugging is supported only in IntelliJ IDEA Ultimate 2017.3. You can also debug your application using the developer tools in your browser. paper-icon-button ================= Material Design: <a href="http://www.google.com/design/spec/components/buttons.html">Buttons</a> `paper-icon-button` is a button with an image placed at the center. When the user touches the button, a ripple effect emanates from the center of the button. `paper-icon-button` includes a default icon set. Use `icon` to specify which icon from the icon set to use. ```html <paper-icon-button icon="menu"></paper-icon-button> ``` See [`iron-iconset`](#iron-iconset) for more information about how to use a custom icon set. 
Example: ```html <link href="path/to/iron-icons/iron-icons.html" rel="import"> <paper-icon-button icon="favorite"></paper-icon-button> <paper-icon-button src="star.png"></paper-icon-button> ``` # [Bootstrap](http://getbootstrap.com) [![Bower version](https://badge.fury.io/bo/bootstrap.svg)](http://badge.fury.io/bo/bootstrap) [![NPM version](https://badge.fury.io/js/bootstrap.svg)](http://badge.fury.io/js/bootstrap) [![Build Status](https://secure.travis-ci.org/twbs/bootstrap.svg?branch=master)](http://travis-ci.org/twbs/bootstrap) [![devDependency Status](https://david-dm.org/twbs/bootstrap/dev-status.svg)](https://david-dm.org/twbs/bootstrap#info=devDependencies) [![Selenium Test Status](https://saucelabs.com/browser-matrix/bootstrap.svg)](https://saucelabs.com/u/bootstrap) Bootstrap is a sleek, intuitive, and powerful front-end framework for faster and easier web development, created by [Mark Otto](http://twitter.com/mdo) and [Jacob Thornton](http://twitter.com/fat), and maintained by the [core team](https://github.com/twbs?tab=members) with the massive support and involvement of the community. To get started, check out <http://getbootstrap.com>! ## Table of contents - [Quick start](#quick-start) - [Bugs and feature requests](#bugs-and-feature-requests) - [Documentation](#documentation) - [Contributing](#contributing) - [Community](#community) - [Versioning](#versioning) - [Creators](#creators) - [Copyright and license](#copyright-and-license) ## Quick start Three quick start options are available: - [Download the latest release](https://github.com/twbs/bootstrap/archive/v3.2.0.zip). - Clone the repo: `git clone https://github.com/twbs/bootstrap.git`. - Install with [Bower](http://bower.io): `bower install bootstrap`. Read the [Getting started page](http://getbootstrap.com/getting-started/) for information on the framework contents, templates and examples, and more. ### What's included Within the download you'll find the following directories and files, logically grouping common assets and providing both compiled and minified variations. You'll see something like this: ``` bootstrap/ ├── css/ │ ├── bootstrap.css │ ├── bootstrap.min.css │ ├── bootstrap-theme.css │ └── bootstrap-theme.min.css ├── js/ │ ├── bootstrap.js │ └── bootstrap.min.js └── fonts/ ├── glyphicons-halflings-regular.eot ├── glyphicons-halflings-regular.svg ├── glyphicons-halflings-regular.ttf └── glyphicons-halflings-regular.woff ``` We provide compiled CSS and JS (`bootstrap.*`), as well as compiled and minified CSS and JS (`bootstrap.min.*`). Fonts from Glyphicons are included, as is the optional Bootstrap theme. ## Bugs and feature requests Have a bug or a feature request? Please first read the [issue guidelines](https://github.com/twbs/bootstrap/blob/master/CONTRIBUTING.md#using-the-issue-tracker) and search for existing and closed issues. If your problem or idea is not addressed yet, [please open a new issue](https://github.com/twbs/bootstrap/issues/new). ## Documentation Bootstrap's documentation, included in this repo in the root directory, is built with [Jekyll](http://jekyllrb.com) and publicly hosted on GitHub Pages at <http://getbootstrap.com>. The docs may also be run locally. ### Running documentation locally 1. If necessary, [install Jekyll](http://jekyllrb.com/docs/installation) (requires v2.0.x). - **Windows users:** Read [this unofficial guide](https://github.com/juthilo/run-jekyll-on-windows/) to get Jekyll up and running without problems. 
We use Pygments for syntax highlighting, so make sure to read the sections on installing Python and Pygments. 2. From the root `/bootstrap` directory, run `jekyll serve` in the command line. 3. Open <http://localhost:9001> in your browser, and voilà. Learn more about using Jekyll by reading its [documentation](http://jekyllrb.com/docs/home/). ### Documentation for previous releases Documentation for v2.3.2 has been made available for the time being at <http://getbootstrap.com/2.3.2/> while folks transition to Bootstrap 3. [Previous releases](https://github.com/twbs/bootstrap/releases) and their documentation are also available for download. ## Contributing Please read through our [contributing guidelines](https://github.com/twbs/bootstrap/blob/master/CONTRIBUTING.md). Included are directions for opening issues, coding standards, and notes on development. Moreover, if your pull request contains JavaScript patches or features, you must include relevant unit tests. All HTML and CSS should conform to the [Code Guide](http://github.com/mdo/code-guide), maintained by [Mark Otto](http://github.com/mdo). Editor preferences are available in the [editor config](https://github.com/twbs/bootstrap/blob/master/.editorconfig) for easy use in common text editors. Read more and download plugins at <http://editorconfig.org>. ## Community Keep track of development and community news. - Follow [@twbootstrap on Twitter](http://twitter.com/twbootstrap). - Read and subscribe to [The Official Bootstrap Blog](http://blog.getbootstrap.com). - Chat with fellow Bootstrappers in IRC. On the `irc.freenode.net` server, in the `##twitter-bootstrap` channel. - Implementation help may be found at Stack Overflow (tagged [`twitter-bootstrap-3`](http://stackoverflow.com/questions/tagged/twitter-bootstrap-3)). ## Versioning For transparency into our release cycle and in striving to maintain backward compatibility, Bootstrap is maintained under [the Semantic Versioning guidelines](http://semver.org/). Sometimes we screw up, but we'll adhere to those rules whenever possible. ## Creators **Mark Otto** - <http://twitter.com/mdo> - <http://github.com/mdo> **Jacob Thornton** - <http://twitter.com/fat> - <http://github.com/fat> ## Copyright and license Code and documentation copyright 2011-2014 Twitter, Inc. Code released under [the MIT license](LICENSE). Docs released under [Creative Commons](docs/LICENSE). iron-resizable-behavior ======================= `IronResizableBehavior` is a behavior that can be used in Polymer elements to coordinate the flow of resize events between "resizers" (elements that control the size or hidden state of their children) and "resizables" (elements that need to be notified when they are resized or un-hidden by their parents in order to take action on their new measurements). Elements that perform measurement should add the `IronResizableBehavior` behavior to their element definition and listen for the `iron-resize` event on themselves. This event will be fired when they become showing after having been hidden, when they are resized explicitly by another resizable, or when the window has been resized. Note, the `iron-resize` event is non-bubbling. curl.js loader plugins === Please see the wiki for information about using plugins. If you're interested in creating your own plugins, please check out the Plugin Author's Guide on the wiki (TBD). All of these plugins conform to the AMD specification. However, that doesn't necessarily mean that they'll work with other AMD loaders or builders. 
Until the build-time API of AMD is finalized, there will be incompatibilities. Modules that should work with any loader/builder: async! domReady! js! link! TODO: json! (auto-detects xdomain and uses JSON-P) # paper-styles Material design CSS styles. paper-ripple ============ `paper-ripple` provides a visual effect that other paper elements can use to simulate a rippling effect emanating from the point of contact. The effect can be visualized as a concentric circle with motion. Example: ```html <paper-ripple></paper-ripple> ``` `paper-ripple` listens to "mousedown" and "mouseup" events so it would display ripple effect when touches on it. You can also defeat the default behavior and manually route the down and up actions to the ripple element. Note that it is important if you call downAction() you will have to make sure to call upAction() so that `paper-ripple` would end the animation loop. Example: ```html <paper-ripple id="ripple" style="pointer-events: none;"></paper-ripple> ... <script> downAction: function(e) { this.$.ripple.downAction({x: e.x, y: e.y}); }, upAction: function(e) { this.$.ripple.upAction(); } </script> ``` Styling ripple effect: Use CSS color property to style the ripple: ```css paper-ripple { color: #4285f4; } ``` Note that CSS color property is inherited so it is not required to set it on the `paper-ripple` element directly. By default, the ripple is centered on the point of contact. Apply the ``recenters`` attribute to have the ripple grow toward the center of its container. ```html <paper-ripple recenters></paper-ripple> ``` Apply `center` to center the ripple inside its container from the start. ```html <paper-ripple center></paper-ripple> ``` Apply `circle` class to make the rippling effect within a circle. ```html <paper-ripple class="circle"></paper-ripple> ``` # TodoMVC Browser Tests The TodoMVC project has a great many implementations of exactly the same app using different MV* frameworks. Each app should be functionally identical. The goal of these tests is to provide a fully automated browser-based test that can be used to ensure that the specification is being followed by each and every TodoMVC app. ## Todo - [ ] Complete the test implementation (27 out of 28 are now complete). The only test that I am struggling with is to test that the delete button becomes visible on hover. - [ ] Make it work with PhantomJS. In practice, Phantom is only a little bit faster, but it should work. Currently there are a few Phantom specific failures. - [ ] Allow testing of apps that require a server (see: https://github.com/tastejs/todomvc/pull/821/files#r9377070) ## Running the tests These tests use Selenium 2 (WebDriver), via the JavaScript API (WebdriverJS). In order to run the tests, you will need to install the dependencies. npm must be version 2.0.0 or greater, so upgrade it first with `npm install -g npm` if `npm --version` outputs anything less than 2.0.0. Run the following command from within the `tests` folder: ```sh $ npm install ``` If you haven't already run `npm install` in the root directory, execute `npm install` there as well. You need to run a local server for the tests. Start the test server using: ```sh $ gulp test-server ``` To run the tests for all TodoMVC implementations, run the following: ```sh $ npm run test ``` In order to run tests for a single TodoMVC implementation, supply a framework argument as follows: ```sh $ npm run test -- --framework=angularjs ``` N.B. Remember the extra -- to separate the script arguments from the npm arguments. 
In order to run a specific test, use the mocha 'grep' function. For example: ``` $ npm run test -- --framework=jquery --grep 'should trim entered text' TodoMVC - jquery Editing ✓ should trim entered text (1115ms) 1 passing (3s) ``` ### Specifying the browser You can also specify the browser that will be used to execute the tests via the `---browser` argument. The tests default to using Chrome. For example, to run against phantomjs, use the following: ```sh $ npm run test -- --browser=phantomjs ``` You must install phantomjs first of course! Valid browser names can be found within webdriver via the `webdriver.Browser` enumeration. ## Reporting against known issues The `knownIssues.js` file details the currently known issues with the TodoMVC implementations. You can run the tests and compare against these issues using the `mocha-known-issues-reporter`: ```sh $ npm run test -- --reporter=mocha-known-issues-reporter ``` When run via grunt the suite supports exactly the same command line arguments. An example output with the known issues reporter is shown below: ``` $ npm run test -- --reporter=mocha-known-issues-reporter --framework=jquery ... Running "simplemocha:files" (simplemocha) task (1 of 27) pass: TodoMVC - jquery, No Todos, should hide #main and #footer [...] (17 of 27) pass: TodoMVC - jquery, Editing, should remove the item if an empty text string was entered (18 of 27) known issue: TodoMVC - jquery, Editing, should cancel edits on escape -- error: undefined (19 of 27) pass: TodoMVC - jquery, Counter, should display the current number of todo items (20 of 27) pass: TodoMVC - jquery, Clear completed button, should display the number of completed items (21 of 27) pass: TodoMVC - jquery, Clear completed button, should remove completed items when clicked (22 of 27) pass: TodoMVC - jquery, Clear completed button, should be hidden when there are no items that are completed (23 of 27) pass: TodoMVC - jquery, Persistence, should persist its data (24 of 27) known issue: TodoMVC - jquery, Routing, should allow me to display active items -- error: Cannot call method 'click' of undefined (25 of 27) known issue: TodoMVC - jquery, Routing, should allow me to display completed items -- error: Cannot call method 'click' of undefined (26 of 27) known issue: TodoMVC - jquery, Routing, should allow me to display all items -- error: Cannot call method 'click' of undefined (27 of 27) known issue: TodoMVC - jquery, Routing, should highlight the currently applied filter -- error: Cannot call method 'getAttribute' of undefined passed: 22/27 failed: 5/27 new issues: 0 resolved issues: 0 ``` The reporter indicates the number of passes, failed, new and resolved issues. This makes it ideal for regression testing. 
### Example output A test run with the 'spec' reporter looks something like the following: ``` $ npm run test -- --framework=angularjs angularjs TodoMVC No Todos ✓ should hide #main and #footer (201ms) New Todo ✓ should allow me to add todo items (548ms) ✓ should clear text input field when an item is added (306ms) ✓ should trim text input (569ms) ✓ should show #main and #footer when items added (405ms) Mark all as completed ✓ should allow me to mark all items as completed (1040ms) ✓ should allow me to clear the completion state of all items (1014ms) ✓ complete all checkbox should update state when items are completed (1413ms) Item ✓ should allow me to mark items as complete (843ms) ✓ should allow me to un-mark items as complete (978ms) ✓ should allow me to edit an item (1155ms) ✓ should show the remove button on hover Editing ✓ should hide other controls when editing (718ms) ✓ should save edits on enter (1093ms) ✓ should save edits on blur (1256ms) ✓ should trim entered text (1163ms) ✓ should remove the item if an empty text string was entered (1033ms) ✓ should cancel edits on escape (1115ms) Counter ✓ should display the current number of todo items (462ms) Clear completed button ✓ should display the number of completed items (873ms) ✓ should remove completed items when clicked (898ms) ✓ should be hidden when there are no items that are completed (893ms) Persistence ✓ should persist its data (3832ms) Routing ✓ should allow me to display active items (871ms) ✓ should allow me to display completed items (960ms) ✓ should allow me to display all items (1192ms) ✓ should highlight the currently applied filter (1095ms) 27 passing (1m) ``` ## Speed mode In order to keep each test case fully isolated, the browser is closed then re-opened in between each test. This does mean that the tests can take quite a long time to run. If you don't mind the risk of side-effects you can run the tests in speed mode by adding the `--speedMode` argument. ```sh $ npm run test -- --speedMode ``` Before each test, all the todo items are checked as completed and the 'clear complete' button pressed. This makes the tests run in around half the time, but with the obvious risk that the tear-down code may fail. ## Lax mode There are certain implementations (e.g. GWT and Dojo) where the constraints of the framework mean that it is not possible to match exactly the HTML specification for TodoMVC. In these cases, the tests can be run in a 'lax' mode where the XPath queries used to locate DOM elements are more general. For example, rather than looking for a checkbox `input` element with a class of `toggle`, in lax mode it simply looks for any `input` elements of type `checkbox`. To run the tests in lax mode, simply use the `--laxMode` argument: ```sh $ npm run test -- --laxMode ``` ## Test design Very briefly, the tests are designed as follows: - `page.js` - provides an abstraction layer for the HTML template. All the code required to access elements from the DOM is found within this file. The XPaths used to locate elements are based on the TodoMVC specification, using the required element classes / ids. - `pageLaxMode.js` - extends the above in order to relax the XPath constraints. - `testOperations.js` - provides common assertions and operations. - `test.js` - Erm … the tests! These are written to closely match the TodoMVC spec. - `allTest.js` - A simple file that locates all of the framework examples, and runs the tests for each. **NOTE:** All of the WebdriverJS methods return promises and are executed asynchronously. 
However, you do not have to 'chain' them using `then`, they are instead automagically added to a queue, then executed. This means that if you add non-WebdriverJS operations (asserts, log messages) these will not be executed at the point you might expect. This is why `TestOperations.js` uses an explicit `then` each time it asserts. <!--- This README is automatically generated from the comments in these files: iron-flex-layout.html Edit those files, and our readme bot will duplicate them over here! Edit this file, and the bot will squash your changes :) --> [![Build Status](https://travis-ci.org/PolymerElements/iron-flex-layout.svg?branch=master)](https://travis-ci.org/PolymerElements/iron-flex-layout) _[Demo and API Docs](https://elements.polymer-project.org/elements/iron-flex-layout)_ ##&lt;iron-flex-layout&gt; The `<iron-flex-layout>` component provides simple ways to use [CSS flexible box layout](https://developer.mozilla.org/en-US/docs/Web/Guide/CSS/Flexible_boxes), also known as flexbox. This component provides two different ways to use flexbox: 1. [Layout classes](https://github.com/PolymerElements/iron-flex-layout/tree/master/classes). The layout class stylesheet provides a simple set of class-based flexbox rules. Layout classes let you specify layout properties directly in markup. 2. [Custom CSS mixins](https://github.com/PolymerElements/iron-flex-layout/blob/master/iron-flex-layout.html). The mixin stylesheet includes custom CSS mixins that can be applied inside a CSS rule using the `@apply` function. A complete [guide](https://elements.polymer-project.org/guides/flex-layout) to `<iron-flex-layout>` is available. iron-a11y-keys-behavior ======================= `Polymer.IronA11yKeysBehavior` provides a normalized interface for processing keyboard commands that pertain to [WAI-ARIA best practices](http://www.w3.org/TR/wai-aria-practices/#kbd_general_binding). The element takes care of browser differences with respect to Keyboard events and uses an expressive syntax to filter key presses. Use the `keyBindings` prototype property to express what combination of keys will trigger the event to fire. Use the `key-event-target` attribute to set up event handlers on a specific node. The `keys-pressed` event will fire when one of the key combinations set with the `keys` property is pressed. webcomponents.js ================ [![Join the chat at https://gitter.im/webcomponents/webcomponentsjs](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/webcomponents/webcomponentsjs?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) A suite of polyfills supporting the [Web Components](http://webcomponents.org) specs: **Custom Elements**: allows authors to define their own custom tags ([spec](https://w3c.github.io/webcomponents/spec/custom/)). **HTML Imports**: a way to include and reuse HTML documents via other HTML documents ([spec](https://w3c.github.io/webcomponents/spec/imports/)). **Shadow DOM**: provides encapsulation by hiding DOM subtrees under shadow roots ([spec](https://w3c.github.io/webcomponents/spec/shadow/)). This also folds in polyfills for `MutationObserver` and `WeakMap`. ## Releases Pre-built (concatenated & minified) versions of the polyfills are maintained in the [tagged versions](https://github.com/webcomponents/webcomponentsjs/releases) of this repo. There are two variants: `webcomponents.js` includes all of the polyfills. `webcomponents-lite.js` includes all polyfills except for shadow DOM. 
## Browser Support Our polyfills are intended to work in the latest versions of evergreen browsers. See below for our complete browser support matrix: | Polyfill | IE10 | IE11+ | Chrome* | Firefox* | Safari 7+* | Chrome Android* | Mobile Safari* | | ---------- |:----:|:-----:|:-------:|:--------:|:----------:|:---------------:|:--------------:| | Custom Elements | ~ | ✓ | ✓ | ✓ | ✓ | ✓| ✓ | | HTML Imports | ~ | ✓ | ✓ | ✓ | ✓| ✓| ✓ | | Shadow DOM | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | Templates | ✓ | ✓ | ✓ | ✓| ✓ | ✓ | ✓ | *Indicates the current version of the browser ~Indicates support may be flaky. If using Custom Elements or HTML Imports with Shadow DOM, you will get the non-flaky Mutation Observer polyfill that Shadow DOM includes. The polyfills may work in older browsers, however require additional polyfills (such as classList) to be used. We cannot guarantee support for browsers outside of our compatibility matrix. ### Manually Building If you wish to build the polyfills yourself, you'll need `node` and `gulp` on your system: * install [node.js](http://nodejs.org/) using the instructions on their website * use `npm` to install [gulp.js](http://gulpjs.com/): `npm install -g gulp` Now you are ready to build the polyfills with: # install dependencies npm install # build gulp build The builds will be placed into the `dist/` directory. ## Contribute See the [contributing guide](CONTRIBUTING.md) ## License Everything in this repository is BSD style license unless otherwise specified. Copyright (c) 2015 The Polymer Authors. All rights reserved. ## Helper utilities ### `WebComponentsReady` Under native HTML Imports, `<script>` tags in the main document block the loading of such imports. This is to ensure the imports have loaded and any registered elements in them have been upgraded. The webcomponents.js and webcomponents-lite.js polyfills parse element definitions and handle their upgrade asynchronously. If prematurely fetching the element from the DOM before it has an opportunity to upgrade, you'll be working with an `HTMLUnknownElement`. For these situations (or when you need an approximate replacement for the Polymer 0.5 `polymer-ready` behavior), you can use the `WebComponentsReady` event as a signal before interacting with the element. The criteria for this event to fire is all Custom Elements with definitions registered by the time HTML Imports available at load time have loaded have upgraded. ```js window.addEventListener('WebComponentsReady', function(e) { // imports are loaded and elements have been registered console.log('Components are ready'); }); ``` ## Known Issues * [Custom element's constructor property is unreliable](#constructor) * [Contenteditable elements do not trigger MutationObserver](#contentedit) * [ShadowCSS: :host-context(...):host(...) doesn't work](#hostcontext) * [execCommand isn't supported under Shadow DOM](#execcommand) ### Custom element's constructor property is unreliable <a id="constructor"></a> See [#215](https://github.com/webcomponents/webcomponentsjs/issues/215) for background. In Safari and IE, instances of Custom Elements have a `constructor` property of `HTMLUnknownElementConstructor` and `HTMLUnknownElement`, respectively. It's unsafe to rely on this property for checking element types. It's worth noting that `customElement.__proto__.__proto__.constructor` is `HTMLElementPrototype` and that the prototype chain isn't modified by the polyfills(onto `ElementPrototype`, etc.) 
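As a hedged illustration of the pitfall above (the element name `x-widget` is made up), a type check built on `constructor` can disagree across browsers under the polyfills, so a check based on `localName` (or a property you define yourself) is sturdier:

```js
// Hypothetical custom element registered elsewhere as <x-widget>.
var el = document.querySelector('x-widget');

// Unreliable under the polyfills: in Safari/IE this can be
// HTMLUnknownElementConstructor / HTMLUnknownElement rather than your constructor.
// if (el.constructor === XWidget) { ... }

// Sturdier: key off the tag name instead, which is stable everywhere.
if (el && el.localName === 'x-widget') {
  // safe to treat el as an x-widget
}
```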
### Contenteditable elements do not trigger MutationObserver <a id="contentedit"></a> Using the MutationObserver polyfill, it isn't possible to monitor mutations of an element marked `contenteditable`. See [the mailing list](https://groups.google.com/forum/#!msg/polymer-dev/LHdtRVXXVsA/v1sGoiTYWUkJ) ### ShadowCSS: :host-context(...):host(...) doesn't work <a id="hostcontext"></a> See [#16](https://github.com/webcomponents/webcomponentsjs/issues/16) for background. Under the shadow DOM polyfill, rules like: ``` :host-context(.foo):host(.bar) {...} ``` don't work, despite working under native Shadow DOM. The solution is to use `polyfill-next-selector` like: ``` polyfill-next-selector { content: '.foo :host.bar, :host.foo.bar'; } ``` ### execCommand and contenteditable isn't supported under Shadow DOM <a id="execcommand"></a> See [#212](https://github.com/webcomponents/webcomponentsjs/issues/212) `execCommand`, and `contenteditable` aren't supported under the ShadowDOM polyfill, with commands that insert or remove nodes being especially prone to failure. <!--- This README is automatically generated from the comments in these files: iron-iconset-svg.html Edit those files, and our readme bot will duplicate them over here! Edit this file, and the bot will squash your changes :) --> [![Build Status](https://travis-ci.org/PolymerElements/iron-iconset-svg.svg?branch=master)](https://travis-ci.org/PolymerElements/iron-iconset-svg) _[Demo and API Docs](https://elements.polymer-project.org/elements/iron-iconset-svg)_ ##&lt;iron-iconset-svg&gt; The `iron-iconset-svg` element allows users to define their own icon sets that contain svg icons. The svg icon elements should be children of the `iron-iconset-svg` element. Multiple icons should be given distinct id's. Using svg elements to create icons has a few advantages over traditional bitmap graphics like jpg or png. Icons that use svg are vector based so they are resolution independent and should look good on any device. They are stylable via css. Icons can be themed, colorized, and even animated. Example: <iron-iconset-svg name="my-svg-icons" size="24"> <svg> <defs> <g id="shape"> <rect x="50" y="50" width="50" height="50" /> <circle cx="50" cy="50" r="50" /> </g> </defs> </svg> </iron-iconset-svg> This will automatically register the icon set "my-svg-icons" to the iconset database. To use these icons from within another element, make a `iron-iconset` element and call the `byId` method to retrieve a given iconset. To apply a particular icon inside an element use the `applyIcon` method. For example: iconset.applyIcon(iconNode, 'car'); iron-selector ============= `iron-selector` is an element which can be used to manage a list of elements that can be selected. Tapping on the item will make the item selected. The `selected` indicates which item is being selected. The default is to use the index of the item. Example: ```html <iron-selector selected="0"> <div>Item 1</div> <div>Item 2</div> <div>Item 3</div> </iron-selector> ``` If you want to use the attribute value of an element for `selected` instead of the index, set `attrForSelected` to the name of the attribute. For example, if you want to select item by `name`, set `attrForSelected` to `name`. Example: ```html <iron-selector attr-for-selected="name" selected="foo"> <div name="foo">Foo</div> <div name="bar">Bar</div> <div name="zot">Zot</div> </iron-selector> ``` `iron-selector` is not styled. Use the `iron-selected` CSS class to style the selected element. 
Example: ```html <style> .iron-selected { background: #eee; } </style> ... <iron-selector selected="0"> <div>Item 1</div> <div>Item 2</div> <div>Item 3</div> </iron-selector> ``` # Polymer TodoMVC Example > Polymer is a new type of library for the web, built on top of Web Components, and designed to leverage the evolving web platform on modern browsers. > _[Polymer - www.polymer-project.org](https://www.polymer-project.org/)_ ## Learning Polymer The [Polymer website](https://www.polymer-project.org) is a great resource for getting started. Here are some links you may find helpful: * [Getting Started](https://www.polymer-project.org/1.0/docs/start/getting-the-code.html) * [FAQ](https://www.polymer-project.org/0.5/resources/faq.html) (old) * [Browser Compatibility](https://www.polymer-project.org/1.0/resources/compatibility.html) Get help from Polymer devs and users: * Find us Slack - polymer.slack.com * Join the high-traffic [polymer-dev](https://groups.google.com/forum/?fromgroups=#!forum/polymer-dev) Google group or the announcement-only [polymer-announce](https://groups.google.com/forum/?fromgroups=#!forum/polymer-announce) Google group. ## Implementation The Polymer implementation of TodoMVC has a few key differences with other implementations: * [Web Components](http://w3c.github.io/webcomponents/explainer/) allow you to create new HTML elements that are reusable, composable, and encapsulated. * Non-visual elements such as the router and the model are also implemented as custom elements and appear in the DOM. Implementing them as custom elements instead of plain objects allows you to take advantage of Polymer's data binding and event handling throughout the app. ## Compatibility Polymer and the web component polyfills are intended to work in the latest version of [evergreen browsers](http://tomdale.net/2013/05/evergreen-browsers/). IE9 is not supported. Please refer to [Browser Compatibility](https://www.polymer-project.org/1.0/resources/compatibility.html) for more details. ## Running this sample 1. Install [node.js](nodejs.org) (required for `bower` client-side package management) 1. Install bower: `npm install -g bower` 1. From the `todomvc\` folder, run `bower update` 1. Start a web server in the `todomvc\` folder. Hint: if you have python installed, you can just run: `python -m SimpleHTTPServer` 1. Browse to the server root ## Making updates If you want to make a change to any of the elements in elements/elements.html, you'll need to install `polybuild` (Polymer's build tool) and re-vulcanize elements.build.html. 1. Run `npm install` (first time only) 1. Make a change 1. Run `npm run build` <!--- This README is automatically generated from the comments in these files: iron-meta.html Edit those files, and our readme bot will duplicate them over here! Edit this file, and the bot will squash your changes :) --> [![Build Status](https://travis-ci.org/PolymerElements/iron-meta.svg?branch=master)](https://travis-ci.org/PolymerElements/iron-meta) _[Demo and API Docs](https://elements.polymer-project.org/elements/iron-meta)_ ##&lt;iron-meta&gt; `iron-meta` is a generic element you can use for sharing information across the DOM tree. It uses [monostate pattern](http://c2.com/cgi/wiki?MonostatePattern) such that any instance of iron-meta has access to the shared information. You can use `iron-meta` to share whatever you want (or create an extension [like x-meta] for enhancements). 
The `iron-meta` instances containing your actual data can be loaded in an import, or constructed in any way you see fit. The only requirement is that you create them before you try to access them. Examples: If I create an instance like this: <iron-meta key="info" value="foo/bar"></iron-meta> Note that value="foo/bar" is the metadata I've defined. I could define more attributes or use child nodes to define additional metadata. Now I can access that element (and it's metadata) from any iron-meta instance via the byKey method, e.g. meta.byKey('info').getAttribute('value'); Pure imperative form would be like: document.createElement('iron-meta').byKey('info').getAttribute('value'); Or, in a Polymer element, you can include a meta in your template: <iron-meta id="meta"></iron-meta> ... this.$.meta.byKey('info').getAttribute('value'); ##&lt;iron-meta-query&gt; `iron-meta` is a generic element you can use for sharing information across the DOM tree. It uses [monostate pattern](http://c2.com/cgi/wiki?MonostatePattern) such that any instance of iron-meta has access to the shared information. You can use `iron-meta` to share whatever you want (or create an extension [like x-meta] for enhancements). The `iron-meta` instances containing your actual data can be loaded in an import, or constructed in any way you see fit. The only requirement is that you create them before you try to access them. Examples: If I create an instance like this: <iron-meta key="info" value="foo/bar"></iron-meta> Note that value="foo/bar" is the metadata I've defined. I could define more attributes or use child nodes to define additional metadata. Now I can access that element (and it's metadata) from any iron-meta instance via the byKey method, e.g. meta.byKey('info').getAttribute('value'); Pure imperative form would be like: document.createElement('iron-meta').byKey('info').getAttribute('value'); Or, in a Polymer element, you can include a meta in your template: <iron-meta id="meta"></iron-meta> ... this.$.meta.byKey('info').getAttribute('value'); # Ember.js TodoMVC Example using Ember CLI v3.2 > A framework for creating ambitious web applications. > _[Ember.js - emberjs.com](http://emberjs.com)_ > _[Ember CLI - ember-cli.com](http://ember-cli.com)_ ## Note for people updating this app. The `index.html` and the `assets` folder in the parent folder as simlinks into the items with the same names inside `dist`. The `dist` folder has to be checked in git and built for production. You can develop this project as a standard Ember CLI application: ```bash $ cd todomvc $ npm install $ ember server ``` Update to the latest Ember with `ember-cli-update` and with the latest codemods: ```bash $ cd todomvc $ npx ember-cli-update $ git commit -m 'Update Ember with ember-cli-update' -a $ npx ember-cli-update --run-codemods $ git commit -m 'Update TodoMVC with codemods' -a ``` Build Ember TodoMVC for production: ```bash $ ember build --prod ``` Run Cypress Test: ```bash # Run this command from the root folder of this repository $ npm install $ npm run server # Run in a separated terminal $ CYPRESS_framework=emberjs npm run cy:open ``` ### Ember Notes * The `rootURL` param in `config/environment.js` should keep as empty string. # iron-form-element-behavior Behavior that allows an element to be tracked by an iron-form <!--- This README is automatically generated from the comments in these files: iron-button-state.html iron-control-state.html Edit those files, and our readme bot will duplicate them over here! 
Edit this file, and the bot will squash your changes :) --> [![Build Status](https://travis-ci.org/PolymerElements/iron-behaviors.svg?branch=master)](https://travis-ci.org/PolymerElements/iron-behaviors) _[Demo and API Docs](https://elements.polymer-project.org/elements/iron-behaviors)_ <!-- No docs for Polymer.IronControlState found. --> <!-- No docs for Polymer.IronButtonState found. -->
evgenykuzyakov_upgradability-example
.dependabot config.yml .gitpod.yml .travis.yml Cargo.toml README-Gitpod.md README-Windows.md README.md package-lock.json package.json src lib.rs
Status Message ============== [![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/near-examples/rust-status-message) <!-- MAGIC COMMENT: DO NOT DELETE! Everything above this line is hidden on NEAR Examples page --> This smart contract saves and records the status messages of NEAR accounts that call it. Windows users: please visit the [Windows-specific README file](README-Windows.md). ## Prerequisite Ensure `near-shell` is installed by running: ``` near --version ``` If needed, install `near-shell`: ``` npm install near-shell -g ``` ## Building this contract To make the build process compatible with multiple operating systems, the build process exists as a script in `package.json`. There are a number of special flags used to compile the smart contract into the wasm file. Run this command to build and place the wasm file in the `res` directory: ```bash npm run build ``` **Note**: Instead of `npm`, users of [yarn](https://yarnpkg.com) may run: ```bash yarn build ``` ## Using this contract ### Quickest deploy Build and deploy this smart contract to an development account. This development account will be created automatically and is not intended to be permanent. Please see the "Standard deploy" section for creating a more personalized account to deploy to. ```bash near dev-deploy --wasmFile res/status_message.wasm --helperUrl https://near-contract-helper.onrender.com ``` Behind the scenes, this is creating an account and deploying a contract to it. On the console, notice a message like: >Done deploying to dev-1234567890123 In this instance, the account is `dev-1234567890123`. A file has been created containing the key to the account, located at `neardev/dev-account`. To make the next few steps easier, we're going to set an environment variable containing this development account id and use that when copy/pasting commands. Run this command to the environment variable: ```bash source neardev/dev-account.env ``` You can tell if the environment variable is set correctly if your command line prints the account name after this command: ```bash echo $CONTRACT_NAME ``` The next command will call the contract's `set_status` method: ```bash near call $CONTRACT_NAME set_status '{"message": "aloha!"}' --accountId $CONTRACT_NAME ``` To retrieve the message from the contract, call `get_status` with the following: ```bash near view $CONTRACT_NAME get_status '{"account_id": "'$CONTRACT_NAME'"}' ``` ### Standard deploy In this second option, the smart contract will get deployed to a specific account created with the NEAR Wallet. If you do not have a NEAR account, please create one with [NEAR Wallet](https://wallet.nearprotocol.com). In the project root, login with `near-shell` by following the instructions after this command: ``` near login ``` Deploy the contract: ```bash near deploy --wasmFile res/status_message.wasm --accountId YOUR_ACCOUNT_NAME ``` Set a status for your account: ```bash near call YOUR_ACCOUNT_NAME set_status '{"message": "aloha friend"}' --accountId YOUR_ACCOUNT_NAME ``` Get the status: ```bash near view YOUR_ACCOUNT_NAME get_status '{"account_id": "YOUR_ACCOUNT_NAME"}' ``` Note that these status messages are stored per account in a `HashMap`. See `src/lib.rs` for the code. We can try the same steps with another account to verify. **Note**: we're adding `NEW_ACCOUNT_NAME` for the next couple steps. 
There are two ways to create a new account: - the NEAR Wallet (as we did before) - `near create_account NEW_ACCOUNT_NAME --masterAccount YOUR_ACCOUNT_NAME` Now call the contract on the first account (where it's deployed): ```bash near call YOUR_ACCOUNT_NAME set_status '{"message": "bonjour"}' --accountId NEW_ACCOUNT_NAME ``` ```bash near view YOUR_ACCOUNT_NAME get_status '{"account_id": "NEW_ACCOUNT_NAME"}' ``` Returns `bonjour`. Make sure the original status remains: ```bash near view YOUR_ACCOUNT_NAME get_status '{"account_id": "YOUR_ACCOUNT_NAME"}' ``` ## Testing To test, run: ```bash cargo test --package status-message -- --nocapture ```
indeavr_dynamic-dapp
README.md css bootstrap-grid.min.css bootstrap-reboot.min.css global.css magnific-popup.css owl.carousel.min.css select2.min.css index.html package-lock.json package.json public img arrow.svg arrow2.svg breadcrumb.svg checkmark.svg close.svg sort-down.svg verified.svg scripts bootstrap.bundle.min.js jquery-3.5.1.min.js jquery.magnific-popup.min.js owl.carousel.min.js select2.min.js smooth-scrollbar.js src abi.ts constants.ts main.ts routes.ts scripts ceramic.ts modal.js moralis.ts template.js service api.ts ceramics.ts helpers.ts stores connection.store.ts forms.store.ts nfts.store.ts types.ts utils.ts vite-env.d.ts svelte.config.js tsconfig.json vite.config.js
# UI for the Dynamic NFTs
linhai0212_test-near-wallet-example
.github dependabot.yml workflows tests.yml .gitpod.yml .travis.yml LICENSE-APACHE.txt LICENSE-MIT.txt README-Gitpod.md README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts main.spec.ts as_types.d.ts index.ts model.ts tsconfig.json neardev shared-test-staging test.near.json shared-test test.near.json package.json src config.js index.html loader.html main.js styles.css test-setup.js test.js
Example of NEAR Wallet integration ================================== [![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/near-examples/wallet-example) <!-- MAGIC COMMENT: DO NOT DELETE! Everything above this line is hidden on NEAR Examples page --> This example demonstrates how to integrate your application with NEAR Wallet. The contract is quite simple. It can store the account_id of last sender and return it. It also shows how you can debug contracts using logs. ## Getting started There are two ways to run this project. The first is the quick and a good way to instantly become familiar with this example. Once familiar, the next step is for a developer to create their own NEAR account and deploy the contract to testnet. This is covered in the following section. There's a button at the top of this file that says "Open in Gitpod." This will open the project in a new tab with an integrated development environment. The other option is to clone this repository and follow the same instructions. ### Quickest option 1. Install dependencies: ``` yarn --frozen-lockfile ``` 2. Build and deploy this smart contract to a development account. This development account will be created automatically and is not intended for reuse: ``` yarn dev ``` Your command line which will display a link to localhost similar to: ```bash Server running at http://localhost:1234 ``` Please open that link your browser to continue and see how to log in with NEAR Wallet in a simple webapp. ### Standard deploy option In this second option, the smart contract will get deployed to a specific account created with the NEAR Wallet. 1. Ensure `near-cli` is installed by running: ``` near --version ``` If needed, install `near-cli`: ``` npm install near-cli -g ``` 2. If you do not have a NEAR account, please create one with [NEAR Wallet](wallet.testnet.near.org). In the project root, login with `near-cli` by following the instructions after this command: ``` near login ``` 3. Modify the top of `src/config.js`, changing the `CONTRACT_NAME` to be the NEAR account that was just used to log in. ```javascript … const CONTRACT_NAME = process.env.CONTRACT_NAME || 'YOUR_ACCOUNT_NAME_HERE'; /* TODO: fill this in! */ … ``` 4. Start the example! ``` yarn start ``` ## To Test ``` yarn asp // as-pect tests yarn jest // integration tests yarn test // both ``` ## To Explore - `assembly/main.ts` for the contract code - `src/index.html` for the front-end HTML - `src/main.js` for the JavaScript front-end code and how to integrate contracts - `src/test.js` for the JS tests for the contract ## Data collection By using Gitpod in this project, you agree to opt-in to basic, anonymous analytics. No personal information is transmitted. Instead, these usage statistics aid in discovering potential bugs and user flow information.
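For orientation, here is a hedged AssemblyScript-style sketch of a contract matching the description above (store the `account_id` of the last sender, log the call, and return it later). The real implementation lives in `assembly/main.ts` and may use different names; the `near-sdk-as` imports below are the standard ones.

```ts
import { context, logging, storage } from "near-sdk-as";

// Call method: records who called the contract last and logs it for debugging.
export function sayHi(): void {
  const sender = context.sender; // account_id of the caller
  logging.log("sayHi() was called by: " + sender);
  storage.setString("last_sender", sender);
}

// View method: returns the account_id of the last caller, or null if none yet.
export function whoSaidHi(): string | null {
  return storage.getString("last_sender");
}
```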
JMario1_election-is-near
.gitpod.yml README.md contract Cargo.toml README.md compile.js src election.rs lib.rs package.json src App.css App.js __mocks__ fileMock.js assets logo-black.svg logo-white.svg components AddElection.js Cover.js Election.js Elections.js Loader.js Notifications.js ViewElection.js Wallet.js index.html index.js jest.init.js utils config.js election.js near.js wallet login index.html
Voting Smart Contract ================== A [smart contract] written in [Rust] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install Rust with [correct target] Exploring The Code ================== 1. The main smart contract code lives in `src/lib.rs`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [Rust]: https://www.rust-lang.org/ [create-near-app]: https://github.com/near/create-near-app [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html Election-is-near ================== Election-is-near allows user to create election for various positions and ensure each user can only vote once per every election. [Live Demo](https://election-is-near.surge.sh) Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `yarn install` 3. Run the local development server: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. Exploring The Code ================== 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2. The frontend code lives in the `/src` folder. `/src/index.html` is a great place to start exploring. Note that it loads in `/src/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `yarn run test` or just the contract with `yarn run test-contract`.
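As a rough illustration of point 2 above (how the frontend connects to the NEAR blockchain), a minimal `near-api-js` connection might look like the sketch below. The endpoints and app key prefix are assumptions; the project's real setup lives in `src/utils/config.js` and `src/utils/near.js`, and exact options vary between `near-api-js` versions.

```ts
import { connect, keyStores, WalletConnection } from "near-api-js";

// Assumed testnet endpoints; config.js in this repo is the source of truth.
async function initNear() {
  const near = await connect({
    networkId: "testnet",
    nodeUrl: "https://rpc.testnet.near.org",
    walletUrl: "https://wallet.testnet.near.org",
    helperUrl: "https://helper.testnet.near.org",
    keyStore: new keyStores.BrowserLocalStorageKeyStore(),
    headers: {},
  });

  // Lets the user sign in with NEAR Wallet and sign transactions from the browser.
  const wallet = new WalletConnection(near, "election-is-near");
  return { near, wallet };
}
```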
N1ghtSe7en_hellomint-main
README.md babel.config.js contract Cargo.toml README.md compile.js src lib.rs package.json src App.js __mocks__ fileMock.js assets logo-black.svg logo-white.svg config.js global.css index.html index.js jest.init.js main.test.js utils.js wallet login index.html
hellomint Smart Contract ================== A [smart contract] written in [Rust] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install Rust with [correct target] Exploring The Code ================== 1. The main smart contract code lives in `src/lib.rs`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [Rust]: https://www.rust-lang.org/ [create-near-app]: https://github.com/near/create-near-app [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html NSeven NEAR Warrior
ligebit_BURROW
README.md burrow.js config.js demultiplicator.js multiplicator.js near.js package-lock.json package.json
Fill in config.js. For multiplication, run: `node multiplicator` For demultiplication, run: `node demultiplicator`
Fabulousugo_guest-book-js-master
.github workflows tests.yml .gitpod.yml README.md contract README.md build.sh deploy.sh package-lock.json package.json src contract.ts model.ts tsconfig.json frontend .cypress cypress.config.js e2e guest-book.cy.ts tsconfig.json App.js index.html index.js near-interface.js near-wallet.js package-lock.json package.json start.sh integration-tests package-lock.json package.json src main.ava.ts package-lock.json package.json
# Guest Book Contract The smart contract stores messages from users. Messages can be `premium` if the user attaches sufficient money (0.1 $NEAR). ```ts this.messages = []; @call // Public - Adds a new message. add_message({ text }: { text: string }) { // If the user attaches more than 0.1N the message is premium const premium = near.attachedDeposit() >= BigInt(POINT_ONE); const sender = near.predecessorAccountId(); const message = new PostedMessage({premium, sender, text}); this.messages.push(message); } @view // Returns an array of messages. get_messages({ fromIndex = 0, limit = 10 }: { fromIndex: number, limit: number }): PostedMessage[] { return this.messages.slice(fromIndex, fromIndex + limit); } ``` <br /> # Quickstart 1. Make sure you have installed [node.js](https://nodejs.org/en/download/package-manager/) >= 16. 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash npm run deploy ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Stored Messages `get_messages` is a read-only method (`view` method) that returns a slice of the vector `messages`. `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash near view <dev-account> get_messages '{"from_index":0, "limit":10}' ``` <br /> ## 3. Add a Message `add_message` adds a message to the vector of `messages` and marks it as premium if the user attached more than `0.1 NEAR`. `add_message` is a payable method for which can only be invoked using a NEAR account. The account needs to attach money and pay GAS for the transaction. ```bash # Use near-cli to donate 1 NEAR near call <dev-account> add_message '{"text": "a message"}' --amount 0.1 --accountId <account> ``` **Tip:** If you would like to add a message using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. # Guest Book 📖 [![](https://img.shields.io/badge/⋈%20Examples-Basics-green)](https://docs.near.org/tutorials/welcome) [![](https://img.shields.io/badge/Gitpod-Ready-orange)](https://gitpod.io/#/https://github.com/near-examples/guest-book-js) [![](https://img.shields.io/badge/Contract-js-yellow)](https://docs.near.org/develop/contracts/anatomy) [![](https://img.shields.io/badge/Frontend-React-blue)](https://docs.near.org/develop/integrate/frontend) [![Build Status](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fnear-examples%2Fguest-book-js%2Fbadge&style=flat&label=Tests)](https://actions-badge.atrox.dev/near-examples/guest-book-js/goto) The Guest Book is a simple app that stores messages from users, allowing to pay for a premium message. ![](https://docs.near.org/assets/images/guest-book-b305a87a35cbef2b632ebe289d44f7b2.png) # What This Example Shows 1. How to receive $NEAR on a contract. 2. How to store and retrieve information from the blockchain. 3. How to use a `Vector`. 4. How to interact with a contract from `React JS`. <br /> # Quickstart Clone this repository locally or [**open it in gitpod**](https://gitpod.io/#/github.com/near-examples/guest_book-js). Then follow these steps: ### 1. Install Dependencies ```bash npm install ``` ### 2. 
Test the Contract Deploy your contract in a sandbox and simulate interactions from users. ```bash npm test ``` ### 3. Deploy the Contract Build the contract and deploy it in a testnet account ```bash npm run deploy ``` ### 4. Start the Frontend Start the web application to interact with your smart contract ```bash npm start ``` --- # Learn More 1. Learn more about the contract through its [README](./contract/README.md). 2. Check [**our documentation**](https://docs.near.org/develop/welcome).
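For completeness, a hedged sketch of issuing the same `get_messages` view call from TypeScript with `near-api-js`, without signing in: it queries the RPC node directly, so no keys are needed. The contract id is a placeholder (use the one from `neardev/dev-account`), and the provider constructor shape can differ slightly between `near-api-js` versions.

```ts
import { providers } from "near-api-js";

// Read-only view call straight through the JSON-RPC provider (run under Node for Buffer).
async function readMessages(contractId: string) {
  const provider = new providers.JsonRpcProvider({ url: "https://rpc.testnet.near.org" });

  const response: any = await provider.query({
    request_type: "call_function",
    account_id: contractId,
    method_name: "get_messages",
    args_base64: Buffer.from(JSON.stringify({ from_index: 0, limit: 10 })).toString("base64"),
    finality: "optimistic",
  });

  // response.result holds the JSON-encoded return value as an array of bytes.
  return JSON.parse(Buffer.from(response.result).toString());
}

// Example: readMessages("dev-1659899566943-21539992274727").then(console.log);
```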
imsks_learning-polygon
.gitpod.yml .prettierrc.json README.md __test__ avalanche.test.ts polygon.test.ts secret.test.ts solana.test.ts components protocols avalanche components index.ts lib index.ts celo components index.ts lib index.ts ceramic lib figmentLearnSchema.json figmentLearnSchemaCompact.json identityStore LocalStorage.ts index.ts index.ts types index.ts near components index.ts lib index.ts polkadot components index.ts lib index.ts polygon challenges balance.ts connect.ts deploy.ts getter.ts index.ts query.ts restore.ts setter.ts transfer.ts components index.ts lib index.ts pyth components index.ts lib index.ts swap.ts secret components index.ts lib index.ts solana components index.ts lib index.ts tezos components index.ts lib index.ts the_graph graphql query.ts the_graph_near graphql query.ts shared Button.styles.ts CustomMarkdown Markdown.styles.ts VideoPlayer VideoPlayer.styles.ts utils markdown-utils.ts string-utils.ts ProtocolNav ProtocolNav.styles.ts contracts celo HelloWorld.json near Cargo.toml README.md compile.js src lib.rs polygon SimpleStorage README.md SimpleStorage.json migrations 1_initial_migration.js 2_deploy_contracts.js package.json truffle-config.js solana program Cargo.toml Xargo.toml src lib.rs tests lib.rs tezos counter.js the_graph CryptopunksData.abi.json docker docker-compose-near.yml docker-compose.yml hooks index.ts useColors.ts useLocalStorage.ts useSteps.ts jest.config.js lib constants.ts markdown PREFACE.md avalanche CHAIN_CONNECTION.md CREATE_KEYPAIR.md EXPORT_TOKEN.md FINAL.md GET_BALANCE.md IMPORT_TOKEN.md PROJECT_SETUP.md TRANSFER_TOKEN.md celo CHAIN_CONNECTION.md CREATE_ACCOUNT.md DEPLOY_CONTRACT.md FINAL.md GET_BALANCE.md GET_CONTRACT_VALUE.md PROJECT_SETUP.md SET_CONTRACT_VALUE.md SWAP_TOKEN.md TRANSFER_TOKEN.md ceramic BASIC_PROFILE.md CHAIN_CONNECTION.md CUSTOM_DEFINITION.md FINAL.md LOGIN.md PROJECT_SETUP.md near CHAIN_CONNECTION.md CREATE_ACCOUNT.md CREATE_KEYPAIR.md DEPLOY_CONTRACT.md FINAL.md GET_BALANCE.md GET_CONTRACT_VALUE.md PROJECT_SETUP.md SET_CONTRACT_VALUE.md TRANSFER_TOKEN.md polkadot CHAIN_CONNECTION.md CREATE_ACCOUNT.md ESTIMATE_DEPOSIT.md ESTIMATE_FEES.md FINAL.md GET_BALANCE.md PROJECT_SETUP.md RESTORE_ACCOUNT.md TRANSFER_TOKEN.md polygon CHAIN_CONNECTION.md DEPLOY_CONTRACT.md FINAL.md GET_BALANCE.md GET_CONTRACT_VALUE.md PROJECT_SETUP.md QUERY_CHAIN.md RESTORE_ACCOUNT.md SET_CONTRACT_VALUE.md TRANSFER_TOKEN.md pyth FINAL.md PROJECT_SETUP.md PYTH_CONNECT.md PYTH_EXCHANGE.md PYTH_LIQUIDATE.md PYTH_SOLANA_WALLET.md PYTH_VISUALIZE_DATA.md secret CHAIN_CONNECTION.md CREATE_ACCOUNT.md DEPLOY_CONTRACT.md FINAL.md GET_BALANCE.md GET_CONTRACT_VALUE.md PROJECT_SETUP.md SET_CONTRACT_VALUE.md TRANSFER_TOKEN.md solana CHAIN_CONNECTION.md CREATE_ACCOUNT.md DEPLOY_CONTRACT.md FINAL.md FUND_ACCOUNT.md GET_BALANCE.md GET_CONTRACT_VALUE.md PROJECT_SETUP.md SET_CONTRACT_VALUE.md SOLANA_CREATE_GREETER.md TRANSFER_TOKEN.md tezos CHAIN_CONNECTION.md CREATE_ACCOUNT.md DEPLOY_CONTRACT.md FINAL.md GET_BALANCE.md GET_CONTRACT_VALUE.md PROJECT_SETUP.md SET_CONTRACT_VALUE.md TRANSFER_TOKEN.md the_graph FINAL.md GRAPH_NODE.md PROJECT_SETUP.md SUBGRAPH_MANIFEST.md SUBGRAPH_MAPPINGS.md SUBGRAPH_QUERY.md SUBGRAPH_SCAFFOLD.md SUBGRAPH_SCHEMA.md the_graph_near FINAL.md GRAPH_NODE.md PROJECT_SETUP.md SUBGRAPH_MANIFEST.md SUBGRAPH_MAPPINGS.md SUBGRAPH_QUERY.md SUBGRAPH_SCAFFOLD.md SUBGRAPH_SCHEMA.md | n : | next-env.d.ts next.config.js package.json pages api avalanche account.ts balance.ts connect.ts export.ts import.ts transfer.ts celo account.ts balance.ts connect.ts deploy.ts 
getter.ts setter.ts swap.ts transfer.ts near balance.ts check-account.ts connect.ts create-account.ts deploy.ts getter.ts keypair.ts setter.ts transfer.ts polkadot account.ts balance.ts connect.ts deposit.ts estimate.ts restore.ts transfer.ts pyth connect.ts secret account.ts balance.ts connect.ts deploy.ts getter.ts setter.ts transfer.ts solana balance.ts connect.ts deploy.ts fund.ts getter.ts greeter.ts keypair.ts setter.ts transfer.ts tezos account.ts balance.ts connect.ts deploy.ts getter.ts setter.ts transfer.ts the-graph-near entity.ts manifest.ts scaffold.ts the-graph entity.ts manifest.ts mapping.ts node.ts scaffold.ts public discord.svg figment-learn-compact.svg vercel.svg theme colors.ts index.ts media.ts tsconfig.json types index.ts utils colors.ts context.ts datahub.ts markdown.ts networks.ts pages.ts string-utils.ts tracking-utils.ts
Based on: MetaCoin tutorial from Truffle docs https://www.trufflesuite.com/docs/truffle/quickstart SimpleStorage example contract from Solidity docs https://docs.soliditylang.org/en/v0.4.24/introduction-to-smart-contracts.html#storage 1. Install truffle (https://www.trufflesuite.com/docs/truffle/getting-started/installation) `npm install -g truffle` 2. Navigate to this directory (/contracts/polygon/SimpleStorage) 3. Install dependencies `yarn` 4. Test contract `truffle test ./test/TestSimpleStorage.sol` **Possible issue:** "Something went wrong while attempting to connect to the network. Check your network configuration. Could not connect to your Ethereum client with the following parameters:" **Solution:** run `truffle develop` and make sure the port matches the one in truffle-config.js under the development and test networks 5. Run locally via `truffle develop` $ truffle develop ``` migrate let instance = await SimpleStorage.deployed(); let storedDataBefore = await instance.get(); storedDataBefore.toNumber() // Should print 0 instance.set(50); let storedDataAfter = await instance.get(); storedDataAfter.toNumber() // Should print 50 ``` 6. Create a Polygon testnet account - Install MetaMask (https://chrome.google.com/webstore/detail/metamask/nkbihfbeogaeaoehlefnkodbefgpgknn?hl=en) - Add a custom network with the following params: Network Name: "Polygon Mumbai" RPC URL: https://rpc-mumbai.maticvigil.com/ Chain ID: 80001 Currency Symbol: MATIC Block Explorer URL: https://mumbai.polygonscan.com 7. Fund your account from the Matic Faucet https://faucet.matic.network Select MATIC Token, Mumbai Network Enter your account address from MetaMask Wait until the time limit is up, then request tokens 3-4 times so you have enough to deploy your contract 8. Add a `.secret` file in this directory with your account's seed phrase or mnemonic (you should have been prompted to write this down or store it securely when creating your account in MetaMask). In `truffle-config.js`, uncomment the three constant declarations at the top, along with the matic section of the networks section of the configuration object (a hedged sketch of what that section typically looks like appears at the end of this repo's README content below). 9. Deploy contract `truffle migrate --network matic` 10. Interact via ethers.js ```js const {ethers} = require('ethers'); const fs = require('fs'); const mnemonic = fs.readFileSync('.secret').toString().trim(); const signer = ethers.Wallet.fromMnemonic(mnemonic); const provider = new ethers.providers.JsonRpcProvider( 'https://matic-mumbai.chainstacklabs.com', ); const json = JSON.parse( fs.readFileSync('build/contracts/SimpleStorage.json').toString(), ); const contract = new ethers.Contract( json.networks['80001'].address, json.abi, signer.connect(provider), ); contract.get().then((val) => console.log(val.toNumber())); // should log 0 contract.set(50).then((receipt) => console.log(receipt)); contract.get().then((val) => console.log(val.toNumber())); // should log 50 ``` # 👋🏼 What is `learn-web3-dapp`? We made this decentralized application (dApp) to help developers learn about Web 3 protocols. It's a Next.js app that uses React, TypeScript and various smart contract languages (mostly Solidity and Rust). We will guide you through using the various blockchain JavaScript SDKs to interact with their networks. Each protocol is slightly different, but we have attempted to standardize the workflow so that you can quickly get up to speed on networks like Solana, NEAR, Polygon and more!
- ✅ Solana - ✅ Polygon - ✅ Avalanche - ✅ NEAR - ✅ Tezos - ✅ Secret - ✅ Polkadot - ✅ Celo - ✅ The Graph - ✅ The Graph for NEAR - ✅ Pyth - 🔜 Ceramic - 🔜 Arweave - 🔜 Chainlink - [Let us know which one you'd like us to cover](https://github.com/figment-networks/learn-web3-dapp/issues) <img width="1024" alt="Screen Shot 1" src="https://raw.githubusercontent.com/figment-networks/learn-web3-dapp/main/markdown/__images__/readme_01.png"> <img width="1024" alt="Screen Shot 2" src="https://raw.githubusercontent.com/figment-networks/learn-web3-dapp/main/markdown/__images__/readme-02.png"> <img width="1024" alt="Screen Shot 3" src="https://raw.githubusercontent.com/figment-networks/learn-web3-dapp/main/markdown/__images__/readme-03.png"> # 🧑‍💻 Get started ## 🤖 Using Gitpod (Recommended) The best way to go through these courses is using [Gitpod](https://gitpod.io). Gitpod provides prebuilt developer environments in your browser, powered by VS Code. Just sign in using GitHub and you'll be up and running in seconds without having to do any manual setup 🔥 [**Open this repo on Gitpod**](https://gitpod.io/#https://github.com/figment-networks/learn-web3-dapp) ## 🐑 Clone locally Make sure you have installed [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git), [Node.js](https://nodejs.org/en/) (Please install **v14.17.0**, we recommend using [nvm](https://github.com/nvm-sh/nvm)) and [yarn](https://yarnpkg.com/getting-started/install). Then clone the repo, install dependencies and start the server by running all these commands: ```text git clone https://github.com/figment-networks/learn-web3-dapp.git cd learn-web3-dapp yarn yarn dev ``` # 🤝 Feedback and contributing If you encounter any errors during this process, please join our [Discord](https://figment.io/devchat) for help. Feel free to also open an Issue or a Pull Request on the repo itself. We hope you enjoy our Web 3 education dApps 🚀 -- ❤️ The Figment Learn Team # Pathway Smart Contract A [smart contract] written in [Rust] for [figment pathway] # Quick Start Before you compile this code, you will need to install Rust with the [correct target] # Exploring The Code 1. The main smart contract code lives in `src/lib.rs`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [rust]: https://www.rust-lang.org/ [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html
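Referring back to step 8 of the SimpleStorage deployment instructions earlier in this repo's documentation: the repo's actual `truffle-config.js` is not reproduced here, but a Mumbai (`matic`) network section based on `@truffle/hdwallet-provider` typically looks like the sketch below. The constant names, RPC endpoint and extra options are assumptions and may differ from the file shipped with the repo.

```js
// Sketch only: a typical truffle-config.js with a "matic" (Mumbai) network section.
// Constant names, RPC URL and options are assumptions; compare with the repo's own file.
const HDWalletProvider = require('@truffle/hdwallet-provider');
const fs = require('fs');
const mnemonic = fs.readFileSync('.secret').toString().trim();

module.exports = {
  networks: {
    // used by `truffle develop` / `truffle test`
    development: {host: '127.0.0.1', port: 9545, network_id: '*'},
    // used by `truffle migrate --network matic` (step 9)
    matic: {
      provider: () => new HDWalletProvider(mnemonic, 'https://rpc-mumbai.maticvigil.com/'),
      network_id: 80001,
      confirmations: 2,
      timeoutBlocks: 200,
      skipDryRun: true,
    },
  },
};
```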
i3ima_nep-246
Cargo.toml README.md examples approval-receiver Cargo.toml scripts build.sh src lib.rs multi-token Cargo.toml rustfmt.toml scripts build.sh src lib.rs src event.rs lib.rs multi_token approval approval_impl.rs mod.rs receiver.rs core core_impl.rs mod.rs receiver.rs resolver.rs enumeration enumeration_impl.rs mod.rs events.rs macros.rs metadata.rs mod.rs token.rs utils.rs
# WIP Implementation of [NEP-246](https://github.com/near/NEPs/issues/246) for NEAR Protocol ## What's done * Approvals, Metadata, Enumeration extensions * Transfers * Resolvers & receivers ## What's not * Tests * Burn
Mykhail_NCD.recruitment
.idea encodings.xml jsLibraryMappings.xml libraries NCD_nomination_node_modules.xml misc.xml modules.xml vcs.xml workspace.xml README.md as-pect.config.js asconfig.json package.json scripts 1.recruitement-deploy.sh 2.post-vacancy.sh 3.get-all-vacancies.sh 4.post-candidate.sh 5.get-all-candidates.sh 6.hire-candidate.sh 7.get-all-hired-candidates.sh 8.cleanup.sh 9.reload.sh README.md src as_types.d.ts recruitment __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts
## Setting up your terminal The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. They use the following setup: ```sh # set your terminal up to have 2 windows, A and B like this: ┌─────────────────────────────────┬─────────────────────────────────┐ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ └─────────────────────────────────┴─────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _helper scripts_ ```sh 1.dev-deploy.sh # helper: build and deploy contracts 2.use-contract.sh # helper: call methods on ContractPromise 3.cleanup.sh # helper: delete build and deploy artifacts ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linuo-watch-command#191068) # Enthusiastic recruitment _"Good candidates are always near"_ A smart contract built on the Near Protocol to provide functionality for a trustworthy recruitment process. ## Demo [Youtube link](https://youtu.be/dfXP7HObpSs) ## Problem Hiring IT talent is one of the most challenging domains for recruiters. Recruitment agencies often face issues connected with dishonest behavior by companies' hiring managers, while at the same time companies devote huge budgets to external recruiters without getting the expected results. ## Solution Create a decentralized application based on the Near Protocol to establish a reliable interface for the hiring managers' and recruitment agencies' interactions. ### Users journey 1. IT company "Bug makers" wants to hire a strong Senior dev for their needs. 2. Hiring manager Jason posts a vacancy, provides position requirements, and deposits a reward for the recruitment agency (e.g. 50 Near) 3. "Hiring Angels" agency assigns a recruiter, Linda, to work on "Bug makers"'s vacancy pool 4. Linda checks the list of the open vacancies and starts looking for available candidates on the market 5. As soon as a candidate is found by Linda, she applies them for a vacancy 6. Jason sees a new depersonalized candidate on the candidates list. It contains all the needed info except the name and contact data. 7. Jason checks whether the candidate profile suits the requirements and, if so, clicks "hire" 8. Contact information of the candidate is automatically sent to Jason and appears on the Hired Candidates list 9. The reward for the candidate is sent to Linda's Near account 10.
Linda and Jason are happy (at least I hope so) - [Installation](#installation) - [UX Wireframes](#ux-wireframes) - [Contract](#contract) - [Deploying](#deploying) - [Future Development](#future-development) --- ## Installation 1. clone this repo 2. run `yarn install` (or `npm install`) 3. run `./scripts/recruitement-deploy.sh` ## UX Wireframes All UX wireframes can be found in the `wireframes/` folder. Wireframes of the core pages are presented below. **Post a Vacancy** _Hiring manager Jason posts a vacancy, provides position requirements and deposits a reward for the recruitment agency (e.g. 50 Near)_ ![post-vacancy](wireframes/1.PostVacancy[hiring_manager_view].png) **Get list of posted vacancies** _Linda checks the list of the open vacancies and starts looking for available candidates_ ![get-vacancies-list](wireframes/3.OpenVacancies[recruiter_view].png) **Apply candidate** _As soon as a candidate is found, Linda applies them to a vacancy_ ![apply-candidate](wireframes/4.ApplyCandidate[recruiter_view].png) **Hire candidate** _Jason sees a new depersonalized candidate in the candidates list_ _If the candidate suits the requirements, he clicks "hire"_ _Contact information of the candidate is sent to Jason and appears in the Hired Candidates list_ _The reward for the candidate is sent to Linda's Near account_ ![hire-candidate](wireframes/6.HireCandidatePopup[hiring_manager_view].png) ## Contract The contract is represented by two primary entities: Vacancy and Candidate. An instance of Vacancy is created by a hiring manager and keeps details about this entity: - Position requirements - Reward that the company is ready to pay to a recruitment agency An instance of Candidate is created by a recruiter and keeps the following details: - Candidate experience - Timezone - English level - Salary expectations
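For illustration only, here is a hedged sketch of how a hiring manager might post a vacancy from Node.js using near-api-js, mirroring step 2 of the users journey above. The method name `postVacancy`, its argument names and the account ids are hypothetical placeholders; the real method names live in `assembly/index.ts` and in the numbered shell scripts.

```js
// Hypothetical sketch: posting a vacancy with near-api-js.
// `postVacancy` and its argument names are placeholders, not the contract's actual API.
const {connect, keyStores, utils} = require('near-api-js');
const os = require('os');

async function postVacancy() {
  const keyStore = new keyStores.UnencryptedFileSystemKeyStore(`${os.homedir()}/.near-credentials`);
  const near = await connect({networkId: 'testnet', nodeUrl: 'https://rpc.testnet.near.org', keyStore});
  const hiringManager = await near.account('jason.testnet'); // the OWNER account from the setup above

  await hiringManager.functionCall({
    contractId: process.env.CONTRACT,              // the deployed contract, e.g. dev-1615190770786-2702449
    methodName: 'postVacancy',                     // hypothetical method name
    args: {requirements: 'Senior dev, AssemblyScript/NEAR'},
    attachedDeposit: utils.format.parseNearAmount('50'), // the agency reward escrowed by the contract
  });
}

postVacancy().catch(console.error);
```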
open-web-academy_thegraph
README.md generated schema.ts package.json src mapping.ts tsconfig.json
# NCAR The Graph Example In this section you will find all the information needed to implement a The Graph node that indexes the information generated by a smart contract. https://ow-academy.notion.site/Creaci-n-de-un-nodo-en-The-Graph-9ac3f2c4745c479c8c629901e580439b
prakhar728_Medibridge
.gitpod.yml README.md client about.html doctors doctor.html patients.html home.html patients logPatients.html thepatient.html styles.css contract Cargo.toml README.md build.sh deploy.sh neardev dev-account.env src lib.rs frontend App.js assets global.css logo-black.svg logo-white.svg index.html index.js near-wallet.js package-lock.json package.json start.sh ui-components.js index.js package-lock.json package.json rust-toolchain.toml
MediBridge =========== <p align="center"> <img src="frontend/assets/MediBridge_logo.png" alt="Image"> </p> The goal of MediBridge is to develop a decentralized system that facilitates the connection between patients and doctors in the healthcare domain. By leveraging blockchain technology, the project aims to create a secure, transparent, and efficient platform where patients can easily find and connect with doctors while maintaining control over their medical data. Background =========== This project was developed for the Web3 BUILD Hackathon hosted in partnership with NEAR Horizon. It showcases the potential of blockchain technology in the healthcare industry. The goal of this project is to demonstrate how decentralized systems can securely store and share medical records, enabling patients to have greater control over their health information. Please note that this project is a demonstration for the hackathon and may require further refinement for production and real-world implementation. Quick Start =========== If you haven't installed dependencies during setup: npm install Build and deploy your contract to TestNet with a temporary dev account: npm run deploy Test your contract: npm test If you have a frontend, run `npm start`. This will run a dev server. Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. 
near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages # Health Contract The Health Contract is a smart contract designed to facilitate decentralized interactions between patients and doctors in the healthcare domain. It leverages the power of blockchain technology to ensure secure, transparent, and efficient connectivity while preserving patient privacy and control over medical data. # Quickstart 1. Make sure you have installed [rust](https://rust.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Functions ```rust pub fn get_patient(&self, id: &AccountId) -> Result<&Patient, &str> ``` This function returns the patient's information given their account ID. ```rust pub fn get_doctor(&self, id: &AccountId) -> Result<&Doctor, &str> ``` This function returns the doctor's information given their account ID. ```rust pub fn get_patient_records(&self, id: &AccountId) -> Vec<MedicalRecord> ``` This function returns a patient's medical records given their account ID. ```rust pub fn register_patient(&mut self, id: &AccountId, name: String) ``` This function allows the registration of a new patient with a specified ID and name. It ensures that the patient ID is unique and associates the patient with an empty list of medical records. The function is payable, meaning a fee is required to execute this operation. ```rust pub fn register_doctor(&mut self, id: &AccountId, name: String) ``` This function enables the registration of a new doctor with a specified ID and name. It checks the uniqueness of the doctor's ID and adds the doctor to the contract's list of doctors. Like `register_patient`, this function is payable. ```rust pub fn store_medical_record( &mut self, id: u64, patient_id: &AccountId, record_data: String, is_public: bool, ) ``` This function allows the storage of a medical record for a patient with a specified ID, record data, and privacy settings. It ensures the uniqueness of the medical record ID and associates the record with the patient. If the record is marked as public, it will be added to the contract's list of public records. Similar to the previous functions, this operation requires a fee to be paid.
```rust pub fn schedule_appointment( &mut self, id: u64, patient_id: &AccountId, doctor_id: &AccountId, timestamp: u64, location: String, ) ``` This function enables the scheduling of an appointment between a patient and a doctor. It verifies the uniqueness of the appointment ID and stores the appointment details, including the patient ID, doctor ID, appointment timestamp, and location. ```rust pub fn pay_doctor(&mut self, doctor_id: &AccountId, amount: Balance) -> Promise ``` This function facilitates the payment of a specified amount to a doctor. It verifies the existence of both the patient and the doctor, checks if the patient has enough balance for the payment, and transfers the specified amount from the patient to the doctor. ```rust pub fn view_scheduled_appointments(&self) -> Vec<Appointment> ``` This function allows patients and doctors to view their scheduled appointments. It retrieves the caller's account ID and checks if the caller is either a patient or a doctor. If authenticated, the function fetches and returns the appointments associated with the caller.
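Since the registration functions above are payable, a caller has to attach a deposit with the call. As a rough illustration (not taken from the repo's frontend), here is a minimal near-api-js sketch of registering a patient against the dev account printed by `deploy.sh`; the contract id, account names and the deposit amount are placeholders.

```js
// Minimal sketch: calling the payable register_patient function and reading it back.
// CONTRACT_ID, account names and the 0.1 NEAR deposit are placeholders.
const {connect, keyStores, utils} = require('near-api-js');
const os = require('os');

const CONTRACT_ID = 'dev-1659899566943-21539992274727'; // replace with the contents of neardev/dev-account

async function registerPatient() {
  const keyStore = new keyStores.UnencryptedFileSystemKeyStore(`${os.homedir()}/.near-credentials`);
  const near = await connect({networkId: 'testnet', nodeUrl: 'https://rpc.testnet.near.org', keyStore});
  const patient = await near.account('alice.testnet');

  // register_patient is payable, so a deposit is attached (amount here is arbitrary).
  await patient.functionCall({
    contractId: CONTRACT_ID,
    methodName: 'register_patient',
    args: {id: 'alice.testnet', name: 'Alice'},
    attachedDeposit: utils.format.parseNearAmount('0.1'),
  });

  // Read the stored profile back with a view call.
  const profile = await patient.viewFunction(CONTRACT_ID, 'get_patient', {id: 'alice.testnet'});
  console.log(profile);
}

registerPatient().catch(console.error);
```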
linhleba_ecommerce-platform
Cargo.toml README.md buid.sh src lib.rs order.rs
# ecommerce-platform
maxhr_near--near-sdk-js
.github workflows standalone-examples.yml standalone-unit-test.yml CODE_OF_CONDUCT.md CONTRIBUTING.md README.md cli builder builder.c cli.js post-install.js save_bytecode.js utils.js wasm_to_bytes.js examples __tests__ test-clean-state.ava.js test-counter.ava.js test-cross-contract-call.ava.js test-fungible-token-lockable.ava.js test-fungible-token.ava.js test-non-fungible-token.ava.js test-parking-lot.ava.js test-status-message-collections.ava.js test-status-message.ava.js babel.config.json jsconfig.json package.json src clean-state.js counter-lowlevel.js counter.js counter.ts cross-contract-call.js fungible-token-helper.js fungible-token-lockable.js fungible-token.js log.ts non-fungible-token-receiver.js non-fungible-token.js parking-lot.ts status-message-collections.js status-message.js tsconfig.json jsvm README.md build.sh examples README-CLEAN-STATE.md README-COUNTER.md README-CROSS-CONTRACT-CALL.md README-FT.md README-LOCKABLE-FT.md README-NFT.md README-STATUS-MESSAGE.md README.md __tests__ test-clean-state.ava.js test-counter.ava.js test-cross-contract-call.ava.js test-fungible-token-lockable.ava.js test-fungible-token.ava.js test-non-fungible-token.ava.js test-status-message-collections.ava.js test-status-message.ava.js babel.config.json jsconfig.json package.json src clean-state.js counter.js cross-contract-call.js fungible-token-lockable.js fungible-token.js non-fungible-token.js status-message-collections.js status-message.js test-token-receiver.js jsvm.c tests README.md __tests__ bytes.ava.js function-params.ava.js lookup-map.ava.js lookup-set.ava.js unordered-map.ava.js unordered-set.ava.js vector.ava.js babel.config.json build.sh jsconfig.json package.json src bytes.js function-params.js lookup-map.js lookup-set.js unordered-map.js unordered-set.js vector.js lib api.d.ts api.js build-tools near-bindgen-exporter.d.ts near-bindgen-exporter.js collections index.d.ts index.js lookup-map.d.ts lookup-map.js lookup-set.d.ts lookup-set.js unordered-map.d.ts unordered-map.js unordered-set.d.ts unordered-set.js vector.d.ts vector.js index.d.ts index.js near-bindgen.d.ts near-bindgen.js near-contract.d.ts near-contract.js utils.d.ts utils.js package.json src api.ts build-tools near-bindgen-exporter.js collections index.ts lookup-map.ts lookup-set.ts unordered-map.ts unordered-set.ts vector.ts index.ts near-bindgen.ts near-contract.ts utils.ts tests README.md __tests__ bytes.ava.js function-params.ava.js lookup-map.ava.js lookup-set.ava.js test_context_api.ava.js test_log_panic_api.ava.js test_math_api.ava.js test_promise_api.ava.js test_storage_api.ava.js unordered-map.ava.js unordered-set.ava.js vector.ava.js babel.config.json jsconfig.json package.json src bytes.js context_api.js function-params.js log_panic_api.js lookup-map.js lookup-set.js math_api.js model.js promise_api.js storage_api.js unordered-map.js unordered-set.js vector.js tsconfig.json
# NEAR-SDK-JS (Standalone) ## Installation It is tested on Ubuntu 20.04, M1 Mac and Intel Mac. Other Linux distributions should also work, but they are not tested. 1. Make sure you have make, cmake and nodejs. On Linux, also make sure you have gcc. 2. `make setup` ## Usage 1. Copy the project layout, including configurations, from `examples/` as a starting point 2. Write smart contracts with JavaScript. You can use most npm packages that use portable ES2020 features. 3. Build the contract with `yarn build`. 4. If no errors happen, a `<contract-name>.wasm` will be generated at `<project-dir>/build/`. It can be tested with workspaces-js and deployed to a NEAR node. ## Running Examples There are a couple of contract examples in the project: - [Clean contract state](https://github.com/near/near-sdk-js/tree/develop/examples/src/clean-state.js) - [Counter using low level API](https://github.com/near/near-sdk-js/tree/develop/examples/src/counter-lowlevel.js) - [Counter in JavaScript](https://github.com/near/near-sdk-js/tree/develop/examples/src/counter.js) - [Counter in TypeScript](https://github.com/near/near-sdk-js/tree/develop/examples/src/counter.ts) - [Doing cross contract call](https://github.com/near/near-sdk-js/tree/develop/examples/src/cross-contract-call.js) - [Fungible token](https://github.com/near/near-sdk-js/tree/develop/examples/src/fungible-token.js) - [Lockable fungible token](https://github.com/near/near-sdk-js/tree/develop/examples/src/fungible-token-lockable.js) - [Non fungible token](https://github.com/near/near-sdk-js/tree/develop/examples/src/non-fungible-token.js) - [Non fungible token receiver contract](https://github.com/near/near-sdk-js/tree/develop/examples/src/non-fungible-token-receiver.js) - [Status message board](https://github.com/near/near-sdk-js/tree/develop/examples/src/status-message.js) - [Status message board with unique messages](https://github.com/near/near-sdk-js/tree/develop/examples/src/status-message-collections.js) To build all examples, run `yarn build` in `examples/`. To test all examples, run `yarn test`. You can also build and test one specific example with `yarn build:<example-name>` and `yarn test:<example-name>`, see `examples/package.json`. To deploy and call a contract on a NEAR node, use near-cli's `near deploy` and `near call`. ## Error Handling in NEAR-SDK-JS If you want to indicate an error happened and fail the transaction, just throw an error object in JavaScript. The compiled JavaScript contract includes error handling capability. It will catch thrown errors and automatically invoke `panic_utf8` with `"{error.message}\n:{error.stack}"`. As a result, the transaction will fail with the `"Smart contract panicked: {error.message}\n{error.stack}"` error message. You can also use an error utilities library to organize your errors, such as verror. When your JS code or a library throws an error that goes uncaught, the transaction will also fail with a GuestPanic error, with the error message and stacktrace. When calling a host function with inappropriate types, meaning an incorrect number of arguments or an argument that is not the expected type: - if there are fewer arguments than parameters, the remaining arguments are set to 'undefined' - if there are more arguments than parameters, the extra arguments are ignored - if an argument is of a different type than required, it will be coerced to the required type - if an argument is of a different type than required and cannot be coerced, a runtime type error is thrown, also with message and stacktrace ## Test We recommend using near-workspaces to write tests for your smart contracts; a minimal sketch follows.
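The sketch below assumes ava and near-workspaces are installed and that the contract was built to `build/status-message.wasm`; the file path and the `set_status`/`get_status` method names mirror the status message example but should be adjusted to your own contract.

```js
// __tests__/test-status-message.ava.js — a minimal sketch, loosely modeled on the examples.
// The wasm path and method names are assumptions; adapt them to your contract.
import {Worker} from 'near-workspaces';
import test from 'ava';

test.beforeEach(async (t) => {
  // Start a local sandbox, deploy the built contract to a dev account and create a caller.
  const worker = await Worker.init();
  const root = worker.rootAccount;
  const contract = await root.devDeploy('build/status-message.wasm');
  const alice = await root.createSubAccount('alice');
  t.context.worker = worker;
  t.context.accounts = {contract, alice};
});

test.afterEach.always(async (t) => {
  await t.context.worker.tearDown();
});

test('stores and returns a status message', async (t) => {
  const {contract, alice} = t.context.accounts;
  await alice.call(contract, 'set_status', {message: 'hello'});
  const message = await contract.view('get_status', {account_id: alice.accountId});
  t.is(message, 'hello');
});
```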
See any of the examples for how tests are set up and written. ## NEAR-SDK-JS API Reference All functionality provided by the NEAR blockchain (host functions) is defined in `src/api.ts` and exported as `near`. You can use them by: ```js import {near} from 'near-sdk-js' // near.<api documented below>. e.g.: let signer = near.signerAccountId() ``` To use nightly host functions, such as `altBn128G1Sum`, your contract needs to be built with nightly enabled. Use: ``` export NEAR_NIGHTLY=1 yarn build ``` ### About Type NEAR-SDK-JS is written in TypeScript, so every API function has a type signature that will look familiar to JavaScript/TypeScript developers. Two types in the signatures need special attention: - Most of the APIs take `BigInt` instead of Number. This is because a JavaScript Number cannot hold a 64 bit or 128 bit integer without losing precision. - `Bytes` in both arguments and return values represents a byte buffer; internally it's a JavaScript String object. Any binary data `0x00-0xff` is stored as the char '\x00-\xff'. This is because QuickJS doesn't have ArrayBuffer in its C API. - To ensure correctness, every `Bytes` argument needs to be passed in with the `bytes()` function, which runtime-checks that it is indeed a `Bytes`. - If the `Bytes` is so long that `bytes()` would cause a gas limit problem, such as in a factory contract where it represents the content of the contract to be deployed, you can precheck and guarantee the correctness of the content yourself and use it without `bytes()`. ### Context API ``` function currentAccountId(): String; function signerAccountId(): String; function signerAccountPk(): Bytes; function predecessorAccountId(): String; function input(): Bytes; function blockIndex(): BigInt; function blockHeight(): BigInt; function blockTimestamp(): BigInt; function epochHeight(): BigInt; function storageUsage(): BigInt ``` ### Economics API ``` function accountBalance(): BigInt; function accountLockedBalance(): BigInt; function attachedDeposit(): BigInt; function prepaidGas(): BigInt; function usedGas(): BigInt; ``` ### Math API ``` function randomSeed(): Bytes; function sha256(value: Bytes): Bytes; function keccak256(value: Bytes): Bytes; function keccak512(value: Bytes): Bytes; function ripemd160(value: Bytes): Bytes; function ecrecover(hash: Bytes, sign: Bytes, v: BigInt, malleability_flag: BigInt): Bytes | null; ``` ### Miscellaneous API ``` function valueReturn(value: Bytes); function panic(msg?: String); function panicUtf8(msg: Bytes); function log(msg: String); function logUtf8(msg: Bytes); function logUtf16(msg: Bytes); ``` ### Promises API ``` function promiseCreate(account_id: String, method_name: String, arguments: Bytes, amount: BigInt, gas: BigInt): BigInt; function promiseThen(promise_index: BigInt, account_id: String, method_name: String, arguments: Bytes, amount: BigInt, gas: BigInt): BigInt; function promiseAnd(...promise_idx: BigInt): BigInt; function promiseBatchCreate(account_id: String): BigInt; function promiseBatchThen(promise_index: BigInt, account_id: String): BigInt; ``` ### Promise API actions ``` function promiseBatchActionCreateAccount(promise_index: BigInt); function promiseBatchActionDeployContract(promise_index: BigInt, code: Bytes); function promiseBatchActionFunctionCall(promise_index: BigInt, method_name: String, arguments: Bytes, amount: BigInt, gas: BigInt); function promiseBatchActionTransfer(promise_index: BigInt, amount: BigInt); function promiseBatchActionStake(promise_index: BigInt, amount: BigInt, public_key: Bytes); function
promiseBatchActionAddKeyWithFullAccess(promise_index: BigInt, public_key: Bytes, nonce: BigInt); function promiseBatchActionAddKeyWithFunctionCall(promise_index: BigInt, public_key: Bytes, nonce: BigInt, allowance: BigInt, receiver_id: String, method_names: String); function promiseBatchActionDeleteKey(promise_index: BigInt, public_key: Bytes); function promiseBatchActionDeleteAccount(promise_index: BigInt, beneficiary_id: String); ``` ### Promise API results ``` function promiseResultsCount(): BigInt; function promiseResult(result_idx: BigInt, register_id: BigInt): BigInt; function promiseReturn(promise_idx: BigInt); ``` ### Storage API ``` function storageWrite(key: Bytes, value: Bytes, register_id: BigInt): BigInt; function storageRead(key: Bytes, register_id: BigInt): BigInt; function storageRemove(key: Bytes, register_id: BigInt): BigInt; function storageHasKey(key: Bytes): BigInt; ``` ### Validator API ``` function validatorStake(account_id: String): BigInt; function validatorTotalStake(): BigInt; ``` ### Alt BN128 ``` function altBn128G1Multiexp(value: Bytes, register_id: BigInt); function altBn128G1Sum(value: Bytes, register_id: BigInt); function altBn128PairingCheck(value: Bytes): BigInt; ``` ### Collections A few useful on-chain persistent collections are provided. All keys, values and elements are of type `Bytes`. #### Vector Vector is an iterable implementation of a vector that stores its content on the trie. Usage: ```js import {Vector} from 'near-sdk-js' // in contract class constructor: constructor() { super() this.v = new Vector('my_prefix_') } // Override the deserializer to load the vector from chain deserialize() { super.deserialize() this.v = Object.assign(new Vector, this.v) } someMethod() { // insert this.v.push('abc') this.v.push('def') this.v.push('ghi') // batch insert, extend: this.v.extend(['xyz', '123']) // get let first = this.v.get(0) // remove, move the last element to the given index this.v.swapRemove(0) // replace this.v.replace(1, 'jkl') // remove the last this.v.pop() // len, isEmpty let len = this.v.len() let isEmpty = this.v.isEmpty() // iterate for (let element of this.v) { near.log(element) } // toArray, convert to JavaScript Array let a = this.v.toArray() // clear this.v.clear() } ``` #### LookupMap LookupMap is a non-iterable implementation of a map that stores its content directly on the trie. It's like a big hash map, but on the trie. Usage: ```js import {LookupMap} from 'near-sdk-js' // in contract class constructor: constructor() { super() this.m = new LookupMap('prefix_a') } // Override the deserializer to load the map from chain deserialize() { super.deserialize() this.m = Object.assign(new LookupMap, this.m) } someMethod() { // insert this.m.set('abc', 'aaa') this.m.set('def', 'bbb') this.m.set('ghi', 'ccc') // batch insert, extend: this.m.extend([['xyz', '123'], ['key2', 'value2']]) // check exist let exist = this.m.containsKey('abc') // get let value = this.m.get('abc') // remove this.m.remove('def') // replace this.m.set('ghi', 'ddd') } ``` #### LookupSet LookupSet is a non-iterable implementation of a set that stores its content directly on the trie. It's like LookupMap, but it only stores whether the value is present.
Usage: ```js import {LookupSet} from 'near-sdk-js' // in contract class constructor: constructor() { super() this.s = new LookupSet('prefix_b') } // Override the deserializer to load the set from chain deserialize() { super.deserialize() this.s = Object.assign(new LookupSet, this.s) } someMethod() { // insert this.s.set('abc') this.s.set('def') this.s.set('ghi') // batch insert, extend: this.s.extend(['xyz', '123']) // check exist let exist = this.s.contains('abc') // remove this.s.remove('def') } ``` #### UnorderedMap UnorderedMap is an iterable implementation of a map that stores its content directly on the trie. Usage: ```js import {UnorderedMap} from 'near-sdk-js' // in contract class constructor: constructor() { super() this.m = new UnorderedMap('prefix_c') } // Override the deserializer to load the map from chain deserialize() { super.deserialize() this.m.keys = Object.assign(new Vector, this.m.keys) this.m.values = Object.assign(new Vector, this.m.values) this.m = Object.assign(new UnorderedMap, this.m) } someMethod() { // insert this.m.set('abc', 'aaa') this.m.set('def', 'bbb') this.m.set('ghi', 'ccc') // batch insert, extend: this.m.extend([['xyz', '123'], ['key2', 'value2']]) // get let value = this.m.get('abc') // remove this.m.remove('def') // replace this.m.set('ghi', 'ddd') // len, isEmpty let len = this.m.len() let isEmpty = this.m.isEmpty() // iterate for (let [k, v] of this.m) { near.log(k+v) } // toArray, convert to JavaScript Array let a = this.m.toArray() // clear this.m.clear() } ``` #### UnorderedSet UnorderedSet is an iterable implementation of a set that stores its content directly on the trie. It's like UnorderedMap but it only stores whether the value is present. Usage: ```js import {UnorderedSet} from 'near-sdk-js' // in contract class constructor: constructor() { super() this.s = new UnorderedSet('prefix_d') } // Override the deserializer to load the set from chain deserialize() { super.deserialize() this.s.elements = Object.assign(new Vector, this.s.elements) this.s = Object.assign(new UnorderedSet, this.s) } someMethod() { // insert this.s.set('abc') this.s.set('def') this.s.set('ghi') // batch insert, extend: this.s.extend(['xyz', '123']) // check exist let exist = this.s.contains('abc') // remove this.s.remove('def') // len, isEmpty let len = this.s.len() let isEmpty = this.s.isEmpty() // iterate for (let e of this.s) { near.log(e) } // toArray, convert to JavaScript Array let a = this.s.toArray() // clear this.s.clear() } ``` # NEAR-SDK-JS Tests This tests the functionality of the high level APIs of NEAR-SDK-JS. Currently, it directly tests all collections and indirectly tests all decorators, serialization/deserialization, utils, code generation and some important APIs. The majority of near-sdk-js can be seen as tested. # Run tests ``` yarn yarn build yarn test ``` # Add a new test Create a test contract that covers the API you want to test in `src/`. Add a build command in `build.sh`. Write an ava test in `__tests__`. # NEAR-SDK-JS (Enclave) ## Getting started with template project The fastest and recommended way to develop with near-sdk-js is to create a project with our github template: https://github.com/near/near-sdk-js-template-project.
## Running examples There are a couple of contract examples in the project: - [Clean contract state](https://github.com/near/near-sdk-js/tree/master/examples/clean-state) - [Doing cross contract call](https://github.com/near/near-sdk-js/tree/master/examples/cross-contract-call) - [Fungible token](https://github.com/near/near-sdk-js/tree/master/examples/fungible-token) - [Lockable fungible token](https://github.com/near/near-sdk-js/tree/master/examples/lockable-fungible-token) - [Non fungible token](https://github.com/near/near-sdk-js/tree/master/examples/non-fungible-token) - [Status message board](https://github.com/near/near-sdk-js/tree/master/examples/status-message) The general steps to run these contracts are the same. You can also follow their corresponding READMEs to build, test and run the contracts. ### General steps to run examples locally 1. Use near-cli to deploy `jsvm.wasm` from the `jsvm/build` folder to an account you control. For example, `jsvm.<your-account>`: ```sh export NEAR_ENV=local near deploy <jsvm-account> jsvm/build/jsvm.wasm ``` 2. `cd examples/<example>` 3. `yarn && yarn build` to get the <contract>.base64 file (JS smart-contract). 4. Deploy the <contract>.base64 file to the `JSVM` account from the previous step. ```sh near js deploy --accountId <your-account> --base64File build/<contract-name>.base64 --deposit 0.1 --jsvm <jsvm-account> ``` 5. Interact with your contract using NEAR CLI or `near-api-js`. Encode the parameters and call (a hedged near-api-js sketch appears at the end of this section). If the call causes contract state to grow, you also need to attach NEAR to cover the storage deposit for the delta. ```sh near js call <account-that-deployed-js-contract-to-jsvm> <method-name> --accountId <account-performing-call> --args <args> --deposit 0.1 --jsvm <jsvm-account> ``` 6. If you want to remove the js contract and withdraw the storage deposit, use: ```sh near js remove --accountId <your-account> --jsvm <jsvm-account> ``` ### General steps to run examples on testnet 1. `export NEAR_ENV=testnet` 2. `cd examples/<example>` 3. `yarn && yarn build` to get the <contract>.base64 file (JS smart-contract). 4. Deploying, calling and removing a JS contract is the same as above, except <jsvm-account> is `jsvm.testnet`. This is also the default value, so you can omit `--jsvm`. ## Error Handling in NEAR-SDK-JS If you want to indicate an error happened and fail the transaction, just throw an error object in JavaScript. Our JSVM runtime will detect it and automatically invoke `panic_utf8` with `"{error.message}\n:{error.stack}"`. As a result, the transaction will fail with the `"Smart contract panicked: {error.message}\n{error.stack}"` error message. You can also use an error utilities library to organize your errors, such as verror. When your JS code or a library throws an error that goes uncaught, the transaction will also fail with a GuestPanic error, with the error message and stacktrace. When calling a host function with inappropriate types, meaning an incorrect number of arguments or an argument that is not the expected type: - if there are fewer arguments than parameters, the remaining arguments are set to 'undefined' - if there are more arguments than parameters, the extra arguments are ignored - if an argument is of a different type than required, it will be coerced to the required type - if an argument is of a different type than required and cannot be coerced, a runtime type error is thrown, also with message and stacktrace ## Test We recommend using near-workspaces to write tests for your smart contracts. See any of the examples for how tests are set up and written.
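As referenced in step 5 of the general steps above, here is a hedged sketch of the `near-api-js` route. It relies on the raw `call_js_contract` entry point and the `\0`-separated argument encoding described later in this document; account names are placeholders, and the exact argument encoding produced here is an assumption, with the repo's `scripts/encode_call.js` being the authoritative encoder.

```js
// Sketch: calling a JS contract deployed to the JSVM via near-api-js instead of `near js call`.
// Builds a "<js_contract_name>\0<method_name>\0<args>" payload (assumed encoding; see scripts/encode_call.js)
// and invokes the JSVM contract's call_js_contract method. Account names are placeholders.
const {connect, keyStores, utils} = require('near-api-js');
const os = require('os');

async function callJsContract() {
  const keyStore = new keyStores.UnencryptedFileSystemKeyStore(`${os.homedir()}/.near-credentials`);
  const near = await connect({networkId: 'testnet', nodeUrl: 'https://rpc.testnet.near.org', keyStore});
  const caller = await near.account('alice.testnet');

  const jsContract = 'your-account.testnet'; // the account that deployed the JS contract to the JSVM
  const encodedArgs = Buffer.from(`${jsContract}\0set_status\0${JSON.stringify({message: 'hello'})}`);

  await caller.functionCall({
    contractId: 'jsvm.testnet',              // the JSVM account
    methodName: 'call_js_contract',
    args: encodedArgs,                       // raw bytes, not JSON
    attachedDeposit: utils.format.parseNearAmount('0.1'), // covers storage growth, as in step 5
  });
}

callJsContract().catch(console.error);
```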
## NEAR-SDK-JS API Reference All functionality provided by the NEAR blockchain (host functions) is defined in `src/api.js` and exported as `near`. You can use them by: ```js import {near} from 'near-sdk-js' // near.<api documented below>. e.g.: let signer = near.signerAccountId() ``` To use nightly host functions, such as `altBn128G1Sum`, the enclave contract needs to be built with `make jsvm-nightly` and deployed to a nearcore node that has nightly enabled. ### About Type - In arguments, `Uint64: Number | BigInt`. In return values, `Uint64: BigInt`. This is because a JavaScript Number cannot hold all Uint64 values without losing precision, but as arguments, integer numbers are also allowed for convenience. The same applies to `Uint128`. - `Bytes` in both arguments and return values represents a byte buffer; internally it's a JavaScript String object. Any binary data `0x00-0xff` is stored as the char '\x00-\xff'. This is because QuickJS doesn't have ArrayBuffer in its C API. If the bytes happen to contain only 1-byte chars, the value is the same as a string with that content. ### Context API ``` function signerAccountId(): String; function signerAccountPk(): String; function predecessorAccountId(): String; function blockIndex(): Uint64; function blockHeight(): Uint64; function blockTimestamp(): Uint64; function epochHeight(): Uint64; ``` ### Economics API ``` function attachedDeposit(): Uint128; function prepaidGas(): Uint64; function usedGas(): Uint64; ``` ### Math API ``` function randomSeed(): Bytes; function sha256(value: Bytes): Bytes; function keccak256(value: Bytes): Bytes; function keccak512(value: Bytes): Bytes; function ripemd160(value: Bytes): Bytes; function ecrecover(hash: Bytes, sign: Bytes, v: Uint64, malleability_flag: Uint64): Bytes | null; ``` ### Miscellaneous API ``` function panic(msg?: String); function panicUtf8(msg: Bytes); function log(msg: String); function logUtf8(msg: Bytes); function logUtf16(msg: Bytes); ``` ### Storage API ``` function storageRead(key: Bytes): Bytes | null; function storageHasKey(key: Bytes): bool; ``` ### Validator API ``` function validatorStake(account_id: String): Uint128; function validatorTotalStake(): Uint128; ``` ### Alt BN128 ``` function altBn128G1Multiexp(value: Bytes): Bytes; function altBn128G1Sum(value: Bytes): Bytes; function altBn128PairingCheck(value: Bytes): bool; ``` ### JSVM Specific APIs Due to the design of the JavaScript VM Contract, some additional APIs are provided to obtain context, access storage and make cross contract calls. Since they're not documented at [NEAR nomicon](https://nomicon.io/), they're explained here. #### Obtain Context ``` function jsvmAccountId(): String; function jsvmJsContractName(): String; function jsvmMethodName(): String; function jsvmArgs(): Bytes; ``` The `jsvmAccountId` returns the JavaScript VM's contract account ID. The `jsvmJsContractName`, when called, returns the name of the JavaScript contract that is being called at the moment. The `jsvmMethodName` returns the method name being called. The `jsvmArgs` returns the arguments passed to the method. #### Storage Access ``` function jsvmStorageWrite(key: Bytes, value: Bytes): bool; function jsvmStorageRead(key: Bytes): Bytes | null; function jsvmStorageRemove(key: Bytes): bool; function jsvmStorageHasKey(key: Bytes): bool; function storageGetEvicted(): Bytes; ``` These are equivalent to `storage*`, but access is limited to the substate of the current JS contract. The `jsvmStorageWrite` and `jsvmStorageRemove` require and refund a deposit to cover the storage delta.
`jsvmStorage*` accesses the substate of the current JS contract by prefixing the key with the current JS contract name (the deployer's account id). You can use `storageRead` and `storageHasKey` to get the code and state of other JS contracts. More specifically: the code of `contractA` is stored under the key `contractA/code`; the state of `contractA` is stored under `contractA/state/` concatenated with the developer-specified key. And: ``` jsvmStorageRead(k) // equivalent to storageRead(jsvmJsContractName() + '/state/' + k) ``` When `jsvmStorageWrite` writes to a key that already exists, the old value is saved and can be obtained by `storageGetEvicted()`. In this case, `jsvmStorageWrite` returns `true`. If the key didn't exist before, it returns `false`. When `jsvmStorageRemove` removes a key that exists, the old value is saved and can be obtained by `storageGetEvicted()`. In this case, `jsvmStorageRemove` returns `true`. If the key didn't exist before, nothing is removed and it returns `false`. #### Cross Contract Call ``` function jsvmValueReturn(value: Bytes); function jsvmCall(contract_name: String, method: String, args: Bytes): any; function jsvmCallRaw(contract_name: String, method: String, args: Bytes): Bytes; ``` The `jsvmValueReturn` is the version of `valueReturn` that should be used in all JavaScript contracts. It plays well with `jsvmCall`. The `jsvmCall` invokes a synchronous cross contract call to the given JavaScript `contract_name` and `method` with `args`, and returns the return value parsed as JSON into a JS object. The `jsvmCallRaw` is similar to `jsvmCall`, but returns the raw, unparsed Bytes. ### Collections A few useful on-chain persistent collections are provided. All keys, values and elements are of type `Bytes`. #### Vector Vector is an iterable implementation of a vector that stores its content on the trie. Usage: ```js import {Vector} from 'near-sdk-js' // in contract class constructor: constructor() { super() this.v = new Vector('my_prefix_') } // Override the deserializer to load the vector from chain deserialize() { super.deserialize() this.v = Object.assign(new Vector, this.v) } someMethod() { // insert this.v.push('abc') this.v.push('def') this.v.push('ghi') // batch insert, extend: this.v.extend(['xyz', '123']) // get let first = this.v.get(0) // remove, move the last element to the given index this.v.swapRemove(0) // replace this.v.replace(1, 'jkl') // remove the last this.v.pop() // len, isEmpty let len = this.v.len() let isEmpty = this.v.isEmpty() // iterate for (let element of this.v) { near.log(element) } // toArray, convert to JavaScript Array let a = this.v.toArray() // clear this.v.clear() } ``` #### LookupMap LookupMap is a non-iterable implementation of a map that stores its content directly on the trie. It's like a big hash map, but on the trie. Usage: ```js import {LookupMap} from 'near-sdk-js' // in contract class constructor: constructor() { super() this.m = new LookupMap('prefix_a') } // Override the deserializer to load the map from chain deserialize() { super.deserialize() this.m = Object.assign(new LookupMap, this.m) } someMethod() { // insert this.m.set('abc', 'aaa') this.m.set('def', 'bbb') this.m.set('ghi', 'ccc') // batch insert, extend: this.m.extend([['xyz', '123'], ['key2', 'value2']]) // check exist let exist = this.m.containsKey('abc') // get let value = this.m.get('abc') // remove this.m.remove('def') // replace this.m.set('ghi', 'ddd') } ``` #### LookupSet LookupSet is a non-iterable implementation of a set that stores its content directly on the trie.
It's like LookupMap, but it only stores whether the value is present. Usage: ```js import {LookupSet} from 'near-sdk-js' // in contract class constructor: constructor() { super() this.s = new LookupSet('prefix_b') } // Override the deserializer to load the set from chain deserialize() { super.deserialize() this.s = Object.assign(new LookupSet, this.s) } someMethod() { // insert this.s.set('abc') this.s.set('def') this.s.set('ghi') // batch insert, extend: this.s.extend(['xyz', '123']) // check exist let exist = this.s.contains('abc') // remove this.s.remove('def') } ``` #### UnorderedMap UnorderedMap is an iterable implementation of a map that stores its content directly on the trie. Usage: ```js import {UnorderedMap} from 'near-sdk-js' // in contract class constructor: constructor() { super() this.m = new UnorderedMap('prefix_c') } // Override the deserializer to load the map from chain deserialize() { super.deserialize() this.m.keys = Object.assign(new Vector, this.m.keys) this.m.values = Object.assign(new Vector, this.m.values) this.m = Object.assign(new UnorderedMap, this.m) } someMethod() { // insert this.m.set('abc', 'aaa') this.m.set('def', 'bbb') this.m.set('ghi', 'ccc') // batch insert, extend: this.m.extend([['xyz', '123'], ['key2', 'value2']]) // get let value = this.m.get('abc') // remove this.m.remove('def') // replace this.m.set('ghi', 'ddd') // len, isEmpty let len = this.m.len() let isEmpty = this.m.isEmpty() // iterate for (let [k, v] of this.m) { near.log(k+v) } // toArray, convert to JavaScript Array let a = this.m.toArray() // clear this.m.clear() } ``` #### UnorderedSet UnorderedSet is an iterable implementation of a set that stores its content directly on the trie. It's like UnorderedMap but it only stores whether the value is present. Usage: ```js import {UnorderedSet} from 'near-sdk-js' // in contract class constructor: constructor() { super() this.s = new UnorderedSet('prefix_d') } // Override the deserializer to load the set from chain deserialize() { super.deserialize() this.s.elements = Object.assign(new Vector, this.s.elements) this.s = Object.assign(new UnorderedSet, this.s) } someMethod() { // insert this.s.set('abc') this.s.set('def') this.s.set('ghi') // batch insert, extend: this.s.extend(['xyz', '123']) // check exist let exist = this.s.contains('abc') // remove this.s.remove('def') // len, isEmpty let len = this.s.len() let isEmpty = this.s.isEmpty() // iterate for (let e of this.s) { near.log(e) } // toArray, convert to JavaScript Array let a = this.s.toArray() // clear this.s.clear() } ``` ### APIs not available in JSVM Due to the architecture of the JSVM, some NEAR host functions, part of the Standalone SDK or Rust SDK, are not relevant or are replaced by the above JSVM-specific APIs. Those unavailable APIs are explained here. - `current_account_id` would always put the account id of the JavaScript VM contract account in the given register. The name `current_account_id` is therefore confusing and not as helpful as in a Rust contract. In some cases, a developer may want to get the JavaScript VM contract account name, for example, to determine whether it's running on testnet or mainnet and behave differently. So we expose this functionality under `jsvm_account_id()`. - `input` puts the arguments passed to the contract call in the given register. In the JavaScript VM, this is encoded as `"js_contract_name\0method_name\0args...`. This format isn't very convenient for developers; therefore, separate APIs `jsvm_js_contract_name`, `jsvm_method_name` and `jsvm_args` are provided.
- `storage_usage` returns the storage bytes used by the JavaScript VM contract. Users don't care about the storage usage of the JSVM; instead, they care about the storage usage of a given JavaScript contract. This can be obtained with `storage_read` by summing the corresponding `register_len` values. - `account_balance` and `account_locked_balance` return the balance and locked balance of the JavaScript VM. These are also not of interest to users. - `value_return` is a NEAR primitive that puts the value to return in a receipt. However, we want to access it as a JavaScript return value in a cross contract call, so we have a new API `jsvmValueReturn`, which does return the value in the receipt and also as a JavaScript value returned by `jsvm_call`. The `jsvmValueReturn` should be used whenever you need `value_return`. - `abort` is intended to mark an error location (line number). A full stacktrace with line numbers is provided by QuickJS and is available when you throw a JS Error, so this API isn't needed. - Promise APIs act on the JSVM contract and could create subaccounts or use the balance of the JSVM account. The JSVM is meant to be a common VM used by the community instead of a Rust contract owned by the deployer. Therefore, promise APIs are not allowed. - `storage_write` and `storage_remove` have access to all JavaScript contract code and state deployed on the JSVM. Users can only write to the code and state owned by their account, as a substate of the JSVM, therefore these two APIs are disallowed. Use `jsvm_storage_write` and `jsvm_storage_remove` instead. Reading other people's code and state is allowed, as they're public as part of the blockchain anyway. ## Advanced guides ### Manual setup with npm package You can also lay out your project by installing the npm package manually: ``` yarn add near-sdk-js # or npm install near-sdk-js ``` ### NEAR-SDK-JS contributor setup It is tested on Ubuntu 20.04, Intel Mac and M1 Mac. Other Linux distributions should also work, but they are not tested. 1. Make sure you have `wget`, `make`, `cmake` and `nodejs`. On Linux, also make sure you have `gcc`. 2. Run `make` to get the platform-specific `qjsc` and the `jsvm` contract in the `jsvm/build` folder. ### Run NEAR-SDK-JS tests See https://github.com/near/near-sdk-js/tree/master/tests ### Low level way to invoke NEAR-CLI The `near js` subcommand in near-cli is a recent feature. Under the hood, it encodes a special function call to the jsvm contract.
#### Deploy a JS contract <details> <summary><strong>The equivalent raw command is:</strong></summary> <p> near call <jsvm-account> deploy_js_contract --accountId <your-account> --args $(cat <contract-name>.base64) --base64 --deposit 0.1 </p> </details> #### Call a JS contract <details> <summary><strong>The equivalent raw command is:</strong></summary> <p> near call <jsvm-account> call_js_contract --accountId <your-account> --args <encoded-args> --base64 # where `<encoded-args>` can be obtained by: node scripts/encode_call.js <your-account> <method-name> <args> </p> </details> #### Remove a JS contract <details> <summary><strong>The equivalent raw command is:</strong></summary> <p> near call <jsvm-account> remove_js_contract --accountId <your-account> </p> </details> # Examples of contracts written in JS with the use of `near-sdk-js` ## Install dependencies ```bash yarn ``` ## Build contracts ```bash yarn build ``` ## Run Tests ```bash yarn test ``` ## Example specific info - [Status Message](README-STATUS-MESSAGE.md) - [Counter](README-COUNTER.md) - [Cross-contract call](README-CROSS-CONTRACT-CALL.md) - [NFT](README-NFT.md) - [FT](README-FT.md) - [Lockable FT](README-LOCKABLE-FT.md) - [Clean State](README-CLEAN-STATE.md) # NEAR-SDK-JS Tests This tests the functionality of the high level APIs of NEAR-SDK-JS. Currently, it directly tests all collections and indirectly tests all decorators, serialization/deserialization, utils, code generation and some important APIs. The majority of near-sdk-js can be seen as tested. # Run tests ``` yarn yarn build yarn test ``` # Add a new test Create a test contract that covers the API you want to test in `src/`. Add a build command in `build.sh`. Write an ava test in `__tests__`.
Anand1337_anand
.buildkite pipeline.yml .cargo config.toml .config nextest.toml .github ISSUE_TEMPLATE bug_report.md PULL_REQUEST_TEMPLATE feature_stabilization.md weekly-digest.yml workflows book.yml mac_binary.yml mac_m1_binary.yml .gitpod.yml ATTRIBUTIONS.md CHANGELOG.md CODE_OF_CONDUCT.md CONTRIBUTING.md Cargo.toml README.md SECURITY.md chain chain-primitives Cargo.toml README.md src error.rs lib.rs chain Cargo.toml src block_processing_utils.rs blocks_delay_tracker.rs chain.rs chunks_store.rs crypto_hash_timer.rs doomslug.rs flat_storage_creator.rs lib.rs lightclient.rs metrics.rs migrations.rs missing_chunks.rs state_request_tracker.rs store.rs store_validator.rs store_validator validate.rs test_utils.rs test_utils kv_runtime.rs validator_schedule.rs tests challenges.rs doomslug.rs gc.rs mod.rs simple_chain.rs sync_chain.rs types.rs validate.rs chunks-primitives Cargo.toml README.md src error.rs lib.rs chunks Cargo.toml README.md src chunk_cache.rs client.rs lib.rs logic.rs metrics.rs test_utils.rs client-primitives Cargo.toml src debug.rs lib.rs types.rs client Cargo.toml src adapter.rs adversarial.rs client.rs client_actor.rs config_updater.rs debug.rs info.rs lib.rs metrics.rs rocksdb_metrics.rs sync block.rs epoch.rs header.rs mod.rs state.rs test_utils.rs tests bug_repros.rs catching_up.rs chunks_management.rs consensus.rs cross_shard_tx.rs doomslug.rs maintenance_windows.rs mod.rs process_blocks.rs query_client.rs view_client.rs epoch-manager Cargo.toml README.md src adapter.rs lib.rs proposals.rs reward_calculator.rs shard_assignment.rs test_utils.rs tests mod.rs random_epochs.rs types.rs validator_selection.rs indexer-primitives Cargo.toml README.md src lib.rs indexer CHANGELOG.md Cargo.toml README.md src lib.rs streamer errors.rs fetchers.rs metrics.rs mod.rs utils.rs jsonrpc-adversarial-primitives Cargo.toml src lib.rs jsonrpc-primitives Cargo.toml src errors.rs lib.rs message.rs types blocks.rs changes.rs chunks.rs client_config.rs config.rs gas_price.rs light_client.rs maintenance.rs mod.rs network_info.rs query.rs receipts.rs sandbox.rs split_storage.rs status.rs transactions.rs validator.rs jsonrpc CHANGELOG.md Cargo.toml README.md build_errors_schema.sh client Cargo.toml src lib.rs fuzz Cargo.toml fuzz_targets fuzz_target_1.rs jsonrpc-tests Cargo.toml res genesis_config.json src lib.rs tests http_query.rs rpc_query.rs rpc_transactions.rs res chain_n_chunk_info.html debug.html epoch_info.html last_blocks.html last_blocks.js network_info.css network_info.html network_info.js rpc_errors_schema.json sync.html tier1_network_info.html validator.html src api blocks.rs changes.rs chunks.rs client_config.rs config.rs gas_price.rs light_client.rs maintenance.rs mod.rs network_info.rs query.rs receipts.rs sandbox.rs split_storage.rs status.rs transactions.rs validator.rs lib.rs metrics.rs network Cargo.toml build.rs src accounts_data mod.rs tests.rs actix.rs blacklist.rs broadcast mod.rs tests.rs client.rs concurrency arc_mutex.rs atomic_cell.rs demux.rs mod.rs rate.rs rayon.rs runtime.rs tests.rs config.rs config_json.rs debug.rs lib.rs network_protocol borsh.rs borsh_conv.rs edge.rs mod.rs peer.rs proto_conv account_key.rs crypto.rs handshake.rs mod.rs net.rs peer_message.rs time.rs trace_context.rs util.rs testonly.rs tests.rs peer mod.rs peer_actor.rs stream.rs testonly.rs tests communication.rs mod.rs stream.rs tracker.rs transfer_stats.rs peer_manager connection mod.rs tests.rs mod.rs network_state mod.rs routing.rs tier1.rs peer_manager_actor.rs peer_store mod.rs testonly.rs tests.rs 
testonly.rs tests accounts_data.rs connection_pool.rs mod.rs nonce.rs routing.rs tier1.rs private_actix.rs raw connection.rs mod.rs tests.rs routing bfs.rs edge.rs graph mod.rs tests.rs mod.rs route_back_cache.rs routing_table_view mod.rs tests.rs sink.rs stats metrics.rs mod.rs store mod.rs schema mod.rs tests.rs testonly.rs tcp.rs test_utils.rs testonly fake_client.rs mod.rs stream.rs time.rs types.rs pool Cargo.toml README.md src lib.rs metrics.rs types.rs rosetta-rpc CHANGELOG.md Cargo.toml README.md src adapters mod.rs transactions.rs validated_operations add_key.rs create_account.rs delete_account.rs delete_key.rs deploy_contract.rs function_call.rs initiate_add_key.rs initiate_create_account.rs initiate_delete_account.rs initiate_delete_key.rs initiate_deploy_contract.rs initiate_function_call.rs mod.rs refund_delete_account.rs stake.rs transfer.rs config.rs errors.rs lib.rs models.rs types.rs utils.rs telemetry Cargo.toml README.md src lib.rs metrics.rs h-1 | core account-id Cargo.toml README.md fuzz Cargo.toml README.md fuzz_targets borsh.rs serde.rs src borsh.rs errors.rs lib.rs serde.rs chain-configs Cargo.toml README.md src client_config.rs genesis_config.rs genesis_validate.rs lib.rs metrics.rs updateable_config.rs crypto Cargo.toml src errors.rs hash.rs key_conversion.rs key_file.rs lib.rs signature.rs signer.rs test_utils.rs traits.rs util.rs vrf.rs dyn-configs Cargo.toml README.md src lib.rs metrics.rs o11y Cargo.toml README.md benches metrics.rs src context.rs io_tracer.rs lib.rs log_config.rs macros.rs metrics.rs pretty.rs testonly.rs testonly tracing_capture.rs primitives-core Cargo.toml src account.rs config.rs contract.rs hash.rs lib.rs parameter.rs profile.rs profile profile_v2.rs runtime fees.rs mod.rs serialize.rs types.rs primitives Cargo.toml benches serialization.rs res README.md src block.rs block_header.rs challenge.rs epoch_manager.rs errors.rs lib.rs merkle.rs network.rs rand.rs receipt.rs runtime apply_state.rs config.rs config_store.rs migration_data.rs mod.rs parameter_table.rs sandbox.rs shard_layout.rs sharding.rs sharding shard_chunk_header_inner.rs state.rs state_part.rs state_record.rs syncing.rs telemetry.rs test_utils.rs time.rs transaction.rs trie_key.rs types.rs upgrade_schedule.rs utils.rs utils min_heap.rs validator_signer.rs version.rs views.rs store Cargo.toml benches store_bench.rs trie_bench.rs src cold_storage.rs columns.rs config.rs db.rs db colddb.rs refcount.rs rocksdb.rs rocksdb instance_tracker.rs snapshot.rs slice.rs testdb.rs flat_state.rs lib.rs metadata.rs metrics.rs migrations.rs opener.rs sync_utils.rs test_utils.rs trie config.rs insert_delete.rs iterator.rs mod.rs nibble_slice.rs prefetching_trie_storage.rs shard_tries.rs split_state.rs state_parts.rs trie_storage.rs trie_tests.rs update.rs update iterator.rs Host functions Actions debug_scripts READEME.md __init__.py request_chain_info.py send_validator_logs.py tests __init__.py send_validator_logs_test.py deny.toml docs README.md SUMMARY.md advanced_configuration networking.md architecture README.md gas README.md estimator.md gas_profile.md parameter_definition.md how README.md cross-shard.md epoch.md gc.md meta-tx.md proofs.md serialization.md sync.md tx_receipts.md tx_routing.md network.md next README.md catchup_and_state_sync.md malicious_chunk_producer_and_phase2.md storage.md storage database.md flat_storage.md flow.md trie.md book.toml images architecture.svg logo.svg misc README.md practices README.md docs.md fast_builds.md protocol_upgrade.md rust.md 
security_vulnerabilities.md style.md testing README.md python_tests.md test_utils.md tracking_issues.md workflows README.md deploy_a_contract.md gas_estimations.md localnet_on_many_machines.md run_a_node.md genesis-tools README.md genesis-csv-to-json Cargo.toml src csv_parser.rs csv_to_json_configs.rs main.rs serde_with.rs genesis-populate Cargo.toml src lib.rs main.rs state_dump.rs keypair-generator Cargo.toml src main.rs integration-tests Cargo.toml src genesis_helpers.rs lib.rs node mod.rs process_node.rs runtime_node.rs thread_node.rs runtime_utils.rs test_helpers.rs tests client benchmarks.rs challenges.rs chunks_management.rs cold_storage.rs features.rs features access_key_nonce_for_implicit_accounts.rs account_id_in_function_call_permission.rs adversarial_behaviors.rs cap_max_gas_price.rs chunk_nodes_cache.rs fix_contract_loading_cost.rs fix_storage_usage.rs increase_deployment_cost.rs limit_contract_functions_number.rs lower_storage_key_limit.rs restore_receipts_after_fix_apply_chunks.rs wasmer2.rs zero_balance_account.rs flat_storage.rs mod.rs process_blocks.rs runtimes.rs sandbox.rs sharding_upgrade.rs mod.rs nearcore mod.rs node_cluster.rs rpc_error_structs.rs rpc_nodes.rs run_nodes.rs stake_nodes.rs sync_nodes.rs sync_state_nodes.rs track_shards.rs network churn_attack.rs full_network.rs mod.rs peer_handshake.rs runner.rs stress_network.rs runtime deployment.rs mod.rs sanity_checks.rs state_viewer.rs test_evil_contracts.rs standard_cases mod.rs rpc.rs runtime.rs test_catchup.rs test_errors.rs test_overflows.rs test_simple.rs test_tps_regression.rs user mod.rs rpc_user.rs runtime_user.rs nearcore Cargo.toml benches store.rs res example-config-gc.json example-config-no-gc.json src append_only_map.rs cold_storage.rs config.rs download_file.rs dyn_config.rs lib.rs metrics.rs migrations.rs runtime errors.rs mod.rs shard_tracker.rs tests economics.rs neard Cargo.toml build.rs res invalid_proof.json proof_example.json src cli.rs main.rs nightly README.md expensive.txt fuzz.toml nayduck.py nightly.txt pytest-adversarial.txt pytest-contracts.txt pytest-sanity.txt pytest-spec.txt pytest-stress.txt pytest.txt sandbox.txt pytest README.md __init__.py endtoend __init__.py endtoend.py lib __init__.py account.py branches.py cluster.py configured_logger.py data.py key.py lightclient.py messages __init__.py block.py bridge.py crypto.py network.py shard.py tx.py metrics.py mocknet.py mocknet_helpers.py network.py peer.py populate.py proxy.py proxy_instances.py serializer.py transaction.py utils.py remote.json requirements.txt tests __init__.py adversarial fork_sync.py gc_rollback.py malicious_chain.py start_from_genesis.py contracts deploy_call_smart_contract.py gibberish.py infinite_loops.py delete_remote_nodes.py loadtest README.md contract Cargo.toml build.sh src lib.rs loadtest.py setup.py mocknet __init__.py helpers __init__.py genesis_updater.py load_test_spoon_helper.py load_testing_add_and_delete_helper.py state_contract.rs load_test_betanet.py load_test_spoon.py mirror.py run_adversenet.py stop.py replay README.md replay.py sandbox fast_forward.py fast_forward_epoch_boundary.py patch_state.py sanity __init__.py backward_compatible.py block_chunk_signature.py block_production.py block_sync.py block_sync_archival.py concurrent_function_calls.py db_migration.py docker.py epoch_switches.py garbage_collection.py garbage_collection1.py garbage_collection_sharding_upgrade.py gc_after_sync.py gc_after_sync1.py gc_sync_after_sync.py handshake_tie_resolution.py large_messages.py lightclnt.py 
meta_tx.py network_drop_package.py one_val.py proxy_example.py proxy_restart.py proxy_simple.py recompress_storage.py repro_2916.py restart.py rosetta.py rpc_finality.py rpc_hash.py rpc_light_client_execution_outcome_proof.py rpc_max_gas_burnt.py rpc_state_changes.py rpc_tx_forwarding.py rpc_tx_status.py rpc_tx_submission.py skip_epoch.py spin_up_cluster.py split_storage.py staking1.py staking2.py staking_repro1.py staking_repro2.py state_migration.py state_sync.py state_sync1.py state_sync2.py state_sync3.py state_sync4.py state_sync5.py state_sync_fail.py state_sync_late.py state_sync_massive.py state_sync_massive_validator.py state_sync_routed.py switch_node_key.py sync_ban.py sync_chunks_from_archival.py transactions.py upgradable.py validator_switch.py validator_switch_key.py shardnet README.md __init__.py collect_ips.py restake.py scripts create_account.sh restaked.sh spec network peers_request.py stress hundred_nodes 100_node_block_production.py README.md block_chunks.py collect_logs.py create_gcloud_image.py node_rotation.py start_100_nodes.py watch_fork.py network_stress.py saturate_routing_table.py stress.py tools mirror contract Cargo.toml src lib.rs mirror_utils.py offline_test.py online_test.py runtime CHANGELOG.md near-test-contracts Cargo.toml README.md build.rs contract-for-fuzzing-rs Cargo.toml src lib.rs estimator-contract Cargo.toml src lib.rs src lib.rs test-contract-rs Cargo.toml src lib.rs test-contract-ts README.md assembly index.ts tsconfig.json package-lock.json package.json near-vm-errors Cargo.toml README.md src lib.rs near-vm-logic Cargo.toml README.md src alt_bn128.rs context.rs dependencies.rs gas_counter.rs lib.rs logic.rs mocks mock_external.rs mock_memory.rs mod.rs receipt_manager.rs test_utils.rs tests alt_bn128.rs context.rs ecrecover-tests.json ed25519_verify.rs gas_counter.rs helpers.rs iterators.rs logs.rs miscs.rs mod.rs promises.rs registers.rs storage_read_write.rs storage_usage.rs view_method.rs vm_logic_builder.rs types.rs utils.rs vmstate.rs near-vm-runner Cargo.toml FAQ.md README.md RUNTIMES.md fuzz Cargo.toml fuzz_targets diffrunner.rs runner.rs src lib.rs src cache.rs errors.rs imports.rs instrument.rs instrument gas mod.rs validation.rs rules.rs stack_height max_height.rs mod.rs thunk.rs lib.rs memory.rs prepare.rs runner.rs tests.rs tests cache.rs compile_errors.rs fuzzers.rs rs_contract.rs runtime_errors.rs test_builder.rs ts_contract.rs wasm_validation.rs vm_kind.rs wasmer2_runner.rs wasmer_runner.rs wasmtime_runner.rs runtime-params-estimator Cargo.toml README.md compiler.sh costs.txt emu-cost README.md build.sh counter_plugin counter.c test.c data_builder.py io_cost.sh run.sh estimate.sh estimator-warehouse Cargo.toml README.md src check.rs db.rs estimate.rs import.rs init.sql main.rs zulip.rs setup.sh src action_costs.rs config.rs cost.rs cost_table.rs costs_to_runtime_config.rs estimator_context.rs estimator_params.rs function_call.rs gas_cost.rs gas_metering.rs least_squares.rs lib.rs main.rs qemu.rs replay.rs replay cache_stats.rs fold_db_ops.rs gas_charges.rs rocksdb.rs transaction_builder.rs trie.rs utils.rs vm_estimator.rs runtime Cargo.toml src actions.rs adapter.rs balance_checker.rs config.rs ext.rs genesis.rs lib.rs metrics.rs prefetch.rs state_viewer errors.rs mod.rs verifier.rs tests runtime_group_tools mod.rs random_config.rs test_async_calls.rs |:--------------:| rust-toolchain.toml rustfmt.toml scripts __init__.py check_fuzzing.py check_nightly.py check_pytests.py flaky_test_check.py install_precommit.sh mac-release.sh 
migrations 10-gas-price-fix.py 107-2-revert-add-account-versioning.py 107-add-account-versioning.py 11-runtime-cost-adjustment.py 12-fix-inflation.py 13-block-merkle-root.py 14-update-wasmer.py 15-col-trie-changes.py 16-expose-validator-method.py 17-lightclient.py 18-exclude-debug-info-from-outcome-proof.py 19-col-chunks-height.py 20-refund-allowance.py 21-minimum-attached-gas.py 22-protocol-upgrade.py 23-delete_action_last.py 24-max-gas-price.py 25-minimum-stake-divisor.py 26-introduce-pessimistic-gas.py 27-add-amount-burnt-to-execution-outcome.py 28-add-executor-id-to-execution-outcome.py 29-add-compile-time-fees.py 5-preserve-height.py 6-state-stake.py 7-account-registrar.py 8-fraction.py 9-state-record-data.py nayduck.py nodelib.py parallel_coverage.py requirements_check.sh run_clippy.sh run_docker.sh setup_hooks.sh state mega-migrate.py split-genesis.py update_res.py testlib.py test-utils actix-test-utils Cargo.toml src lib.rs logger Cargo.toml runtime-tester Cargo.toml README.md fuzz Cargo.toml README.md fuzz_targets runtime_fuzzer.rs src fuzzing.rs lib.rs run_test.rs scenario_builder.rs store-validator Cargo.toml src main.rs testlib Cargo.toml src fees_utils.rs lib.rs process_blocks.rs runtime_utils.rs tools amend-genesis Cargo.toml src cli.rs lib.rs chainsync-loadtest Cargo.toml README.md src concurrency ctx.rs ctx_test.rs mod.rs once.rs rate_limiter.rs scope.rs scope_test.rs weak_map.rs fetch_chain.rs main.rs network.rs cold-store Cargo.toml README.md src cli.rs lib.rs debug-ui README.md package.json public index.html src index.css react-app-env.d.ts tsconfig.json delay-detector Cargo.toml README.md src lib.rs indexer example Cargo.toml README.md src configs.rs main.rs mirror Cargo.toml README.md src chain_tracker.rs cli.rs genesis.rs key_mapping.rs lib.rs metrics.rs offline.rs online.rs secret.rs mock-node Cargo.toml README.md benches README.md sync.rs src lib.rs main.rs setup.rs ping Cargo.toml src cli.rs csv.rs lib.rs metrics.rs restaked Cargo.toml src main.rs rpctypegen core Cargo.toml src lib.rs macro Cargo.toml src lib.rs speedy_sync Cargo.toml README.md src main.rs state-parts Cargo.toml src cli.rs lib.rs state-viewer Cargo.toml README.md src apply_chain_range.rs apply_chunk.rs cli.rs commands.rs contract_accounts.rs dump_state_parts.rs epoch_info.rs lib.rs rocksdb_stats.rs state_dump.rs tx_dump.rs storage-usage-delta-calculator Cargo.toml README.md src main.rs themis Cargo.toml src main.rs rules.rs style.rs types.rs utils.rs shard {} gas used: {} shard {} old chunk (#{}) shard {} chunk missing utils mainnet-res Cargo.toml README.md res mainnet_genesis.json mainnet_restored_receipts.json storage_usage_delta.json src lib.rs tests load_genesis.rs near-cache Cargo.toml benches cache.rs src cell.rs lib.rs sync.rs near-performance-metrics-macros Cargo.toml src lib.rs near-performance-metrics Cargo.toml src actix_disabled.rs actix_enabled.rs lib.rs process.rs stats_disabled.rs stats_enabled.rs near-stable-hasher Cargo.toml README.md src lib.rs stdx Cargo.toml src lib.rs
# Nightly tests lists The directory contains test list files which can be sent to NayDuck to request a run of the tests. Most notably, the `nightly.txt` file contains all the tests that NayDuck runs once a day on the head of the master branch of the repository. Nightly build results are available on [NayDuck](https://nayduck.near.org/). ## List file format Each list file is read line by line. Empty lines and lines starting with a hash are ignored. The rest either specifies a test to run or a list file to include. The general syntax of a line defining a test is: <category> [--skip-build] [--timeout=<timeout>] [--release] [--remote] <args>... [--features <features>] `<category>` specifies the category of the test and can be `pytest` or `expensive`. The meaning of `<args>` depends on the test category. ### pytest tests The `pytest` tests are run by executing a file within the `pytest/tests` directory. The file is executed via the `python` interpreter (and confusingly not via the `pytest` command) so it must actually run the tests rather than only defining them as test functions. In the test specification, the path to the file needs to be given (excluding the `pytest/tests` prefix) and anything that follows is passed as arguments to the script. For example: pytest sanity/lightclnt.py pytest sanity/state_sync_routed.py manytx 115 Note: NayDuck also handles a `mocknet` test category. It is now deprecated and is treated like `pytest` with the `--skip-build` flag implicitly set. ### expensive tests The `expensive` tests run a test binary and execute a specific test in it. (Test binaries are those built via `cargo test --no-run`). While this can be used to run any Rust test, the intention is to run expensive tests only. Those are the tests which are ignored unless the `expensive_tests` crate feature is enabled. Such tests should be marked with a `cfg_attr` macro, e.g.: #[test] #[cfg_attr(not(feature = "expensive_tests"), ignore)] fn test_gc_boundaries_large() { /* ... */ } The arguments of an expensive test specify the package in which the test is defined, the test binary name and the full path to the test function. For example: expensive nearcore test_tps_regression test::test_highload (Currently the package name is ignored but it may change in the future so make sure it’s set correctly). The path to the test function must match exactly and the test binary is called with the `--exact` argument. ### Other arguments As mentioned, there are additional arguments that can go between the test category and the test specification arguments. Those are `--skip-build`, `--timeout`, `--release` and `--remote`. `--skip-build` causes the build step to be skipped for the test. This means that the test doesn’t have access to build artefacts (located in `target/debug` or `target/release`) but also doesn’t need to wait for the build to finish and thus can start faster. `--timeout=<timeout>` specifies the time after which the test will be stopped and considered failed. `<timeout>` is an integer with an optional `s`, `m` or `h` suffix. If no suffix is given, `s` is assumed. The default timeout is three minutes. For example, the following increases the timeout for a test to four minutes: pytest --timeout=4m sanity/validator_switch.py `--release` makes the build use a release profile rather than a dev profile. In other words, all `cargo` invocations are passed an additional `--release` argument. This is supported but currently not used by any of the nightly tests.
`--remote` configures pytest tests to use NEAR test nodes started on spot GCP machines rather than executing a small cluster on the host where the test is running. No nightly test uses this feature and it is not guaranteed to still work. Lastly, at the end of the test specification line, additional features can be given in the form of `--features <features>` arguments. Similarly to `--release`, this results in the given features being enabled in builds. Note that the `test_features` Cargo feature is always enabled so there's no need to specify it explicitly. Note that with the `--skip-build` switch the `--release` and `--features` flags are essentially ignored since they only affect the build and are not passed to the test. ### Include directive To help organise tests, the file format also supports `./<path>` syntax for including contents of other files in the list. The includes are handled recursively though at the moment there’s a limit of three levels before the parser starts ignoring the includes. For example, the `nightly.txt` file may just look as follows: ./sandbox.txt ./pytest.txt ./expensive.txt with individual tests listed in each of the referenced files. This keeps the files tidier. Note that any includes accessible from the `nightly.txt` file must live within the `nightly` directory and use the `.txt` file extension. Using arbitrary paths and extensions will work locally but it will break NayDuck’s nightly runs. ## Scheduling a run Every 24 hours NayDuck checks if the master branch has changed and, if it has, schedules a new run including all tests listed in the `nightly.txt` file. It’s also possible to request a run manually, in which case an arbitrary set of tests can be run on an arbitrary commit (so long as it exists in the near/nearcore repository). This can be done with the `nayduck.py` script which takes the list file as an argument. For example, to run spec tests one might invoke: ./scripts/nayduck.py -t nightly/pytest-spec.txt With no other arguments the tests will be run against the checked-out commit of the current branch. It is possible to specify which branch and commit to run against with the `--branch` and `--sha` arguments. For full usage refer to the `./scripts/nayduck.py --help` output. NayDuck cannot run tests against a local working directory or even against commits in a private fork of the nearcore repository. To schedule a NayDuck run, the commit must first be pushed to nearcore. The commit does not need to be on the master branch; testing a feature branch is supported. On success the script outputs a link to the page which can be used to see the status of the run. Depending on which tests were scheduled the run can take over an hour to finish. ## Creating new tests New tests can be created either as a Rust test or a pytest. If a Rust test is long-running (or otherwise requires a lot of resources) and intended to be run as a nightly test on NayDuck it should be marked with a `#[cfg(feature = "expensive_tests")]` directive (either on the test function or the module containing it). With that, the tests will not be part of a `cargo test` run performed on every commit, but NayDuck will be able to execute them. Apart from that, expensive Rust tests work exactly the same as any other Rust tests. pytests are defined as scripts in the `pytest/tests` directory. As previously mentioned, even though the directory is called pytest, when run on NayDuck the scripts are run directly via `python`. This means that they need to execute the tests when run as the main module rather than just defining test functions. To make that happen it’s best to define `test_<foo>` functions with test bodies and then execute all those functions in a code fragment guarded by an `if __name__ == '__main__'` condition.
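A minimal sketch of such a script is shown below. The test names and bodies are hypothetical placeholders; real tests typically drive a small cluster via the helpers in `pytest/lib`.

```python
# Minimal sketch of a NayDuck-compatible pytest script. Test names and bodies
# are hypothetical placeholders; real tests usually start nodes via pytest/lib.
import sys

def test_feature_works():
    # ... start nodes, send transactions, assert on the result ...
    assert 2 + 2 == 4

def test_feature_survives_restart():
    # ... another scenario ...
    assert 1 + 1 == 2

if __name__ == '__main__':
    # NayDuck executes this file via `python`, not `pytest`, so the tests must
    # be invoked explicitly when the script runs as the main module.
    test_feature_works()
    test_feature_survives_restart()
    sys.exit(0)
```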
### Check scripts Unless tests are included (potentially transitively) in the `nightly.txt` file, NayDuck won’t run them. As part of pull request checks, verification is performed to make sure that no test is forgotten and all new tests are included in the nightly list. That’s done by the `scripts/check_nightly.py` and `scripts/check_pytests.py` scripts. They list all the expensive and pytest tests defined in the repository and then check whether they are all mentioned in the nightly list. The scripts recognise commented out tests so if a test is broken it can be removed from the list by commenting it out. However, such a test must be preceded by a TODO comment mentioning an issue which tracks the breakage. For example: # TODO(#2411): Enable them when we fix the issue with proxy shutdown #pytest --timeout=900 sanity/sync_ban.py true #pytest --timeout=900 sanity/sync_ban.py false The include directive can be commented out like that as well though crucially there must be no space between the hash sign and the dot. For example: # TODO(#2411): Working on a fix. #./bridge.txt ## Core Resource Files Stores resource data which is part of the protocol and stable enough to be moved outside of the code. ### `runtime_configs` All parameter values used to configure the runtime are defined in `parameters.yaml`. Parameters added or changed in protocol upgrades are defined in differential config files with a naming scheme like `V.yaml`, where `V` is the new version. The content of the base configuration file is one flat list of typed keys and untyped values. Key names are defined in `core/primitives-core/src/parameter.rs`. The format of the differential files is slightly different. Inserting new parameters uses the following syntax: `key: { new: value }`. Parameters that change are specified like this: `key: { old: old_value, new: new_value }`. Removing a previously defined parameter for a new version is done as follows: `key: { old: old_value }`. This causes the parameter value to be undefined in newer versions, which generally means the default value is used to fill in the `RuntimeConfig` object. # Introduction Welcome to the nearcore development guide! The target audience of this guide is developers of nearcore itself. If you are a user of NEAR (either a contract developer or a validator running a node), please refer to the user docs at <https://docs.near.org>. This guide is built with [mdBook](https://rust-lang.github.io/mdBook/) from sources in the [nearcore repository](https://github.com/near/nearcore/). You can edit it by pressing the "edit" icon in the top right corner; we welcome all contributions. The guide is hosted at <https://near.github.io/nearcore/>. The guide is organized as a collection of loosely coupled chapters -- you don't need to read them in order, feel free to peruse the TOC, and focus on the interesting bits. The chapters are classified into three parts: * [**Architecture**](./architecture/) talks about how the code works. So, for example, if you are interested in how a transaction flows through the system, look there! * [**Practices**](./practices/) describes, broadly, how we write code. For example, if you want to learn about code style, issue tracking, or debugging performance problems, this is the chapter for you. * Finally, the [**Misc**](./misc/) part holds various assorted bits and pieces.
We are trying to bias ourselves towards writing more docs, so, if you want to document something and it doesn't cleanly map to a category above, just put it in misc! If you are unsure, start with [Architecture Overview](./architecture/) and then read [Run a Node](./practices/workflows/run_a_node.md). # Installation with cargo You can install `cargo-fuzz` with the following command: ```bash cargo install cargo-fuzz ``` # How to see if everything works You can check that everything compiles with the following command: ``` cd test-utils/runtime-tester/fuzz/ env RUSTC_BOOTSTRAP=1 'cargo' 'fuzz' 'run' 'runtime_fuzzer' '--' '-len_control=0' '-prefer_small=0' '-max_len=4000000' '-rss_limit_mb=10240' ``` # Runtime Fuzz Currently only one target is present -- runtime_fuzzer. This target will create random scenarios using the `Arbitrary` trait and execute them. This will keep happening until one scenario fails. To run the fuzz test: ```bash RUSTC_BOOTSTRAP=1 cargo fuzz run runtime_fuzzer ``` `max_len` here is to create bigger scenarios. Fuzzing starts from small inputs (Unstructured) and slowly moves to bigger ones. To get to bigger inputs faster, run the fuzzer with additional arguments: ```bash RUSTC_BOOTSTRAP=1 cargo fuzz run runtime_fuzzer -- -len_control=0 -prefer_small=0 ``` After finding a failing test, cargo fuzz will show the failing Scenario (in debug format), write the path to the failing input, and suggest further debug commands: ```bash Failing input: artifacts/runtime_fuzzer/<id> Output of `std::fmt::Debug`: ** full json for Scenario ** Reproduce with: cargo fuzz run runtime_fuzzer artifacts/runtime_fuzzer/<id> Minimize test case with: cargo fuzz tmin runtime_fuzzer artifacts/runtime_fuzzer/<id> ``` So, reproducing in this case would be: ```bash RUSTC_BOOTSTRAP=1 cargo fuzz run runtime_fuzzer artifacts/runtime_fuzzer/<id> ``` To make a smaller scenario with the same error: ```bash RUSTC_BOOTSTRAP=1 cargo fuzz tmin runtime_fuzzer artifacts/runtime_fuzzer/<id> ``` Writing the Scenario to JSON: ```bash RUSTC_BOOTSTRAP=1 cargo fuzz fmt runtime_fuzzer artifacts/runtime_fuzzer/<id> 2>&1 | sed '1,3d' | tee scenario.json ``` To run a specific scenario.json, use the test from runtime-tester: ```bash cargo test test_scenario_json --release -- --ignored ``` # near-chunks This crate contains functions to handle chunks. In NEAR, a block consists of multiple chunks - at most one per shard. When a chunk is created, the creator encodes its contents using Reed-Solomon encoding (ErasureCoding) and adds cross-shard receipts - creating PartialEncodedChunks that are later sent to all the validators (each validator gets a subset of them). This is done for data availability reasons (so that we need only a part of the validators to reconstruct the whole chunk). You can read more about it in the Nightshade paper (https://near.org/nightshade/). An honest validator will only approve a block if it receives its assigned parts for all chunks in the block - which means that for each chunk, it has `has_all_parts()` returning true. All nodes (validators and non-validators) will only accept/process a block if the following requirements are satisfied: * for every shard it tracks, a node has to have the full chunk, * for every shard it doesn't track, a node has to have the receipts from this shard to all shards. If a node tracks a given shard (that is - it wants to have the whole content of the shard present in local memory) - it will want to receive the necessary number of PartialChunks to be able to reconstruct the whole chunk. As we use Reed-Solomon, this means that it needs `data_shard_count` PartialChunks (which is lower than `total_shard_count`). Afterwards, it will reconstruct the chunk and persist it in the local storage (via chain/client).
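The `data_shard_count` versus `total_shard_count` distinction is the usual k-of-n property of an erasure code: any `data_shard_count` of the encoded parts are enough to rebuild the original data. The snippet below is only a toy illustration of that property (a single XOR parity part), not the Reed-Solomon implementation nearcore actually uses.

```python
# Toy k-of-(k+1) erasure code: k data parts plus one XOR parity part. Purely
# illustrative -- nearcore uses a real Reed-Solomon code that tolerates the
# loss of more than one part.
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_parts):
    """Append a parity part so that any k of the k+1 parts recover the data."""
    return data_parts + [reduce(xor, data_parts)]

def reconstruct(parts):
    """Recover the original data parts when at most one part is missing (None)."""
    missing = [i for i, p in enumerate(parts) if p is None]
    if missing:
        known = [p for p in parts if p is not None]
        parts = list(parts)
        parts[missing[0]] = reduce(xor, known)  # XOR of the rest restores the lost part
    return parts[:-1]  # drop the parity part

chunk_parts = [b"aaaa", b"bbbb", b"cccc"]              # pretend these are partial chunks
encoded = encode(chunk_parts)                          # 3 data parts + 1 parity part
received = [encoded[0], None, encoded[2], encoded[3]]  # one part never arrived
assert reconstruct(received) == chunk_parts
```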
When the PartialEncodedChunk is received (in chain/client) - it will try to validate it immediately if the previous block is already available (via `process_partial_encoded_chunk()`), or store it for future validation (via `store_partial_encoded_chunk()`). ## ShardsManager The ShardsManager is responsible for: * **fetching partial chunks** - it can ask other nodes for the partial chunks that it is missing. It keeps track of the requests via RequestPool and can be asked to resend them when someone calls `resend_chunk_requests` (this is done periodically by the client actor). * **storing partial chunks** - it stores partial chunks within a local cache before they can be used to reconstruct the full chunk. `stored_partial_encoded_chunks` stores non-validated partial chunks while `ChunkCache` stores validated partial chunks. (We need the previous block in order to validate partial chunks.) This data is also used when other nodes are requesting a partial encoded chunk (see below). * **handling partial chunk requests** - when a request for a partial chunk arrives, it handles reading it from the store and returning it to the requesting node * **validating chunks** - once it receives a correct set of partial chunks for the chunk, it can 'accept the chunk' (which means that the validator can sign the block if all chunks are accepted) * **storing full chunks** - it stores the full chunk into our storage via `decode_and_persist_encoded_chunk()`, which calls store_update's `save_chunk()` ## ChunkCache Cache for validated partial chunks that we've seen - it also stores the mapping from blocks to chunk headers. ## RequestPool Tracks the requests for chunks that were sent - so that we don't resend them unless enough time has elapsed since the last attempt. # JSON-RPC API for nearcore The [JSON-RPC](https://www.jsonrpc.org/) API for a nearcore node exposes handles to inspect blockchain data, the network state, and the node state, and allows submitting transactions. ## Guarantees All the APIs that are compiled by default (default-features) and not prefixed with `EXPERIMENTAL_` are kept stable without breaking changes. We also support `near-api-*` bindings to JSON-RPC in the latest state and propagate deprecation warnings through them. The breaking changes (e.g. removal or change of `EXPERIMENTAL_` APIs) are communicated through the CHANGELOG next to this file. ## Policies for API Changes 1. We only add APIs for data that is already available in nearcore storage. 2. We don't violate the guarantees described in the section above. 3. We prefix new APIs with `EXPERIMENTAL_` (see the Experimental API Policies below). 4. We document the API change on [NEAR Docs](https://docs.near.org/api/rpc/introduction) BEFORE merging the change to nearcore. 5. We log changes to the methods and API input/output structures through the CHANGELOG.md file in the jsonrpc crate. ## Experimental API Policies When introducing a new API, we may go one of two ways: 1. Hide the new API behind a feature-flag that is not enabled by default 2. Prefix the method name with `EXPERIMENTAL_` Either way, we need to document the new API in our [RPC endpoint docs](https://docs.near.org/api/rpc/introduction). Stabilization of the Experimental APIs is multistage: 1.
Once the `EXPERIMENTAL_` prefixed API handler lands on master, it starts its lifetime. While the API is Experimental, we have the freedom to play with it and iterate fearlessly (if we need to change it, we change it and only record the change in the CHANGELOG and update the documentation, no need for backwards compatibility). 2. If we feel that the API is stable (having been in **use** for a while), we need to release a new API method without the `EXPERIMENTAL_` prefix while keeping the old method name as an alias for the transition period. 3. Drop the `EXPERIMENTAL_` alias completely when the nearcore version with the stable method name is deployed to the majority of nodes in the network, and most (all) clients have transitioned to the new API. ## Transaction Mirror This is some code that tries to help with the following: We have some chain, let's call it the "source chain", producing blocks and chunks with transactions as usual, and we have another chain, let's call it the "target chain", that starts from state forked from the source chain. Usually this would be done by using the `neard view-state dump-state` command, and using the resulting genesis and records file as the start of the target chain. What we want is to then periodically send the transactions appearing in the source chain after the fork point to the target chain. Ideally, the traffic we see in the target chain will be very similar to the traffic in the source chain. The first approach we might try is to just send the source chain transactions byte-for-byte unaltered to the target chain. This almost works, but not quite, because the `block_hash` field in the transactions refers to blocks on the source chain, so the target chain will reject them. This means we have no choice but to replace the accounts' public keys in the original forked state, so that we can sign transactions with a valid `block_hash` field. So the way we'll use this is that we'll generate the forked state from the source chain using the usual `dump-state` command, and then run: ``` $ mirror prepare --records-file-in "~/.near/output/records.json" --records-file-out "~/.near/output/mapped-records.json" ``` This command will output a records file where the keys have been replaced. And then the logic we end up with when running the transaction generator is something like this: ``` loop { sleep(ONE_SECOND); source_block = fetch_block(source_chain_view_client, height); for chunk in block: for tx in chunk: private_key = map_key(tx.public_key) block_hash = fetch_head_hash(target_chain_view_client) new_tx = sign_tx(private_key, tx.actions, block_hash) send_tx(target_chain_client, new_tx) } ``` So then the question is: what does `map_key()` do? If we don't care about the security of these accounts in the target chain (for example if the target chain is just some throwaway test chain that nobody would have any incentive to mess with), we can just use the bytes of the public key directly as the private key. If we do care somewhat about security, then we pass a `--secret-key-file` argument to the `prepare` command, and pass it as an argument to `map_key()`. Using that makes things a little bit more delicate, since if the generated secret is ever lost, then it will no longer be possible to mirror any traffic to the target chain. Known problems: keys in the source chain added with the `promise_batch_action_add_key*` host functions will not be mapped in the target chain. Maybe a solution could be to replace those keys manually or something?
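For illustration, here is a rough Python sketch of the `map_key()` idea described above. This is not the real implementation (the mirror tool's actual key mapping lives in its Rust sources); in particular, the HMAC construction used when a secret is supplied is just one plausible way to mix it in.

```python
# Illustrative sketch of map_key() as described above -- NOT the real
# implementation. Takes a raw 32-byte ed25519 public key and returns the
# 32-byte seed of the replacement secret key on the target chain.
import hashlib
import hmac

def map_key(public_key, secret=None):
    assert len(public_key) == 32
    if secret is None:
        # Throwaway target chain: just reuse the public key bytes as the secret key.
        return public_key
    # With a --secret-key-file: mix in the secret so only its holder can
    # re-derive the mapped keys (losing the secret makes mirroring impossible).
    return hmac.new(secret, public_key, hashlib.sha256).digest()  # 32 bytes
```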
# near-account-id This crate provides a type for representing a syntactically valid, unique account identifier on the [NEAR](https://near.org) network, according to the [NEAR Account ID](https://docs.near.org/concepts/basics/account#account-id-rules) rules. [![crates.io](https://img.shields.io/crates/v/near-account-id?label=latest)](https://crates.io/crates/near-account-id) [![Documentation](https://docs.rs/near-account-id/badge.svg)](https://docs.rs/near-account-id) ![MIT or Apache 2.0 licensed](https://img.shields.io/crates/l/near-account-id.svg) ## Usage ```rust use near_account_id::AccountId; let alice: AccountId = "alice.near".parse()?; assert!("ƒelicia.near".parse::<AccountId>().is_err()); // (ƒ is not f) ``` See the [docs](https://docs.rs/near-account-id) for more information. ## License Licensed under either of - Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or <http://www.apache.org/licenses/LICENSE-2.0>) - MIT license ([LICENSE-MIT](LICENSE-MIT) or <http://opensource.org/licenses/MIT>) at your option. ## Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. # mock-node This crate hosts libraries to start a test env for a single node by replacing the network module with a mock network environment. The mock network environment simulates the interaction that the client will usually have with other nodes by responding to the client's network messages and broadcasting new blocks. The mock network reads a pre-generated chain history from storage. ## Quick Start ```console $ cargo run --release -p mock-node -F mock_node -- ~/.near/localnet/node0 ``` where the `node0` directory contains some pre-generated chain history in storage. You can find two examples in the ./benches directory. If you are running a mock node for mainnet or testnet on a GCP node, you want to place the new client home dir on an SSD disk for optimal rocksdb performance. Note that the default boot disks of GCP nodes are HDDs, so you need to mount a new SSD disk on your node and put the mock node's home dir there. See https://cloud.google.com/compute/docs/disks/add-persistent-disk for how to attach and mount a new disk to an existing GCP node. See `$ cargo run -p mock-node -F mock_node -- --help` for the list of available options and their documentation. ## Examples #### Replay localnet history ```console $ cargo r -r -p mock-node -F mock_node -- ~/.near/localnet/node0 ``` Here we take the home dir of an existing node in a localnet as the chain history home dir, so the mock network will reproduce the client catching up with the entire history of the localnet from genesis. #### Replay mainnet history from a certain height To replay mainnet or testnet history, in most use cases, we want to start replaying from a certain height, instead of from the genesis block. The following command replays mainnet history from block height 60925880 to block height 60925900. ```console $ cargo r -r -p mock-node -F mock_node -- ~/.near ~/mock_node_home_dir --start_height 60925880 --target-height 60925900 ``` By providing a starting height, the binary will set up the data dir before starting the client, by copying the state at the specified height and other chain info necessary for processing the blocks afterwards (such as block headers and blocks).
This initial setup may take a long time (the exact time depends on your source dir; my experiment took about an hour from a non-archival source dir, and copying from an archival node source dir may take longer as rocksdb is slower). So we suggest specifying a client dir (the `~/mock_node_home_dir` argument) so you can reuse it without having to copy the state again. Note that the start height must be the last block of an epoch. Once you have the source dir already set up, you can run the command without `--start_height`: ```console $ cargo r -r -p mock-node -F mock_node -- ~/.near ~/mock_node_home_dir --target-height 60926000 ``` Without `--start_height`, the binary will not modify the client home dir before starting the mock node. Therefore, the mock node will start from the chain head stored in the client dir. ## Mock Network Configuration Certain details around how the mock network behaves can be configured with the file `mock.json` in the chain history home directory. Currently, the only supported configuration options tell how long to wait before replying to requests (the same as the --network_delay flag), and how often to send unrequested blocks and chunk part requests. By default, no such unrequested messages are sent, but the following config file will have the mock code produce unrequested blocks every 100 milliseconds, and chunk part requests every 50 milliseconds. ```json { "response_delay": { "secs": 0, "nanos": 100000000 }, "incoming_requests": { "block": { "interval": { "secs": 0, "nanos": 100000000 } }, "chunk_request": { "interval": { "secs": 0, "nanos": 50000000 } } } } ``` # Near Telemetry A small utility (TelemetryActor) that tries to send the telemetry (metrics) information as JSON over HTTP POST to a selected list of servers. Telemetry is sent from all the nearcore binaries (that have enabled it in config.json) - like validators, RPC nodes etc. The data that is sent over is of type TelemetryInfo, and is signed with the server's key. It contains info about the code (release version), server (cpu, memory and network speeds), and chain (node_id, status, peers connected, block height etc). TODO: add a pointer to the code that is used by the receiving server. # Overview This chapter holds various assorted bits of docs. If you want to document something, but don't know where to put it, put it here! ## Crate Versioning and Publishing While all the crates in the workspace are directly unversioned (`v0.0.0`), they all share a unified variable version in the [workspace manifest](Cargo.toml). This keeps versions consistent across the workspace and informs their versions at the moment of publishing. We also have CI infrastructure set up to automate the publishing process to crates.io. So, on every merge to master, if there's a version change, it is automatically applied to all the crates in the workspace and it attempts to publish the new versions of all non-private crates. All crates that should be exempt from this process should be marked `private`. That is, they should have the `publish = false` specification in their package manifest. This process is managed by [cargo-workspaces](https://github.com/pksunkara/cargo-workspaces), with a [bit of magic](https://github.com/pksunkara/cargo-workspaces/compare/master...miraclx:grouping-and-exclusion#files_bucket) sprinkled on top.
## Issue Labels Issue labels are of the following format: `<type>-<content>`, where `<type>` is a capital letter indicating the type of the label and `<content>` is a hyphenated phrase indicating what the label is about. For example, in the label `C-bug`, `C` means category and `bug` means that the label is about bugs. Common types include `C`, which means category, `A`, which means area, and `T`, which means team. An issue can have multiple labels including which area it touches, which team should be responsible for the issue, and so on. Each issue should have at least one label attached to it after it is triaged and the label could be a general one, such as `C-enhancement` or `C-bug`. # near-pool crate This crate holds TransactionPool (with PoolIterator), which is used to keep track of transactions that were not yet accepted into the blockchain. # near-vm-runner An engine that runs smart contracts compiled to Wasm. This is the main crate of the "contract runtime" part of nearcore. "Running smart contracts" is: - Wasm instrumentation for gas metering and various safety checks (`prepare.rs`). - Compiling Wasm to a particular VM representation (`cache.rs`). - Exposing blockchain-specific functionality to Wasm code. That is, defining a corresponding host function for each function in `near-vm-logic` (`imports.rs`). - Actual code execution (`wasmer_runner.rs`). A particular runtime used for Wasm execution is an implementation detail. At the moment we support Wasmer 0.x, Wasmer 2.0 and Wasmtime, with Wasmer 2.0 being the default. The primary client of Wasm execution services is the blockchain proper. The second client is the contract sdk tooling. vm-runner provides an additional API for contract developers to, for example, get a gas costs breakdown. See the [FAQ](FAQ.md) document for a discussion of high-level design constraints. ## Entry Point The entry point is the `runner::run` function. ## Testing There's a bunch of unit tests in this crate. You can run them with ```console $ cargo t -p near-vm-runner --features wasmer0_vm,wasmer2_vm,wasmtime_vm ``` The tests use either short wasm snippets specified inline, or a couple of larger test contracts from the `near-test-contracts` crate. We also have a fuzzing setup: ```console $ cd runtime/near-vm-runner && RUSTC_BOOTSTRAP=1 cargo fuzz run runner ``` ## Profiling The `tracing` crate is used to collect Rust code profile data via manual instrumentation. If you want to know how long a particular function executes, use the following pattern: ```ignore fn compute_thing() { let _span = tracing::debug_span!(target: "vm", "compute_thing").entered(); for i in 0..99 { do_work() } } ``` This will record when the `_span` object is created and dropped, including the time diff between the two events. To get a human readable output out of these events, you can use the built-in tracing subscriber: ```ignore tracing_subscriber::fmt::Subscriber::builder() .with_max_level(tracing::level_filters::LevelFilter::DEBUG) .with_span_events(tracing_subscriber::fmt::format::FmtSpan::CLOSE) .init(); code_to_profile_here(); ``` Alternatively, there's a hierarchical profiler: ```ignore tracing_span_tree::span_tree().enable(); code_to_profile_here(); ``` The result would look like this: ```text 112.33ms deserialize_wasmer 2.64ms run_wasmer/instantiate 96.34µs run_wasmer/call 123.15ms run_wasmer 123.17ms run_vm ``` # Shardnet tools ## restake.py Manages restaking of shardnet network participants. Uses `restaked` to regularly restake if a node is kicked.
Runs `restaked` on each of the remote machines. Gets the `restaked` binary from AWS. Optionally creates accounts for the remote nodes, but this requires the public and private keys of the account `near`. ## Example ``` python3 tests/shardnet/restake.py --delay-sec 60 --near-pk $NEAR_PUBLIC_KEY --near-sk $NEAR_PRIVATE_KEY ``` NEAR Indexer Simple Logger Example ================================== This is an example project featuring the [NEAR Indexer Framework](https://github.com/nearprotocol/nearcore/tree/master/chain/indexer). This Indexer prints out all the blocks, chunks, transactions, receipts, execution outcomes, and state changes block by block as soon as each block gets finalized in the network. Refer to the NEAR Indexer Framework README to learn how to run this example. # Runtime Parameter Estimator Warehouse A wrapper application around a SQLite database. SQLite uses a single file to store its data and only requires minimal tools to be installed. The warehouse acts as a middleman between the output of the parameter estimator and analytic tools that work with the data. Type `cargo run -- help` for an up-to-date list of available commands and their documentation. ## Examples ### estimator-warehouse import ``` $ target/release/runtime-params-estimator --json-output --metric time --iters 5 --warmup-iters 1 --costs WriteMemoryBase \ | target/release/estimator-warehouse import --commit-hash `git rev-parse HEAD` ``` ### estimator-warehouse stats ``` $ cargo run -- --db $SQLI_DB stats ========================= Warehouse statistics ========================= metric records last updated ------ ------- ------------ icount 163 2022-03-23 15:50:58 time 48 2022-03-23 11:14:00 parameter 0 never ============================== END STATS =============================== ``` ### estimator-warehouse check ``` $ cargo run -- --db $SQLI_DB check --metric time RelativeChange(RelativeChange { estimation: "WriteMemoryBase", before: 191132060000.0, after: 130098178000.0 }) ``` # Continuous Estimation This folder contains some scripts for automated parameter estimation and tracking of the results. ## How can I observe results? 1. Check [Zulip # pagoda/contract-runtime/ce](https://near.zulipchat.com/#narrow/stream/319057-pagoda.2Fcontract-runtime.2Fce) for significant changes in gas cost estimations on the master branch. 1. Browse [near.github.io/parameter-estimator-reports](https://near.github.io/parameter-estimator-reports) for a history of gas cost estimations and how it compares to protocol parameters. ## Understanding the Data flow 1. The estimator produces JSON output with gas costs and extra details. 1. The JSON output is fed to the `estimator-warehouse`, which is a wrapper around an SQLite database file. This file is stored as a buildkite artifact. 1. The estimator-warehouse pushes notifications to Zulip. 1. (TODO[jakmeier]) The estimator-warehouse pushes JSON reports to near/parameter-estimator-reports. 1. (TODO[jakmeier]) A vanilla JavaScript frontend reads the JSON files hosted by GitHub Pages and displays them at [near.github.io/parameter-estimator-reports](https://near.github.io/parameter-estimator-reports). ## Running in CI TODO[jakmeier]: Install a daily buildkite job and document the necessary steps to prepare the full environment. ## Running locally Use `cargo run -- estimate` to run estimations on the current version in your working directory. Then use [estimator-warehouse](../estimator-warehouse) to interact with the data.
## Configuration The script running estimations can be configured with environment variables that control where it stores the estimated data and how it runs: * SQLI_DB="/path/to/db.sqlite" * ESTIMATOR_NEAR_HOME="/path/to/near/home" * Use this if a persistent near state should be used. Useful for testing with large stores. But make sure the deployed test contracts are up-to-date. * REPO_UNDER_TEST="/path/to/another/repository" * If you want to run the estimator on a repository clone other than the current directory. Useful to run estimation on older commits, which do not have the continuous estimation scripts. # near-chunk-primitives This crate hosts NEAR chunks-related error types. # Storage delta calculator A small tool to compare the actual storage use with the one saved within the state. Useful as a sanity check if we do any updates (as storage has an impact on the amount of tokens that we lock in people's accounts). WARNING: This was built as a one-time tool - to debug the issue during migration in May 2021 - and is no longer maintained. # near-chain-primitives This crate hosts NEAR chain-related error types. # Overview This document describes the high-level architecture of nearcore. The focus here is on the implementation of the blockchain protocol, not the protocol itself. For reference documentation of the protocol, please refer to [nomicon](https://nomicon.io/). Some parts of our architecture are also covered in this [video series on YouTube](https://www.youtube.com/playlist?list=PL9tzQn_TEuFV4qlts0tVgndnytFs4QSYo). ## Bird's Eye View If we put the entirety of nearcore onto one picture, we get something like this: ![](../images/architecture.svg) Don't worry if this doesn't yet make a lot of sense: hopefully, by the end of this document the above picture will become much clearer! ## Overall Operation `nearcore` is a blockchain node -- it's a single binary (`neard`) which runs on some machine and talks to other similar binaries running elsewhere. Together, the nodes agree (using a distributed consensus algorithm) on a particular sequence of transactions. Once the transaction sequence is established, each node applies the transactions to the current state. Because transactions are fully deterministic, each node in the network ends up with an identical state. To allow greater scalability, the NEAR protocol uses sharding, which allows a node to hold only a small subset (shard) of the whole state. `neard` is a stateful, restartable process. When `neard` starts, the node connects to the network and starts processing blocks (a block is a batch of transactions processed together; transactions are batched into blocks for greater efficiency). The results of processing are persisted in the database. RocksDB is used for storage. Usually, the node's data is found in the `~/.near` directory. The node can be stopped at any moment and be restarted later. While the node is offline it misses blocks, so, after a restart, the sync process kicks in which brings the node up-to-speed with the network by downloading the missing bits of history from more up-to-date peer nodes. Major components of nearcore: * **JSON RPC**. This HTTP RPC interface is how `neard` communicates with the non-blockchain outside world. For example, to submit a transaction, some client sends an RPC request with it to some node in the network. From that node, the transaction propagates through the network, until it is included in some block. Similarly, a client can send an HTTP request to a node to learn about the current state of the blockchain.
The **JSON RPC** interface is documented [here](https://docs.near.org/api/rpc/introduction). * **Network**. If RPC is aimed "outside" the blockchain, "network" is how peer `neard` nodes communicate with each other within the blockchain. RPC carries requests from users of the blockchain, while network carries various messages needed to implement consensus. Two directly connected nodes communicate by sending protobuf-encoded messages over TCP. A node also includes logic to route messages for indirect peers through intermediaries. Oversimplifying a lot, it's enough for a new node to know the IP address of just one other network participant. From this bootstrap connection, the node learns how to communicate with any other node in the network. * **Client**. Somewhat confusingly named, **client** is the logical state of the blockchain. After receiving and decoding a request, both **RPC** and **network** usually forward it in the parsed form to the **client**. Internally, **client** is split into two somewhat independent components: **chain** and **runtime**. * **Chain**. The job of **chain**, in a nutshell, is to determine a global order of transactions. **Chain** builds and maintains the blockchain data structure. This includes block and chunk production and processing, consensus, and validator selection. However, **chain** is not responsible for actually applying transactions and receipts. * **Runtime**. If **chain** selects the _order_ of transactions, **Runtime** applies transactions to the state. **Chain** guarantees that everyone agrees on the order and content of transactions, and **Runtime** guarantees that each transaction is fully deterministic. It follows that everyone agrees on the "current state" of the blockchain. Some transactions are as simple as "transfer X tokens from Alice to Bob". But a much more powerful class of transactions is supported: "run this arbitrary WebAssembly code in the context of the current state of the chain". Running such "smart contract" transactions securely and efficiently is a major part of what **Runtime** does. Today, **Runtime** uses a JIT compiler to do that. * **Storage**. **Storage** is more of a cross-cutting concern than an isolated component. Many parts of a node want to durably persist various bits of state to disk. One notable case is the logical state of the blockchain, and, in particular, data associated with each account. Logically, the state of an account on a chain is a key-value map: `HashMap<Vec<u8>, Vec<u8>>`. But there is a twist: it should be possible to provide a succinct proof that a particular key indeed holds a particular value. To allow that, internally the state is implemented as a persistent (in both senses, "functional" and "on disk") merkle-patricia trie. * **Parameter Estimator**. One kind of transaction we support is "run this arbitrary, Turing-complete computation". To protect from a `loop {}` transaction halting the whole network, **Runtime** implements resource limiting: each transaction runs with a certain finite amount of "gas", and each operation costs a certain amount of gas to perform. **Parameter estimator** is essentially a set of benchmarks used to estimate relative gas costs of various operations. ## Entry Points `neard/src/main.rs` contains the main function that starts a blockchain node. However, this file mostly only contains the logic to parse arguments and dispatch different commands. `start_with_config` in `nearcore/src/lib.rs` is the actual entry point and it starts all the actors.
`JsonRpcHandler::process` in the `jsonrpc` crate is the RPC entry point. It implements the public API of a node, which is documented [here](https://docs.near.org/api/rpc/introduction). `PeerManagerActor::spawn` in the `network` crate is the entry point for the other point of contact with the outside world -- the peer-to-peer network. `Runtime::apply` in the `runtime` crate is the entry point for transaction processing logic. This is where state transitions actually happen, after chain has decided, according to distributed consensus, which transitions need to happen. ## Code Map This section contains a high-level overview of important crates and data structures. ### `core/primitives` This crate contains most of the types that are shared across different crates. ### `core/primitives-core` This crate contains types needed for the runtime. ### `core/store/trie` This directory contains the MPT state implementation. Note that we usually use `TrieUpdate` to interact with the state. ### `chain/chain` This crate contains most of the chain logic (consensus, block processing, etc). `ChainUpdate::process_block` is where most of the block processing logic happens. **Architecture Invariant**: the interface between chain and runtime is defined by `RuntimeWithEpochManagerAdapter`. All invocations of the runtime go through `RuntimeWithEpochManagerAdapter`. **State update** The blockchain state of a node can be changed in the following two ways: * Applying a chunk. This is how the state is normally updated: through `Runtime::apply`. * State sync. State sync can happen in two cases: * A node is far enough behind the most recent block and triggers state sync to fast forward to the state of a very recent block without having to apply blocks in the middle. * A node is about to become a validator for some shard in the next epoch, but it does not yet have the state for that shard. In this case, it would run state sync through the `catchup` routine. ### `chain/chunks` This crate contains most of the sharding logic which includes chunk creation, distribution, and processing. `ShardsManager` is the main struct that orchestrates everything here. ### `chain/client` This crate defines two important structs, `Client` and `ViewClient`. `Client` includes everything necessary for the chain (without network and runtime) to function and runs in a single thread. `ViewClient` is a "read-only" client that answers queries without interfering with the operations of `Client`. `ViewClient` runs in multiple threads. ### `chain/network` This crate contains the entire implementation of the p2p network used by NEAR blockchain nodes. Two important structs here: `PeerManagerActor` and `Peer`. Peer manager orchestrates all the communications from network to other components and from other components to network. `Peer` is responsible for low-level network communications from and to a given peer. Peer manager runs in one thread while each `Peer` runs in its own thread. <!--TODO: Maybe add more clarification about what Peer is? --> **Architecture Invariant**: Network communicates with `Client` through `NetworkClientMessages` and with `ViewClient` through `NetworkViewClientMessages`. Conversely, `Client` and `ViewClient` communicate with the network through `NetworkRequests`. ### `chain/epoch_manager` This crate is responsible for determining validators and other epoch-related information such as the epoch id for each epoch.
**Note**: `EpochManager` is constructed in `NightshadeRuntime` rather than in `Chain`, partially because we had this idea of making epoch manager a smart contract. ### `chain/jsonrpc` This crate implements [JSON-RPC](https://www.jsonrpc.org/) API server to enable submission of new transactions and inspection of the blockchain data, the network state, and the node status. When a request is processed, it generates a message to either `ClientActor` or `ViewClientActor` to interact with the blockchain. For queries of blockchain data, such as block, chunk, account, etc, the request usually generates a message to `ViewClientActor`. Transactions, on the other hand, are sent to `ClientActor` for further processing. ### `runtime/runtime` This crate contains the main entry point to runtime -- `Runtime::apply`. This function takes `ApplyState`, which contains necessary information passed from chain to runtime, a list of `SignedTransaction` and a list of `Receipt`, and returns a `ApplyResult`, which includes state changes, execution outcomes, etc. **Architecture Invariant**: The state update is only finalized at the end of `apply`. During all intermediate steps state changes can be reverted. ### `runtime/near-vm-logic` `VMLogic` contains all the implementations of host functions and is the interface between runtime and wasm. `VMLogic` is constructed when runtime applies function call actions. In `VMLogic`, interaction with NEAR blockchain happens in the following two ways: * `VMContext`, which contains lightweight information such as current block hash, current block height, epoch id, etc. * `External`, which is a trait that contains functions to interact with blockchain by either reading some nontrivial data, or writing to the blockchain. ### `runtime/near-vm-runner` `run` function in `runner.rs` is the entry point to the vm runner. This function essentially spins up the vm and executes some function in a contract. It supports different wasm compilers including wasmer0, wasmer2, and wasmtime through compile-time feature flags. Currently we use wasmer0 and wasmer2 in production. The `imports` module exposes host functions defined in `near-vm-logic` to WASM code. In other words, it defines the ABI of the contracts on NEAR. ### `neard` As mentioned before, `neard` is the crate that contains that main entry points. All the actors are spawned in `start_with_config`. It is also worth noting that `NightshadeRuntime` is the struct that implements `RuntimeWithEpochManagerAdapter`. <!-- TODO: Maybe add RuntimeWithEpochManagerAdapter mention or explanation in runtime/runtime chapter? --> ### `core/store/src/db.rs` This file contains the schema (DBCol) of our internal RocksDB storage - a good starting point when reading the code base. ## Cross Cutting Concerns ### Observability The [tracing](https://tracing.rs) crate is used for structured, hierarchical event output and logging. We also integrate [Prometheus](https://prometheus.io) for light-weight metric output. See the [style](./style.md) documentation for more information on the usage. ### Testing Rust has built-in support for writing unit tests by marking functions with the `#[test]` directive. Take full advantage of that! Testing not only confirms that what was written works the way it was intended to but also helps during refactoring since it catches unintended behaviour changes. Not all tests are created equal though and while some may only need milliseconds to run, others may run for several seconds or even minutes. 
Tests that take a long time should be marked as such with an `expensive_tests` feature, for example:

```rust
#[test]
#[cfg_attr(not(feature = "expensive_tests"), ignore)]
fn test_catchup_random_single_part_sync() {
    test_catchup_random_single_part_sync_common(false, false, 13)
}
```

Such tests will be ignored by default and can be executed by using the `--ignored` or `--include-ignored` flag, as in `cargo test -- --ignored`, or by compiling the tests with the `expensive_tests` feature enabled.

Because expensive tests are not run by default, they are also not run in CI. Instead, they are run nightly and need to be explicitly included in the `nightly/expensive.txt` file; for example:

```text
expensive --timeout=1800 near-client near_client tests::catching_up::test_catchup_random_single_part_sync
expensive --timeout=1800 near-client near_client tests::catching_up::test_catchup_random_single_part_sync --features nightly
```

For more details regarding nightly tests see `nightly/README.md`.

Note that what counts as a slow test isn't exactly defined as of now. If it takes just a couple of seconds, then it's probably fine. Anything slower should probably be classified as an expensive test. In particular, if libtest complains that the test takes more than 60 seconds, then it definitely is an expensive test.

# near-vm-logic

This crate implements the specification of the interface that the NEAR blockchain exposes to smart contracts. It is not dependent on the specific way the smart contract code is executed, e.g. through Wasmer or whatnot, and therefore can be used for unit tests in smart contracts.

Note, this logic assumes little-endian byte ordering of the memory used by the smart contract.

# Run tests

`cargo test --features mocks`

# Runtime Parameters Estimator

Use this tool to measure the running time of elementary runtime operations that have associated fees.

1. Run the estimator

   ```bash
   cargo run --release --package runtime-params-estimator --features required --bin runtime-params-estimator -- --accounts-num 20000 --additional-accounts-num 200000 --iters 1 --warmup-iters 1 --metric time
   ```

   With the parameters above, the estimator will run relatively fast. Note the `--metric time` flag: it instructs the estimator to use wall-clock time for estimation, which is quick, but highly variable between runs and physical machines. To get more robust estimates, use these arguments:

   ```bash
   --accounts-num 20000 --additional-accounts-num 200000 --iters 1 --warmup-iters 1 \
       --docker --metric icount
   ```

   This will build and run the estimator inside a docker container, using QEMU to precisely count the number of executed instructions.

   We will be using different parameters to do the actual parameter estimation. The instructions in [`emu-cost/README.md`](./emu-cost/README.md) should be followed to get the real data.

2. The result of the estimator run is the `costs-$timestamp$.txt` file, which contains a human-readable representation of the costs. It can be compared with the `costs.txt` file in the repository, which contains the current costs we are using. Note that, at the moment, `costs.txt` is *not* the source of truth. Rather, the costs are hard-coded in the `Default` impl for `RuntimeConfig`. You can run `cargo run --package runtime-params-estimator --bin runtime-params-estimator -- --costs-file costs.txt` to convert the cost table into a `RuntimeConfig`.

3. **Continuous Estimation**: Take a look at [`continuous-estimation/README.md`](./continuous-estimation/README.md) to learn about the automated setup around the parameter estimator.
Note, if you use the plotting functionality you will need to install [gnuplot](http://gnuplot.info/) to see the graphs.

## Replaying IO traces

Compiling `neard` with `--features=io_trace` and then running it with `--record-io-trace=my_trace.log` produces a trace of all storage and database accesses. This trace can be replayed by the estimator. For now, this is only used to gather statistics, but the plan is that it will also give gas estimations based on replaying traces.

Example:

```
cargo run -p runtime-params-estimator -- replay my_trace.log cache-stats
  GET  193 Block  193 BlockHeader  101 BlockHeight  100 BlockInfo  2 BlockMisc
       11 CachedContractCode  98 ChunkExtra  95 Chunks  4 EpochInfo  98 IncomingReceipts
       30092 State
  SET  1 CachedContractCode
  DB GET        30987 requests for a total of 391093512 B
  DB SET            1 requests for a total of  10379357 B
  STORAGE READ  153001 requests for a total of   2523227 B
  STORAGE WRITE 151412 requests for a total of   2512012 B
  TRIE NODES    8878276 /375708 /27383  (chunk-cache/shard-cache/DB)
  SHARD CACHE   93.21% hit rate, 93.21% if removing 15 too large nodes from total
  CHUNK CACHE   95.66% hit rate, 99.69% if removing 375708 shard cache hits from total
```

For a list of all options, run `cargo run -p runtime-params-estimator -- replay --help`.

### IO trace tests

The test input files `./res/*.io_trace` have been generated based on real mainnet traffic.

```bash
cargo build --release -p neard --features=io_trace

for shard in 0 1 2 3
do
  target/release/neard \
    --record-io-trace=75220100-75220101.s${shard}.io_trace view-state \
    apply-range --start-index 75220100 --end-index 75220101 \
    --sequential --shard-id ${shard}
done
```

When running these commands, make sure to run with `--sequential` and with receipt prefetching disabled, or else the replaying modes that match requests to receipts will not work properly.

```js
// config.json
"store": {
  "enable_receipt_prefetching": false
}
```

# Genesis Tools

* `genesis-populate` -- tool for creating a genesis state dump populated with a large number of accounts;
* TODO `genesis-rebase` -- tool for rebasing the entire chain to a new genesis;
* TODO `genesis-mainnet` -- tool for creating the main genesis used at the mainnet launch;

## `genesis-populate`

The performance of our node varies drastically depending on the size of the trie it operates with. As the baseline, we agreed to take a trie roughly equivalent to the trie of Ethereum (as of October 2019) in its complexity. We achieve this by populating the trie with 80M accounts and uploading a smart contract to each of them. We also make sure the trie does not compress the subtrees due to similar account names. We then use this trie for benchmarking, loadtesting, and estimating system parameters.

To start a node with 20k accounts, first create configs:

```bash
cargo run --package neard --bin neard -- init --test-seed=alice.near --account-id=test.near --fast
```

Then create a state dump with as many accounts as you want:

```bash
cargo run --package genesis-populate --bin genesis-populate -- --additional-accounts-num=20000
```

This will produce `state_dump` and `genesis_roots` files in the home directory of the node. You can also use `--home` on all commands here to specify an absolute path to a home directory to use.

Then start a single local node:

```bash
cargo run --package neard --bin neard -- run --boot-nodes=
```

# Cold Storage testing tool

## Workflow

Start by trying something on localnet, then move on to testing your code on a dedicated machine with real archival data.
Ideally, on the archival machine we only need to do every step before experimenting once. But accidents happen, and we should be mindful of the time it takes us to recover from them.

### Localnet

#### Prepare data

- Run `neard init` / `neard localnet`.
- Change your config (`--home-dir`/config.json) to keep a larger number of epochs before garbage collection. By default `gc_num_epochs_to_keep` is 5, but you can change it to 1000, for example. An epoch lasts 500 blocks on localnet (as specified in `--home-dir`/genesis.json). That will give you a sort of archival storage of at most 500'000 blocks.

#### Produce blocks

Start the localnet node as usual with `neard run`. Run the node WITHOUT the `--archive` parameter. In archive mode the node will not save `TrieChanges`, and they are needed to copy blocks to cold storage.

You can monitor the current height and make sure that we didn't cross the `gc_num_epochs_to_keep` epoch bound. Otherwise, garbage collection will start, and we cannot copy a block that has been garbage collected.

#### Migrate and experiment

Migration can be done block by block, because we have `TrieChanges` from the very start.

### Real archival data

#### Prepare machine

**TODO**

#### Prepare data

- Download a snapshot of an archival db to the machine.
- Create a patched binary that overrides `save_trie_changes: false` for archival storage.

#### Produce blocks

Run that binary for at least `gc_num_epochs_to_keep` epochs. Epochs are larger on testnet/mainnet, so just give it a few days. You can use `sudo systemctl start neard` to run `/home/ubuntu/neard` and `journalctl -u neard` to check logs. Be careful about which binary is at `/home/ubuntu/neard`.

**TODO** some kind of system to maintain a bunch of local binaries.

#### Migrate

After the archival storage is populated with enough `TrieChanges`, it should not be experimented with. If we need more blocks, we should run a binary that differs in one and only one way -- it saves `TrieChanges`.

The archival storage should probably lie NOT in `/home/ubuntu/.neard/data`. That means that, unlike in real life, hot storage will not be the mutated archival storage, but rather a brand new one, originally populated by copying the archival storage. And we WILL need a semi-proper migration to start split storage. By semi-proper I mean everything but accurately changing the `State` column in hot storage. That is not needed to start experimenting.

## Subcommands

Before every subcommand the `NodeStorage` is opened with the `NearConfig` from `home_dir`.

### Open

Check that `NodeStorage::has_cold` is true.

### Head

Print the `HEAD` of cold storage, the `FINAL_HEAD` from hot storage and also the `HEAD` for hot storage. It is useful to check that some blocks have been copied/produced.

### CopyNextBlocks

Does `num_of_blocks` (1 by default) iterations of:

- Copy the block at height "cold HEAD + 1" to cold storage.
- Update the cold storage `HEAD`.

### (TODO) CopyAllBlocks

Initial population of cold storage, where we copy all cold columns to cold storage, plus set misc data like the genesis hash and head.

### (TODO) GCHotSimpleAll

Initial garbage collection of hot storage, where we just delete all the gc columns but `State` up to the head of cold storage.

### (TODO) GCState

Initial GC of `State` for hot storage.

### (TODO) CompareHotState

Takes hot storage and rpc storage, performs some manipulation using `TrieChanges` to make their tail and head match, and compares the `State` column (should be exactly the same).

# Gas Cost Parameters

Gas in NEAR Protocol solves two problems.

1. To avoid spam, validator nodes only perform work if a user's tokens are burned.
   Tokens are automatically converted to gas using the current gas price.
2. To synchronize shards, they must all produce chunks following a strict schedule of 1 second execution time. Gas is used to measure how heavy the workload of a transaction is, so that the number of transactions that fit in a block can be deterministically computed by all nodes.

In other words, each transaction costs a fixed amount of gas. This gas cost determines how much a user has to pay and how much time nearcore has to execute the transaction.

What happens if nearcore executes a transaction too slowly? Chunk production for the shard gets delayed, which delays block production for the entire blockchain, increasing latency and reducing throughput for everybody. If the chunk is really late, the block producer will decide to not include the chunk at all and insert an empty chunk instead. The chunk may be included in the next block.

By now, you probably wonder how we can know the time it takes to execute a transaction, given that validators use hardware of their choice. Getting these timings right is indeed a difficult problem. Or, flipping the problem around: assuming the timings are already known, we must implement nearcore such that it is guaranteed to operate within the given time constraints. How we tackle this is the topic of this chapter.

If you want to learn more about gas from a user perspective, [Gas basic concepts](https://docs.near.org/concepts/basics/transactions/gas), [Gas advanced concepts](https://docs.near.org/concepts/basics/transactions/gas-advanced), and [the runtime fee specification](https://nomicon.io/RuntimeSpec/Fees/) are good places to dig deeper.

## Hardware and Timing Assumptions

For timing to make sense at all, we must first define hardware constraints. The official hardware requirements for a validator are published on [near-nodes.io/validator/hardware](https://near-nodes.io/validator/hardware). They may change over time, but the main principle is that a moderately configured, cloud-hosted virtual machine suffices.

For our gas computation, we assume the minimum required hardware. Then we define 10<sup>15</sup> gas to be executed in at most 1s. We commonly use 1 Tgas (= 10<sup>12</sup> gas) in conversation, which corresponds to 1ms of execution time.

Obviously, this definition means that a validator running more powerful hardware will execute the transactions faster. That is perfectly okay; as far as the protocol is concerned, we just need to make sure the chunk is available in time. If it is ready even faster, no problem.

Less obviously, this means that even a minimally configured validator is often idle. Why is that? Well, the hardware must be prepared to execute chunks that are always full. But that is rarely the case, as the gas price increases exponentially when chunks are full, which eventually pushes traffic back down.

Furthermore, the hardware has to be ready for transactions of all types, including transactions chosen by a malicious actor selecting only the most complex transactions. Those transactions can also be unbalanced in which bottlenecks they hit. For example, a chunk can be filled with transactions that fully utilize the CPU's floating point units. Or they could be using all the available disk IO bandwidth.

Because the minimum required hardware needs to meet the timing requirements for any of those scenarios, the typical, more balanced case is usually computed faster than the gas rule states.
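To make the 10<sup>15</sup> gas ≈ 1 s rule easy to reason about, here is a small back-of-the-envelope sketch. The constant and function names are made up for illustration and are not parameters from the nearcore codebase.

```rust
/// The protocol targets 10^15 gas per second of execution on the minimum
/// required hardware (see above), so 1 Tgas (10^12 gas) corresponds to
/// roughly 1 ms of execution time.
const GAS_PER_SECOND: u64 = 1_000_000_000_000_000; // 10^15
const TGAS: u64 = 1_000_000_000_000; // 10^12

/// Rough execution-time budget implied by a gas amount, in milliseconds.
fn gas_to_millis(gas: u64) -> f64 {
    gas as f64 / GAS_PER_SECOND as f64 * 1000.0
}

fn main() {
    println!("1 Tgas    ~ {:.3} ms", gas_to_millis(TGAS));            // ~1 ms
    println!("300 Tgas  ~ {:.1} ms", gas_to_millis(300 * TGAS));      // ~300 ms
    println!("10^15 gas ~ {:.0} ms", gas_to_millis(GAS_PER_SECOND));  // ~1000 ms
}
```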
## Transaction Gas Cost Model

A transaction is essentially just a list of actions to be executed on the same account. For example, it could be `CreateAccount` combined with `FunctionCall("hello_world")`. The [reference for available actions](https://nomicon.io/RuntimeSpec/Actions) shows the complete list of possible actions.

The protocol defines fixed fees for each of them. More details on [action fees](#action-costs) follow below. Fixed fees are an important design decision. It means that a given action will always cost the exact same amount of gas, no matter on what hardware it executes. But the content of the action can impact the cost, for example a `DeployContract` action's cost scales with the size of the contract code.

So, to be more precise, the protocol defines fixed gas cost *parameters* for each action, together with a formula to compute the gas cost for the action. All actions today either use a single fixed gas cost or they use a base cost and a linear scaling parameter. With one important exception, `FunctionCall`, which shall be discussed [further below](#fn-call-costs).

There is an entire section on [Parameter Definitions](./parameter_definition.md) that explains how to find the source of truth for parameter values in the nearcore repository, how they can be referenced in code, and what steps are necessary to add a new parameter.

Let us dwell a bit more on the linear scaling factors. The fact that the contract deployment cost, which includes code compilation, scales linearly limits the compiler to use only algorithms of linear complexity. Either that, or the parameters must be set to match the 1ms = 1 Tgas rule at the largest possible contract size. Today, we limit ourselves to linear-time algorithms in the compiler.

Likewise, an action that has no scaling parameters must only use constant time to execute. Taking the `CreateAccount` action as an example, with a cost of 0.1 Tgas, it has to execute within 0.1ms. Technically, the execution time depends ever so slightly on the account name length. But there is a fairly low upper limit on that length and it makes sense to absorb all the cost in the constant base cost.

This concept of picking parameters according to algorithmic complexity is key. If you understand this, you know how to think about gas as a nearcore developer.

This should be enough background to understand what the estimator does. The [runtime parameter estimator](./estimator.md) is a separate binary within the nearcore repository. It contains benchmarking-like code used to validate existing parameter values against the 1ms = 1 Tgas rule. When implementing new features, code should be added there to estimate the safe values of the new parameters. This section is for you if you are adding new features such as a new precompiled method or other host functions.

Next up are more details on the specific costs that occur when executing NEAR transactions, which helps to understand existing parameters and how they are organized.

## Action Costs

Actions are executed in two steps. First, an action is verified and inserted into an action receipt, which is sent to the receiver of the action. The `send` fee is paid for this. It is charged either in `fn process_transaction(..)` if the action is part of a fresh transaction, or inside [logic.rs](https://github.com/near/nearcore/blob/14b8ae2c7465444c9b672a23b044c00be98f6e34/runtime/near-vm-logic/src/logic.rs) through `fn pay_action_base(..)` if the action is generated by a function call.
The send fee is meant to cover the cost to validate an action and transmit it over the network. The second step is action execution. It is charged in `fn apply_action(..)`. The execution cost has to cover everything required to apply the action to the blockchain's state. These two steps are done on the same shard for local receipts. Local receipts are defined as those where the sender account is also the receiver, abbreviated as `sir` which stands for "sender is receiver". For remote receipts, which is any receipt where the sender and receiver accounts are different, we charge a different fee since sending between shards is extra work. Notably, we charge that extra work even if the accounts are on the same shard. In terms of gas costs, each account is conceptually its own shard. This makes dynamic resharding possible without user-observable impact. When the send step is performed, the minimum required gas to start execution of that action is known. Thus, if the receipt has not enough gas, it can be aborted instead of forwarding it. Here we have to introduce the concept of used gas. `gas_used` is different from `gas_burnt`. The former includes the gas that needs to be reserved for the execution step whereas the latter only includes the gas that has been burnt in the current chunk. The difference between the two is sometimes also called prepaid gas, as this amount of gas is paid for during the send step and it is available in the execution step for free. If execution fails, the prepaid cost that has not been burned will be refunded. But this is not the reason why it must burn on the receiver shard instead of the sender shard. The reason is to properly compute the gas limits on the chunk that does the execution work. In conclusion, each action parameter is split into three costs, `send_sir`, `send_not_sir`, and `execution`. Local receipts charge the first and last parameters, remote receipts charge the second and third. They should be estimated, defined, and charged separately. But the reality is that today almost all actions are estimated as a whole and the parameters are split 50/50 between send and execution cost, without discrimination on local vs remote receipts i.e. `send_sir` cost is the same as `send_not_sir`. The [Gas Profile](./gas_profile.md) section goes into more details on how gas costs of a transaction are tracked in nearcore. ## Dynamic Function Call Costs <a name="fn-call-costs"></a> Costs that occur while executing a function call on a deployed WASM app (a.k.a. smart contract) are charged only at the receiver. Thus, they have only one value to define them, in contrast to action costs. The most fundamental dynamic gas cost is `wasm_regular_op_cost`. It is multiplied with the exact number of WASM operations executed. You can read about [Gas Instrumentation](https://nomicon.io/RuntimeSpec/Preparation#gas-instrumentation) if you are curious how we count WASM ops. Currently, all operations are charged the same, although it could be more efficient to charge less for opcodes like `i32.add` compared to `f64.sqrt`. The remaining dynamic costs are for work done during host function calls. Each host function charges a base cost. Either the general `wasm_base` cost, or a specific cost such as `wasm_utf8_decoding_base`, or sometimes both. New host function calls should define a separate base cost and not charge `wasm_base`. 
Additional host-side costs can be scaled per input byte, such as `wasm_sha256_byte`, or be related to moving data between host and guest, or be any other cost that is specific to the host function. Each host function must clearly define what its costs are and how they depend on the input.

## Non-gas parameters

Not all runtime parameters are directly related to gas costs. Here is a brief overview.

- **Gas economics config**: Defines the conversion rate when purchasing gas with NEAR tokens and how gas rewards are split.
- **Storage usage config**: Costs in tokens, not gas, for storing data on chain.
- **Account creation config**: Rules for account creation.
- **Smart contract limits**: Rules for WASM execution.

None of the above define any gas costs directly. But there can be interplay between those parameters and gas costs. For example, the limits on smart contracts change the assumptions for how slow a contract compilation could be, hence they affect the deploy action costs.

Dynamic config helpers for the NEAR codebase. This crate contains utilities that allow reconfiguring the node while it is running.

## How to:

### Logging and tracing

Make changes to `log_config.json` and send a `SIGHUP` signal to the `neard` process.

### Other config values

Make changes to `config.json` and send a `SIGHUP` signal to the `neard` process.

#### Fields of config that can be changed while the node is running:

- `expected_shutdown`: the block height at which neard will gracefully shut down.

#### Changing other fields of `config.json`

The changes to other fields of `config.json` will be silently ignored as long as `config.json` remains a valid json object and passes internal validation. Please be careful about making changes to `config.json`, because when a node starts (or restarts), it checks the validity of the config files and crashes if it detects any issues.

# NEAR Indexer

NEAR Indexer is a micro-framework that provides you with a stream of blocks that are recorded on the NEAR network. It is useful for handling real-time "events" on the chain.

## Rationale

As scaling dApps enter NEAR's mainnet, an issue may arise: how do they quickly and efficiently access state from our deployed smart contracts, and cut out the cruft? Contracts may grow to have complex data structures, and querying the network RPC may not be the optimal way to access state data. The NEAR Indexer Framework allows for streams to be captured and indexed in a customized manner. The typical use case is for this data to make its way to a relational database. Seeing as this is custom per project, there is engineering work involved in using this framework.

NEAR Indexer is already in use for several new projects: we index all the events for NEAR Blockchain Explorer, and we also dig into Access Keys and index all of them for NEAR Wallet passphrase recovery and multi-factor authentication. With NEAR Indexer you can do high-level aggregation as well as low-level introspection of all the events inside the blockchain.

We are going to build more Indexers in the future, and will also consider building Indexer integrations with streaming solutions like Kafka, RabbitMQ, ZeroMQ, and NoSQL databases. Feel free to [join our discussions](https://github.com/nearprotocol/nearcore/issues/2996).

See the [example](https://github.com/nearprotocol/nearcore/tree/master/tools/indexer/example) for further technical details.
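To give a flavor of what consuming that stream looks like, here is a rough sketch of a block-listening loop in the spirit of the example linked above. It assumes an indexer was constructed (e.g. via `near_indexer::Indexer::new(config)`) and its streamer handed over as a channel receiver; the exact types, config fields, and method names vary between versions of the crate, so treat this as an illustration rather than a drop-in snippet.

```rust
use tokio::sync::mpsc;

// Illustrative consumer loop; `near_indexer::StreamerMessage` and the
// `block.header.height` field follow the bundled example tool, but may
// differ in the version you are using.
async fn listen_blocks(mut stream: mpsc::Receiver<near_indexer::StreamerMessage>) {
    // One message arrives per block, bundling the block header together
    // with its shards, chunks, transactions, receipts, and outcomes.
    while let Some(streamer_message) = stream.recv().await {
        println!(
            "Received block at height {}",
            streamer_message.block.header.height
        );
        // Custom handling goes here: filter the receipts you care about,
        // write rows to a relational database, publish to a queue, etc.
    }
}
```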
## How to set up and test NEAR Indexer Before you proceed, make sure you have the following software installed: * [rustup](https://rustup.rs/) or Rust version that is mentioned in `rust-toolchain` file in the root of nearcore project. ### localnet Clone [nearcore](https://github.com/nearprotocol/nearcore) To run the NEAR Indexer connected to a network we need to have configs and keys prepopulated. To generate configs for localnet do the following ```bash $ git clone git@github.com:nearprotocol/nearcore.git $ cd nearcore/tools/indexer/example $ cargo run --release -- --home-dir ~/.near/localnet init ``` The above commands should initialize necessary configs and keys to run localnet in `~/.near/localnet`. ```bash $ cargo run --release -- --home-dir ~/.near/localnet/ run ``` After the node is started, you should see logs of every block produced in your localnet. Get back to the code to implement any custom handling of the data flowing into the indexer. Use [near-shell](https://github.com/near/near-shell) to submit transactions. For example, to create a new user you run the following command: ```bash $ NEAR_ENV=local near --keyPath ~/.near/localnet/validator_key.json \ create_account new-account.test.near --masterAccount test.near ``` ### testnet / betanet To run the NEAR Indexer connected to testnet or betanet we need to have configs and keys prepopulated, you can get them with the NEAR Indexer Example like above with a little change. Follow the instructions below to run non-validating node (leaving account ID empty). ```bash $ cargo run --release -- --home-dir ~/.near/testnet init --chain-id testnet --download ``` The above code will download the official genesis config and generate necessary configs. You can replace `testnet` in the command above to different network ID `betanet`. **NB!** According to changes in `nearcore` config generation we don't fill all the necessary fields in the config file. While this issue is open <https://github.com/nearprotocol/nearcore/issues/3156> you need to download config you want and replace the generated one manually. - [testnet config.json](https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/testnet/config.json) - [betanet config.json](https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/betanet/config.json) - [mainnet config.json](https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/mainnet/config.json) Replace `config.json` in your `--home-dir` (e.g. `~/.near/testnet/config.json`) with downloaded one. Configs for the specified network are in the `--home-dir` provided folder. We need to ensure that NEAR Indexer follows all the necessary shards, so `"tracked_shards"` parameters in `~/.near/testnet/config.json` needs to be configured properly. For example, with a single shared network, you just add the shard #0 to the list: ```text ... "tracked_shards": [0], ... ``` Hint: See the Tweaks section below to learn more about further configuration options. After that we can run NEAR Indexer. ```bash $ cargo run --release -- --home-dir ~/.near/testnet run ``` After the network is synced, you should see logs of every block produced in Testnet. Get back to the code to implement any custom handling of the data flowing into the indexer. ## Tweaks By default, nearcore is configured to do as little work as possible while still operating on an up-to-date state. 
Indexers may have different requirements, so there is no solution that would work for everyone, and thus we are going to provide you with the set of knobs you can tune for your requirements. As already has been mentioned above, the most common tweak you need to apply is listing all the shards you want to index data from; to do that, you should ensure that `"tracked_shards"` in the `config.json` lists all the shard IDs, e.g. for the current betanet and testnet, which have a single shard: ```json ... "tracked_shards": [0], ... ``` You can choose Indexer Framework sync mode by setting what to stream: - `LatestSynced` - Real-time syncing, always taking the latest finalized block to stream - `FromInterruption` - Starts syncing from the block NEAR Indexer was interrupted last time - `BlockHeight(u64)` - Specific block height to start syncing from Refer to `main()` function in [Indexer Example](https://github.com/nearprotocol/nearcore/blob/master/tools/indexer/example/src/main.rs) Indexer Framework also exposes access to the internal APIs (see `Indexer::client_actors` method), so you can fetch data about any block, transaction, etc, yet by default, nearcore is configured to remove old data (garbage collection), so querying the data that was observed a few epochs before may return an error saying that the data is not found. If you only need blocks streaming, you don't need this tweak, but if you need access to the historical data right from your Indexer, consider updating `"archive"` setting in `config.json` to `true`: ```json ... "archive": true, ... ``` ## Who is using NEAR Indexer? *This list is not exhaustive, feel free to submit your project by sending a pull request.* * [Indexer for NEAR Wallet](https://github.com/near/near-indexer-for-wallet) * [Indexer for NEAR Explorer](https://github.com/near/near-indexer-for-explorer) # `near-stable-hasher` `near-stable-hasher` is a library that is essentially a wrapper around, now deprecated, `std::hash::SipHasher`. Its purpose is to provide a stable hash function, which doesn't change depending on `rust_version`, `architecture`, `platform`, `time`, etc. In addition, note that `SipHasher` is deprecated since `Rust` `1.13.0`. Eventually `SipHasher` will be removed from `Rust`. We need to ensure, nothing breaks during this transition period. ## Structs This crate provides only one struct. See `StableHasher`. ### Example: ```rust fn test_stable_hasher() { let mut sh = StableHasher::new(); sh.write(&[1, 2, 3, 4, 5]); let finish = sh.finish(); assert_eq!(finish, 12661990674860217757) } ``` # near-vm-errors Error that can occur inside Near Runtime encapsulated in a separate crate. Might merge it later. # How neard works This chapter describes how neard works with a focus on implementation details and practical scenarios. To get a better understanding of how the protocol works, please refer to [nomicon](https://nomicon.io). For a high-level code map of nearcore, please refer to this [document](../). ## High level overview On the high level, neard is a daemon that periodically receives messages from the network and sends messages to peers based on different triggers. Neard is implemented using an [actor framework](https://en.wikipedia.org/wiki/Actor_model) called [actix](https://docs.rs/actix). **Note**: Using actix was decided in the early days of the implementation of nearcore and by no means represents our confidence in actix. On the contrary, we have noticed a number of issues with actix and are considering implementing an actor framework in house. 
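To make the actor model concrete before diving into the specific actors, here is a minimal, self-contained actix example, closely following the actix documentation's "calculator" example. It is a toy, not neard code; neard's actual actors and messages are described below.

```rust
use actix::prelude::*;

// A message, together with the type of the reply it expects.
#[derive(Message)]
#[rtype(result = "usize")]
struct Sum(usize, usize);

// An actor: some state plus message handlers, driven by a mailbox.
struct Calculator;

impl Actor for Calculator {
    type Context = Context<Self>;
}

// How the actor reacts to a `Sum` message.
impl Handler<Sum> for Calculator {
    type Result = usize;

    fn handle(&mut self, msg: Sum, _ctx: &mut Context<Self>) -> Self::Result {
        msg.0 + msg.1
    }
}

// Depending on the actix version, `#[actix_rt::main]` may be needed instead.
#[actix::main]
async fn main() {
    let addr = Calculator.start(); // spawn the actor
    let res = addr.send(Sum(10, 5)).await; // send a message, await the reply
    println!("sum: {}", res.unwrap());
}
```

In neard, the same pattern is used at a much larger scale: each of the actors below owns a slice of the node's state and communicates with the others exclusively through messages.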
There are several important actors in neard: * `PeerActor` - Each peer is represented by one peer actor and runs in a separate thread. It is responsible for sending messages to and receiving messages from a given peer. After `PeerActor` receives a message, it will route it to `ClientActor`, `ViewClientActor`, or `PeerManagerActor` depending on the type of the message. * `PeerManagerActor` - Peer Manager is responsible for receiving messages to send to the network from either `ClientActor` or `ViewClientActor` and routing them to the right `PeerActor` to send the bytes over the wire. It is also responsible for handling some types of network messages received and routed through `PeerActor`. For the purpose of this document, we only need to know that `PeerManagerActor` handles `RoutedMessage`s. Peer manager would decide whether the `RoutedMessage`s should be routed to `ClientActor` or `ViewClientActor`. * `ClientActor` - Client actor is the “core” of neard. It contains all the main logic including consensus, block and chunk processing, state transition, garbage collection, etc. Client actor is single threaded. * `ViewClientActor` - View client actor can be thought of as a read-only interface to **client**. It only accesses data stored in a node’s storage and does not mutate any state. It is used for two purposes: * Answering RPC requests by fetching the relevant piece of data from storage. * Handling some network requests that do not require any changes to the storage, such as header sync, state sync, and block sync requests. `ViewClientActor` runs in four threads by default but this number is configurable. ## Data flow within `neard` Flow for incoming messages: ![](https://user-images.githubusercontent.com/1711539/195619986-25798cde-8a91-4721-86bd-93fa924b483a.png) Flow for outgoing messages: ![](https://user-images.githubusercontent.com/1711539/195626792-7697129b-7f9c-4953-b939-0b9bcacaf72c.png) ## How neard operates when it is fully synced When a node is fully synced, the main logic of the node operates in the following way (the node is assumed to track all shards, as most nodes on mainnet do today): 1. A block is produced by some block producer and sent to the node through broadcasting. 2. The node receives a block and tries to process it. If the node is synced it presumably has the previous block and the state before the current block to apply. It then checks whether it has all the chunks available. If the node is not a validator node, it won’t have any chunk parts and therefore won’t have the chunks available. If the node is a validator node, it may already have chunk parts through chunk parts forwarding from other nodes and therefore may have already reconstructed some chunks. Regardless, if the node doesn’t have all chunks for all shards, it will request them from peers by parts. 3. The chunk requests are sent and the node waits for enough chunk parts to be received to reconstruct the chunks. For each chunk, 1/3 of all the parts <!-- TODO: Is 100 the number of all the parts or one third of all the parts? --> (100) is sufficient to reconstruct a chunk. If new blocks arrive while waiting for chunk parts, they will be put into a `OrphanPool`, waiting to be processed. If a chunk part request is not responded to within `chunk_request_retry_period`, which is set to 400ms by default, then a request for the same chunk part would be sent again. 4. After all chunks are reconstructed, the node processes the current block by applying transactions and receipts from the chunks. 
Afterwards, it will update the head according to the fork choice rule, which only looks at block height. In other words, if the newly processed block is of higher height than the current head of the node, the head is updated. 5. The node checks whether any blocks in the `OrphanPool` are ready to be processed in a BFS order and processes all of them until none can be processed any more. Note that a block is put into the `OrphanPool` if and only if its previous block is not accepted. 6. Upon acceptance of a block, the node would check whether it needs to run garbage collection. If it needs to, it would garbage collect two blocks worth of data at a time. The logic of garbage collection is complicated and could be found [here](./gc.md). 7. If the node is a validator node, it would start a timer after the current block is accepted. After `min_block_production_delay` which is currently configured to be 1.3s on mainnet, it would send an approval to the block producer of the next block (current block height + 1). The main logic is illustrated below: ![](https://user-images.githubusercontent.com/1711539/195635652-f0c7ebae-a2e5-423f-8e62-b853b815fcec.png) ## How neard works when it is synchronizing `PeerManagerActor` periodically sends a `NetworkInfo` message to `ClientActor` to update it on the latest peer information, which includes the height of each peer. Once `ClientActor` realizes that it is more than `sync_height_threshold` (which by default is set to 1) behind the highest height among peers, it starts to sync. The synchronization process is done in three steps: 1. Header sync. The node first identifies the headers it needs to sync through a [`get_locator`](https://github.com/near/nearcore/blob/279044f09a7e6e5e3f26db4898af3655dae6eda6/chain/*client/src/sync.rs#L332) calculation. This is essentially an exponential backoff computation that tries to identify commonly known headers between the node and its peers. Then it would request headers from different peers, at most `MAX_BLOCK_HEADER_HASHES` (which is 512) headers at a time. 2. After the headers are synced, the node would determine whether it needs to run state sync. The exact condition can be found [here](https://github.com/near/nearcore/blob/279044f09a7e6e5e3f26db4898af3655dae6eda6/chain/client/src/sync.rs#L458) but basically a node would do state sync if it is more than 2 epochs behind the head of the network. State sync is a very complex process and warrants its own section. We will give a high level overview here. 1. First, the node computes `sync_hash` which is the hash of the block that identifies the state that the node wants to sync. This is guaranteed to be the first block of the most recent epoch. In fact, there is a [check](https://github.com/near/nearcore/blob/279044f09a7e6e5e3f26db4898af3655dae6eda6/chain/chain/src/chain.rs#L4292) on the receiver side that this is indeed the case. The node would also request the block whose hash is `sync_hash` 2. The node [deletes basically all data (blocks, chunks, state) from its storage](https://github.com/near/nearcore/blob/279044f09a7e6e5e3f26db4898af3655dae6eda6/chain/chain/src/chain.rs#L1809). This is not an optimal solution, but it makes the implementation for combining state easier when there is no stale data in storage. 3. For the state of each shard that the node needs to download, it first requests a [header](https://github.com/near/nearcore/blob/279044f09a7e6e5e3f26db4898af3655dae6eda6/core/primitives/src/syncing.rs#L40) that contains some metadata the node needs to know about. 
Then the node computes the number of state parts it needs to download and requests those parts from different peers who track the shard. 4. After all parts are downloaded, the node [combines those state parts](https://github.com/near/nearcore/blob/279044f09a7e6e5e3f26db4898af3655dae6eda6/chain/client/src/client_actor.rs#L1877) and then [finalizes](https://github.com/near/nearcore/blob/279044f09a7e6e5e3f26db4898af3655dae6eda6/chain/chain/src/chain.rs#L3065) the state sync by applying the last chunk included in or before the sync block so that the node has the state after applying sync block to be able to apply the next block. 5. The node [resets heads](https://github.com/near/nearcore/blob/279044f09a7e6e5e3f26db4898af3655dae6eda6/chain/chain/src/chain.rs#L1874) properly after state sync. 3. Block Sync. The node first gets the block with highest height that is on the canonical chain and request from there `MAX_BLOCK_REQUESTS` (which is set to 5) blocks from different peers in a round robin order. The block sync routine runs again if head has changed (progress is made) or if a timeout (which is set to 2s) has happened. **Note**: when a block is received and its height is no more than 500 + the node’s current head height, then the node would request its previous block automatically. This is called orphan sync and helps to speed up the syncing process. If, on the other hand, the height is more than 500 + the node’s current head height, the block is simply dropped. <!-- TODO: Either this note is incorrect or the block processing diagram is. --> ## How `ClientActor` works ClientActor has some periodically running routines that are worth noting: * [Doomslug timer](https://github.com/near/nearcore/blob/fa78002a1b4119e5efe277c3073b3f333f451ffc/chain/client/src/client_actor.rs#L1198) - This routine runs every `doosmslug_step_period` (set to 100ms by default) and updates consensus information. If the node is a validator node, it also sends approvals when necessary. * [Block production](https://github.com/near/nearcore/blob/fa78002a1b4119e5efe277c3073b3f333f451ffc/chain/client/src/client_actor.rs#L991) - This routine runs every `block_production_tracking_delay` (which is set to 100ms by default) and checks if the node should produce a block. * [Log summary](https://github.com/near/nearcore/blob/fa78002a1b4119e5efe277c3073b3f333f451ffc/chain/client/src/client_actor.rs#L1790) - Prints a log line that summarizes block rate, average gas used, the height of the node, etc. every 10 seconds. * [Resend chunk requests](https://github.com/near/nearcore/blob/fa78002a1b4119e5efe277c3073b3f333f451ffc/chain/chunks/src/lib.rs#L910) - This routine runs every `chunk_request_retry_period` (which is set to 400ms). It resends the chunk part requests for those that are not yet responded to. * [Sync](https://github.com/near/nearcore/blob/fa78002a1b4119e5efe277c3073b3f333f451ffc/chain/client/src/client_actor.rs#L1629) - This routine runs every `sync_step_period` (which is set to 10ms by default) and checks whether the node needs to sync from its peers and, if needed, also starts the syncing process. * [Catch up](https://github.com/near/nearcore/blob/fa78002a1b4119e5efe277c3073b3f333f451ffc/chain/client/src/client_actor.rs#L1581) - This routine runs every `catchup_step_period` (which is set to 100ms by default) and runs the catch up process. This only applies if a node validates shard A in epoch X and is going to validate a different shard B in epoch X+1. 
In this case, the node would start downloading the state for shard B at the beginning of epoch X. After the state downloading is complete, it would apply all blocks in the current epoch (epoch X) for shard B to ensure that the node has the state needed to validate shard B when epoch X+1 starts. # Runtime test Framework for creating and executing runtime scenarios. You can create [`Scenario`] in rust code or load it from a JSON file. [`fuzzing`] module provides [`libfuzzer_sys::arbitrary::Arbitrary`] trait for [`Scenario`], thus enabling creating random scenarios. ## Scenario Runtime test is described by a [`Scenario`] object. Currently, scenario supports only one client, but you can specify number of accounts through [`NetworkConfig`]. Scenario can be loaded from a json file or constructed in rust code. ```ignore pub fn from_file(path: &Path) -> io::Result<Scenario>; ``` [`Scenario::run`] tries to produce all the described blocks and if succeeded returns [`run_test::RuntimeStats`] wrapped in a [`run_test::ScenarioResult`]. ```ignore pub fn run(&self) -> ScenarioResult<RuntimeStats, Error>; ``` [`run_test::RuntimeStats`] contain stats for every produced block. Currently, only block production time is supported. ```ignore #[derive(Serialize, Deserialize, Default, Debug)] pub struct RuntimeStats { pub blocks_stats: Vec<BlockStats>, } #[derive(Serialize, Deserialize, Default, Debug)] pub struct BlockStats { pub height: u64, pub block_production_time: Duration, } ``` [`run_test::ScenarioResult`] is a wrapper around a `Result` type which adds a `homedir` field: ```ignore pub struct ScenarioResult<T, E> { pub result: std::result::Result<T, E>, pub homedir: Option<tempfile::TempDir>, } ``` The `homedir` is populated if scenario is configured to use on-disk storage (i.e. if `use_in_memory_store` is `false`) and allows the caller to locate the store. Be careful to remember, that block height should be positive and ascending. ## Scenario Builder To easily create new scenarios in rust code use [`ScenarioBuilder`]. Usage example can be found in `src/scenario_builder.rs` file. # near-indexer-primitives This crate holds the types that is used in NEAR Indexer Framework to allow other projects to use them without a need to depend on entire `nearcore`. ## Core Resource Files Stores resource data which is part of the protocol stable enough to be moved outside of the code. ### `mainnet_genesis.json` Stores genesis of mainnet. ### `mainnet_restored_receipts.json` Stores receipts restored after the fix of applying chunks. See [#4248](https://github.com/near/nearcore/pull/4248) for more details. ### `storage_usage_delta.json` Stores difference of storage usage applied to mainnet after observed bug related to delete key action. See [#3824](https://github.com/near/nearcore/issues/3824) for more details. # Nearcore Debug UI ## How to Use Clone nearcore, go to this directory, run `npm install` (only needed for first time), and then ``` npm start ``` This will serve the UI at localhost:3000. Go to `http://localhost:3000/<RPC address>` to look at the debug UI of a near node. The RPC address can be either IP:port, or just IP (which will default to port 3030). ## How to deploy in production TBD. ## Development The code is written in TypeScript with the React framework. The one thing most unintuitive about React is React Hooks (the useState, useMemo, useCallback, useEffect, etc.) 
Understanding how hooks work is a **must**: https://reactjs.org/docs/hooks-intro.html

A few less-well-known hooks that are used often in this codebase:

* `useMemo(func, [deps])` (from core React): returns func(), but only recomputes func() if any deps change from the last invocation (by shallow equality of each dep).
* `useEffect(func, [deps])` (from core React): similar to useMemo, but instead of returning func(), just executes it, and func() is allowed to have side effects by mutating state (calling setXXX (that comes from `const [XXX, setXXX] = useState(...);`)).
* `useQuery([keys], () => promise)` (from react-query): returns `{data, error, isLoading}`, which represents fetching some data using the given promise. This is used to render asynchronously fetched data. While the data is loading, `isLoading` is true; if there is an error, `error` is truthy; and finally, when there is data, `data` is truthy. This can be used to then render each state accordingly. The keys given to the query are used to memoize the query, so that queries with the same keys are only fetched once.

It's also helpful to understand at a high level how the react-router library works; this is used to support deep-linking in the URL (e.g. `/127.0.0.1/cluster` leads to the cluster page), allowing the UI to be served as a single application.

# Exact gas price estimator

## Theory of operations

Operation execution cost (aka gas cost) is computed based on the number of userland x86 instructions required to perform the particular operation in the current NEAR runtime implementation. To compute this cost, we use an instrumented QEMU binary translation engine to execute the required operations in the userland Linux simulator. Thus, to measure the execution cost we have to compile the NEAR runtime benchmark for Linux, execute the benchmark under the instrumented QEMU running in Docker, and count how many x86 instructions are executed between the start and the end of execution.

Instrumentation of QEMU is implemented in the following manner. We install an instrumentation callback which conditionally increments the instruction counter on every instruction during translation by QEMU's JIT, TCG. We activate counting when a specific Linux syscall (currently 0, aka sys_read) is executed with certain arguments (file descriptor argument == 0xcafebabe or 0xcafebabf). On the start event we clear the instruction counter; on the stop event we stop counting and return the counted instructions into the buffer provided to the read syscall. As a result, the NEAR benchmark will know the exact instruction count between the two moments, and this value is a pure function of the Docker image used, the Rust compiler version and the NEAR implementation, and is fully reproducible.

## Usage

We build and run the cost estimator in a Docker container to make sure the configuration is fully reproducible. Please make sure that Docker is given at least 4G of RAM, as running under the emulator is rather resource-consuming. Note that for Mac the limit is configured in the desktop client, and the default value will most likely be very low.

First fetch the appropriate base image with `docker pull rust`. Then create a Docker image with `build.sh`; it will create a Docker image with the additional build dependencies.

Set the `HOST_DIR` environment variable to the local folder where the relevant sources are present. It will be mounted under `/host` in the Docker container.
Start container and build estimator with: host> ./run.sh docker> cd /host/nearcore docker> cd /host/nearcore/runtime/runtime-params-estimator docker> pushd ./test-contract && ./build.sh && popd docker> cargo build --release --package runtime-params-estimator --features required Now start the estimator under QEMU with the counter plugin enabled (note, that Rust compiler produces SSE4, so specify recent CPU): docker> ./emu-cost/counter_plugin/qemu-x86_64 -cpu Westmere-v1 -plugin file=./emu-cost/counter_plugin/libcounter.so \ ../../target/release/runtime-params-estimator --accounts-num 20000 --additional-accounts-num 200000 --iters 1 --warmup-iters 1 ### Notes * Estimation may take some time, as we execute instrumented code under the binary translator. * You may observe tangible differences between instructions number got by `params-estimator` and the actual number of instructions executed by production nodes. This is explained by the LTO (Link Time Optimization) which is disabled by default for release builds to reduce compilation time. To get better results, enable LTO via environment variable: CARGO_PROFILE_RELEASE_LTO=fat CARGO_PROFILE_RELEASE_CODEGEN_UNITS=1 export CARGO_PROFILE_RELEASE_LTO CARGO_PROFILE_RELEASE_CODEGEN_UNITS See [#4678](https://github.com/near/nearcore/issues/4678) for more details. * You also may observe slight differences in different launches, because number of instructions operating with disk cache is not fully determined, as well as weight of RocksDB operations. To improve estimation, you can launch it several times and take the worst result. ## IO cost calibration We need to calibrate IO operations cost to instruction counts. Technically instruction count and IO costs are orthogonal, however, as we measure our gas in instructions, we have to compute abstract scaling coefficients binding the number of bytes read/written in IO to instructions executed. We do that by computing following operation: ./emu-cost/counter_plugin/qemu-x86_64 -d plugin -cpu Westmere-v1 -plugin file=./emu-cost/counter_plugin/libcounter.so \ ../../target/release/genesis-populate --home /tmp/data --additional-accounts-num <NUM_ACCOUNTS> and checking how much data to be read/written depending on number of create accounts. Then we could figure out: * 1 account creation cost in instructions * 1 account creation cost in bytes read and written For example, experiments performed in mid Oct 2020 shown the following numbers: 10M accounts: * 6_817_684_914_212 instructions executed * 168_398_590_013 bytes read * 48_486_537_178 bytes written Thus 1 account approximately costs: * 681_768 instructions executed * 16840 bytes read * 4849 bytes written Let's presume that execution, read and write each takes following shares in account cost creation. * Execution: *3/6* * Read: *2/6* * Write: *1/6* Then we could conclude that: * 1 byte read costs 681768 * 2 / 3 / 16840 = 27 instructions * 1 byte written costs 681768 * 1 / 3 / 4849 = 47 instructions Thus, when measuring costs we set the operation cost to be: cost = number_of_instructions + bytes_read * 27 + bytes_written * 47 ## Optional: re-building QEMU and the instruction counter plugin We ship prebuilt QEMU and TCG instruction counter plugin, so in many cases one doesn't have to build it. However, in case you still want to build it - use the following steps. Important: we build QEMU and the TCG plugin inside the container, so execute following commands inside Docker. 
Set the environment variable HOST_DIR (on the host) to the location where both the QEMU and nearcore source code are checked out; it will be mounted as `/host` inside the Docker container. Start the container with:

    ./run.sh

To build QEMU use:

    cd /host/qemu
    ./configure --disable-system --enable-user --enable-plugins --prefix=/host/qemu-linux --target-list=x86_64-linux-user
    make && make install

Then build and test the QEMU's JIT plugin:

    cd /host/nearcore/runtime/runtime-params-estimator/emu-cost/counter_plugin
    cp /host/qemu-linux/bin/qemu-x86_64 ./
    make QEMU_DIR=/host/qemu
    make test

To execute commands in an already running container, first find its ID with:

    > docker ps
    CONTAINER ID   IMAGE        COMMAND               CREATED       STATUS       PORTS                  NAMES
    e9dcb52cc91b   ubuntu-emu   "/usr/bin/env bash"   2 hours ago   Up 2 hours   0.0.0.0:5000->22/tcp   reverent_carson

and then use the container ID for the `docker exec` command, like:

    docker exec -it e9dcb52cc91b /host/qemu-linux/bin/qemu-x86_64 -d plugin -plugin file=/host/qemu-linux/plugins/libcounter.so /host/nearcore/runtime/runtime-params-estimator/emu-cost/counter_plugin/test_binary

# Replay

The replay script is able to take a dumped genesis file and set up a new localnet directory for local replays. Once the localnet is running (launched separately), it can also send a dumped transaction trace, generated by `dump_tx`, to the localnet node.

Prerequisites:

* `[PREREQ1]` path of a dumped genesis file representing a state (generated by `dump_state`, docs: https://github.com/near/nearcore/blob/master/tools/state-viewer/README.md)
* `[PREREQ2]` path of a dumped transaction trace (generated by `dump_tx`, docs: https://github.com/near/nearcore/blob/master/tools/state-viewer/README.md)
* `[PREREQ3]` a localnet home path containing directory `node0/`, under which a valid `config.json` file exists for localnet use

In order to set up and launch a replay, we take the following steps:

Make sure you have the right environment variables:

```shell
export PYTHONPATH=./pytest/lib
```

First, build the binary:

```shell
cargo build -p neard --release
```

Second, run `generate`:

```shell
python3 pytest/tests/replay/replay.py generate --genesis [PREREQ1] --home-dir [PREREQ3]
```

Third, launch the localnet node using the outputs from `generate` under `--home-dir`. Hints are printed on the screen.

Fourth, run `send`:

```shell
python3 pytest/tests/replay/replay.py send --tx-json [PREREQ2] --home-dir [PREREQ3]
```

Now the localnet db contains a dense trace of the dumped txs, replayed on the dumped genesis.

# `neard view_state`

`neard view_state` is a tool that helps you look into the state of the blockchain, which includes:

* applying old blocks with a new version of the code or of the protocol
* generating a genesis file from the current state of the blockchain

## Functions

TODO: Fill out documentation for all available commands

### `apply_range`

Basic example:

```bash
make neard
./target/release/neard --home ~/.near/ view_state apply_range \
    --shard-id=0 --start-index=42376889 --end-index=423770101 \
    --verbose-output --csv-file=./apply_range.csv
```

This command will:

* build `neard` with link-time optimizations
* open the blockchain state at the location provided by `--home`
* for each block with height between `--start-index` and `--end-index`
  * run the `apply_transactions` function
  * print individual outcomes if `--verbose-output` is provided. Useful for finding and debugging differences in replaying the history.
  * print a csv file if `--csv-file` is provided.
The csv file contains per-block statistics such as the timestamp of the block, gas per block, and delayed receipts per block. Useful for debugging performance issues. Don't forget to sort your data before making charts using this data. If you want to re-apply all the blocks in the available blockchain then omit both the `--start-index` and `--end-index` flags. A missing `--start-index` means using the chain state starting from the genesis. A missing `--end-index` means using blocks up to the latest block available in the blockchain. Enable debug output to print extra details such as individual outcomes: ```bash ./target/release/neard view_state apply_range --verbose ... ``` To make more precise time estimations, enable the `--sequential` flag, which will also cause a slowdown proportional to the number of rayon threads. #### Running for the whole `mainnet` history As of today you need approximately 2TB of disk space for the whole history of `mainnet`, and the most practical way of obtaining this whole history is the following: * Patch <https://github.com/near/near-ops/pull/591> to define your own GCP instance in project `rpc-prod`. * Make sure to change `machine-name` and `role` to something unique. * Make a Pull Request and ask Mario (@mhalambek) or Sandi (@chefsale) for review. * Ask Mario or Sandi to grant you permissions to the GCP project `rpc-prod`. * Run `terraform init` and `terraform apply` to start an instance. This instance will have a running `neard` systemd service, with `/home/ubuntu/.near` as the home directory. Follow the `terraform` CLI [installation guide](https://learn.hashicorp.com/tutorials/terraform/install-cli) if needed. * SSH using `gcloud compute ssh <machine_name> --project=rpc-prod`. Follow the `gcloud` CLI [installation guide](https://cloud.google.com/sdk/docs/install) if needed. * It is recommended to run all the following commands as user `ubuntu`: `sudo su ubuntu`. * Install tools to be able to compile `neard`: * Install development packages: <https://near-nodes.io/validator/compile-and-run-a-node> * Install Rust: <https://rustup.rs/> * Clone the git repository: `git clone https://github.com/near/nearcore` * `make neard` * `sudo systemctl stop neard`, because a running node has a LOCK over the database. * Run `neard view_state` as described above * Enjoy #### Checking Predicates It's hard to know in advance which predicates will be of interest. If you want to check that none of the function calls uses more than X gas, feel free to add the check yourself. ### `view_chain` If called without arguments this command will print the block header of the tip of the chain, and the chunk extras for that block. Flags: * `--height` gets the block header and chunk extras for a block at a certain height. * `--block` displays contents of the block itself, such as timestamp, outcome_root, challenges, and many more. * `--chunk` displays contents of the chunk, such as transactions and receipts. ### `dump_state` Saves the current state of the network in a new genesis file. Flags: * `--height` takes state from the genesis up to and including the given height. By default, the tool dumps all available states. * `--account-ids`, if set, specifies the only accounts that will appear in the output genesis file, except for validators, who will always be included. Example: ```shell ./target/release/neard --home ~/.near/mainnet/ view_state dump_state --height 68874690 --account-ids near ``` ### `dump_tx` Saves all transactions of a range of blocks [start, end] to a file.
Flags: * `--start-height` specifies the start block by its height, inclusive. * `--end-height` specifies the end block by its height, inclusive. * `--account-ids` specifies the accounts as receivers of the transactions that need to be dumped. By default, all transactions will be dumped if this parameter is not set. Example: ```shell ./target/release/neard --home ~/.near/mainnet/ view_state dump_tx --start-height 68701890 --end-height 68701890 --account-ids near ``` ### `rocksdb_stats` Tool for measuring statistics of the store for each column: - number of entries - column size - total keys size - total values size Before running, install `sst_dump` tool as follows: ```shell git clone https://github.com/facebook/rocksdb.git cd rocksdb make sst_dump sudo cp sst_dump /usr/local/bin/ ``` Should take ~2m for RPC node and 45m for archival node as of 4 Jan 2022. #### Output List of statistics for each column sorted by column size. #### Running on macOS ```bash brew install --cask google-cloud-sdk export PATH=/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin:$PATH gcloud beta compute ssh --zone "europe-west4-a" "<machine>" --project "rpc-prod" ``` Check running instances at <https://console.cloud.google.com/compute/instances?project=rpc-prod> to see the machine name and datacenter. <br /> <br /> <p align="center"> <img src="docs/images/logo.svg" width="240"> </p> <br /> <br /> ## Reference implementation of NEAR Protocol ![Buildkite](https://img.shields.io/buildkite/0eae07525f8e44a19b48fa937813e2c21ee04aa351361cd851) ![Stable Status][stable-release] ![Prerelease Status][prerelease] [![codecov][codecov-badge]][codecov-url] [![Discord chat][discord-badge]][discord-url] [![Telegram Group][telegram-badge]][telegram-url] [stable-release]: https://img.shields.io/github/v/release/nearprotocol/nearcore?label=stable [prerelease]: https://img.shields.io/github/v/release/nearprotocol/nearcore?include_prereleases&label=prerelease [ci-badge-master]: https://badge.buildkite.com/a81147cb62c585cc434459eedd1d25e521453120ead9ee6c64.svg?branch=master [ci-url]: https://buildkite.com/nearprotocol/nearcore [codecov-badge]: https://codecov.io/gh/nearprotocol/nearcore/branch/master/graph/badge.svg [codecov-url]: https://codecov.io/gh/nearprotocol/nearcore [discord-badge]: https://img.shields.io/discord/490367152054992913.svg [discord-url]: https://near.chat [telegram-badge]: https://cdn.jsdelivr.net/gh/Patrolavia/telegram-badge@8fe3382b3fd3a1c533ba270e608035a27e430c2e/chat.svg [telegram-url]: https://t.me/cryptonear ## About NEAR NEAR's purpose is to enable community-driven innovation to benefit people around the world. To achieve this purpose, *NEAR* provides a developer platform where developers and entrepreneurs can create apps that put users back in control of their data and assets, which is the foundation of ["Open Web" movement][open-web-url]. One of the components of *NEAR* is the NEAR Protocol, an infrastructure for server-less applications and smart contracts powered by a blockchain. NEAR Protocol is built to deliver usability and scalability of modern PaaS like Firebase at fraction of the prices that blockchains like Ethereum charge. Overall, *NEAR* provides a wide range of tools for developers to easily build applications: - [JS Client library][js-api] to connect to NEAR Protocol from your applications. - [Rust][rust-sdk] and [AssemblyScript][as-sdk] SDKs to write smart contracts and stateful server-less functions. 
- [Numerous examples][examples-url] with links to hack on them right inside your browser. - [Lots of documentation][docs-url], with [Tutorials][tutorials-url] and [API docs][api-docs-url]. [open-web-url]: https://techcrunch.com/2016/04/10/1301496/ [js-api]: https://github.com/near/near-api-js [rust-sdk]: https://github.com/near/near-sdk-rs [as-sdk]: https://github.com/near/near-sdk-as [examples-url]: https://near.dev [docs-url]: https://docs.near.org [tutorials-url]: https://docs.near.org/tutorials/welcome [api-docs-url]: https://docs.near.org/api/rpc/introduction ## Join the Network The easiest way to join the network is by using the `nearup` command, which you can install as follows: ```bash pip3 install --user nearup ``` You can join all the active networks: * mainnet: `nearup run mainnet` * testnet: `nearup run testnet` * betanet: `nearup run betanet` Check the `nearup` repository for [more details](https://github.com/near/nearup) on how to run with or without docker. To learn how to become a validator, check out the [documentation](https://docs.near.org/docs/develop/node/validator/staking-and-delegation). ## Contributing The workflow and details of setup to contribute are described in [CONTRIBUTING.md](CONTRIBUTING.md), and the security policy is described in [SECURITY.md](SECURITY.md). To propose new protocol changes or standards, use the [Specification & Standards repository](https://github.com/nearprotocol/NEPs). ## Getting in Touch We use Zulip for semi-synchronous technical discussion, feel free to chime in: https://near.zulipchat.com/ For non-technical discussion and the overall direction of the project, see our Discourse forum: https://gov.near.org ## Fuzzing `near-account-id` ### Setup First, ensure [`cargo-fuzz`](https://github.com/rust-fuzz/cargo-fuzz) is installed: ```console cargo install cargo-fuzz ``` ### Execution There are two fuzzing targets available: one for [`serde`](https://github.com/serde-rs/serde) and another for [`borsh`](https://github.com/near/borsh-rs). You can run both tests with: ```console cd core/account-id/fuzz RUSTC_BOOTSTRAP=1 cargo fuzz run serde RUSTC_BOOTSTRAP=1 cargo fuzz run borsh ``` By default, each fuzz test runs indefinitely. To specify how many runs each test is allowed, you can use this: ```console RUSTC_BOOTSTRAP=1 cargo fuzz run serde -runs=1000000000 RUSTC_BOOTSTRAP=1 cargo fuzz run borsh -runs=1000000000 ``` A collection of smart contracts used in nearcore tests. Rust contracts are built via `build.rs`; the AssemblyScript contract is built manually and committed to the git repository. `res/near_evm.wasm` and `res/ZombieOwnership.bin` are taken from <https://github.com/near/near-evm/tree/a651e9be680b59cca9aad86c1f9e9e9b54ad9c06> and are used to reproduce a performance issue encountered in EVM contracts. If you want to use a contract from rust core, add ```toml [dev-dependencies] near-test-contracts = { path = "../near-test-contracts" } ``` to the Cargo.toml and use `near_test_contracts::rs_contract()`. If you want to use a contract from an integration test, you can read the wasm file directly from the `./res` directory. To populate `./res`, you need to make sure that this crate was compiled. The benchmarks in this directory use the mock node framework to define benchmarks that measure the time taken to sync from an empty home dir to a particular height in the chain defined by the sample home directory archives included here.
To run all the benchmarks: ```shell $ cargo bench -p mock-node -F mock_node ``` This will take quite a while though, as each iteration of the benchmark `mock_node_sync_full` takes several minutes, and it's run 10 times. To run just the quicker one: ```shell $ cargo bench -p mock-node -F mock_node -- mock_node_sync_empty ``` You can pretty easily define and run your own benchmark based on some other source home directory by creating a gzipped tar archive and moving it to, say, `tools/mock_node/benches/foo.tar.gz`, and modifying the code like so: ```diff --- a/tools/mock_node/benches/sync.rs +++ b/tools/mock_node/benches/sync.rs @@ -123,5 +123,9 @@ fn sync_full_chunks(c: &mut Criterion) { do_bench(c, "./benches/full.tar.gz", Some(100)) } -criterion_group!(benches, sync_empty_chunks, sync_full_chunks); +fn sync_foo_chunks(c: &mut Criterion) { + do_bench(c, "./benches/foo.tar.gz", Some(123)) +} + +criterion_group!(benches, sync_empty_chunks, sync_full_chunks, sync_foo_chunks); ``` Observability (o11y) helpers for the NEAR codebase. This crate contains all sorts of utilities to enable a more convenient observability implementation in the NEAR codebase. There are three infrastructures: * `tracing`, for structured, hierarchical logging of events (see the [`default_subscriber`] function in particular) * `metrics` -- convenience wrappers around Prometheus metrics, for reporting statistics. * `io-tracer` -- custom infrastructure for observing DB accesses in particular (mostly for the parameter estimator) # General principles 1. Every PR needs to have test coverage in place. Sending the code change and deferring tests for a future change is not acceptable. 2. Tests need to either be sufficiently simple to follow, or have good documentation to explain why certain actions are made and conditions are expected. 3. When implementing a PR, **make sure to run the new tests with the change disabled and confirm that they fail**! It is extremely common to have tests that pass without the change that is being tested. 4. The general rule of thumb for a reviewer is to first review the tests, and ensure that they can convince themselves that the code change that passes the tests must be correct. Only then should the code be reviewed. 5. Make the assertions in the tests as specific as possible, but do not make the tests change-detectors of the concrete implementation (assert only properties which are required for correctness). For example, do not do `assert!(result.is_err())`, expect the specific error instead. # Tests hierarchy In the NEAR Reference Client we largely split tests into three categories: 1. Relatively cheap sanity or fast fuzz tests. This includes all the `#[test]` Rust tests not decorated by features. Our repo is configured in such a way that all such tests are run on every PR, and failing at least one of them blocks the PR from being merged. To run such tests locally run `cargo nextest run --all`. It requires the nextest harness, which can be installed by running `cargo install cargo-nextest` first. 2. Expensive tests. This includes all the fuzzy tests that run many iterations, as well as tests that spin up multiple nodes and run them until they reach a certain condition. Such tests are decorated with `#[cfg(feature="expensive-tests")]`.
It is not trivial to enable features that are not declared in the top level crate, and thus the easiest way to run such tests is to enable all the features by passing `--all-features` to `cargo nextest run`, e.g: `cargo nextest run --package near-client --test cross_shard_tx tests::test_cross_shard_tx --all-features` 3. Python tests. We have an infrastructure to spin up nodes, both locally and remotely, in python, and interact with them using RPC. The infrastructure and the tests are located in `pytest` folder. The infrastructure is relatively straightforward, see for example `block_production.py` [here](https://github.com/nearprotocol/nearcore/blob/master/pytest/tests/sanity/block_production.py). See the `Test infrastructure` section below for details. Expensive and python tests are not part of CI, and are run by a custom nightly runner. The results of the latest runs are available [here](http://nightly.neartest.com/). With today tests runs launch approximately every 5-6 hours. For the latest results look at the **second** run, since the first one has some tests still scheduled to run. # Test infrastructure Different levels of the reference implementation have different infrastructure available to test them. ## Client Client is separated from the runtime via a `RuntimeWithEpochManagerAdapter` trait. In production it uses `NightshadeRuntime` that uses real runtime and epoch managers. To test client without instantiating runtime and epoch manager, we have a mock runtime `KeyValueRuntime`. Most of the tests in the client work by setting up either a single node (via `setup_mock()`) or multiple nodes (via `setup_mock_all_validators()`) and then launching the nodes and waiting for a particular message to occur, with a predefined timeout. For the most basic example of using this infrastructure see `produce_two_blocks` in [`tests/process_blocks.rs`](https://github.com/nearprotocol/nearcore/blob/master/chain/client/tests/process_blocks.rs). 1. The callback (`Box::new(move |msg, _ctx, _| { ...`) is what is executed whenever the client sends a message. The return value of the callback is sent back to the client, which allows testing relatively complex scenarios. The tests generally expect a particular message to occur, in this case the tests expects two blocks to be produced. `System::current().stop();` is the way to stop the test and mark it as passed. 2. `near_network::test_utils::wait_or_panic(5000);` is how the timeout for the test is set (in milliseconds). For an example of a test that launches multiple nodes, see `chunks_produced_and_distributed_common` in [tests/chunks_management.rs](https://github.com/nearprotocol/nearcore/blob/master/chain/client/tests/chunks_management.rs). The `setup_mock_all_validators` function is the key piece of infrastructure here. ## Runtime Tests for Runtime are listed in [tests/test_cases_runtime.rs](https://github.com/near/nearcore/blob/master/tests/test_cases_runtime.rs). To run a test, usually a mock `RuntimeNode` is created via `create_runtime_node()`. In its constructor the `Runtime` is created in the `get_runtime_and_trie_from_genesis` function. Inside a test an abstraction `User` is used for sending specific actions to the runtime client. The helper functions `function_call`, `deploy_contract`, etc. eventually lead to the `Runtime.apply` method call. For setting usernames during playing with transactions, use default names `alice_account`, `bob_account`, `eve_dot_alice_account`, etc. 
## Network TODO: explain the `runner` here ## Chain, Epoch Manager, Runtime and other low level changes When building new features in the `chain`, `epoch_manager`, `network`, make sure to build new components sufficiently abstract so that they can be tested without relying on other components. For example, see tests for doomslug [here](https://github.com/nearprotocol/nearcore/blob/master/chain/chain/tests/doomslug.rs), for network cache [here](https://github.com/nearprotocol/nearcore/blob/master/chain/network/tests/cache_edges.rs), or for promises in runtime [here](https://github.com/nearprotocol/nearcore/blob/master/runtime/near-vm-logic/tests/test_promises.rs). ## Python tests See [this page](https://github.com/nearprotocol/nearcore/wiki/Writing-integration-tests-for-nearcore) for a detailed coverage of how to write a python test. We have a python library that allows one to create and run python tests. To run python tests, from the `nearcore` repo the first time do the following: ``` cd pytest virtualenv . --python=python3 pip install -r requirements.txt . .env/bin/activate python tests/sanity/block_production.py ``` After the first time: ``` cd pytest . .env/bin/activate python tests/sanity/block_production.py ``` Use `pytest/tests/sanity/block_production.py` as the basic example of starting a cluster with multiple nodes, and doing RPC calls. See `pytest/tests/sanity/deploy_call_smart_contract.py` to see how contracts can be deployed, or transactions called. See `pytest/tests/sanity/staking1.py` to see how staking transactions can be issued See `pytest/tests/sanity/state_sync.py` to see how to delay the launch of the whole cluster by using `init_cluster` instead of `start_cluster`, and then launching nodes manually. ### Enabling adversarial behavior To allow testing adversarial behavior, or generally behaviors that a node should not normally exercise, we have certain features in the code decorated with `#[cfg(feature="adversarial")]`. The binary normally is compiled with the feature disabled, and when compiled with the feature enabled, it traces a warning on launch. The nightly runner runs all the python tests against the binary compiled with the feature enabled, and thus the python tests can make the binary perform actions that it normally would not perform. The actions can include lying about the known chain height, producing multiple blocks for the same height, or disabling doomslug. See all the tests under `pytest/tests/adversarial` for the examples. # Overview This chapter describes various development processes and best practices employed at nearcore. # Chain fetcher This binary takes a hash of a block as an input and then fetches from the network the following: 1. All block headers up to the newest header (or until block-limit is reached). 1. All blocks for the headers fetched. 1. All fragments of all chunks for all blocks fetched. The binary doesn't interpret the data it received (except for checking what it should fetch next), but rather discards it immediately. This way it is able to benchmark the raw throughput of the network from the point of view of a single node. Flags: * chain-id - the name of the chain. The binary fetches the config file of the chain automatically. The binary doesn't use the genesis file at all (it has the genesis file hashes hardcoded instead) TODO: add a flag for genesis file hash. * start-block-hash - the Base58 encoded block hash. The binary will fetch everything starting with this block up to the newest block (or until block-limit is reached). 
* qps-limit - maximum number of requests per second that the binary is allowed to send. This is a global limit (NOT per connection). The requests are distributed uniformly across all the connections that the program establishes. Peer discovery works the same way as for neard. * block-limit - number of blocks to fetch ## Example usage 1. Go to [https://explorer.testnet.near.org/blocks]. 1. Select the block from which you would like to start fetching. 1. Copy the hash of the block. 1. Run cargo run -- --chain-id=testnet --qps-limit=200 --block-limit=2000 --start-block-hash=<block hash> 1. First you will see that the program is establishing connections. 1. Once there are enough connections, it will start sending the requests. 1. Every few seconds you will see a log indicating the progress: * how many requests have been sent, how many responses have been received * how many headers/blocks/chunks are being fetched, how many have been successfully fetched. 1. Once everything is fetched, the program will print final stats, then "Fetch completed", and terminate. # Workflows This chapter documents various ways you can run `neard` during development: running a local net, joining a test net, doing benchmarking and load testing. # Rosetta API Extension for nearcore Rosetta is a public API spec defined to be a common denominator for blockchain projects. Rosetta RPC is built into nearcore and happily co-exists with JSON RPC. - [Rosetta Homepage](https://www.rosetta-api.org/docs/welcome.html) - [Rosetta API Specification](https://github.com/coinbase/rosetta-specifications) - [Rosetta Tooling](https://github.com/coinbase/rosetta-cli) You can view the Rosetta API specification in the [OpenAPI (Swagger) UI](https://petstore.swagger.io/) by passing it the link to the Rosetta OpenAPI specification: <https://raw.githubusercontent.com/coinbase/rosetta-specifications/master/api.json>. Also, the Rosetta implementation in nearcore exposes an auto-generated OpenAPI specification that has some extra comments regarding the particular implementation, and you can always access it from the running node at <http://localhost:3040/api/spec>. ## Supported Features Our current goal is to have a minimal yet feature-complete implementation of Rosetta RPC serving [the main use-case Rosetta was designed for](https://community.rosetta-api.org/t/what-is-rosetta-main-use-case/92/2), that is exposing balance-changing operations in a consistent way enabling reconciliation through tracking individual blocks and transactions. The Rosetta APIs are organized into two distinct categories, the Data API and the Construction API. Simply put, the Data API is for retrieving data from a blockchain network and the Construction API is for constructing and submitting transactions to a blockchain network.
| Feature | Status | | --- | --- | | Data API | Feature-complete with some quirks | | - `/network/list` | Done | | - `/network/status` | Done | | - `/network/options` | Done | | - `/block` | Feature-complete (exposes only balance-changing operations) | | - `/block/transaction` | Feature-complete (exposes only balance-changing operations and the implementation is suboptimal from the performance point of view) | | - `/account/balance` | Done (properly exposes liquid, liquid for storage, and locked (staked) balances through sub-accounts) | | - `/mempool` | Not implemented as mempool does not hold transactions for any meaningful time | | - `/mempool/transaction` | Not implemented (see above) | | Construction API | Done | | - `/construction/derive` | Done (used for implicit accounts) | | - `/construction/preprocess` | Done | | - `/construction/metadata` | Done | | - `/construction/payloads` | Done | | - `/construction/combine` | Done | | - `/construction/parse` | Done | | - `/construction/hash` | Done | | - `/construction/submit` | Done | ## API Compliance You can verify the API compliance against each network separately. Run the commands below to check the `Data` and `Construction` compliance mentioned in [Rosetta Testing](https://www.rosetta-api.org/docs/rosetta_test.html#run-the-tool). Each network has its own `.ros` and `.cfg` files that you can configure and run. ```bash rosetta-cli check:data --configuration-file=./rosetta-<mainnet|testnet|localnet>.cfg rosetta-cli check:construction --configuration-file=./rosetta-<mainnet|testnet|localnet>.cfg ``` ##### Localnet For `localnet` you can use the account `test.near` to run the tests. You should replace the `<privateKey>` value in `rosetta-localnet.cfg` with the `privateKey` of `test.near`, which you can find in `~/.near-credentials/local` in the `test.near.json` file. ```json ... "prefunded_accounts": [{ "privkey": "<privateKey>", "account_identifier": { "address": "test.near" }, ... ``` After replacing the `privateKey` you will need to replace `test-chain-I4wNe` with the name of your localnet in `rosetta-localnet.cfg`. ```json "network": { "blockchain": "nearprotocol", "network": "test-chain-I4wNe" }, ``` ##### Testnet Running it against testnet or mainnet also requires the `pre-funded accounts`, as well as the network, to be set to a proper value in the `.ros` and `rosetta-<mainnet|testnet>.cfg` files. Start by [creating an account](https://docs.near.org/docs/tools/near-cli#near-create-account). The created account will be placed in `~/.near-credentials/testnet/<accountname>.testnet.json`. Replace `<privateKey>` with the private key of the newly created account and `<accountName>` with the account name of the newly created account in `rosetta-testnet.cfg`. ```json ... "prefunded_accounts": [{ "privkey": "<privateKey>", "account_identifier": { "address": "<accountName>" }, ... ``` Next you will need to replace the `faucet` value with `{"address":"<accountName>"}` in `nearprotocol-testnet.ros`. Now you are ready to run the test against testnet. ##### Mainnet For mainnet you can follow the same steps as in the testnet documentation. The difference is that the configuration files are named `rosetta-mainnet.cfg` and `nearprotocol-mainnet.ros`. The credentials can be found in `~/.near-credentials/mainnet/<accountname>.near.json`.
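Editing the prefunded account entry by hand is easy to get wrong, so here is a small, hypothetical helper sketch that copies a key pair from a near-credentials file into one of the `rosetta-*.cfg` files. The credential field names (`account_id`, `private_key`) and the `construction.prefunded_accounts` nesting inside the `.cfg` file are assumptions rather than something taken from this README, so double-check them against your actual files before relying on it.

```python
"""Hypothetical helper: patch the prefunded account in a rosetta-*.cfg file
from a near-credentials JSON file.

Assumptions (verify against your setup): the credentials file contains
`account_id` and `private_key` fields, and the .cfg file is JSON with a
`construction.prefunded_accounts` list shaped like the snippets above."""
import json
import pathlib
import sys


def patch_cfg(credentials_path: str, cfg_path: str) -> None:
    creds = json.loads(pathlib.Path(credentials_path).read_text())
    cfg = json.loads(pathlib.Path(cfg_path).read_text())
    # Overwrite the first prefunded account with our key pair and address.
    account = cfg["construction"]["prefunded_accounts"][0]
    account["privkey"] = creds["private_key"]
    account["account_identifier"] = {"address": creds["account_id"]}
    pathlib.Path(cfg_path).write_text(json.dumps(cfg, indent=2) + "\n")


if __name__ == "__main__":
    # e.g. python3 patch_cfg.py ~/.near-credentials/testnet/<accountName>.testnet.json rosetta-testnet.cfg
    patch_cfg(sys.argv[1], sys.argv[2])
```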
## How to Compile To compile the `neard` executable you’ll need Rust and make installed. With those dependencies fulfilled, simply invoke `make neard` to build fully optimised executable. Such executable is adequate for running in production and will be located at `./target/release/neard`. Alternatively, during development and testing it may be better to follow the method recommended when [contributing to nearcore](https://docs.near.org/docs/community/contribute/contribute-nearcore) which creates a slightly less optimised executable but does it faster: ```bash cargo build --release --package neard --bin neard ``` ## How to Configure You need `neard` binary to proceed; if you compiled it from the source code (see above), you can find it in `./target/release/neard`. ### Initial Configuration #### mainnet ```bash neard --home ~/.near/mainnet init --chain-id mainnet --download-genesis --download-config ``` #### testnet ```bash neard --home ~/.near/testnet init --chain-id testnet --download-genesis --download-config ``` NOTE: The genesis of testnet is around 5GB, so it will take a while to download it. #### localnet (for local development) ```bash neard --home ~/.near/localnet init ``` ### Tuning You are free to configure your node the way you feel necessary through the config file: `~/.near/<chain-id>/config.json`. Here are some useful configuration options for Rosetta API. #### Enable Rosetta Server By default, Rosetta API is disabled even if `neard` is compiled with the feature enabled. Thus, we need to add the following section to the top-level of the `config.json` (next to the `"rpc"` section): ```json ... "rosetta_rpc": { "addr": "0.0.0.0:3040", "cors_allowed_origins": [ "*" ] }, ... ``` #### Keep Track of Everything By default, nearcore is configured to do as little work as possible while still operating on an up-to-date state. Indexers may have different requirements, so there is no solution that would work for everyone, and thus we are going to provide you with the set of knobs you can tune for your requirements. As already has been mentioned in this README, the most common tweak you need to apply is listing all the shards you want to index data from; to do that, you should ensure that `"tracked_shards"` lists all the shard IDs, e.g. for the current betanet and testnet, which have a single shard: ```json ... "tracked_shards": [0], ... ``` By default, nearcore is configured to automatically clean old data (performs garbage collection), so querying the data that was observed a few epochs before may return an error saying that the data is missing. If you only need recent blocks, you don't need this tweak, but if you need access to the historical data, consider updating `"archive"` setting in `config.json` to `true`: ```json ... "archive": true, ... ``` ## How to Run Once you have configured the node, just execute `neard` with the relevant home dir: ```bash neard --home ~/.near/mainnet run ``` To confirm that everything is fine, you should be able to query the Rosetta API: ```bash curl http://127.0.0.1:3040/network/list --header 'Content-Type: application/json' --data '{"metadata": {}}' ``` Expect to see the following response: ```json { "network_identifiers": [{ "blockchain": "nearprotocol", "network": "mainnet" }] } ``` The `network` value should reflect the chain id you specified during configuration (`mainnet`, `testnet`, `betanet`, or a random string like `test-chain-ztmbv` for localnet development). 
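If you prefer scripting over curl, the same checks can be done from Python. The minimal sketch below queries the Data API using only the standard library; the request and response shapes follow the public Rosetta specification rather than anything nearcore-specific, so treat the exact field names as assumptions and adjust the `NETWORK` value to whatever `/network/list` returns for your node.

```python
"""Minimal sketch of querying the Rosetta Data API from Python.
Endpoint shapes follow the public Rosetta specification."""
import json
import urllib.request

BASE = "http://127.0.0.1:3040"
NETWORK = {"blockchain": "nearprotocol", "network": "mainnet"}


def post(path: str, body: dict) -> dict:
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Current sync status and the latest block the node knows about.
status = post("/network/status", {"network_identifier": NETWORK, "metadata": {}})
tip = status["current_block_identifier"]
print("tip:", tip)

# Fetch that block and list its balance-changing operations.
block = post("/block", {"network_identifier": NETWORK,
                        "block_identifier": {"index": tip["index"]}})
for tx in block["block"]["transactions"]:
    for op in tx["operations"]:
        print(op["type"],
              op.get("account", {}).get("address"),
              op.get("amount", {}).get("value"))
```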
# near-epoch-manager crate The epoch manager crate is responsible for code related to epochs and epoch switching. An epoch is a unit of time during which the set of validators of the network remains constant. You can read more about epochs here: https://docs.near.org/concepts/basics/epoch You can read more about epoch finalization and epoch changes here: https://github.com/near/NEPs/blob/master/specs/BlockchainLayer/EpochManager/EpochManager.md ## EpochManager The main class, which has two main functions: * it creates new epochs (EpochIds) * it allows accessing information about past and current epochs (who is producing/approving given blocks, info about validators/fishermen, etc.). ### New Epoch Creation When 'finalize_epoch' is called, the EpochManager will do all the necessary processing (like computing validator rewards for the current epoch (T), selecting validators for the next next epoch (T+2), etc.) and create the new EpochId/EpochInfo. ### Accessing epoch information EpochManager also has a lot of methods that allow you to fetch information from different past and present epochs (like who is the chunk/block producer for a given chunk/block, whether the block is at the epoch boundary and requires more signatures, etc.). ## RewardCalculator RewardCalculator is responsible for computing rewards for the validators at the end of the epoch, based on their block/chunk production. You can see more details at https://nomicon.io/Economics/README.html#validator-rewards-calculation ## Validator Selection / proposals / proposals_to_epoch_info These files/functions are responsible for selecting the validators for the next epoch (and internally - also deciding which validator will produce which block and which chunk). We've recently (Dec 2021) introduced a new algorithm for validator selection (AliasValidatorSelectionAlgorithm), which is the reason why you can see both the old and the new implementation present in the code - with the new code living in `validator_selection.rs` and the old code in `proposals.rs`. ## Shard assignments This code is responsible for assigning validators (and chunk producers) to shards (chunks). This will be used only once we enable the `chunk_only_producers` feature (as before, we're simply assigning all the validators to validate each chunk). ## Epoch info aggregator This is the class that keeps 'stats' for a given epoch (for example: info on how many blocks/chunks the validators produced in the epoch, the protocol version that validators support, etc.). It is used to compute the validator rewards and the new validators at the end of the epoch. A smart contract written in AssemblyScript that can be used to make sure the NEAR runtime is compatible with AssemblyScript smart contracts. # Pre-requisites Switch to the smart contract directory and point npm to AssemblyScript: ```bash npm install --save-dev AssemblyScript/assemblyscript ``` Then install dependencies with ```bash npm install ``` # Building Build the smart contract with: ```bash npm run asbuild:untouched ``` And copy the smart contract into the `res` directory: ```bash cp build/untouched.wasm ../res/test_contract_ts.wasm ``` Then run the Rust integration test with: ```bash cargo test --package near-vm-runner --test test_ts_contract "" -- --nocapture ``` # Delay Detector Delay Detector is a library that can be used to measure time spent in different functions. Internally, it measures the time that passes between its creation and destruction.
## Example ``` { let d_ = DelayDetector::new("my function"); my_function(); // d_ goes out of scope and prints the time information into the log. } ``` More advanced example: ``` { let d = DelayDetector::new("long computation"); part1(); d.snapshot("part1"); part2(); d.snapshot("part2"); part3(); d.snapshot("part3"); // d goes out of scope and prints the total time information and the time between each 'snapshot' call. } ``` # Loadtest This test requires a few steps. Firstly, build the binary: ```shell make neard-release ``` Secondly, initialise your own localnet: ```shell ./target/release/neard --home ~/.near_tmp init --chain-id localnet --num-shards=5 ``` Thirdly, create accounts and deploy the contract: ```shell python3 pytest/tests/loadtest/setup.py --home ~/.near_tmp --num_accounts=5 ``` And lastly, run the test: ```shell python3 pytest/tests/loadtest/loadtest.py --home ~/.near_tmp --num_accounts=5 --num_requests=1000 ``` # Speedy sync (a.k.a. PoorMan's EpochSync) The goal of speedy sync is to allow people to catch up quickly with mainnet, before we have fully implemented the EpochSync feature. Currently, in order to catch up with mainnet there are two possible options: * download a DB backup that Pagoda provides (around 200GB) * sync from scratch - which can take a couple of days. With SpeedySync, you're able to catch up with mainnet in around 2-3 hours. # How does it work? With regular sync, your node needs to download all the headers from the genesis block until now (so around 60 million headers - as of May 2022). This of course will take a lot of time (possibly even days). The real fix will come once we finish building EpochSync - which would require the system to load only a single block per epoch (and therefore would limit the number of blocks needed by a factor of 40k - to around 12k blocks). But as EpochSync is not there yet, you can use SpeedySync in the meantime. SpeedySync uses a small checkpoint (around 50kb) that contains the necessary information about the state of the chain at a given epoch. Therefore your node can continue syncing from that moment, rather than directly from genesis. ## Is it safe? Yes, but with a small caveat: if someone provides you with a fake checkpoint, your future block hashes will not match. That's why **you should verify the block headers after your node is synced, to make sure that they match other blocks on mainnet**. # How do I use it? ## Creating a checkpoint To create a checkpoint, please run: ``` cargo build -p speedy_sync ./speedy_sync create --home $PATH_TO_RUNNING_NEAR_NODE --destination-dir $PATH_TO_PLACE_WHERE_TO_PUT_CHECKPOINT ``` ## Loading a checkpoint If your new HOME dir doesn't have a node_key.json file, you can generate a random one using: ``` cargo run -p keypair-generator -- --home /tmp/bar --generate-config node-key ``` To load a checkpoint, please run: ``` cargo build -p speedy_sync ./speedy_sync load --source-dir $PATH_TO_CHECKPOINT_DIR --target-home $PATH_TO_HOME_DIR_OF_A_NEW_NODE ``` ### After running speedy **Important:** After running the 'load' command, you must still copy the 'node_key.json' file into that directory before running neard. Please also check and verify the config.json file. Afterwards you can start neard with the new home dir and let it sync: ``` ./neard --home $PATH_TO_HOME_DIR_OF_A_NEW_NODE ``` # How neard will work The documents in this chapter talk about the future of NEAR - what we're planning on improving and how. (This also means that they can get out of date quickly :-).
If you have comments or suggestions, or want to help us design and implement some of these things, please reach out on Zulip or GitHub. # Steps to run hundred node tests ## Prerequisites - Have the [gcloud cli installed](https://cloud.google.com/sdk/install), [be logged in](https://cloud.google.com/sdk/gcloud/reference/auth/login), and have a default project set. (If you can run `gcloud compute instances list` and your account can create gcloud instances, then it's good.) - python3, virtualenv, pip3 - Locally compile these packages (used to create configs, keys and the genesis) ```bash cargo build -p neard --release cargo build -p genesis-csv-to-json --release cargo build -p keypair-generator --release ``` ## Steps 1. Install python dependencies: ```bash sudo apt install python3-dev cd pytest virtualenv venv -p `which python3` # First time only . venv/bin/activate pip install -r requirements.txt ``` Note: You need python3.6 or greater. 2. Create a gcloud (vm disk) image that has the compiled near binary ```bash # This will build from current branch, image name as near-<branch>-YYYYMMDD-<username>, cargo build -p near --release python tests/stress/hundred_nodes/create_gcloud_image.py # If you want different branch, image name or additional flags passed to cargo python tests/stress/hundred_nodes/create_gcloud_image image_name branch 'additional flags' ``` 3. Start a hundred nodes ```bash # will use the near-<branch>-YYYYMMDD-<username> image, instance name will be pytest-node-<username>-0 to 99 python tests/stress/hundred_nodes/start_100_nodes.py # If you have a different image name, or want a different instance name ... start_100_nodes.py image_name instance_name_prefix ``` Nodes are running after this step. 4. Access every node ```bash gcloud compute ssh pytest-node-<i> tmux a ``` ## Clean up - Logs are stored in each instance in `/tmp/python-rc.log`. You can collect them all with `tests/stress/hundred_nodes/collect_logs.py`. - Delete all instances quickly with `tests/delete_remote_nodes.py [prefix]` ## Some notes If you have volatile or slow ssh access to gcloud instances, these scripts can fail at any step. I recommend creating an instance on DigitalOcean, using mosh to connect to it (mosh is reliable), and running all the pytest scripts there (accessing gcloud from DigitalOcean is fast). On a reliable office network, or with mosh to DigitalOcean over an unreliable network, the scripts never failed. # Chain configs crate This crate provides typed interfaces to the NEAR Genesis and Client Configs, together with the functions to validate their correctness. ## Genesis config The genesis config is the one that 'defines' the chain. It is set at the beginning and generally is not mutable. ## Client config The client config is the part of the config that the client can configure on its own - it controls things like how many peers it should connect to before syncing, which shards to track, etc. ## Protocol config This is the type that spans both GenesisConfig and RuntimeConfig. People should not use it directly, but should use the ProtocolConfigView class instead. # Python-based tests This directory contains Python-based tests. The tests are run as part of nightly testing on NayDuck, though they can be run locally as well. There is no set format of what the tests do, but they typically start a local test cluster using the neard binary at `../target/debug/neard`. There is also some capacity for starting the cluster on remote machines.
## Running tests ### Running tests locally To run tests locally, first compile a debug build of the nearcore package, make sure that all required Python packages are installed, and then execute the test file using python. For example: cargo build cd pytest python3 -m pip install -U -r requirements.txt python3 tests/sanity/one_val.py After the test finishes, log files and other result data from running each node will be located in a `~/.near/test#_finished` directory (where `#` is the index of the node, starting with zero). Note that running the tests using the `pytest` command is not supported and won’t work reliably. Furthermore, running multiple tests at once is not supported either, because tests often use hard-coded paths (e.g. `~/.node/test#` for node home directories) and port numbers. ### Running tests on NayDuck As mentioned, the tests are normally run nightly on NayDuck. To schedule a run on NayDuck manually, the `../scripts/nayduck.py` script is used. The `../nightly/README.md` file describes this in more detail. ### Running pytest remotely The test library has code for executing tests while running the nodes on remote Google Cloud machines. Presumably that code worked in the past but I, mina86, haven’t tried it and am a bit sceptical as to whether it is still functional. Regardless, for anyone who wants to try it out, the instructions are as follows: Prerequisites: 1. Same as local pytest 2. gcloud cli in PATH Steps: 1. Choose or upload a near binary here: https://console.cloud.google.com/storage/browser/nearprotocol_nearcore_release?project=near-core 2. Fill the binary filename in remote.json. Modify zones as needed, they’ll be used in a round-robin manner. 3. `NEAR_PYTEST_CONFIG=remote.json python tests/...` 4. Run `python tests/delete_remote_nodes.py` to make sure the remote nodes are shut down properly (especially if tests failed early). ## Creating new tests To add a test, simply create a Python script inside the `tests` directory and add it to a test set file in the `../nightly` directory. See the `../nightly/README.md` file for detailed documentation of the test set files. Note that if you add a test file but don’t include it in a nightly test set, the pull request check will fail. Even though this directory is called `pytest`, the tests need to work when executed via `python3`. This means that they need to execute the tests when run as the main module rather than just defining the test functions. To make that happen, it’s best to implement the tests using Python's unittest framework but trigger them manually from within the `__main__` condition like so: if __name__ == "__main__": unittest.main() Alternatively, using the legacy way, the tests can be defined as `test_<foo>` functions with test bodies and then executed in a code fragment guarded by the `if __name__ == '__main__'` condition. If the test operates on the nodes running in a cluster, it will very likely want to make use of the `start_cluster` function defined in the `lib/cluster.py` module. Rather than assuming the location of a temporary directory, a well-behaved test should use the `tempfile` module instead, which will automatically take the `TEMPDIR` variable into consideration. This is especially important for NayDuck, which will automatically clean up after a test that respects the `TEMPDIR` directory even if that test ends up not cleaning up its temporary files.
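As an illustration of the `tempfile` advice above, here is a minimal sketch; the directory layout and prefix are made up for the example, and note that Python's `tempfile` reads the standard temp-dir environment variables rather than anything test-framework specific.

```python
"""Sketch of the tempfile pattern described above: create scratch space under
the system temporary directory instead of a hard-coded path, so the testing
infrastructure stays in control of where the files land."""
import pathlib
import tempfile


def run_with_scratch_dir() -> bool:
    # TemporaryDirectory honours the standard temp-dir environment variables
    # and removes the whole directory when the context exits.
    with tempfile.TemporaryDirectory(prefix="my-test-") as tmp:
        home = pathlib.Path(tmp) / "node0"
        home.mkdir()
        # ... point the node's --home at `home` and run the test here ...
        return home.exists()


if __name__ == "__main__":
    assert run_with_scratch_dir()
```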
For example, a simple test for checking implementation of `max_gas_burnt_view` could be located in `tests/sanity/rpc_max_gas_burnt.py` and look as follows: """Test max_gas_burnt_view client configuration. Spins up two nodes with different max_gas_burnt_view client configuration, deploys a smart contract and finally calls a view function against both nodes expecting the one with low max_gas_burnt_view limit to fail. """ import sys import base58 import base64 import json import pathlib sys.path.append(str(pathlib.Path(__file__).resolve().parents[2] / 'lib')) from cluster import start_cluster from utils import load_binary_file import transaction def test_max_gas_burnt_view(): nodes = start_cluster(2, 0, 1, config=None, genesis_config_changes=[], client_config_changes={ 1: {'max_gas_burnt_view': int(5e10)} }) contract_key = nodes[0].signer_key contract = load_binary_file( '../runtime/near-test-contracts/res/test_contract_rs.wasm') # Deploy the fib smart contract latest_block_hash = nodes[0].get_latest_block().hash deploy_contract_tx = transaction.sign_deploy_contract_tx( contract_key, contract, 10, base58.b58decode(latest_block_hash.encode('utf8'))) deploy_contract_response = ( nodes[0].send_tx_and_wait(deploy_contract_tx, 10)) def call_fib(node, n): args = base64.b64encode(bytes([n])).decode('ascii') return node.call_function( contract_key.account_id, 'fibonacci', args, timeout=10 ).get('result') # Call view function of the smart contract via the first # node. This should succeed. result = call_fib(nodes[0], 25) assert 'result' in result and 'error' not in result, ( 'Expected "result" and no "error" in response, got: {}' .format(result)) # Same but against the second node. This should fail. result = call_fib(nodes[1], 25) assert 'result' not in result and 'error' in result, ( 'Expected "error" and no "result" in response, got: {}' .format(result)) error = result['error'] assert 'HostError(GasLimitExceeded)' in error, ( 'Expected error due to GasLimitExceeded but got: {}'.format(error)) if __name__ == '__main__': test_max_gas_burnt_view() ### NayDuck environment When executed on NayDuck, tests have access to `neard`, `genesis-populate` and `restaked` binaries in `../target/debug` or `../target/release` directory (depending if the test has been scheduled with `--release` flag) just as if they were executed on local machine. Similarly, freshly built NEAR test contracts will be located in `../runtime/near-test-contracts/res` directory. The `NAYDUCK=1`, `NIGHTLY_RUNNER=1` and `NAYDUCK_TIMEOUT=<timeout>` environment variables are set when tests are run on NayDuck. If necessary and no other option exists, the first two can be used to change test’s behaviour to accommodate it running on the testing infrastructure as opposed to local machine. Meanwhile, `NAYDUCK_TIMEOUT` specifies how much time in seconds test has to run before NayDuck decides the test failed. ### Code Style To automate formatting and avoid excessive bike shedding, we're using YAPF to format Python source code in the pytest directory. It can be installed from Python Package Index (PyPI) using `pip` tool: python3 -m pip install yapf Once installed, it can be run either on a single file, for example with the following command: python3 -m yapf -pi lib/cluster.py or the entire directory with command as seen below: python3 -m yapf -pir . The `-p` switch enables parallelism and `-i` applies the changes in place. Without the latter switch the tool will write formatted file to standard output instead. 
The command should be executed in the `pytest` directory so that it’ll pick up configuration from the `.style.yapf` file.
near_sdk-rs-gas-benchmark
.github ISSUE_TEMPLATE BOUNTY.yml deploy-contract Cargo.toml build.sh src lib.rs expensive-calc Cargo.toml build.sh src lib.rs highlevel-collection Cargo.toml build.sh src lib.rs highlevel-minimal Cargo.toml build.sh src lib.rs lowlevel-api Cargo.toml build.sh src lib.rs lowlevel-minimal Cargo.toml build.sh src lib.rs status-message Cargo.toml build.sh src lib.rs
near_dkim-auth
.github ISSUE_TEMPLATE BOUNTY.yml README.md control-delegator Cargo.toml build.sh deploy.sh src lib.rs dkim-controller Cargo.toml build.sh deploy.sh src lib.rs dkim Cargo.toml src bytes.rs canonicalization.rs dns.rs errors.rs hash.rs header.rs lib.rs parser.rs public_key.rs result.rs roundtrip_test.rs sign.rs test keys 2022.txt email-relayer Cargo.toml src main.rs
# Proof-of-Concept for email-based authentication for NEAR The goal of this repo is to show a proof of concept of using DKIM signatures (added by default to emails) as a way to authenticate transactions. This would allow users to control their NEAR account via email - by setting the command that they would like to execute in the subject, and then sending the email to one of the recipients. The email is signed by the sender's server (in the current design, we only support Gmail) - and this signature can be verified by the contract. ## High level design The setup consists of 3 sub-projects: the control-delegator contract, the dkim-controller contract and the email-relayer server. ### control-delegator contract This is the contract that runs on the user's account - to handle delegated requests coming from the dkim-controller contract. ### dkim-controller contract This is the main contract that takes care of validating DKIM messages - and passing them to workers (and creating worker accounts). ### email-relayer server This is the job that gets emails from the IMAP server - and sends them as transactions. IMPORTANT: the server doesn't actually have any special powers. It acts more like a relayer - taking the incoming email and executing the NEAR function call. If it tried to change anything in the email contents, the signature verification in the contract would fail.
Learn-NEAR_NCD--redirector-extension
README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts main.spec.ts as_types.d.ts main.ts model.ts tsconfig.json babel.config.js manifest.json neardev shared-test-staging test.near.json shared-test test.near.json package-lock.json package.json src background core.ts index.html index.ts common customWalletConnection.ts helpers.ts types.ts pairing index.html popup index.html near.svg tests integration App-integration.test.js typesPatch.d.ts tsconfig.json webpack.common.js webpack.dev.js webpack.prod.js
# Redirector Browser Extension ## Concept Redirector is an extension suggesting possible forwards for user-generated resources. It is like HTTP 301/302 (Redirect) but initiated by extension, not by the server. It allows creating a redirect for some URL even if the hoster doesn’t provide this functionality. In the MVP the list of redirects is stored on [GitHub](https://github.com/dapplets/community-redirector-registry). Please create a pull request to add more redirects. In the next versions of the Redirector, the list of redirects will be decentralized and community-driven. ## Getting Started ### Installation 1. Clone this repo 2. Run `yarn install` (or `npm install`) 3. Run `yarn dev` (or `npm run dev`) 4. Run `yarn test` (or `npm run test`) ### How to use? 1. Open the popup ![](https://github.com/dapplets/community-redirector-extension/blob/master/docs/images/popup-no-account.png?raw=true) 2. Sign in to your NEAR account ![](https://github.com/dapplets/community-redirector-extension/blob/master/docs/images/popup-signed-in.png?raw=true) 3. Create new redirect. 3.1. Open a web page, which you want to redirect from. 3.2. Type to the search bar (omnibox) `redirect` and press `tab`. 3.3. Paste a URL of target web page, where you want to be redirected. 3.4. Select the suggestion `Create redirection to <url> from <url>` ![](https://github.com/dapplets/community-redirector-extension/blob/master/docs/images/omnibox-create-redirect.png?raw=true) 3.5. Type a message for a user, who will be redirected. ![](https://github.com/dapplets/community-redirector-extension/blob/master/docs/images/omnibox-prompt-message.png?raw=true) 3.6. After successfull redirect creation an alert will be shown. ![](https://github.com/dapplets/community-redirector-extension/blob/master/docs/images/omnibox-success-alert.png?raw=true) 4. Test your redirect. 4.1. Open a web page, which you want to redirect from. The redirect window will be shown, click "Redirect". ![](https://github.com/dapplets/community-redirector-extension/blob/master/docs/images/window-redirect.png?raw=true) ## Authors * **Dmitry Palchun** - *Initial work* - [ethernian](https://github.com/ethernian) * **Alexander Sakhaev** - *Initial work* - [alsakhaev](https://github.com/alsakhaev)
mehul-da_Near-CyberTicket
.gitpod.yml README.md babel.config.js contract Cargo.toml README.md compile.js src lib.rs target .rustc_info.json debug .fingerprint Inflector-421fec9da2450b13 lib-inflector.json autocfg-10f050a60f0851d4 lib-autocfg.json borsh-derive-2c9bae8573031336 lib-borsh-derive.json borsh-derive-internal-1bd62c8c7c1e7b81 lib-borsh-derive-internal.json borsh-schema-derive-internal-28b251e824bdb6f1 lib-borsh-schema-derive-internal.json byteorder-2fee559003480541 build-script-build-script-build.json convert_case-fd3f3eda7d128328 lib-convert_case.json derive_more-48d6873a850c1327 lib-derive_more.json generic-array-2779262564493281 build-script-build-script-build.json hashbrown-20b80f628e4bd520 lib-hashbrown.json hashbrown-2ebe92614a34df13 run-build-script-build-script-build.json hashbrown-da37d4392ec6455b build-script-build-script-build.json indexmap-608f0cdbf4a2f0c2 run-build-script-build-script-build.json indexmap-85a831240cca7b1e build-script-build-script-build.json indexmap-a1928c98df72f240 lib-indexmap.json itoa-0c4afa5ddf3b01b6 lib-itoa.json memchr-6e67e2699653af0a build-script-build-script-build.json near-rpc-error-core-e321ce3900c316a1 lib-near-rpc-error-core.json near-rpc-error-macro-08f88f4b8af1d95a lib-near-rpc-error-macro.json near-sdk-core-69000b972137e98d lib-near-sdk-core.json near-sdk-macros-3095662dc8cddf72 lib-near-sdk-macros.json num-bigint-428aa12d461281d5 build-script-build-script-build.json num-integer-c2fd55dd6ca94d3f build-script-build-script-build.json num-rational-821f0488a696a932 build-script-build-script-build.json num-traits-6b8559fb0f812e5b build-script-build-script-build.json proc-macro-crate-51faf7a6969293f2 lib-proc-macro-crate.json proc-macro2-6e989d10b7180836 run-build-script-build-script-build.json proc-macro2-c6b528058b5f4112 build-script-build-script-build.json proc-macro2-e10d1cfce506c4e3 lib-proc-macro2.json quote-8824194b103d456f lib-quote.json ryu-434de6aa728fcc03 build-script-build-script-build.json ryu-48e88b40ca9eb950 run-build-script-build-script-build.json ryu-f933a429278f7624 lib-ryu.json serde-14553aa89352af97 run-build-script-build-script-build.json serde-3e32a2ee4bbf5b18 lib-serde.json serde-42ea2b9ba199bc16 build-script-build-script-build.json serde_derive-3b3203750f84de75 run-build-script-build-script-build.json serde_derive-87755857f00aee87 lib-serde_derive.json serde_derive-c290a943d95858e2 build-script-build-script-build.json serde_json-99151f6afe727bb0 build-script-build-script-build.json serde_json-cd102e09047f5f79 run-build-script-build-script-build.json serde_json-e9d7ae203685f39f lib-serde_json.json syn-296bb421c99d9f34 run-build-script-build-script-build.json syn-48bfcd71b3413f31 build-script-build-script-build.json syn-688e66ae8cc99712 lib-syn.json toml-6ce6bce51eba75db lib-toml.json typenum-c0b6e4bed433a45d build-script-build-script-main.json unicode-xid-a6b77c0954f215e8 lib-unicode-xid.json version_check-2914e3c918269eb0 lib-version_check.json wee_alloc-2bf4422d6e538e1a build-script-build-script-build.json wasm32-unknown-unknown debug .fingerprint ahash-fa2210e97843f4ad lib-ahash.json aho-corasick-e2cfe3876d2b84a5 lib-aho_corasick.json base64-254d02c6dbc97ed3 lib-base64.json block-buffer-59aff2a7c02a1af8 lib-block-buffer.json block-buffer-6edd453b3a1e92ea lib-block-buffer.json block-padding-87c5d9cd96df8533 lib-block-padding.json borsh-5c92855b0956eabb lib-borsh.json bs58-23a386e28e880a89 lib-bs58.json byte-tools-5268c94bb2c5b087 lib-byte-tools.json byteorder-95b689927f60073d run-build-script-build-script-build.json 
byteorder-bf9b3aa34efc69c7 lib-byteorder.json cfg-if-1cd5b12b2c7578f3 lib-cfg-if.json cfg-if-e930fb202b63612a lib-cfg-if.json digest-27094599b98e3eda lib-digest.json digest-970e9f03114856ad lib-digest.json generic-array-2d59ad8d42dc365a lib-generic_array.json generic-array-495f8487c630bc76 run-build-script-build-script-build.json generic-array-68c81b3be9373d29 lib-generic_array.json greeter-98ace302c5fafa12 lib-greeter.json hashbrown-8c96461529b90c64 lib-hashbrown.json hashbrown-af97098fabe7e2ca lib-hashbrown.json hashbrown-b14ea15f0c0b571d run-build-script-build-script-build.json hex-361568f410c04db4 lib-hex.json indexmap-64ede45148d7aaec run-build-script-build-script-build.json indexmap-dd32e2efc564b802 lib-indexmap.json itoa-9df98163022f3db2 lib-itoa.json keccak-10f3fe4d4661ea51 lib-keccak.json lazy_static-7ba39ce008ae1372 lib-lazy_static.json memchr-20fff08d42b23e68 lib-memchr.json memchr-83393503b9d8e72c run-build-script-build-script-build.json memory_units-7758907f042da43e lib-memory_units.json near-primitives-core-787d77ecb8c20538 lib-near-primitives-core.json near-runtime-utils-e2142c28a4b722ae lib-near-runtime-utils.json near-sdk-d5ff1a9d4b70d2d5 lib-near-sdk.json near-vm-errors-580d912f4b6e7827 lib-near-vm-errors.json near-vm-logic-f88860cf47e7a7b1 lib-near-vm-logic.json num-bigint-836d97744bbf06e2 run-build-script-build-script-build.json num-bigint-e7963ed3db7ed147 lib-num-bigint.json num-integer-42fbc122e49f8da8 run-build-script-build-script-build.json num-integer-a2926d0e455a171d lib-num-integer.json num-rational-344b1ecd750faeff run-build-script-build-script-build.json num-rational-3abefbe3af24c03a lib-num-rational.json num-traits-161c05c92fd2f027 lib-num-traits.json num-traits-678857698c836f20 run-build-script-build-script-build.json opaque-debug-7df52cb9aab9a7c7 lib-opaque-debug.json opaque-debug-b0777fe7745352ee lib-opaque-debug.json regex-580d16f0249bb7e7 lib-regex.json regex-syntax-a4fa08328ed07945 lib-regex-syntax.json ryu-23905b90ecf58c0d run-build-script-build-script-build.json ryu-5ac909b8b9f903b5 lib-ryu.json serde-3bc3b2e3a91367d5 run-build-script-build-script-build.json serde-af477425fa64ff15 lib-serde.json serde_json-422fd7b1dabbc896 lib-serde_json.json serde_json-d781bc3f541cdacc run-build-script-build-script-build.json sha2-8b7e9ad97c21c747 lib-sha2.json sha3-b6c649e7ca647d88 lib-sha3.json typenum-25ff754cfe73c661 lib-typenum.json typenum-e5a0178000d0f867 run-build-script-build-script-main.json wee_alloc-b6544f05f7ef506d lib-wee_alloc.json wee_alloc-e258b329bcce949c run-build-script-build-script-build.json build num-bigint-836d97744bbf06e2 out radix_bases.rs typenum-e5a0178000d0f867 out consts.rs op.rs tests.rs wee_alloc-e258b329bcce949c out wee_alloc_static_array_backend_size_bytes.txt package.json src App.js Fundraising.js Home Home.js Launchpad Launchpad.js Marketplace Marketplace.js __mocks__ fileMock.js assets logo-black.svg logo-white.svg config.js global.css index.html index.js jest.init.js main.test.js utils.js wallet login index.html
cyber-ticket Smart Contract ================== A [smart contract] written in [Rust] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install Rust with [correct target] Exploring The Code ================== 1. The main smart contract code lives in `src/lib.rs`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [Rust]: https://www.rust-lang.org/ [create-near-app]: https://github.com/near/create-near-app [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html # cyber-ticket This [React] app was initialized with [create-near-app] # Quick Start To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `yarn install` 3. Run the local development server: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. # Exploring The Code 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2. The frontend code lives in the `/src` folder. `/src/index.html` is a great place to start exploring. Note that it loads in `/src/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `yarn run test`. # Deploy Every smart contract in NEAR has its [own associated account][near accounts]. When you run `yarn dev`, your smart contract gets deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. ## Step 0: Install near-cli (optional) [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `yarn install`, but for best ergonomics you may want to install it globally: yarn install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) ## Step 1: Create an account for the contract Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `cyber-ticket.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `cyber-ticket.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account cyber-ticket.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet ## Step 2: set contract name in code Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'cyber-ticket.YOUR-NAME.testnet' ## Step 3: deploy! 
One command: yarn deploy As you can see in `package.json`, this does two things: 1. builds & deploys smart contract to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. # Troubleshooting On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [react]: https://reactjs.org/ [create-near-app]: https://github.com/near/create-near-app [node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [near accounts]: https://docs.near.org/docs/concepts/account [near wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages
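The README above notes that `/src/index.js` is where the frontend connects to the NEAR blockchain. A minimal, hypothetical sketch of that wiring with near-api-js follows; the contract method names and the shape of `getConfig` are assumptions for illustration, not taken from this repository.

```js
// Sketch only: browser-side NEAR connection, roughly what a create-near-app index.js does.
import { connect, keyStores, WalletConnection, Contract } from 'near-api-js'
import getConfig from './config' // assumed to return { networkId, nodeUrl, walletUrl, contractName, ... }

const nearConfig = getConfig(process.env.NODE_ENV || 'development')

export async function initContract() {
  // Connect to the network described in src/config.js, keeping keys in browser storage
  const near = await connect({
    ...nearConfig,
    keyStore: new keyStores.BrowserLocalStorageKeyStore(),
  })

  const walletConnection = new WalletConnection(near, 'cyber-ticket')
  const contract = new Contract(walletConnection.account(), nearConfig.contractName, {
    viewMethods: ['get_tickets'],  // hypothetical view method names
    changeMethods: ['buy_ticket'], // hypothetical change method names
  })

  return { walletConnection, contract }
}
```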
amirsaran3_guest-book-example
.eslintrc.yml .github dependabot.yml workflows deploy.yml tests.yml .gitpod.yml .travis.yml README-Gitpod.md README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts guestbook.spec.ts as_types.d.ts main.ts model.ts tsconfig.json babel.config.js neardev shared-test-staging test.near.json shared-test test.near.json package-lock.json package.json src App.js config.js index.html index.js tests integration App-integration.test.js ui App-ui.test.js
Guest Book ========== [![Build Status](https://travis-ci.com/near-examples/guest-book.svg?branch=master)](https://travis-ci.com/near-examples/guest-book) [![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/near-examples/guest-book) <!-- MAGIC COMMENT: DO NOT DELETE! Everything above this line is hidden on NEAR Examples page --> Sign in with [NEAR] and add a message to the guest book! A starter app built with an [AssemblyScript] backend and a [React] frontend. Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you have Node.js ≥ 12 installed (https://nodejs.org), then use it to install [yarn]: `npm install --global yarn` (or just `npm i -g yarn`) 2. Run the local development server: `yarn && yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Running `yarn dev` will tell you the URL you can visit in your browser to see the app. Exploring The Code ================== 1. The backend code lives in the `/assembly` folder. This code gets deployed to the NEAR blockchain when you run `yarn deploy:contract`. This sort of code-that-runs-on-a-blockchain is called a "smart contract" – [learn more about NEAR smart contracts][smart contract docs]. 2. The frontend code lives in the `/src` folder. [/src/index.html](/src/index.html) is a great place to start exploring. Note that it loads in `/src/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and backend. The backend code gets tested with the [asp] command for running the backend AssemblyScript tests, and [jest] for running frontend tests. You can run both of these at once with `yarn test`. Both contract and client-side code will auto-reload as you change source files. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contracts get deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli -------------------------- You need near-cli installed globally. Here's how: npm install --global near-cli This will give you the `near` [CLI] tool. Ensure that it's installed with: near --version Step 1: Create an account for the contract ------------------------------------------ Visit [NEAR Wallet] and make a new account. You'll be deploying these smart contracts to this new account. Now authorize NEAR CLI for this new account, and follow the instructions it gives you: near login Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'your-account-here!' Step 3: change remote URL if you cloned this repo ------------------------- Unless you forked this repository you will need to change the remote URL to a repo that you have commit access to. This will allow auto deployment to GitHub Pages from the command line. 1) go to GitHub and create a new repository for this project 2) open your terminal and in the root of this project enter the following: $ `git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY.git` Step 4: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. 
builds & deploys smart contracts to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. [NEAR]: https://near.org/ [yarn]: https://yarnpkg.com/ [AssemblyScript]: https://www.assemblyscript.org/introduction.html [React]: https://reactjs.org [smart contract docs]: https://docs.near.org/docs/develop/contracts/overview [asp]: https://www.npmjs.com/package/@as-pect/cli [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/docs/concepts/account [NEAR Wallet]: https://wallet.near.org [near-cli]: https://github.com/near/near-cli [CLI]: https://www.w3schools.com/whatis/whatis_cli.asp [create-near-app]: https://github.com/near/create-near-app [gh-pages]: https://github.com/tschaub/gh-pages
PrimeLabCore_group
.github workflows ci.yml CHANGELOG.md Cargo.toml README.md src cofactor.rs lib.rs prime.rs tests mod.rs wnaf.rs
# group [![Crates.io](https://img.shields.io/crates/v/group.svg)](https://crates.io/crates/group) `group` is a crate for working with groups over elliptic curves. ## License Licensed under either of * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0) * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) at your option. ### Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
gabarod_near-CareConnect
.gitpod.yml README.md contract Cargo.toml README.md build.sh deploy.sh neardev dev-account.env src lib.rs frontend App.js assets global.css logo-black.svg logo-white.svg index.html index.js near-interface.js near-wallet.js package-lock.json package.json start.sh ui-components.js integration-tests Cargo.toml src tests.rs package-lock.json package.json
near-care-connect-project ================== This app was initialized with [create-near-app] Quick Start =========== If you haven't installed dependencies during setup: npm install Build and deploy your contract to TestNet with a temporary dev account: npm run deploy Test your contract: npm test If you have a frontend, run `npm start`. This will run a dev server. Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[create-near-app]: https://github.com/near/create-near-app
[Node.js]: https://nodejs.org/en/download/package-manager/
[jest]: https://jestjs.io/
[NEAR accounts]: https://docs.near.org/concepts/basics/account
[NEAR Wallet]: https://wallet.testnet.near.org/
[near-cli]: https://github.com/near/near-cli
[gh-pages]: https://github.com/tschaub/gh-pages

# Hello NEAR Contract

The smart contract exposes two methods to enable storing and retrieving a greeting on the NEAR network.

```rust
use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::{log, near_bindgen};

const DEFAULT_GREETING: &str = "Hello";

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize)]
pub struct Contract {
    greeting: String,
}

impl Default for Contract {
    fn default() -> Self {
        Self { greeting: DEFAULT_GREETING.to_string() }
    }
}

#[near_bindgen]
impl Contract {
    // Public: Returns the stored greeting, defaulting to 'Hello'
    pub fn get_greeting(&self) -> String {
        return self.greeting.clone();
    }

    // Public: Takes a greeting, such as 'howdy', and records it
    pub fn set_greeting(&mut self, greeting: String) {
        // Record a log permanently to the blockchain!
        log!("Saving greeting {}", greeting);
        self.greeting = greeting;
    }
}
```

<br />

# Quickstart

1. Make sure you have installed [rust](https://www.rust-lang.org/).
2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup)

<br />

## 1. Build and Deploy the Contract

You can automatically compile and deploy the contract to the NEAR testnet by running:

```bash
./deploy.sh
```

Once finished, check the `neardev/dev-account` file to find the address at which the contract was deployed:

```bash
cat ./neardev/dev-account
# e.g. dev-1659899566943-21539992274727
```

<br />

## 2. Retrieve the Greeting

`get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**!

```bash
# Use near-cli to get the greeting
near view <dev-account> get_greeting
```

<br />

## 3. Store a New Greeting

`set_greeting` changes the contract's state, which makes it a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction.

```bash
# Use near-cli to set a new greeting
near call <dev-account> set_greeting '{"greeting":"howdy"}' --accountId <dev-account>
```

**Tip:** If you would like to call `set_greeting` using your own account, first log in to NEAR using:

```bash
# Use near-cli to log in to your NEAR account
near login
```

and then use the logged-in account to sign the transaction: `--accountId <your-account>`.
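Beyond near-cli, the same two calls can also be made programmatically. Below is a minimal sketch using near-api-js from Node.js; it assumes near-api-js is installed, that the account used already exists in your local `~/.near-credentials` keystore (created by `near login`), and the contract id shown is just the example dev account from above.

```js
// Sketch only: call the view and change methods of the greeting contract from Node.js.
const { connect, keyStores, Contract } = require('near-api-js')
const path = require('path')
const os = require('os')

async function main() {
  // Reuse the credentials that `near login` stores on disk
  const keyStore = new keyStores.UnencryptedFileSystemKeyStore(
    path.join(os.homedir(), '.near-credentials')
  )
  const near = await connect({
    networkId: 'testnet',
    nodeUrl: 'https://rpc.testnet.near.org',
    keyStore,
  })

  const account = await near.account('your-account.testnet') // placeholder account id
  const contract = new Contract(account, 'dev-1659899566943-21539992274727', {
    viewMethods: ['get_greeting'],
    changeMethods: ['set_greeting'],
  })

  console.log(await contract.get_greeting())          // view call: free, no gas
  await contract.set_greeting({ greeting: 'howdy' })  // change call: signed, costs gas
  console.log(await contract.get_greeting())
}

main().catch(console.error)
```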
NEARBuilders_weighted-voting
Cargo.toml readme.md src lib.rs
Hashimdev-spec_We3-NEAR-Blockchain-Dapp
.gitpod.yml README.md contract README.md babel.config.json build.sh deploy.sh package.json src contract.ts tsconfig.json frontend assets global.css logo-black.svg logo-white.svg index.html index.js near-wallet.js package-lock.json package.json start.sh integration-tests package.json src main.ava.ts package-lock.json package.json
# Hello NEAR Contract

The smart contract exposes two methods to enable storing and retrieving a greeting on the NEAR network.

```ts
import { NearBindgen, near, call, view } from 'near-sdk-js';

@NearBindgen({})
class HelloNear {
  greeting: string = "Hello";

  @view // This method is read-only and can be called for free
  get_greeting(): string {
    return this.greeting;
  }

  @call // This method changes the state, which costs gas
  set_greeting({ greeting }: { greeting: string }): void {
    // Record a log permanently to the blockchain!
    near.log(`Saving greeting ${greeting}`);
    this.greeting = greeting;
  }
}
```

<br />

# Quickstart

1. Make sure you have installed [node.js](https://nodejs.org/en/download/package-manager/) >= 16.
2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup)

<br />

## 1. Build and Deploy the Contract

You can automatically compile and deploy the contract to the NEAR testnet by running:

```bash
npm run deploy
```

Once finished, check the `neardev/dev-account` file to find the address at which the contract was deployed:

```bash
cat ./neardev/dev-account
# e.g. dev-1659899566943-21539992274727
```

<br />

## 2. Retrieve the Greeting

`get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**!

```bash
# Use near-cli to get the greeting
near view <dev-account> get_greeting
```

<br />

## 3. Store a New Greeting

`set_greeting` changes the contract's state, which makes it a `call` method. `Call` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction.

```bash
# Use near-cli to set a new greeting
near call <dev-account> set_greeting '{"greeting":"howdy"}' --accountId <dev-account>
```

**Tip:** If you would like to call `set_greeting` using your own account, first log in to NEAR using:

```bash
# Use near-cli to log in to your NEAR account
near login
```

and then use the logged-in account to sign the transaction: `--accountId <your-account>`.

# NEAR Blockchain Dapp with a Simple Smart Contract 👋

[![](https://img.shields.io/badge/⋈%20Examples-basics-green)](https://docs.near.org/tutorials/welcome) [![](https://img.shields.io/badge/Gitpod-ready-orange)](https://gitpod.io/#/https://github.com/near-examples/hello-near-js) [![](https://img.shields.io/badge/Contract-js-yellow)](https://docs.near.org/develop/contracts/anatomy) [![](https://img.shields.io/badge/Frontend-js-yellow)](https://docs.near.org/develop/integrate/frontend) [![](https://img.shields.io/github/workflow/status/near-examples/hello-near-js/Tests/master?color=green&label=Tests)](https://github.com/near-examples/hello-near-js/actions/workflows/tests.yml)
mhassanist_wallet-selection-guestbook
.env README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts guestbook.spec.ts as_types.d.ts main.ts model.ts tsconfig.json babel.config.js neardev shared-test-staging test.near.json shared-test test.near.json package.json src environments environment.prod.ts environment.ts index.html interfaces.ts polyfills.ts tsconfig.json
# wallet-selection-guestbook
nearvndev_staking-contract-rs
Cargo.toml README.md build.sh src account.rs core_impl.rs enumeration.rs internal.rs lib.rs util.rs tests simulation-tests main.rs
# Staking FT Contract ## Roadmap - [ ]
hdriqi_near-textile-indexer-example
README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts main.spec.ts memento.spec.ts user.spec.ts as_types.d.ts event.ts main.ts tsconfig.json example-query.js gen-account.js index.js package.json
# near-textile-indexer-example

This repo is an example of how to use [near-textile-indexer](https://github.com/hdriqi/near-textile-indexer)

This example uses an AssemblyScript-based smart contract. The indexer also works with Rust-based smart contracts, as long as they satisfy the required `methods` and `Event` type.

## How to

1. Clone this repo

```bash
git clone https://github.com/hdriqi/near-textile-indexer-example
```

2. Install dependencies

```bash
yarn install
# or
npm install
```

## Env Setup

Create `env` based on the `env.sample`. Here's the basic setup for NEAR testnet:

```
NETWORK_ID=default
NODE_URL=https://rpc.testnet.near.org
```

## Contract

You need to deploy the smart contract to the NEAR blockchain:

```bash
yarn deploy:dev
```

or use the already deployed contract on testnet at [dev-1601093501138-5843386](https://explorer.testnet.near.org/accounts/dev-1601093501138-5843386) and update the `env`:

```
CONTRACT_NAME=dev-1601093501138-5843386
```

## Generate Textile API Key

You need to generate a `user group` API key from Textile and put it in `env`. [Read the Textile docs](https://docs.textile.io/hub/apis/)

OPTIONAL: You can also generate another `user group` API key from Textile to serve as a client-side, public API key. Make sure both of them are in the same org.

```
TEXTILE_API_KEY=xxx(required)
PUBLIC_TEXTILE_API_KEY=xxx(optional)
```

## Generate Account

You need to provide a master account that can write to the database (see the key-generation sketch at the end of this README).

```bash
node gen-account.js
```

Copy the privateKey and put it into the `env` variables for indexer authentication.

```
ADMIN_PRIVATE_KEY=xxx(required)
```

Copy the publicKey and put it into the writeValidator function in `index.js`.

```js
const writeValidator = (writer) => {
  // only allow admin public key to write new data to thread
  if (writer == 'UPDATE_PUBLIC_KEY') {
    return true
  }
  return false
}
```

Note: `writeValidator` does not support external variables, so you can only update the pubKey via a hard-coded string.

## Run Indexer

When everything is ready, you can run the indexer with:

```bash
node index.js
```

## Query

After indexing, you can query the data using the example query provided in `example-query.js`:

```bash
node example-query.js
```
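The repository ships `gen-account.js` for the key-generation step above. Its exact contents are not shown here; a minimal, hypothetical sketch of what such a script could do with near-api-js is below (an assumption for illustration, not necessarily what the repo implements).

```js
// Hypothetical sketch: generate an ed25519 key pair for the indexer's admin identity.
// Assumes near-api-js is installed; the real gen-account.js may differ.
const { KeyPair } = require('near-api-js')

const keyPair = KeyPair.fromRandom('ed25519')
console.log('publicKey :', keyPair.getPublicKey().toString())
console.log('privateKey:', keyPair.toString()) // secret key; keep it out of version control
```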
joelvaiju_Near-Challenge-2-NFT-Frontend
.gitpod.yml README.md babel.config.js contract Cargo.toml README.md compile.js src approval.rs enumeration.rs internal.rs lib.rs metadata.rs mint.rs nft_core.rs royalty.rs target .rustc_info.json debug .fingerprint Inflector-9e3c62115074b9cf lib-inflector.json autocfg-cd255646c4ed3339 lib-autocfg.json borsh-derive-5ae16aa117c7f399 lib-borsh-derive.json borsh-derive-internal-d0a21fc13ecc1b98 lib-borsh-derive-internal.json borsh-schema-derive-internal-ab395ab8b11e5be3 lib-borsh-schema-derive-internal.json byteorder-f262064c186a8c75 build-script-build-script-build.json convert_case-95e284bc56c12eb5 lib-convert_case.json derive_more-69cde6e327ff0a85 lib-derive_more.json generic-array-092d57466d24febe build-script-build-script-build.json hashbrown-23b6f330e2a36dbb run-build-script-build-script-build.json hashbrown-cdcf863ec2cdd3e4 build-script-build-script-build.json hashbrown-d487e72206b07264 lib-hashbrown.json indexmap-1e685d288818a816 lib-indexmap.json indexmap-8c9f9453652d0887 run-build-script-build-script-build.json indexmap-c3ac49ead90ef7d0 build-script-build-script-build.json itoa-b06d4cff304254a3 lib-itoa.json memchr-035c4269a6233099 build-script-build-script-build.json near-rpc-error-core-b63ef2006bff2a54 lib-near-rpc-error-core.json near-rpc-error-macro-379a6a754086af95 lib-near-rpc-error-macro.json near-sdk-core-f46d7222a619c60b lib-near-sdk-core.json near-sdk-macros-67a465e716adb526 lib-near-sdk-macros.json near-sdk-macros-727405ec780c5e3a lib-near-sdk-macros.json num-bigint-d69df36f763339d9 build-script-build-script-build.json num-integer-733a54d88eb397d7 build-script-build-script-build.json num-rational-ae7f1682837cfa6a build-script-build-script-build.json num-traits-4b4399d154f5b10c build-script-build-script-build.json proc-macro-crate-12334d71f27edc5f lib-proc-macro-crate.json proc-macro2-3fad645d9b421e8e run-build-script-build-script-build.json proc-macro2-5fcfb2a46c5df79c build-script-build-script-build.json proc-macro2-c9e8a8bd316f311f lib-proc-macro2.json quote-577c40b40ecd60bd lib-quote.json ryu-218129458e3751da run-build-script-build-script-build.json ryu-a617547c15767d1a build-script-build-script-build.json ryu-b54e014f019506d7 lib-ryu.json serde-283b18786897374f lib-serde.json serde-552ea552149e1a4d build-script-build-script-build.json serde-b99429f97a9b8ecc run-build-script-build-script-build.json serde_derive-4f31ad7a0e5ec105 build-script-build-script-build.json serde_derive-9dbff8f6028e8a2c run-build-script-build-script-build.json serde_derive-ea52ca527bcbc75e lib-serde_derive.json serde_json-bf5b0064d1b45776 run-build-script-build-script-build.json serde_json-c83f8bc67b0f9037 lib-serde_json.json serde_json-e9017e2e6ab453c7 build-script-build-script-build.json syn-0359a400ed92222f build-script-build-script-build.json syn-60a21e1eff858db6 lib-syn.json syn-84879f3141fd595e run-build-script-build-script-build.json toml-dc98d37f2aab96c5 lib-toml.json typenum-b487004bf7bc9a32 build-script-build-script-main.json unicode-xid-1ccf9a8622388d0a lib-unicode-xid.json version_check-c02be86861b3fab0 lib-version_check.json wee_alloc-4c1d5fc69208e232 build-script-build-script-build.json release .fingerprint Inflector-eefff05c7d46c877 lib-inflector.json autocfg-c02d6b6ee8622f5b lib-autocfg.json borsh-derive-c21411ffee7d065e lib-borsh-derive.json borsh-derive-internal-0726e6659e32ca55 lib-borsh-derive-internal.json borsh-schema-derive-internal-8697fe5729e1de66 lib-borsh-schema-derive-internal.json byteorder-4c53fba529cd8017 build-script-build-script-build.json 
convert_case-309159d13aa0180b lib-convert_case.json derive_more-17327c8597bdd865 lib-derive_more.json generic-array-bbd1689c084f1315 build-script-build-script-build.json hashbrown-3b10e6213c33e9f8 build-script-build-script-build.json hashbrown-71f18a7b216297ee run-build-script-build-script-build.json hashbrown-dfa58823fc945baa lib-hashbrown.json indexmap-003b007402065e55 build-script-build-script-build.json indexmap-7a45a55109b85880 lib-indexmap.json indexmap-849c89fb1d695e49 run-build-script-build-script-build.json itoa-63dbac3aa24376df lib-itoa.json memchr-5039b6a12ddaf906 build-script-build-script-build.json near-rpc-error-core-1f988ffce4bd9d91 lib-near-rpc-error-core.json near-rpc-error-macro-158fc2e33ce8e5d9 lib-near-rpc-error-macro.json near-sdk-core-d84548966f96da76 lib-near-sdk-core.json near-sdk-macros-a043428c9254ab21 lib-near-sdk-macros.json near-sdk-macros-f4424224c9e7df45 lib-near-sdk-macros.json num-bigint-098edfc0477822c7 build-script-build-script-build.json num-integer-f323875df5e8d39e build-script-build-script-build.json num-rational-345f00cf9e6a530c build-script-build-script-build.json num-traits-e9eac7993af6983b build-script-build-script-build.json proc-macro-crate-da01f2024500e13b lib-proc-macro-crate.json proc-macro2-02d8576ab8214684 run-build-script-build-script-build.json proc-macro2-dc182f065c28f8c4 build-script-build-script-build.json proc-macro2-ee244c0a276fdd6c lib-proc-macro2.json quote-6f475ef88c316af6 lib-quote.json ryu-275a04943870b294 run-build-script-build-script-build.json ryu-46daeab0ac17a91e build-script-build-script-build.json ryu-838cf17691075593 lib-ryu.json serde-868b859e86f1b52f build-script-build-script-build.json serde-a075dbe8ce36a928 lib-serde.json serde-a15dc695509366b7 run-build-script-build-script-build.json serde_derive-3b1694127305b275 lib-serde_derive.json serde_derive-68da4886ed165ea1 build-script-build-script-build.json serde_derive-bf5fe2264c5c5ecd run-build-script-build-script-build.json serde_json-6ba65a5d1d909229 run-build-script-build-script-build.json serde_json-94fd47e956bd539d build-script-build-script-build.json serde_json-f33e4dafaf2f9911 lib-serde_json.json syn-2584b402440daf00 build-script-build-script-build.json syn-34d0cd23b80b7deb run-build-script-build-script-build.json syn-f4eb97a9d7efaafa lib-syn.json toml-8816f709528797e3 lib-toml.json typenum-0fca38c01d29fb39 build-script-build-script-main.json unicode-xid-26809bafcedebcca lib-unicode-xid.json version_check-53eaf7d3ff4e406c lib-version_check.json wee_alloc-56f41bf66950ca1a build-script-build-script-build.json wasm32-unknown-unknown debug .fingerprint ahash-31500eed5b448aad lib-ahash.json aho-corasick-b89132b0381c749e lib-aho_corasick.json base64-646109cef868d84f lib-base64.json block-buffer-5f9d9d8b55f1d3b1 lib-block-buffer.json block-buffer-bc99728b4d693dcc lib-block-buffer.json block-padding-13821e343f46f753 lib-block-padding.json borsh-6fb510f42155d666 lib-borsh.json bs58-7212f4faed67cdbf lib-bs58.json byte-tools-a6cd60c7ea46575a lib-byte-tools.json byteorder-425eab9cce122222 lib-byteorder.json byteorder-b1278529b57a037f run-build-script-build-script-build.json cfg-if-53998aee5392ec20 lib-cfg-if.json cfg-if-f1de31271fe8376a lib-cfg-if.json digest-02b66ce3a4c78c1c lib-digest.json digest-5bb936ed42c05efb lib-digest.json generic-array-30cd0a03156a9b87 lib-generic_array.json generic-array-6802468a8a36c271 run-build-script-build-script-build.json generic-array-845551d223c666ba lib-generic_array.json greeter-98ace302c5fafa12 lib-greeter.json hashbrown-483ac2a52167ff2a 
lib-hashbrown.json hashbrown-51d3921aceef5ebb run-build-script-build-script-build.json hashbrown-9e21cf3d1d28819b lib-hashbrown.json hex-261cfc3fcecaa4ef lib-hex.json indexmap-33441b070f70d190 run-build-script-build-script-build.json indexmap-549d81fd1b4aaa1a lib-indexmap.json itoa-4c06a779eef25af4 lib-itoa.json keccak-b0c9ccd044944503 lib-keccak.json lazy_static-b79bf709c5f09987 lib-lazy_static.json memchr-8974d6d0eea4ab03 lib-memchr.json memchr-f922bd3de584c24c run-build-script-build-script-build.json memory_units-3804df6a88cda429 lib-memory_units.json near-contract-standards-5b3a3eab7dcb9d29 lib-near-contract-standards.json near-primitives-core-22512a59771af8d0 lib-near-primitives-core.json near-runtime-utils-3357838d53825c1b lib-near-runtime-utils.json near-sdk-6b4a65a8cf3fbfe5 lib-near-sdk.json near-sdk-b965d2f0da3bed34 lib-near-sdk.json near-sys-3c35b23306249209 lib-near-sys.json near-vm-errors-20e6f8caa2eb3f21 lib-near-vm-errors.json near-vm-logic-afd430776e895057 lib-near-vm-logic.json nft_minter-093969e334c557bc lib-nft_minter.json num-bigint-7e7d1d90c83f30cc lib-num-bigint.json num-bigint-a699a4a251dccc5e run-build-script-build-script-build.json num-integer-8d01ee791cd20c62 lib-num-integer.json num-integer-e00df3a90a9271c0 run-build-script-build-script-build.json num-rational-abf9bfbe7cbd8dd6 lib-num-rational.json num-rational-e5b980680ebaaecb run-build-script-build-script-build.json num-traits-c1660e0226ce670c run-build-script-build-script-build.json num-traits-e0f409119940623e lib-num-traits.json opaque-debug-20397d3162c70a7a lib-opaque-debug.json opaque-debug-67e11037907b0a62 lib-opaque-debug.json regex-e489d254d1e7e12d lib-regex.json regex-syntax-bcaa825332c5ba50 lib-regex-syntax.json ryu-52a388757c15c1a6 run-build-script-build-script-build.json ryu-a7a81e23b9bcb06b lib-ryu.json serde-6d59abfee35ae43b run-build-script-build-script-build.json serde-ebe94b1241f5047e lib-serde.json serde_json-1da7a0c73c632567 lib-serde_json.json serde_json-c3b71e6248b2b8a1 run-build-script-build-script-build.json sha2-bdebeba22d9faa70 lib-sha2.json sha3-4cf15448deb657a1 lib-sha3.json typenum-4976863587ad7c9e run-build-script-build-script-main.json typenum-eee99a2ed0f1494e lib-typenum.json wee_alloc-1aec1ccd611d0139 run-build-script-build-script-build.json wee_alloc-6b415af9b59e77f3 lib-wee_alloc.json build num-bigint-a699a4a251dccc5e out radix_bases.rs typenum-4976863587ad7c9e out consts.rs op.rs tests.rs wee_alloc-1aec1ccd611d0139 out wee_alloc_static_array_backend_size_bytes.txt release .fingerprint ahash-3f602f2bb35305d5 lib-ahash.json aho-corasick-030629cb0b324655 lib-aho_corasick.json base64-0266aaf2a36d6a95 lib-base64.json block-buffer-654559c263d4295c lib-block-buffer.json block-buffer-ef3bdc2ca4ca2258 lib-block-buffer.json block-padding-266537d29da7e855 lib-block-padding.json borsh-9b107637f753f2f3 lib-borsh.json bs58-f4b2e40280ca61bd lib-bs58.json byte-tools-3fdf9f81b3566a62 lib-byte-tools.json byteorder-281741e712c3593b run-build-script-build-script-build.json byteorder-3cbc13d9e2120d78 lib-byteorder.json cfg-if-4b6f1020c71c4854 lib-cfg-if.json cfg-if-996161407648b543 lib-cfg-if.json digest-3a298d941a772f8b lib-digest.json digest-3eaf2af68ec5d268 lib-digest.json generic-array-42cae01aebabebfd lib-generic_array.json generic-array-6eec14bd6daaf3f4 lib-generic_array.json generic-array-adc59870ace2b200 run-build-script-build-script-build.json hashbrown-452902ff07dbf6b7 lib-hashbrown.json hashbrown-7e8e848cd8112f62 run-build-script-build-script-build.json hashbrown-81ab7d7b70ef6a86 
lib-hashbrown.json hex-cdfd77af73369eda lib-hex.json indexmap-0d6c7f4ff05e67fe lib-indexmap.json indexmap-330038fb844bc3b8 run-build-script-build-script-build.json itoa-88cf6ae2932e56ea lib-itoa.json keccak-f86991b32285e158 lib-keccak.json lazy_static-f4536af75ba8e49f lib-lazy_static.json memchr-ac3578d82635da1b run-build-script-build-script-build.json memchr-f4def7b5615765f0 lib-memchr.json memory_units-a3e83b33528dff62 lib-memory_units.json near-primitives-core-e7c89bfa4354a8fb lib-near-primitives-core.json near-runtime-utils-2655910cf28c83b9 lib-near-runtime-utils.json near-sdk-4bf23a838ada7fc0 lib-near-sdk.json near-sdk-a18a27d1b842c5b0 lib-near-sdk.json near-sys-618ec7eacf1f9b09 lib-near-sys.json near-vm-errors-863ac45711028062 lib-near-vm-errors.json near-vm-logic-250c7f2c58c6a3bc lib-near-vm-logic.json nft_minter-093969e334c557bc lib-nft_minter.json num-bigint-c900bd01afc2e1e7 lib-num-bigint.json num-bigint-dc825d0acebfdbcf run-build-script-build-script-build.json num-integer-349c2ce4d60f0d9f lib-num-integer.json num-integer-fdb86b9d5f8a8e0d run-build-script-build-script-build.json num-rational-18885d54ff9a376a run-build-script-build-script-build.json num-rational-8b42979eff2189a9 lib-num-rational.json num-traits-86239f847f33ab60 run-build-script-build-script-build.json num-traits-bfe3d12501cd1372 lib-num-traits.json opaque-debug-1b7ea0b07f1afb39 lib-opaque-debug.json opaque-debug-b6d439c6e3e9edbe lib-opaque-debug.json regex-a02f2db97851bb94 lib-regex.json regex-syntax-4657cf0405a3df43 lib-regex-syntax.json ryu-1337630bb096e090 run-build-script-build-script-build.json ryu-fcfc9347cfc122e6 lib-ryu.json serde-312623b7805249cb lib-serde.json serde-e2a1baf64aac9a1e run-build-script-build-script-build.json serde_json-dff9c9af4bee57f6 lib-serde_json.json serde_json-e5c223ce15e2bb43 run-build-script-build-script-build.json sha2-6c431d0402a066fd lib-sha2.json sha3-35f260f3db24565d lib-sha3.json typenum-43dbc46d2082a107 run-build-script-build-script-main.json typenum-52b05cf60c51c8aa lib-typenum.json wee_alloc-034861dc56ef4c30 run-build-script-build-script-build.json wee_alloc-deb6b3e0ff371f74 lib-wee_alloc.json build num-bigint-dc825d0acebfdbcf out radix_bases.rs typenum-43dbc46d2082a107 out consts.rs op.rs tests.rs wee_alloc-034861dc56ef4c30 out wee_alloc_static_array_backend_size_bytes.txt package.json src App.js __mocks__ fileMock.js assets logo-black.svg logo-white.svg config.js global.css index.html index.js jest.init.js main.test.js utils.js wallet login index.html
near-spring-NFT-challenge3 ================== This [React] app was initialized with [create-near-app] Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `yarn install` 3. Run the local development server: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. Exploring The Code ================== 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2. The frontend code lives in the `/src` folder. `/src/index.html` is a great place to start exploring. Note that it loads in `/src/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `yarn run test`. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contract gets deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `yarn install`, but for best ergonomics you may want to install it globally: yarn install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-spring-NFT-challenge3.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-spring-NFT-challenge3.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-spring-NFT-challenge3.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-spring-NFT-challenge3.YOUR-NAME.testnet' Step 3: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. builds & deploys smart contract to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[React]: https://reactjs.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/docs/concepts/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages near-spring-NFT-challenge3 Smart Contract ================== A [smart contract] written in [Rust] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install Rust with [correct target] Exploring The Code ================== 1. The main smart contract code lives in `src/lib.rs`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [Rust]: https://www.rust-lang.org/ [create-near-app]: https://github.com/near/create-near-app [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html
Mycelium-Lab_crisp-frontend
.eslintrc.js README.md babel.config.js docs README.md jsconfig.json package.json public index.html src assets icons ArrowsClockwise.svg arrow-down.svg arrow-twin.svg burger.svg expand.svg isActive active.svg error.svg logout.svg search.svg telegram.svg x.svg near-protocol-near-logo.svg constants charts.js index.js main.js router index.js store index.js utils format.js index.js localStorage.js number.js tick.js vue.config.js
### Documentation

Layout: The parent component is the /App.vue file; it contains the router-view, where the components from the /views folder are rendered. It also contains the header, notifications-wrapper and socials-wrapper. The layout uses scss; some basic styles are placed in the /assets/scss/main.scss file, for example text sizes and most colors for buttons and text.

# Project structure:

For each page the corresponding component from the /views folder is used; these components are rendered in the router-view tag in App.vue and switched by the router configured in /router/index.js. The router is configured to preserve history. A non-existent route (app.crisp.exchange/foo-bar12345), as well as the bare root path (app.crisp.exchange/), leads to the swap component.

In the /constants folder there are two files with constants:

1. charts.js stores the default set of settings for the chart drawn using the Apexcharts library; if desired, you can change them directly in the constant, or add some parameters for a particular instance in the component using the spread operator (apexCharts = {...apexCharts, tooltip: {enabled: false}}).
2. index.js has constants such as CONTRACT_ID, METHOD_NAMES (new methods should be added here, so that the user is granted usage rights for them at login), DEFAULT_SWAP_PAIR (if there is no last-used pair in localStorage, the SwapView component takes this pair as default) and SWAP_TOKENS (a listing of all currently used tokens).

The /utils folder contains five files with reusable functions and code (a sketch of the format.js helpers appears at the end of this README):

1. format.js uses the ethers.js library to convert a token quantity into its smallest units using the token's decimals (addDecimals), and to remove decimals, converting that quantity back to a human-readable format (removeDecimals)
2. index.js contains CONFIG, which is currently set to testnet; these values can be changed to mainnet
3. localStorage.js simplifies working with localStorage by providing functions to write, read, and delete records from localStorage
4. number.js: toFixed() and isNumber() simplify working with numbers in exponential notation (replaced by ethers.js methods almost everywhere due to imprecision)
5. tick.js is used to round the price to the nearest tick

# Application initialization

When the application renders and the created() hook fires in the App.vue file, the load method is called, which sequentially calls the following functions in the store:

1. fetchCrispContract() initializes the walletSelector and setupModal entities of the nearWalletSelector library, as well as nearConnection (using the config) and then walletConnection (which are nearAPI entities themselves). When setupModal is initialized, the user is presented with a login prompt if they are not already logged in. After a successful login, the page is refreshed and, if the user is logged in, the crispContract entity is initialized, which is then used for simple calls to contract functions (not batch transactions).
2. fetchPools() fetches the list of pools from the contract by calling the corresponding method
3. fetchBalances(), using the user's accountId, gets a list of the user's balances by initializing the tokenBalances entity. The response from get_balance_all_tokens comes as a single string, so it is broken down into an object, and then for each object there is a query to ft_metadata() to get additional data about the token: symbol, icon, number of decimals, etc.
This list is then matched against the existing list of tokens (in constants/index.js) to initialize objects for missing balances too, since a user may not have tokens on CRISP but have them in their NEAR wallet. After all these manipulations, the ft_balance_of method is also called for each tokenBalance object to get the corresponding balance in the NEAR wallet, and as a result we get an array of objects, where each balance object contains all the necessary information about the token, its CRISP balance, and its NEAR wallet balance.
4. processTokensMetadata() is used to get information about tokens available in pools in order to output additional data for them. This may be reworked in the future, as this information is also duplicated in tokenBalances, but for now it is used in most components
5. processPositions lists all positions in the pools obtained by the fetchPools method and initializes two arrays: one with all positions in all pools (for general display at the bottom of the page) and one with the user's positions (for detailed display on the same page)
6. fetchDeposits retrieves user deposits made available for borrowing by other users, creating a userDeposits entity for rendering on the corresponding page
7. fetchBorrows gets the user's deposits, finds the corresponding items in the userPositions list and updates some values in them, so that the LiquidityView page displays that the item is in the borrowed state, along with some additional data.

After all these functions are executed, the application displays all the necessary information when navigating through the different pages. Similar calls occur in the reload() function in the store component, which is called during some transactions that are executed directly on the page.

# Transaction invocation: classical method and batch transactions

At the moment there are two types of transaction calls used with the contract. The first is the simplified form, used when we need to call only one action in one contract, for example the withdraw() call in DepositView. In this case, we only need to take the crispContract entity from the store, which has all existing contract functions on it, and call the desired method in the form:

```
await contract.withdraw(
  {
    token: this.tokenW, // address of the token to be withdrawn
    amount: addDecimals(this.amountW, tokenObj), // amount of token with decimals added
  }
)
```

after which we process the returned promise with .then() and dispatch reload(), since such functions can be called directly from the application.

The second is the so-called batch transaction, functionality provided by the near wallet selector library; it allows us to call several actions in one contract, or several actions across several contracts, within one transaction. Example: the create_deposit function in LendingView.
If we set nearWallet as the desired token source to create the deposit, the following batchTx is executed:

```
await wallet.signAndSendTransactions({
  transactions: [
    {
      receiverId: tokenObj.token,
      actions: [
        {
          type: "FunctionCall",
          params: {
            methodName: "storage_deposit",
            args: Buffer.from(JSON.stringify(argsDeposit)),
            gas: 150000000000000,
            deposit: 1
          }
        },
        {
          type: "FunctionCall",
          params: {
            methodName: "ft_transfer_call",
            args: Buffer.from(JSON.stringify(argsTransfer)),
            gas: 150000000000000,
            deposit: 1
          }
        }
      ]
    },
    {
      receiverId: CONTRACT_ID,
      actions: [
        {
          type: "FunctionCall",
          params: {
            methodName: "create_deposit",
            args: Buffer.from(JSON.stringify(argsCreateDeposit)),
            gas: 150000000000000
            // deposit: 1
          }
        }
      ]
    }
  ]
})
```

The signAndSendTransactions function belongs to the wallet object that we need to get from the store. The transactions parameter is an array of calls to the contracts we will access. Each transaction has a receiverId, which is the address of the contract where the actions will be executed, and an actions array, which is the list of actions to perform on that contract. In each action we need to specify the type, as well as parameters such as the name of the method and the maximum amount of gas for the call, and for some transactions we also need to specify a deposit (in NEAR). Arguments are composed in the same way as for the previous transaction call, but they must first be cast to a string and then to a buffer (as shown in the example).

# Crisp frontend

[Crisp](https://crisp.exchange/) is an open-source structured liquidity protocol on the NEAR blockchain. This repository contains the frontend. There is also a [smart contract repository](https://github.com/Mycelium-Lab/crisp-exchange).

## Overview

The protocol is an advanced DEX where you can:

- trade with existing liquidity
- place concentrated liquidity positions in a chosen price range to earn trading fees
- leverage your positions up to 5x
- lend your tokens for others to borrow for leverage and earn interest
- liquidate underwater leveraged positions to earn a premium

<img src="docs/img/swap.png" height=200> <img src="docs/img/create-position.png" height=200> <img src="docs/img/your-positions.png" height=200> <img src="docs/img/add-liquidity.png" height=200> <img src="docs/img/remove-liquidity.png" height=200> <img src="docs/img/lend.png" height=200>

## Project setup

```
npm i
```

### Compiles and hot-reloads for development

```
npm run serve
```

### Compiles and minifies for production

```
npm run build
```
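As referenced in the documentation section above, format.js wraps ethers.js to move token amounts between human-readable and on-chain representations. The helpers themselves are not reproduced in this README; a minimal, hypothetical sketch (assuming ethers v5 and a token object that carries a `decimals` field) might look like this:

```js
// Hypothetical sketch of the format.js helpers; the real implementation may differ.
import { ethers } from 'ethers'

// Convert a human-readable amount into the token's smallest units using its decimals.
export function addDecimals(amount, token) {
  return ethers.utils.parseUnits(String(amount), token.decimals).toString()
}

// Convert an on-chain integer amount back into a human-readable string.
export function removeDecimals(amount, token) {
  return ethers.utils.formatUnits(amount, token.decimals)
}
```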
marco-sundsk_near_rust_contract_example
README.md storage-analyser data eg_whole.json ncdl2_ex01.py near_rpc.py requirements.txt storage-demo Cargo.toml build.sh src lib.rs note.rs
# near_rust_contract_example
khorolets_keyvalue-contract
Cargo.toml README.md build.sh src getters.rs internal.rs lib.rs owner.rs
# keyvalue-contract
NEAR-Edu_solidity-vs-rust
README.md rust Cargo.toml build.sh deploy.sh dev-deploy.sh init-args.js src contract.rs lib.rs
# Solidity vs. Rust Smart Contract Comparison # Dependencies - Rust 1.56 - Node.js 14 - NEAR CLI 3.1 # Authors - Jacob Lindahl <jacob@near.foundation> [@sudo_build](https://twitter.com/sudo_build)
near_nitro
.github ISSUE_TEMPLATE BOUNTY.yml bug_report.md feature_request.md codeql codeql-config.yml workflows arbitrator-ci.yml arbitrator-skip-ci.yml ci.yml codeql-analysis.yml docker.yml waitForNitro.sh .golangci.yml .nitro-tag.txt README.md arbcompress compress_cgo.go compress_common.go compress_test.go compress_wasm.go arbitrator Cargo.toml arbutil Cargo.toml src color.rs format.rs lib.rs types.rs cbindgen.toml jit Cargo.toml build.rs programs print main.go time main.go src arbcompress.rs color.rs gostack.rs machine.rs main.rs runtime.rs socket.rs syscall.rs test.rs wavmio.rs prover Cargo.toml fuzz Cargo.toml fuzz_targets osp.rs src binary.rs host.rs lib.rs machine.rs main.rs memory.rs merkle.rs reinterpret.rs utils.rs value.rs wavm.rs test-cases go main.go rust Cargo.toml src bin basics.rs globalstate.rs host-io.rs keccak256.rs pi.rs stdlib.rs lib.rs wasm-libraries Cargo.toml brotli Cargo.toml build.rs src lib.rs go-abi Cargo.toml src lib.rs go-stub Cargo.toml src lib.rs value.rs host-io Cargo.toml src lib.rs soft-float bindings32.c bindings64.c wasi-stub Cargo.toml src lib.rs wasm-testsuite Cargo.toml check.sh src main.rs arbnode api.go batch_poster.go classicMessage.go dataposter data_poster.go dataposter_test.go dbstorage storage.go noop storage.go redis redisstorage.go slice slicestorage.go storage storage.go storage_test.go delayed.go delayed_seq_reorg_test.go delayed_sequencer.go inbox_reader.go inbox_test.go inbox_tracker.go inbox_tracker_test.go maintenance.go maintenance_test.go message_pruner.go message_pruner_test.go node.go redislock redis.go resourcemanager resource_management.go resource_management_test.go schema.go seq_coordinator.go seq_coordinator_atomic_test.go sequencer_inbox.go simple_redis_lock_test.go sync_monitor.go transaction_streamer.go arbos addressSet addressSet.go addressSet_test.go addressTable addressTable.go addressTable_test.go arbosState arbosstate.go arbosstate_test.go common_test.go initialization_test.go initialize.go arbostypes incomingmessage.go messagewithmeta.go block_processor.go blockhash blockhash.go blockhash_test.go burn burn.go common_test.go engine.go incomingmessage_test.go internal_tx.go l1pricing batchPoster.go batchPoster_test.go common_test.go l1PricingOldVersions.go l1pricing.go l1pricing_test.go l1pricing_test.go l2pricing l2pricing.go l2pricing_test.go model.go merkleAccumulator merkleAccumulator.go parse_l2.go queue_test.go retryable_test.go retryables retryable.go storage queue.go storage.go storage_test.go tx_processor.go util retryable_encoding_test.go tracing.go transfer.go util.go arbstate das_reader.go inbox.go inbox_fuzz_test.go arbutil block_message_relation.go correspondingl1blocknumber.go hash.go hash_test.go preimage_type.go transaction_data.go wait_for_l1.go blsSignatures blsSignatures.go blsSignatures_test.go broadcastclient broadcastclient.go broadcastclient_test.go broadcastclients broadcastclients.go broadcaster broadcaster.go broadcaster_serialization_test.go broadcaster_test.go sequencenumbercatchupbuffer.go sequencenumbercatchupbuffer_test.go cmd chaininfo arbitrum_chain_info.json chain_info.go conf chain.go database.go daserver daserver.go dataavailability data_availability_check.go datool datool.go deploy deploy.go genericconf config.go filehandler_test.go getversion17.go getversion18.go jwt.go liveconfig.go logging.go pprof.go server.go wallet.go ipfshelper ipfshelper.go ipfshelper_test.go nitro-val config.go nitro_val.go nitro config_test.go init.go nitro.go relay config_test.go relay.go replay db.go main.go 
seq-coordinator-invalidate seq-coordinator-invalidate.go seq-coordinator-manager rediscoordinator redis_coordinator.go seq-coordinator-manager.go util chaininfoutil.go confighelpers configuration.go keystore.go keystore_test.go codecov.yml das aggregator.go aggregator_test.go bigcache_storage_service.go bigcache_storage_service_test.go chain_fetch_das.go das.go dasRpcClient.go dasRpcServer.go das_test.go dastree dastree.go dastree_test.go db_storage_service.go extra_signature_checker_test.go factory.go fallback_storage_service.go fallback_storage_service_test.go ipfs_storage_service.go ipfs_storage_service_test.go iterable_storage_service.go key_utils.go lifecycle.go local_file_storage_service.go memory_backed_storage_service.go near_aggregator.go panic_wrapper.go read_limited.go reader_aggregator_strategies.go reader_aggregator_strategies_test.go redis_storage_service.go redis_storage_service_test.go redundant_storage_service.go redundant_storage_test.go regular_sync_storage_test.go regularly_sync_storage.go rest_server_list.go restful_client.go restful_server.go restful_server_list_test.go restful_server_test.go rpc_aggregator.go rpc_test.go s3_storage_service.go s3_storage_service_test.go sign_after_store_das_writer.go simple_das_reader_aggregator.go simple_das_reader_aggregator_test.go storage_service.go store_signing.go store_signing_test.go syncing_fallback_storage.go timeout_wrapper.go util.go deploy deploy.go docs notice.md execution gethexec api.go arb_interface.go block_recorder.go blockchain.go executionengine.go forwarder.go node.go sequencer.go tx_pre_checker.go interface.go gethhook geth-hook.go geth_test.go linter koanf handlers.go koanf.go koanf_test.go pointercheck pointer.go pointer_test.go structinit structinit.go structinit_test.go testdata src koanf a a.go b b.go pointercheck pointercheck.go structinit a a.go nodeInterface NodeInterface.go NodeInterfaceDebug.go virtual-contracts.go precompiles ArbAddressTable.go ArbAddressTable_test.go ArbAggregator.go ArbAggregator_test.go ArbBLS.go ArbDebug.go ArbFunctionTable.go ArbGasInfo.go ArbInfo.go ArbOwner.go ArbOwnerPublic.go ArbOwner_test.go ArbRetryableTx.go ArbRetryableTx_test.go ArbStatistics.go ArbSys.go ArbosActs.go ArbosTest.go context.go precompile.go precompile_test.go wrapper.go relay relay.go rust-toolchain.toml scripts build-brotli.sh download-machine.sh solgen gen.go staker assertion.go block_challenge_backend.go block_validator.go block_validator_schema.go challenge_manager.go challenge_test.go common_test.go execution_challenge_bakend.go execution_reverted_test.go l1_validator.go rollup_watcher.go staker.go stateless_block_validator.go txbuilder builder.go validatorwallet contract.go eoa.go noop.go statetransfer data.go interface.go jsondatareader.go memdatareader.go system_tests aliasing_test.go arbtrace_test.go batch_poster_test.go block_hash_test.go block_validator_bench_test.go block_validator_test.go bloom_test.go common_test.go conditionaltx_test.go contract_tx_test.go das_test.go debugapi_test.go delayedinbox_test.go delayedinboxlong_test.go estimation_test.go fees_test.go forwarder_test.go full_challenge_impl_test.go full_challenge_test.go infra_fee_test.go initialization_test.go ipc_test.go log_subscription_test.go meaningless_reorg_test.go nodeinterface_test.go outbox_test.go precompile_fuzz_test.go precompile_test.go recreatestate_rpc_test.go reorg_resequencing_test.go retryable_test.go seq_coordinator_test.go seq_nonce_test.go seq_pause_test.go seq_reject_test.go seq_whitelist_test.go 
seqcompensation_test.go seqfeed_test.go seqinbox_test.go staker_challenge_test.go staker_test.go state_fuzz_test.go test_info.go transfer_test.go triedb_race_test.go twonodes_test.go twonodeslong_test.go validation_mock_test.go validator_reorg_test.go wrap_transaction_test.go util arbmath bips.go bits.go math.go math_test.go colors colors.go containers lru.go promise.go promise_test.go queue.go queue_test.go syncmap.go contracts address_verifier.go headerreader header_reader.go jsonapi preimages.go preimages_test.go merkletree common_test.go merkleAccumulator_test.go merkleEventProof.go merkleEventProof_test.go merkleTree.go metricsutil metricsutil.go normalizeGas.go pretty pretty_printing.go redisutil redis_coordinator.go redisutil.go test_redis.go rpcclient rpcclient.go rpcclient_test.go sharedmetrics sharedmetrics.go signature datasigner.go sign_verify.go sign_verify_test.go simple_hmac.go verifier.go verifier_test.go stopwaiter stopwaiter.go stopwaiter_promise_test.go stopwaiter_test.go testhelpers pseudorandom.go testhelpers.go validator execution_state.go interface.go server_api json.go valiation_api.go validation_client.go server_arb execution_run.go machine.go machine_cache.go machine_loader.go mock_machine.go nitro_machine.go preimage_resolver.go prover_interface.go validator_spawner.go server_common machine_loader.go machine_locator.go valrun.go server_jit jit_machine.go machine_loader.go spawner.go validation_entry.go valnode valnode.go wavmio higher.go raw.go stub.go wsbroadcastserver clientconnection.go clientmanager.go connectionlimiter.go connectionlimiter_test.go dictionary.go utils.go wsbroadcastserver.go zeroheavy common_test.go zeroheavy.go zeroheavy_test.go
<br /> <p align="center"> <a href="https://arbitrum.io/"> <img src="https://arbitrum.io/assets/arbitrum/logo_color.png" alt="Logo" width="80" height="80"> </a> <h3 align="center">Arbitrum Nitro</h3> <p align="center"> <a href="https://developer.arbitrum.io/"><strong>Next Generation Ethereum L2 Technology »</strong></a> <br /> </p> </p> ## About Arbitrum Nitro <img src="https://arbitrum.io/assets/arbitrum/logo_color.png" alt="Logo" width="80" height="80"> Nitro is the latest iteration of the Arbitrum technology. It is a fully integrated, complete layer 2 optimistic rollup system, including fraud proofs, the sequencer, the token bridges, advanced calldata compression, and more. See the live docs-site [here](https://developer.arbitrum.io/) (or [here](https://github.com/OffchainLabs/arbitrum-docs) for markdown docs source.) See [here](./audits) for security audit reports. The Nitro stack is built on several innovations. At its core is a new prover, which can do Arbitrum’s classic interactive fraud proofs over WASM code. That means the L2 Arbitrum engine can be written and compiled using standard languages and tools, replacing the custom-designed language and compiler used in previous Arbitrum versions. In normal execution, validators and nodes run the Nitro engine compiled to native code, switching to WASM if a fraud proof is needed. We compile the core of Geth, the EVM engine that practically defines the Ethereum standard, right into Arbitrum. So the previous custom-built EVM emulator is replaced by Geth, the most popular and well-supported Ethereum client. The last piece of the stack is a slimmed-down version of our ArbOS component, rewritten in Go, which provides the rest of what’s needed to run an L2 chain: things like cross-chain communication, and a new and improved batching and compression system to minimize L1 costs. Essentially, Nitro runs Geth at layer 2 on top of Ethereum, and can prove fraud over the core engine of Geth compiled to WASM. Arbitrum One successfully migrated from the Classic Arbitrum stack onto Nitro on 8/31/22. (See [state migration](https://developer.arbitrum.io/migration/state-migration) and [dapp migration](https://developer.arbitrum.io/migration/dapp_migration) for more info). ## License We currently have Nitro [licensed](./LICENSE) under a Business Source License, similar to our friends at Uniswap and Aave, with an "Additional Use Grant" to ensure that everyone can have full comfort using and running nodes on all public Arbitrum chains. ## Contact Discord - [Arbitrum](https://discord.com/invite/5KE54JwyTs) Twitter: [Arbitrum](https://twitter.com/arbitrum)
MozyOk_learn-web3-dapp-solana
.gitpod.yml .prettierrc.json README.md __test__ avalanche.test.ts polygon.test.ts secret.test.ts solana.test.ts components protocols avalanche components index.ts lib index.ts celo components index.ts lib index.ts ceramic lib figmentLearnSchema.json figmentLearnSchemaCompact.json identityStore LocalStorage.ts index.ts index.ts types index.ts near components index.ts lib index.ts polkadot components index.ts lib index.ts polygon challenges balance.ts connect.ts deploy.ts getter.ts index.ts query.ts restore.ts setter.ts transfer.ts components index.ts lib index.ts pyth components index.ts lib index.ts swap.ts secret components index.ts lib index.ts solana components index.ts lib index.ts tezos components index.ts lib index.ts the_graph graphql query.ts the_graph_near graphql query.ts shared Button.styles.ts CustomMarkdown Markdown.styles.ts VideoPlayer VideoPlayer.styles.ts utils markdown-utils.ts string-utils.ts ProtocolNav ProtocolNav.styles.ts contracts celo HelloWorld.json near Cargo.toml README.md compile.js src lib.rs polygon SimpleStorage README.md SimpleStorage.json migrations 1_initial_migration.js 2_deploy_contracts.js package.json truffle-config.js solana program Cargo.toml Xargo.toml src lib.rs tests lib.rs tezos counter.js the_graph CryptopunksData.abi.json docker docker-compose-near.yml docker-compose.yml hooks index.ts useColors.ts useLocalStorage.ts useSteps.ts jest.config.js lib constants.ts markdown PREFACE.md avalanche CHAIN_CONNECTION.md CREATE_KEYPAIR.md EXPORT_TOKEN.md FINAL.md GET_BALANCE.md IMPORT_TOKEN.md PROJECT_SETUP.md TRANSFER_TOKEN.md celo CHAIN_CONNECTION.md CREATE_ACCOUNT.md DEPLOY_CONTRACT.md FINAL.md GET_BALANCE.md GET_CONTRACT_VALUE.md PROJECT_SETUP.md SET_CONTRACT_VALUE.md SWAP_TOKEN.md TRANSFER_TOKEN.md ceramic BASIC_PROFILE.md CHAIN_CONNECTION.md CUSTOM_DEFINITION.md FINAL.md LOGIN.md PROJECT_SETUP.md near CHAIN_CONNECTION.md CREATE_ACCOUNT.md CREATE_KEYPAIR.md DEPLOY_CONTRACT.md FINAL.md GET_BALANCE.md GET_CONTRACT_VALUE.md PROJECT_SETUP.md SET_CONTRACT_VALUE.md TRANSFER_TOKEN.md polkadot CHAIN_CONNECTION.md CREATE_ACCOUNT.md ESTIMATE_DEPOSIT.md ESTIMATE_FEES.md FINAL.md GET_BALANCE.md PROJECT_SETUP.md RESTORE_ACCOUNT.md TRANSFER_TOKEN.md polygon CHAIN_CONNECTION.md DEPLOY_CONTRACT.md FINAL.md GET_BALANCE.md GET_CONTRACT_VALUE.md PROJECT_SETUP.md QUERY_CHAIN.md RESTORE_ACCOUNT.md SET_CONTRACT_VALUE.md TRANSFER_TOKEN.md pyth FINAL.md PROJECT_SETUP.md PYTH_CONNECT.md PYTH_EXCHANGE.md PYTH_LIQUIDATE.md PYTH_SOLANA_WALLET.md PYTH_VISUALIZE_DATA.md secret CHAIN_CONNECTION.md CREATE_ACCOUNT.md DEPLOY_CONTRACT.md FINAL.md GET_BALANCE.md GET_CONTRACT_VALUE.md PROJECT_SETUP.md SET_CONTRACT_VALUE.md TRANSFER_TOKEN.md solana CHAIN_CONNECTION.md CREATE_ACCOUNT.md DEPLOY_CONTRACT.md FINAL.md FUND_ACCOUNT.md GET_BALANCE.md GET_CONTRACT_VALUE.md PROJECT_SETUP.md SET_CONTRACT_VALUE.md SOLANA_CREATE_GREETER.md TRANSFER_TOKEN.md tezos CHAIN_CONNECTION.md CREATE_ACCOUNT.md DEPLOY_CONTRACT.md FINAL.md GET_BALANCE.md GET_CONTRACT_VALUE.md PROJECT_SETUP.md SET_CONTRACT_VALUE.md TRANSFER_TOKEN.md the_graph FINAL.md GRAPH_NODE.md PROJECT_SETUP.md SUBGRAPH_MANIFEST.md SUBGRAPH_MAPPINGS.md SUBGRAPH_QUERY.md SUBGRAPH_SCAFFOLD.md SUBGRAPH_SCHEMA.md the_graph_near FINAL.md GRAPH_NODE.md PROJECT_SETUP.md SUBGRAPH_MANIFEST.md SUBGRAPH_MAPPINGS.md SUBGRAPH_QUERY.md SUBGRAPH_SCAFFOLD.md SUBGRAPH_SCHEMA.md | n : | next-env.d.ts next.config.js package.json pages api avalanche account.ts balance.ts connect.ts export.ts import.ts transfer.ts celo account.ts balance.ts connect.ts deploy.ts 
getter.ts setter.ts swap.ts transfer.ts near balance.ts check-account.ts connect.ts create-account.ts deploy.ts getter.ts keypair.ts setter.ts transfer.ts polkadot account.ts balance.ts connect.ts deposit.ts estimate.ts restore.ts transfer.ts pyth connect.ts secret account.ts balance.ts connect.ts deploy.ts getter.ts setter.ts transfer.ts solana balance.ts connect.ts deploy.ts fund.ts getter.ts greeter.ts keypair.ts setter.ts transfer.ts tezos account.ts balance.ts connect.ts deploy.ts getter.ts setter.ts transfer.ts the-graph-near entity.ts manifest.ts scaffold.ts the-graph entity.ts manifest.ts mapping.ts node.ts scaffold.ts public discord.svg figment-learn-compact.svg vercel.svg theme colors.ts index.ts media.ts tsconfig.json types index.ts utils colors.ts context.ts datahub.ts markdown.ts networks.ts pages.ts string-utils.ts tracking-utils.ts
Based on: MetaCoin tutorial from Truffle docs https://www.trufflesuite.com/docs/truffle/quickstart SimpleStorage example contract from Solidity docs https://docs.soliditylang.org/en/v0.4.24/introduction-to-smart-contracts.html#storage 1. Install truffle (https://www.trufflesuite.com/docs/truffle/getting-started/installation) `npm install -g truffle` 2. Navigate to this directory (/contracts/polygon/SimpleStorage) 3. Install dependencies `yarn` 4. Test contract `truffle test ./test/TestSimpleStorage.sol` **Possible issue:** "Something went wrong while attempting to connect to the network. Check your network configuration. Could not connect to your Ethereum client with the following parameters:" **Solution:** run `truffle develop` and make sure the port matches the one in truffle-config.js under the development and test networks 5. Run locally via `truffle develop` $ truffle develop ``` migrate let instance = await SimpleStorage.deployed(); let storedDataBefore = await instance.get(); storedDataBefore.toNumber() // Should print 0 instance.set(50); let storedDataAfter = await instance.get(); storedDataAfter.toNumber() // Should print 50 ``` 6. Create a Polygon testnet account - Install MetaMask (https://chrome.google.com/webstore/detail/metamask/nkbihfbeogaeaoehlefnkodbefgpgknn?hl=en) - Add a custom network with the following params: Network Name: "Polygon Mumbai" RPC URL: https://rpc-mumbai.maticvigil.com/ Chain ID: 80001 Currency Symbol: MATIC Block Explorer URL: https://mumbai.polygonscan.com 7. Fund your account from the Matic Faucet https://faucet.matic.network Select MATIC Token, Mumbai Network Enter your account address from MetaMask Wait until the time limit is up, then request tokens 3-4 times so you have enough to deploy your contract 8. Add a `.secret` file in this directory with your account's seed phrase or mnemonic (you should have been prompted to write this down or store it securely when creating your account in MetaMask). In `truffle-config.js`, uncomment the three constant declarations at the top, along with the matic section of the networks section of the configuration object. 9. Deploy contract `truffle migrate --network matic` 10. Interact via ethers.js ```js const {ethers} = require('ethers'); const fs = require('fs'); const mnemonic = fs.readFileSync('.secret').toString().trim(); const signer = ethers.Wallet.fromMnemonic(mnemonic); const provider = new ethers.providers.JsonRpcProvider( 'https://matic-mumbai.chainstacklabs.com', ); const json = JSON.parse( fs.readFileSync('build/contracts/SimpleStorage.json').toString(), ); const contract = new ethers.Contract( json.networks['80001'].address, json.abi, signer.connect(provider), ); contract.get().then((val) => console.log(val.toNumber())); // should log 0 contract.set(50).then((receipt) => console.log(receipt)); contract.get().then((val) => console.log(val.toNumber())); // should log 50 ``` # 👋🏼 What is `learn-web3-dapp`? We made this decentralized application (dApp) to help developers learn about Web 3 protocols. It's a Next.js app that uses React, TypeScript and various smart contract languages (mostly Solidity and Rust). We will guide you through using the various blockchain JavaScript SDKs to interact with their networks. Each protocol is slightly different, but we have attempted to standardize the workflow so that you can quickly get up to speed on networks like Solana, NEAR, Polygon and more!
- ✅ Solana - ✅ Polygon - ✅ Avalanche - ✅ NEAR - ✅ Tezos - ✅ Secret - ✅ Polkadot - ✅ Celo - ✅ The Graph - ✅ The Graph for NEAR - ✅ Pyth - 🔜 Ceramic - 🔜 Arweave - 🔜 Chainlink - [Let us know which one you'd like us to cover](https://github.com/figment-networks/learn-web3-dapp/issues) <img width="1024" alt="Screen Shot 1" src="https://raw.githubusercontent.com/figment-networks/learn-web3-dapp/main/markdown/__images__/readme_01.png"> <img width="1024" alt="Screen Shot 2" src="https://raw.githubusercontent.com/figment-networks/learn-web3-dapp/main/markdown/__images__/readme-02.png"> <img width="1024" alt="Screen Shot 3" src="https://raw.githubusercontent.com/figment-networks/learn-web3-dapp/main/markdown/__images__/readme-03.png"> # 🧑‍💻 Get started ## 🤖 Using Gitpod (Recommended) The best way to go through those courses is using [Gitpod](https://gitpod.io). Gitpod provides prebuilt developer environments in your browser, powered by VS Code. Just sign in using GitHub and you'll be up and running in seconds without having to do any manual setup 🔥 [**Open this repo on Gitpod**](https://gitpod.io/#https://github.com/figment-networks/learn-web3-dapp) ## 🐑 Clone locally Make sure you have installed [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git), [Node.js](https://nodejs.org/en/) (Please install **v14.17.0**, we recommend using [nvm](https://github.com/nvm-sh/nvm)) and [yarn](https://yarnpkg.com/getting-started/install). Then clone the repo, install dependencies and start the server by running all these commands: ```text git clone https://github.com/figment-networks/learn-web3-dapp.git cd learn-web3-dapp yarn yarn dev ``` # 🤝 Feedback and contributing If you encounter any errors during this process, please join our [Discord](https://figment.io/devchat) for help. Feel free to also open an Issue or a Pull Request on the repo itself. We hope you enjoy our Web 3 education dApps 🚀 -- ❤️ The Figment Learn Team # Pathway Smart Contract A [smart contract] written in [Rust] for [figment pathway] # Quick Start Before you compile this code, you will need to install Rust with [correct target] # Exploring The Code 1. The main smart contract code lives in `src/lib.rs`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [rust]: https://www.rust-lang.org/ [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html
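For reference, here is a minimal sketch of how a near-sdk-rs contract like this one is typically compiled by hand; the repo's own `compile` script may use different flags, so treat the exact commands as an assumption rather than the project's canonical build step.

```console
# Install the WebAssembly target that NEAR contracts are built for (the "correct target" linked above)
rustup target add wasm32-unknown-unknown

# Build the contract in release mode; the .wasm artifact is emitted under target/wasm32-unknown-unknown/release/
cargo build --target wasm32-unknown-unknown --release
```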
anbork_go-near
.github workflows main.yml README.md package.json public index.html robots.txt src assets fonts Poppins OFL.txt images light.svg helpers api.ts config.ts hooks.ts mappers.ts media.ts near.ts routes.ts walletConnection.ts react-app-env.d.ts tsconfig.json
# Getting Started with Create React App This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app). ## Available Scripts In the project directory, you can run: ### `yarn start` Runs the app in the development mode.\ Open [http://localhost:3000](http://localhost:3000) to view it in the browser. The page will reload if you make edits.\ You will also see any lint errors in the console. ### `yarn test` Launches the test runner in the interactive watch mode.\ See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. ### `yarn build` Builds the app for production to the `build` folder.\ It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.\ Your app is ready to be deployed! See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. ### `yarn eject` **Note: this is a one-way operation. Once you `eject`, you can’t go back!** If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own. You don’t have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it. ## Learn More You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). To learn React, check out the [React documentation](https://reactjs.org/).
klimoza_lab3-klimoza-hex-game
Cargo.toml README.md build.sh src board.rs cell.rs external.rs game.rs game_with_data.rs lib.rs roketo.rs
The Game of Hex =============== Hex is a two player abstract strategy board game in which players attempt to connect opposite sides of a rhombus-shaped board made of hexagonal cells. Hex was invented by mathematician and poet Piet Hein in 1942 and later rediscovered and popularized by John Nash. ## Description This contract implements the game of Hex backed by storage on the NEAR blockchain. The contract in `src/lib.rs` provides methods to create a new game, make an allowed move in one of the existing games, or view information about a game by index. The project is divided into separate files; each file contains one of the structures used to keep information about the game, along with tests for that structure. ## Interacting with the contract Deployed game contract on testnet: `hex-game.klimoza.testnet` #### `create_game(first_player: AccountId, second_player: AccountId, field_size: usize) -> GameIndex` Creates a new game with the given parameters and returns the index of the created game. For example: ```console ➜ near call hex-game.klimoza.testnet create_game '{"first_player": "crossword.klimoza.testnet", "second_player": "klimoza.testnet", "field_size": 2}' --accountId hex-game.klimoza.testnet --amount 2 Scheduling a call: hex-game.klimoza.testnet.create_game({"first_player": "crossword.klimoza.testnet", "second_player": "klimoza.testnet", "field_size": 2}) Doing account.functionCall() Log [hex-game.klimoza.testnet]: Created board: Log [hex-game.klimoza.testnet]: . . Log [hex-game.klimoza.testnet]: . . 4 ``` #### `make_move(index: GameIndex, move_type: MoveType, cell: Option<Cell>) -> Game` Tries to make a move in the game at the given index and returns the Game if the move is correct (panics otherwise). Used structures: ```rust pub type GameIndex = u64; pub enum MoveType { PLACE, SWAP, } pub struct Cell { pub x: usize, pub y: usize, } ``` You can omit the `cell` parameter if `move_type` is `SWAP` (i.e. applying the swap rule on the current move). For example: ```console ➜ near call hex-game.klimoza.testnet make_move '{"index": 4, "move_type": "SWAP"}' --accountId klimoza.testnet Scheduling a call: hex-game.klimoza.testnet.make_move({"index": 4, "move_type": "SWAP"}) Doing account.functionCall() Log [hex-game.klimoza.testnet]: Old board: Log [hex-game.klimoza.testnet]: . R Log [hex-game.klimoza.testnet]: . . Log [hex-game.klimoza.testnet]: New board: Log [hex-game.klimoza.testnet]: . . Log [hex-game.klimoza.testnet]: B . { first_player: 'crossword.klimoza.testnet', second_player: 'klimoza.testnet', turn: 2, board: { size: 2, field: 'IA==' }, current_block_height: 96244955, prev_block_height: 96244934, is_finished: false } ``` #### `get_game(index: GameIndex) -> Option<Game>` Returns the game at the given index (if there is one). For example: ```console ➜ near call hex-game.klimoza.testnet get_game '{"index": 4}' --accountId hex-game.klimoza.testnet Scheduling a call: hex-game.klimoza.testnet.get_game({"index": 4}) Doing account.functionCall() Log [hex-game.klimoza.testnet]: Game board: Log [hex-game.klimoza.testnet]: R B Log [hex-game.klimoza.testnet]: B . { first_player: 'crossword.klimoza.testnet', second_player: 'klimoza.testnet', turn: 4, board: { size: 2, field: 'KQ==' }, current_block_height: 96244985, prev_block_height: 96244971, is_finished: true } ``` #### `check_premium_account(account_id: AccountId) -> bool` Checks for a locked, expirable, active Roketo stream going from `account_id` to `hex_game_account`. Returns a Promise that resolves to `bool`.
For example: ```console ➜ near call wrap.testnet ft_transfer_call '{"receiver_id": "streaming-r-v2.dcversus.testnet", "amount": "2200000000000000000000000", "memo": "Roketo transfer", "msg": "{\"Create\":{\"request\":{\"balance\":\"2000000000000000000000000\", \"owner_id\": \"klimoza.testnet\",\"receiver_id\":\"hex-game.klimoza.testnet\",\"token_name\": \"wrap.testnet\", \"tokens_per_sec\":\"6666666666666666666667\", \"is_locked\": true, \"is_expirable\": true}}}"}' --accountId klimoza.testnet --depositYocto 1 --gas 200000000000000 Doing account.functionCall() Log [wrap.testnet]: Transfer 2200000000000000000000000 from klimoza.testnet to streaming-r-v2.dcversus.testnet Log [wrap.testnet]: Memo: Roketo transfer Log [wrap.testnet]: Transfer 2100000000000000000000000 from streaming-r-v2.dcversus.testnet to finance-r-v2.dcversus.testnet '2200000000000000000000000' ➜ near call hex-game.klimoza.testnet check_premium_account '{"account_id": "klimoza.testnet"}' --accountId klimoza.testnet Scheduling a call: hex-game.klimoza.testnet.check_premium_account({"account_id": "klimoza.testnet"}) true ``` ## Testing At the moment, the project contains 33 tests, each of which lives in the file of the structure it is testing. You can run them all using the following command: ```console cargo test ``` Alternatively, you can specify the test group you want to run, for example: ```console cargo test cell_tests ``` ## Demonstration [![Video](https://img.youtube.com/vi/mwgUEafpeow/0.jpg)](https://youtu.be/mwgUEafpeow)
jennifertrin_VRAccessToken
NEAR VR Scene.txt README.md
# VR Access Token VR Access Token Project for the Open Web Community Hackathon ## Our Project This solution demonstrates how an NFT published on Mintbase.io can be used as a ticket to a browser-based VR experience. Artists Liminallogic and Renderedflesh collaborated to create an immersive artwork inspired by the NEAR protocol. Using a mechanism developed by the MintGate platform, we created a URL of their VR experience that is gated by a NEAR NFT. ## Links to Project Presentation and Proposal: Link to the Proposal: https://gov.near.org/t/ideation-for-vr-dao-hackathon-tickets-to-the-metaverse/1749 Link to the Video Demo: https://www.youtube.com/watch?v=fZEmWSxQcA8 Link to Presentation: https://prezi.com/view/PdNMocaJtrqAlvmaPHW0/ ## How It Was Built ### NFT The VR Access Token NFT was minted on MintBase. First, we deployed a store. Once the store was deployed, we updated the store settings and added Liminallogic and Renderedflesh as minters. Then, we provided the following information: - uploaded the thumbnail image - set the number of NFTs to be minted - filled out the description and added tags - added royalty splits - added revenue splits - uploaded media Then we minted the NFT, which resulted in: https://testnet.mintbase.io/thing/UNgM3-n3wZMgRra12x9FrOOKNzaWQOFXSeUuM09K_gs:testing.mintspace2.testnet ### VR Experience The HTML site was built using A-Frame for WebXR. ### Token Gating Mechanism MintGate is a platform that allows creators and communities to gate web content using blockchain tokens on Ethereum and 60+ EVM compatible blockchains. Their solution uses React on the frontend and a proxy server on the backend. A creator inputs the URL of web content, sets the token parameters required for an end-user to hold to access the link, and the MintGate site provides a new, URL-shortened token gated link. Once a user clicks on the link and connects their wallet, MintGate's blockchain-aware CDN confirms that the user has the amount of NFTs or tokens that the creator specified when setting up the link. As part of the hackathon, they integrated with NEAR testnet. They updated their CDN to accept and read NEAR contract addresses and token IDs. They also incorporated the NEAR testnet wallet and enabled it to check data passed in from the proxy.
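As a rough illustration of the kind of ownership check described above (not MintGate's actual implementation), the store contract from the Mintbase link can be queried directly with the NEAR CLI, assuming it implements the standard NEP-171/NEP-181 view methods; the token ID and account ID below are placeholders.

```console
# Who currently owns a given token on the store contract? (placeholder token ID)
near view testing.mintspace2.testnet nft_token '{"token_id": "0"}'

# Which tokens from this store does a visitor's wallet hold? (placeholder account ID)
near view testing.mintspace2.testnet nft_tokens_for_owner '{"account_id": "visitor.testnet", "from_index": "0", "limit": 10}'
```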
nticket_nticket-dapp-frontend-testnet
css app.1451efa9.css chunk-vendors.c93eadbc.css index.html js app.9985d681.js chunk-vendors.772b4d0c.js
PinkiNice_roketo-encode
README.md contract-api.md craco.config.js package-lock.json package.json public index.html manifest.json robots.txt src index.css near config.ts index.ts react-app-env.d.ts reportWebVitals.ts roketo config.ts helpers.ts index.ts interfaces contracts.ts entities.ts index.ts roketo-api.ts roketo-contract-api.ts roketo.ts setupTests.ts slides index.ts tailwind.config.js tsconfig.json
# Good luck hacking, everyone! Roketo Testnet contract: dev-1635510732093-17387698050424 ROKE-TO testnet token: dev-1635511395820-24328868660221 Discord Roketo Channel: [https://discord.com/invite/CHmAPQmH](https://discord.com/invite/CHmAPQmH) ### Run the project with `npm start` Runs the app in the development mode.\ Open [http://localhost:3000](http://localhost:3000) to view it in the browser. The page will reload if you make edits.\ You will also see any lint errors in the console.
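Assuming the ROKE-TO test token above follows the standard NEP-141/NEP-148 fungible token interface, the addresses can be sanity-checked with the NEAR CLI before wiring them into the app; the account in the balance query is a placeholder.

```console
# Inspect the test token's metadata (name, symbol, decimals)
near view dev-1635511395820-24328868660221 ft_metadata '{}'

# Check a wallet's balance of the test token (placeholder account)
near view dev-1635511395820-24328868660221 ft_balance_of '{"account_id": "your-account.testnet"}'
```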
keypom_fydp
.vscode settings.json Cargo.toml README.md __tests__ claims constants-tweaking.ava.ts create-account-and-claim.ava.ts ft-claim.ava.ts near-claim.ava.ts nft-claim.ava.ts null-claim.ava.ts testing-utils.ava.ts config-tests OLD-config-tests.ava.ts time-configs.ava.ts usage-configs.ava.ts creation create-and-add.ava.ts drop-payment.ava.ts edge-creation.ava.ts failed-creation.ava.ts fc-creation.ava.ts near-creation.ts nft-creation.ts nft-funding.ava.ts deletion ft-multi-use.ava.ts ft-single-use.ava.ts key-deletion.ava.ts nft-deletion.ava.ts fc-drops fc-drops.ava.ts ft-drops ft-drops.ava.ts utils ft-utils.ts internals test-internals.ava.ts nested-fc-fields nested-fc-fields.ava.ts nft-drops nft-drops.ava.ts utils nft-utils.ts nft-keys nft-keys-basics.ava.ts passwords password-tests.ava.ts utils pwUtils.ts poaps poap-tests.ava.ts utils distro.ts nearconUtils.ts profiling profiling.ava.ts simple.json utils nft-utils.ts pwUtils.ts pub-sales pub-sales.ava.ts stage1 test-simple.ava.ts ticketing ticketing-tests.ava.ts utils distro.ts nearconUtils.ts utils ft-utils.ts general.ts types.ts v3-tests.ava.ts withdraw-assets withdraw-ft.ava.ts withdraw-nft.ava.ts assets CODE_OF_CONDUCT.md build.sh contract Cargo.toml src assets ft_asset ft_balances.rs ft_claims.rs ft_refunds.rs internal_ft_core.rs mod.rs function_call fc_claims.rs helpers.rs internal_fc_core.rs mod.rs models.rs mod.rs nft_asset internal_nft_core.rs mod.rs nft_balances.rs nft_claims.rs nft_refunds.rs drop_claiming claim_callbacks.rs claims.rs helpers.rs mod.rs drop_creation add_keys.rs create_drop.rs helpers.rs mod.rs drop_deletion.rs helpers.rs internals constants gas_amounts.rs mod.rs strings.rs events events_core.rs helpers.rs keypom.rs mod.rs nfts.rs mod.rs types.rs lib.rs models config.rs external implementations.rs mod.rs models.rs internal implementations.rs mod.rs models.rs mod.rs standard.rs nft_keys approval.rs enumeration.rs internal.rs metadata.rs mod.rs nft_core.rs royalty.rs owner.rs user_balances.rs views drops.rs funder.rs helpers.rs keys.rs mod.rs General Create Account & Claim Claim Shared Constants Pessimistic Allowance Assets Access Key Method Names NFT Standard Stuff Keypom Standard Asset IDs Owner Only Things Drops NFT Keys Utility deploy ft configurations.js ft-create-sdk.js ft-create.js linkdrops.json function-call configurations.js fc-create-sdk.js fc-create.js nft configurations.js nft-create-sdk-minted.js nft-create-sdk-owned.js nft-create.js simple configurations.js linkdrops.json simple-create-sdk.js simple-create.js utils general.js package.json
<p align="center"> <img src="assets/claimed-linkdrop.png" alt="Logo" style="width: 35%; height: 35%"> <br /> </p> <div align="center"> <h1> Keypom </h1> Limitless possibilities in the palm of your hand. </div> <div align="center"> <br /> [![made by BenKurrek](https://img.shields.io/badge/made%20by-BenKurrek-ff1414.svg?style=flat-square)](https://github.com/BenKurrek) [![made by mattlockyer](https://img.shields.io/badge/made%20by-MattLockyer-ff1414.svg?style=flat-square)](https://github.com/mattlockyer) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Introduction](#introduction) - [Comparable Solutions](#comparable-solutions) - [Our Solution](#our-solution) - [Drop Customization](#shared-drop-customization) - [Primary Market Public Sale for Keys](#primary-market-public-sale-for-keys) - [Simple Drops](#simple-drops) - [NFT Drops](#non-fungible-token-drops) - [FT Drops](#fungible-token-drops) - [Function Call Drops](#function-call-drops) - [How It Works](#how-do-fc-drops-work) - [Security](#security-for-fc-drops) - [User-Provided Args](#user-provided-arguments) - [Use Cases](#fc-drop-use-cases) - [Password Protected Keys](#password-protected-keys) - [dApp Free Trials for Users](#dapp-free-trials-for-users) - [Costs](#costs) - [Per Drop](#per-drop) - [Per Key](#per-key) - [Deleting Keys and Drops](#deleting-keys-and-drops) - [Automatic Refunds](#automatic-refunds-when-keys-are-used) - [Account Balances](#account-balances-for-smooth-ux) - [How Linkdrops Work](#how-linkdrops-work) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Deploy Scripts](#deploy-scripts) - [Query Information From Keypom](#query-information-from-keypom) - [Key Specific](#key-specific) - [Drop Specific](#drop-specific) - [Running Tests](#running-the-keypom-tests) - [Contributing](#contributing) - [Acknowledgements](#acknowledgements) </details> --- # About <tr> <td> > To view our debut talk at NEARCON 2022, click [here](https://www.youtube.com/watch?v=J-BOnfhHV50). Keypom is an access key factory created as a result of 3 common problems that arose in the ecosystem. 1. People want a *cheap, customizable, and unique* onboarding experience for users. 2. Companies don't want to expose **full access keys** in their backend servers. 3. dApps want a *smooth UX* with zero barrier to entry onboarding. The contract was initially created as a way to handle the 1 $NEAR minimum deposit required for creating linkdrops using the [regular linkdrop contract](https://github.com/near/near-linkdrop/blob/f24f2608e1558db773f2408a28849d330abb3881/src/lib.rs#L18). If users wanted to create linkdrops, they needed to attach a **minimum** of 1 $NEAR. This made it costly and unscalable for projects that wanted to mass onboard onto NEAR. Keypom, on the other hand, has been highly optimized to allow for the lowest possible costs. ## Introduction Blockchain technology comes with many benefits such as sovereign ownership, digital rights, privacy, freedom, peer to peer coordination and much more. The problem with this technology, however, is that there is an extremely high barrier to entry for an everyday individual. None of it matters if nobody can onboard. It’s confusing to create and fund a crypto wallet. People are unfamiliar with the process, technical jargon, and the general flow. NEAR’s account model is powerful, but extremely underutilized because it’s complex for developers to take full advantage of. Keypom wraps this up in a single API call. 
With NEAR’s goal of onboarding 1 billion users to Web3, there needs to be a solution to this high barrier to entry for developers building on NEAR and users onboarding to their apps and the NEAR ecosystem. Below is a table outlining the minimum costs to onboard a new user onto NEAR with a named account. | | 1 Account | 1,000 Accounts | 1,000,000 Accounts | |----------------------|-----------------|-----------------|--------------------| | Traditional Linkdrop | ~1 NEAR | ~1,003 NEAR | ~1,002,840 NEAR | | Keypom | ~0.0035 NEAR | ~3.5 NEAR | ~3,500 NEAR | | | ~99.65% Cheaper | ~99.65% Cheaper | ~99.65% Cheaper | Keypom allows anyone to create highly customizable onboarding experiences for their users. These experiences can be for both new and existing users. If someone already has a wallet, they can still use a Keypom link to experience an app, and then transfer the assets later. ## Comparable Solutions | | **Keypom** | **NEAR Drop** | **Satori** | |----------------------------------------------|------------|---------------|------------| | NEAR Drop | ✅ | ✅ | ❌ | | FT Drop | ✅ | ❌ | ❌ | | NFT Drop | ✅ | ❌ | ✅ | | Function Call Drop | ✅ | ❌ | ❌ | | Embeddable in Dapps | ✅ | ❌ | ❌ | | Wallet Selector Integration | ✅ | ❌ | ❌ | | No Fee | ✅ | Maybe? | ❌ | | No Backend / 3rd Party | ✅ | ✅ | ❌ | | Campaigns | ✅ | ✅ | ✅ | | Multi-Step e.g. Tickets click > scan > claim | ✅ | ❌ | ❌ | | Password Protected Drops | ✅ | ❌ | ❌ | | Timed Drops e.g. recurring payments | ✅ | ❌ | ❌ | | Custom Names e.g. user.myapp.near | ✅ | ❌ | ❌ | # Our Solution Keypom allows for the creation of highly customizable access keys. These keys can be thought of as having their own *smart contracts*. Each access key derives from what's known as a *drop*. These drops outline the different functionalities and behaviors the key will have. A drop can be thought of as a bucket that access keys belong to. You can create many different buckets and fill them each with their own keys. Each key will act in accordance with the drop, or bucket, it belongs to. A drop can be one of four different types: 1. Simple drop. 2. Non Fungible Token drop. 3. Fungible Token drop. 4. Function Call drop. # Shared Drop Customization While each *type* of drop has its own set of customizable features, there are some that are shared by **all drops**. These are outlined below. ```rust /// Each time a key is used, how much $NEAR should be sent to the claiming account (can be 0). pub deposit_per_use: u128, /// How much Gas should be attached when the key is used. The default is 100 TGas as this is /// what's used by the NEAR wallet. pub required_gas: Gas, /// The drop as a whole can have a config as well pub config: Option<DropConfig>, /// Metadata for the drop in the form of stringified JSON. The format is completely up to the /// user and there are no standards for format. pub metadata: LazyOption<DropMetadata>, ``` Within the config, there is a suite of features that can be customized as well: ```rust /// How many uses can each key have before it's deleted. If None, default to 1. pub uses_per_key: Option<u64>, /// Override the global root account that sub-accounts will have (near or testnet). This allows /// users to create specific drops that can create sub-accounts of a predefined root.
/// For example, Fayyr could specify a root of `fayyr.near`, by which all sub-accounts will then /// be `ACCOUNT.fayyr.near`. pub root_account_id: Option<AccountId>, /// Any time based configurations pub time: Option<TimeConfig>, /// Public sale config options pub sale: Option<PublicSaleConfig>, /// Any usage specific configurations pub usage: Option<UsageConfig>, ``` ## Time Based Customizations Keypom allows users to customize time-based configurations as outlined below. ```rust pub struct TimeConfig { /// Minimum block timestamp before keys can be used. If None, keys can be used immediately /// Measured in number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC. pub start: Option<u64>, /// Block timestamp that keys must be before. If None, keys can be used indefinitely /// Measured in number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC. pub end: Option<u64>, /// Time interval between each key use. If None, there is no delay between key uses. /// Measured in number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC. pub throttle: Option<u64>, /// Interval of time after the `start_timestamp` that must pass before a key can be used. /// If multiple intervals pass, the key can be used multiple times. This has nothing to do /// with the throttle timestamp. It only pertains to the start timestamp and the current /// timestamp. The last_used timestamp is not taken into account. /// Measured in number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC. pub interval: Option<u64>, } ``` ## Usage Based Customizations In addition to time-based configurations, the funder can customize behaviors pertaining to key usages. ```rust pub struct UsageConfig { /// Can the access key only call the claim method_name? Default to both method_name callable pub permissions: Option<ClaimPermissions>, /// If claim is called, refund the deposit to the owner's balance. If None, default to false. pub refund_deposit: Option<bool>, /// Should the drop be automatically deleted when all the keys are used? This is defaulted to false and /// must be overwritten pub auto_delete_drop: Option<bool>, /// When this drop is deleted and it is the owner's *last* drop, automatically withdraw their balance. pub auto_withdraw: Option<bool>, /// When calling `create_account` on the root account, which keypom args should be attached to the payload. pub account_creation_fields: Option<KeypomArgs>, } ``` ## Primary Market Public Sale for Keys The last type of customization available to the funder is the ability to create a public sale for access keys in a drop. The funder can create a drop and let people add keys to it on an as-needed basis. The sale configurations are outlined below. ```rust pub struct PublicSaleConfig { /// Maximum number of keys that can be added to this drop. If None, there is no max. pub max_num_keys: Option<u64>, /// Amount of $NEAR that the user needs to attach (if they are not the funder) on top of costs. This amount will be /// automatically sent to the funder's balance. If None, the keys are free to the public. pub price_per_key: Option<u128>, /// Which accounts are allowed to add keys? pub allowlist: Option<LookupSet<AccountId>>, /// Which accounts are NOT allowed to add keys? pub blocklist: Option<LookupSet<AccountId>>, /// Should the revenue generated be sent to the funder's account balance or /// automatically withdrawn and sent to their NEAR wallet? pub auto_withdraw_funds: Option<bool>, /// Minimum block timestamp before the public sale starts.
If None, keys can be added immediately /// Measured in number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC. pub start: Option<u64>, /// Block timestamp dictating the end of the public sale. If None, keys can be added indefinitely /// Measured in number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC. pub end: Option<u64>, } ``` ### Use-Cases for Public Sales Giving the funder the ability to sell access keys to a drop introduces a ton of awesome use-cases and has a slew of benefits: - Everything is decentralized and on-chain. There is no need to trust a third party to hold the keys. - Keys are created on an as-needed basis. This *drastically reduces up-front costs* for the drop funder. - Since keys are created only when users want them, there is no chance that through distribution, the private key gets compromised. The key is created *when the user purchases it*. - Everything *can* remain anonymous and private since people can purchase access keys with their crypto wallets. Having a public sale allows for an on-chain distribution mechanism for access keys. Let's look at a few examples where this can be used. #### Example 1: Ticketing Imagine there is an event organizer that wants to host an event with a guest-list of 100,000 people. Without doing a public sale, the organizer would need to spend a lot of $NEAR up-front to create all 100 thousand access keys. At this point, they would need to find a way to distribute all the keys. With a public sale, the organizer can set a price per key, an allowlist, a blocklist, and even a start date for when the sale goes live. At this point, the keys would be lazily purchased by people coming to the event. This not only reduces the up-front cost for the funder but it can also provide more accurate data on how many people are actually coming to the event. #### Example 2: Selling Function Calls Access keys can be used for much more than just POAPs, onboarding or tickets. When using FC Drops, the keys can execute functions on external contracts. This feature can be used in conjunction with the public sale to create a marketplace for gated function calls. Imagine a simple guest-book smart contract that only allowed people to sign the book if they had a valid Keypom access key. Whoever signed the guest-book had access to all VIP events at NEARCon. You could lock access to signing the guest-book behind a Keypom drop and set up a public sale. #### Example 3: NFT Collections A very common scenario is an artist launching a new NFT collection. The artist can set up a custom marketplace whereby the keys are lazily minted and sold to the public. They can then create a custom website that takes a Keypom link and brings the user through a unique, creative experience before the NFT is minted and a wallet is optionally created. People that purchase the links can either use them to send the NFT to their existing wallet or create an entirely new wallet. ## Simple Drops The most basic type of drop is the simple kind. Any keys that are part of a simple drop can only be used for 1 thing: **transferring $NEAR**. Once the key is claimed, the claiming account will receive the $NEAR specified in the `deposit_per_use`. Simple drops are a great way to send $NEAR to claiming accounts while not storing a lot of information on the contract. Below are a couple use cases. ### Backend Servers Let's say you have a backend server that should send 10 $NEAR to the first 3 people that redeem an NFT.
Rather than exposing your full access key in the backend server, you could create a simple drop that either has 3 keys or 1 key that is claimable 3 times. In the drop, you'd specify that each time the key is claimed, the specified account would receive 10 $NEAR. ### Recurring Payments Recurring payments are quite a common situation. If you need to send someone 10 $NEAR once a month for 6 months, you could create a simple drop that has a time config with an `interval` of 1 month. In addition, you can set the time based config to have a `start` of next week. Every time the key is used, 10 $NEAR is sent to the account. If the contractor missed a month's payment, they can claim the key late but can never use the key more than what is intended. <p align="center"> <img src="assets/flowcharts/recurring_payments.png" style="width: 65%; height: 65%" alt="Logo"> </p> ### Quick Onboarding If you need to quickly onboard users onto NEAR, you could create a simple drop with a small amount of $NEAR (enough to create a wallet) and set the usage's permissions to be `create_account_and_claim`. This means that the key can only be used to create accounts. You can then add keys as you wish to the drop and give them out to users so they can create accounts and be onboarded onto NEAR. ### Lazy Registering Keys A unique use-case for simple drops is the ability to lazy register key uses. This allows the funder to batch create many keys at a time while only paying for basic fees such as the storage used and the key's allowance. The funder would **not** need to pay for the `deposit_per_use` of each key up front. They can instead register individual key uses as they are needed. With this scenario, if an organization wanted to onboard users with a linkdrop valued at 10 $NEAR, they could create 1000 keys without needing to pay 1000 * 10 = 10,000 $NEAR up-front. They could then register keys on an as-needed basis. If they need to register 25 keys at a time, they can do this by simply calling the `register_uses` function. ## Non-Fungible Token Drops Non-Fungible Token drops are a special type that allows users to "preload" the drop with NFTs. These tokens will then be *automatically* sent to the **claiming user**. The claiming flow is fairly similar to simple drops in that users can either create an account or claim to an existing one. NFT drops are essentially a wrapper around simple drops. All the functionalities that simple drops have are carried over but now, users can receive an NFT as well as $NEAR. This introduces some customization and uniqueness to the use-cases. ### How does it work? Every drop has a field known as `registered_uses`. This tells the contract how many uses the drop has across all its keys. For basic simple drops that are *not* lazy registering keys, this field doesn't matter since all the uses are paid for up-front when the drop is created or when keys are added. With NFT drops, however, there is a 2-step process: - Firstly, the drop is created and all the $NEAR required is pre-paid for. This is the same as simple drops, however, the `registered_uses` are set to 0. - Once the drop is created, the owner must send the contract the NFTs in order for keys to be usable. This process is done through the `nft_transfer_call` workflow baked into the NFT standards. It's up to the owner to facilitate this process. Whenever the contract receives tokens, it will push the ID to a vector. These IDs are **popped** off whenever a key is used.
A user will receive the most recent token sent to the contract as the vector is acting like a *stack*. ### NFT Config Along with the default global configurations for drops, if you'd like to create an NFT drop, you must specify the following pieces of information when the drop is created. ```rust pub struct NFTDataConfig { /// Which account ID will be sending the NFTs to the contract. If this is not specified, anyone can send NFTs for the specific drop. pub sender_id: Option<AccountId>, /// Which contract will the NFTs live on pub contract_id: AccountId, } ``` By specifying this information, the drop is locked into only accepting NFTs from the specific contract and optionally from a specified sender account. ### Use Cases NFT drops work really well for when you want to send a *pre-existing* NFT to a user along with some $NEAR. Since NFT drops are a light wrapper around simple drops, most of the use-cases are the same although people can now get NFTs as well. This means you can onboard a user with some $NEAR **and** they *get an NFT* too. ## Fungible Token Drops A Fungible Token drop is also a light wrapper around the simple drop. It works very similarly to how its NFT counterpart does. First, you'll need to create the drop and then you can fund it with assets and register key uses. You can preload a drop with as many FTs as you'd like even if you don't have the keys yet. This will spike the `registered_uses` and then you can create keys and slowly eat away from this "total supply" over time. If the drop runs out, you can send it more FTs to top up. All the keys in the FT drop will share from this supply and every time a key is used, the `registered_uses` will decrement and the "total supply" will get smaller. ### How does it work? As mentioned in the NFT section, every drop has a field known as `registered_uses`. This tells the contract how many uses the drop has across all its keys. For basic simple drops that are *not* lazy registering keys, this field doesn't matter since all the uses are paid for up-front when the drop is created or when keys are added. With FT drops, however, there is a 2-step process: - Firstly, the drop is created and all the $NEAR required is pre-paid for. This is the same as simple drops, however, the `registered_uses` are set to 0. - Once the drop is created, the owner must send the contract the FTs in order for keys to be usable. This process is done through the `ft_transfer_call` workflow baked into the FT standards. It's up to the owner to facilitate this process. ### FT Config Along with the default global configurations for drops, if you'd like to create an FT drop, you must specify the following pieces of information when the drop is created. ```rust pub struct FTDataConfig { /// The contract that the FTs live on. pub contract_id: AccountId, /// The account ID that will be sending the FTs to the contract. If this is not specified, anyone can send FTs for the specific drop. pub sender_id: Option<AccountId>, /// How many FTs should the contract send *each time* a key is used. pub balance_per_use: U128, } ``` By specifying this information, the drop is locked into only accepting FTs from the specific contract and optionally from a specified sender account. While you can send as many FTs as you'd like and can over-pay, you *must* send at **least** enough FTs in one call to cover 1 use.
As an example, if a drop is created such that 10 FTs will be sent when a key is used, you must send **at least 10** and cannot break it up into separate calls where you send 5 one time and 5 another. ### Use Cases FT drops have some awesome flexibility due to the fact that they support all the functionalities of the Simple drops, just with more use-cases and possibilities. Let's look at some use cases to see how fungible token drops can be used. #### Recurring Payments Recurring payments are quite a common situation. Let's say you need to send someone $50 USDC every week. You could create a key with 5 uses that has a time config `interval` of 1 week. You would then pre-load maybe the first week's deposit of $50 USDC and register 1 use or you could send $500 USDC for the first 10 weeks. At that point, you would simply hand over the key to the user and they can claim once a week. #### Backend Servers Taking the recurring payments problem to another level, imagine that instead of leaving the claims up to the contractor, you wanted to automatically pay them through a backend server. They would give you their NEAR account and you would send them FTs. The problem is that you don't want to expose your full access key in the server. By creating an FT drop, you can store **only the function call access key** created by Keypom in the server. Your backend would then use the key to call the `claim` function and pass in the user's account ID to send them the FTs. #### Creating a Wallet with FTs Another awesome use-case is to allow users to be onboarded onto NEAR and **also** receive FTs. As an example, you could do a promotion where you're giving away $10 USDC to the first 100 users that sign up to your mailing list. You can also give away QR codes at events that contain a new fungible token that you're launching. You can simply create an FT drop and pre-load it with the FT of your choice. In addition, you can give it 0.02 $NEAR for new wallets that are created. You can pair this with setting the usage config's `refund_deposit` flag to true which would make it so that if anyone claims the fungible tokens and they *already have a wallet*, it will automatically refund you the 0.02 $NEAR. That money should only be used for the creation of new wallets. Since your focus is on the fungible tokens, you don't want to **force users** to create a new wallet (by specifying the usage permissions to be `create_account_and_claim`) if they already have one; instead, you want to be refunded in case they do. ## Function Call Drops Function call drops are by far the most powerful feature that Keypom provides. FC drops allow **any** method on **any** contract to be executed (with some exceptions). In addition, there are a huge variety of customizations and features you can choose from when defining the drop that come on top of the global options. The possibilities are almost endless. State of the art NFT ticketing, lazy minting NFTs, auto registration into DAOs, analytics for marketing at events and much more. ### How do FC Drops work? Unlike NFT and FT drops, the function calls must have everything paid for **upfront**. There is no two-step process so the creation is similar to Simple drops. Once the drop is created and keys are added, you can immediately start using it. #### Function Call Config When creating the drop, you have quite a lot of customization available. At the top level, there is an FC drop global config similar to how the *general* config works.
```rust pub struct FCConfig { /// How much GAS should be attached to the function call if it's a regular claim. /// If this is used, you *cannot* go through conventional linkdrop apps such as mynearwallet /// since those *always* attach 100 TGas no matter what. In addition, you will only be able to /// call `claim` if this is specified. You cannot have an `attached_gas` parameter and also /// call `create_account_and_claim`. pub attached_gas: Option<Gas>, } ``` #### Method Data In addition to the global config, the user can specify a set of what's known as `MethodData`. This represents the information for the function being called. Within this data, there are also a few optional configurations you can use to extend your use cases. You'll see how powerful these can be in the use cases [section](#use-cases). ```rust pub struct MethodData { /// Contract that will be called pub receiver_id: AccountId, /// Method to call on receiver_id contract pub method_name: String, /// Arguments to pass in (stringified JSON) pub args: String, /// Amount of yoctoNEAR to attach along with the call pub attached_deposit: U128, /// Specifies what field the claiming account ID should go in when calling the function /// If None, this isn't attached to the args pub account_id_field: Option<String>, /// Specifies what field the drop ID should go in when calling the function. To insert into nested objects, use periods to separate. For example, to insert into args.metadata.field, you would specify "metadata.field" /// If Some(String), attach drop ID to args. Else, don't attach. pub drop_id_field: Option<String>, /// Specifies what field the key ID should go in when calling the function. To insert into nested objects, use periods to separate. For example, to insert into args.metadata.field, you would specify "metadata.field" /// If Some(String), attach key ID to args. Else, don't attach. pub key_id_field: Option<String>, // Specifies what field the funder id should go in when calling the function. To insert into nested objects, use periods to separate. For example, to insert into args.metadata.field, you would specify "metadata.field" // If Some(string), attach the funder ID to the args. Else, don't attach. pub funder_id_field: Option<String>, // What permissions does the user have when providing custom arguments to the function call? // By default, the user cannot provide any custom arguments pub user_args_rule: Option<UserArgsRule>, } ``` The `MethodData` keeps track of the method being called, receiver, arguments, and attached deposit. In addition, there are some optional fields that can be used to extend the use cases. If you have a contract that requires some more context from Keypom such as the funder ID, drop ID, key ID, and account ID that used the key, these can all be specified. We've kept it generic such that you can specify the actual argument name that these will be passed in as. For example, if you had a contract that would lazy mint an NFT and it required the account to be passed in as `receiver_id`, you could specify an `account_id_field` set to `receiver_id` such that Keypom will automatically pass in the account ID that used the key under the field `receiver_id`. Similarly, inserting fields into nested arguments is quite trivial.
Let's say you wanted to insert the account ID that claimed the drop into the `receiver_id` under metadata for the following args: ```json args: { "token_id": "foobar", "metadata": { "receiver_id": INSERT_HERE } } ``` You could specify the `account_id_field` as `metadata.receiver_id` and Keypom will automatically create the `receiver_id` field and insert it into `metadata`. This would work whether or not `metadata` was already present in the args. > **NOTE:** The location for inserting the arguments *cannot* collide with another entry. In the above example, `token_id.receiver_id` could *NOT* be specified since `token_id` is mapped to `foobar` already. This logic extends to the drop ID and key ID as well. #### Key Uses For **every key use**, you can specify a *vector* of `MethodData` which allows you to execute multiple function calls each time a key is used. These calls are scheduled 1 by 1 using a simple for loop. This means that most of the time, the function calls will be executed in the order specified in the vector but it is not *guaranteed*. It's important to note that the Gas available is split evenly between *all* the function calls and if there are too many, you might run into issues with not having enough Gas. You're responsible for ensuring that this doesn't happen. The vector of `MethodData` is *optional* for each key use. If a key use has `null` rather than `Some(Vector<MethodData>)`, it will decrement the uses and work as normal such that the `timestamp`, `start`, etc. are enforced. The only difference is that after the key uses are decremented and these checks are performed, the execution **finishes early**. The null case does **not** create an account or send *any* funds. It doesn't invoke any function calls and simply *returns once the checks are done*. This makes the null case act as a "burner" where you disregard any logic. This has many uses which will be explored in the use cases [section](#use-cases). If a key has more than 1 use, you can specify a *different vector* of `MethodData` for **each use**. As an example, you could specify that the first use will result in a null case and the second use will result in a lazy minting function being called. If you have multiple uses but want them all to do the same thing, you don't have to repeat the same data. Passing in only 1 vector of `MethodData` will result in **all the uses** inheriting that data. ### Security for FC Drops Since all FC drops will be signed by the Keypom contract, there are a few restrictions in place to avoid malicious behaviors. To prevent users from stealing registered assets from other drops, the following methods cannot be called via FC Drops: ```rust /// Which methods are prohibited from being called by an FC drop const DEFAULT_PROHIBITED_FC_METHODS: [&str; 6] = [ "nft_transfer", "nft_transfer_call", "nft_approve", "nft_transfer_payout", "ft_transfer", "ft_transfer_call", ]; ``` In addition, the Keypom contract cannot be the receiver of any function call. This is to prevent people from calling private methods through FC Drops. #### Keypom Arguments When a key is used and a function is called, there is a data structure that is **automatically** attached to the arguments. This is known as the `keypom_args`. It contains the information that the drop creator specified in the `MethodData`.
```rust
pub struct KeypomArgs {
    pub account_id_field: Option<String>,
    pub drop_id_field: Option<String>,
    pub key_id_field: Option<String>,
    pub funder_id_field: Option<String>
}
```

##### Motivation

Let's say there was an exclusive NFT contract that allowed the Keypom contract to mint NFTs as part of an FC drop. Only Keypom was given access to mint the NFTs so they could be given out as linkdrops. The organizer only wanted links that were part of their drop to be valid. For this reason, the NFT contract would only mint if Keypom called the `nft_mint` function and there was a field `series` passed in and it was equal to the drop ID created by the organizer.

Let's say the owner created an exclusive drop that happened to have a drop ID of 5. They could then go to the NFT contract and restrict NFTs to only be minted if:
- `series` had a value of 5.
- The Keypom contract was the one calling the function.

In order for this to work, when creating the drop, the owner would need to specify that the `drop_id_field` was set to a value of `series` such that the drop ID is correctly passed into the function.

The problem with this approach is that the NFT contract has no way of knowing which arguments were sent by the **user** when the drop was created as part of the `MethodData` `args` and which arguments are automatically populated by the Keypom contract. There is nothing stopping a malicious user from creating a new drop that has an ID of 6 but hardcoding in the actual arguments that `series` should have a value of 5. In this case, the malicious drop would have *no* `drop_id_field` and the NFT contract would have no way of knowing that the `series` value is malicious.

This can be prevented if a new field is introduced representing what was automatically injected by the Keypom contract itself. At the end of the day, Keypom will **always** send correct information to the receiving contracts. If those contracts have a way to know what has been sent by Keypom and what has been manually set by users, the problem is solved.

In the above scenario, the NFT contract would simply add an assertion that the `keypom_args` had the `drop_id_field` set to `Some(series)`, meaning that the incoming `series` field was set by Keypom and not by a malicious user.

### User Provided Arguments

In the `MethodData`, there is an optional field that determines whether or not users can provide their own arguments when claiming a linkdrop and what that behaviour will look like. This is known as the `user_args_rule` and can be one of the following:

```rs
/// When a user provides arguments for FC drops in `claim` or `create_account_and_claim`, what behaviour is expected?
/// For `AllUser`, any arguments provided by the user will completely overwrite any previous args provided by the drop creator.
/// For `FunderPreferred`, any arguments provided by the user will be concatenated with the arguments provided by the drop creator. If there are any duplicate args, the drop funder's arguments will be used.
/// For `UserPreferred`, any arguments provided by the user will be concatenated with the arguments provided by the drop creator, but if there are any duplicate keys, the user's arguments will overwrite the drop funder's.
pub enum UserArgsRule {
    AllUser,
    FunderPreferred,
    UserPreferred
}
```

By default, if `user_args_rule` is `None` / not provided, any user provided arguments will be completely disregarded. It would act as if the user provided *no args* in the first place.
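Tying the Motivation above together: a receiving contract can protect itself by checking the injected `keypom_args` before trusting incoming fields. Below is a minimal sketch of such a guard in plain JavaScript; the Keypom account ID and field names are assumptions made for illustration, not part of this contract's API:

```js
// Sketch of a guard a receiving NFT contract could run inside its mint method.
// `predecessorId` is the account that called the method and `args` are the call arguments;
// `args.keypom_args` is the structure Keypom attaches automatically.
function assertMintedThroughKeypom(predecessorId, args) {
    // Only the Keypom contract may mint through this path (account ID is an assumption).
    if (predecessorId !== "v2.keypom.near") {
        throw new Error("Only Keypom may mint through this path");
    }
    // `series` must have been injected by Keypom via `drop_id_field`,
    // not hardcoded by whoever created the drop.
    if (!args.keypom_args || args.keypom_args.drop_id_field !== "series") {
        throw new Error("`series` was not set by Keypom");
    }
    // From here on, args.series can be trusted as the real drop ID.
}
```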
These user arguments must be passed in via the `fc_args` field in `claim` and `create_account_and_claim`. This field is of type `Option<Vec<Option<String>>>`, indicating that it's optional to provide the args and that, for each claim, a set of args can be provided. If, for a specific method, args shouldn't be passed in, the vector can have `None` as the value. The order of the args must match the order of the methods that will be executed.

> **NOTE:** If a user provides `fc_args`, the length of the vector *MUST* match the number of methods being executed during the claim.

#### All User

If `user_args_rule` is set to `AllUser`, any arguments provided by the user will completely *overwrite* any previous args provided by the drop creator. If no args are passed in by the user, the drop creator's original args will be used. As an example, if the method data was:

```js
args: JSON.stringify({
    "foo": "bar",
    "baz": {
        "foo": "bar"
    }
})
```

And the user provided the following args:

```js
fc_args: JSON.stringify({
    "new_field": "new_value"
})
```

Keypom would completely overwrite the funder's previous args and use the user's `fc_args` instead.

#### Funder Preferred

If `user_args_rule` is set to `FunderPreferred`, any arguments provided by the user will be concatenated with the arguments provided by the drop creator. If there are any duplicate args, the drop funder's arguments will be prioritized / used. As an example, if the funder args were:

```js
args: JSON.stringify({
    "funder_field": "funder_value",
    "object": {
        "funder_field": "funder_value"
    }
})
```

And the user provided the following args:

```js
fc_args: JSON.stringify({
    "funder_field": "user_value",
    "object": {
        "funder_field": "user_value",
        "user_field": "user_value"
    }
})
```

Keypom would take the user args and merge them together with the funder's but prioritize any fields that are funder specified. The resulting output would be:

```js
args: JSON.stringify({
    "funder_field": "funder_value",
    "object": {
        "funder_field": "funder_value",
        "user_field": "user_value"
    }
})
```

#### User Preferred

If `user_args_rule` is set to `UserPreferred`, any arguments provided by the user will be concatenated with the arguments provided by the drop creator, but if there are any duplicate keys, the *user's arguments* will overwrite the drop funder's. As an example, if the funder args were:

```js
args: JSON.stringify({
    "funder_field": "funder_value",
    "object": {
        "funder_field": "funder_value"
    }
})
```

And the user provided the following args:

```js
fc_args: JSON.stringify({
    "object": {
        "funder_field": "user_value",
        "user_field": "user_value"
    }
})
```

Keypom would take the user args and merge them together with the funder's but prioritize any fields that are *user specified*. The resulting output would be:

```js
args: JSON.stringify({
    "funder_field": "funder_value",
    "object": {
        "funder_field": "user_value",
        "user_field": "user_value"
    }
})
```

### FC Drop Use Cases

Function call drops are the bread and butter of the Keypom contract. They are the most powerful and complex drops that can currently be created. With this complexity, there are an almost infinite number of use-cases that arise.

#### Proof of Attendance Protocols

A very common use case in the space is what's known as Proof of Attendance. Often times when people go to events, they want a way to prove that they were there. Some traditional approaches would be to submit your wallet address and you would be sent an NFT or some other form of proof at a later date.
The problem with this is that it has a very high barrier to entry. Not everyone has a wallet. With Keypom, you can create a function call drop that allows people to onboard onto NEAR if they don't have a wallet or, if they do, they can simply use that. As part of the onboarding / claiming process, they would receive some sort of proof of attendance such as an NFT. This can be lazy minted on-demand such that storage isn't paid up-front for all the tokens.

At this point, the event organizers or the funder can distribute links to people that attend the event in-person. These links would then be claimed by users and they would receive the proof of attendance.

#### Auto Registration into DAOs

DAOs are a raging topic in crypto. The problem with DAOs, however, is that there is a barrier to entry for users that aren't familiar with the specific chain they're built on top of. Users might not have wallets or understand how to interact with contracts. On the contrary, they might be very well versed or immersed in the DAO's topics. They shouldn't be required to create a wallet and learn the onboarding process.

With Keypom, you can create a function call drop with the main purpose of registering users into a DAO. For people that have a wallet, this will act as an easy way of registering them with the click of a link. For users that don't have a wallet and are unfamiliar with NEAR, they can be onboarded and registered into the DAO with the same click of a link.

#### Multisig Contracts

Another amazing use-case for Keypom is allowing multisig contracts to have ZERO barrier to entry. Often times when using a multisig contract, you will entrust a key to a trusted party. This party might have no idea what NEAR is or how to interact with your contract. With Keypom, you can create a drop that will allow them to sign their transaction with a click of a link.

No NEAR wallet is needed and no knowledge of the chain is required. At the end of the day, from the user's perspective, they are given a link and when they click it, their portion of the multisig transaction is signed. The action is only performed on the multisig contract once all links have been clicked. This is an extremely powerful way of accomplishing multisig transactions with zero barrier to entry.

The users don't even need to create a new account. They can simply call `claim` when the link is clicked, which will fire the cross-contract call to the multisig contract and pass in the keypom arguments that will be cross-checked by that contract.

#### NFT Ticketing

The problem with current NFT ticketing systems is that they require users to have a wallet. This is a huge barrier to entry for people that are attending events but don't have wallets. In addition, there is often no proof of attendance for the event as the NFT is burned in order to get into the event, which requires an internet connection.

Keypom aims to solve these problems with a ticketing system that has the following features.
- No wallet is needed to enter the event or receive a POAP.
- No wifi is needed at the door.
- An NFT is minted on-demand for each user that attends the event.
- Users can optionally onboard onto NEAR if they don't have a wallet.

In addition, the system should provide analytics to event organizers containing information such as links that were:
- Given out but not clicked at all.
- Clicked but not attended.
- Partially claimed, indicating the number of people that attended but did not onboard or receive a POAP.
- Fully claimed, indicating the number of people that attended and received a POAP.

In order to accomplish this, you can create a drop that has 3 uses per key. These uses would be:
1. Array(`null`)
2. Array(`null`)
3. Array(function call to POAP contract to lazy mint an NFT)

The event organizer would create the links and distribute them to people however they see fit. When a user receives the link, the first claim is automatically fired. This is a `null` case so nothing happens except for the fact that the key uses are decremented. At this point, the organizer knows that the user has clicked the link since the uses have been decremented.

The next claim happens **only** when the user is at the door. Keypom would expose a QR code that can only be scanned by the bouncer's phone. This QR code would appear once the first link is clicked and contains the private key for the link. At the event, they wouldn't need any wifi to get in as they only need to show the bouncer the QR code. Once the bouncer scans it, the site would ensure that they have exactly 2 out of the 3 uses left. If they don't, they're not let in. At that point, a use is decremented from the key and the next time they visit the ticket page (when they have internet), they would be able to claim the final use and be onboarded / receive a POAP.

<p align="center">
  <img src="assets/flowcharts/ticketing.png" style="width: 65%; height: 65%" alt="Logo">
</p>

## Password Protected Keys

Password protecting key uses is an extremely powerful feature that can unlock many use-cases. Keypom has baked flexibility and customization into the contract such that almost all use-cases involving password protection can be accomplished. Whenever a key is added to a drop, it can have a unique password for each individual use, or it can have one password for all uses in general.

### How Does It Work?

The Keypom implementation has been carefully designed so that users can't look at the NEAR Explorer to view what was passed into the contract, either when the drop was created or when a key was used, and copy those passwords. We also want passwords to be unique across keys so that if you know the password for 1 key, it doesn't work on a different key.

In order to accomplish this, we use the concept of hashing. Imagine you have a drop with 2 keys and you want to password protect each key. Rather than forcing the drop funder to input a unique password for each key and having them remember each one, we can have them input a single **base password** and derive unique passwords from it that are paired with the key's public key.

This is the most scalable option as it allows the drop funder to only need to remember 1 password and they can derive all the other ones using the hashing algorithm and public key.

In the above scenario, let's say the funder inputs the base password as `mypassword1`. If a user wanted to claim the first key, they would need to input into the contract:

`hash("mypassword1" + key1_public_key)`

The funder would need to give the user this hash somehow (such as embedding it into the link or having an app that can derive it). It's important to note that the funder should probably **NOT** give them the base password, otherwise the user could derive the passwords for all other keys (assuming those keys have the same base password).

### What is Stored On-Chain?

How does Keypom verify that the user passed in the correct password?
If the funder were to simply pass in `hash("mypassword1" + key1_public_key)` into the contract as an argument when the key is created, users could just look at the NEAR Explorer and copy that value.

Instead, the funder needs to pass in a double hash when the key is created: `hash(hash("mypassword1" + key1_public_key))`.

This is the value that is stored on-chain and when the user tries to claim the key, they would pass in just the single hash: `hash("mypassword1" + key1_public_key)`.

The contract would then compute `hash(hash("mypassword1" + key1_public_key))` and compare it to the value stored on-chain. If they match, the key is claimed.

Using this method, the base password is not exposed to the user, nobody can look on-chain or at the NEAR Explorer and derive the password, and the password is unique across multiple keys.

## Passwords Per Key Use

Unlike the password per key, which is the same for all uses of a key, the drop creator can specify a password for each individual key use. This password follows the same pattern as the passwords per key in that the funder inputs a `hash(hash(SOMETHING))` and then the user would input `hash(SOMETHING)`, and the contract would hash this and compare it to the value stored on-chain.

The difference is that each individual key use can have a different value stored on-chain such that the user can be forced to input a different hash each time. This `SOMETHING` that is hashed can be similar to the global password per key example, but this time the desired key use is added:

`hash("mypassword1" + key1_public_key + use_number)`

In order to pass in the passwords per use, a new data structure is introduced so you only need to pass in passwords for the uses that have them. This is known as the `JsonPasswordForUse` and is as follows:

```rust
pub struct JsonPasswordForUse {
    /// What is the password for this use (such as `hash("mypassword1" + key1_public_key + use_number)`)
    pub pw: String,
    /// Which use does this pertain to
    pub key_use: u64
}
```

## Adding Your First Password

Whenever keys are added to Keypom, if there are passwords involved, they must be passed in using the following format.

```rust
passwords_per_use: Option<Vec<Option<Vec<JsonPasswordForUse>>>>,
passwords_per_key: Option<Vec<Option<String>>>,
```

Each key that is being added either has a password or doesn't. This is done through the `Vec<Option<...>>`. This vector **MUST** be the same length as the number of keys created. This doesn't mean that every key needs a password, but the vector must be the same length as the keys.

As an example, if you wanted to add 3 keys to a drop and wanted only the first and last key to have a password_per_key, you would pass in:

```rust
passwords_per_key: Some(vec![Some(hash(hash(STUFF))), None, Some(hash(hash(STUFF2)))])
```

## Complex Example

To help solidify the concept of password protected keys, let's go through a complex example. Imagine Alice created a drop with a `uses_per_key` of 3. She wants to create 4 keys:
- Key A: No password protection.
- Key B: Password for uses 1 and 2.
- Key C: Password for use 1 only.
- Key D: Password that doesn't depend on the use.

In this case, for Keys B and C, they will have the same base password but Alice wants to switch things up and have a different base password for Key D.
When these keys are added on-chain, the `passwords_per_key` will be passed in as such: ```rust passwords_per_key: Some(vec![ None, // Key A None, // Key B None, // Key C // Key D Some( hash(hash("key_d_base_password" + key_d_public_key)) ), ]), ``` The passwords for Key B and Key C will be passed in as such: ```rust passwords_per_use: Some(vec![ None, // Key A // Key B vec![ { pw: hash(hash("keys_bc_base_password" + key_b_public_key + "1")), key_use: 1 }, { pw: hash(hash("keys_bc_base_password" + key_b_public_key + "2")), key_use: 2 } ] // Key C vec![ { pw: hash(hash("keys_bc_base_password" + key_c_public_key + "1")), key_use: 1 } ] None // Key D ]), ``` The drop funder would then give the keys out to people: ### Key A Alice gives Bob Key A and he would be able to claim it 3 times with no password required. ### Key D Alice gives Charlie Key D and he would be able to claim it 3 times with the hashed global key password: `hash("key_d_base_password" + key_d_public_key)`. When Charlie uses the key, he would input the password `hash("key_d_base_password" + key_d_public_key)` and the contract would hash that and check to see if it matches what is stored on-chain (which it does). If anyone tried to look at what Charlie passes in through the explorer, it wouldn't work since his hash contains the public key for key D and as such it is only valid for Key D. Similarly, if Charlie tried to look at the explorer when Alice created the keys and attempted to pass in `hash(hash("key_d_base_password" + key_d_public_key))`, the contract would attempt to hash this and it would NOT match up with what's in the storage. ### Key B Alice gives Eve Key B and she would need a password for claim 1 and 2. For the first claim, she needs to pass in: `hash("keys_bc_base_password" + key_b_public_key + "1")`. The contract would then check and see if the hashed version of this matches up with what's stored on-chain for that use. The second time Eve uses the key, she needs to pass in `hash("keys_bc_base_password" + key_b_public_key + "2")` and the same check is done. If Eve tries to pass in `hash("keys_bc_base_password" + key_b_public_key + "1")` for the second key use, the contract would hash it and check: ``` hash(hash("keys_bc_base_password" + key_b_public_key + "1")) == hash(hash("keys_bc_base_password" + key_b_public_key + "2")) ``` Which is incorrect and the key would not be claimed. Once Eve uses the key 2 times, the last claim is not password protected and she's free to claim it. Key C is similar to Key B except that it only has 1 password for the first use. ## Use-Cases Password protecting key uses is a true game changer for a lot of use-cases spanning from ticketing to simple marketing and engagement. #### Ticketing and POAPs Imagine you had an event and wanted to give out exclusive POAPs to people that came. You didn't want to force users to: - Have a NEAR wallet - Have wifi at the door. - Burn NFTs or tokens to get into the event. The important thing to note is that by using password protected key uses, you can **GUARANTEE** that anyone that received a POAP had to **PHYSICALLY** show up to the event. This is because the POAP would be guarded by a password. You could create a ticketing event using Keypom as outlined in the [Ticketing](#nft-ticketing) section and have a key with 2 uses. The first use would be password protected and the second use is not. The first use will get you through the door and into the event and the second contains the exclusive POAP and can onboard you. 
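As a rough sketch of how a funder or scanner app might derive these values off-chain (the use of SHA-256 and Node's `crypto` module here is an assumption for illustration, not a statement about Keypom's exact hashing scheme):

```js
const crypto = require("crypto");

// Hash helper producing a hex digest; the concrete algorithm is an assumption here.
const hash = (input) => crypto.createHash("sha256").update(input).digest("hex");

const basePassword = "event-base-password"; // known only to the funder / scanner app
const publicKey = "ed25519:AbC...";          // placeholder public key of the ticket
const useNumber = "1";                       // the key use being claimed

// What the claimer (or scanner) submits when the key is used:
const claimPassword = hash(basePassword + publicKey + useNumber);

// What the funder stores on-chain when the key is added (the double hash):
const storedOnChain = hash(claimPassword);
```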
This means that anyone with the ticket, or key, can only receive the POAP if they know the password. You can have a scanner app that would scan people's tickets (tickets are just the private key). In this scanner app, the *base password* is stored and whenever the ticket is scanned, the public key is taken and the following hash is created:

`hash(base password + public key)`

This hash is then used to claim a use of the key and you will be let into the party. The scanner app can deterministically generate all the necessary hashes for all the tickets by simply scanning the QR code (which has the private key exposed).

The tickets are worthless unless you actually show up to the event and are scanned. Once you're scanned, you can refresh your ticket page and use the second key claim, which is not password protected. This use contains the exclusive POAP and you can onboard onto NEAR.

#### Marketing and Engagement

Let's say that you're at an event and want people to show up to your talks and learn about your project. You can have a scanner app similar to the one mentioned in the ticketing scenario that derives the password for any use on any key.

At the beginning of the event, you can give out a bunch of keys that have progressively increasing rewards gated by a password. At the end, the last key use contains a special reward that is only unlocked if the user has claimed all the previous key uses.

In order for these uses to be unlocked, people must show up to your talks and get scanned. The scanner will derive the necessary password and unlock the rewards. Users will only get the exclusive reward if they come to ALL your talks.

This idea can be further expanded outside the physical realm to boost engagement on your websites. As an example: you want users to interact with new features of your site or join your mailing list. You can have links where uses are ONLY unlocked if the user interacts with special parts of your site, such as buying a new NFT, joining your mailing list, or clicking an easter egg button on your site.

## dApp Free Trials for Users

In the upcoming Keypom V2.0, dApps will be able to integrate the Keypom wallet selector plugin to allow for free trials for their users. One of the biggest pain-points with Web3 at the moment is the fact that users need to fund wallets *before* they interact with a dApp. In Web2, a user can find value in an application by using it before they go through the messy onboarding process. Why can't Web3 be the same?

Keypom will allow apps to create links that will automatically sign users into their applications and give them a free trial of the app. The user will be able to interact with things, spend $NEAR, sign transactions and gather assets through the trial. A unique feature of this is that the user will *never be redirected to the NEAR wallet* to approve transactions. Keypom will provide a seamless user experience where users can find value in applications.

Once the free trial is over and users have collected assets / $NEAR through interacting with the dApp, they can *THEN* choose to onboard. With Keypom's technology, users will be locked into only interacting with the dApp specified in the link. Users can't rug the application and steal the $NEAR embedded in the link. The funds are allocated for 1 thing and 1 thing only: free trials of that one specific dApp.
<p align="center">
  <img src="assets/flowcharts/trial_accounts.png" style="width: 65%; height: 65%" alt="Logo">
</p>

# Costs

It is important to note that the Keypom contract is 100% **FEE FREE** and will remain that way for the *foreseeable future*. This contract is a public good and is meant to inspire change in the NEAR ecosystem.

With that being said, there are several mandatory costs that must be taken into account when using Keypom. These costs are broken down into two categories: per key and per drop.

> **NOTE:** Creating an empty drop and then adding 100 keys in separate calls will incur the same cost as creating a drop with 100 keys in the same call.

## Per Drop

When creating an empty drop, there is only one cost to keep in mind regardless of the drop type:
- Storage cost (**~0.006 $NEAR** for simple drops)

## Per Key

Whenever keys are added to a drop (either when the drop is first created or at a later date), the costs are outlined below.

### Key Costs for Simple Drop

- $NEAR sent whenever the key is used (can be 0).
- Access key allowance (**~0.0187 $NEAR per use**).
- Storage for creating access key (**0.001 $NEAR**).
- Storage cost (**~0.006 $NEAR** for simple drops)

### Additional Costs for NFT Drops

Since keys aren't registered for use until **after** the contract has received the NFT, we don't know how much storage the token IDs will use on the contract. To combat this, the Keypom contract will automatically measure the storage used up for storing each token ID in the `nft_on_transfer` function and that $NEAR will be taken from the funder's balance.

### Additional Costs for FT Drops

Since accounts claiming FTs may or may not be registered on the Fungible Token contract, Keypom will automatically try to register **all** accounts. This means that the drop creators must front the cost of registering users depending on the `storage_balance_bounds` returned from the FT contract. This applies to every use for every key.

In addition, Keypom must be registered on the FT contract. If you create an FT drop and are the first person to ever do so for a specific FT contract on Keypom, Keypom will be automatically registered when the drop is created. This is a one-time cost and once it is done, no other account will need to register Keypom for that specific FT contract.

### Additional Costs for FC Drops

Drop creators have a ton of customization available to them when creating Function Call drops. A cost that they might incur is the attached deposit being sent alongside the function call. Keypom will charge creators for all the attached deposits they specify.

> **NOTE:** The storage costs are dynamically calculated and will vary depending on the information you store on-chain.

## Deleting Keys and Drops

Creators have the ability to delete drops and keys at any time. In this case, **all** the initial costs they incurred for the remaining keys will be refunded to them (minus Gas fees of course).

## Automatic Refunds When Keys are Used

One way that Keypom optimizes the fee structure is by performing automatic refunds for some of the initial costs that creators pay for when keys are used. All the storage that is freed along with any unused allowance is automatically sent back to the creator whenever a key is used. This model drastically reduces the overall costs of creating drops and creates incentives for the keys to be used.

## Account Balances for Smooth UX

In order to make the UX of using Keypom seamless, the contract introduces a debiting account model.
All costs and refunds go through your account's balance, which is stored on the contract. This balance can be topped up or withdrawn at any moment using the `add_to_balance()` and `withdraw_from_balance()` functions.

This account balance is not *required*, however. You can create a drop by attaching a deposit to the call. Keep in mind that this will still create an account balance for you behind the scenes.

</td>
</tr>
</table>

## Built With

- [near-sdk-rs](https://github.com/near/near-sdk-rs)
- [near-api-js](https://github.com/near/near-api-js)

# How Linkdrops Work

For some background as to how linkdrops work on NEAR:

*The funder that has an account and some $NEAR:*
- creates a keypair locally `(pubKey1, privKey1)`. The blockchain doesn't know of this key's existence yet since it's all local for now.
- calls `send` on the contract and passes in the `pubKey1` as an argument as well as the desired `balance` for the linkdrop.
- The contract will map the `pubKey1` to the desired `balance` for the linkdrop.
- The contract will then add the `pubKey1` as a **function call access key** with the ability to call `claim` and `create_account_and_claim`. This means that anyone with the `privKey1` that was created locally, can claim this linkdrop.
- Funder will then create a link to send to someone that contains this `privKey1`. The link follows the following format:

```
wallet.testnet.near.org/linkdrop/{fundingContractAccountId}/{linkdropKeyPairSecretKey}?redirectUrl={redirectUrl}
```

* `fundingContractAccountId`: The contract accountId that was used to send the funds.
* `linkdropKeyPairSecretKey`: The corresponding secret key to the public key sent to the contract.
* `redirectUrl`: The url that wallet will redirect to after funds are successfully claimed to an existing account. The accountId used to claim the funds is sent to this URL as a query param.

*The receiver of the link that is claiming the linkdrop:*
- Receives the link, which includes `privKey1` and sends them to the NEAR wallet.
- Wallet creates a new keypair `(pubKey2, privKey2)` locally. The blockchain doesn't know of this key's existence yet since it's all local for now.
- Receiver will then choose an account ID such as `new_account.near`.
- Wallet will then use the `privKey1` which has access to call `claim` and `create_account_and_claim` in order to call `create_account_and_claim` on the contract.
- It will pass in `pubKey2` which will be used to create a full access key for the new account.
- The contract will create the new account and transfer the funds to it alongside any NFT or fungible tokens pre-loaded.

</p>

# Getting Started

There are several ways to get started using Keypom. You can use the NEAR CLI, our Keypom application, our Keypom SDK and more. In this section, we will go over how you can interact with Keypom and create drops using the NEAR-API-JS library and simple Node scripts.

## Prerequisites

In order to successfully interact with this contract using the deploy scripts, you should have the following:

- [NEAR account](https://docs.near.org/concepts/basics/account)
- [Node JS](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)

## Deploy Scripts

There are 4 deploy scripts that have been made available for you to use and easily create Keypom links. These are for:
- Simple Drops
- NFT Drops
- FT Drops
- Function Call Drops

Each drop type deploy script has a version using `NEAR-API-JS`, and a version using the `Keypom-JS SDK`. The file tree for these scripts is shown below.
```bash /deploy ├── ft │ └── configurations.js │ └── ft-create-sdk.js │ └── ft-create.js │ ├── function-call │ └── configurations.js │ └── fc-create-sdk.js │ └── fc-create.js │ ├── nft │ └── configurations.js │ └── nft-create-sdk-minted.js │ └── nft-create-sdk-owned.js │ └── nft-create.js │ ├── simple │ └── configurations.js │ └── simple-create-sdk.js │ └── simple-create.js │ ├── utils ``` In order to use these scripts, open the `deploy/` directory and modify the `configurations.js` file for the drop you want to create. In this file, you can specify important information such as the number of keys you wish to create, the amount of $NEAR you want to send, how many uses per key etc. You must specify the account that you will fund the drops with under the `FUNDING_ACCOUNT_ID` variable. This account needs to have keys stored in your `~/.near-credentials` folder. To do this, simply run `near login` on your terminal and follow the prompts using the NEAR CLI. Once the `configurations.js` file has been modified to your liking, navigate back to the root directory and run the deploy script. For simple drops: ``` // Using NEAR-API-JS yarn simple // Using SDK yarn simple-sdk ``` For FT drops: ``` // Using NEAR-API-JS yarn ft // Using SDK yarn ft-sdk ``` For NFT drops: ``` // Using NEAR-API-JS yarn nft // Using SDK yarn nft-sdk ``` For Function Call drops: ``` // Using NEAR-API-JS yarn fc // Using SDK yarn fc-sdk ``` # Query Information From Keypom Keypom allows users to query a suite of different information from the contract. This information can be broken down into two separate objects that are returned. JsonDrops and JsonKeys. ```rs pub struct JsonDrop { // Drop ID for this drop pub drop_id: DropId, // owner of this specific drop pub owner_id: AccountId, // Balance for all keys of this drop. Can be 0 if specified. pub deposit_per_use: U128, // Every drop must have a type pub drop_type: JsonDropType, // The drop as a whole can have a config as well pub config: Option<DropConfig>, // Metadata for the drop pub metadata: Option<DropMetadata>, // How many uses are registered pub registered_uses: u64, // Ensure this drop can only be used when the function has the required gas to attach pub required_gas: Gas, // Keep track of the next nonce to give out to a key pub next_key_id: u64, } pub struct JsonKeyInfo { // Drop ID for the specific drop pub drop_id: DropId, pub pk: PublicKey, // How many uses this key has left. Once 0 is reached, the key is deleted pub remaining_uses: u64, // When was the last time the key was used pub last_used: u64, // How much allowance does the key have left. When the key is deleted, this is refunded to the funder's balance. pub allowance: u128, // Nonce for the current key. pub key_id: u64, } ``` ## Key Specific - **`get_key_balance(key: PublicKey)`**: Returns the $NEAR that will be sent to the claiming account when the key is used - **`get_key_total_supply()`**: Returns the total number of keys currently on the contract - **`get_keys(from_index: Option<U128>, limit: Option<u64>)`**: Paginate through all keys on the contract and return a vector of key info - **`get_key_information(key: PublicKey)`**: Return the key info for a specific key - **`get_key_information_batch(keys: Vec<PublicKey>)`**: Return a vector of key info for a set of public keys ## Drop Specific - **`get_drop_information(drop_id: Option<DropId>, key: Option<PublicKey>)`**: Return the drop info for a specific drop. This can be queried for by either passing in the drop ID or a public key. 
- **`get_key_supply_for_drop(drop_id: DropId)`**: Return the total number of keys for a specific drop - **`get_keys_for_drop(drop_id: DropId, from_index: Option<U128>, limit: Option<u64>)`**: Paginate through all keys for a specific drop and return a vector of key info - **`get_drop_supply_for_owner(account_id: AccountId)`**: Return the total number of drops for a specific account - **`get_drops_for_owner(account_id: AccountId, from_index: Option<U128>, limit: Option<u64>)`**: Paginate through all drops for a specific account and return a vector of drop info - **`get_nft_supply_for_drop(drop_id: DropId)`**: Get the total number of NFTs registered for a given drop. - **`get_nft_token_ids_for_drop(drop_id: DropId, from_index: Option<U128>, limit: Option<u64>)`**: Paginate through token IDs for a given drop - **`get_next_drop_id()`**: Get the next drop ID that will be used for a new drop ### Utility - **`get_root_account()`**: Get the global root account that all created accounts with be based off. - **`get_user_balance()`**: Get the current user balance for a specific account. # Running the Keypom Tests We have put together a suite of test cases that can be found in the `__tests__` folder. These range anywhere from simple config tests all the way to full blown ticketing and POAPs. In the `__tests__` folder, there are sub-folders with each type of test. Some of these sub-folders contain a `utils` folder with some utility functions used. All the tests use `workspaces-js`. In order to run all the tests, run the following command. ```bash yarn && yarn test ``` This will run through each test 1 by 1. If you wish to only run a set of specific tests, the full list of commands can be found below. ```bash "test:internals" "test:stage1" "test:stage1:simple" "test:ticketing" "test:poaps" "test:configs" "test:nft-drops" "test:ft-drops" "test:profiling" "test:passwords" ``` # Contributing First off, thanks for taking the time to contribute! Contributions are what makes the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please try to create bug reports that are: - _Reproducible._ Include steps to reproduce the problem. - _Specific._ Include as much detail as possible: which version, what environment, etc. - _Unique._ Do not duplicate existing opened issues. - _Scoped to a Single Bug._ One bug per report. Please adhere to this project's [code of conduct](docs/CODE_OF_CONDUCT.md). You can use [markdownlint-cli](https://github.com/igorshubovych/markdownlint-cli) to check for common markdown style inconsistency. # License This project is licensed under the **GPL License**. # Acknowledgements Thanks for these awesome resources that were used during the development of the **Keypom Contract**: - <https://github.com/dec0dOS/amazing-github-template> - <https://github.com/near/near-linkdrop> - <https://github.com/near/near-wallet/blob/master/packages/frontend/docs/Linkdrop.md> # FYDP Documentation One stop shop for Capstone design logbook and other stuff # React + Vite This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules. 
Currently, two official plugins are available: - [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react/README.md) uses [Babel](https://babeljs.io/) for Fast Refresh - [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react-swc) uses [SWC](https://swc.rs/) for Fast Refresh useful commands: npm run dev <!-- no longer needed: json-server -w ./data/db.json --> basic setup using this tutorial: https://www.youtube.com/watch?v=yO8XWvi0Hms&list=PL4cUxeGkcC9hcnIeryurNMMcGBHp7AYlP&index=4 <p align="center"> <img src="assets/claimed-linkdrop.png" alt="Logo" style="width: 35%; height: 35%"> <br /> </p> <div align="center"> <h1> Keypom </h1> Limitless possibilities in the palm of your hand. </div> <div align="center"> <br /> [![made by BenKurrek](https://img.shields.io/badge/made%20by-BenKurrek-ff1414.svg?style=flat-square)](https://github.com/BenKurrek) [![made by mattlockyer](https://img.shields.io/badge/made%20by-MattLockyer-ff1414.svg?style=flat-square)](https://github.com/mattlockyer) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Introduction](#introduction) - [Comparable Solutions](#comparable-solutions) - [Our Solution](#our-solution) - [Drop Customization](#shared-drop-customization) - [Primary Market Public Sale for Keys](#primary-market-public-sale-for-keys) - [Simple Drops](#simple-drops) - [NFT Drops](#non-fungible-token-drops) - [FT Drops](#fungible-token-drops) - [Function Call Drops](#function-call-drops) - [How It Works](#how-do-fc-drops-work) - [Security](#security-for-fc-drops) - [User-Provided Args](#user-provided-arguments) - [Use Cases](#fc-drop-use-cases) - [Password Protected Keys](#password-protected-keys) - [dApp Free Trials for Users](#dapp-free-trials-for-users) - [Costs](#costs) - [Per Drop](#per-drop) - [Per Key](#per-key) - [Deleting Keys and Drops](#deleting-keys-and-drops) - [Automatic Refunds](#automatic-refunds-when-keys-are-used) - [Account Balances](#account-balances-for-smooth-ux) - [How Linkdrops Work](#how-linkdrops-work) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Deploy Scripts](#deploy-scripts) - [Query Information From Keypom](#query-information-from-keypom) - [Key Specific](#key-specific) - [Drop Specific](#drop-specific) - [Running Tests](#running-the-keypom-tests) - [Contributing](#contributing) - [Acknowledgements](#acknowledgements) </details> --- # About <tr> <td> > To view our debut talk at NEARCON 2022, click [here](https://www.youtube.com/watch?v=J-BOnfhHV50). Keypom is an access key factory created as a result of 3 common problems that arose in the ecosystem. 1. People want a *cheap, customizable, and unique* onboarding experience for users. 2. Companies don't want to expose **full access keys** in their backend servers. 3. dApps want a *smooth UX* with zero barrier to entry onboarding. The contract was initially created as a way to handle the 1 $NEAR minimum deposit required for creating linkdrops using the [regular linkdrop contract](https://github.com/near/near-linkdrop/blob/f24f2608e1558db773f2408a28849d330abb3881/src/lib.rs#L18). If users wanted to create linkdrops, they needed to attach a **minimum** of 1 $NEAR. This made it costly and unscalable for projects that wanted to mass onboard onto NEAR. Keypom, on the other hand, has been highly optimized to allow for the lowest possible costs. 
## Introduction Blockchain technology comes with many benefits such as sovereign ownership, digital rights, privacy, freedom, peer to peer coordination and much more. The problem with this technology, however, is that there is an extremely high barrier to entry for an everyday individual. None of it matters if nobody can onboard. It’s confusing to create and fund a crypto wallet. People are unfamiliar with the process, technical jargon, and the general flow. NEAR’s account model is powerful, but extremely underutilized because it’s complex for developers to take full advantage of. Keypom wraps this up in a single API call. With NEAR’s goal of onboarding 1 billion users to Web3, there needs to be a solution to this high barrier to entry for developers building on NEAR and users onboarding to their apps and the NEAR ecosystem. Below is a table outlining the minimum costs to onboard a new user onto NEAR with a named account. | | 1 Account | 1,000 Accounts | 1,000,000 Accounts | |----------------------|-----------------|-----------------|--------------------| | Traditional Linkdrop | ~1 NEAR | ~1,003 NEAR | ~1,002,840 NEAR | | Keypom | ~0.0035 NEAR | ~3.5 NEAR | ~3,500 NEAR | | | ~99.65% Cheaper | ~99.65% Cheaper | ~99.65% Cheaper | Keypom allows anyone to create highly customizable onboarding experiences for their users. These experiences can be both for new, or existing users. If someone already has a wallet, they can still use a Keypom link to experience an app, and then transfer the assets later. ## Comparable Solutions | | **Keypom** | **NEAR Drop** | **Satori** | |----------------------------------------------|------------|---------------|------------| | NEAR Drop | ✅ | ✅ | ❌ | | FT Drop | ✅ | ❌ | ❌ | | NFT Drop | ✅ | ❌ | ✅ | | Function Call Drop | ✅ | ❌ | ❌ | | Embeddable in Dapps | ✅ | ❌ | ❌ | | Wallet Selector Integration | ✅ | ❌ | ❌ | | No Fee | ✅ | Maybe? | ❌ | | No Backend / 3rd Party | ✅ | ✅ | ❌ | | Campaigns | ✅ | ✅ | ✅ | | Multi-Step e.g. Tickets click > scan > claim | ✅ | ❌ | ❌ | | Password Protected Drops | ✅ | ❌ | ❌ | | Timed Drops e.g. recurring payments | ✅ | ❌ | ❌ | | Custom Names e.g. user.myapp.near | ✅ | ❌ | ❌ | # Our Solution Keypom allows for the creation of highly customizable access keys. These keys can be thought of as having their own *smart contracts*. Each access key derives from what's known as a *drop*. These drops outline the different functionalities and behaviors the key will have. A drop can be thought of as a bucket that access keys belong to. You can create many different buckets and fill them each with their own keys. Each key will act in accordance to the drop, or bucket, it belongs to. A drop can be one of four different types: 1. Simple drop. 2. Non Fungible Token drop. 3. Fungible Token drop. 4. Function Call drop. # Shared Drop Customization While each *type* of drop has its own set of customizable features, there are some that are shared by **all drops** These are outlined below. ```rust /// Each time a key is used, how much $NEAR should be sent to the claiming account (can be 0). pub deposit_per_use: u128, /// How much Gas should be attached when the key is used. The default is 100 TGas as this is /// what's used by the NEAR wallet. pub required_gas: Gas, /// The drop as a whole can have a config as well pub config: Option<DropConfig>, /// Metadata for the drop in the form of stringified JSON. The format is completely up to the /// user and there are no standards for format. 
pub metadata: LazyOption<DropMetadata>,
```

Within the config, there is a suite of features that can be customized as well:

```rust
/// How many uses can each key have before it's deleted. If None, default to 1.
pub uses_per_key: Option<u64>,

/// Override the global root account that sub-accounts will have (near or testnet). This allows
/// users to create specific drops that can create sub-accounts of a predefined root.
/// For example, Fayyr could specify a root of `fayyr.near`, by which all sub-accounts will then
/// be `ACCOUNT.fayyr.near`
pub root_account_id: Option<AccountId>,

/// Any time based configurations
pub time: Option<TimeConfig>,

/// Public sale config options
pub sale: Option<PublicSaleConfig>,

/// Any usage specific configurations
pub usage: Option<UsageConfig>,
```

## Time Based Customizations

Keypom allows users to customize time-based configurations as outlined below.

```rust
pub struct TimeConfig {
    /// Minimum block timestamp before keys can be used. If None, keys can be used immediately
    /// Measured in number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC.
    pub start: Option<u64>,

    /// Block timestamp that keys must be before. If None, keys can be used indefinitely
    /// Measured in number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC.
    pub end: Option<u64>,

    /// Time interval between each key use. If None, there is no delay between key uses.
    /// Measured in number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC.
    pub throttle: Option<u64>,

    /// Interval of time after the `start_timestamp` that must pass before a key can be used.
    /// If multiple intervals pass, the key can be used multiple times. This has nothing to do
    /// with the throttle timestamp. It only pertains to the start timestamp and the current
    /// timestamp. The last_used timestamp is not taken into account.
    /// Measured in number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC.
    pub interval: Option<u64>,
}
```

## Usage Based Customizations

In addition to time-based configurations, the funder can customize behaviors pertaining to key usages.

```rust
pub struct UsageConfig {
    /// Can the access key only call the claim method_name? Default to both method_name callable
    pub permissions: Option<ClaimPermissions>,

    /// If claim is called, refund the deposit to the owner's balance. If None, default to false.
    pub refund_deposit: Option<bool>,

    /// Should the drop be automatically deleted when all the keys are used? This is defaulted to false and
    /// must be overwritten.
    pub auto_delete_drop: Option<bool>,

    /// When this drop is deleted and it is the owner's *last* drop, automatically withdraw their balance.
    pub auto_withdraw: Option<bool>,

    /// When calling `create_account` on the root account, which keypom args should be attached to the payload.
    pub account_creation_fields: Option<KeypomArgs>,
}
```

## Primary Market Public Sale for Keys

The last type of customization available to the funder is the ability to create a public sale for access keys in a drop. The funder can create a drop and let people add keys to it on an as-needed basis.

The sale configurations are outlined below.

```rust
pub struct PublicSaleConfig {
    /// Maximum number of keys that can be added to this drop. If None, there is no max.
    pub max_num_keys: Option<u64>,

    /// Amount of $NEAR that the user needs to attach (if they are not the funder) on top of costs. This amount will be
    /// automatically sent to the funder's balance. If None, the keys are free to the public.
    pub price_per_key: Option<u128>,

    /// Which accounts are allowed to add keys?
    pub allowlist: Option<LookupSet<AccountId>>,

    /// Which accounts are NOT allowed to add keys?
    pub blocklist: Option<LookupSet<AccountId>>,

    /// Should the revenue generated be sent to the funder's account balance or
    /// automatically withdrawn and sent to their NEAR wallet?
    pub auto_withdraw_funds: Option<bool>,

    /// Minimum block timestamp before the public sale starts. If None, keys can be added immediately
    /// Measured in number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC.
    pub start: Option<u64>,

    /// Block timestamp dictating the end of the public sale. If None, keys can be added indefinitely
    /// Measured in number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC.
    pub end: Option<u64>,
}
```

### Use-Cases for Public Sales

Giving the funder the ability to sell access keys to a drop introduces a ton of awesome use-cases and has a slew of benefits:
- Everything is decentralized and on-chain. There is no need to trust a third party to hold the keys.
- Keys are created on an as-needed basis. This *drastically reduces up-front costs* for the drop funder.
- Since keys are created only when users want them, there is no chance that through distribution, the private key gets compromised. The key is created *when the user purchases it*.
- Everything *can* remain anonymous and private since people can purchase access keys with their crypto wallets.

Having a public sale allows for an on-chain distribution mechanism for access keys. Let's look at a few examples where this can be used.

#### Example 1: Ticketing

Imagine there is an event organizer that wants to host an event with a guest-list of 100,000 people. Without doing a public sale, the organizer would need to spend a lot of $NEAR up-front to create all 100,000 access keys. At this point, they would need to find a way to distribute all the keys.

With a public sale, the organizer can set a price per key, an allowlist, a blocklist, and even a start date for when the sale goes live. At this point, the keys would be lazily purchased by people coming to the event. This not only reduces the up-front cost for the funder but it can also provide more accurate data on how many people are actually coming to the event.

#### Example 2: Selling Function Calls

Access keys can be used for much more than just POAPs, onboarding or tickets. When using FC Drops, the keys can execute functions on external contracts. This feature can be used in conjunction with the public sale to create a marketplace for gated function calls.

Imagine a simple guest-book smart contract that only allowed people to sign the book if they had a valid Keypom access key. Whoever signed the guest-book had access to all VIP events at NEARCon. You could lock access to signing the guest-book behind a Keypom drop and set up a public sale.

#### Example 3: NFT Collections

A very common scenario is an artist launching a new NFT collection. The artist can set up a custom marketplace whereby the keys are lazily minted and sold to the public. They can then create a custom website that takes a Keypom link and brings the user through a unique, creative experience before the NFT is minted and a wallet is optionally created. People that purchase the links can either use them to send the NFT to their existing wallet or create an entirely new wallet.

## Simple Drops

The most basic type of drop is the simple kind. Any keys that are part of a simple drop can only be used for 1 thing: **transferring $NEAR**.
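As a rough sketch of what creating a simple drop could look like with `near-api-js` (the Keypom contract ID, argument names, amounts, and gas value below are assumptions for illustration, and error handling is omitted):

```js
const { KeyPair, utils } = require("near-api-js");

// `fundingAccount` is assumed to be a connected near-api-js Account object.
async function createSimpleDrop(fundingAccount) {
    // Generate a keypair locally; the private key becomes the "link".
    const keyPair = KeyPair.fromRandom("ed25519");

    // Call `create_drop` on the Keypom contract (contract ID is an assumption).
    await fundingAccount.functionCall({
        contractId: "v2.keypom.near",
        methodName: "create_drop",
        args: {
            public_keys: [keyPair.getPublicKey().toString()],
            deposit_per_use: utils.format.parseNearAmount("0.1"),
        },
        gas: "100000000000000", // 100 TGas
        // Attached deposit covers deposit_per_use, allowance, and storage in this sketch.
        attachedDeposit: utils.format.parseNearAmount("1"),
    });

    // Full secret key string to embed in the linkdrop URL.
    return keyPair.toString();
}
```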
Once the key is claimed, the claiming account will receive the $NEAR specified in the `deposit_per_use`. Simple drops are a great way to send $NEAR to claiming accounts while not storing a lot of information on the contract. Below are a couple use cases.

### Backend Servers

Let's say you have a backend server that should send 10 $NEAR to the first 3 people that redeem an NFT. Rather than exposing your full access key in the backend server, you could create a simple drop that either has 3 keys or 1 key that is claimable 3 times. In the drop, you'd specify that each time the key is claimed, the specified account would receive 10 $NEAR.

### Recurring Payments

Recurring payments are quite a common situation. If you need to send someone 10 $NEAR once a month for 6 months, you could create a simple drop that has a time config with an `interval` of 1 month. In addition, you can set the time config's `start` to next week. Every time the key is used, 10 $NEAR is sent to the account. If the contractor missed a month's payment, they can claim the key late but can never use the key more than what is intended.

<p align="center">
  <img src="assets/flowcharts/recurring_payments.png" style="width: 65%; height: 65%" alt="Logo">
</p>

### Quick Onboarding

If you need to quickly onboard users onto NEAR, you could create a simple drop with a small amount of $NEAR (enough to create a wallet) and set the usage's permissions to be `create_account_and_claim`. This means that the key can only be used to create accounts. You can then add keys as you wish to the drop and give them out to users so they can create accounts and be onboarded onto NEAR.

### Lazy Registering Keys

A unique use-case for simple drops is the ability to lazy register key uses. This allows the funder to batch create many keys at a time while only paying for basic fees such as the storage used and the key's allowance. The funder would **not** need to pay for the `deposit_per_use` of each key up front. They can instead register individual key uses as they are needed.

With this scenario, if an organization wanted to onboard users with a linkdrop valued at 10 $NEAR, they could create 1000 keys without needing to pay 1000 * 10 = 10,000 $NEAR up-front. They could then register keys on an as-needed basis. If they need to register 25 keys at a time, they can do this by simply calling the `register_uses` function.

## Non-Fungible Token Drops

Non-Fungible Token drops are a special type that allows users to "preload" the drop with NFTs. These tokens will then be *automatically* sent to the **claiming user**. The claiming flow is fairly similar to simple drops in that users can either create an account or claim to an existing one.

NFT drops are essentially a wrapper around simple drops. All the functionalities that simple drops have are carried over but now, users can receive an NFT as well as $NEAR. This introduces some customization and uniqueness to the use-cases.

### How does it work?

Every drop has a field known as `registered_uses`. This tells the contract how many uses the drop has across all its keys. For basic simple drops that are *not* lazy registering keys, this field doesn't matter since all the uses are paid for up-front when the drop is created or when keys are added.

With NFT drops, however, there is a 2 step process:
- Firstly, the drop is created and all the $NEAR required is pre-paid for. This is the same as simple drops, however, the `registered_uses` are set to 0.
- Once the drop is created, the owner must send the contract the NFTs in order for keys to be usable. This process is done through the `nft_transfer_call` workflow baked into the NFT standards. It's up to the owner to facilitate this process.

Whenever the contract receives tokens, it will push the ID to a vector. These IDs are **popped** off whenever a key is used. A user will receive the most recent token sent to the contract as the vector is acting like a *stack*.

### NFT Config

Along with the default global configurations for drops, if you'd like to create an NFT drop, you must specify the following pieces of information when the drop is created.

```rust
pub struct NFTDataConfig {
    /// Which account ID will be sending the NFTs to the contract. If this is not specified, anyone can send NFTs for the specific drop.
    pub sender_id: Option<AccountId>,

    /// Which contract will the NFTs live on
    pub contract_id: AccountId,
}
```

By specifying this information, the drop is locked into only accepting NFTs from the specific contract and optionally from a specified sender account.

### Use Cases

NFT drops work really well for when you want to send a *pre-existing* NFT to a user along with some $NEAR. Since NFT drops are a light wrapper around simple drops, most of the use-cases are the same although people can now get NFTs as well. This means you can onboard a user with some $NEAR **and** they *get an NFT* too.

## Fungible Token Drops

A Fungible Token drop is also a light wrapper around the simple drop. It works very similarly to how its NFT counterpart does. First, you'll need to create the drop and then you can fund it with assets and register key uses.

You can preload a drop with as many FTs as you'd like even if you don't have the keys yet. This will spike the `registered_uses` and then you can create keys and slowly eat away from this "total supply" over time. If the drop runs out, you can send it more FTs to top up. All the keys in the FT drop will share from this supply and every time a key is used, the `registered_uses` will decrement and the "total supply" will get smaller.

### How does it work?

As mentioned in the NFT section, every drop has a field known as `registered_uses`. This tells the contract how many uses the drop has across all its keys. For basic simple drops that are *not* lazy registering keys, this field doesn't matter since all the uses are paid for up-front when the drop is created or when keys are added.

With FT drops, however, there is a 2 step process:
- Firstly, the drop is created and all the $NEAR required is pre-paid for. This is the same as simple drops, however, the `registered_uses` are set to 0.
- Once the drop is created, the owner must send the contract the FTs in order for keys to be usable. This process is done through the `ft_transfer_call` workflow baked into the FT standards. It's up to the owner to facilitate this process.

### FT Config

Along with the default global configurations for drops, if you'd like to create an FT drop, you must specify the following pieces of information when the drop is created.

```rust
pub struct FTDataConfig {
    /// The contract that the FTs live on.
    pub contract_id: AccountId,

    /// The account ID that will be sending the FTs to the contract. If this is not specified, anyone can send FTs for the specific drop.
    pub sender_id: Option<AccountId>,

    /// How many FTs should the contract send *each time* a key is used.
pub balance_per_use: U128, } ``` By specifying this information, the drop is locked into only accepting FTs from the specific contract and optionally from a specified sender account. While you can send as many FTs as you'd like and can over-pay, you *must* send at **least** enough FTs in one call to cover 1 use. As an example, if a drop is created such that 10 FTs will be sent when a key is used, you must send **at least 10** and cannot break it up into separate calls where you send 5 one time and 5 another. ### Use Cases FT drops have some awesome flexibility due to the fact that they support all the functionalities of the Simple drops, just with more use-cases and possibilities. Let's look at some use cases to see how fungible token drops can be used. #### Recurring Payments Recurring payments are quite a common situation. Let's say you need to send someone $50 USDC every week. You could create a key with 5 uses that has a time config `interval` of 1 week. You would then pre-load maybe the first week's deposit of $50 USDC and register 1 use, or you could send $500 USDC for the first 10 weeks. At that point, you would simply hand over the key to the user and they can claim once a week. #### Backend Servers Taking the recurring payments problem to another level, imagine that instead of leaving the claims up to the contractor, you wanted to automatically pay them through a backend server. They would give you their NEAR account and you would send them FTs. The problem is that you don't want to expose your full access key in the server. By creating an FT drop, you can store **only the function call access key** created by Keypom in the server. Your backend would then use the key to call the `claim` function and pass in the user's account ID to send them the FTs. #### Creating a Wallet with FTs Another awesome use-case is to allow users to be onboarded onto NEAR and **also** receive FTs. As an example, you could do a promotion where you're giving away $10 USDC to the first 100 users that sign up to your mailing list. You can also give away QR codes at events that contain a new fungible token that you're launching. You can simply create an FT drop and pre-load it with the FT of your choice. In addition, you can give it 0.02 $NEAR for new wallets that are created. You can pair this with setting the usage config's `refund_deposit` flag to true, which would make it so that if anyone claims the fungible tokens and they *already have a wallet*, the 0.02 $NEAR is automatically refunded to you. That money should only be used for the creation of new wallets. Since your focus is on the fungible tokens, you don't want to **force users** to create a new wallet if they have one already by specifying the usage permissions to be `create_account_and_claim`; instead, you want to be refunded in case they already have one. ## Function Call Drops Function call drops are by far the most powerful feature that Keypom provides. FC drops allow **any** method on **any** contract to be executed (with some exceptions). In addition, there are a huge variety of customizations and features you can choose from when defining the drop that come on top of the global options. The possibilities are almost endless. State of the art NFT ticketing, lazy minting NFTs, auto registration into DAOs, analytics for marketing at events and much more. ### How do FC Drops work? Unlike NFT and FT drops, the function calls must have everything paid for **upfront**. There is no two-step process, so the creation is similar to Simple drops.
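As a rough, illustrative sketch of what this one-step creation can look like from a Node script (the `create_drop` method name, the Keypom contract account, and the argument layout below are assumptions based on the structures described in the next section rather than a verified API reference — see the deploy scripts later in this README for working examples):

```js
const { utils } = require("near-api-js");

// Sketch only: creates a hypothetical FC drop where each key use lazy mints an NFT.
// `funderAccount` is assumed to be an already-connected near-api-js Account object.
async function createFcDrop(funderAccount) {
  await funderAccount.functionCall({
    contractId: "v2.keypom.near",                 // hypothetical Keypom contract account
    methodName: "create_drop",                    // assumed creation entry point
    args: {
      public_keys: ["ed25519:..."],               // keys added up-front (placeholder)
      deposit_per_use: utils.format.parseNearAmount("0.1"),
      fc: {
        // one vector of MethodData per key use (fields mirror the MethodData struct below)
        methods: [[{
          receiver_id: "nft.example.near",
          method_name: "nft_mint",
          args: JSON.stringify({ metadata: {} }),
          attached_deposit: utils.format.parseNearAmount("0.01"),
          account_id_field: "receiver_id",
        }]],
      },
    },
    gas: "100000000000000",
    // everything (deposits, storage, allowance) is paid for up-front
    attachedDeposit: utils.format.parseNearAmount("2"),
  });
}
```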
Once the drop is created and keys are added, you can immediately start using it. #### Function Call Config When creating the drop, you have quite a lot of customization available. At the top level, there is a FC drop global config similar to how the *general* config works. ```rust pub struct FCConfig { /// How much GAS should be attached to the function call if it's a regular claim. /// If this is used, you *cannot* go through conventional linkdrop apps such as mynearwallet /// since those *always* attach 100 TGas no matter what. In addition, you will only be able to /// call `claim` if this is specified. You cannot have an `attached_gas` parameter and also /// call `create_account_and_claim. pub attached_gas: Option<Gas>, } ``` #### Method Data In addition to the global config, the user can specify a set of what's known as `MethodData`. This represents the information for the function being called. Within this data, there are also a few optional configurations you can use to extend your use cases. You'll see how powerful these can be in the use cases [section](#use-cases). ```rust pub struct MethodData { /// Contract that will be called pub receiver_id: AccountId, /// Method to call on receiver_id contract pub method_name: String, /// Arguments to pass in (stringified JSON) pub args: String, /// Amount of yoctoNEAR to attach along with the call pub attached_deposit: U128, /// Specifies what field the claiming account ID should go in when calling the function /// If None, this isn't attached to the args pub account_id_field: Option<String>, /// Specifies what field the drop ID should go in when calling the function. To insert into nested objects, use periods to separate. For example, to insert into args.metadata.field, you would specify "metadata.field" /// If Some(String), attach drop ID to args. Else, don't attach. pub drop_id_field: Option<String>, /// Specifies what field the key ID should go in when calling the function. To insert into nested objects, use periods to separate. For example, to insert into args.metadata.field, you would specify "metadata.field" /// If Some(String), attach key ID to args. Else, don't attach. pub key_id_field: Option<String>, // Specifies what field the funder id should go in when calling the function. To insert into nested objects, use periods to separate. For example, to insert into args.metadata.field, you would specify "metadata.field" // If Some(string), attach the funder ID to the args. Else, don't attach. pub funder_id_field: Option<String>, // What permissions does the user have when providing custom arguments to the function call? // By default, the user cannot provide any custom arguments pub user_args_rule: Option<UserArgsRule>, } ``` The MethodData keeps track of the method being called, receiver, arguments, and attached deposit. In addition, there are some optional fields that can be used to extend the use cases. If you have a contract that requires some more context from Keypom such as the funder ID, drop ID, key ID, and account ID that used the key, these can all be specified. We've kept it generic such that you can specify the actual argument name that these will be passed in as. For example, if you had a contract that would lazy mint an NFT and it required the account to be passed in as `receiver_id`, you could specify an `account_id_field` set to `receiver_id` such that Keypom will automatically pass in the account ID that used the key under the field `receiver_id`. Similarly, inserting fields into nested arguments is quite trivial. 
Let's say you wanted to insert the account ID that claimed the drop into the `receiver_id` under metadata for the following args: ```json args: { "token_id": "foobar", "metadata": { "receiver_id": INSERT_HERE } } ``` You could specify the `account_id_field` as `metadata.receiver_id` and Keypom will automatically create the `receiver_id` field and insert it into `metadata`. This would work whether or not `metadata` was already present in the args. > **NOTE:** The location for inserting the arguments *cannot* collide with another entry. In the above example, `token_id.receiver_id` could *NOT* be specified since `token_id` is mapped to `foobar` already. This logic extends to the drop ID, and key Id as well. #### Key Uses For **every key use**, you can specify a *vector* of `MethodData` which allows you to execute multiple function calls each time a key is used. These calls are scheduled 1 by 1 using a simple for loop. This means that most of the time, the function calls will be executed in the order specified in the vector but it is not *guaranteed*. It's important to note that the Gas available is split evenly between *all* the function calls and if there are too many, you might run into issues with not having enough Gas. You're responsible for ensuring that this doesn't happen. The vector of `MethodData` is *optional* for each key use. If a key use has `null` rather than `Some(Vector<MethodData>)`, it will decrement the uses and work as normal such that the `timestamp, `start` etc. are enforced. The only difference is that after the key uses are decremented and these checks are performed, the execution **finishes early**. The null case does **not** create an account or send *any* funds. It doesn't invoke any function calls and simply *returns once the checks are done*. This makes the null case act as a "burner" where you disregard any logic. This has many uses which will be explored in the use cases [section](#use-cases). If a key has more than 1 use, you can specify a *different vector* of `MethodData` for **each use**. As an example, you could specify that the first use will result in a null case and the second use will result in a lazy minting function being called. If you have multiple uses but want them all to do the same thing, you don't have to repeat the same data. Passing in only 1 vector of `MethodData` will result in **all the uses** inheriting that data. ### Security for FC Drops Since all FC drops will be signed by the Keypom contract, there are a few restrictions in place to avoid malicious behaviors. To avoid users from stealing registered assets from other drops, the following methods cannot be called via FC Drops: ```rust /// Which methods are prohibited from being called by an FC drop const DEFAULT_PROHIBITED_FC_METHODS: [&str; 6] = [ "nft_transfer", "nft_transfer_call", "nft_approve", "nft_transfer_payout", "ft_transfer", "ft_transfer_call", ]; ``` In addition, the Keypom contract cannot be the receiver of any function call. This is to avoid people from calling private methods through FC Drops. #### Keypom Arguments When a key is used and a function is called, there is a data structure that is **automatically** attached to the arguments. This is known as the `keypom_args`. It contains the information that the drop creator specified in the `MethodData`. 
```rust pub struct KeypomArgs { pub account_id_field: Option<String>, pub drop_id_field: Option<String>, pub key_id_field: Option<String>, pub funder_id_field: Option<String> } ``` ##### Motivation Let's say there was an exclusive NFT contract that allowed the Keypom contract to mint NFTs as part of an FC drop. Only Keypom was given access to mint the NFTs so they could be given out as linkdrops. The organizer only wanted links that were part of their drop to be valid. For this reason, the NFT contract would only mint if Keypom called the `nft_mint` function and there was a field `series` passed in and it was equal to the drop ID created by the organizer. Let's say the owner created an exclusive drop that happened to have a drop ID of 5. They could then go to the NFT contract and restrict NFTs to only be minted if: - `series` had a value of 5. - The Keypom contract was the one calling the function. In order for this to work, when creating the drop, the owner would need to specify that the `drop_id_field` was set to a value of `series` such that the drop ID is correctly passed into the function. The problem with this approach is that the NFT contract has no way of knowing which arguments were sent by the **funder** when the drop was created as part of the `MethodData` `args` and which arguments are automatically populated by the Keypom contract. There is nothing stopping a malicious user from creating a new drop that has an ID of 6 but hardcoding into the actual arguments that `series` should have a value of 5. In this case, the malicious drop would have *no* `drop_id_field` and the NFT contract would have no way of knowing that the `series` value is malicious. This can be prevented if a new field is introduced representing what was automatically injected by the Keypom contract itself. At the end of the day, Keypom will **always** send correct information to the receiving contracts. If those contracts have a way to know what has been sent by Keypom and what has been manually set by users, the problem is solved. In the above scenario, the NFT contract would simply add an assertion that the `keypom_args` had the `drop_id_field` set to `Some("series")`, meaning that the incoming `series` field was set by Keypom and not by a malicious user. ### User Provided Arguments In the `MethodData`, there is an optional field that determines whether or not users can provide their own arguments when claiming a linkdrop and what that behaviour will look like. This is known as the `user_args_rule` and can be one of the following: ```rs /// When a user provides arguments for FC drops in `claim` or `create_account_and_claim`, what behaviour is expected? /// For `AllUser`, any arguments provided by the user will completely overwrite any previous args provided by the drop creator. /// For `FunderPreferred`, any arguments provided by the user will be concatenated with the arguments provided by the drop creator. If there are any duplicate args, the drop funder's arguments will be used. /// For `UserPreferred`, any arguments provided by the user will be concatenated with the arguments provided by the drop creator, but if there are any duplicate keys, the user's arguments will overwrite the drop funder's. pub enum UserArgsRule { AllUser, FunderPreferred, UserPreferred } ``` By default, if `user_args_rule` is `None` / not provided, any user provided arguments will be completely disregarded. It would act as if the user provided *no args* in the first place.
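Tying the last two sections together, here is a rough illustration (all values are made up) of the arguments a receiving contract might end up seeing when a key is used, building on the lazy-minting and `series` examples above and assuming `user_args_rule` was left as `None` so any user-supplied args were dropped:

```js
// Illustrative only — the exact serialization Keypom uses may differ.
const argsTheReceiverSees = {
  receiver_id: "claimer.near", // injected because the funder set account_id_field: "receiver_id"
  series: "5",                 // injected because the funder set drop_id_field: "series"
  keypom_args: {               // appended automatically so the receiver knows which fields Keypom itself set
    account_id_field: "receiver_id",
    drop_id_field: "series",
    key_id_field: null,
    funder_id_field: null,
  },
};
```

A receiving contract can then refuse to trust `series` unless `keypom_args.drop_id_field` is `"series"`.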
These user arguments must be passed in via the `fc_args` field in `claim` and `create_account_and_claim`. This field is of type `Option<Vec<Option<String>>>`, indicating that it's optional to provide the args and, for each claim, a set of args can be provided. If, for a specific method, args shouldn't be passed in, the vector can have `None` as the value. The order of the args must match the order of the methods that will be executed. > **NOTE:** If a user provides `fc_args`, the length of the vector *MUST* match the number of methods being executed during the claim. #### All User If `user_args_rule` is set to `AllUser`, any arguments provided by the user will completely *overwrite* any previous args provided by the drop creator. If no args are passed in by the user, the drop creator's original args will be used. As an example, if the method data was: ```js args: JSON.stringify({ "foo": "bar", "baz": { "foo": "bar" } }) ``` And the user provided the following args: ```js fc_args: JSON.stringify({ "new_field": "new_value" }) ``` Keypom would completely overwrite the funder's previous args and use the user's `fc_args` instead. #### Funder Preferred If `user_args_rule` is set to `FunderPreferred`, any arguments provided by the user will be concatenated with the arguments provided by the drop creator. If there are any duplicate args, the drop funder's arguments will be prioritized / used. As an example, if the funder args were: ```js args: JSON.stringify({ "funder_field": "funder_value", "object": { "funder_field": "funder_value" } }) ``` And the user provided the following args: ```js fc_args: JSON.stringify({ "funder_field": "user_value", "object": { "funder_field": "user_value", "user_field": "user_value" } }) ``` Keypom would take the user args and merge them together with the funder's but prioritize any fields that are funder specified. The resulting output would be: ```js args: JSON.stringify({ "funder_field": "funder_value", "object": { "funder_field": "funder_value", "user_field": "user_value" } }) ``` #### User Preferred If `user_args_rule` is set to `UserPreferred`, any arguments provided by the user will be concatenated with the arguments provided by the drop creator, but if there are any duplicate keys, the *user's arguments* will overwrite the drop funder's. As an example, if the funder args were: ```js args: JSON.stringify({ "funder_field": "funder_value", "object": { "funder_field": "funder_value" } }) ``` And the user provided the following args: ```js fc_args: JSON.stringify({ "object": { "funder_field": "user_value", "user_field": "user_value" } }) ``` Keypom would take the user args and merge them together with the funder's but prioritize any fields that are *user specified*. The resulting output would be: ```js args: JSON.stringify({ "funder_field": "funder_value", "object": { "funder_field": "user_value", "user_field": "user_value" } }) ``` ### FC Drop Use Cases Function call drops are the bread and butter of the Keypom contract. They are the most powerful and complex drops that can currently be created. With this complexity, there are an almost infinite number of use-cases that arise. #### Proof of Attendance Protocols A very common use case in the space is what's known as Proof of Attendance. Oftentimes when people go to events, they want a way to prove that they were there. Some traditional approaches would be to submit your wallet address and you would be sent an NFT or some other form of proof at a later date.
The problem with this is that it has a very high barrier to entry. Not everyone has a wallet. With Keypom, you can create a function call drop that allows people to onboard onto NEAR if they don't have a wallet or if they do, they can simply use that. As part of the onboarding / claiming process, they would receive some sort of proof of attendance such as an NFT. This can be lazy minted on-demand such that storage isn't paid up-front for all the tokens. At this point, the event organizers or the funder can distribute links to people that attend the event in-person. These links would then be claimed by users and they would receive the proof of attendance. #### Auto Registration into DAOs DAOs are a raging topic in crypto. The problem with DAOs, however, is there is a barrier to entry for users that aren't familiar with the specific chain they're built on top of. Users might not have wallets or understand how to interact with contracts. On the contrary, they might be very well versed or immersed in the DAO's topics. They shouldn't be required to create a wallet and learn the onboarding process. With Keypom, you can create a function call drop with the main purpose of registering users into a DAO. For people that have a wallet, this will act as an easy way of registering them with the click of a link. For users that don't have a wallet and are unfamiliar with NEAR, they can be onboarded and registered into the DAO with the same click of a link. #### Multisig Contracts Another amazing use-case for Keypom is allowing multisig contracts to have ZERO barrier to entry. Often times when using a multisig contract, you will entrust a key to a trusted party. This party might have no idea what NEAR is or how to interact with your contract. With Keypom, you can create a drop that will allow them to sign their transaction with a click of a link. No NEAR wallet is needed and no knowledge of the chain is required. At the end of the day, from the users perspective, they are given a link and when they click it, their portion of the multisig transaction is signed. The action is only performed on the multisig contract once all links have been clicked. This is an extremely powerful way of doing accomplishing multisig transactions with zero barrier to entry. The users don't even need to create a new account. They can simply call `claim` when the link is clicked which will fire the cross-contract call to the multisig contract and pass in the keypom arguments that will be cross-checked by that contract. #### NFT Ticketing The problem with current NFT ticketing systems is that they require users to have a wallet. This is a huge barrier to entry for people that are attending events but don't have wallets. In addition, there is often no proof of attendance for the event as the NFT is burned in order to get into the event which requires an internet connection. Keypom aims to solve these problems by having a ticketing system that has the following features. - No wallet is needed to enter the event or receive a POAP. - No wifi is needed at the door. - An NFT is minted on-demand for each user that attends the event. - Users can optionally onboard onto NEAR if they don't have a wallet. In addition, some way to provide analytics to event organizers that contains information such as links that were: - Given out but not clicked at all. - Clicked but not attended. - Partially claimed indicating the number of people that attended but did not onboard or receive a POAP. 
- Fully claimed indicating the number of people that attended and received a POAP. In order to accomplish this, you can create a drop that has 3 uses per key. These uses would be: 1. Array(`null`) 2. Array(`null`) 3. Array(function call to POAP contract to lazy mint an NFT) The event organizer would create the links and distribute them to people however they see fit. When a user receives the link, the first claim is automatically fired. This is a `null` case so nothing happens except for the fact that the key uses are decremented. At this point, the organizer knows that the user has clicked the link since the uses have been decremented. The next claim happens **only** when the user is at the door. Keypom would expose a QR code that can only be scanned by the bouncer's phone. This QR code would appear once the first link is clicked and contains the private key for the link. At the event, they wouldn't need any wifi to get in as they only need to show the bouncer the QR code. Once the bouncer scans it, the site would ensure that they have exactly 2 out of the 3 uses left. If they don't, they're not let in. At that point, a use is decremented from the key and the next time they visit the ticket page (when they have internet), they would be able to claim the final use and be onboarded / receive a POAP. <p align="center"> <img src="assets/flowcharts/ticketing.png" style="width: 65%; height: 65%" alt="Logo"> </p> ## Password Protected Keys Password protecting key uses is an extremely powerful feature that can unlock many use-cases. Keypom has baked flexibility and customization into the contract such that almost all use-cases involving password protection can be accomplished. Whenever a key is added to a drop, it can have a unique password for each individual use, or it can one password for all uses in general. ### How Does It Work? The Keypom implementation has been carefully designed so that users can't look at the NEAR Explorer to view what was passed into the contract either when the drop was created or when a key was used to try and copy those passwords. We also want passwords to be unique across keys so that if you know the password for 1 key, it doesn't work on a different key. In order to accomplish this, we use the concept of hashing. Imagine you have a drop with 2 keys and you want to password protect each key. Rather than forcing the drop funder to input a unique password for each key and having them remember each one, we can have them input a single **base password** and derive unique passwords from it that are paired with the key's public key. This is the most scalable option as it allows the drop funder to only need to remember 1 password and they can derive all the other ones using the hashing algorithm and public key. In the above scenario, let's say the funder inputs the base password as `mypassword1`. If a user wanted to claim the first key, they would need to input into the contract: `hash("mypassword1" + key1_public_key)` The funder would need to give the user this hash somehow (such as embedding it into the link or having an app that can derive it). It's important to note that the funder should probably **NOT** give them the base password otherwise the user could derive the passwords for all other keys (assuming those keys have the same base password). ### What is Stored On-Chain? How does Keypom verify that the user passed in the correct password? 
If the funder were to simply pass in `hash("mypassword1" + key1_public_key)` into the contract as an argument when the key is created, users could just look at the NEAR Explorer and copy that value. Instead, the funder needs to pass in a double hash when the key is created: `hash(hash("mypassword1" + key1_public_key))`. This is the value that is stored on-chain and when the user tries to claim the key, they would pass in just the single hash: `hash("mypassword1" + key1_public_key)`. The contract would then compute `hash(hash("mypassword1" + key1_public_key))` and compare it to the value stored on-chain. If they match, the key is claimed. Using this method, the base password is not exposed to the user, nobody can look on-chain or at the NEAR Explorer and derive the password, and the password is unique across multiple keys. ## Passwords Per Key Use Unlike the password per key, which is the same for all uses of a key, the drop creator can specify a password for each individual key use. This password follows the same pattern as the passwords per key in that the funder inputs a `hash(hash(SOMETHING))` and then the user would input `hash(SOMETHING)` and the contract would hash this and compare it to the value stored on-chain. The difference is that each individual key use can have a different value stored on-chain such that the user can be forced to input a different hash each time. This `SOMETHING` that is hashed can be similar to the global password per key example but this time, the desired key use is added: `hash("mypassword1" + key1_public_key + use_number)` In order to pass in the passwords per use, a new data structure is introduced so you only need to pass in passwords for the uses that have them. This is known as the `JsonPasswordForUse` and is as follows: ```rust pub struct JsonPasswordForUse { /// What is the password for this use (such as `hash("mypassword1" + key1_public_key + use_number)`) pub pw: String, /// Which use does this pertain to pub key_use: u64 } ``` ## Adding Your First Password Whenever keys are added to Keypom, if there are passwords involved, they must be passed in using the following format. ```rust passwords_per_use: Option<Vec<Option<Vec<JsonPasswordForUse>>>>, passwords_per_key: Option<Vec<Option<String>>>, ``` Each key that is being added either has a password or doesn't. This is expressed through the `Vec<Option<...>>`. This vector **MUST** be the same length as the number of keys created. This doesn't mean that every key needs a password, but the vector must be the same length as the keys. As an example, if you wanted to add 3 keys to a drop and wanted only the first and last key to have a password per key, you would pass in: ```rust passwords_per_key: Some(vec![Some(hash(hash(STUFF))), None, Some(hash(hash(STUFF2)))]) ``` ## Complex Example To help solidify the concept of password protected keys, let's go through a complex example. Imagine Alice created a drop with a `uses_per_key` of 3. She wants to create 4 keys: - Key A: No password protection. - Key B: Password for uses 1 and 2. - Key C: Password for use 1 only. - Key D: Password that doesn't depend on the use. In this case, Keys B and C will have the same base password, but Alice wants to switch things up and have a different base password for Key D.
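Before looking at how these values are passed to the contract, here is a minimal sketch of how Alice could derive them locally. It assumes sha256 with hex output as the hashing function; the exact hashing scheme the contract expects isn't spelled out in this section, so treat the algorithm as an assumption:

```js
const crypto = require("crypto");

// Illustrative helper: hash an input string to a hex digest (assumed scheme).
const sha256 = (input) => crypto.createHash("sha256").update(input).digest("hex");

const basePassword = "keys_bc_base_password";
const keyBPublicKey = "ed25519:..."; // placeholder for Key B's actual public key

// What the claimer must supply for use 1 of Key B (the single hash):
const passwordForUse1 = sha256(basePassword + keyBPublicKey + "1");

// What Alice stores on-chain when adding Key B (the double hash):
const storedOnChainForUse1 = sha256(passwordForUse1);
```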
When these keys are added on-chain, the `passwords_per_key` will be passed in as such: ```rust passwords_per_key: Some(vec![ None, // Key A None, // Key B None, // Key C // Key D Some( hash(hash("key_d_base_password" + key_d_public_key)) ), ]), ``` The passwords for Key B and Key C will be passed in as such: ```rust passwords_per_use: Some(vec![ None, // Key A // Key B vec![ { pw: hash(hash("keys_bc_base_password" + key_b_public_key + "1")), key_use: 1 }, { pw: hash(hash("keys_bc_base_password" + key_b_public_key + "2")), key_use: 2 } ] // Key C vec![ { pw: hash(hash("keys_bc_base_password" + key_c_public_key + "1")), key_use: 1 } ] None // Key D ]), ``` The drop funder would then give the keys out to people: ### Key A Alice gives Bob Key A and he would be able to claim it 3 times with no password required. ### Key D Alice gives Charlie Key D and he would be able to claim it 3 times with the hashed global key password: `hash("key_d_base_password" + key_d_public_key)`. When Charlie uses the key, he would input the password `hash("key_d_base_password" + key_d_public_key)` and the contract would hash that and check to see if it matches what is stored on-chain (which it does). If anyone tried to look at what Charlie passes in through the explorer, it wouldn't work since his hash contains the public key for key D and as such it is only valid for Key D. Similarly, if Charlie tried to look at the explorer when Alice created the keys and attempted to pass in `hash(hash("key_d_base_password" + key_d_public_key))`, the contract would attempt to hash this and it would NOT match up with what's in the storage. ### Key B Alice gives Eve Key B and she would need a password for claim 1 and 2. For the first claim, she needs to pass in: `hash("keys_bc_base_password" + key_b_public_key + "1")`. The contract would then check and see if the hashed version of this matches up with what's stored on-chain for that use. The second time Eve uses the key, she needs to pass in `hash("keys_bc_base_password" + key_b_public_key + "2")` and the same check is done. If Eve tries to pass in `hash("keys_bc_base_password" + key_b_public_key + "1")` for the second key use, the contract would hash it and check: ``` hash(hash("keys_bc_base_password" + key_b_public_key + "1")) == hash(hash("keys_bc_base_password" + key_b_public_key + "2")) ``` Which is incorrect and the key would not be claimed. Once Eve uses the key 2 times, the last claim is not password protected and she's free to claim it. Key C is similar to Key B except that it only has 1 password for the first use. ## Use-Cases Password protecting key uses is a true game changer for a lot of use-cases spanning from ticketing to simple marketing and engagement. #### Ticketing and POAPs Imagine you had an event and wanted to give out exclusive POAPs to people that came. You didn't want to force users to: - Have a NEAR wallet - Have wifi at the door. - Burn NFTs or tokens to get into the event. The important thing to note is that by using password protected key uses, you can **GUARANTEE** that anyone that received a POAP had to **PHYSICALLY** show up to the event. This is because the POAP would be guarded by a password. You could create a ticketing event using Keypom as outlined in the [Ticketing](#nft-ticketing) section and have a key with 2 uses. The first use would be password protected and the second use is not. The first use will get you through the door and into the event and the second contains the exclusive POAP and can onboard you. 
This means that anyone with the ticket, or key, can only receive the POAP if they know the password. You can have a scanner app that would scan people's tickets (tickets are just the private key). In this scanner app, the *base password* is stored and whenever the ticket is scanned, the public key is taken and the following hash is created: `hash(base password + public key)` This hash is then used to claim a use of the key and you will be let into the party. The scanner app can deterministically generate all the necessary hashes for all the tickets by simply scanning the QR code (which has the private key exposed). The tickets are worthless unless you actually show up to the event and are scanned. Once you're scanned, you can refresh your ticket page and use the second key claim, which is not password protected. This use contains the exclusive POAP and you can onboard onto NEAR. #### Marketing and Engagement Let's say that you're at an event and want people to show up to your talks and learn about your project. You can have a scanner app similar to the one mentioned in the ticketing scenario that derives the password for any use on any key. At the beginning of the event, you can give out a bunch of keys that have progressively increasing rewards gated by a password. At the end, the last key use contains a special reward that is only unlocked if the user has claimed all the previous key uses. In order for these uses to be unlocked, people must show up to your talks and get scanned. The scanner will derive the necessary password and unlock the rewards. Users will only get the exclusive reward if they come to ALL your talks. This idea can be further expanded outside the physical realm to boost engagement on your websites as an example: you want users to interact with new features of your site or join your mailing list. You can have links where uses are ONLY unlocked if the user interacts with special parts of your site such as buying a new NFT, joining your mailing list, or clicking an easter egg button on your site, etc. ## dApp Free Trials for Users In the upcoming Keypom V2.0, dApps will be able to integrate the Keypom wallet selector plugin to allow for free trials for their users. One of the biggest pain-points with Web3 at the moment is the fact that users need to fund wallets *before* they interact with a dApp. In Web2, a user can find value in an application by using it before they go through the messy onboarding process. Why can't Web3 be the same? Keypom will allow apps to create links that will automatically sign users into their applications and give them a free trial of the app. The user will be able to interact with things, spend $NEAR, sign transactions and gather assets through the trial. A unique feature of this is that the user will *never be redirected to the NEAR wallet* to approve transactions. Keypom will provide a seamless user experience where users can find value in applications. Once the free trial is over and users have collected assets / $NEAR through interacting with the dApp, they can *THEN* choose to onboard. With Keypom's technology, users will be locked into only interacting with the dApp specified in the link. Users can't rug the application and steal the $NEAR embedded in the link. The funds are allocated for 1 thing and 1 thing only: free trials of that one specific dApp.
<p align="center"> <img src="assets/flowcharts/trial_accounts.png" style="width: 65%; height: 65%" alt="Logo"> </p> # Costs It is important to note that the Keypom contract is 100% **FEE FREE** and will remain that way for the *foreseeable future*. This contract is a public good and is meant to inspire change in the NEAR ecosystem. With that being said, there are several mandatory costs that must be taken into account when using Keypom. These costs are broken down into two categories: per key and per drop. > **NOTE:** Creating an empty drop and then adding 100 keys in separate calls will incur the same cost as creating a drop with 100 keys in the same call. ## Per Drop When creating an empty drop, there is only one cost to keep in mind regardless of the drop type: - Storage cost (**~0.006 $NEAR** for simple drops) ## Per Key Whenever keys are added to a drop (either when the drop is first created or at a later date), the costs are outlined below. ### Key Costs for Simple Drop - $NEAR sent whenever the key is used (can be 0). - Access key allowance (**~0.0187 $NEAR per use**). - Storage for creating access key (**0.001 $NEAR**). - Storage cost (**~0.006 $NEAR** for simple drops) ### Additional Costs for NFT Drops Since keys aren't registered for use until **after** the contract has received the NFT, we don't know how much storage the token IDs will use on the contract. To combat this, the Keypom contract will automatically measure the storage used up for storing each token ID in the `nft_on_transfer` function and that $NEAR will be taken from the funder's balance. ### Additional Costs for FT Drops Since accounts claiming FTs may or may not be registered on the Fungible Token contract, Keypom will automatically try to register **all** accounts. This means that the drop creators must front the cost of registering users depending on the `storage_balance_bounds` returned from the FT contract. This applies to every use for every key. In addition, Keypom must be registered on the FT contract. If you create an FT drop and are the first person to ever do so for a specific FT contract on Keypom, Keypom will be automatically registered when the drop is created. This is a one-time cost and once it is done, no other account will need to register Keypom for that specific FT contract. ### Additional Costs for FC Drops Drop creators have a ton of customization available to them when creating Function Call drops. A cost that they might incur is the attached deposit being sent alongside the function call. Keypom will charge creators for all the attached deposits they specify. > **NOTE:** The storage costs are dynamically calculated and will vary depending on the information you store on-chain. ## Deleting Keys and Drops Creators have the ability to delete drops and keys at any time. In this case, **all** the initial costs they incurred for the remaining keys will be refunded to them (minus Gas fees of course). ## Automatic Refunds When Keys are Used One way that Keypom optimizes the fee structure is by performing automatic refunds for some of the initial costs that creators pay for when keys are used. All the storage that is freed along with any unused allowance is automatically sent back to the creator whenever a key is used. This model drastically reduces the overall costs of creating drops and creates incentives for the keys to be used. ## Account Balances for Smooth UX In order to make the UX of using Keypom seamless, the contract introduces a debiting account model.
All costs and refunds go through your account's balance, which is stored on the contract. This balance can be topped up or withdrawn at any moment using the `add_to_balance()` and `withdraw_from_balance()` functions. This account balance is not *required*, however. You can create a drop by attaching a deposit to the call. Keep in mind that this will create an account balance for you behind the scenes. </td> </tr> </table> ## Built With - [near-sdk-rs](https://github.com/near/near-sdk-rs) - [near-api-js](https://github.com/near/near-api-js) # How Linkdrops Work For some background as to how linkdrops work on NEAR: *The funder that has an account and some $NEAR:* - creates a keypair locally `(pubKey1, privKey1)`. The blockchain doesn't know of this key's existence yet since it's all local for now. - calls `send` on the contract and passes in the `pubKey1` as an argument as well as the desired `balance` for the linkdrop. - The contract will map the `pubKey1` to the desired `balance` for the linkdrop. - The contract will then add the `pubKey1` as a **function call access key** with the ability to call `claim` and `create_account_and_claim`. This means that anyone with the `privKey1` that was created locally can claim this linkdrop. - The funder will then create a link to send to someone that contains this `privKey1`. The link follows the following format: ``` wallet.testnet.near.org/linkdrop/{fundingContractAccountId}/{linkdropKeyPairSecretKey}?redirectUrl={redirectUrl} ``` * `fundingContractAccountId`: The contract accountId that was used to send the funds. * `linkdropKeyPairSecretKey`: The corresponding secret key to the public key sent to the contract. * `redirectUrl`: The url that the wallet will redirect to after funds are successfully claimed to an existing account. The accountId used to claim the funds is passed to this URL as a query param. *The receiver of the link that is claiming the linkdrop:* - Receives the link which includes `privKey1` and takes them to the NEAR wallet. - Wallet creates a new keypair `(pubKey2, privKey2)` locally. The blockchain doesn't know of this key's existence yet since it's all local for now. - Receiver will then choose an account ID such as `new_account.near`. - Wallet will then use the `privKey1`, which has access to call `claim` and `create_account_and_claim`, in order to call `create_account_and_claim` on the contract. - It will pass in `pubKey2` which will be used to create a full access key for the new account. - The contract will create the new account and transfer the funds to it alongside any NFT or fungible tokens pre-loaded. </p> # Getting Started There are several ways to get started using Keypom. You can use the NEAR CLI, our Keypom application, our Keypom SDK and more. In this section, we will go over how you can interact with Keypom and create drops using the NEAR-API-JS library and write simple node scripts. ## Prerequisites In order to successfully interact with this contract using the deploy scripts, you should have the following: - [NEAR account](https://docs.near.org/concepts/basics/account) - [Node JS](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) ## Deploy Scripts There are 4 deploy scripts that have been made available for you to use and easily create Keypom links. These are for: - Simple Drops - NFT Drops - FT Drops - Function Call Drops Each drop type deploy script has a version using `NEAR-API-JS`, and a version using the `Keypom-JS SDK`. The file tree for these scripts is shown below.
```bash /deploy ├── ft │ └── configurations.js │ └── ft-create-sdk.js │ └── ft-create.js │ ├── function-call │ └── configurations.js │ └── fc-create-sdk.js │ └── fc-create.js │ ├── nft │ └── configurations.js │ └── nft-create-sdk-minted.js │ └── nft-create-sdk-owned.js │ └── nft-create.js │ ├── simple │ └── configurations.js │ └── simple-create-sdk.js │ └── simple-create.js │ ├── utils ``` In order to use these scripts, open the `deploy/` directory and modify the `configurations.js` file for the drop you want to create. In this file, you can specify important information such as the number of keys you wish to create, the amount of $NEAR you want to send, how many uses per key etc. You must specify the account that you will fund the drops with under the `FUNDING_ACCOUNT_ID` variable. This account needs to have keys stored in your `~/.near-credentials` folder. To do this, simply run `near login` on your terminal and follow the prompts using the NEAR CLI. Once the `configurations.js` file has been modified to your liking, navigate back to the root directory and run the deploy script. For simple drops: ``` // Using NEAR-API-JS yarn simple // Using SDK yarn simple-sdk ``` For FT drops: ``` // Using NEAR-API-JS yarn ft // Using SDK yarn ft-sdk ``` For NFT drops: ``` // Using NEAR-API-JS yarn nft // Using SDK yarn nft-sdk ``` For Function Call drops: ``` // Using NEAR-API-JS yarn fc // Using SDK yarn fc-sdk ``` # Query Information From Keypom Keypom allows users to query a suite of different information from the contract. This information can be broken down into two separate objects that are returned. JsonDrops and JsonKeys. ```rs pub struct JsonDrop { // Drop ID for this drop pub drop_id: DropId, // owner of this specific drop pub owner_id: AccountId, // Balance for all keys of this drop. Can be 0 if specified. pub deposit_per_use: U128, // Every drop must have a type pub drop_type: JsonDropType, // The drop as a whole can have a config as well pub config: Option<DropConfig>, // Metadata for the drop pub metadata: Option<DropMetadata>, // How many uses are registered pub registered_uses: u64, // Ensure this drop can only be used when the function has the required gas to attach pub required_gas: Gas, // Keep track of the next nonce to give out to a key pub next_key_id: u64, } pub struct JsonKeyInfo { // Drop ID for the specific drop pub drop_id: DropId, pub pk: PublicKey, // How many uses this key has left. Once 0 is reached, the key is deleted pub remaining_uses: u64, // When was the last time the key was used pub last_used: u64, // How much allowance does the key have left. When the key is deleted, this is refunded to the funder's balance. pub allowance: u128, // Nonce for the current key. pub key_id: u64, } ``` ## Key Specific - **`get_key_balance(key: PublicKey)`**: Returns the $NEAR that will be sent to the claiming account when the key is used - **`get_key_total_supply()`**: Returns the total number of keys currently on the contract - **`get_keys(from_index: Option<U128>, limit: Option<u64>)`**: Paginate through all keys on the contract and return a vector of key info - **`get_key_information(key: PublicKey)`**: Return the key info for a specific key - **`get_key_information_batch(keys: Vec<PublicKey>)`**: Return a vector of key info for a set of public keys ## Drop Specific - **`get_drop_information(drop_id: Option<DropId>, key: Option<PublicKey>)`**: Return the drop info for a specific drop. This can be queried for by either passing in the drop ID or a public key. 
- **`get_key_supply_for_drop(drop_id: DropId)`**: Return the total number of keys for a specific drop - **`get_keys_for_drop(drop_id: DropId, from_index: Option<U128>, limit: Option<u64>)`**: Paginate through all keys for a specific drop and return a vector of key info - **`get_drop_supply_for_owner(account_id: AccountId)`**: Return the total number of drops for a specific account - **`get_drops_for_owner(account_id: AccountId, from_index: Option<U128>, limit: Option<u64>)`**: Paginate through all drops for a specific account and return a vector of drop info - **`get_nft_supply_for_drop(drop_id: DropId)`**: Get the total number of NFTs registered for a given drop - **`get_nft_token_ids_for_drop(drop_id: DropId, from_index: Option<U128>, limit: Option<u64>)`**: Paginate through token IDs for a given drop - **`get_next_drop_id()`**: Get the next drop ID that will be used for a new drop ### Utility - **`get_root_account()`**: Get the global root account that all created accounts will be based on. - **`get_user_balance()`**: Get the current user balance for a specific account. # Running the Keypom Tests We have put together a suite of test cases that can be found in the `__tests__` folder. These range anywhere from simple config tests all the way to full blown ticketing and POAPs. In the `__tests__` folder, there are sub-folders with each type of test. Some of these sub-folders contain a `utils` folder with some utility functions used. All the tests use `workspaces-js`. In order to run all the tests, run the following command. ```bash yarn && yarn test ``` This will run through each test one by one. If you wish to only run a set of specific tests, the full list of commands can be found below. ```bash "test:internals" "test:stage1" "test:stage1:simple" "test:ticketing" "test:poaps" "test:configs" "test:nft-drops" "test:ft-drops" "test:profiling" "test:passwords" ``` # Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please try to create bug reports that are: - _Reproducible._ Include steps to reproduce the problem. - _Specific._ Include as much detail as possible: which version, what environment, etc. - _Unique._ Do not duplicate existing opened issues. - _Scoped to a Single Bug._ One bug per report. Please adhere to this project's [code of conduct](docs/CODE_OF_CONDUCT.md). You can use [markdownlint-cli](https://github.com/igorshubovych/markdownlint-cli) to check for common markdown style inconsistencies. # License This project is licensed under the **GPL License**. # Acknowledgements Thanks to these awesome resources that were used during the development of the **Keypom Contract**: - <https://github.com/dec0dOS/amazing-github-template> - <https://github.com/near/near-linkdrop> - <https://github.com/near/near-wallet/blob/master/packages/frontend/docs/Linkdrop.md>
glonlas_Learn-blockchains-Web3-dApps
README.md
# Crypto/DeFi/Web3.0 learning content **Welcome to this Crypto/DeFi/Web3.0 ramp-up page** This page provides information on Crypto / DeFi / Web3.0 / and Tokenomics. It is a list of curated articles and videos you can follow at your own pace. ## How to contribute? Feel free to send your pull-requests to add new content you think relevant to this knowledge sharing page. ## Table of content <!-- TOC --> - [Crypto/DeFi/Web3.0 learning content](#cryptodefiweb30-learning-content) - [How to contribute?](#how-to-contribute) - [Table of content](#table-of-content) - [Understand what is a blockchain](#understand-what-is-a-blockchain) - [Deep dive:](#deep-dive) - [Official Development documentation](#official-development-documentation) - [What is a consensus? What type of consensus exists?](#what-is-a-consensus-what-type-of-consensus-exists) - [Deep dive](#deep-dive) - [Blockchain Wallets](#blockchain-wallets) - [Wallet Connect](#wallet-connect) - [What is the notion of Web3.0. Example of products using it](#what-is-the-notion-of-web30-example-of-products-using-it) - [Denomination](#denomination) - [Smart contracts](#smart-contracts) - [Tokenomic](#tokenomic) - [dApps](#dapps) - [DeFI: Decentralized Finance vs Centralized Finance](#defi-decentralized-finance-vs-centralized-finance) - [DAO (Decentralized Autonomous Organization)](#dao-decentralized-autonomous-organization) - [Rebase DAO](#rebase-dao) - [Oracles](#oracles) - [NFT: Collectibles](#nft-collectibles) - [Other type of NFT](#other-type-of-nft) - [Play to earn](#play-to-earn) - [Hand-on code: Building products on top of Blockchain](#hand-on-code-building-products-on-top-of-blockchain) - [Building on Ethereum network](#building-on-ethereum-network) - [Blockchain (Smart Contract) EVM env:](#blockchain-smart-contract-evm-env) - [Frontend Dev env:](#frontend-dev-env) - [Building on Solana network](#building-on-solana-network) - [How to connect a Wallet to a dApp](#how-to-connect-a-wallet-to-a-dapp) - [Ethereum](#ethereum) - [Solana](#solana) - [Cardano](#cardano) - [Appendix](#appendix) - [Energy consumption and environmental footprint](#energy-consumption-and-environmental-footprint) - [Security exploit and past errors to be aware of](#security-exploit-and-past-errors-to-be-aware-of) - [Various articles on the Blockchain/Web3.0 economy](#various-articles-on-the-blockchainweb30-economy) - [Crypto regulations and Travel Rules](#crypto-regulations-and-travel-rules) - [Regulations](#regulations) - [Travel Rules](#travel-rules) <!-- /TOC --> ## Understand what is a blockchain 1. What is a blockchain 1. [Technical explanation of a blockchain](https://www.youtube.com/watch?v=Jp7T9qtuRIE&ab_channel=DistributedSystemsCourse) (3 min video) 2. [Why is there so many blockchains](https://podcasts.apple.com/us/podcast/blockchains-explained-with-dragonfly-capitals-haseeb/id1593332793?i=1000554933893) (38 min podcast) 2. Cryptocurrency wallets 1. [What is Crypto wallet](https://www.youtube.com/watch?v=SQyg9pyJ1Ac&ab_channel=WhiteboardCrypto) (5 min video) 2. [The Present & Future of Self-Custody with Coinbase Wallet Director of Engineering](https://podcasts.apple.com/us/podcast/the-present-future-of-self-custody-with/id1593332793?i=1000556375654) (33 min podcast) 3. Layer 0 Blockchain (Cross chain interoperability): 1. [What is Polkadot blockchain](https://www.youtube.com/watch?v=YlAdEQp6ekM&ab_channel=WhiteboardCrypto) (8 min video) 2. [What is Polkadot blockchain](https://polkadotters.medium.com/what-is-polkadot-85d4af1b2fe7) (Article) 3. 
[What is Cosmos](https://www.youtube.com/watch?v=y4d8XMBVF1A) (21 min video) 1. [Cosmos Complete Begineer's guide](https://www.youtube.com/watch?v=sgIGVsg51W8&ab_channel=CoinBureau) (21 min video) 4. Layer 1 Blockchain: 1. [Ethereum Explained](https://www.youtube.com/watch?v=BR8jgh6-6gs) (explain blockchain, contract, wallet, ...) (16 min video) 1. [What is Ethereum v2.0?](https://www.youtube.com/watch?v=pycVClxWUN8) (9 min video) 2. [What is Cardano blockchain](https://www.youtube.com/watch?v=UMUztLQNqSI&t=439s) (10 min video) 3. [What is Solana blockchain](https://www.youtube.com/watch?v=1jzROE6EhxM)(10 min video) 1. [Solana components detailed](https://www.analyticssteps.com/blogs/what-solana-features-and-working-system) (Article) 4. [What is Avalanche blockchain](https://www.youtube.com/watch?v=CbM2jidEn0s) (10 min video) 1. [Advantage of Avalanche - a high TPS EMV blockchain](https://cryptobriefing.com/what-is-avalanche-the-layer-1-blockchains-ecosystem-unpacked/) (Article) 5. [What is Fantom blockchain](https://www.youtube.com/watch?v=wPFjbhyLpCY&ab_channel=CoinBureau) (24 min video) 6. [What is NEAR blockchain](https://www.youtube.com/watch?v=r61UszUFwNY) (21 min video) 1. [NEAR: Protocol Economics Design](https://www.youtube.com/watch?v=6fKY5TaiakM) (18 min video - Updated on 23 Apr 2022) 1. [NEAR: Open the web concept](https://www.youtube.com/watch?v=s3lhhyNCRwU) (17 min video) 5. Layer 2 Blockchain (Parachain): 1. [What is a Layer-2 blockchain](https://www.youtube.com/watch?v=9pJjtEeq-N4) (8 min video) 2. [What is a Layer 2](https://www.youtube.com/watch?v=BgCgauWVTs0) (10 min video) 3. [Understand Layer 2 with Polygon: What is Polygon](https://www.youtube.com/watch?v=f7F67ZP9fsE) (10 min video) 4. [What is Polygon](https://www.youtube.com/watch?v=IijtdpAtOt0) (14 min video) 5. [Blockchain bridges explained](https://www.youtube.com/watch?v=OPfjEtACFMs) (6 min video) 6. How to [Bridge to Layer 2](https://www.youtube.com/watch?v=z09LyktKau4) (34 min video) 7. [Layer-2 will not save Ethereum](https://medium.com/coinmonks/layer-2-wont-save-ethereum-a52aa2bd719b) (Article) 8. Binance Smart Chain (BSC) 1. [BSC is a lost cause](https://github.com/binance-chain/bsc/issues/553) (post) 6. 
[Detail overview of blockchains](https://medium.com/coinmonks/unhyped-comparison-of-blockchain-platforms-679e122947c1) (Article) ### Deep dive: * [What is a Merkle Tree](https://www.youtube.com/watch?v=fB41w3JcR7U&ab_channel=Telusko) (3 min video) * [Anatomy of a blockchain block](https://medium.com/@eiki1212/ethereum-block-structure-explained-1893bb226bd6) (Article) * [How Ethereum nodes communicate together](https://medium.com/orbs-network/the-actual-networking-behind-the-ethereum-network-how-it-works-6e147ca36b45) (Article) ### Official Development documentation * [Ethereum Development doc](https://ethereum.org/en/developers/docs/) (official doc) * [Deep dive into Ethereum P2P network](https://github.com/ethereum/wiki/wiki/%C3%90%CE%9EVp2p-Wire-Protocol) (Ethereum doc) * [Deep dive on how Ethereum works](https://github.com/ethereum/wiki/wiki) (Ethereum doc) * [Solana Development doc](https://docs.solana.com/developing/programming-model/overview) * [Cardano Development doc](https://docs.cardano.org/) * [Bitcoin Development doc](https://developer.bitcoin.org/reference/) * [Polkadot Development doc](https://wiki.polkadot.network/docs/build-build-with-polkadot) * [Polygon Development doc](https://docs.polygon.technology/) **To the question:** “How a new node to the network with no past experience can find other nodes?” The answer is there is a list of boot nodes hardcoded in the code. A new nose without any cache will hit these Nodes as entry point, they will share other available nodes addresses. The discovery peer advertisement uses the [Kademelia DHT protocol](https://en.m.wikipedia.org/wiki/Kademlia). In 2015 when Ethereum started it had only 3 boot nodes (seems to be still the case, this is the only centralize point is the blockchain). ## What is a consensus? What type of consensus exists? 1. [What is a proof of work?](https://www.youtube.com/watch?v=XLcWy1uV8YM) (10 min video) 2. [What is a Proof of stake? How it works](https://www.youtube.com/watch?v=x83EVUZ_EWo) (10 min video) 3. [What is a Proof of stake?](https://www.youtube.com/watch?v=sRgrn9HDYpM&ab_channel=DappUniversity) (5 min video) 4. What is proof of Authority? (Used by smaller blockchain) 5. [What is proof of History?](https://medium.com/solana-labs/how-solanas-proof-of-history-is-a-huge-advancement-for-block-time-178899c89723) (Used by Solana) 6. [13 Proof of (Consensus algorithm) explained](https://www.youtube.com/watch?v=ah94PuwR1DI&t=444s)(12 min) ### Deep dive 1. [What is a consensus](https://www.youtube.com/watch?v=LFZDpF_A-BE&t=34s&ab_channel=DistributedSystemsCourse) (1 min video) 2. [Paxos Consensus](https://www.youtube.com/watch?v=SRsK-ZXTeZ0&t=809s&ab_channel=DistributedSystemsCourse) (35 min video) 3. [Bysantin Consensus](https://www.youtube.com/watch?v=_e4wNoTV3Gw&ab_channel=DistributedSystemsCourse) (27 min video) 4. [Bitcoin Blockchain consensus](https://www.youtube.com/watch?v=f1ZJPEKeTEY&ab_channel=DistributedSystemsCourse) (20 min video) 5. [Should you use Bitcoin Consensus](https://www.youtube.com/watch?v=MVPkHPEsC4Y&t=2s&ab_channel=DistributedSystemsCourse) (12 min video) ## Blockchain Wallets 1. Must watch: [Explanation of BIP-39: Wallet Mnemonic words derivation](https://youtu.be/hRXcY_tIlrw) (18 min video) 2. Must watch: [Explanation of BIP-32: Hierarchical Deterministic Wallet](https://youtu.be/2HrMlVr1QX8) (26 minvideo) 3. [BIP-39: Mnemonic word list](https://github.com/bitcoin/bips/blob/master/bip-0039/bip-0039-wordlists.md) (Github) 4. 
## What is a consensus? What types of consensus exist?

1. [What is a proof of work?](https://www.youtube.com/watch?v=XLcWy1uV8YM) (10 min video) (see the toy hashing sketch after the deep-dive list below)
2. [What is a Proof of stake? How it works](https://www.youtube.com/watch?v=x83EVUZ_EWo) (10 min video)
3. [What is a Proof of stake?](https://www.youtube.com/watch?v=sRgrn9HDYpM&ab_channel=DappUniversity) (5 min video)
4. What is proof of Authority? (used by smaller blockchains)
5. [What is proof of History?](https://medium.com/solana-labs/how-solanas-proof-of-history-is-a-huge-advancement-for-block-time-178899c89723) (used by Solana)
6. [13 Proof of (Consensus algorithm) explained](https://www.youtube.com/watch?v=ah94PuwR1DI&t=444s) (12 min)

### Deep dive

1. [What is a consensus](https://www.youtube.com/watch?v=LFZDpF_A-BE&t=34s&ab_channel=DistributedSystemsCourse) (1 min video)
2. [Paxos Consensus](https://www.youtube.com/watch?v=SRsK-ZXTeZ0&t=809s&ab_channel=DistributedSystemsCourse) (35 min video)
3. [Byzantine Consensus](https://www.youtube.com/watch?v=_e4wNoTV3Gw&ab_channel=DistributedSystemsCourse) (27 min video)
4. [Bitcoin Blockchain consensus](https://www.youtube.com/watch?v=f1ZJPEKeTEY&ab_channel=DistributedSystemsCourse) (20 min video)
5. [Should you use Bitcoin Consensus](https://www.youtube.com/watch?v=MVPkHPEsC4Y&t=2s&ab_channel=DistributedSystemsCourse) (12 min video)
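As a companion to the proof-of-work video above, here is a toy TypeScript sketch of the idea: brute-force a nonce until the hash of the block data meets an artificial difficulty target. It is illustrative only (real chains use a numeric target, block headers, and much more), not how any production client is written.

```ts
import { createHash } from "crypto";

// Toy proof-of-work: find a nonce so that sha256(data + nonce) starts with
// `difficulty` zero hex characters. Only illustrates the brute-force search.
function mine(data: string, difficulty: number): { nonce: number; hash: string } {
  const prefix = "0".repeat(difficulty);
  let nonce = 0;
  for (;;) {
    const hash = createHash("sha256").update(data + nonce).digest("hex");
    if (hash.startsWith(prefix)) return { nonce, hash };
    nonce++;
  }
}

// Example: a low difficulty so it finishes quickly.
console.log(mine("block #1: alice -> bob: 10", 4));
```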
## Blockchain Wallets

1. Must watch: [Explanation of BIP-39: Wallet Mnemonic words derivation](https://youtu.be/hRXcY_tIlrw) (18 min video)
2. Must watch: [Explanation of BIP-32: Hierarchical Deterministic Wallet](https://youtu.be/2HrMlVr1QX8) (26 min video) (a small derivation sketch follows this list)
3. [BIP-39: Mnemonic word list](https://github.com/bitcoin/bips/blob/master/bip-0039/bip-0039-wordlists.md) (Github)
4. BIP-44: [Hierarchical Deterministic Wallet](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki) (Github)
   1. [Keys, Addresses, Wallets explained](https://www.oreilly.com/library/view/mastering-bitcoin/9781491902639/ch04.html) (Article)
5. BIP-48: [Multi-sig Wallet](https://github.com/bitcoin/bips/blob/master/bip-0048.mediawiki) (Github)
   1. [Multi-signature Wallet, how does it work?](https://www.makeuseof.com/what-are-multi-signature-wallets/) (Article)
6. [Ethereum HD Wallet (BIP-32) spec](https://www.alibabacloud.com/blog/how-ethereum-bip-32-hardware-digital-wallet-works_597788) (Article)
   1. [Ethereum HD wallet BIP-32/BIP-44 spec and implementation](https://wolovim.medium.com/ethereum-201-hd-wallets-11d0c93c87f7) (Article)
   2. [Ethereum Mnemonic BIP-39 specs and implementation](https://wolovim.medium.com/ethereum-201-mnemonics-bb01a9108c38) (Article)
7. [Solana Wallet specs](https://docs.solana.com/wallet-guide/paper-wallet) (Official docs)
8. [Cardano Wallet specs](https://input-output-hk.github.io/cardano-wallet/) (official doc)
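Below is a minimal sketch of the mnemonic-to-accounts flow described by BIP-39/BIP-32/BIP-44, assuming the Ethers.js v5 library referenced in the frontend tooling section further down. The mnemonic is a well-known throwaway test phrase, not a real wallet.

```ts
import { ethers } from "ethers";

// Minimal sketch of the BIP-39 / BIP-32 / BIP-44 flow using ethers v5.
// The mnemonic below is a published test phrase: never use it for real funds.
const mnemonic =
  "test test test test test test test test test test test junk";

// BIP-39: the mnemonic encodes the entropy that seeds the wallet.
// BIP-32/BIP-44: child keys are derived along a standard path,
// m/44'/60'/0'/0/i for Ethereum accounts.
const root = ethers.utils.HDNode.fromMnemonic(mnemonic);
for (let i = 0; i < 3; i++) {
  const child = root.derivePath(`m/44'/60'/0'/0/${i}`);
  console.log(`account #${i}:`, child.address);
}
```

The same mnemonic always yields the same accounts, which is why backing up the phrase is equivalent to backing up every derived key.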
### Wallet Connect

[What is Wallet connect?](https://www.youtube.com/watch?v=PHRPoSRXPI0) (13 min video)

## What is the notion of Web 3.0? Examples of products using it

* [What is Web3.0](https://www.youtube.com/watch?v=TV7SHUGTxNU&ab_channel=DappUniversity) (14 min video)
* [Web 3.0 in a nutshell](https://eshita.mirror.xyz/H5bNIXATsWUv_QbbEz6lckYcgAa2rhXEPDRkecOlCOI) (Article)

### Denomination

* Fungible token: ERC-20
* Official bank token: CBDC (Central Bank Digital Currencies)
* Non-Fungible Token: ERC-721

### Smart contracts

* [Smart Contract explained](https://www.youtube.com/watch?v=pWGLtjG-F5c) (15 min video)
* [Create your first ETH Smart Contract](https://www.youtube.com/watch?v=ooN6kZ9vqNQ) (20 min video)
* [Create a Token (ERC-20)](https://www.youtube.com/watch?v=ZLFiGHIxS1c&t=204s) (22 min video)
* [Make a Payment Subscription with Smart contract](https://www.youtube.com/watch?v=yMwdovqrbM4) (15 min video)
* [How to upgrade an immutable Smart Contract?](https://www.youtube.com/watch?v=bdXJmWajZRY) (30 min video)
* [How to upgrade a smart contract?](https://www.youtube.com/watch?v=RoXgaAvoIjg) (22 min video)
* [How much data can be saved in a smart contract](https://ethereum.stackexchange.com/questions/68712/how-much-data-can-i-store-in-a-smart-contract-what-is-the-cost-and-how-it-is-im#:~:text=The%20contract%20size%20must%20be,really%20use%20all%20this%20memory%3F) (Article)
  * *As for the maximum amount of data a contract can store, see "Is there a (theoretical) limit for the amount of data that a contract can store?". So **in theory you can store 2^261 bytes**, but in practice you can never get anywhere near that limit. As Vitalik points out in his post in that link, **the blockchain will cease to function before you reach that hard limit**.*
* [How gas is calculated?](https://hackernoon.com/ether-purchase-power-df40a38c5a2f) (Article)

### Tokenomics

1. [Deep dive into Web 3.0 and Tokenomy](https://github.com/sherminvo/TokenEconomyBook) (Book)

### dApps

* [Swap token explained](https://www.youtube.com/watch?v=LpjMgS4OVzs) (12 min video)
* [Bitcoin on Ethereum blockchain? How is it possible?](https://www.youtube.com/watch?v=iExly7FGKAQ) (14 min video)

### DeFi: Decentralized Finance vs Centralized Finance

* [History of DeFi](https://www.youtube.com/watch?v=qFBYB4W2tqU) (18 min video)
* [What is DeFi?](https://www.youtube.com/watch?v=k9HYC0EJU6E) (12 min video)
* [What is a DEX (Decentralized Exchange)](https://www.youtube.com/watch?v=2tTVJL4bpTU) (7 min video)
* [What is a Liquidity pool](https://www.youtube.com/watch?v=cizLhxSKrAc) (10 min video)
* [What are Flash loans](https://www.youtube.com/watch?v=Aw7yvGFtOvI) (18 min video)
* [What is Yearn finance](https://www.youtube.com/watch?v=qG1goOptZ5w) (10 min video)
* [Lending and Borrowing finance explained](https://www.youtube.com/watch?v=WwE3lUq51gQ) (14 min video)
* [Impermanent loss explained](https://www.youtube.com/watch?v=8XJ1MSTEuU0) (10 min video)
* [How hackers steal tokens with DeFi](https://rekt.news/) (list of articles)

#### DAO (Decentralized Autonomous Organization)

1. [What is a DAO](https://www.youtube.com/watch?v=ubZKD_BAvpo) (2 min video)
2. [Deep dive into all types of DAO](https://www.youtube.com/watch?v=MFEXFvCFywc) (22 min video)

### Rebase DAO

1. [What is OlympusDAO](https://docs.olympusdao.finance/main/) (the project that created this trend)
2. [Dive deep into Rebase DAO Tokenomics](https://www.youtube.com/watch?v=-ZodrK_V8Fw&t=1s&ab_channel=SiamKidd)

### Oracles

1. [What is an oracle?](https://www.youtube.com/watch?v=ZJfkNzyO7-U&ab_channel=Chainlink) (6 min video)

### NFT: Collectibles

1. [NFTs explained](https://www.youtube.com/watch?v=Xdkkux6OxfM) (10 min video)
2. [Example of NFT platform](https://www.youtube.com/watch?v=z8MCevWETm4) (30 min video)
3. [How to create a collectible](https://www.youtube.com/watch?v=YPbgjPPC1d0) (NFT ERC-721) (2 hours video)

Deep dive:

* [CryptoKitties](https://www.cryptokitties.co/) (website)
* [CryptoPunk](https://www.larvalabs.com/cryptopunks) (website)
* [Opensea](https://opensea.io/) (website)

### Other types of NFT

1. [Arianee protocol](https://www.youtube.com/watch?v=Z7v41l4I-Gc&ab_channel=Arianee) (2 min video) (prevents product counterfeiting)
2. [ENS (decentralized domain name)](https://www.youtube.com/watch?v=P8RlPsjGaR8) (7 min video)
3. [Build a transparent Supply chain with blockchain](https://hbr.org/2020/05/building-a-transparent-supply-chain) (Article)

### Play to earn

* [Top 10 Crypto games](https://medium.com/general_knowledge/top-10-crypto-gaming-projects-bonus-612249019a5e) (Article)
* [What is Play to Earn](https://www.youtube.com/watch?v=zchIkjXtOtk) (20 min video)
* [Play to earn game: Axie Infinity](https://www.youtube.com/watch?v=mXEYCXCPI5c) (6 min video)

Deep dive:

* [Axie Infinity](https://axieinfinity.com/) (website)
* [Skyweaver Trading card game](https://www.skyweaver.net/) (website)

## Hands-on code: Building products on top of Blockchain

### Building on Ethereum network

* [Beginner: Step by step creating a dApp](https://www.youtube.com/watch?v=nvw27RCTaEw) (1.5 hours video)
* [Master Solidity development](https://www.youtube.com/watch?v=YJ-D1RMI0T0) (2 hours video)
* [Build a token swap dApp](https://www.youtube.com/watch?v=99pYGpTWcXM) (3 hours video)
* [Create an Instagram clone with Blockchain](https://www.youtube.com/watch?v=8rhueOcTu8k) (2 hours video)
* [Full course](https://www.youtube.com/watch?v=M576WGiDBdQ&list=WL&index=39) (16 hours video)
* [Create a token Airdrop](https://www.youtube.com/watch?v=YXsMgSgE_Pw) (34 min video)
* https://buildspace.so/

#### Blockchain (Smart Contract) EVM env:

* [Remix IDE](https://remix.ethereum.org/) (Web IDE)
* [Ganache](https://www.trufflesuite.com/ganache) (development blockchain)
* [TruffleSuite](https://www.trufflesuite.com/) (Truffle, smart contract testing framework)

#### Frontend Dev env:

* [Web3.js](https://web3js.readthedocs.io/en/v1.3.4/): JS connector to the blockchain (typically used to connect the website to the blockchain)
* [Ethers.js](https://docs.ethers.io/v5/)
* React.js
* [Express.js](https://expressjs.com/)
* [Expo](https://expo.dev/)

### Building on Solana network

* [Create Solana Token and NFT](https://www.youtube.com/watch?v=L4WWQzOBNIg) (41 min video)
* [Solana dev doc](https://docs.solana.com/)
* [Solang, a Solidity compiler for Solana](https://solang.readthedocs.io/en/latest/)

### How to connect a Wallet to a dApp

(A minimal browser-wallet connection sketch follows this section.)

#### Ethereum

* [Ethereum: What is Wallet connect?](https://www.youtube.com/watch?v=PHRPoSRXPI0) (13 min video)
* [Ethereum: Wallet connect SDK](https://github.com/WalletConnect) (Github)
* [Ethereum/Solana: Coinbase Wallet SDK](https://github.com/coinbase/coinbase-wallet-sdk) (Github)

#### Solana

* [Solana wallet connection](https://solana-labs.github.io/wallet-adapter/) (Github)

#### Cardano

One day... but for now:

* [Cardano DApp Wallet Connector - demo app and example code](https://forum.cardano.org/t/cardano-dapp-wallet-connector-demo-app-and-example-code/95691) (Forum)
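As a concrete illustration of what "connecting a wallet" means on Ethereum in the browser, here is a minimal TypeScript sketch assuming an injected EIP-1193 provider such as MetaMask (the WalletConnect and Coinbase SDKs above expose a similar provider object).

```ts
export {};

// Minimal sketch of connecting an injected Ethereum wallet (EIP-1193),
// e.g. MetaMask. WalletConnect / Coinbase SDKs expose a similar provider.
interface Eip1193Provider {
  request(args: { method: string; params?: unknown[] }): Promise<unknown>;
}

declare global {
  interface Window {
    ethereum?: Eip1193Provider;
  }
}

async function connectWallet(): Promise<string> {
  if (!window.ethereum) throw new Error("No injected wallet found");
  // Prompts the user to approve the connection and returns their accounts.
  const accounts = (await window.ethereum.request({
    method: "eth_requestAccounts",
  })) as string[];
  return accounts[0];
}

connectWallet().then((account) => console.log("Connected as", account));
```

From there, dApps usually hand the provider to a library such as Web3.js or Ethers.js (listed above) to read chain data and send transactions.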
# Appendix

## Energy consumption and environmental footprint

**Open discussion:** with layer 1 and layer 2 blockchains multiplying, companies that want to onboard to blockchain/Web3.0 will also have to consider their environmental footprint. For instance, I foresee companies moving to a blockchain because its energy cost is lower than another's. On this topic I found the following articles, but please feel free to add yours in the comments of this thread.

1. [Ethereum average energy consumption per transaction compared to that of VISA as of October 21, 2021](https://www.statista.com/statistics/1265891/ethereum-energy-consumption-transaction-comparison-visa/)
2. [Bitcoin average energy consumption per transaction compared to that of VISA as of October 21, 2021](https://www.statista.com/statistics/881541/bitcoin-energy-consumption-transaction-comparison-visa/) (even if Bitcoin is not of much interest for Web3 for now)
3. [Solana’s Energy Use Report: November 2021](https://solana.com/news/solana-energy-usage-report-november-2021)
4. [Tezos Excels As An Energy-Efficient Blockchain According To New PWC Report](https://xtz.news/latest-tezos-news/tezos-excels-as-energy-efficient-blockchain-according-to-new-pwc-report/)
5. [Why the debate about crypto's energy consumption is flawed](https://www.weforum.org/agenda/2022/03/crypto-energy-consumption/)

## Security exploits and past errors to be aware of

1. [Kucoin hack, $45M stolen](https://rekt.news/epic-hack-homie/) (Article)
2. [Poly Network hack, $611M stolen](https://rekt.news/polynetwork-rekt/) (Article)
3. [Ethereum Uniswap V3 LP lower performance than V2 LP](https://rekt.news/uniswap-v3-lp-rekt/) (Article)
4. [Solana BadgerDAO, $120M stolen](https://www.theblockcrypto.com/post/126072/defi-protocol-badgerdao-exploited-for-120-million-in-front-end-attack) (Article)
5. [Bitmart hack, $196M stolen](https://rekt.news/bitmart-rekt/) (Article)
6. [Compound hack, $147M stolen](https://rekt.news/compound-rekt/) (Article)
7. [Avalanche Snowdog DAO Rekt](https://rekt.news/snowdog-rekt/) (Article)
8. [Crypto.COM $33.7M stolen](https://rekt.news/cryptocom-rekt/) (Article)
9. [Solana Wormhole $326M stolen](https://rekt.news/wormhole-rekt/) (Article)
10. [Solana Cashio infinite minting](https://rekt.news/cashio-rekt/) (Article)
11. [Ronin network $624M stolen](https://rekt.news/ronin-rekt/) (Article)

## Various articles on the Blockchain/Web3.0 economy

1. [Tokens are a new digital primitive, analogous to the website](https://cdixon.mirror.xyz/0veLm9KKWae4T6_H3siLpKF933NSdC3F75jhPQw_qWE) (Article)
2. [What is Olympus Pro (bonds as a service, liquidity provision)](https://olympusdao.medium.com/introducing-olympus-pro-d8db3052fca5) (Article)

## Crypto regulations and Travel Rules

### Regulations

* [MiCA: A Guide to the EU’s Proposed Markets in Crypto-Assets Regulation](https://www.sygna.io/blog/what-is-mica-markets-in-crypto-assets-eu-regulation-guide/) (Article)
* [Korean crypto exchange Coinone will no longer allow withdrawals to unverified external wallets](https://www.theblockcrypto.com/post/128735/korea-crypto-exchange-%E2%80%8Ecoinone-withdrawals-external-wallets) (Article)

### Travel Rules

* [Guide to the FATF Travel Rule for Cryptocurrency](https://ciphertrace.com/the-complete-guide-to-the-fatf-travel-rule-for-cryptocurrency/) (Article)
* [European Parliament Proposes Expanding 'Travel Rule' to Every Single Crypto Transaction](https://www.coindesk.com/policy/2022/03/07/european-parliament-proposes-expanding-travel-rule-to-every-single-crypto-transaction/) (Article)
kulapio_kulap-near-token-airdrop-poc
README.md babel.config.js package-lock.json package.json public index.html src config.js main.js router index.js vue.config.js
# kulap-near-token-airdrop-poc

## Project setup
```
yarn install
```

### Compiles and hot-reloads for development
```
yarn serve
```

### Compiles and minifies for production
```
yarn build
```

### Customize configuration
See [Configuration Reference](https://cli.vuejs.org/config/).
manhdevit_gmail.near
.gitpod.yml README.md contract README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts main.spec.ts as_types.d.ts index.ts tsconfig.json compile.js node_modules .bin acorn.cmd acorn.ps1 asb.cmd asb.ps1 asbuild.cmd asbuild.ps1 asc.cmd asc.ps1 asinit.cmd asinit.ps1 asp.cmd asp.ps1 aspect.cmd aspect.ps1 assemblyscript-build.cmd assemblyscript-build.ps1 eslint.cmd eslint.ps1 esparse.cmd esparse.ps1 esvalidate.cmd esvalidate.ps1 js-yaml.cmd js-yaml.ps1 mkdirp.cmd mkdirp.ps1 near-vm-as.cmd near-vm-as.ps1 near-vm.cmd near-vm.ps1 nearley-railroad.cmd nearley-railroad.ps1 nearley-test.cmd nearley-test.ps1 nearley-unparse.cmd nearley-unparse.ps1 nearleyc.cmd nearleyc.ps1 node-which.cmd node-which.ps1 rimraf.cmd rimraf.ps1 semver.cmd semver.ps1 shjs.cmd shjs.ps1 wasm-opt.cmd wasm-opt.ps1 .package-lock.json @as-covers assembly CONTRIBUTING.md README.md index.ts package.json tsconfig.json core CONTRIBUTING.md README.md package.json glue README.md lib index.d.ts index.js package.json transform README.md lib index.d.ts index.js util.d.ts util.js node_modules visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js toString.d.ts toString.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformRange.d.ts transformRange.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json package.json @as-pect assembly README.md assembly index.ts internal Actual.ts Expectation.ts Expected.ts Reflect.ts ReflectedValueType.ts Test.ts assert.ts call.ts comparison toIncludeComparison.ts toIncludeEqualComparison.ts log.ts noOp.ts package.json types as-pect.d.ts as-pect.portable.d.ts env.d.ts cli README.md init as-pect.config.js env.d.ts example.spec.ts init-types.d.ts portable-types.d.ts lib as-pect.cli.amd.d.ts as-pect.cli.amd.js help.d.ts help.js index.d.ts index.js init.d.ts init.js portable.d.ts portable.js run.d.ts run.js test.d.ts test.js types.d.ts types.js util CommandLineArg.d.ts CommandLineArg.js IConfiguration.d.ts IConfiguration.js asciiArt.d.ts asciiArt.js collectReporter.d.ts collectReporter.js getTestEntryFiles.d.ts getTestEntryFiles.js removeFile.d.ts removeFile.js strings.d.ts strings.js writeFile.d.ts writeFile.js worklets ICommand.d.ts ICommand.js compiler.d.ts compiler.js package.json core README.md lib as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js reporter CombinationReporter.d.ts CombinationReporter.js EmptyReporter.d.ts EmptyReporter.js IReporter.d.ts IReporter.js SummaryReporter.d.ts SummaryReporter.js VerboseReporter.d.ts VerboseReporter.js test IWarning.d.ts IWarning.js TestContext.d.ts TestContext.js TestNode.d.ts TestNode.js transform assemblyscript.d.ts assemblyscript.js createAddReflectedValueKeyValuePairsMember.d.ts createAddReflectedValueKeyValuePairsMember.js createGenericTypeParameter.d.ts createGenericTypeParameter.js createStrictEqualsMember.d.ts createStrictEqualsMember.js emptyTransformer.d.ts emptyTransformer.js hash.d.ts hash.js index.d.ts index.js util IAspectExports.d.ts IAspectExports.js IWriteable.d.ts IWriteable.js ReflectedValue.d.ts ReflectedValue.js TestNodeType.d.ts TestNodeType.js rTrace.d.ts rTrace.js stringifyReflectedValue.d.ts 
stringifyReflectedValue.js timeDifference.d.ts timeDifference.js wasmTools.d.ts wasmTools.js package.json csv-reporter index.ts lib as-pect.csv-reporter.amd.d.ts as-pect.csv-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json json-reporter index.ts lib as-pect.json-reporter.amd.d.ts as-pect.json-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json snapshots __tests__ snapshot.spec.ts jest.config.js lib Snapshot.d.ts Snapshot.js SnapshotDiff.d.ts SnapshotDiff.js SnapshotDiffResult.d.ts SnapshotDiffResult.js as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js parser grammar.d.ts grammar.js package.json src Snapshot.ts SnapshotDiff.ts SnapshotDiffResult.ts index.ts parser grammar.ts tsconfig.json @assemblyscript loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json @babel code-frame README.md lib index.js package.json helper-validator-identifier README.md lib identifier.js index.js keyword.js package.json scripts generate-identifier-regex.js highlight README.md lib index.js node_modules ansi-styles index.js package.json readme.md chalk index.js package.json readme.md templates.js types index.d.ts color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name .eslintrc.json README.md index.js package.json test.js escape-string-regexp index.js package.json readme.md has-flag index.js package.json readme.md supports-color browser.js index.js package.json readme.md package.json @eslint eslintrc CHANGELOG.md README.md conf config-schema.js environments.js eslint-all.js eslint-recommended.js lib cascading-config-array-factory.js config-array-factory.js config-array config-array.js config-dependency.js extracted-config.js ignore-pattern.js index.js override-tester.js flat-compat.js index.js shared ajv.js config-ops.js config-validator.js deprecation-warnings.js naming.js relative-module-resolver.js types.js package.json @humanwhocodes config-array README.md api.js package.json object-schema .eslintrc.js .travis.yml README.md package.json src index.js merge-strategy.js object-schema.js validation-strategy.js tests merge-strategy.js object-schema.js validation-strategy.js acorn-jsx README.md index.d.ts index.js package.json xhtml.js acorn CHANGELOG.md README.md dist acorn.d.ts acorn.js acorn.mjs.d.ts bin.js package.json ajv .tonic_example.js README.md dist ajv.bundle.js ajv.min.js lib ajv.d.ts ajv.js cache.js compile async.js equal.js error_classes.js formats.js index.js resolve.js rules.js schema_obj.js ucs2length.js util.js data.js definition_schema.js dotjs README.md _limit.js _limitItems.js _limitLength.js _limitProperties.js allOf.js anyOf.js comment.js const.js contains.js custom.js dependencies.js enum.js format.js if.js index.js items.js multipleOf.js not.js oneOf.js pattern.js properties.js propertyNames.js ref.js required.js uniqueItems.js validate.js keyword.js refs data.json json-schema-draft-04.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json package.json scripts .eslintrc.yml bundle.js compile-dots.js ansi-colors README.md index.js package.json symbols.js types index.d.ts ansi-regex index.d.ts index.js package.json readme.md ansi-styles index.d.ts index.js package.json readme.md argparse CHANGELOG.md README.md index.js lib action.js action append.js append constant.js count.js help.js store.js store constant.js false.js true.js subparsers.js version.js action_container.js argparse.js argument error.js exclusive.js group.js argument_parser.js 
const.js help added_formatters.js formatter.js namespace.js utils.js package.json as-bignum README.md assembly __tests__ as-pect.d.ts i128.spec.as.ts safe_u128.spec.as.ts u128.spec.as.ts u256.spec.as.ts utils.ts fixed fp128.ts fp256.ts index.ts safe fp128.ts fp256.ts types.ts globals.ts index.ts integer i128.ts i256.ts index.ts safe i128.ts i256.ts i64.ts index.ts u128.ts u256.ts u64.ts u128.ts u256.ts tsconfig.json utils.ts package.json asbuild README.md dist cli.d.ts cli.js commands build.d.ts build.js fmt.d.ts fmt.js index.d.ts index.js init cmd.d.ts cmd.js files asconfigJson.d.ts asconfigJson.js aspecConfig.d.ts aspecConfig.js assembly_files.d.ts assembly_files.js eslintConfig.d.ts eslintConfig.js gitignores.d.ts gitignores.js index.d.ts index.js indexJs.d.ts indexJs.js packageJson.d.ts packageJson.js test_files.d.ts test_files.js index.d.ts index.js interfaces.d.ts interfaces.js run.d.ts run.js test.d.ts test.js index.d.ts index.js main.d.ts main.js utils.d.ts utils.js index.js node_modules cliui CHANGELOG.md LICENSE.txt README.md index.js package.json wrap-ansi index.js package.json readme.md y18n CHANGELOG.md README.md index.js package.json yargs-parser CHANGELOG.md LICENSE.txt README.md index.js lib tokenize-arg-string.js package.json yargs CHANGELOG.md README.md build lib apply-extends.d.ts apply-extends.js argsert.d.ts argsert.js command.d.ts command.js common-types.d.ts common-types.js completion-templates.d.ts completion-templates.js completion.d.ts completion.js is-promise.d.ts is-promise.js levenshtein.d.ts levenshtein.js middleware.d.ts middleware.js obj-filter.d.ts obj-filter.js parse-command.d.ts parse-command.js process-argv.d.ts process-argv.js usage.d.ts usage.js validation.d.ts validation.js yargs.d.ts yargs.js yerror.d.ts yerror.js index.js locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json yargs.js package.json assemblyscript-json .eslintrc.js .travis.yml README.md assembly JSON.ts decoder.ts encoder.ts index.ts tsconfig.json util index.ts index.js package.json temp-docs README.md classes decoderstate.md json.arr.md json.bool.md json.float.md json.integer.md json.null.md json.num.md json.obj.md json.str.md json.value.md jsondecoder.md jsonencoder.md jsonhandler.md throwingjsonhandler.md modules json.md assemblyscript-regex .eslintrc.js .github workflows benchmark.yml release.yml test.yml README.md as-pect.config.js asconfig.empty.json asconfig.json assembly __spec_tests__ generated.spec.ts __tests__ alterations.spec.ts as-pect.d.ts boundary-assertions.spec.ts capture-group.spec.ts character-classes.spec.ts character-sets.spec.ts characters.ts empty.ts quantifiers.spec.ts range-quantifiers.spec.ts regex.spec.ts utils.ts char.ts env.ts index.ts nfa matcher.ts nfa.ts types.ts walker.ts parser node.ts parser.ts string-iterator.ts walker.ts regexp.ts tsconfig.json util.ts benchmark benchmark.js package.json spec test-generator.js ts index.ts tsconfig.json assemblyscript-temporal .github workflows node.js.yml release.yml .vscode launch.json README.md as-pect.config.js asconfig.empty.json asconfig.json assembly __tests__ README.md as-pect.d.ts date.spec.ts duration.spec.ts empty.ts plaindate.spec.ts plaindatetime.spec.ts plainmonthday.spec.ts plaintime.spec.ts plainyearmonth.spec.ts timezone.spec.ts zoneddatetime.spec.ts constants.ts date.ts duration.ts enums.ts env.ts index.ts instant.ts now.ts 
plaindate.ts plaindatetime.ts plainmonthday.ts plaintime.ts plainyearmonth.ts timezone.ts tsconfig.json tz __tests__ index.spec.ts rule.spec.ts zone.spec.ts iana.ts index.ts rule.ts zone.ts utils.ts zoneddatetime.ts development.md package.json tzdb README.md iana theory.html zoneinfo2tdf.pl assemblyscript README.md cli README.md asc.d.ts asc.js asc.json shim README.md fs.js path.js process.js transform.d.ts transform.js util colors.d.ts colors.js find.d.ts find.js mkdirp.d.ts mkdirp.js options.d.ts options.js utf8.d.ts utf8.js dist asc.js assemblyscript.d.ts assemblyscript.js sdk.js index.d.ts index.js lib loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json rtrace README.md bin rtplot.js index.d.ts index.js package.json umd index.d.ts index.js package.json package-lock.json package.json std README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings Date.ts Math.ts Reflect.ts asyncify.ts console.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts polyfills.ts process.ts reference.ts regexp.ts rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts set.ts shared feature.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts uri.ts util casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts uri.ts vector.ts wasi index.ts portable.json portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json astral-regex index.d.ts index.js package.json readme.md axios CHANGELOG.md README.md UPGRADE_GUIDE.md dist axios.js axios.min.js index.d.ts index.js lib adapters README.md http.js xhr.js axios.js cancel Cancel.js CancelToken.js isCancel.js core Axios.js InterceptorManager.js README.md buildFullPath.js createError.js dispatchRequest.js enhanceError.js mergeConfig.js settle.js transformData.js defaults.js helpers README.md bind.js buildURL.js combineURLs.js cookies.js deprecatedMethod.js isAbsoluteURL.js isURLSameOrigin.js normalizeHeaderName.js parseHeaders.js spread.js utils.js package.json balanced-match .github FUNDING.yml LICENSE.md README.md index.js package.json base-x LICENSE.md README.md package.json src index.d.ts index.js binary-install README.md example binary.js package.json run.js index.js package.json src binary.js binaryen README.md index.d.ts package-lock.json package.json wasm.d.ts bn.js CHANGELOG.md README.md lib bn.js package.json brace-expansion README.md index.js package.json bs58 CHANGELOG.md README.md index.js package.json callsites index.d.ts index.js package.json readme.md camelcase index.d.ts index.js package.json readme.md chalk index.d.ts package.json readme.md source index.js templates.js util.js chownr README.md chownr.js package.json cliui CHANGELOG.md LICENSE.txt README.md build lib index.js string-utils.js package.json color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name README.md index.js package.json commander CHANGELOG.md Readme.md index.js package.json typings index.d.ts concat-map .travis.yml example map.js index.js package.json test map.js cross-spawn CHANGELOG.md README.md index.js lib enoent.js parse.js util escape.js readShebang.js resolveCommand.js package.json csv-stringify README.md lib 
browser index.js sync.js es5 index.d.ts index.js sync.d.ts sync.js index.d.ts index.js sync.d.ts sync.js package.json debug README.md package.json src browser.js common.js index.js node.js decamelize index.js package.json readme.md deep-is .travis.yml example cmp.js index.js package.json test NaN.js cmp.js neg-vs-pos-0.js diff CONTRIBUTING.md README.md dist diff.js lib convert dmp.js xml.js diff array.js base.js character.js css.js json.js line.js sentence.js word.js index.es6.js index.js patch apply.js create.js merge.js parse.js util array.js distance-iterator.js params.js package.json release-notes.md runtime.js discontinuous-range .travis.yml README.md index.js package.json test main-test.js doctrine CHANGELOG.md README.md lib doctrine.js typed.js utility.js package.json emoji-regex LICENSE-MIT.txt README.md es2015 index.js text.js index.d.ts index.js package.json text.js enquirer CHANGELOG.md README.md index.d.ts index.js lib ansi.js combos.js completer.js interpolate.js keypress.js placeholder.js prompt.js prompts autocomplete.js basicauth.js confirm.js editable.js form.js index.js input.js invisible.js list.js multiselect.js numeral.js password.js quiz.js scale.js select.js snippet.js sort.js survey.js text.js toggle.js render.js roles.js state.js styles.js symbols.js theme.js timer.js types array.js auth.js boolean.js index.js number.js string.js utils.js package.json env-paths index.d.ts index.js package.json readme.md escalade dist index.js index.d.ts package.json readme.md sync index.d.ts index.js escape-string-regexp index.d.ts index.js package.json readme.md eslint-scope CHANGELOG.md README.md lib definition.js index.js pattern-visitor.js reference.js referencer.js scope-manager.js scope.js variable.js package.json eslint-utils README.md index.js node_modules eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json package.json eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json eslint CHANGELOG.md README.md bin eslint.js conf category-list.json config-schema.js default-cli-options.js eslint-all.js eslint-recommended.js replacements.json lib api.js cli-engine cli-engine.js file-enumerator.js formatters checkstyle.js codeframe.js compact.js html.js jslint-xml.js json-with-metadata.js json.js junit.js stylish.js table.js tap.js unix.js visualstudio.js hash.js index.js lint-result-cache.js load-rules.js xml-escape.js cli.js config default-config.js flat-config-array.js flat-config-schema.js rule-validator.js eslint eslint.js index.js init autoconfig.js config-file.js config-initializer.js config-rule.js npm-utils.js source-code-utils.js linter apply-disable-directives.js code-path-analysis code-path-analyzer.js code-path-segment.js code-path-state.js code-path.js debug-helpers.js fork-context.js id-generator.js config-comment-parser.js index.js interpolate.js linter.js node-event-generator.js report-translator.js rule-fixer.js rules.js safe-emitter.js source-code-fixer.js timing.js options.js rule-tester index.js rule-tester.js rules accessor-pairs.js array-bracket-newline.js array-bracket-spacing.js array-callback-return.js array-element-newline.js arrow-body-style.js arrow-parens.js arrow-spacing.js block-scoped-var.js block-spacing.js brace-style.js callback-return.js camelcase.js capitalized-comments.js class-methods-use-this.js comma-dangle.js comma-spacing.js comma-style.js complexity.js computed-property-spacing.js consistent-return.js consistent-this.js constructor-super.js curly.js default-case-last.js 
default-case.js default-param-last.js dot-location.js dot-notation.js eol-last.js eqeqeq.js for-direction.js func-call-spacing.js func-name-matching.js func-names.js func-style.js function-call-argument-newline.js function-paren-newline.js generator-star-spacing.js getter-return.js global-require.js grouped-accessor-pairs.js guard-for-in.js handle-callback-err.js id-blacklist.js id-denylist.js id-length.js id-match.js implicit-arrow-linebreak.js indent-legacy.js indent.js index.js init-declarations.js jsx-quotes.js key-spacing.js keyword-spacing.js line-comment-position.js linebreak-style.js lines-around-comment.js lines-around-directive.js lines-between-class-members.js max-classes-per-file.js max-depth.js max-len.js max-lines-per-function.js max-lines.js max-nested-callbacks.js max-params.js max-statements-per-line.js max-statements.js multiline-comment-style.js multiline-ternary.js new-cap.js new-parens.js newline-after-var.js newline-before-return.js newline-per-chained-call.js no-alert.js no-array-constructor.js no-async-promise-executor.js no-await-in-loop.js no-bitwise.js no-buffer-constructor.js no-caller.js no-case-declarations.js no-catch-shadow.js no-class-assign.js no-compare-neg-zero.js no-cond-assign.js no-confusing-arrow.js no-console.js no-const-assign.js no-constant-condition.js no-constructor-return.js no-continue.js no-control-regex.js no-debugger.js no-delete-var.js no-div-regex.js no-dupe-args.js no-dupe-class-members.js no-dupe-else-if.js no-dupe-keys.js no-duplicate-case.js no-duplicate-imports.js no-else-return.js no-empty-character-class.js no-empty-function.js no-empty-pattern.js no-empty.js no-eq-null.js no-eval.js no-ex-assign.js no-extend-native.js no-extra-bind.js no-extra-boolean-cast.js no-extra-label.js no-extra-parens.js no-extra-semi.js no-fallthrough.js no-floating-decimal.js no-func-assign.js no-global-assign.js no-implicit-coercion.js no-implicit-globals.js no-implied-eval.js no-import-assign.js no-inline-comments.js no-inner-declarations.js no-invalid-regexp.js no-invalid-this.js no-irregular-whitespace.js no-iterator.js no-label-var.js no-labels.js no-lone-blocks.js no-lonely-if.js no-loop-func.js no-loss-of-precision.js no-magic-numbers.js no-misleading-character-class.js no-mixed-operators.js no-mixed-requires.js no-mixed-spaces-and-tabs.js no-multi-assign.js no-multi-spaces.js no-multi-str.js no-multiple-empty-lines.js no-native-reassign.js no-negated-condition.js no-negated-in-lhs.js no-nested-ternary.js no-new-func.js no-new-object.js no-new-require.js no-new-symbol.js no-new-wrappers.js no-new.js no-nonoctal-decimal-escape.js no-obj-calls.js no-octal-escape.js no-octal.js no-param-reassign.js no-path-concat.js no-plusplus.js no-process-env.js no-process-exit.js no-promise-executor-return.js no-proto.js no-prototype-builtins.js no-redeclare.js no-regex-spaces.js no-restricted-exports.js no-restricted-globals.js no-restricted-imports.js no-restricted-modules.js no-restricted-properties.js no-restricted-syntax.js no-return-assign.js no-return-await.js no-script-url.js no-self-assign.js no-self-compare.js no-sequences.js no-setter-return.js no-shadow-restricted-names.js no-shadow.js no-spaced-func.js no-sparse-arrays.js no-sync.js no-tabs.js no-template-curly-in-string.js no-ternary.js no-this-before-super.js no-throw-literal.js no-trailing-spaces.js no-undef-init.js no-undef.js no-undefined.js no-underscore-dangle.js no-unexpected-multiline.js no-unmodified-loop-condition.js no-unneeded-ternary.js no-unreachable-loop.js no-unreachable.js 
no-unsafe-finally.js no-unsafe-negation.js no-unsafe-optional-chaining.js no-unused-expressions.js no-unused-labels.js no-unused-vars.js no-use-before-define.js no-useless-backreference.js no-useless-call.js no-useless-catch.js no-useless-computed-key.js no-useless-concat.js no-useless-constructor.js no-useless-escape.js no-useless-rename.js no-useless-return.js no-var.js no-void.js no-warning-comments.js no-whitespace-before-property.js no-with.js nonblock-statement-body-position.js object-curly-newline.js object-curly-spacing.js object-property-newline.js object-shorthand.js one-var-declaration-per-line.js one-var.js operator-assignment.js operator-linebreak.js padded-blocks.js padding-line-between-statements.js prefer-arrow-callback.js prefer-const.js prefer-destructuring.js prefer-exponentiation-operator.js prefer-named-capture-group.js prefer-numeric-literals.js prefer-object-spread.js prefer-promise-reject-errors.js prefer-reflect.js prefer-regex-literals.js prefer-rest-params.js prefer-spread.js prefer-template.js quote-props.js quotes.js radix.js require-atomic-updates.js require-await.js require-jsdoc.js require-unicode-regexp.js require-yield.js rest-spread-spacing.js semi-spacing.js semi-style.js semi.js sort-imports.js sort-keys.js sort-vars.js space-before-blocks.js space-before-function-paren.js space-in-parens.js space-infix-ops.js space-unary-ops.js spaced-comment.js strict.js switch-colon-spacing.js symbol-description.js template-curly-spacing.js template-tag-spacing.js unicode-bom.js use-isnan.js utils ast-utils.js fix-tracker.js keywords.js lazy-loading-rule-map.js patterns letters.js unicode index.js is-combining-character.js is-emoji-modifier.js is-regional-indicator-symbol.js is-surrogate-pair.js valid-jsdoc.js valid-typeof.js vars-on-top.js wrap-iife.js wrap-regex.js yield-star-spacing.js yoda.js shared ajv.js ast-utils.js config-validator.js deprecation-warnings.js logging.js relative-module-resolver.js runtime-info.js string-utils.js traverser.js types.js source-code index.js source-code.js token-store backward-token-comment-cursor.js backward-token-cursor.js cursor.js cursors.js decorative-cursor.js filter-cursor.js forward-token-comment-cursor.js forward-token-cursor.js index.js limit-cursor.js padded-token-cursor.js skip-cursor.js utils.js messages all-files-ignored.js extend-config-missing.js failed-to-read-json.js file-not-found.js no-config-found.js plugin-conflict.js plugin-invalid.js plugin-missing.js print-config-with-directory-path.js whitespace-found.js package.json espree CHANGELOG.md README.md espree.js lib ast-node-types.js espree.js features.js options.js token-translator.js visitor-keys.js node_modules eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json package.json esprima README.md bin esparse.js esvalidate.js dist esprima.js package.json esquery README.md dist esquery.esm.js esquery.esm.min.js esquery.js esquery.lite.js esquery.lite.min.js esquery.min.js license.txt node_modules estraverse README.md estraverse.js gulpfile.js package.json package.json parser.js esrecurse README.md esrecurse.js gulpfile.babel.js node_modules estraverse README.md estraverse.js gulpfile.js package.json package.json estraverse README.md estraverse.js gulpfile.js package.json esutils README.md lib ast.js code.js keyword.js utils.js package.json fast-deep-equal README.md es6 index.d.ts index.js react.d.ts react.js index.d.ts index.js package.json react.d.ts react.js fast-json-stable-stringify .eslintrc.yml .github FUNDING.yml 
.travis.yml README.md benchmark index.js test.json example key_cmp.js nested.js str.js value_cmp.js index.d.ts index.js package.json test cmp.js nested.js str.js to-json.js fast-levenshtein LICENSE.md README.md levenshtein.js package.json file-entry-cache README.md cache.js changelog.md package.json find-up index.d.ts index.js package.json readme.md flat-cache README.md changelog.md package.json src cache.js del.js utils.js flatted .github FUNDING.yml README.md SPECS.md cjs index.js package.json es.js esm index.js index.js min.js package.json php flatted.php types.d.ts follow-redirects README.md http.js https.js index.js node_modules debug .coveralls.yml .travis.yml CHANGELOG.md README.md karma.conf.js node.js package.json src browser.js debug.js index.js node.js ms index.js license.md package.json readme.md package.json fs-minipass README.md index.js package.json fs.realpath README.md index.js old.js package.json function-bind .jscs.json .travis.yml README.md implementation.js index.js package.json test index.js functional-red-black-tree README.md bench test.js package.json rbtree.js test test.js get-caller-file LICENSE.md README.md index.d.ts index.js package.json glob-parent CHANGELOG.md README.md index.js package.json glob README.md changelog.md common.js glob.js package.json sync.js globals globals.json index.d.ts index.js package.json readme.md has-flag index.d.ts index.js package.json readme.md has README.md package.json src index.js test index.js hasurl README.md index.js package.json ignore CHANGELOG.md README.md index.d.ts index.js legacy.js package.json import-fresh index.d.ts index.js package.json readme.md imurmurhash README.md imurmurhash.js imurmurhash.min.js package.json inflight README.md inflight.js package.json inherits README.md inherits.js inherits_browser.js package.json interpret README.md index.js mjs-stub.js package.json is-core-module CHANGELOG.md README.md core.json index.js package.json test index.js is-extglob README.md index.js package.json is-fullwidth-code-point index.d.ts index.js package.json readme.md is-glob README.md index.js package.json isarray .travis.yml README.md component.json index.js package.json test.js isexe README.md index.js mode.js package.json test basic.js windows.js isobject README.md index.js package.json js-base64 LICENSE.md README.md base64.d.ts base64.js package.json js-tokens CHANGELOG.md README.md index.js package.json js-yaml CHANGELOG.md README.md bin js-yaml.js dist js-yaml.js js-yaml.min.js index.js lib js-yaml.js js-yaml common.js dumper.js exception.js loader.js mark.js schema.js schema core.js default_full.js default_safe.js failsafe.js json.js type.js type binary.js bool.js float.js int.js js function.js regexp.js undefined.js map.js merge.js null.js omap.js pairs.js seq.js set.js str.js timestamp.js package.json json-schema-traverse .eslintrc.yml .travis.yml README.md index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js json-stable-stringify-without-jsonify .travis.yml example key_cmp.js nested.js str.js value_cmp.js index.js package.json test cmp.js nested.js replacer.js space.js str.js to-json.js levn README.md lib cast.js index.js parse-string.js package.json line-column README.md lib line-column.js package.json locate-path index.d.ts index.js package.json readme.md lodash.clonedeep README.md index.js package.json lodash.merge README.md index.js package.json lodash.sortby README.md index.js package.json lodash.truncate README.md index.js package.json long README.md dist long.js index.js package.json 
src long.js lru-cache README.md index.js package.json minimatch README.md minimatch.js package.json minimist .travis.yml example parse.js index.js package.json test all_bool.js bool.js dash.js default_bool.js dotted.js kv_short.js long.js num.js parse.js parse_modified.js proto.js short.js stop_early.js unknown.js whitespace.js minipass README.md index.js package.json minizlib README.md constants.js index.js package.json mkdirp bin cmd.js usage.txt index.js package.json moo README.md moo.js package.json ms index.js license.md package.json readme.md natural-compare README.md index.js package.json near-mock-vm assembly __tests__ main.ts context.ts index.ts outcome.ts vm.ts bin bin.js package.json pkg near_mock_vm.d.ts near_mock_vm.js package.json vm dist cli.d.ts cli.js context.d.ts context.js index.d.ts index.js memory.d.ts memory.js runner.d.ts runner.js utils.d.ts utils.js index.js near-sdk-as as-pect.config.js as_types.d.ts asconfig.json asp.asconfig.json assembly __tests__ as-pect.d.ts assert.spec.ts avl-tree.spec.ts bignum.spec.ts contract.spec.ts contract.ts data.txt empty.ts generic.ts includeBytes.spec.ts main.ts max-heap.spec.ts model.ts near.spec.ts persistent-set.spec.ts promise.spec.ts rollback.spec.ts roundtrip.spec.ts runtime.spec.ts unordered-map.spec.ts util.ts utils.spec.ts as_types.d.ts bindgen.ts index.ts json.lib.ts tsconfig.json vm __tests__ vm.include.ts index.ts compiler.js imports.js package.json near-sdk-bindgen README.md assembly index.ts compiler.js dist JSONBuilder.d.ts JSONBuilder.js classExporter.d.ts classExporter.js index.d.ts index.js transformer.d.ts transformer.js typeChecker.d.ts typeChecker.js utils.d.ts utils.js index.js package.json near-sdk-core README.md asconfig.json assembly as_types.d.ts base58.ts base64.ts bignum.ts collections avlTree.ts index.ts maxHeap.ts persistentDeque.ts persistentMap.ts persistentSet.ts persistentUnorderedMap.ts persistentVector.ts util.ts contract.ts datetime.ts env env.ts index.ts runtime_api.ts index.ts logging.ts math.ts promise.ts storage.ts tsconfig.json util.ts docs assets css main.css js main.js search.json classes _sdk_core_assembly_collections_avltree_.avltree.html _sdk_core_assembly_collections_avltree_.avltreenode.html _sdk_core_assembly_collections_avltree_.childparentpair.html _sdk_core_assembly_collections_avltree_.nullable.html _sdk_core_assembly_collections_persistentdeque_.persistentdeque.html _sdk_core_assembly_collections_persistentmap_.persistentmap.html _sdk_core_assembly_collections_persistentset_.persistentset.html _sdk_core_assembly_collections_persistentunorderedmap_.persistentunorderedmap.html _sdk_core_assembly_collections_persistentvector_.persistentvector.html _sdk_core_assembly_contract_.context-1.html _sdk_core_assembly_contract_.contractpromise.html _sdk_core_assembly_contract_.contractpromiseresult.html _sdk_core_assembly_math_.rng.html _sdk_core_assembly_promise_.contractpromisebatch.html _sdk_core_assembly_storage_.storage-1.html globals.html index.html modules _sdk_core_assembly_base58_.base58.html _sdk_core_assembly_base58_.html _sdk_core_assembly_base64_.base64.html _sdk_core_assembly_base64_.html _sdk_core_assembly_collections_avltree_.html _sdk_core_assembly_collections_index_.collections.html _sdk_core_assembly_collections_index_.html _sdk_core_assembly_collections_persistentdeque_.html _sdk_core_assembly_collections_persistentmap_.html _sdk_core_assembly_collections_persistentset_.html _sdk_core_assembly_collections_persistentunorderedmap_.html 
_sdk_core_assembly_collections_persistentvector_.html _sdk_core_assembly_collections_util_.html _sdk_core_assembly_contract_.html _sdk_core_assembly_env_env_.env.html _sdk_core_assembly_env_env_.html _sdk_core_assembly_env_index_.html _sdk_core_assembly_env_runtime_api_.html _sdk_core_assembly_index_.html _sdk_core_assembly_logging_.html _sdk_core_assembly_logging_.logging.html _sdk_core_assembly_math_.html _sdk_core_assembly_math_.math.html _sdk_core_assembly_promise_.html _sdk_core_assembly_storage_.html _sdk_core_assembly_util_.html _sdk_core_assembly_util_.util.html package.json near-sdk-simulator __tests__ avl-tree-contract.spec.ts cross.spec.ts empty.spec.ts exportAs.spec.ts singleton-no-constructor.spec.ts singleton.spec.ts asconfig.js asconfig.json assembly __tests__ avlTreeContract.ts empty.ts exportAs.ts model.ts sentences.ts singleton-fail.ts singleton-no-constructor.ts singleton.ts words.ts as_types.d.ts tsconfig.json dist bin.d.ts bin.js context.d.ts context.js index.d.ts index.js runtime.d.ts runtime.js types.d.ts types.js utils.d.ts utils.js jest.config.js out assembly __tests__ empty.ts exportAs.ts model.ts sentences.ts singleton copy.ts singleton-no-constructor.ts singleton.ts package.json src context.ts index.ts runtime.ts types.ts utils.ts tsconfig.json near-vm getBinary.js install.js package.json run.js uninstall.js nearley LICENSE.txt README.md bin nearley-railroad.js nearley-test.js nearley-unparse.js nearleyc.js lib compile.js generate.js lint.js nearley-language-bootstrapped.js nearley.js stream.js unparse.js package.json once README.md once.js package.json optionator CHANGELOG.md README.md lib help.js index.js util.js package.json p-limit index.d.ts index.js package.json readme.md p-locate index.d.ts index.js package.json readme.md p-try index.d.ts index.js package.json readme.md parent-module index.js package.json readme.md path-exists index.d.ts index.js package.json readme.md path-is-absolute index.js package.json readme.md path-key index.d.ts index.js package.json readme.md path-parse README.md index.js package.json prelude-ls CHANGELOG.md README.md lib Func.js List.js Num.js Obj.js Str.js index.js package.json progress CHANGELOG.md Readme.md index.js lib node-progress.js package.json punycode LICENSE-MIT.txt README.md package.json punycode.es6.js punycode.js railroad-diagrams README.md example.html generator.html package.json railroad-diagrams.css railroad-diagrams.js railroad_diagrams.py randexp README.md lib randexp.js package.json rechoir .travis.yml README.md index.js lib extension.js normalize.js register.js package.json regexpp README.md index.d.ts index.js package.json require-directory .travis.yml index.js package.json require-from-string index.js package.json readme.md require-main-filename CHANGELOG.md LICENSE.txt README.md index.js package.json resolve-from index.js package.json readme.md resolve SECURITY.md appveyor.yml example async.js sync.js index.js lib async.js caller.js core.js core.json is-core.js node-modules-paths.js normalize-options.js sync.js package.json test core.js dotdot.js dotdot abc index.js index.js faulty_basedir.js filter.js filter_sync.js mock.js mock_sync.js module_dir.js module_dir xmodules aaa index.js ymodules aaa index.js zmodules bbb main.js package.json node-modules-paths.js node_path.js node_path x aaa index.js ccc index.js y bbb index.js ccc index.js nonstring.js pathfilter.js pathfilter deep_ref main.js precedence.js precedence aaa.js aaa index.js main.js bbb.js bbb main.js resolver.js resolver baz doom.js 
package.json quux.js browser_field a.js b.js package.json cup.coffee dot_main index.js package.json dot_slash_main index.js package.json foo.js incorrect_main index.js package.json invalid_main package.json mug.coffee mug.js multirepo lerna.json package.json packages package-a index.js package.json package-b index.js package.json nested_symlinks mylib async.js package.json sync.js other_path lib other-lib.js root.js quux foo index.js same_names foo.js foo index.js symlinked _ node_modules foo.js package bar.js package.json without_basedir main.js resolver_sync.js shadowed_core.js shadowed_core node_modules util index.js subdirs.js symlinks.js ret README.md lib index.js positions.js sets.js types.js util.js package.json rimraf CHANGELOG.md README.md bin.js package.json rimraf.js safe-buffer README.md index.d.ts index.js package.json semver CHANGELOG.md README.md bin semver.js classes comparator.js index.js range.js semver.js functions clean.js cmp.js coerce.js compare-build.js compare-loose.js compare.js diff.js eq.js gt.js gte.js inc.js lt.js lte.js major.js minor.js neq.js parse.js patch.js prerelease.js rcompare.js rsort.js satisfies.js sort.js valid.js index.js internal constants.js debug.js identifiers.js parse-options.js re.js package.json preload.js ranges gtr.js intersects.js ltr.js max-satisfying.js min-satisfying.js min-version.js outside.js simplify.js subset.js to-comparators.js valid.js set-blocking CHANGELOG.md LICENSE.txt README.md index.js package.json shebang-command index.js package.json readme.md shebang-regex index.d.ts index.js package.json readme.md shelljs CHANGELOG.md README.md commands.js global.js make.js package.json plugin.js shell.js src cat.js cd.js chmod.js common.js cp.js dirs.js echo.js error.js exec-child.js exec.js find.js grep.js head.js ln.js ls.js mkdir.js mv.js popd.js pushd.js pwd.js rm.js sed.js set.js sort.js tail.js tempdir.js test.js to.js toEnd.js touch.js uniq.js which.js slice-ansi index.js package.json readme.md sprintf-js README.md bower.json demo angular.html dist angular-sprintf.min.js sprintf.min.js gruntfile.js package.json src angular-sprintf.js sprintf.js test test.js string-width index.d.ts index.js package.json readme.md strip-ansi index.d.ts index.js package.json readme.md strip-json-comments index.d.ts index.js package.json readme.md supports-color browser.js index.js package.json readme.md table README.md dist alignString.d.ts alignString.js alignTableData.d.ts alignTableData.js calculateCellHeight.d.ts calculateCellHeight.js calculateCellWidths.d.ts calculateCellWidths.js calculateColumnWidths.d.ts calculateColumnWidths.js calculateRowHeights.d.ts calculateRowHeights.js createStream.d.ts createStream.js drawBorder.d.ts drawBorder.js drawContent.d.ts drawContent.js drawHeader.d.ts drawHeader.js drawRow.d.ts drawRow.js drawTable.d.ts drawTable.js generated validators.d.ts validators.js getBorderCharacters.d.ts getBorderCharacters.js index.d.ts index.js makeStreamConfig.d.ts makeStreamConfig.js makeTableConfig.d.ts makeTableConfig.js mapDataUsingRowHeights.d.ts mapDataUsingRowHeights.js padTableData.d.ts padTableData.js stringifyTableData.d.ts stringifyTableData.js table.d.ts table.js truncateTableData.d.ts truncateTableData.js types api.d.ts api.js internal.d.ts internal.js utils.d.ts utils.js validateConfig.d.ts validateConfig.js validateTableData.d.ts validateTableData.js wrapCell.d.ts wrapCell.js wrapString.d.ts wrapString.js wrapWord.d.ts wrapWord.js node_modules ajv .runkit_example.js README.md dist 2019.d.ts 2019.js 2020.d.ts 
2020.js ajv.d.ts ajv.js compile codegen code.d.ts code.js index.d.ts index.js scope.d.ts scope.js errors.d.ts errors.js index.d.ts index.js jtd parse.d.ts parse.js serialize.d.ts serialize.js types.d.ts types.js names.d.ts names.js ref_error.d.ts ref_error.js resolve.d.ts resolve.js rules.d.ts rules.js util.d.ts util.js validate applicability.d.ts applicability.js boolSchema.d.ts boolSchema.js dataType.d.ts dataType.js defaults.d.ts defaults.js index.d.ts index.js keyword.d.ts keyword.js subschema.d.ts subschema.js core.d.ts core.js jtd.d.ts jtd.js refs data.json json-schema-2019-09 index.d.ts index.js meta applicator.json content.json core.json format.json meta-data.json validation.json schema.json json-schema-2020-12 index.d.ts index.js meta applicator.json content.json core.json format-annotation.json meta-data.json unevaluated.json validation.json schema.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json jtd-schema.d.ts jtd-schema.js runtime equal.d.ts equal.js parseJson.d.ts parseJson.js quote.d.ts quote.js timestamp.d.ts timestamp.js ucs2length.d.ts ucs2length.js validation_error.d.ts validation_error.js standalone index.d.ts index.js instance.d.ts instance.js types index.d.ts index.js json-schema.d.ts json-schema.js jtd-schema.d.ts jtd-schema.js vocabularies applicator additionalItems.d.ts additionalItems.js additionalProperties.d.ts additionalProperties.js allOf.d.ts allOf.js anyOf.d.ts anyOf.js contains.d.ts contains.js dependencies.d.ts dependencies.js dependentSchemas.d.ts dependentSchemas.js if.d.ts if.js index.d.ts index.js items.d.ts items.js items2020.d.ts items2020.js not.d.ts not.js oneOf.d.ts oneOf.js patternProperties.d.ts patternProperties.js prefixItems.d.ts prefixItems.js properties.d.ts properties.js propertyNames.d.ts propertyNames.js thenElse.d.ts thenElse.js code.d.ts code.js core id.d.ts id.js index.d.ts index.js ref.d.ts ref.js discriminator index.d.ts index.js types.d.ts types.js draft2020.d.ts draft2020.js draft7.d.ts draft7.js dynamic dynamicAnchor.d.ts dynamicAnchor.js dynamicRef.d.ts dynamicRef.js index.d.ts index.js recursiveAnchor.d.ts recursiveAnchor.js recursiveRef.d.ts recursiveRef.js errors.d.ts errors.js format format.d.ts format.js index.d.ts index.js jtd discriminator.d.ts discriminator.js elements.d.ts elements.js enum.d.ts enum.js error.d.ts error.js index.d.ts index.js metadata.d.ts metadata.js nullable.d.ts nullable.js optionalProperties.d.ts optionalProperties.js properties.d.ts properties.js ref.d.ts ref.js type.d.ts type.js union.d.ts union.js values.d.ts values.js metadata.d.ts metadata.js next.d.ts next.js unevaluated index.d.ts index.js unevaluatedItems.d.ts unevaluatedItems.js unevaluatedProperties.d.ts unevaluatedProperties.js validation const.d.ts const.js dependentRequired.d.ts dependentRequired.js enum.d.ts enum.js index.d.ts index.js limitContains.d.ts limitContains.js limitItems.d.ts limitItems.js limitLength.d.ts limitLength.js limitNumber.d.ts limitNumber.js limitProperties.d.ts limitProperties.js multipleOf.d.ts multipleOf.js pattern.d.ts pattern.js required.d.ts required.js uniqueItems.d.ts uniqueItems.js lib 2019.ts 2020.ts ajv.ts compile codegen code.ts index.ts scope.ts errors.ts index.ts jtd parse.ts serialize.ts types.ts names.ts ref_error.ts resolve.ts rules.ts util.ts validate applicability.ts boolSchema.ts dataType.ts defaults.ts index.ts keyword.ts subschema.ts core.ts jtd.ts refs data.json json-schema-2019-09 index.ts meta applicator.json content.json core.json format.json meta-data.json 
validation.json schema.json json-schema-2020-12 index.ts meta applicator.json content.json core.json format-annotation.json meta-data.json unevaluated.json validation.json schema.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json jtd-schema.ts runtime equal.ts parseJson.ts quote.ts timestamp.ts ucs2length.ts validation_error.ts standalone index.ts instance.ts types index.ts json-schema.ts jtd-schema.ts vocabularies applicator additionalItems.ts additionalProperties.ts allOf.ts anyOf.ts contains.ts dependencies.ts dependentSchemas.ts if.ts index.ts items.ts items2020.ts not.ts oneOf.ts patternProperties.ts prefixItems.ts properties.ts propertyNames.ts thenElse.ts code.ts core id.ts index.ts ref.ts discriminator index.ts types.ts draft2020.ts draft7.ts dynamic dynamicAnchor.ts dynamicRef.ts index.ts recursiveAnchor.ts recursiveRef.ts errors.ts format format.ts index.ts jtd discriminator.ts elements.ts enum.ts error.ts index.ts metadata.ts nullable.ts optionalProperties.ts properties.ts ref.ts type.ts union.ts values.ts metadata.ts next.ts unevaluated index.ts unevaluatedItems.ts unevaluatedProperties.ts validation const.ts dependentRequired.ts enum.ts index.ts limitContains.ts limitItems.ts limitLength.ts limitNumber.ts limitProperties.ts multipleOf.ts pattern.ts required.ts uniqueItems.ts package.json json-schema-traverse .eslintrc.yml .github FUNDING.yml workflows build.yml publish.yml README.md index.d.ts index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js package.json tar README.md index.js lib create.js extract.js get-write-flag.js header.js high-level-opt.js large-numbers.js list.js mkdir.js mode-fix.js normalize-windows-path.js pack.js parse.js path-reservations.js pax.js read-entry.js replace.js strip-absolute-path.js strip-trailing-slashes.js types.js unpack.js update.js warn-mixin.js winchars.js write-entry.js package.json text-table .travis.yml example align.js center.js dotalign.js doubledot.js table.js index.js package.json test align.js ansi-colors.js center.js dotalign.js doubledot.js table.js tr46 LICENSE.md README.md index.js lib mappingTable.json regexes.js package.json ts-mixer CHANGELOG.md README.md dist cjs decorator.js index.js mixin-tracking.js mixins.js proxy.js settings.js types.js util.js esm index.js index.min.js types decorator.d.ts index.d.ts mixin-tracking.d.ts mixins.d.ts proxy.d.ts settings.d.ts types.d.ts util.d.ts package.json type-check README.md lib check.js index.js parse-type.js package.json type-fest base.d.ts index.d.ts package.json readme.md source async-return-type.d.ts asyncify.d.ts basic.d.ts conditional-except.d.ts conditional-keys.d.ts conditional-pick.d.ts entries.d.ts entry.d.ts except.d.ts fixed-length-array.d.ts iterable-element.d.ts literal-union.d.ts merge-exclusive.d.ts merge.d.ts mutable.d.ts opaque.d.ts package-json.d.ts partial-deep.d.ts promisable.d.ts promise-value.d.ts readonly-deep.d.ts require-at-least-one.d.ts require-exactly-one.d.ts set-optional.d.ts set-required.d.ts set-return-type.d.ts stringified.d.ts tsconfig-json.d.ts union-to-intersection.d.ts utilities.d.ts value-of.d.ts ts41 camel-case.d.ts delimiter-case.d.ts index.d.ts kebab-case.d.ts pascal-case.d.ts snake-case.d.ts universal-url README.md browser.js index.js package.json uri-js README.md dist es5 uri.all.d.ts uri.all.js uri.all.min.d.ts uri.all.min.js esnext index.d.ts index.js regexps-iri.d.ts regexps-iri.js regexps-uri.d.ts regexps-uri.js schemes http.d.ts http.js https.d.ts https.js mailto.d.ts mailto.js 
urn-uuid.d.ts urn-uuid.js urn.d.ts urn.js ws.d.ts ws.js wss.d.ts wss.js uri.d.ts uri.js util.d.ts util.js package.json v8-compile-cache CHANGELOG.md README.md package.json v8-compile-cache.js visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js URLSearchParams-impl.js URLSearchParams.js infra.js public-api.js url-state-machine.js urlencoded.js utils.js package.json which-module CHANGELOG.md README.md index.js package.json which CHANGELOG.md README.md package.json which.js word-wrap README.md index.d.ts index.js package.json wrap-ansi index.js package.json readme.md wrappy README.md package.json wrappy.js y18n CHANGELOG.md README.md build lib cjs.js index.js platform-shims node.js package.json yallist README.md iterator.js package.json yallist.js yargs-parser CHANGELOG.md LICENSE.txt README.md browser.js build lib index.js string-utils.js tokenize-arg-string.js yargs-parser-types.js yargs-parser.js package.json yargs CHANGELOG.md README.md build lib argsert.js command.js completion-templates.js completion.js middleware.js parse-command.js typings common-types.js yargs-parser-types.js usage.js utils apply-extends.js is-promise.js levenshtein.js obj-filter.js process-argv.js set-blocking.js which-module.js validation.js yargs-factory.js yerror.js helpers index.js package.json locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json package-lock.json package.json | features not yet implemented issues with the tests differences between PCRE and JS regex | | | dist global.e50bbfba.css global.e50bbfba.js index.html index.js logo-black.eab7a939.svg logo-white.7fec831f.svg src.e31bb0bc.js mail 04 954013e7193c5daf243b39c0b068d3.json 05 8e4c414c6c47e1e02d56b8e6584580.json 07 c1d74056f9f2caf76f56c0a03dd0bf.json 08 ba5499f68a3ebe8cc0bbfc7c452d73.json 09 506f4f43272058229c09b440c2862c.json 0a 356c53c25ba8842ddcac9b387bf7c1.json 0f 72b45f7499b5471ab9182df67dc768.json 11 afca25b6eb95696c72f37a9608f74e.json b70d1117395343c9ca6d7f46e8c540.json 13 faad0f3bf3ee8bccace200e81b0156.json 15 1fb9b25f53dbf24f91c546ac6a8913.json 19 c8139f6adda58c66585f609f94dedb.json 1a 3b0235c6abc99708f30cf4c8c9b3bc.json 24 a052e0c26d2d1cbe40da7eeed98292.json 26 408a6bc33f75cad3ce1297a1f60cac.json 28 1b69647919277a1bc67ceadb648c84.json faa36158d06efb25c170fbfc39413c.json 35 e2a80eb673baae9e37294b7d850955.json 38 ef61c9cdf27fe07d3120631e24c8d4.json 3a ec8bf5fb4f698089ffedb41032dc93.json 3d 6921bde6384d9d81e38cfd31545e32.json debc6265251376f43a99004660ad08.json 42 197dda48e93cf0faa3ea8954b5f96b.json 47 d2f6e4ffeae48d81f6f780976b018a.json 49 a0fb23d649d49584ab7c7449400f03.json 4b a3c6ef61401847e4da9e7a25f7f467.json 4d 1db34909aafb90c705aaf7bbd4d90c.json 4e 5475063f4fb01de8770e695296ad07.json 51 37ecb80815cc1917e193cb56bd2589.json 58 
930c1a91cb6d409bac86178c6b2644.json 5a d02bd4e35d0f03f2fc4325f2ebe182.json 5d af6b98c8001ad8e1ce6dd9a884b225.json 5e a5151716cbc78e8a367d3c2d98df72.json 61 1c98266f5d5a67f675cb9baf92a620.json 62 bc77afdad33f1d857f150c485e971a.json 64 117492ea8b337e25979d1660bbe1bf.json 65 98c565ab403beeab67aa6fc41a9dd5.json 6a cdbbed01db582f3db529cddfb793e5.json 6b 2a38c31ea11ae95d5a7089d63e616b.json 6c 2fa8f799673211aca462b73a81937f.json 6d 309792d6e5dffb0353b85c968ea381.json 76 154344a802b3520451e099195180c5.json 78 4fcdc83a1a43e92d337c64daf1e02f.json 7f 60af5561dc76976b4191540c51cd0f.json 81 dc12d8db73716af54716acd9ca8346.json 86 0bbffafb10014af4148e7e0bee698b.json 22988d15914c968f94efc9c1c9d9db.json 6b385dfd83340eb24c94f4b2da9dc5.json 87 09ab172367377b5b83632623d7b3ea.json 88 864c5d69a1141f66d2945430ab064c.json 8f 344f456f4f97c496d76449376e5556.json 98 1edfa2584792c53b4db9ed320e05fd.json 99 44d101858dec8b58ccc3c9e0939449.json 5af8450d41994465c747145b1e1f28.json 9a 6b3cbd57907c2bd02ec218a1a0e949.json 9c b984f72ac202e66c2d233fb7d94de7.json 9e 75b160fbf4e30314885183d8895ef5.json 9f 3e374500ef603133b74da1594031d0.json a4 09fbd03639e34e4e5f447393e6c122.json 2e182f801a907a8e094815d539b827.json aa 3fb76a071be9ce0572dc07eebef8df.json e72a79efb8d71099706833e3853830.json ac 6ba65ed9f3479db0df94382757c770.json ad 789cc4a184de87cd22017a6e6dfbd6.json ae 123c3a0ea2eb06b687395a1a58e683.json af b90632a5ce590f85da2679442e715a.json b0 208018e9aae3f3b1d7a4d9d31474be.json b2 cea7981b27fed15fb14070663815af.json ba 72e723f37ac079da4da3a9e403dee1.json bd a180e6c8100d48a4500c76f4715e9a.json bf 84ac2f2c445c2e609a872e24875a95.json c5 3fa9be3d73200eedb373a623e5ac17.json c7 8772e0ae0df21b7aa8957ca0a07d34.json cd 15d79e80b057b73a1255bb2be94dce.json cf 6822e1e50268266fa7f4fb10d0b435.json da 88968e801730478c199d16cd5f4743.json f4f50c86e17a660612959058aa6608.json df 0c4a93f85e99d02d2cb6e1ad03209e.json 71cbe03331b6fe6baefb9a91d13bb6.json e0 b9f1755701e5bcca22c793f2c57f34.json e2 3d283760428064a813f805e2d35a67.json 81adbf5e861445ef830e67cee03724.json e4 b45e5459bc2b47edb418da22720524.json ee e22db4561f4e1ec9920edeff5aded7.json ef cd0a2e10e24689634bd6b2701d92bd.json f0 a05abc4b73a4a7c9f3cfa8134dce0c.json f7 30df677bbb301c40c902f8d90d8069.json fa ddfbc90a7ae2db5bed83f197381f01.json fb 24d72ebdc2bf91dd2b1c2069ea96d1.json package.json src assets logo-black.svg logo-white.svg config.js global.css index.html index.js main.test.js utils.js wallet login index.html
### esutils [![Build Status](https://secure.travis-ci.org/estools/esutils.svg)](http://travis-ci.org/estools/esutils) esutils ([esutils](http://github.com/estools/esutils)) is a utility box for ECMAScript language tools. ### API ### ast #### ast.isExpression(node) Returns true if `node` is an Expression as defined in ECMA262 edition 5.1 section [11](https://es5.github.io/#x11). #### ast.isStatement(node) Returns true if `node` is a Statement as defined in ECMA262 edition 5.1 section [12](https://es5.github.io/#x12). #### ast.isIterationStatement(node) Returns true if `node` is an IterationStatement as defined in ECMA262 edition 5.1 section [12.6](https://es5.github.io/#x12.6). #### ast.isSourceElement(node) Returns true if `node` is a SourceElement as defined in ECMA262 edition 5.1 section [14](https://es5.github.io/#x14). #### ast.trailingStatement(node) Returns `Statement?`: the trailing `Statement` of `node`, if it has one. ```js if (cond) consequent; ``` Given this `IfStatement`, it returns the `consequent;` statement. #### ast.isProblematicIfStatement(node) Returns true if `node` is a problematic IfStatement. If `node` is a problematic `IfStatement`, `node` cannot be represented as one-to-one JavaScript code. ```js { type: 'IfStatement', consequent: { type: 'WithStatement', body: { type: 'IfStatement', consequent: {type: 'EmptyStatement'} } }, alternate: {type: 'EmptyStatement'} } ``` The above node cannot be represented as JavaScript code, since the top level `else` alternate belongs to an inner `IfStatement`. ### code #### code.isDecimalDigit(code) Return true if provided code is decimal digit. #### code.isHexDigit(code) Return true if provided code is hexadecimal digit. #### code.isOctalDigit(code) Return true if provided code is octal digit. #### code.isWhiteSpace(code) Return true if provided code is white space. White space characters are formally defined in ECMA262. #### code.isLineTerminator(code) Return true if provided code is line terminator. Line terminator characters are formally defined in ECMA262. #### code.isIdentifierStart(code) Return true if provided code can be the first character of ECMA262 Identifier. They are formally defined in ECMA262. #### code.isIdentifierPart(code) Return true if provided code can be the trailing character of ECMA262 Identifier. They are formally defined in ECMA262. ### keyword #### keyword.isKeywordES5(id, strict) Returns `true` if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 sections [7.6.1.1](http://es5.github.io/#x7.6.1.1) and [7.6.1.2](http://es5.github.io/#x7.6.1.2), respectively. If the `strict` flag is truthy, this function additionally checks whether `id` is a Keyword or Future Reserved Word under strict mode. #### keyword.isKeywordES6(id, strict) Returns `true` if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 sections [11.6.2.1](http://ecma-international.org/ecma-262/6.0/#sec-keywords) and [11.6.2.2](http://ecma-international.org/ecma-262/6.0/#sec-future-reserved-words), respectively. If the `strict` flag is truthy, this function additionally checks whether `id` is a Keyword or Future Reserved Word under strict mode. #### keyword.isReservedWordES5(id, strict) Returns `true` if provided identifier string is a Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 section [7.6.1](http://es5.github.io/#x7.6.1).
If the `strict` flag is truthy, this function additionally checks whether `id` is a Reserved Word under strict mode. #### keyword.isReservedWordES6(id, strict) Returns `true` if provided identifier string is a Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 section [11.6.2](http://ecma-international.org/ecma-262/6.0/#sec-reserved-words). If the `strict` flag is truthy, this function additionally checks whether `id` is a Reserved Word under strict mode. #### keyword.isRestrictedWord(id) Returns `true` if provided identifier string is one of `eval` or `arguments`. They are restricted in strict mode code throughout ECMA262 edition 5.1 and in ECMA262 edition 6 section [12.1.1](http://ecma-international.org/ecma-262/6.0/#sec-identifiers-static-semantics-early-errors). #### keyword.isIdentifierNameES5(id) Return true if provided identifier string is an IdentifierName as specified in ECMA262 edition 5.1 section [7.6](https://es5.github.io/#x7.6). #### keyword.isIdentifierNameES6(id) Return true if provided identifier string is an IdentifierName as specified in ECMA262 edition 6 section [11.6](http://ecma-international.org/ecma-262/6.0/#sec-names-and-keywords). #### keyword.isIdentifierES5(id, strict) Return true if provided identifier string is an Identifier as specified in ECMA262 edition 5.1 section [7.6](https://es5.github.io/#x7.6). If the `strict` flag is truthy, this function additionally checks whether `id` is an Identifier under strict mode. #### keyword.isIdentifierES6(id, strict) Return true if provided identifier string is an Identifier as specified in ECMA262 edition 6 section [12.1](http://ecma-international.org/ecma-262/6.0/#sec-identifiers). If the `strict` flag is truthy, this function additionally checks whether `id` is an Identifier under strict mode. ### License Copyright (C) 2013 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # lodash.truncate v4.4.2 The [lodash](https://lodash.com/) method `_.truncate` exported as a [Node.js](https://nodejs.org/) module. 
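For context before the installation notes below, here is a minimal usage sketch (the output comments assume lodash's documented defaults of `length: 30` and `omission: '...'`):

```js
var truncate = require('lodash.truncate');

// Default options: keep at most 30 characters, appending '...'.
console.log(truncate('hi-diddly-ho there, neighborino'));
// => 'hi-diddly-ho there, neighbo...'

// Limit to 24 characters and break at the last space before the limit.
console.log(truncate('hi-diddly-ho there, neighborino', { length: 24, separator: ' ' }));
// => 'hi-diddly-ho there,...'
```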
## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.truncate ``` In Node.js: ```js var truncate = require('lodash.truncate'); ``` See the [documentation](https://lodash.com/docs#truncate) or [package source](https://github.com/lodash/lodash/blob/4.4.2-npm-packages/lodash.truncate) for more details. # near-sdk-core This package contains a convenient interface for interacting with NEAR's host runtime. To see the functions that are provided by the host node see [`env.ts`](./assembly/env/env.ts). <img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/images/ajv_logo.png"> # Ajv: Another JSON Schema Validator The fastest JSON Schema validator for Node.js and browser. Supports draft-04/06/07. [![Build Status](https://travis-ci.org/ajv-validator/ajv.svg?branch=master)](https://travis-ci.org/ajv-validator/ajv) [![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv) [![npm (beta)](https://img.shields.io/npm/v/ajv/beta)](https://www.npmjs.com/package/ajv/v/7.0.0-beta.0) [![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv) [![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master) [![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv) [![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin) ## Ajv v7 beta is released [Ajv version 7.0.0-beta.0](https://github.com/ajv-validator/ajv/tree/v7-beta) is released with these changes: - to reduce the mistakes in JSON schemas and unexpected validation results, [strict mode](./docs/strict-mode.md) is added - it prohibits ignored or ambiguous JSON Schema elements. - to make code injection from untrusted schemas impossible, [code generation](./docs/codegen.md) is fully re-written to be safe. - to simplify Ajv extensions, the new keyword API that is used by pre-defined keywords is available to user-defined keywords - it is much easier to define any keywords now, especially with subschemas. - schemas are compiled to ES6 code (ES5 code generation is supported with an option). - to improve reliability and maintainability the code is migrated to TypeScript. **Please note**: - the support for JSON-Schema draft-04 is removed - if you have schemas using "id" attributes you have to replace them with "\$id" (or continue using version 6 that will be supported until 02/28/2021). - all formats are separated to ajv-formats package - they have to be explicitly added if you use them. See [release notes](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0) for the details. To install the new version: ```bash npm install ajv@beta ``` See [Getting started with v7](https://github.com/ajv-validator/ajv/tree/v7-beta#usage) for a code example. ## Mozilla MOSS grant and OpenJS Foundation [<img src="https://www.poberezkin.com/images/mozilla.png" width="240" height="68">](https://www.mozilla.org/en-US/moss/) &nbsp;&nbsp;&nbsp; [<img src="https://www.poberezkin.com/images/openjs.png" width="220" height="68">](https://openjsf.org/blog/2020/08/14/ajv-joins-openjs-foundation-as-an-incubation-project/) Ajv has been awarded a grant from Mozilla’s [Open Source Support (MOSS) program](https://www.mozilla.org/en-US/moss/) in the “Foundational Technology” track!
It will sponsor the development of Ajv support of [JSON Schema version 2019-09](https://tools.ietf.org/html/draft-handrews-json-schema-02) and of [JSON Type Definition](https://tools.ietf.org/html/draft-ucarion-json-type-definition-04). Ajv also joined [OpenJS Foundation](https://openjsf.org/) – having this support will help ensure the longevity and stability of Ajv for all its users. This [blog post](https://www.poberezkin.com/posts/2020-08-14-ajv-json-validator-mozilla-open-source-grant-openjs-foundation.html) has more details. I am looking for the long-term maintainers of Ajv – working with [ReadySet](https://www.thereadyset.co/), also sponsored by Mozilla, to establish clear guidelines for the role of a "maintainer" and the contribution standards, and to encourage a wider, more inclusive, contribution from the community. ## Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin) Since I asked for support of Ajv development, 40 people and 6 organizations have contributed via GitHub and OpenCollective - this support helped in receiving the MOSS grant! Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released. Please sponsor Ajv via: - [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it) - [Ajv Open Collective](https://opencollective.com/ajv) Thank you. #### Open Collective sponsors <a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a> <a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a> ## Using version 6 [JSON Schema draft-07](http://json-schema.org/latest/json-schema-validation.html) is published. [Ajv version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0) that supports draft-07 is released. It may require either migrating your schemas or updating your code (to continue using draft-04 and v5 schemas, draft-06 schemas will be supported without changes).
__Please note__: To use Ajv with draft-06 schemas you need to explicitly add the meta-schema to the validator instance: ```javascript ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-06.json')); ``` To use Ajv with draft-04 schemas in addition to explicitly adding meta-schema you also need to use option schemaId: ```javascript var ajv = new Ajv({schemaId: 'id'}); // If you want to use both draft-04 and draft-06/07 schemas: // var ajv = new Ajv({schemaId: 'auto'}); ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-04.json')); ``` ## Contents - [Performance](#performance) - [Features](#features) - [Getting started](#getting-started) - [Frequently Asked Questions](https://github.com/ajv-validator/ajv/blob/master/FAQ.md) - [Using in browser](#using-in-browser) - [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp) - [Command line interface](#command-line-interface) - Validation - [Keywords](#validation-keywords) - [Annotation keywords](#annotation-keywords) - [Formats](#formats) - [Combining schemas with $ref](#ref) - [$data reference](#data-reference) - NEW: [$merge and $patch keywords](#merge-and-patch-keywords) - [Defining custom keywords](#defining-custom-keywords) - [Asynchronous schema compilation](#asynchronous-schema-compilation) - [Asynchronous validation](#asynchronous-validation) - [Security considerations](#security-considerations) - [Security contact](#security-contact) - [Untrusted schemas](#untrusted-schemas) - [Circular references in objects](#circular-references-in-javascript-objects) - [Trusted schemas](#security-risks-of-trusted-schemas) - [ReDoS attack](#redos-attack) - Modifying data during validation - [Filtering data](#filtering-data) - [Assigning defaults](#assigning-defaults) - [Coercing data types](#coercing-data-types) - API - [Methods](#api) - [Options](#options) - [Validation errors](#validation-errors) - [Plugins](#plugins) - [Related packages](#related-packages) - [Some packages using Ajv](#some-packages-using-ajv) - [Tests, Contributing, Changes history](#tests) - [Support, Code of conduct, License](#open-source-software-support) ## Performance Ajv generates code using [doT templates](https://github.com/olado/doT) to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization. 
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=32,4,1&chs=600x416&chxl=-1:|djv|ajv|json-schema-validator-generator|jsen|is-my-json-valid|themis|z-schema|jsck|skeemas|json-schema-library|tv4&chd=t:100,98,72.1,66.8,50.1,15.1,6.1,3.8,1.2,0.7,0.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements full JSON Schema [draft-06/07](http://json-schema.org/) and draft-04 standards: - all validation keywords (see [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md)) - full support of remote refs (remote schemas have to be added with `addSchema` or compiled to be available) - support of circular references between schemas - correct string lengths for strings with unicode pairs (can be turned off) - [formats](#formats) defined by JSON Schema draft-07 standard and custom formats (can be turned off) - [validates schemas against meta-schema](#api-validateschema) - supports [browsers](#using-in-browser) and Node.js 0.10-14.x - [asynchronous loading](#asynchronous-schema-compilation) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](#options) - [error messages with parameters](#validation-errors) describing error reasons to allow creating custom error messages - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [filtering data](#filtering-data) from additional properties - [assigning defaults](#assigning-defaults) to missing properties and items - [coercing data](#coercing-data-types) to the types specified in `type` keywords - [custom keywords](#defining-custom-keywords) - draft-06/07 keywords `const`, `contains`, `propertyNames` and `if/then/else` - draft-06 boolean schemas (`true`/`false` as a schema to always pass/fail). - keywords `switch`, `patternRequired`, `formatMaximum` / `formatMinimum` and `formatExclusiveMaximum` / `formatExclusiveMinimum` from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [$data reference](#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](#asynchronous-validation) of custom formats and keywords ## Install ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://tonicdev.com/npm/ajv The fastest validation call: ```javascript // Node.js require: var Ajv = require('ajv'); // or ESM/TypeScript import import Ajv from 'ajv'; var ajv = new Ajv(); // options can be passed, e.g. {allErrors: true} var validate = ajv.compile(schema); var valid = validate(data); if (!valid) console.log(validate.errors); ``` or with less code ```javascript // ... var valid = ajv.validate(schema, data); if (!valid) console.log(ajv.errors); // ... 
``` or ```javascript // ... var valid = ajv.addSchema(schema, 'mySchema') .validate('mySchema', data); if (!valid) console.log(ajv.errorsText()); // ... ``` See [API](#api) and [Options](#options) for more details. Ajv compiles schemas to functions and caches them in all cases (using schema serialized with [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) or a custom function as a key), so that the next time the same schema is used (not necessarily the same object instance) it won't be compiled again. The best performance is achieved when using compiled functions returned by `compile` or `getSchema` methods (there is no additional function call). __Please note__: every time a validation function or `ajv.validate` is called, the `errors` property is overwritten. You need to copy the `errors` array reference to another variable if you want to use it later (e.g., in the callback). See [Validation errors](#validation-errors). __Note for TypeScript users__: `ajv` provides its own TypeScript declarations out of the box, so you don't need to install the deprecated `@types/ajv` module. ## Using in browser You can require Ajv directly from the code you browserify - in this case Ajv will be a part of your bundle. If you need to use Ajv in several bundles you can create a separate UMD bundle using `npm run bundle` script (thanks to [siddo420](https://github.com/siddo420)). Then you need to load Ajv in the browser: ```html <script src="ajv.min.js"></script> ``` This bundle can be used with different module systems; it creates global `Ajv` if no module system is found. The browser bundle is available on [cdnjs](https://cdnjs.com/libraries/ajv). Ajv is tested with these browsers: [![Sauce Test Status](https://saucelabs.com/browser-matrix/epoberezkin.svg)](https://saucelabs.com/u/epoberezkin) __Please note__: some frameworks, e.g. Dojo, may redefine global require in such a way that it is not compatible with the CommonJS module format. In such a case the Ajv bundle has to be loaded before the framework and then you can use global Ajv (see issue [#234](https://github.com/ajv-validator/ajv/issues/234)). ### Ajv and Content Security Policies (CSP) If you're using Ajv to compile a schema (the typical use) in a browser document that is loaded with a Content Security Policy (CSP), that policy will require a `script-src` directive that includes the value `'unsafe-eval'`. :warning: NOTE, however, that `unsafe-eval` is NOT recommended in a secure CSP[[1]](https://developer.chrome.com/extensions/contentSecurityPolicy#relaxing-eval), as it has the potential to open the document to cross-site scripting (XSS) attacks. In order to make use of Ajv without easing your CSP, you can [pre-compile a schema using the CLI](https://github.com/ajv-validator/ajv-cli#compile-schemas). This will transpile the schema JSON into a JavaScript file that exports a `validate` function that works similarly to a schema compiled at runtime. Note that pre-compilation of schemas is performed using [ajv-pack](https://github.com/ajv-validator/ajv-pack) and there are [some limitations to the schema features it can compile](https://github.com/ajv-validator/ajv-pack#limitations). A successfully pre-compiled schema is equivalent to the same schema compiled at runtime. ## Command line interface CLI is available as a separate npm package [ajv-cli](https://github.com/ajv-validator/ajv-cli).
It supports: - compiling JSON Schemas to test their validity - BETA: generating standalone module exporting a validation function to be used without Ajv (using [ajv-pack](https://github.com/ajv-validator/ajv-pack)) - migrate schemas to draft-07 (using [json-schema-migrate](https://github.com/epoberezkin/json-schema-migrate)) - validating data file(s) against JSON Schema - testing expected validity of data against JSON Schema - referenced schemas - custom meta-schemas - files in JSON, JSON5, YAML, and JavaScript format - all Ajv options - reporting changes in data after validation in [JSON-patch](https://tools.ietf.org/html/rfc6902) format ## Validation keywords Ajv supports all validation keywords from draft-07 of JSON Schema standard: - [type](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#type) - [for numbers](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-numbers) - maximum, minimum, exclusiveMaximum, exclusiveMinimum, multipleOf - [for strings](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-strings) - maxLength, minLength, pattern, format - [for arrays](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-arrays) - maxItems, minItems, uniqueItems, items, additionalItems, [contains](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#contains) - [for objects](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-objects) - maxProperties, minProperties, required, properties, patternProperties, additionalProperties, dependencies, [propertyNames](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#propertynames) - [for all types](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-all-types) - enum, [const](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#const) - [compound keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#compound-keywords) - not, oneOf, anyOf, allOf, [if/then/else](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#ifthenelse) With [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package Ajv also supports validation keywords from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) for JSON Schema standard: - [patternRequired](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#patternrequired-proposed) - like `required` but with patterns that some property should match. - [formatMaximum, formatMinimum, formatExclusiveMaximum, formatExclusiveMinimum](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#formatmaximum--formatminimum-and-exclusiveformatmaximum--exclusiveformatminimum-proposed) - setting limits for date, time, etc. See [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md) for more details. ## Annotation keywords JSON Schema specification defines several annotation keywords that describe schema itself but do not perform any validation. - `title` and `description`: information about the data represented by that schema - `$comment` (NEW in draft-07): information for developers. With option `$comment` Ajv logs or passes the comment string to the user-supplied function. See [Options](#options). - `default`: a default value of the data instance, see [Assigning defaults](#assigning-defaults). - `examples` (NEW in draft-06): an array of data instances. Ajv does not check the validity of these instances against the schema. 
- `readOnly` and `writeOnly` (NEW in draft-07): marks data-instance as read-only or write-only in relation to the source of the data (database, api, etc.). - `contentEncoding`: [RFC 2045](https://tools.ietf.org/html/rfc2045#section-6.1 ), e.g., "base64". - `contentMediaType`: [RFC 2046](https://tools.ietf.org/html/rfc2046), e.g., "image/png". __Please note__: Ajv does not implement validation of the keywords `examples`, `contentEncoding` and `contentMediaType` but it reserves them. If you want to create a plugin that implements some of them, it should remove these keywords from the instance. ## Formats Ajv implements formats defined by JSON Schema specification and several other formats. It is recommended NOT to use "format" keyword implementations with untrusted data, as they use potentially unsafe regular expressions - see [ReDoS attack](#redos-attack). __Please note__: if you need to use "format" keyword to validate untrusted data, you MUST assess their suitability and safety for your validation scenarios. The following formats are implemented for string validation with "format" keyword: - _date_: full-date according to [RFC3339](http://tools.ietf.org/html/rfc3339#section-5.6). - _time_: time with optional time-zone. - _date-time_: date-time from the same source (time-zone is mandatory). `date`, `time` and `date-time` validate ranges in `full` mode and only regexp in `fast` mode (see [options](#options)). - _uri_: full URI. - _uri-reference_: URI reference, including full and relative URIs. - _uri-template_: URI template according to [RFC6570](https://tools.ietf.org/html/rfc6570) - _url_ (deprecated): [URL record](https://url.spec.whatwg.org/#concept-url). - _email_: email address. - _hostname_: host name according to [RFC1034](http://tools.ietf.org/html/rfc1034#section-3.5). - _ipv4_: IP address v4. - _ipv6_: IP address v6. - _regex_: tests whether a string is a valid regular expression by passing it to RegExp constructor. - _uuid_: Universally Unique IDentifier according to [RFC4122](http://tools.ietf.org/html/rfc4122). - _json-pointer_: JSON-pointer according to [RFC6901](https://tools.ietf.org/html/rfc6901). - _relative-json-pointer_: relative JSON-pointer according to [this draft](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00). __Please note__: JSON Schema draft-07 also defines formats `iri`, `iri-reference`, `idn-hostname` and `idn-email` for URLs, hostnames and emails with international characters. Ajv does not implement these formats. If you create Ajv plugin that implements them please make a PR to mention this plugin here. There are two modes of format validation: `fast` and `full`. This mode affects formats `date`, `time`, `date-time`, `uri`, `uri-reference`, and `email`. See [Options](#options) for details. You can add additional formats and replace any of the formats above using [addFormat](#api-addformat) method. The option `unknownFormats` allows changing the default behaviour when an unknown format is encountered. In this case Ajv can either fail schema compilation (default) or ignore it (default in versions before 5.0.0). You also can allow specific format(s) that will be ignored. See [Options](#options) for details. You can find regular expressions used for format validation and the sources that were used in [formats.js](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js). 
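As an illustrative sketch (not part of the original documentation), a schema using the "format" keyword with the formats listed above is compiled like any other schema:

```javascript
var Ajv = require('ajv');
var ajv = new Ajv(); // or new Ajv({format: 'full'}) for the stricter validation mode

var validate = ajv.compile({
  type: 'object',
  properties: {
    email: {type: 'string', format: 'email'},
    created: {type: 'string', format: 'date-time'}
  }
});

console.log(validate({email: 'joe.bloggs@example.com', created: '1963-06-19T08:30:06Z'})); // true
console.log(validate({email: 'not-an-email', created: '1963-06-19T08:30:06Z'})); // false
console.log(validate.errors); // error reported for the failing "format" keyword
```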
## <a name="ref"></a>Combining schemas with $ref You can structure your validation logic across multiple schema files and have schemas reference each other using `$ref` keyword. Example: ```javascript var schema = { "$id": "http://example.com/schemas/schema.json", "type": "object", "properties": { "foo": { "$ref": "defs.json#/definitions/int" }, "bar": { "$ref": "defs.json#/definitions/str" } } }; var defsSchema = { "$id": "http://example.com/schemas/defs.json", "definitions": { "int": { "type": "integer" }, "str": { "type": "string" } } }; ``` Now to compile your schema you can either pass all schemas to Ajv instance: ```javascript var ajv = new Ajv({schemas: [schema, defsSchema]}); var validate = ajv.getSchema('http://example.com/schemas/schema.json'); ``` or use `addSchema` method: ```javascript var ajv = new Ajv; var validate = ajv.addSchema(defsSchema) .compile(schema); ``` See [Options](#options) and [addSchema](#api) method. __Please note__: - `$ref` is resolved as the uri-reference using schema $id as the base URI (see the example). - References can be recursive (and mutually recursive) to implement the schemas for different data structures (such as linked lists, trees, graphs, etc.). - You don't have to host your schema files at the URIs that you use as schema $id. These URIs are only used to identify the schemas, and according to JSON Schema specification validators should not expect to be able to download the schemas from these URIs. - The actual location of the schema file in the file system is not used. - You can pass the identifier of the schema as the second parameter of `addSchema` method or as a property name in `schemas` option. This identifier can be used instead of (or in addition to) schema $id. - You cannot have the same $id (or the schema identifier) used for more than one schema - the exception will be thrown. - You can implement dynamic resolution of the referenced schemas using `compileAsync` method. In this way you can store schemas in any system (files, web, database, etc.) and reference them without explicitly adding to Ajv instance. See [Asynchronous schema compilation](#asynchronous-schema-compilation). ## $data reference With `$data` option you can use values from the validated data as the values for the schema keywords. See [proposal](https://github.com/json-schema-org/json-schema-spec/issues/51) for more information about how it works. `$data` reference is supported in the keywords: const, enum, format, maximum/minimum, exclusiveMaximum / exclusiveMinimum, maxLength / minLength, maxItems / minItems, maxProperties / minProperties, formatMaximum / formatMinimum, formatExclusiveMaximum / formatExclusiveMinimum, multipleOf, pattern, required, uniqueItems. The value of "$data" should be a [JSON-pointer](https://tools.ietf.org/html/rfc6901) to the data (the root is always the top level data object, even if the $data reference is inside a referenced subschema) or a [relative JSON-pointer](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00) (it is relative to the current point in data; if the $data reference is inside a referenced subschema it cannot point to the data outside of the root level for this subschema). Examples. 
This schema requires that the value in the property `smaller` is less than or equal to the value in the property `larger`: ```javascript var ajv = new Ajv({$data: true}); var schema = { "properties": { "smaller": { "type": "number", "maximum": { "$data": "1/larger" } }, "larger": { "type": "number" } } }; var validData = { smaller: 5, larger: 7 }; ajv.validate(schema, validData); // true ``` This schema requires that the properties have the same format as their field names: ```javascript var schema = { "additionalProperties": { "type": "string", "format": { "$data": "0#" } } }; var validData = { 'date-time': '1963-06-19T08:30:06.283185Z', email: 'joe.bloggs@example.com' } ``` `$data` reference is resolved safely - it won't throw even if some property is undefined. If `$data` resolves to `undefined` the validation succeeds (with the exclusion of `const` keyword). If `$data` resolves to an incorrect type (e.g. not "number" for the maximum keyword) the validation fails. ## $merge and $patch keywords With the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) you can use the keywords `$merge` and `$patch` that allow extending JSON Schemas with patches using formats [JSON Merge Patch (RFC 7396)](https://tools.ietf.org/html/rfc7396) and [JSON Patch (RFC 6902)](https://tools.ietf.org/html/rfc6902). To add keywords `$merge` and `$patch` to Ajv instance use this code: ```javascript require('ajv-merge-patch')(ajv); ``` Examples. Using `$merge`: ```json { "$merge": { "source": { "type": "object", "properties": { "p": { "type": "string" } }, "additionalProperties": false }, "with": { "properties": { "q": { "type": "number" } } } } } ``` Using `$patch`: ```json { "$patch": { "source": { "type": "object", "properties": { "p": { "type": "string" } }, "additionalProperties": false }, "with": [ { "op": "add", "path": "/properties/q", "value": { "type": "number" } } ] } } ``` The schemas above are equivalent to this schema: ```json { "type": "object", "properties": { "p": { "type": "string" }, "q": { "type": "number" } }, "additionalProperties": false } ``` The properties `source` and `with` in the keywords `$merge` and `$patch` can use absolute or relative `$ref` to point to other schemas previously added to the Ajv instance or to the fragments of the current schema. See the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) for more information. ## Defining custom keywords The advantages of using custom keywords are: - allow creating validation scenarios that cannot be expressed using JSON Schema - simplify your schemas - help bringing a bigger part of the validation logic to your schemas - make your schemas more expressive, less verbose and closer to your application domain - implement custom data processors that modify your data (`modifying` option MUST be used in keyword definition) and/or create side effects while the data is being validated If a keyword is used only for side-effects and its validation result is pre-defined, use option `valid: true/false` in keyword definition to simplify both generated code (no error handling in case of `valid: true`) and your keyword functions (no need to return any validation result). The concerns you have to be aware of when extending JSON Schema standard with custom keywords are the portability and understanding of your schemas. You will have to support these custom keywords on other platforms and to properly document these keywords so that everybody can understand them in your schemas.
You can define custom keywords with [addKeyword](#api-addkeyword) method. Keywords are defined on the `ajv` instance level - new instances will not have previously defined keywords. Ajv allows defining keywords with: - validation function - compilation function - macro function - inline compilation function that should return code (as string) that will be inlined in the currently compiled schema. Example. `range` and `exclusiveRange` keywords using compiled schema: ```javascript ajv.addKeyword('range', { type: 'number', compile: function (sch, parentSchema) { var min = sch[0]; var max = sch[1]; return parentSchema.exclusiveRange === true ? function (data) { return data > min && data < max; } : function (data) { return data >= min && data <= max; } } }); var schema = { "range": [2, 4], "exclusiveRange": true }; var validate = ajv.compile(schema); console.log(validate(2.01)); // true console.log(validate(3.99)); // true console.log(validate(2)); // false console.log(validate(4)); // false ``` Several custom keywords (typeof, instanceof, range and propertyNames) are defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - they can be used for your schemas and as a starting point for your own custom keywords. See [Defining custom keywords](https://github.com/ajv-validator/ajv/blob/master/CUSTOM.md) for more details. ## Asynchronous schema compilation During asynchronous compilation remote references are loaded using supplied function. See `compileAsync` [method](#api-compileAsync) and `loadSchema` [option](#options). Example: ```javascript var ajv = new Ajv({ loadSchema: loadSchema }); ajv.compileAsync(schema).then(function (validate) { var valid = validate(data); // ... }); function loadSchema(uri) { return request.json(uri).then(function (res) { if (res.statusCode >= 400) throw new Error('Loading error: ' + res.statusCode); return res.body; }); } ``` __Please note__: [Option](#options) `missingRefs` should NOT be set to `"ignore"` or `"fail"` for asynchronous compilation to work. ## Asynchronous validation Example in Node.js REPL: https://tonicdev.com/esp/ajv-asynchronous-validation You can define custom formats and keywords that perform validation asynchronously by accessing database or some other service. You should add `async: true` in the keyword or format definition (see [addFormat](#api-addformat), [addKeyword](#api-addkeyword) and [Defining custom keywords](#defining-custom-keywords)). If your schema uses asynchronous formats/keywords or refers to some schema that contains them it should have `"$async": true` keyword so that Ajv can compile it correctly. If asynchronous format/keyword or reference to asynchronous schema is used in the schema without `$async` keyword Ajv will throw an exception during schema compilation. __Please note__: all asynchronous subschemas that are referenced from the current or other schemas should have `"$async": true` keyword as well, otherwise the schema compilation will fail. Validation function for an asynchronous custom format/keyword should return a promise that resolves with `true` or `false` (or rejects with `new Ajv.ValidationError(errors)` if you want to return custom errors from the keyword function). Ajv compiles asynchronous schemas to [es7 async functions](http://tc39.github.io/ecmascript-asyncawait/) that can optionally be transpiled with [nodent](https://github.com/MatAtBread/nodent). Async functions are supported in Node.js 7+ and all modern browsers. 
You can also supply any other transpiler as a function via `processCode` option. See [Options](#options). The compiled validation function has `$async: true` property (if the schema is asynchronous), so you can differentiate these functions if you are using both synchronous and asynchronous schemas. Validation result will be a promise that resolves with validated data or rejects with an exception `Ajv.ValidationError` that contains the array of validation errors in `errors` property. Example: ```javascript var ajv = new Ajv; // require('ajv-async')(ajv); ajv.addKeyword('idExists', { async: true, type: 'number', validate: checkIdExists }); function checkIdExists(schema, data) { return knex(schema.table) .select('id') .where('id', data) .then(function (rows) { return !!rows.length; // true if record is found }); } var schema = { "$async": true, "properties": { "userId": { "type": "integer", "idExists": { "table": "users" } }, "postId": { "type": "integer", "idExists": { "table": "posts" } } } }; var validate = ajv.compile(schema); validate({ userId: 1, postId: 19 }) .then(function (data) { console.log('Data is valid', data); // { userId: 1, postId: 19 } }) .catch(function (err) { if (!(err instanceof Ajv.ValidationError)) throw err; // data is invalid console.log('Validation errors:', err.errors); }); ``` ### Using transpilers with asynchronous validation functions. [ajv-async](https://github.com/ajv-validator/ajv-async) uses [nodent](https://github.com/MatAtBread/nodent) to transpile async functions. To use another transpiler you should separately install it (or load its bundle in the browser). #### Using nodent ```javascript var ajv = new Ajv; require('ajv-async')(ajv); // in the browser if you want to load ajv-async bundle separately you can: // window.ajvAsync(ajv); var validate = ajv.compile(schema); // transpiled es7 async function validate(data).then(successFunc).catch(errorFunc); ``` #### Using other transpilers ```javascript var ajv = new Ajv({ processCode: transpileFunc }); var validate = ajv.compile(schema); // transpiled es7 async function validate(data).then(successFunc).catch(errorFunc); ``` See [Options](#options). ## Security considerations JSON Schema, if properly used, can replace data sanitisation. It doesn't replace other API security considerations. It also introduces additional security aspects to consider. ##### Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ##### Untrusted schemas Ajv treats JSON schemas as trusted as your application code. This security model is based on the most common use case, when the schemas are static and bundled together with the application. If your schemas are received from untrusted sources (or generated from untrusted data) there are several scenarios you need to prevent: - compiling schemas can cause stack overflow (if they are too deep) - compiling schemas can be slow (e.g. [#557](https://github.com/ajv-validator/ajv/issues/557)) - validating certain data can be slow It is difficult to predict all the scenarios, but at the very least it may help to limit the size of untrusted schemas (e.g. limit JSON string length) and also the maximum schema object depth (that can be high for relatively small JSON strings). You also may want to mitigate slow regular expressions in `pattern` and `patternProperties` keywords. 
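For illustration only (these helpers are not part of Ajv), a pre-compilation guard along the lines suggested above could look like this:

```javascript
// Hypothetical guard: limit the JSON string length and the object depth
// of an untrusted schema before handing it to ajv.compile().
function objectDepth(value) {
  if (value === null || typeof value !== 'object') return 0;
  var max = 0;
  for (var key in value) max = Math.max(max, objectDepth(value[key]));
  return max + 1;
}

function compileUntrusted(ajv, schemaJson, maxLength, maxDepth) {
  if (schemaJson.length > maxLength) throw new Error('untrusted schema is too large');
  var schema = JSON.parse(schemaJson);
  if (objectDepth(schema) > maxDepth) throw new Error('untrusted schema is too deep');
  return ajv.compile(schema); // compilation and validation can still be slow - see above
}
```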
Regardless of the measures you take, using untrusted schemas increases security risks. ##### Circular references in JavaScript objects Ajv does not support schemas and validated data that have circular references in objects. See [issue #802](https://github.com/ajv-validator/ajv/issues/802). An attempt to compile such schemas or validate such data would cause stack overflow (or will not complete in case of asynchronous validation). Depending on the parser you use, untrusted data can lead to circular references. ##### Security risks of trusted schemas Some keywords in JSON Schemas can lead to very slow validation for certain data. These keywords include (but may not be limited to): - `pattern` and `format` for large strings - in some cases using `maxLength` can help mitigate it, but certain regular expressions can lead to exponential validation time even with relatively short strings (see [ReDoS attack](#redos-attack)). - `patternProperties` for large property names - use `propertyNames` to mitigate, but some regular expressions can have exponential evaluation time as well. - `uniqueItems` for large non-scalar arrays - use `maxItems` to mitigate __Please note__: The suggestions above to prevent slow validation would only work if you do NOT use `allErrors: true` in production code (using it would continue validation after validation errors). You can validate your JSON schemas against [this meta-schema](https://github.com/ajv-validator/ajv/blob/master/lib/refs/json-schema-secure.json) to check that these recommendations are followed: ```javascript const isSchemaSecure = ajv.compile(require('ajv/lib/refs/json-schema-secure.json')); const schema1 = {format: 'email'}; isSchemaSecure(schema1); // false const schema2 = {format: 'email', maxLength: MAX_LENGTH}; isSchemaSecure(schema2); // true ``` __Please note__: following all these recommendations is not a guarantee that validation of untrusted data is safe - it can still lead to some undesirable results. ##### Content Security Policies (CSP) See [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp) ## ReDoS attack Certain regular expressions can lead to exponential evaluation time even with relatively short strings. Please assess the regular expressions you use in the schemas on their vulnerability to this attack - see [safe-regex](https://github.com/substack/safe-regex), for example. __Please note__: some formats that Ajv implements use [regular expressions](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js) that can be vulnerable to ReDoS attack, so if you use Ajv to validate data from untrusted sources __it is strongly recommended__ to consider the following: - making assessment of "format" implementations in Ajv. - using `format: 'fast'` option that simplifies some of the regular expressions (although it does not guarantee that they are safe). - replacing format implementations provided by Ajv with your own implementations of "format" keyword that either uses different regular expressions or another approach to format validation. Please see [addFormat](#api-addformat) method. - disabling format validation by ignoring "format" keyword with option `format: false` Whatever mitigation you choose, please assume all formats provided by Ajv as potentially unsafe and make your own assessment of their suitability for your validation scenarios. ## Filtering data With [option `removeAdditional`](#options) (added by [andyscott](https://github.com/andyscott)) you can filter data during the validation.
This option modifies original data. Example: ```javascript var ajv = new Ajv({ removeAdditional: true }); var schema = { "additionalProperties": false, "properties": { "foo": { "type": "number" }, "bar": { "additionalProperties": { "type": "number" }, "properties": { "baz": { "type": "string" } } } } } var data = { "foo": 0, "additional1": 1, // will be removed; `additionalProperties` == false "bar": { "baz": "abc", "additional2": 2 // will NOT be removed; `additionalProperties` != false }, } var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": 0, "bar": { "baz": "abc", "additional2": 2 } } ``` If `removeAdditional` option in the example above were `"all"` then both `additional1` and `additional2` properties would have been removed. If the option were `"failing"` then property `additional1` would have been removed regardless of its value and property `additional2` would have been removed only if its value were failing the schema in the inner `additionalProperties` (so in the example above it would have stayed because it passes the schema, but any non-number would have been removed). __Please note__: If you use `removeAdditional` option with `additionalProperties` keyword inside `anyOf`/`oneOf` keywords your validation can fail with this schema, for example: ```json { "type": "object", "oneOf": [ { "properties": { "foo": { "type": "string" } }, "required": [ "foo" ], "additionalProperties": false }, { "properties": { "bar": { "type": "integer" } }, "required": [ "bar" ], "additionalProperties": false } ] } ``` The intention of the schema above is to allow objects with either the string property "foo" or the integer property "bar", but not with both and not with any other properties. With the option `removeAdditional: true` the validation will pass for the object `{ "foo": "abc"}` but will fail for the object `{"bar": 1}`. It happens because while the first subschema in `oneOf` is validated, the property `bar` is removed because it is an additional property according to the standard (because it is not included in `properties` keyword in the same schema). While this behaviour is unexpected (issues [#129](https://github.com/ajv-validator/ajv/issues/129), [#134](https://github.com/ajv-validator/ajv/issues/134)), it is correct. To have the expected behaviour (both objects are allowed and additional properties are removed) the schema has to be refactored in this way: ```json { "type": "object", "properties": { "foo": { "type": "string" }, "bar": { "type": "integer" } }, "additionalProperties": false, "oneOf": [ { "required": [ "foo" ] }, { "required": [ "bar" ] } ] } ``` The schema above is also more efficient - it will compile into a faster function. ## Assigning defaults With [option `useDefaults`](#options) Ajv will assign values from `default` keyword in the schemas of `properties` and `items` (when it is the array of schemas) to the missing properties and items. With the option value `"empty"` properties and items equal to `null` or `""` (empty string) will be considered missing and assigned defaults. This option modifies original data. __Please note__: the default value is inserted in the generated validation code as a literal, so the value inserted in the data will be the deep clone of the default in the schema.
Example 1 (`default` in `properties`): ```javascript var ajv = new Ajv({ useDefaults: true }); var schema = { "type": "object", "properties": { "foo": { "type": "number" }, "bar": { "type": "string", "default": "baz" } }, "required": [ "foo", "bar" ] }; var data = { "foo": 1 }; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": 1, "bar": "baz" } ``` Example 2 (`default` in `items`): ```javascript var schema = { "type": "array", "items": [ { "type": "number" }, { "type": "string", "default": "foo" } ] } var data = [ 1 ]; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // [ 1, "foo" ] ``` `default` keywords in other cases are ignored: - not in `properties` or `items` subschemas - in schemas inside `anyOf`, `oneOf` and `not` (see [#42](https://github.com/ajv-validator/ajv/issues/42)) - in `if` subschema of `switch` keyword - in schemas generated by custom macro keywords The [`strictDefaults` option](#options) customizes Ajv's behavior for the defaults that Ajv ignores (`true` raises an error, and `"log"` outputs a warning). ## Coercing data types When you are validating user inputs all your data properties are usually strings. The option `coerceTypes` allows you to have your data types coerced to the types specified in your schema `type` keywords, both to pass the validation and to use the correctly typed data afterwards. This option modifies original data. __Please note__: if you pass a scalar value to the validating function its type will be coerced and it will pass the validation, but the value of the variable you pass won't be updated because scalars are passed by value. Example 1: ```javascript var ajv = new Ajv({ coerceTypes: true }); var schema = { "type": "object", "properties": { "foo": { "type": "number" }, "bar": { "type": "boolean" } }, "required": [ "foo", "bar" ] }; var data = { "foo": "1", "bar": "false" }; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": 1, "bar": false } ``` Example 2 (array coercions): ```javascript var ajv = new Ajv({ coerceTypes: 'array' }); var schema = { "properties": { "foo": { "type": "array", "items": { "type": "number" } }, "bar": { "type": "boolean" } } }; var data = { "foo": "1", "bar": ["false"] }; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": [1], "bar": false } ``` The coercion rules, as you can see from the example, are different from JavaScript both to validate user input as expected and to have the coercion reversible (to correctly validate cases where different types are defined in subschemas of "anyOf" and other compound keywords). See [Coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md) for details. ## API ##### new Ajv(Object options) -&gt; Object Create Ajv instance. ##### .compile(Object schema) -&gt; Function&lt;Object data&gt; Generate validating function and cache the compiled schema for future use. Validating function returns a boolean value. This function has properties `errors` and `schema`. Errors encountered during the last validation are assigned to the `errors` property (it is assigned `null` if there were no errors). The `schema` property contains the reference to the original schema. The schema passed to this method will be validated against meta-schema unless `validateSchema` option is false. If schema is invalid, an error will be thrown. See [options](#options).
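For illustration (a minimal sketch, not from the original documentation):

```javascript
var validate = ajv.compile({type: 'object', required: ['name']});

console.log(validate({name: 'x'})); // true
console.log(validate.errors);       // null - the last validation passed

console.log(validate({}));          // false
console.log(validate.errors);       // array with a "required" error from the last validation
console.log(validate.schema);       // reference to the original schema object
```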
##### <a name="api-compileAsync"></a>.compileAsync(Object schema [, Boolean meta] [, Function callback]) -&gt; Promise Asynchronous version of `compile` method that loads missing remote schemas using asynchronous function in `options.loadSchema`. This function returns a Promise that resolves to a validation function. An optional callback passed to `compileAsync` will be called with 2 parameters: error (or null) and validating function. The returned promise will reject (and the callback will be called with an error) when: - missing schema can't be loaded (`loadSchema` returns a Promise that rejects). - a schema containing a missing reference is loaded, but the reference cannot be resolved. - schema (or some loaded/referenced schema) is invalid. The function compiles schema and loads the first missing schema (or meta-schema) until all missing schemas are loaded. You can asynchronously compile meta-schema by passing `true` as the second parameter. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### .validate(Object schema|String key|String ref, data) -&gt; Boolean Validate data using passed schema (it will be compiled and cached). Instead of the schema you can use the key that was previously passed to `addSchema`, the schema id if it was present in the schema or any previously resolved reference. Validation errors will be available in the `errors` property of Ajv instance (`null` if there were no errors). __Please note__: every time this method is called the errors are overwritten so you need to copy them to another variable if you want to use them later. If the schema is asynchronous (has `$async` keyword on the top level) this method returns a Promise. See [Asynchronous validation](#asynchronous-validation). ##### .addSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Add schema(s) to validator instance. This method does not compile schemas (but it still validates them). Because of that dependencies can be added in any order and circular dependencies are supported. It also prevents unnecessary compilation of schemas that are containers for other schemas but not used as a whole. Array of schemas can be passed (schemas should have ids), the second parameter will be ignored. Key can be passed that can be used to reference the schema and will be used as the schema id if there is no id inside the schema. If the key is not passed, the schema id will be used as the key. Once the schema is added, it (and all the references inside it) can be referenced in other schemas and used to validate data. Although `addSchema` does not compile schemas, explicit compilation is not required - the schema will be compiled when it is used first time. By default the schema is validated against meta-schema before it is added, and if the schema does not pass validation the exception is thrown. This behaviour is controlled by `validateSchema` option. __Please note__: Ajv uses the [method chaining syntax](https://en.wikipedia.org/wiki/Method_chaining) for all methods with the prefix `add*` and `remove*`. This allows you to do nice things like the following. ```javascript var validate = new Ajv().addSchema(schema).addFormat(name, regex).getSchema(uri); ``` ##### .addMetaSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Adds meta schema(s) that can be used to validate other schemas. 
That function should be used instead of `addSchema` because there may be instance options that would compile a meta schema incorrectly (at the moment it is `removeAdditional` option). There is no need to explicitly add draft-07 meta schema (http://json-schema.org/draft-07/schema) - it is added by default, unless option `meta` is set to `false`. You only need to use it if you have a changed meta-schema that you want to use to validate your schemas. See `validateSchema`. ##### <a name="api-validateschema"></a>.validateSchema(Object schema) -&gt; Boolean Validates schema. This method should be used to validate schemas rather than `validate` due to the inconsistency of `uri` format in JSON Schema standard. By default this method is called automatically when the schema is added, so you rarely need to use it directly. If schema doesn't have `$schema` property, it is validated against draft 6 meta-schema (option `meta` should not be false). If schema has `$schema` property, then the schema with this id (that should be previously added) is used to validate passed schema. Errors will be available at `ajv.errors`. ##### .getSchema(String key) -&gt; Function&lt;Object data&gt; Retrieve compiled schema previously added with `addSchema` by the key passed to `addSchema` or by its full reference (id). The returned validating function has `schema` property with the reference to the original schema. ##### .removeSchema([Object schema|String key|String ref|RegExp pattern]) -&gt; Ajv Remove added/cached schema. Even if schema is referenced by other schemas it can be safely removed as dependent schemas have local references. Schema can be removed using: - key passed to `addSchema` - it's full reference (id) - RegExp that should match schema id or key (meta-schemas won't be removed) - actual schema object that will be stable-stringified to remove schema from cache If no parameter is passed all schemas but meta-schemas will be removed and the cache will be cleared. ##### <a name="api-addformat"></a>.addFormat(String name, String|RegExp|Function|Object format) -&gt; Ajv Add custom format to validate strings or numbers. It can also be used to replace pre-defined formats for Ajv instance. Strings are converted to RegExp. Function should return validation result as `true` or `false`. If object is passed it should have properties `validate`, `compare` and `async`: - _validate_: a string, RegExp or a function as described above. - _compare_: an optional comparison function that accepts two strings and compares them according to the format meaning. This function is used with keywords `formatMaximum`/`formatMinimum` (defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package). It should return `1` if the first value is bigger than the second value, `-1` if it is smaller and `0` if it is equal. - _async_: an optional `true` value if `validate` is an asynchronous function; in this case it should return a promise that resolves with a value `true` or `false`. - _type_: an optional type of data that the format applies to. It can be `"string"` (default) or `"number"` (see https://github.com/ajv-validator/ajv/issues/291#issuecomment-259923858). If the type of data is different, the validation will pass. Custom formats can be also added via `formats` option. ##### <a name="api-addkeyword"></a>.addKeyword(String keyword, Object definition) -&gt; Ajv Add custom validation keyword to Ajv instance. 
Keyword should be different from all standard JSON Schema keywords and different from previously defined keywords. There is no way to redefine keywords or to remove keyword definition from the instance. Keyword must start with a letter, `_` or `$`, and may continue with letters, numbers, `_`, `$`, or `-`. It is recommended to use an application-specific prefix for keywords to avoid current and future name collisions. Example Keywords: - `"xyz-example"`: valid, and uses prefix for the xyz project to avoid name collisions. - `"example"`: valid, but not recommended as it could collide with future versions of JSON Schema etc. - `"3-example"`: invalid as numbers are not allowed to be the first character in a keyword Keyword definition is an object with the following properties: - _type_: optional string or array of strings with data type(s) that the keyword applies to. If not present, the keyword will apply to all types. - _validate_: validating function - _compile_: compiling function - _macro_: macro function - _inline_: compiling function that returns code (as string) - _schema_: an optional `false` value used with "validate" keyword to not pass schema - _metaSchema_: an optional meta-schema for keyword schema - _dependencies_: an optional list of properties that must be present in the parent schema - it will be checked during schema compilation - _modifying_: `true` MUST be passed if keyword modifies data - _statements_: `true` can be passed in case inline keyword generates statements (as opposed to expression) - _valid_: pass `true`/`false` to pre-define validation result, the result returned from validation function will be ignored. This option cannot be used with macro keywords. - _$data_: an optional `true` value to support [$data reference](#data-reference) as the value of custom keyword. The reference will be resolved at validation time. If the keyword has meta-schema it would be extended to allow $data and it will be used to validate the resolved value. Supporting $data reference requires that keyword has validating function (as the only option or in addition to compile, macro or inline function). - _async_: an optional `true` value if the validation function is asynchronous (whether it is compiled or passed in _validate_ property); in this case it should return a promise that resolves with a value `true` or `false`. This option is ignored in case of "macro" and "inline" keywords. - _errors_: an optional boolean or string `"full"` indicating whether keyword returns errors. If this property is not set Ajv will determine if the errors were set in case of failed validation. _compile_, _macro_ and _inline_ are mutually exclusive, only one should be used at a time. _validate_ can be used separately or in addition to them to support $data reference. __Please note__: If the keyword is validating data type that is different from the type(s) in its definition, the validation function will not be called (and expanded macro will not be used), so there is no need to check for data type inside validation function or inside schema returned by macro function (unless you want to enforce a specific type and for some reason do not want to use a separate `type` keyword for that). In the same way as standard keywords work, if the keyword does not apply to the data type being validated, the validation of this keyword will succeed. See [Defining custom keywords](#defining-custom-keywords) for more details. 
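As a rough sketch of the definition format described above (the keyword name `xyz-even` and its semantics are invented for this example), a keyword defined with a `compile` function could look like this:

```javascript
var Ajv = require('ajv');
var ajv = new Ajv();

// Hypothetical "xyz-even" keyword: when its value is true, numbers must be even.
ajv.addKeyword('xyz-even', {
  type: 'number',                   // only applied to numbers; other data types pass
  metaSchema: { type: 'boolean' },  // validates the keyword value itself
  compile: function (schemaValue) {
    return function (data) {
      return schemaValue === false || data % 2 === 0;
    };
  }
});

var validate = ajv.compile({ type: 'number', 'xyz-even': true });
console.log(validate(4)); // true
console.log(validate(5)); // false
```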
##### .getKeyword(String keyword) -&gt; Object|Boolean Returns custom keyword definition, `true` for pre-defined keywords and `false` if the keyword is unknown. ##### .removeKeyword(String keyword) -&gt; Ajv Removes custom or pre-defined keyword so you can redefine them. While this method can be used to extend pre-defined keywords, it can also be used to completely change their meaning - it may lead to unexpected results. __Please note__: schemas compiled before the keyword is removed will continue to work without changes. To recompile schemas use `removeSchema` method and compile them again. ##### .errorsText([Array&lt;Object&gt; errors [, Object options]]) -&gt; String Returns the text with all errors in a String. Options can have properties `separator` (string used to separate errors, ", " by default) and `dataVar` (the variable name that dataPaths are prefixed with, "data" by default). ## Options Defaults: ```javascript { // validation and reporting options: $data: false, allErrors: false, verbose: false, $comment: false, // NEW in Ajv version 6.0 jsonPointers: false, uniqueItems: true, unicode: true, nullable: false, format: 'fast', formats: {}, unknownFormats: true, schemas: {}, logger: undefined, // referenced schema options: schemaId: '$id', missingRefs: true, extendRefs: 'ignore', // recommended 'fail' loadSchema: undefined, // function(uri: string): Promise {} // options to modify validated data: removeAdditional: false, useDefaults: false, coerceTypes: false, // strict mode options strictDefaults: false, strictKeywords: false, strictNumbers: false, // asynchronous validation options: transpile: undefined, // requires ajv-async package // advanced options: meta: true, validateSchema: true, addUsedSchema: true, inlineRefs: true, passContext: false, loopRequired: Infinity, ownProperties: false, multipleOfPrecision: false, errorDataPath: 'object', // deprecated messages: true, sourceCode: false, processCode: undefined, // function (str: string, schema: object): string {} cache: new Cache, serialize: undefined } ``` ##### Validation and reporting options - _$data_: support [$data references](#data-reference). Draft 6 meta-schema that is added by default will be extended to allow them. If you want to use another meta-schema you need to use $dataMetaSchema method to add support for $data reference. See [API](#api). - _allErrors_: check all rules collecting all errors. Default is to return after the first error. - _verbose_: include the reference to the part of the schema (`schema` and `parentSchema`) and validated data in errors (false by default). - _$comment_ (NEW in Ajv version 6.0): log or pass the value of `$comment` keyword to a function. Option values: - `false` (default): ignore $comment keyword. - `true`: log the keyword value to console. - function: pass the keyword value, its schema path and root schema to the specified function - _jsonPointers_: set `dataPath` property of errors using [JSON Pointers](https://tools.ietf.org/html/rfc6901) instead of JavaScript property access notation. - _uniqueItems_: validate `uniqueItems` keyword (true by default). - _unicode_: calculate correct length of strings with unicode pairs (true by default). Pass `false` to use `.length` of strings that is faster, but gives "incorrect" lengths of strings with unicode pairs - each unicode pair is counted as two characters. - _nullable_: support keyword "nullable" from [Open API 3 specification](https://swagger.io/docs/specification/data-models/data-types/). - _format_: formats validation mode. 
Option values: - `"fast"` (default) - simplified and fast validation (see [Formats](#formats) for details of which formats are available and affected by this option). - `"full"` - more restrictive and slow validation. E.g., 25:00:00 and 2015/14/33 will be invalid time and date in 'full' mode but it will be valid in 'fast' mode. - `false` - ignore all format keywords. - _formats_: an object with custom formats. Keys and values will be passed to `addFormat` method. - _keywords_: an object with custom keywords. Keys and values will be passed to `addKeyword` method. - _unknownFormats_: handling of unknown formats. Option values: - `true` (default) - if an unknown format is encountered the exception is thrown during schema compilation. If `format` keyword value is [$data reference](#data-reference) and it is unknown the validation will fail. - `[String]` - an array of unknown format names that will be ignored. This option can be used to allow usage of third party schemas with format(s) for which you don't have definitions, but still fail if another unknown format is used. If `format` keyword value is [$data reference](#data-reference) and it is not in this array the validation will fail. - `"ignore"` - to log warning during schema compilation and always pass validation (the default behaviour in versions before 5.0.0). This option is not recommended, as it allows to mistype format name and it won't be validated without any error message. This behaviour is required by JSON Schema specification. - _schemas_: an array or object of schemas that will be added to the instance. In case you pass the array the schemas must have IDs in them. When the object is passed the method `addSchema(value, key)` will be called for each schema in this object. - _logger_: sets the logging method. Default is the global `console` object that should have methods `log`, `warn` and `error`. See [Error logging](#error-logging). Option values: - custom logger - it should have methods `log`, `warn` and `error`. If any of these methods is missing an exception will be thrown. - `false` - logging is disabled. ##### Referenced schema options - _schemaId_: this option defines which keywords are used as schema URI. Option value: - `"$id"` (default) - only use `$id` keyword as schema URI (as specified in JSON Schema draft-06/07), ignore `id` keyword (if it is present a warning will be logged). - `"id"` - only use `id` keyword as schema URI (as specified in JSON Schema draft-04), ignore `$id` keyword (if it is present a warning will be logged). - `"auto"` - use both `$id` and `id` keywords as schema URI. If both are present (in the same schema object) and different the exception will be thrown during schema compilation. - _missingRefs_: handling of missing referenced schemas. Option values: - `true` (default) - if the reference cannot be resolved during compilation the exception is thrown. The thrown error has properties `missingRef` (with hash fragment) and `missingSchema` (without it). Both properties are resolved relative to the current base id (usually schema id, unless it was substituted). - `"ignore"` - to log error during compilation and always pass validation. - `"fail"` - to log error and successfully compile schema but fail validation if this rule is checked. - _extendRefs_: validation of other keywords when `$ref` is present in the schema. 
Option values: - `"ignore"` (default) - when `$ref` is used other keywords are ignored (as per [JSON Reference](https://tools.ietf.org/html/draft-pbryan-zyp-json-ref-03#section-3) standard). A warning will be logged during the schema compilation. - `"fail"` (recommended) - if other validation keywords are used together with `$ref` the exception will be thrown when the schema is compiled. This option is recommended to make sure schema has no keywords that are ignored, which can be confusing. - `true` - validate all keywords in the schemas with `$ref` (the default behaviour in versions before 5.0.0). - _loadSchema_: asynchronous function that will be used to load remote schemas when `compileAsync` [method](#api-compileAsync) is used and some reference is missing (option `missingRefs` should NOT be 'fail' or 'ignore'). This function should accept remote schema uri as a parameter and return a Promise that resolves to a schema. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### Options to modify validated data - _removeAdditional_: remove additional properties - see example in [Filtering data](#filtering-data). This option is not used if schema is added with `addMetaSchema` method. Option values: - `false` (default) - not to remove additional properties - `"all"` - all additional properties are removed, regardless of `additionalProperties` keyword in schema (and no validation is made for them). - `true` - only additional properties with `additionalProperties` keyword equal to `false` are removed. - `"failing"` - additional properties that fail schema validation will be removed (where `additionalProperties` keyword is `false` or schema). - _useDefaults_: replace missing or undefined properties and items with the values from corresponding `default` keywords. Default behaviour is to ignore `default` keywords. This option is not used if schema is added with `addMetaSchema` method. See examples in [Assigning defaults](#assigning-defaults). Option values: - `false` (default) - do not use defaults - `true` - insert defaults by value (object literal is used). - `"empty"` - in addition to missing or undefined, use defaults for properties and items that are equal to `null` or `""` (an empty string). - `"shared"` (deprecated) - insert defaults by reference. If the default is an object, it will be shared by all instances of validated data. If you modify the inserted default in the validated data, it will be modified in the schema as well. - _coerceTypes_: change data type of data to match `type` keyword. See the example in [Coercing data types](#coercing-data-types) and [coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md). Option values: - `false` (default) - no type coercion. - `true` - coerce scalar data types. - `"array"` - in addition to coercions between scalar types, coerce scalar data to an array with one element and vice versa (as required by the schema). ##### Strict mode options - _strictDefaults_: report ignored `default` keywords in schemas. Option values: - `false` (default) - ignored defaults are not reported - `true` - if an ignored default is present, throw an error - `"log"` - if an ignored default is present, log warning - _strictKeywords_: report unknown keywords in schemas. 
Option values: - `false` (default) - unknown keywords are not reported - `true` - if an unknown keyword is present, throw an error - `"log"` - if an unknown keyword is present, log warning - _strictNumbers_: validate numbers strictly, failing validation for NaN and Infinity. Option values: - `false` (default) - NaN or Infinity will pass validation for numeric types - `true` - NaN or Infinity will not pass validation for numeric types ##### Asynchronous validation options - _transpile_: Requires [ajv-async](https://github.com/ajv-validator/ajv-async) package. It determines whether Ajv transpiles compiled asynchronous validation function. Option values: - `undefined` (default) - transpile with [nodent](https://github.com/MatAtBread/nodent) if async functions are not supported. - `true` - always transpile with nodent. - `false` - do not transpile; if async functions are not supported an exception will be thrown. ##### Advanced options - _meta_: add [meta-schema](http://json-schema.org/documentation.html) so it can be used by other schemas (true by default). If an object is passed, it will be used as the default meta-schema for schemas that have no `$schema` keyword. This default meta-schema MUST have `$schema` keyword. - _validateSchema_: validate added/compiled schemas against meta-schema (true by default). `$schema` property in the schema can be http://json-schema.org/draft-07/schema or absent (draft-07 meta-schema will be used) or can be a reference to the schema previously added with `addMetaSchema` method. Option values: - `true` (default) - if the validation fails, throw the exception. - `"log"` - if the validation fails, log error. - `false` - skip schema validation. - _addUsedSchema_: by default methods `compile` and `validate` add schemas to the instance if they have `$id` (or `id`) property that doesn't start with "#". If `$id` is present and it is not unique the exception will be thrown. Set this option to `false` to skip adding schemas to the instance and the `$id` uniqueness check when these methods are used. This option does not affect `addSchema` method. - _inlineRefs_: Affects compilation of referenced schemas. Option values: - `true` (default) - the referenced schemas that don't have refs in them are inlined, regardless of their size - that substantially improves performance at the cost of the bigger size of compiled schema functions. - `false` - to not inline referenced schemas (they will be compiled as separate functions). - integer number - to limit the maximum number of keywords of the schema that will be inlined. - _passContext_: pass validation context to custom keyword functions. If this option is `true` and you pass some context to the compiled validation function with `validate.call(context, data)`, the `context` will be available as `this` in your custom keywords. By default `this` is Ajv instance. - _loopRequired_: by default `required` keyword is compiled into a single expression (or a sequence of statements in `allErrors` mode). In case of a very large number of properties in this keyword it may result in a very big validation function. Pass integer to set the number of properties above which `required` keyword will be validated in a loop - smaller validation function size but also worse performance. - _ownProperties_: by default Ajv iterates over all enumerable object properties; when this option is `true` only own enumerable object properties (i.e. found directly on the object rather than on its prototype) are iterated. Contributed by @mbroadst. 
- _multipleOfPrecision_: by default `multipleOf` keyword is validated by comparing the result of division with parseInt() of that result. It works for dividers that are bigger than 1. For small dividers such as 0.01 the result of the division is usually not integer (even when it should be integer, see issue [#84](https://github.com/ajv-validator/ajv/issues/84)). If you need to use fractional dividers set this option to some positive integer N to have `multipleOf` validated using this formula: `Math.abs(Math.round(division) - division) < 1e-N` (it is slower but allows for float arithmetics deviations). - _errorDataPath_ (deprecated): set `dataPath` to point to 'object' (default) or to 'property' when validating keywords `required`, `additionalProperties` and `dependencies`. - _messages_: Include human-readable messages in errors. `true` by default. `false` can be passed when custom messages are used (e.g. with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n)). - _sourceCode_: add `sourceCode` property to validating function (for debugging; this code can be different from the result of toString call). - _processCode_: an optional function to process generated code before it is passed to Function constructor. It can be used to either beautify (the validating function is generated without line-breaks) or to transpile code. Starting from version 5.0.0 this option replaced options: - `beautify` that formatted the generated function using [js-beautify](https://github.com/beautify-web/js-beautify). If you want to beautify the generated code pass a function calling `require('js-beautify').js_beautify` as `processCode: code => js_beautify(code)`. - `transpile` that transpiled asynchronous validation function. You can still use `transpile` option with [ajv-async](https://github.com/ajv-validator/ajv-async) package. See [Asynchronous validation](#asynchronous-validation) for more information. - _cache_: an optional instance of cache to store compiled schemas using stable-stringified schema as a key. For example, set-associative cache [sacjs](https://github.com/epoberezkin/sacjs) can be used. If not passed then a simple hash is used which is good enough for the common use case (a limited number of statically defined schemas). Cache should have methods `put(key, value)`, `get(key)`, `del(key)` and `clear()`. - _serialize_: an optional function to serialize schema to cache key. Pass `false` to use schema itself as a key (e.g., if WeakMap used as a cache). By default [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) is used. ## Validation errors In case of validation failure, Ajv assigns the array of errors to `errors` property of validation function (or to `errors` property of Ajv instance when `validate` or `validateSchema` methods were called). In case of [asynchronous validation](#asynchronous-validation), the returned promise is rejected with exception `Ajv.ValidationError` that has `errors` property. ### Error objects Each error is an object with the following properties: - _keyword_: validation keyword. - _dataPath_: the path to the part of the data that was validated. By default `dataPath` uses JavaScript property access notation (e.g., `".prop[1].subProp"`). When the option `jsonPointers` is true (see [Options](#options)) `dataPath` will be set using JSON pointer standard (e.g., `"/prop/1/subProp"`). - _schemaPath_: the path (JSON-pointer as a URI fragment) to the schema of the keyword that failed validation. 
- _params_: the object with the additional information about error that can be used to create custom error messages (e.g., using [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package). See below for parameters set by all keywords. - _message_: the standard error message (can be excluded with option `messages` set to false). - _schema_: the schema of the keyword (added with `verbose` option). - _parentSchema_: the schema containing the keyword (added with `verbose` option). - _data_: the data validated by the keyword (added with `verbose` option). __Please note__: `propertyNames` keyword schema validation errors have an additional property `propertyName`, and `dataPath` points to the object. After schema validation for each property name, if it is invalid an additional error is added with the property `keyword` equal to `"propertyNames"`. ### Error parameters Properties of `params` object in errors depend on the keyword that failed validation. - `maxItems`, `minItems`, `maxLength`, `minLength`, `maxProperties`, `minProperties` - property `limit` (number, the schema of the keyword). - `additionalItems` - property `limit` (the maximum number of allowed items in case when `items` keyword is an array of schemas and `additionalItems` is false). - `additionalProperties` - property `additionalProperty` (the property not used in `properties` and `patternProperties` keywords). - `dependencies` - properties: - `property` (dependent property), - `missingProperty` (required missing dependency - only the first one is reported currently), - `deps` (required dependencies, comma-separated list as a string), - `depsCount` (the number of required dependencies). - `format` - property `format` (the schema of the keyword). - `maximum`, `minimum` - properties: - `limit` (number, the schema of the keyword), - `exclusive` (boolean, the schema of `exclusiveMaximum` or `exclusiveMinimum`), - `comparison` (string, comparison operation to compare the data to the limit, with the data on the left and the limit on the right; can be "<", "<=", ">", ">="). - `multipleOf` - property `multipleOf` (the schema of the keyword). - `pattern` - property `pattern` (the schema of the keyword). - `required` - property `missingProperty` (required property that is missing). - `propertyNames` - property `propertyName` (an invalid property name). - `patternRequired` (in ajv-keywords) - property `missingPattern` (required pattern that did not match any property). - `type` - property `type` (required type(s), a string, can be a comma-separated list). - `uniqueItems` - properties `i` and `j` (indices of duplicate items). - `const` - property `allowedValue` pointing to the value (the schema of the keyword). - `enum` - property `allowedValues` pointing to the array of values (the schema of the keyword). - `$ref` - property `ref` with the referenced schema URI. - `oneOf` - property `passingSchemas` (array of indices of passing schemas, null if no schema passes). - custom keywords (in case keyword definition doesn't create errors) - property `keyword` (the keyword name). ### Error logging Using the `logger` option when initializing Ajv will allow you to define custom logging. Here you can build upon the existing logging. The use of other logging packages is supported as long as the package or its associated wrapper exposes the required methods. If any of the required methods are missing an exception will be thrown.
- **Required Methods**: `log`, `warn`, `error` ```javascript var otherLogger = new OtherLogger(); var ajv = new Ajv({ logger: { log: console.log.bind(console), warn: function warn() { otherLogger.logWarn.apply(otherLogger, arguments); }, error: function error() { otherLogger.logError.apply(otherLogger, arguments); console.error.apply(console, arguments); } } }); ``` ## Plugins Ajv can be extended with plugins that add custom keywords, formats or functions to process generated code. When such plugin is published as npm package it is recommended that it follows these conventions: - it exports a function - this function accepts ajv instance as the first parameter and returns the same instance to allow chaining - this function can accept an optional configuration as the second parameter If you have published a useful plugin please submit a PR to add it to the next section. ## Related packages - [ajv-async](https://github.com/ajv-validator/ajv-async) - plugin to configure async validation mode - [ajv-bsontype](https://github.com/BoLaMN/ajv-bsontype) - plugin to validate mongodb's bsonType formats - [ajv-cli](https://github.com/jessedc/ajv-cli) - command line interface - [ajv-errors](https://github.com/ajv-validator/ajv-errors) - plugin for custom error messages - [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) - internationalised error messages - [ajv-istanbul](https://github.com/ajv-validator/ajv-istanbul) - plugin to instrument generated validation code to measure test coverage of your schemas - [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) - plugin with custom validation keywords (select, typeof, etc.) - [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) - plugin with keywords $merge and $patch - [ajv-pack](https://github.com/ajv-validator/ajv-pack) - produces a compact module exporting validation functions - [ajv-formats-draft2019](https://github.com/luzlab/ajv-formats-draft2019) - format validators for draft2019 that aren't already included in ajv (ie. `idn-hostname`, `idn-email`, `iri`, `iri-reference` and `duration`). ## Some packages using Ajv - [webpack](https://github.com/webpack/webpack) - a module bundler. 
Its main purpose is to bundle JavaScript files for usage in a browser - [jsonscript-js](https://github.com/JSONScript/jsonscript-js) - the interpreter for [JSONScript](http://www.jsonscript.org) - scripted processing of existing endpoints and services - [osprey-method-handler](https://github.com/mulesoft-labs/osprey-method-handler) - Express middleware for validating requests and responses based on a RAML method object, used in [osprey](https://github.com/mulesoft/osprey) - validating API proxy generated from a RAML definition - [har-validator](https://github.com/ahmadnassri/har-validator) - HTTP Archive (HAR) validator - [jsoneditor](https://github.com/josdejong/jsoneditor) - a web-based tool to view, edit, format, and validate JSON http://jsoneditoronline.org - [JSON Schema Lint](https://github.com/nickcmaynard/jsonschemalint) - a web tool to validate JSON/YAML document against a single JSON Schema http://jsonschemalint.com - [objection](https://github.com/vincit/objection.js) - SQL-friendly ORM for Node.js - [table](https://github.com/gajus/table) - formats data into a string table - [ripple-lib](https://github.com/ripple/ripple-lib) - a JavaScript API for interacting with [Ripple](https://ripple.com) in Node.js and the browser - [restbase](https://github.com/wikimedia/restbase) - distributed storage with REST API & dispatcher for backend services built to provide a low-latency & high-throughput API for Wikipedia / Wikimedia content - [hippie-swagger](https://github.com/CacheControl/hippie-swagger) - [Hippie](https://github.com/vesln/hippie) wrapper that provides end to end API testing with swagger validation - [react-form-controlled](https://github.com/seeden/react-form-controlled) - React controlled form components with validation - [rabbitmq-schema](https://github.com/tjmehta/rabbitmq-schema) - a schema definition module for RabbitMQ graphs and messages - [@query/schema](https://www.npmjs.com/package/@query/schema) - stream filtering with a URI-safe query syntax parsing to JSON Schema - [chai-ajv-json-schema](https://github.com/peon374/chai-ajv-json-schema) - chai plugin to us JSON Schema with expect in mocha tests - [grunt-jsonschema-ajv](https://github.com/SignpostMarv/grunt-jsonschema-ajv) - Grunt plugin for validating files against JSON Schema - [extract-text-webpack-plugin](https://github.com/webpack-contrib/extract-text-webpack-plugin) - extract text from bundle into a file - [electron-builder](https://github.com/electron-userland/electron-builder) - a solution to package and build a ready for distribution Electron app - [addons-linter](https://github.com/mozilla/addons-linter) - Mozilla Add-ons Linter - [gh-pages-generator](https://github.com/epoberezkin/gh-pages-generator) - multi-page site generator converting markdown files to GitHub pages - [ESLint](https://github.com/eslint/eslint) - the pluggable linting utility for JavaScript and JSX ## Tests ``` npm install git submodule update --init npm test ``` ## Contributing All validation functions are generated using doT templates in [dot](https://github.com/ajv-validator/ajv/tree/master/lib/dot) folder. Templates are precompiled so doT is not a run-time dependency. `npm run build` - compiles templates to [dotjs](https://github.com/ajv-validator/ajv/tree/master/lib/dotjs) folder. 
`npm run watch` - automatically compiles templates when files in dot folder change Please see [Contributing guidelines](https://github.com/ajv-validator/ajv/blob/master/CONTRIBUTING.md) ## Changes history See https://github.com/ajv-validator/ajv/releases __Please note__: [Changes in version 7.0.0-beta](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0) [Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0). ## Code of conduct Please review and follow the [Code of conduct](https://github.com/ajv-validator/ajv/blob/master/CODE_OF_CONDUCT.md). Please report any unacceptable behaviour to ajv.validator@gmail.com - it will be reviewed by the project team. ## Open-source software support Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides a centralised support to open-source software users, in addition to the support provided by software maintainers. ## License [MIT](https://github.com/ajv-validator/ajv/blob/master/LICENSE) # yargs-parser [![Build Status](https://travis-ci.org/yargs/yargs-parser.svg)](https://travis-ci.org/yargs/yargs-parser) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. <img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js var argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```sh node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js var argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```sh { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js var parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## API ### require('yargs-parser')(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). 
* `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```sh node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```sh node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```sh node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```sh node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```sh node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```sh node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? 
```sh node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```sh node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```sh node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```sh node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. ### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```sh node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```sh node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```sh node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```sh node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```sh node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```sh node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```sh node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```sh node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```sh node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```sh node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. 
_If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. _If disabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC # assemblyscript-regex A regex engine for AssemblyScript. [AssemblyScript](https://www.assemblyscript.org/) is a new language, based on TypeScript, that runs on WebAssembly. AssemblyScript has a lightweight standard library, but lacks support for regular expressions. The project fills that gap! This project exposes an API that mirrors the JavaScript [RegExp](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp) class: ```javascript const regex = new RegExp("fo*", "g"); const str = "table football, foul"; let match: Match | null = regex.exec(str); while (match != null) { // first iteration // match.index = 6 // match.matches[0] = "foo" // second iteration // match.index = 16 // match.matches[0] = "fo" match = regex.exec(str); } ``` ## Project status The initial focus of this implementation has been feature support and functionality over performance. It currently supports a sufficient number of regex features to be considered useful, including most character classes, common assertions, groups, alternations, capturing groups and quantifiers. The next phase of development will focus on more extensive testing and performance. The project currently has reasonable unit test coverage, focussed on positive and negative test cases on a per-feature basis. It also includes a more exhaustive test suite with test cases borrowed from another regex library. ### Feature support Based on the classification within the [MDN cheatsheet](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions/Cheatsheet): **Character sets** - [x] . - [x] \d - [x] \D - [x] \w - [x] \W - [x] \s - [x] \S - [x] \t - [x] \r - [x] \n - [x] \v - [x] \f - [ ] [\b] - [ ] \0 - [ ] \cX - [x] \xhh - [x] \uhhhh - [ ] \u{hhhh} or \u{hhhhh} - [x] \ **Assertions** - [x] ^ - [x] $ - [ ] \b - [ ] \B **Other assertions** - [ ] x(?=y) Lookahead assertion - [ ] x(?!y) Negative lookahead assertion - [ ] (?<=y)x Lookbehind assertion - [ ] (?<!y)x Negative lookbehind assertion **Groups and ranges** - [x] x|y - [x] [xyz][a-c] - [x] [^xyz][^a-c] - [x] (x) capturing group - [ ] \n back reference - [ ] (?<Name>x) named capturing group - [x] (?:x) Non-capturing group **Quantifiers** - [x] x\* - [x] x+ - [x] x? - [x] x{n} - [x] x{n,} - [x] x{n,m} - [ ] x\*? / x+? / ... **RegExp** - [x] global - [ ] sticky - [x] case insensitive - [x] multiline - [x] dotAll - [ ] unicode ### Development This project is open source, MIT licenced and your contributions are very much welcomed.
To get started, check out the repository and install dependencies: ``` $ npm install ``` A few general points about the tools and processes this project uses: - This project uses prettier for code formatting and eslint to provide additional syntactic checks. These are both run on `npm test` and as part of the CI build. - The unit tests are executed using [as-pect](https://github.com/jtenner/as-pect) - a native AssemblyScript test runner - The specification tests are within the `spec` folder. The `npm run test:generate` target transforms these tests into as-pect tests which execute as part of the standard build / test cycle - In order to support improved debugging you can execute this library as TypeScript (rather than WebAssembly), via the `npm run tsrun` target. # set-blocking [![Build Status](https://travis-ci.org/yargs/set-blocking.svg)](https://travis-ci.org/yargs/set-blocking) [![NPM version](https://img.shields.io/npm/v/set-blocking.svg)](https://www.npmjs.com/package/set-blocking) [![Coverage Status](https://coveralls.io/repos/yargs/set-blocking/badge.svg?branch=)](https://coveralls.io/r/yargs/set-blocking?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) set blocking on `stdout` and `stderr`, ensuring that terminal output does not truncate. ```js const setBlocking = require('set-blocking') setBlocking(true) console.log(someLargeStringToOutput) ``` ## Historical Context/Word of Warning This was created as a shim to address the bug discussed in [node #6456](https://github.com/nodejs/node/issues/6456). This bug crops up on newer versions of Node.js (`0.12+`), truncating terminal output. You should be mindful of the side-effects caused by using `set-blocking`: * if your module sets blocking to `true`, it will affect other modules consuming your library. In [yargs](https://github.com/yargs/yargs/blob/master/yargs.js#L653) we only call `setBlocking(true)` once we already know we are about to call `process.exit(code)`. * this patch will not apply to subprocesses spawned with `isTTY = true`; this is the [default `spawn()` behavior](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options). ## License ISC # [nearley](http://nearley.js.org) ↗️ [![JS.ORG](https://img.shields.io/badge/js.org-nearley-ffb400.svg?style=flat-square)](http://js.org) [![npm version](https://badge.fury.io/js/nearley.svg)](https://badge.fury.io/js/nearley) nearley is a simple, fast and powerful parsing toolkit. It consists of: 1. [A powerful, modular DSL for describing languages](https://nearley.js.org/docs/grammar) 2. [An efficient, lightweight Earley parser](https://nearley.js.org/docs/parser) 3. [Loads of tools, editor plug-ins, and other goodies!](https://nearley.js.org/docs/tooling) nearley is a **streaming** parser with support for catching **errors** gracefully and providing _all_ parsings for **ambiguous** grammars. It is compatible with a variety of **lexers** (we recommend [moo](http://github.com/tjvr/moo)). It comes with tools for creating **tests**, **railroad diagrams** and **fuzzers** from your grammars, and has support for a variety of editors and platforms. It works in both node and the browser. Unlike most other parser generators, nearley can handle *any* grammar you can define in BNF (and more!). In particular, while most existing JS parsers such as PEGjs and Jison choke on certain grammars (e.g.
[left recursive ones](http://en.wikipedia.org/wiki/Left_recursion)), nearley handles them easily and efficiently by using the [Earley parsing algorithm](https://en.wikipedia.org/wiki/Earley_parser). nearley is used by a wide variety of projects: - [artificial intelligence](https://github.com/ChalmersGU-AI-course/shrdlite-course-project) and - [computational linguistics](https://wiki.eecs.yorku.ca/course_archive/2014-15/W/6339/useful_handouts) classes at universities; - [file format parsers](https://github.com/raymond-h/node-dmi); - [data-driven markup languages](https://github.com/idyll-lang/idyll-compiler); - [compilers for real-world programming languages](https://github.com/sizigi/lp5562); - and nearley itself! The nearley compiler is bootstrapped. nearley is an npm [staff pick](https://www.npmjs.com/package/npm-collection-staff-picks). ## Documentation Please visit our website https://nearley.js.org to get started! You will find a tutorial, detailed reference documents, and links to several real-world examples to get inspired. ## Contributing Please read [this document](.github/CONTRIBUTING.md) *before* working on nearley. If you are interested in contributing but unsure where to start, take a look at the issues labeled "up for grabs" on the issue tracker, or message a maintainer (@kach or @tjvr on Github). nearley is MIT licensed. A big thanks to Nathan Dinsmore for teaching me how to Earley, Aria Stewart for helping structure nearley into a mature module, and Robin Windels for bootstrapping the grammar. Additionally, Jacob Edelman wrote an experimental JavaScript parser with nearley and contributed ideas for EBNF support. Joshua T. Corbin refactored the compiler to be much, much prettier. Bojidar Marinov implemented postprocessors-in-other-languages. Shachar Itzhaky fixed a subtle bug with nullables. ## Citing nearley If you are citing nearley in academic work, please use the following BibTeX entry. ```bibtex @misc{nearley, author = "Kartik Chandra and Tim Radvan", title = "{nearley}: a parsing toolkit for {JavaScript}", year = {2014}, doi = {10.5281/zenodo.3897993}, url = {https://github.com/kach/nearley} } ``` bs58 ==== [![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58) JavaScript component to compute base 58 encoding. This encoding is typically used for cryptocurrencies such as Bitcoin. **Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library. Install ------- npm i --save bs58 API --- ### encode(input) `input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`. **example**: ```js const bs58 = require('bs58') const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex') const address = bs58.encode(bytes) console.log(address) // => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS ``` ### decode(input) `input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html). **example**: ```js const bs58 = require('bs58') const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS' const bytes = bs58.decode(address) console.log(bytes.toString('hex')) // => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187 ``` Hack / Test ----------- Uses JavaScript standard style.
Read more: [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Credits ------- - [Mike Hearn](https://github.com/mikehearn) for original Java implementation - [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript - [Stephan Pair](https://github.com/gasteve) for buffer improvements - [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib - [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency License ------- MIT ## Follow Redirects Drop-in replacement for Nodes `http` and `https` that automatically follows redirects. [![npm version](https://img.shields.io/npm/v/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) [![Build Status](https://travis-ci.org/follow-redirects/follow-redirects.svg?branch=master)](https://travis-ci.org/follow-redirects/follow-redirects) [![Coverage Status](https://coveralls.io/repos/follow-redirects/follow-redirects/badge.svg?branch=master)](https://coveralls.io/r/follow-redirects/follow-redirects?branch=master) [![Dependency Status](https://david-dm.org/follow-redirects/follow-redirects.svg)](https://david-dm.org/follow-redirects/follow-redirects) [![npm downloads](https://img.shields.io/npm/dm/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) `follow-redirects` provides [request](https://nodejs.org/api/http.html#http_http_request_options_callback) and [get](https://nodejs.org/api/http.html#http_http_get_options_callback) methods that behave identically to those found on the native [http](https://nodejs.org/api/http.html#http_http_request_options_callback) and [https](https://nodejs.org/api/https.html#https_https_request_options_callback) modules, with the exception that they will seamlessly follow redirects. ```javascript var http = require('follow-redirects').http; var https = require('follow-redirects').https; http.get('http://bit.ly/900913', function (response) { response.on('data', function (chunk) { console.log(chunk); }); }).on('error', function (err) { console.error(err); }); ``` You can inspect the final redirected URL through the `responseUrl` property on the `response`. If no redirection happened, `responseUrl` is the original request URL. ```javascript https.request({ host: 'bitly.com', path: '/UHfDGO', }, function (response) { console.log(response.responseUrl); // 'http://duckduckgo.com/robots.txt' }); ``` ## Options ### Global options Global options are set directly on the `follow-redirects` module: ```javascript var followRedirects = require('follow-redirects'); followRedirects.maxRedirects = 10; followRedirects.maxBodyLength = 20 * 1024 * 1024; // 20 MB ``` The following global options are supported: - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. 
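Since both limits surface as emitted errors, here is a minimal sketch of handling them (it reuses the shortlink from the example above; the exact error message is not guaranteed and may vary between versions):

```javascript
// A minimal sketch: lower the global redirect limit, then handle the
// 'error' event emitted once the limit is exceeded.
var followRedirects = require('follow-redirects');
followRedirects.maxRedirects = 2;

followRedirects.http.get('http://bit.ly/900913', function (response) {
  // Reached only if the target resolved within the allowed redirects.
  console.log('Final URL:', response.responseUrl);
  response.resume();
}).on('error', function (err) {
  // Emitted when maxRedirects (or maxBodyLength) is exceeded, or on network errors.
  console.error('Request failed:', err.message);
});
```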
### Per-request options Per-request options are set by passing an `options` object: ```javascript var url = require('url'); var followRedirects = require('follow-redirects'); var options = url.parse('http://bit.ly/900913'); options.maxRedirects = 10; followRedirects.http.request(options); ``` In addition to the [standard HTTP](https://nodejs.org/api/http.html#http_http_request_options_callback) and [HTTPS options](https://nodejs.org/api/https.html#https_https_request_options_callback), the following per-request options are supported: - `followRedirects` (default: `true`) – whether redirects should be followed. - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. - `agents` (default: `undefined`) – sets the `agent` option per protocol, since HTTP and HTTPS use different agents. Example value: `{ http: new http.Agent(), https: new https.Agent() }` - `trackRedirects` (default: `false`) – whether to store the redirected response details into the `redirects` array on the response object. ### Advanced usage By default, `follow-redirects` will use the Node.js default implementations of [`http`](https://nodejs.org/api/http.html) and [`https`](https://nodejs.org/api/https.html). To enable features such as caching and/or intermediate request tracking, you might instead want to wrap `follow-redirects` around custom protocol implementations: ```javascript var followRedirects = require('follow-redirects').wrap({ http: require('your-custom-http'), https: require('your-custom-https'), }); ``` Such custom protocols only need an implementation of the `request` method. ## Browserify Usage Due to the way `XMLHttpRequest` works, the `browserify` versions of `http` and `https` already follow redirects. If you are *only* targeting the browser, then this library has little value for you. If you want to write cross-platform code for node and the browser, `follow-redirects` provides a great solution for making the native node modules behave the same as they do in browserified builds in the browser. To avoid bundling unnecessary code you should tell browserify to swap out `follow-redirects` with the standard modules when bundling. To make this easier, you need to change how you require the modules: ```javascript var http = require('follow-redirects/http'); var https = require('follow-redirects/https'); ``` You can then replace `follow-redirects` in your browserify configuration like so: ```javascript "browser": { "follow-redirects/http" : "http", "follow-redirects/https" : "https" } ``` The `browserify-http` module has not kept pace with node development, and no longer behaves identically to the native module when running in the browser. If you are experiencing problems, you may want to check out [browserify-http-2](https://www.npmjs.com/package/http-browserify-2). It is more actively maintained and attempts to address a few of the shortcomings of `browserify-http`. In that case, your browserify config should look something like this: ```javascript "browser": { "follow-redirects/http" : "browserify-http-2/http", "follow-redirects/https" : "browserify-http-2/https" } ``` ## Contributing Pull Requests are always welcome. Please [file an issue](https://github.com/follow-redirects/follow-redirects/issues) detailing your proposal before you invest your valuable time. Additional features and bug fixes should be accompanied by tests.
You can run the test suite locally with a simple `npm test` command. ## Debug Logging `follow-redirects` uses the excellent [debug](https://www.npmjs.com/package/debug) for logging. To turn on logging, set the environment variable `DEBUG=follow-redirects` for debug output from just this module. When running the test suite it is sometimes advantageous to set `DEBUG=*` to see output from the express server as well. ## Authors - Olivier Lalonde (olalonde@gmail.com) - James Talmage (james@talmage.io) - [Ruben Verborgh](https://ruben.verborgh.org/) ## License [MIT License](https://github.com/follow-redirects/follow-redirects/blob/master/LICENSE) # eslint-visitor-keys [![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys) Constants and utilities for visitor keys used to traverse an AST. ## 💿 Installation Use [npm] to install. ```bash $ npm install eslint-visitor-keys ``` ### Requirements - [Node.js] 4.0.0 or later. ## 📖 Usage ```js const evk = require("eslint-visitor-keys") ``` ### evk.KEYS > type: `{ [type: string]: string[] | undefined }` Visitor keys. These keys are frozen. This is an object. Keys are the type of [ESTree] nodes. Their values are an array of property names which have child nodes. For example: ``` console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"] ``` ### evk.getKeys(node) > type: `(node: object) => string[]` Get the visitor keys of a given AST node. This is similar to `Object.keys(node)` of the ES standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`. This will be used to traverse unknown nodes. For example: ``` const node = { type: "AssignmentExpression", left: { type: "Identifier", name: "foo" }, right: { type: "Literal", value: 0 } } console.log(evk.getKeys(node)) // → ["type", "left", "right"] ``` ### evk.unionWith(additionalKeys) > type: `(additionalKeys: object) => { [type: string]: string[] | undefined }` Make the union set of `evk.KEYS` and the given keys. - The order of keys is: `additionalKeys` first, then `evk.KEYS` concatenated after it. - Duplicated keys are removed, keeping the first occurrence. For example: ``` console.log(evk.unionWith({ MethodDefinition: ["decorators"] })) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... } ``` ## 📰 Change log See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases). ## 🍻 Contributing Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/). ### Development commands - `npm test` runs tests and measures code coverage. - `npm run lint` checks source code with ESLint. - `npm run coverage` opens the code coverage report of the previous test with your default browser. - `npm run release` publishes this package to the [npm] registry.
[npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree # is-extglob [![NPM version](https://img.shields.io/npm/v/is-extglob.svg?style=flat)](https://www.npmjs.com/package/is-extglob) [![NPM downloads](https://img.shields.io/npm/dm/is-extglob.svg?style=flat)](https://npmjs.org/package/is-extglob) [![Build Status](https://img.shields.io/travis/jonschlinkert/is-extglob.svg?style=flat)](https://travis-ci.org/jonschlinkert/is-extglob) > Returns true if a string has an extglob. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-extglob ``` ## Usage ```js var isExtglob = require('is-extglob'); ``` **True** ```js isExtglob('?(abc)'); isExtglob('@(abc)'); isExtglob('!(abc)'); isExtglob('*(abc)'); isExtglob('+(abc)'); ``` **False** Escaped extglobs: ```js isExtglob('\\?(abc)'); isExtglob('\\@(abc)'); isExtglob('\\!(abc)'); isExtglob('\\*(abc)'); isExtglob('\\+(abc)'); ``` Everything else... ```js isExtglob('foo.js'); isExtglob('!foo.js'); isExtglob('*.js'); isExtglob('**/abc.js'); isExtglob('abc/*.js'); isExtglob('abc/(aaa|bbb).js'); isExtglob('abc/[a-z].js'); isExtglob('abc/{a,b}.js'); isExtglob('abc/?.js'); isExtglob('abc.js'); isExtglob('abc/def/ghi.js'); ``` ## History **v2.0** Adds support for escaping. Escaped exglobs no longer return true. ## About ### Related projects * [has-glob](https://www.npmjs.com/package/has-glob): Returns `true` if an array has a glob pattern. | [homepage](https://github.com/jonschlinkert/has-glob "Returns `true` if an array has a glob pattern.") * [is-glob](https://www.npmjs.com/package/is-glob): Returns `true` if the given string looks like a glob pattern or an extglob pattern… [more](https://github.com/jonschlinkert/is-glob) | [homepage](https://github.com/jonschlinkert/is-glob "Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a bet") * [micromatch](https://www.npmjs.com/package/micromatch): Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | [homepage](https://github.com/jonschlinkert/micromatch "Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch.") ### Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Building docs _(This document was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme) (a [verb](https://github.com/verbose/verb) generator), please don't edit the readme directly. Any changes to the readme must be made in [.verb.md](.verb.md).)_ To generate the readme and API documentation with [verb](https://github.com/verbose/verb): ```sh $ npm install -g verb verb-generate-readme && verb ``` ### Running tests Install dev dependencies: ```sh $ npm install -d && npm test ``` ### Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](http://twitter.com/jonschlinkert) ### License Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT license](https://github.com/jonschlinkert/is-extglob/blob/master/LICENSE). 
*** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.1.31, on October 12, 2016._ # URI.js URI.js is an [RFC 3986](http://www.ietf.org/rfc/rfc3986.txt) compliant, scheme extendable URI parsing/validating/resolving library for all JavaScript environments (browsers, Node.js, etc). It is also compliant with the IRI ([RFC 3987](http://www.ietf.org/rfc/rfc3987.txt)), IDNA ([RFC 5890](http://www.ietf.org/rfc/rfc5890.txt)), IPv6 Address ([RFC 5952](http://www.ietf.org/rfc/rfc5952.txt)), IPv6 Zone Identifier ([RFC 6874](http://www.ietf.org/rfc/rfc6874.txt)) specifications. URI.js has an extensive test suite, and works in all (Node.js, web) environments. It weighs in at 6.4kb (gzipped, 17kb deflated). ## API ### Parsing URI.parse("uri://user:pass@example.com:123/one/two.three?q1=a1&q2=a2#body"); //returns: //{ // scheme : "uri", // userinfo : "user:pass", // host : "example.com", // port : 123, // path : "/one/two.three", // query : "q1=a1&q2=a2", // fragment : "body" //} ### Serializing URI.serialize({scheme : "http", host : "example.com", fragment : "footer"}) === "http://example.com/#footer" ### Resolving URI.resolve("uri://a/b/c/d?q", "../../g") === "uri://a/g" ### Normalizing URI.normalize("HTTP://ABC.com:80/%7Esmith/home.html") === "http://abc.com/~smith/home.html" ### Comparison URI.equal("example://a/b/c/%7Bfoo%7D", "eXAMPLE://a/./b/../b/%63/%7bfoo%7d") === true ### IP Support //IPv4 normalization URI.normalize("//192.068.001.000") === "//192.68.1.0" //IPv6 normalization URI.normalize("//[2001:0:0DB8::0:0001]") === "//[2001:0:db8::1]" //IPv6 zone identifier support URI.parse("//[2001:db8::7%25en1]"); //returns: //{ // host : "2001:db8::7%en1" //} ### IRI Support //convert IRI to URI URI.serialize(URI.parse("http://examplé.org/rosé")) === "http://xn--exampl-gva.org/ros%C3%A9" //convert URI to IRI URI.serialize(URI.parse("http://xn--exampl-gva.org/ros%C3%A9"), {iri:true}) === "http://examplé.org/rosé" ### Options All of the above functions can accept an additional options argument that is an object that can contain one or more of the following properties: * `scheme` (string) Indicates the scheme that the URI should be treated as, overriding the URI's normal scheme parsing behavior. * `reference` (string) If set to `"suffix"`, it indicates that the URI is in the suffix format, and the validator will use the option's `scheme` property to determine the URI's scheme. * `tolerant` (boolean, false) If set to `true`, the parser will relax URI resolving rules. * `absolutePath` (boolean, false) If set to `true`, the serializer will not resolve a relative `path` component. * `iri` (boolean, false) If set to `true`, the serializer will unescape non-ASCII characters as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `unicodeSupport` (boolean, false) If set to `true`, the parser will unescape non-ASCII characters in the parsed output as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `domainHost` (boolean, false) If set to `true`, the library will treat the `host` component as a domain name, and convert IDNs (International Domain Names) as per [RFC 5891](http://www.ietf.org/rfc/rfc5891.txt). ## Scheme Extendable URI.js supports inserting custom [scheme](http://en.wikipedia.org/wiki/URI_scheme) dependent processing rules. 
Currently, URI.js has built-in support for the following schemes: * http \[[RFC 2616](http://www.ietf.org/rfc/rfc2616.txt)\] * https \[[RFC 2818](http://www.ietf.org/rfc/rfc2818.txt)\] * ws \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\] * wss \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\] * mailto \[[RFC 6068](http://www.ietf.org/rfc/rfc6068.txt)\] * urn \[[RFC 2141](http://www.ietf.org/rfc/rfc2141.txt)\] * urn:uuid \[[RFC 4122](http://www.ietf.org/rfc/rfc4122.txt)\] ### HTTP/HTTPS Support URI.equal("HTTP://ABC.COM:80", "http://abc.com/") === true URI.equal("https://abc.com", "HTTPS://ABC.COM:443/") === true ### WS/WSS Support URI.parse("wss://example.com/foo?bar=baz"); //returns: //{ // scheme : "wss", // host: "example.com", // resourceName: "/foo?bar=baz", // secure: true, //} URI.equal("WS://ABC.COM:80/chat#one", "ws://abc.com/chat") === true ### Mailto Support URI.parse("mailto:alpha@example.com,bravo@example.com?subject=SUBSCRIBE&body=Sign%20me%20up!"); //returns: //{ // scheme : "mailto", // to : ["alpha@example.com", "bravo@example.com"], // subject : "SUBSCRIBE", // body : "Sign me up!" //} URI.serialize({ scheme : "mailto", to : ["alpha@example.com"], subject : "REMOVE", body : "Please remove me", headers : { cc : "charlie@example.com" } }) === "mailto:alpha@example.com?cc=charlie@example.com&subject=REMOVE&body=Please%20remove%20me" ### URN Support URI.parse("urn:example:foo"); //returns: //{ // scheme : "urn", // nid : "example", // nss : "foo", //} #### URN UUID Support URI.parse("urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6"); //returns: //{ // scheme : "urn", // nid : "uuid", // uuid : "f81d4fae-7dec-11d0-a765-00a0c91e6bf6", //} ## Usage To load in a browser, use the following tag: <script type="text/javascript" src="uri-js/dist/es5/uri.all.min.js"></script> To load in a CommonJS/Module environment, first install with npm/yarn by running on the command line: npm install uri-js # OR yarn add uri-js Then, in your code, load it using: const URI = require("uri-js"); If you are writing your code in ES6+ (ESNEXT) or TypeScript, you would load it using: import * as URI from "uri-js"; Or you can load just what you need using named exports: import { parse, serialize, resolve, resolveComponents, normalize, equal, removeDotSegments, pctEncChar, pctDecChars, escapeComponent, unescapeComponent } from "uri-js"; ## Breaking changes ### Breaking changes from 3.x URN parsing has been completely changed to better align with the specification. Scheme is now always `urn`, but has two new properties: `nid` which contains the Namespace Identifier, and `nss` which contains the Namespace Specific String. The `nss` property will be removed by higher order scheme handlers, such as the UUID URN scheme handler. The UUID of a URN can now be found in the `uuid` property. ### Breaking changes from 2.x URI validation has been removed as it was slow, exposed a vulnerability, and was generally not useful. ### Breaking changes from 1.x The `errors` array on parsed components is now an `error` string. # base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression.
**WARNING:** This module is **NOT RFC3548** compliant; it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. ## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base.

Base | Alphabet
------------- | -------------
2 | `01`
8 | `01234567`
11 | `0123456789a`
16 | `0123456789abcdef`
32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ`
32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32)
36 | `0123456789abcdefghijklmnopqrstuvwxyz`
58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz`
62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ`
64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/`
66 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~`

## How it works It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. Then for every leading zero in the input (not significant as a number) it will encode as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base. For example, a base58 alphabet packs roughly 5.858 bits per character. This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character. While unusual, this does mean that no padding is required and it works for bases like 43. ## LICENSE [MIT](LICENSE) A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets. # fast-json-stable-stringify Deterministic `JSON.stringify()` - a faster version of [@substack](https://github.com/substack)'s json-stable-stringify without [jsonify](https://github.com/substack/jsonify). You can also pass in a custom comparison function. [![Build Status](https://travis-ci.org/epoberezkin/fast-json-stable-stringify.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-json-stable-stringify) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-json-stable-stringify/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-json-stable-stringify?branch=master) # example ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; console.log(stringify(obj)); ``` output: ``` {"a":3,"b":[{"x":4,"y":5,"z":6},7],"c":8} ``` # methods ``` js var stringify = require('fast-json-stable-stringify') ``` ## var str = stringify(obj, opts) Return a deterministic stringified string `str` from the object `obj`. ## options ### cmp If `opts` is given, you can supply an `opts.cmp` to have a custom comparison function for object keys.
Your function `opts.cmp` is called with these parameters: ``` js opts.cmp({ key: akey, value: avalue }, { key: bkey, value: bvalue }) ``` For example, to sort on the object key names in reverse order you could write: ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; var s = stringify(obj, function (a, b) { return a.key < b.key ? 1 : -1; }); console.log(s); ``` which results in the output string: ``` {"c":8,"b":[{"z":6,"y":5,"x":4},7],"a":3} ``` Or if you wanted to sort on the object values in reverse order, you could write: ``` var stringify = require('fast-json-stable-stringify'); var obj = { d: 6, c: 5, b: [{z:3,y:2,x:1},9], a: 10 }; var s = stringify(obj, function (a, b) { return a.value < b.value ? 1 : -1; }); console.log(s); ``` which outputs: ``` {"d":6,"c":5,"b":[{"z":3,"y":2,"x":1},9],"a":10} ``` ### cycles Pass `true` in `opts.cycles` to stringify circular property as `__cycle__` - the result will not be a valid JSON string in this case. TypeError will be thrown in case of circular object without this option. # install With [npm](https://npmjs.org) do: ``` npm install fast-json-stable-stringify ``` # benchmark To run benchmark (requires Node.js 6+): ``` node benchmark ``` Results: ``` fast-json-stable-stringify x 17,189 ops/sec ±1.43% (83 runs sampled) json-stable-stringify x 13,634 ops/sec ±1.39% (85 runs sampled) fast-stable-stringify x 20,212 ops/sec ±1.20% (84 runs sampled) faster-stable-stringify x 15,549 ops/sec ±1.12% (84 runs sampled) The fastest is fast-stable-stringify ``` ## Enterprise support fast-json-stable-stringify package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-json-stable-stringify?utm_source=npm-fast-json-stable-stringify&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. # license [MIT](https://github.com/epoberezkin/fast-json-stable-stringify/blob/master/LICENSE) Standard library ================ Standard library components for use with `tsc` (portable) and `asc` (assembly). Base configurations (.json) and definition files (.d.ts) are relevant to `tsc` only and not used by `asc`. ## assemblyscript-temporal An implementation of temporal within AssemblyScript, with an initial focus on non-timezone-aware classes and functionality. ### Why? AssemblyScript has minimal `Date` support, however, the JS Date API itself is terrible and people tend not to use it that often. As a result libraries like moment / luxon have become staple replacements. However, there is now a [relatively mature TC39 proposal](https://github.com/tc39/proposal-temporal) that adds greatly improved date support to JS. The goal of this project is to implement Temporal for AssemblyScript. ### Usage This library currently supports the following types: #### `PlainDateTime` A `PlainDateTime` represents a calendar date and wall-clock time that does not carry time zone information, e.g. December 7th, 1995 at 3:00 PM (in the Gregorian calendar). 
For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaindatetime.html); this implementation follows the specification as closely as possible. You can create a `PlainDateTime` from individual components, a string or an object literal: ```javascript datetime = new PlainDateTime(1976, 11, 18, 15, 23, 30, 123, 456, 789); datetime.year; // 1976; datetime.month; // 11; // ... datetime.nanosecond; // 789; datetime = PlainDateTime.from("1976-11-18T12:34:56"); datetime.toString(); // "1976-11-18T12:34:56" datetime = PlainDateTime.from({ year: 1966, month: 3, day: 3 }); datetime.toString(); // "1966-03-03T00:00:00" ``` There are various ways you can manipulate a date: ```javascript // use 'with' to copy a date but with various property values overridden datetime = new PlainDateTime(1976, 11, 18, 15, 23, 30, 123, 456, 789); datetime.with({ year: 2019 }).toString(); // "2019-11-18T15:23:30.123456789" // use 'add' or 'subtract' to add / subtract a duration datetime = PlainDateTime.from("2020-01-12T15:00"); datetime.add({ months: 1 }).toString(); // "2020-02-12T15:00:00" // add / subtract support Duration objects or object literals datetime.add(new Duration(1)).toString(); // "2021-01-12T15:00:00" ``` You can compare dates and check for equality: ```javascript dt1 = PlainDateTime.from("1976-11-18"); dt2 = PlainDateTime.from("2019-10-29"); PlainDateTime.compare(dt1, dt1); // 0 PlainDateTime.compare(dt1, dt2); // -1 dt1.equals(dt1); // true ``` Currently `PlainDateTime` only supports the ISO 8601 (Gregorian) calendar. #### `PlainDate` A `PlainDate` object represents a calendar date that is not associated with a particular time or time zone, e.g. August 24th, 2006. For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaindate.html); this implementation follows the specification as closely as possible. The `PlainDate` API is almost identical to `PlainDateTime`, so see above for API usage examples. #### `PlainTime` A `PlainTime` object represents a wall-clock time that is not associated with a particular date or time zone, e.g. 7:39 PM. For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaintime.html); this implementation follows the specification as closely as possible. The `PlainTime` API is almost identical to `PlainDateTime`, so see above for API usage examples. #### `PlainMonthDay` A date without a year component. This is useful to express things like "Bastille Day is on the 14th of July". For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plainmonthday.html); this implementation follows the specification as closely as possible. ```javascript const monthDay = PlainMonthDay.from({ month: 7, day: 14 }); // => 07-14 const date = monthDay.toPlainDate({ year: 2030 }); // => 2030-07-14 date.dayOfWeek; // => 7 ``` The `PlainMonthDay` API is almost identical to `PlainDateTime`, so see above for more API usage examples. #### `PlainYearMonth` A date without a day component. This is useful to express things like "the October 2020 meeting". For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plainyearmonth.html); this implementation follows the specification as closely as possible. The `PlainYearMonth` API is almost identical to `PlainDateTime`, so see above for API usage examples.
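As a minimal sketch of the `PlainYearMonth` shape, assuming `from` and `toPlainDate` mirror the TC39 proposal in the same way `PlainMonthDay` does above (the specific values are illustrative):

```javascript
// assumes PlainYearMonth.from and toPlainDate behave as in the TC39 proposal
yearMonth = PlainYearMonth.from({ year: 2020, month: 10 });
yearMonth.toString();                      // "2020-10"
// combine with a day to get a full date, as with PlainMonthDay above
date = yearMonth.toPlainDate({ day: 14 }); // => 2020-10-14
date.dayOfWeek;                            // => 3 (Wednesday)
```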
#### `now` The `now` object has several methods which give information about the current time and date. ```javascript dateTime = now.plainDateTimeISO(); dateTime.toString(); // 2021-04-01T12:05:47.357 ``` ## Contributing This project is open source, MIT licensed and your contributions are very much welcomed. There is a [brief document that outlines implementation progress and priorities](./development.md). binaryen.js =========== **binaryen.js** is a port of [Binaryen](https://github.com/WebAssembly/binaryen) to the Web, allowing you to generate [WebAssembly](https://webassembly.org) using a JavaScript API. <a href="https://github.com/AssemblyScript/binaryen.js/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/binaryen.js/Build/master?label=build&logo=github" alt="Build status" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen.svg?label=latest&color=007acc&logo=npm" alt="npm version" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen/nightly.svg?label=nightly&color=007acc&logo=npm" alt="npm nightly version" /></a> Usage ----- ``` $> npm install binaryen ``` ```js var binaryen = require("binaryen"); // Create a module with a single function var myModule = new binaryen.Module(); myModule.addFunction("add", binaryen.createType([ binaryen.i32, binaryen.i32 ]), binaryen.i32, [ binaryen.i32 ], myModule.block(null, [ myModule.local.set(2, myModule.i32.add( myModule.local.get(0, binaryen.i32), myModule.local.get(1, binaryen.i32) ) ), myModule.return( myModule.local.get(2, binaryen.i32) ) ]) ); myModule.addFunctionExport("add", "add"); // Optimize the module using default passes and levels myModule.optimize(); // Validate the module if (!myModule.validate()) throw new Error("validation error"); // Generate text format and binary var textData = myModule.emitText(); var wasmData = myModule.emitBinary(); // Example usage with the WebAssembly API var compiled = new WebAssembly.Module(wasmData); var instance = new WebAssembly.Instance(compiled, {}); console.log(instance.exports.add(41, 1)); ``` The buildbot also publishes nightly versions once a day if there have been changes. The latest nightly can be installed through ``` $> npm install binaryen@nightly ``` or you can use one of the [previous versions](https://github.com/AssemblyScript/binaryen.js/tags) instead if necessary. ### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/AssemblyScript/binaryen.js@VERSION/index.js` * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/binaryen@VERSION/index.js` * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/binaryen@VERSION/index.js` Replace `VERSION` with a [specific version](https://github.com/AssemblyScript/binaryen.js/releases) or omit it (not recommended in production) to use master/latest. API --- **Please note** that the Binaryen API is evolving fast and that definitions and documentation provided by the package tend to get out of sync despite our best efforts. It's a bot after all. If you rely on binaryen.js and spot an issue, please consider sending a PR our way by updating [index.d.ts](./index.d.ts) and [README.md](./README.md) to reflect the [current API](https://github.com/WebAssembly/binaryen/blob/master/src/js/binaryen.js-post.js). 
<!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ### Contents - [Types](#types) - [Module construction](#module-construction) - [Module manipulation](#module-manipulation) - [Module validation](#module-validation) - [Module optimization](#module-optimization) - [Module creation](#module-creation) - [Expression construction](#expression-construction) - [Control flow](#control-flow) - [Variable accesses](#variable-accesses) - [Integer operations](#integer-operations) - [Floating point operations](#floating-point-operations) - [Datatype conversions](#datatype-conversions) - [Function calls](#function-calls) - [Linear memory accesses](#linear-memory-accesses) - [Host operations](#host-operations) - [Vector operations 🦄](#vector-operations-) - [Atomic memory accesses 🦄](#atomic-memory-accesses-) - [Atomic read-modify-write operations 🦄](#atomic-read-modify-write-operations-) - [Atomic wait and notify operations 🦄](#atomic-wait-and-notify-operations-) - [Sign extension operations 🦄](#sign-extension-operations-) - [Multi-value operations 🦄](#multi-value-operations-) - [Exception handling operations 🦄](#exception-handling-operations-) - [Reference types operations 🦄](#reference-types-operations-) - [Expression manipulation](#expression-manipulation) - [Relooper](#relooper) - [Source maps](#source-maps) - [Debugging](#debugging) <!-- END doctoc generated TOC please keep comment here to allow auto update --> [Future features](http://webassembly.org/docs/future-features/) 🦄 might not be supported by all runtimes. ### Types * **none**: `Type`<br /> The none type, e.g., `void`. * **i32**: `Type`<br /> 32-bit integer type. * **i64**: `Type`<br /> 64-bit integer type. * **f32**: `Type`<br /> 32-bit float type. * **f64**: `Type`<br /> 64-bit float (double) type. * **v128**: `Type`<br /> 128-bit vector type. 🦄 * **funcref**: `Type`<br /> A function reference. 🦄 * **anyref**: `Type`<br /> Any host reference. 🦄 * **nullref**: `Type`<br /> A null reference. 🦄 * **exnref**: `Type`<br /> An exception reference. 🦄 * **unreachable**: `Type`<br /> Special type indicating unreachable code when obtaining information about an expression. * **auto**: `Type`<br /> Special type used in **Module#block** exclusively. Lets the API figure out a block's result type automatically. * **createType**(types: `Type[]`): `Type`<br /> Creates a multi-value type from an array of types. * **expandType**(type: `Type`): `Type[]`<br /> Expands a multi-value type to an array of types. ### Module construction * new **Module**()<br /> Constructs a new module. * **parseText**(text: `string`): `Module`<br /> Creates a module from Binaryen's s-expression text format (not official stack-style text format). * **readBinary**(data: `Uint8Array`): `Module`<br /> Creates a module from binary data. ### Module manipulation * Module#**addFunction**(name: `string`, params: `Type`, results: `Type`, vars: `Type[]`, body: `ExpressionRef`): `FunctionRef`<br /> Adds a function. `vars` indicate additional locals, in the given order. * Module#**getFunction**(name: `string`): `FunctionRef`<br /> Gets a function, by name, * Module#**removeFunction**(name: `string`): `void`<br /> Removes a function, by name. * Module#**getNumFunctions**(): `number`<br /> Gets the number of functions within the module. * Module#**getFunctionByIndex**(index: `number`): `FunctionRef`<br /> Gets the function at the specified index. 
* Module#**addFunctionImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, params: `Type`, results: `Type`): `void`<br /> Adds a function import. * Module#**addTableImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a table import. There's just one table for now, using name `"0"`. * Module#**addMemoryImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a memory import. There's just one memory for now, using name `"0"`. * Module#**addGlobalImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, globalType: `Type`): `void`<br /> Adds a global variable import. Imported globals must be immutable. * Module#**addFunctionExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a function export. * Module#**addTableExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a table export. There's just one table for now, using name `"0"`. * Module#**addMemoryExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a memory export. There's just one memory for now, using name `"0"`. * Module#**addGlobalExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a global variable export. Exported globals must be immutable. * Module#**getNumExports**(): `number`<br /> Gets the number of exports witin the module. * Module#**getExportByIndex**(index: `number`): `ExportRef`<br /> Gets the export at the specified index. * Module#**removeExport**(externalName: `string`): `void`<br /> Removes an export, by external name. * Module#**addGlobal**(name: `string`, type: `Type`, mutable: `number`, value: `ExpressionRef`): `GlobalRef`<br /> Adds a global instance variable. * Module#**getGlobal**(name: `string`): `GlobalRef`<br /> Gets a global, by name, * Module#**removeGlobal**(name: `string`): `void`<br /> Removes a global, by name. * Module#**setFunctionTable**(initial: `number`, maximum: `number`, funcs: `string[]`, offset?: `ExpressionRef`): `void`<br /> Sets the contents of the function table. There's just one table for now, using name `"0"`. * Module#**getFunctionTable**(): `{ imported: boolean, segments: TableElement[] }`<br /> Gets the contents of the function table. * TableElement#**offset**: `ExpressionRef` * TableElement#**names**: `string[]` * Module#**setMemory**(initial: `number`, maximum: `number`, exportName: `string | null`, segments: `MemorySegment[]`, flags?: `number[]`, shared?: `boolean`): `void`<br /> Sets the memory. There's just one memory for now, using name `"0"`. Providing `exportName` also creates a memory export. * MemorySegment#**offset**: `ExpressionRef` * MemorySegment#**data**: `Uint8Array` * MemorySegment#**passive**: `boolean` * Module#**getNumMemorySegments**(): `number`<br /> Gets the number of memory segments within the module. * Module#**getMemorySegmentInfoByIndex**(index: `number`): `MemorySegmentInfo`<br /> Gets information about the memory segment at the specified index. * MemorySegmentInfo#**offset**: `number` * MemorySegmentInfo#**data**: `Uint8Array` * MemorySegmentInfo#**passive**: `boolean` * Module#**setStart**(start: `FunctionRef`): `void`<br /> Sets the start function. * Module#**getFeatures**(): `Features`<br /> Gets the WebAssembly features enabled for this module. Note that the return value may be a bitmask indicating multiple features. 
Possible feature flags are: * Features.**MVP**: `Features` * Features.**Atomics**: `Features` * Features.**BulkMemory**: `Features` * Features.**MutableGlobals**: `Features` * Features.**NontrappingFPToInt**: `Features` * Features.**SignExt**: `Features` * Features.**SIMD128**: `Features` * Features.**ExceptionHandling**: `Features` * Features.**TailCall**: `Features` * Features.**ReferenceTypes**: `Features` * Features.**Multivalue**: `Features` * Features.**All**: `Features` * Module#**setFeatures**(features: `Features`): `void`<br /> Sets the WebAssembly features enabled for this module. * Module#**addCustomSection**(name: `string`, contents: `Uint8Array`): `void`<br /> Adds a custom section to the binary. * Module#**autoDrop**(): `void`<br /> Enables automatic insertion of `drop` operations where needed. Lets you not worry about dropping when creating your code. * **getFunctionInfo**(ftype: `FunctionRef`): `FunctionInfo`<br /> Obtains information about a function. * FunctionInfo#**name**: `string` * FunctionInfo#**module**: `string | null` (if imported) * FunctionInfo#**base**: `string | null` (if imported) * FunctionInfo#**params**: `Type` * FunctionInfo#**results**: `Type` * FunctionInfo#**vars**: `Type` * FunctionInfo#**body**: `ExpressionRef` * **getGlobalInfo**(global: `GlobalRef`): `GlobalInfo`<br /> Obtains information about a global. * GlobalInfo#**name**: `string` * GlobalInfo#**module**: `string | null` (if imported) * GlobalInfo#**base**: `string | null` (if imported) * GlobalInfo#**type**: `Type` * GlobalInfo#**mutable**: `boolean` * GlobalInfo#**init**: `ExpressionRef` * **getExportInfo**(export_: `ExportRef`): `ExportInfo`<br /> Obtains information about an export. * ExportInfo#**kind**: `ExternalKind` * ExportInfo#**name**: `string` * ExportInfo#**value**: `string` Possible `ExternalKind` values are: * **ExternalFunction**: `ExternalKind` * **ExternalTable**: `ExternalKind` * **ExternalMemory**: `ExternalKind` * **ExternalGlobal**: `ExternalKind` * **ExternalEvent**: `ExternalKind` * **getEventInfo**(event: `EventRef`): `EventInfo`<br /> Obtains information about an event. * EventInfo#**name**: `string` * EventInfo#**module**: `string | null` (if imported) * EventInfo#**base**: `string | null` (if imported) * EventInfo#**attribute**: `number` * EventInfo#**params**: `Type` * EventInfo#**results**: `Type` * **getSideEffects**(expr: `ExpressionRef`, features: `FeatureFlags`): `SideEffects`<br /> Gets the side effects of the specified expression. * SideEffects.**None**: `SideEffects` * SideEffects.**Branches**: `SideEffects` * SideEffects.**Calls**: `SideEffects` * SideEffects.**ReadsLocal**: `SideEffects` * SideEffects.**WritesLocal**: `SideEffects` * SideEffects.**ReadsGlobal**: `SideEffects` * SideEffects.**WritesGlobal**: `SideEffects` * SideEffects.**ReadsMemory**: `SideEffects` * SideEffects.**WritesMemory**: `SideEffects` * SideEffects.**ImplicitTrap**: `SideEffects` * SideEffects.**IsAtomic**: `SideEffects` * SideEffects.**Throws**: `SideEffects` * SideEffects.**Any**: `SideEffects` ### Module validation * Module#**validate**(): `boolean`<br /> Validates the module. Returns `true` if valid, otherwise prints validation errors and returns `false`. ### Module optimization * Module#**optimize**(): `void`<br /> Optimizes the module using the default optimization passes. * Module#**optimizeFunction**(func: `FunctionRef | string`): `void`<br /> Optimizes a single function using the default optimization passes.
* Module#**runPasses**(passes: `string[]`): `void`<br /> Runs the specified passes on the module. * Module#**runPassesOnFunction**(func: `FunctionRef | string`, passes: `string[]`): `void`<br /> Runs the specified passes on a single function. * **getOptimizeLevel**(): `number`<br /> Gets the currently set optimize level. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **setOptimizeLevel**(level: `number`): `void`<br /> Sets the optimization level to use. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **getShrinkLevel**(): `number`<br /> Gets the currently set shrink level. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **setShrinkLevel**(level: `number`): `void`<br /> Sets the shrink level to use. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **getDebugInfo**(): `boolean`<br /> Gets whether generating debug information is currently enabled or not. * **setDebugInfo**(on: `boolean`): `void`<br /> Enables or disables debug information in emitted binaries. * **getLowMemoryUnused**(): `boolean`<br /> Gets whether the low 1K of memory can be considered unused when optimizing. * **setLowMemoryUnused**(on: `boolean`): `void`<br /> Enables or disables whether the low 1K of memory can be considered unused when optimizing. * **getPassArgument**(key: `string`): `string | null`<br /> Gets the value of the specified arbitrary pass argument. * **setPassArgument**(key: `string`, value: `string | null`): `void`<br /> Sets the value of the specified arbitrary pass argument. Removes the respective argument if `value` is `null`. * **clearPassArguments**(): `void`<br /> Clears all arbitrary pass arguments. * **getAlwaysInlineMaxSize**(): `number`<br /> Gets the function size at which we always inline. * **setAlwaysInlineMaxSize**(size: `number`): `void`<br /> Sets the function size at which we always inline. * **getFlexibleInlineMaxSize**(): `number`<br /> Gets the function size which we inline when functions are lightweight. * **setFlexibleInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when functions are lightweight. * **getOneCallerInlineMaxSize**(): `number`<br /> Gets the function size which we inline when there is only one caller. * **setOneCallerInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when there is only one caller. ### Module creation * Module#**emitBinary**(): `Uint8Array`<br /> Returns the module in binary format. * Module#**emitBinary**(sourceMapUrl: `string | null`): `BinaryWithSourceMap`<br /> Returns the module in binary format with its source map. If `sourceMapUrl` is `null`, source map generation is skipped. * BinaryWithSourceMap#**binary**: `Uint8Array` * BinaryWithSourceMap#**sourceMap**: `string | null` * Module#**emitText**(): `string`<br /> Returns the module in Binaryen's s-expression text format (not official stack-style text format). * Module#**emitAsmjs**(): `string`<br /> Returns the [asm.js](http://asmjs.org/) representation of the module. * Module#**dispose**(): `void`<br /> Releases the resources held by the module once it isn't needed anymore. ### Expression construction #### [Control flow](http://webassembly.org/docs/semantics/#control-constructs-and-instructions) * Module#**block**(label: `string | null`, children: `ExpressionRef[]`, resultType?: `Type`): `ExpressionRef`<br /> Creates a block. `resultType` defaults to `none`. 
* Module#**if**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse?: `ExpressionRef`): `ExpressionRef`<br /> Creates an if or if/else combination. * Module#**loop**(label: `string | null`, body: `ExpressionRef`): `ExpressionRef`<br /> Creates a loop. * Module#**br**(label: `string`, condition?: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a branch (br) to a label. * Module#**switch**(labels: `string[]`, defaultLabel: `string`, condition: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a switch (br_table). * Module#**nop**(): `ExpressionRef`<br /> Creates a no-operation (nop) instruction. * Module#**return**(value?: `ExpressionRef`): `ExpressionRef` Creates a return. * Module#**unreachable**(): `ExpressionRef`<br /> Creates an [unreachable](http://webassembly.org/docs/semantics/#unreachable) instruction that will always trap. * Module#**drop**(value: `ExpressionRef`): `ExpressionRef`<br /> Creates a [drop](http://webassembly.org/docs/semantics/#type-parametric-operators) of a value. * Module#**select**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse: `ExpressionRef`, type?: `Type`): `ExpressionRef`<br /> Creates a [select](http://webassembly.org/docs/semantics/#type-parametric-operators) of one of two values. #### [Variable accesses](http://webassembly.org/docs/semantics/#local-variables) * Module#**local.get**(index: `number`, type: `Type`): `ExpressionRef`<br /> Creates a local.get for the local at the specified index. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**local.set**(index: `number`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a local.set for the local at the specified index. * Module#**local.tee**(index: `number`, value: `ExpressionRef`, type: `Type`): `ExpressionRef`<br /> Creates a local.tee for the local at the specified index. A tee differs from a set in that the value remains on the stack. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**global.get**(name: `string`, type: `Type`): `ExpressionRef`<br /> Creates a global.get for the global with the specified name. Note that we must specify the type here as we may not have created the global being accessed yet. * Module#**global.set**(name: `string`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a global.set for the global with the specified name. 
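To tie the variable-access builders together, here is a minimal sketch that combines them with the module-construction calls documented earlier in this README (the `counter` global and `bump` function are illustrative names, not part of the API):

```js
var binaryen = require("binaryen");
var m = new binaryen.Module();

// A mutable i32 global, initialized to 0.
m.addGlobal("counter", binaryen.i32, true, m.i32.const(0));

// bump(delta: i32): i32 reads the global, adds the parameter (local 0),
// writes the sum back, and yields the new value as the block result.
m.addFunction("bump",
  binaryen.createType([ binaryen.i32 ]), binaryen.i32, [],
  m.block(null, [
    m.global.set("counter", m.i32.add(
      m.global.get("counter", binaryen.i32),
      m.local.get(0, binaryen.i32) // type required: the local may not exist yet
    )),
    m.global.get("counter", binaryen.i32)
  ], binaryen.i32)
);
m.addFunctionExport("bump", "bump");

if (!m.validate()) throw new Error("validation error");
```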
#### [Integer operations](http://webassembly.org/docs/semantics/#32-bit-integer-operators) * Module#i32.**const**(value: `number`): `ExpressionRef` * Module#i32.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i64.**const**(low: `number`, high: `number`): `ExpressionRef` * Module#i64.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` 
* Module#i64.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Floating point operations](http://webassembly.org/docs/semantics/#floating-point-operators) * Module#f32.**const**(value: `number`): `ExpressionRef` * Module#f32.**const_bits**(value: `number`): `ExpressionRef` * Module#f32.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#f64.**const**(value: `number`): `ExpressionRef` * Module#f64.**const_bits**(value: `number`): `ExpressionRef` * Module#f64.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**ceil**(value: `ExpressionRef`): `ExpressionRef` * 
Module#f64.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Datatype conversions](http://webassembly.org/docs/semantics/#datatype-conversions-truncations-reinterpretations-promotions-and-demotions) * Module#i32.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**wrap**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**demote**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**promote**(value: `ExpressionRef`): `ExpressionRef` #### [Function calls](http://webassembly.org/docs/semantics/#calls) * Module#**call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef` Creates a call to a function. 
Note that we must specify the return type here as we may not have created the function being called yet. * Module#**return_call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef`<br /> Like **call**, but creates a tail-call. 🦄 * Module#**call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Similar to **call**, but calls indirectly, i.e., via a function pointer, so an expression replaces the name as the called value. * Module#**return_call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Like **call_indirect**, but creates a tail-call. 🦄 #### [Linear memory accesses](http://webassembly.org/docs/semantics/#linear-memory-accesses) * Module#i32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> > * Module#i64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store32**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Host operations](http://webassembly.org/docs/semantics/#resizing) * Module#**memory.size**(): `ExpressionRef` * Module#**memory.grow**(value: `number`): `ExpressionRef` #### [Vector 
operations](https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md) 🦄 * Module#v128.**const**(bytes: `Uint8Array`): `ExpressionRef` * Module#v128.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#v128.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#v128.**not**(value: `ExpressionRef`): `ExpressionRef` * Module#v128.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**andnot**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**bitselect**(left: `ExpressionRef`, right: `ExpressionRef`, cond: `ExpressionRef`): `ExpressionRef` > * Module#i8x16.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_u**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i16x8.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_u**(value: 
`ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**dot_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): 
`ExpressionRef` * Module#i64x2.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#f32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**lt**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#v8x16.**shuffle**(left: `ExpressionRef`, right: `ExpressionRef`, mask: `Uint8Array`): `ExpressionRef` * Module#v8x16.**swizzle**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v8x16.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v16x8.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v32x4.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v64x2.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` #### [Atomic memory accesses](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#atomic-memory-accesses) 🦄 * Module#i32.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load32_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store32**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` 
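All of these builder methods return an `ExpressionRef` that nests directly into other expressions. As a rough, hedged sketch (not taken from the Binaryen docs), the snippet below uses the atomic load/store accessors listed above to build a tiny exported function that bumps a 32-bit counter at address 0; it assumes the feature-flag and memory setup calls (`setFeatures`, `setMemory`) behave as in recent binaryen.js releases, and a real threaded module would also declare its memory as shared.

```js
const binaryen = require("binaryen");

const mod = new binaryen.Module();
// Atomic accesses are feature-flagged (🦄); flag name assumed here.
mod.setFeatures(binaryen.Features.Atomics);
// One page of linear memory (a real threaded module would mark it shared).
mod.setMemory(1, 1);

// counter = counter + 1, using i32.atomic.load / i32.atomic.store from the list above.
const body = mod.i32.atomic.store(
  /* offset */ 0,
  /* ptr    */ mod.i32.const(0),
  mod.i32.add(
    mod.i32.atomic.load(/* offset */ 0, /* ptr */ mod.i32.const(0)),
    mod.i32.const(1)
  )
);

mod.addFunction("bump", binaryen.none, binaryen.none, [], body);
mod.addFunctionExport("bump", "bump");
console.log(mod.emitText());
```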
#### [Atomic read-modify-write operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#read-modify-write) 🦄 * Module#i32.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.add**(offset: 
`number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` #### [Atomic wait and notify operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#wait-and-notify-operators) 🦄 * Module#i32.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#**atomic.notify**(ptr: `ExpressionRef`, notifyCount: `ExpressionRef`): `ExpressionRef` * Module#**atomic.fence**(): `ExpressionRef` #### [Sign extension operations](https://github.com/WebAssembly/sign-extension-ops/blob/master/proposals/sign-extension-ops/Overview.md) 🦄 * Module#i32.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend32_s**(value: `ExpressionRef`): `ExpressionRef` #### [Multi-value 
operations](https://github.com/WebAssembly/multi-value/blob/master/proposals/multi-value/Overview.md) 🦄 Note that these are pseudo instructions enabling Binaryen to reason about multiple values on the stack. * Module#**push**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**pop**(): `ExpressionRef` * Module#i64.**pop**(): `ExpressionRef` * Module#f32.**pop**(): `ExpressionRef` * Module#f64.**pop**(): `ExpressionRef` * Module#v128.**pop**(): `ExpressionRef` * Module#funcref.**pop**(): `ExpressionRef` * Module#anyref.**pop**(): `ExpressionRef` * Module#nullref.**pop**(): `ExpressionRef` * Module#exnref.**pop**(): `ExpressionRef` * Module#tuple.**make**(elements: `ExpressionRef[]`): `ExpressionRef` * Module#tuple.**extract**(tuple: `ExpressionRef`, index: `number`): `ExpressionRef` #### [Exception handling operations](https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md) 🦄 * Module#**try**(body: `ExpressionRef`, catchBody: `ExpressionRef`): `ExpressionRef` * Module#**throw**(event: `string`, operands: `ExpressionRef[]`): `ExpressionRef` * Module#**rethrow**(exnref: `ExpressionRef`): `ExpressionRef` * Module#**br_on_exn**(label: `string`, event: `string`, exnref: `ExpressionRef`): `ExpressionRef` > * Module#**addEvent**(name: `string`, attribute: `number`, params: `Type`, results: `Type`): `Event` * Module#**getEvent**(name: `string`): `Event` * Module#**removeEvent**(name: `string`): `void` * Module#**addEventImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, attribute: `number`, params: `Type`, results: `Type`): `void` * Module#**addEventExport**(internalName: `string`, externalName: `string`): `ExportRef` #### [Reference types operations](https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md) 🦄 * Module#ref.**null**(): `ExpressionRef` * Module#ref.**is_null**(value: `ExpressionRef`): `ExpressionRef` * Module#ref.**func**(name: `string`): `ExpressionRef` ### Expression manipulation * **getExpressionId**(expr: `ExpressionRef`): `ExpressionId`<br /> Gets the id (kind) of the specified expression.
Possible values are: * **InvalidId**: `ExpressionId` * **BlockId**: `ExpressionId` * **IfId**: `ExpressionId` * **LoopId**: `ExpressionId` * **BreakId**: `ExpressionId` * **SwitchId**: `ExpressionId` * **CallId**: `ExpressionId` * **CallIndirectId**: `ExpressionId` * **LocalGetId**: `ExpressionId` * **LocalSetId**: `ExpressionId` * **GlobalGetId**: `ExpressionId` * **GlobalSetId**: `ExpressionId` * **LoadId**: `ExpressionId` * **StoreId**: `ExpressionId` * **ConstId**: `ExpressionId` * **UnaryId**: `ExpressionId` * **BinaryId**: `ExpressionId` * **SelectId**: `ExpressionId` * **DropId**: `ExpressionId` * **ReturnId**: `ExpressionId` * **HostId**: `ExpressionId` * **NopId**: `ExpressionId` * **UnreachableId**: `ExpressionId` * **AtomicCmpxchgId**: `ExpressionId` * **AtomicRMWId**: `ExpressionId` * **AtomicWaitId**: `ExpressionId` * **AtomicNotifyId**: `ExpressionId` * **AtomicFenceId**: `ExpressionId` * **SIMDExtractId**: `ExpressionId` * **SIMDReplaceId**: `ExpressionId` * **SIMDShuffleId**: `ExpressionId` * **SIMDTernaryId**: `ExpressionId` * **SIMDShiftId**: `ExpressionId` * **SIMDLoadId**: `ExpressionId` * **MemoryInitId**: `ExpressionId` * **DataDropId**: `ExpressionId` * **MemoryCopyId**: `ExpressionId` * **MemoryFillId**: `ExpressionId` * **RefNullId**: `ExpressionId` * **RefIsNullId**: `ExpressionId` * **RefFuncId**: `ExpressionId` * **TryId**: `ExpressionId` * **ThrowId**: `ExpressionId` * **RethrowId**: `ExpressionId` * **BrOnExnId**: `ExpressionId` * **PushId**: `ExpressionId` * **PopId**: `ExpressionId` * **getExpressionType**(expr: `ExpressionRef`): `Type`<br /> Gets the type of the specified expression. * **getExpressionInfo**(expr: `ExpressionRef`): `ExpressionInfo`<br /> Obtains information about an expression, always including: * Info#**id**: `ExpressionId` * Info#**type**: `Type` Additional properties depend on the expression's `id` and are usually equivalent to the respective parameters when creating such an expression: * BlockInfo#**name**: `string` * BlockInfo#**children**: `ExpressionRef[]` > * IfInfo#**condition**: `ExpressionRef` * IfInfo#**ifTrue**: `ExpressionRef` * IfInfo#**ifFalse**: `ExpressionRef | null` > * LoopInfo#**name**: `string` * LoopInfo#**body**: `ExpressionRef` > * BreakInfo#**name**: `string` * BreakInfo#**condition**: `ExpressionRef | null` * BreakInfo#**value**: `ExpressionRef | null` > * SwitchInfo#**names**: `string[]` * SwitchInfo#**defaultName**: `string | null` * SwitchInfo#**condition**: `ExpressionRef` * SwitchInfo#**value**: `ExpressionRef | null` > * CallInfo#**target**: `string` * CallInfo#**operands**: `ExpressionRef[]` > * CallImportInfo#**target**: `string` * CallImportInfo#**operands**: `ExpressionRef[]` > * CallIndirectInfo#**target**: `ExpressionRef` * CallIndirectInfo#**operands**: `ExpressionRef[]` > * LocalGetInfo#**index**: `number` > * LocalSetInfo#**isTee**: `boolean` * LocalSetInfo#**index**: `number` * LocalSetInfo#**value**: `ExpressionRef` > * GlobalGetInfo#**name**: `string` > * GlobalSetInfo#**name**: `string` * GlobalSetInfo#**value**: `ExpressionRef` > * LoadInfo#**isAtomic**: `boolean` * LoadInfo#**isSigned**: `boolean` * LoadInfo#**offset**: `number` * LoadInfo#**bytes**: `number` * LoadInfo#**align**: `number` * LoadInfo#**ptr**: `ExpressionRef` > * StoreInfo#**isAtomic**: `boolean` * StoreInfo#**offset**: `number` * StoreInfo#**bytes**: `number` * StoreInfo#**align**: `number` * StoreInfo#**ptr**: `ExpressionRef` * StoreInfo#**value**: `ExpressionRef` > * ConstInfo#**value**: `number | { low: number, high: number 
}` > * UnaryInfo#**op**: `number` * UnaryInfo#**value**: `ExpressionRef` > * BinaryInfo#**op**: `number` * BinaryInfo#**left**: `ExpressionRef` * BinaryInfo#**right**: `ExpressionRef` > * SelectInfo#**ifTrue**: `ExpressionRef` * SelectInfo#**ifFalse**: `ExpressionRef` * SelectInfo#**condition**: `ExpressionRef` > * DropInfo#**value**: `ExpressionRef` > * ReturnInfo#**value**: `ExpressionRef | null` > * NopInfo > * UnreachableInfo > * HostInfo#**op**: `number` * HostInfo#**nameOperand**: `string | null` * HostInfo#**operands**: `ExpressionRef[]` > * AtomicRMWInfo#**op**: `number` * AtomicRMWInfo#**bytes**: `number` * AtomicRMWInfo#**offset**: `number` * AtomicRMWInfo#**ptr**: `ExpressionRef` * AtomicRMWInfo#**value**: `ExpressionRef` > * AtomicCmpxchgInfo#**bytes**: `number` * AtomicCmpxchgInfo#**offset**: `number` * AtomicCmpxchgInfo#**ptr**: `ExpressionRef` * AtomicCmpxchgInfo#**expected**: `ExpressionRef` * AtomicCmpxchgInfo#**replacement**: `ExpressionRef` > * AtomicWaitInfo#**ptr**: `ExpressionRef` * AtomicWaitInfo#**expected**: `ExpressionRef` * AtomicWaitInfo#**timeout**: `ExpressionRef` * AtomicWaitInfo#**expectedType**: `Type` > * AtomicNotifyInfo#**ptr**: `ExpressionRef` * AtomicNotifyInfo#**notifyCount**: `ExpressionRef` > * AtomicFenceInfo > * SIMDExtractInfo#**op**: `Op` * SIMDExtractInfo#**vec**: `ExpressionRef` * SIMDExtractInfo#**index**: `ExpressionRef` > * SIMDReplaceInfo#**op**: `Op` * SIMDReplaceInfo#**vec**: `ExpressionRef` * SIMDReplaceInfo#**index**: `ExpressionRef` * SIMDReplaceInfo#**value**: `ExpressionRef` > * SIMDShuffleInfo#**left**: `ExpressionRef` * SIMDShuffleInfo#**right**: `ExpressionRef` * SIMDShuffleInfo#**mask**: `Uint8Array` > * SIMDTernaryInfo#**op**: `Op` * SIMDTernaryInfo#**a**: `ExpressionRef` * SIMDTernaryInfo#**b**: `ExpressionRef` * SIMDTernaryInfo#**c**: `ExpressionRef` > * SIMDShiftInfo#**op**: `Op` * SIMDShiftInfo#**vec**: `ExpressionRef` * SIMDShiftInfo#**shift**: `ExpressionRef` > * SIMDLoadInfo#**op**: `Op` * SIMDLoadInfo#**offset**: `number` * SIMDLoadInfo#**align**: `number` * SIMDLoadInfo#**ptr**: `ExpressionRef` > * MemoryInitInfo#**segment**: `number` * MemoryInitInfo#**dest**: `ExpressionRef` * MemoryInitInfo#**offset**: `ExpressionRef` * MemoryInitInfo#**size**: `ExpressionRef` > * MemoryDropInfo#**segment**: `number` > * MemoryCopyInfo#**dest**: `ExpressionRef` * MemoryCopyInfo#**source**: `ExpressionRef` * MemoryCopyInfo#**size**: `ExpressionRef` > * MemoryFillInfo#**dest**: `ExpressionRef` * MemoryFillInfo#**value**: `ExpressionRef` * MemoryFillInfo#**size**: `ExpressionRef` > * TryInfo#**body**: `ExpressionRef` * TryInfo#**catchBody**: `ExpressionRef` > * RefNullInfo > * RefIsNullInfo#**value**: `ExpressionRef` > * RefFuncInfo#**func**: `string` > * ThrowInfo#**event**: `string` * ThrowInfo#**operands**: `ExpressionRef[]` > * RethrowInfo#**exnref**: `ExpressionRef` > * BrOnExnInfo#**name**: `string` * BrOnExnInfo#**event**: `string` * BrOnExnInfo#**exnref**: `ExpressionRef` > * PopInfo > * PushInfo#**value**: `ExpressionRef` * **emitText**(expression: `ExpressionRef`): `string`<br /> Emits the expression in Binaryen's s-expression text format (not official stack-style text format). * **copyExpression**(expression: `ExpressionRef`): `ExpressionRef`<br /> Creates a deep copy of an expression. ### Relooper * new **Relooper**()<br /> Constructs a relooper instance. This lets you provide an arbitrary CFG, and the relooper will structure it for WebAssembly. 
* Relooper#**addBlock**(code: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block to the CFG, containing the provided code as its body. * Relooper#**addBranch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, condition: `ExpressionRef`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block to another block, with a condition (or nothing, if this is the default branch to take from the origin - each block must have one such branch), and optional code to execute on the branch (useful for phis). * Relooper#**addBlockWithSwitch**(code: `ExpressionRef`, condition: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block, which ends with a switch/br_table, with provided code and condition (that determines where we go in the switch). * Relooper#**addBranchForSwitch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, indexes: `number[]`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block ending in a switch, to another block, using an array of indexes that determine where to go, and optional code to execute on the branch. * Relooper#**renderAndDispose**(entry: `RelooperBlockRef`, labelHelper: `number`, module: `Module`): `ExpressionRef`<br /> Renders and cleans up the Relooper instance. Call this after you have created all the blocks and branches, giving it the entry block (where control flow begins), a label helper variable (an index of a local we can use, necessary for irreducible control flow), and the module. This returns an expression - normal WebAssembly code - that you can use normally anywhere. ### Source maps * Module#**addDebugInfoFileName**(filename: `string`): `number`<br /> Adds a debug info file name to the module and returns its index. * Module#**getDebugInfoFileName**(index: `number`): `string | null` <br /> Gets the name of the debug info file at the specified index. * Module#**setDebugLocation**(func: `FunctionRef`, expr: `ExpressionRef`, fileIndex: `number`, lineNumber: `number`, columnNumber: `number`): `void`<br /> Sets the debug location of the specified `ExpressionRef` within the specified `FunctionRef`. ### Debugging * Module#**interpret**(): `void`<br /> Runs the module in the interpreter, calling the start function. # rechoir [![Build Status](https://secure.travis-ci.org/tkellen/js-rechoir.png)](http://travis-ci.org/tkellen/js-rechoir) > Require any supported file as a node module. [![NPM](https://nodei.co/npm/rechoir.png)](https://nodei.co/npm/rechoir/) ## What is it? This module, in conjunction with [interpret]-like objects, can register any file type the npm ecosystem has a module loader for. This library is a dependency of [Liftoff]. ## API ### prepare(config, filepath, requireFrom) Look for a module loader associated with the provided file and attempt to require it. If necessary, run any setup required to inject it into [require.extensions](http://nodejs.org/api/globals.html#globals_require_extensions). `config` An [interpret]-like configuration object. `filepath` A file whose type you'd like to register a module loader for. `requireFrom` An optional path to start searching for the module required to load the requested file. Defaults to the directory of `filepath`. If calling this method is successful (aka: it doesn't throw), you can now require files of the type you requested natively. An error with a `failures` property will be thrown if the module loader(s) configured for a given extension cannot be registered. If a loader is already registered, this will simply return `true`.
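As a minimal sketch of that error contract (the file path is hypothetical, and the per-failure fields `moduleName`/`error` read below are assumptions for illustration rather than documented API):

```js
const config = require('interpret').extensions;
const rechoir = require('rechoir');

try {
  // Hypothetical path; registers a loader for .coffee files
  // (or returns true if one is already registered).
  rechoir.prepare(config, './tasks/build.coffee');
} catch (err) {
  // Each entry in err.failures describes one loader that could not be
  // registered; the fields accessed here are assumed for illustration.
  for (const failure of err.failures) {
    console.error(failure.moduleName, failure.error && failure.error.message);
  }
}
```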
**Note:** While rechoir will automatically load and register transpilers like `coffee-script`, you must provide a local installation. The transpilers are **not** bundled with this module. #### Usage ```js const config = require('interpret').extensions; const rechoir = require('rechoir'); rechoir.prepare(config, './test/fixtures/test.coffee'); rechoir.prepare(config, './test/fixtures/test.csv'); rechoir.prepare(config, './test/fixtures/test.toml'); console.log(require('./test/fixtures/test.coffee')); console.log(require('./test/fixtures/test.csv')); console.log(require('./test/fixtures/test.toml')); ``` [interpret]: http://github.com/tkellen/js-interpret [Liftoff]: http://github.com/tkellen/js-liftoff ![](cow.png) Moo! ==== Moo is a highly-optimised tokenizer/lexer generator. Use it to tokenize your strings, before parsing 'em with a parser like [nearley](https://github.com/hardmath123/nearley) or whatever else you're into. * [Fast](#is-it-fast) * [Convenient](#usage) * uses [Regular Expressions](#on-regular-expressions) * tracks [Line Numbers](#line-numbers) * handles [Keywords](#keywords) * supports [States](#states) * custom [Errors](#errors) * is even [Iterable](#iteration) * has no dependencies * 4KB minified + gzipped * Moo! Is it fast? ----------- Yup! Flying-cows-and-singed-steak fast. Moo is the fastest JS tokenizer around. It's **~2–10x** faster than most other tokenizers; it's a **couple orders of magnitude** faster than some of the slower ones. Define your tokens **using regular expressions**. Moo will compile 'em down to a **single RegExp for performance**. It uses the new ES6 **sticky flag** where possible to make things faster; otherwise it falls back to an almost-as-efficient workaround. (For more than you ever wanted to know about this, read [adventures in the land of substrings and RegExps](http://mrale.ph/blog/2016/11/23/making-less-dart-faster.html).) You _might_ be able to go faster still by writing your lexer by hand rather than using RegExps, but that's icky. Oh, and it [avoids parsing RegExps by itself](https://hackernoon.com/the-madness-of-parsing-real-world-javascript-regexps-d9ee336df983#.2l8qu3l76). Because that would be horrible. Usage ----- First, you need to do the needful: `$ npm install moo`, or whatever will ship this code to your computer. Alternatively, grab the `moo.js` file by itself and slap it into your web page via a `<script>` tag; moo is completely standalone. Then you can start roasting your very own lexer/tokenizer: ```js const moo = require('moo') let lexer = moo.compile({ WS: /[ \t]+/, comment: /\/\/.*?$/, number: /0|[1-9][0-9]*/, string: /"(?:\\["\\]|[^\n"\\])*"/, lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], NL: { match: /\n/, lineBreaks: true }, }) ``` And now throw some text at it: ```js lexer.reset('while (10) cows\nmoo') lexer.next() // -> { type: 'keyword', value: 'while' } lexer.next() // -> { type: 'WS', value: ' ' } lexer.next() // -> { type: 'lparen', value: '(' } lexer.next() // -> { type: 'number', value: '10' } // ... ``` When you reach the end of Moo's internal buffer, next() will return `undefined`. You can always `reset()` it and feed it more data when that happens. On Regular Expressions ---------------------- RegExps are nifty for making tokenizers, but they can be a bit of a pain. Here are some things to be aware of: * You often want to use **non-greedy quantifiers**: e.g. `*?` instead of `*`. 
Otherwise your tokens will be longer than you expect: ```js let lexer = moo.compile({ string: /".*"/, // greedy quantifier * // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo" "bar' } ``` Better: ```js let lexer = moo.compile({ string: /".*?"/, // non-greedy quantifier *? // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo' } lexer.next() // -> { type: 'space', value: ' ' } lexer.next() // -> { type: 'string', value: 'bar' } ``` * The **order of your rules** matters. Earlier ones will take precedence. ```js moo.compile({ identifier: /[a-z0-9]+/, number: /[0-9]+/, }).reset('42').next() // -> { type: 'identifier', value: '42' } moo.compile({ number: /[0-9]+/, identifier: /[a-z0-9]+/, }).reset('42').next() // -> { type: 'number', value: '42' } ``` * Moo uses **multiline RegExps**. This has a few quirks: for example, the **dot `/./` doesn't include newlines**. Use `[^]` instead if you want to match newlines too. * Since an excluding character ranges like `/[^ ]/` (which matches anything but a space) _will_ include newlines, you have to be careful not to include them by accident! In particular, the whitespace metacharacter `\s` includes newlines. Line Numbers ------------ Moo tracks detailed information about the input for you. It will track line numbers, as long as you **apply the `lineBreaks: true` option to any rules which might contain newlines**. Moo will try to warn you if you forget to do this. Note that this is `false` by default, for performance reasons: counting the number of lines in a matched token has a small cost. For optimal performance, only match newlines inside a dedicated token: ```js newline: {match: '\n', lineBreaks: true}, ``` ### Token Info ### Token objects (returned from `next()`) have the following attributes: * **`type`**: the name of the group, as passed to compile. * **`text`**: the string that was matched. * **`value`**: the string that was matched, transformed by your `value` function (if any). * **`offset`**: the number of bytes from the start of the buffer where the match starts. * **`lineBreaks`**: the number of line breaks found in the match. (Always zero if this rule has `lineBreaks: false`.) * **`line`**: the line number of the beginning of the match, starting from 1. * **`col`**: the column where the match begins, starting from 1. ### Value vs. Text ### The `value` is the same as the `text`, unless you provide a [value transform](#transform). ```js const moo = require('moo') const lexer = moo.compile({ ws: /[ \t]+/, string: {match: /"(?:\\["\\]|[^\n"\\])*"/, value: s => s.slice(1, -1)}, }) lexer.reset('"test"') lexer.next() /* { value: 'test', text: '"test"', ... } */ ``` ### Reset ### Calling `reset()` on your lexer will empty its internal buffer, and set the line, column, and offset counts back to their initial value. If you don't want this, you can `save()` the state, and later pass it as the second argument to `reset()` to explicitly control the internal state of the lexer. ```js    lexer.reset('some line\n') let info = lexer.save() // -> { line: 10 } lexer.next() // -> { line: 10 } lexer.next() // -> { line: 11 } // ... lexer.reset('a different line\n', info) lexer.next() // -> { line: 10 } ``` Keywords -------- Moo makes it convenient to define literals. ```js moo.compile({ lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], }) ``` It'll automatically compile them into regular expressions, escaping them where necessary. 
**Keywords** should be written using the `keywords` transform. ```js moo.compile({ IDEN: {match: /[a-zA-Z]+/, type: moo.keywords({ KW: ['while', 'if', 'else', 'moo', 'cows'], })}, SPACE: {match: /\s+/, lineBreaks: true}, }) ``` ### Why? ### You need to do this to ensure the **longest match** principle applies, even in edge cases. Imagine trying to parse the input `className` with the following rules: ```js keyword: ['class'], identifier: /[a-zA-Z]+/, ``` You'll get _two_ tokens — `['class', 'Name']` -- which is _not_ what you want! If you swap the order of the rules, you'll fix this example; but now you'll lex `class` wrong (as an `identifier`). The keywords helper checks matches against the list of keywords; if any of them match, it uses the type `'keyword'` instead of `'identifier'` (for this example). ### Keyword Types ### Keywords can also have **individual types**. ```js let lexer = moo.compile({ name: {match: /[a-zA-Z]+/, type: moo.keywords({ 'kw-class': 'class', 'kw-def': 'def', 'kw-if': 'if', })}, // ... }) lexer.reset('def foo') lexer.next() // -> { type: 'kw-def', value: 'def' } lexer.next() // space lexer.next() // -> { type: 'name', value: 'foo' } ``` You can use [itt](https://github.com/nathan/itt)'s iterator adapters to make constructing keyword objects easier: ```js itt(['class', 'def', 'if']) .map(k => ['kw-' + k, k]) .toObject() ``` States ------ Moo allows you to define multiple lexer **states**. Each state defines its own separate set of token rules. Your lexer will start off in the first state given to `moo.states({})`. Rules can be annotated with `next`, `push`, and `pop`, to change the current state after that token is matched. A "stack" of past states is kept, which is used by `push` and `pop`. * **`next: 'bar'`** moves to the state named `bar`. (The stack is not changed.) * **`push: 'bar'`** moves to the state named `bar`, and pushes the old state onto the stack. * **`pop: 1`** removes one state from the top of the stack, and moves to that state. (Only `1` is supported.) Only rules from the current state can be matched. You need to copy your rule into all the states you want it to be matched in. For example, to tokenize JS-style string interpolation such as `a${{c: d}}e`, you might use: ```js let lexer = moo.states({ main: { strstart: {match: '`', push: 'lit'}, ident: /\w+/, lbrace: {match: '{', push: 'main'}, rbrace: {match: '}', pop: true}, colon: ':', space: {match: /\s+/, lineBreaks: true}, }, lit: { interp: {match: '${', push: 'main'}, escape: /\\./, strend: {match: '`', pop: true}, const: {match: /(?:[^$`]|\$(?!\{))+/, lineBreaks: true}, }, }) // <= `a${{c: d}}e` // => strstart const interp lbrace ident colon space ident rbrace rbrace const strend ``` The `rbrace` rule is annotated with `pop`, so it moves from the `main` state into either `lit` or `main`, depending on the stack. Errors ------ If none of your rules match, Moo will throw an Error; since it doesn't know what else to do. If you prefer, you can have moo return an error token instead of throwing an exception. The error token will contain the whole of the rest of the buffer. ```js moo.compile({ // ... myError: moo.error, }) moo.reset('invalid') moo.next() // -> { type: 'myError', value: 'invalid', text: 'invalid', offset: 0, lineBreaks: 0, line: 1, col: 1 } moo.next() // -> undefined ``` You can have a token type that both matches tokens _and_ contains error values. ```js moo.compile({ // ... 
myError: {match: /[\$?`]/, error: true}, }) ``` ### Formatting errors ### If you want to throw an error from your parser, you might find `formatError` helpful. Call it with the offending token: ```js throw new Error(lexer.formatError(token, "invalid syntax")) ``` It returns a string with a pretty error message. ``` Error: invalid syntax at line 2 col 15: totally valid `syntax` ^ ``` Iteration --------- Iterators: we got 'em. ```js for (let here of lexer) { // here = { type: 'number', value: '123', ... } } ``` Create an array of tokens. ```js let tokens = Array.from(lexer); ``` Use [itt](https://github.com/nathan/itt)'s iteration tools with Moo. ```js for (let [here, next] = itt(lexer).lookahead()) { // pass a number if you need more tokens // enjoy! } ``` Transform --------- Moo doesn't allow capturing groups, but you can supply a transform function, `value()`, which will be called on the value before storing it in the Token object. ```js moo.compile({ STRING: [ {match: /"""[^]*?"""/, lineBreaks: true, value: x => x.slice(3, -3)}, {match: /"(?:\\["\\rn]|[^"\\])*?"/, lineBreaks: true, value: x => x.slice(1, -1)}, {match: /'(?:\\['\\rn]|[^'\\])*?'/, lineBreaks: true, value: x => x.slice(1, -1)}, ], // ... }) ``` Contributing ------------ Do check the [FAQ](https://github.com/tjvr/moo/issues?q=label%3Aquestion). Before submitting an issue, [remember...](https://github.com/tjvr/moo/blob/master/.github/CONTRIBUTING.md) # path-parse [![Build Status](https://travis-ci.org/jbgutierrez/path-parse.svg?branch=master)](https://travis-ci.org/jbgutierrez/path-parse) > Node.js [`path.parse(pathString)`](https://nodejs.org/api/path.html#path_path_parse_pathstring) [ponyfill](https://ponyfill.com). ## Install ``` $ npm install --save path-parse ``` ## Usage ```js var pathParse = require('path-parse'); pathParse('/home/user/dir/file.txt'); //=> { // root : "/", // dir : "/home/user/dir", // base : "file.txt", // ext : ".txt", // name : "file" // } ``` ## API See [`path.parse(pathString)`](https://nodejs.org/api/path.html#path_path_parse_pathstring) docs. ### pathParse(path) ### pathParse.posix(path) The Posix specific version. ### pathParse.win32(path) The Windows specific version. ## License MIT © [Javier Blanco](http://jbgutierrez.info) # node-tar [![Build Status](https://travis-ci.org/npm/node-tar.svg?branch=master)](https://travis-ci.org/npm/node-tar) [Fast](./benchmarks) and full-featured Tar for Node.js The API is designed to mimic the behavior of `tar(1)` on unix systems. If you are familiar with how tar works, most of this will hopefully be straightforward for you. If not, then hopefully this module can teach you useful unix skills that may come in handy someday :) ## Background A "tar file" or "tarball" is an archive of file system entries (directories, files, links, etc.) The name comes from "tape archive". If you run `man tar` on almost any Unix command line, you'll learn quite a bit about what it can do, and its history. Tar has 5 main top-level commands: * `c` Create an archive * `r` Replace entries within an archive * `u` Update entries within an archive (ie, replace if they're newer) * `t` List out the contents of an archive * `x` Extract an archive to disk The other flags and options modify how this top level function works. ## High-Level API These 5 functions are the high-level API. All of them have a single-character name (for unix nerds familiar with `tar(1)`) as well as a long name (for everyone else). 
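As a quick illustration (a sketch, not from the official docs), both spellings refer to the same functions:

```js
const tar = require('tar');

// Each top-level command is exported under both its short and long name.
console.log(tar.c === tar.create);   // create an archive
console.log(tar.r === tar.replace);  // replace entries within an archive
console.log(tar.u === tar.update);   // update entries if they're newer
console.log(tar.t === tar.list);     // list archive contents
console.log(tar.x === tar.extract);  // extract an archive to disk
// Each of the five comparisons logs `true`.
```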
All the high-level functions take the following arguments, all three of which are optional and may be omitted.

1. `options` - An optional object specifying various options
2. `paths` - An array of paths to add or extract
3. `callback` - Called when the command is completed, if async. (If sync or no file specified, providing a callback throws a `TypeError`.)

If the command is sync (ie, if `options.sync=true`), then the callback is not allowed, since the action will be completed immediately.

If a `file` argument is specified, and the command is async, then a `Promise` is returned. In this case, if async, a callback may be provided which is called when the command is completed.

If a `file` option is not specified, then a stream is returned. For `create`, this is a readable stream of the generated archive. For `list` and `extract` this is a writable stream that an archive should be written into. If a file is not specified, then a callback is not allowed, because you're already getting a stream to work with.

`replace` and `update` only work on existing archives, and so require a `file` argument.

Sync commands without a file argument return a stream that acts on its input immediately in the same tick. For readable streams, this means that all of the data is immediately available by calling `stream.read()`. For writable streams, it will be acted upon as soon as it is provided, but this can be at any time.

### Warnings and Errors

Tar emits warnings and errors for recoverable and unrecoverable situations, respectively. In many cases, a warning only affects a single entry in an archive, or is simply informing you that it's modifying an entry to comply with the settings provided.

Unrecoverable situations will always raise an error (ie, emit `'error'` on streaming actions, throw for non-streaming sync actions, reject the returned Promise for non-streaming async operations, or call a provided callback with an `Error` as the first argument). Recoverable situations will raise an error only if `strict: true` is set in the options.

Respond to (recoverable) warnings by listening to the `warn` event. Handlers receive 3 arguments:

- `code` String. One of the error codes below. This may not match `data.code`, which preserves the original error code from fs and zlib.
- `message` String. More details about the error.
- `data` Metadata about the error. An `Error` object for errors raised by fs and zlib. All fields are attached to errors raised by tar. Typically contains the following fields, as relevant:
  - `tarCode` The tar error code.
  - `code` Either the tar error code, or the error code set by the underlying system.
  - `file` The archive file being read or written.
  - `cwd` Working directory for creation and extraction operations.
  - `entry` The entry object (if it could be created) for `TAR_ENTRY_INFO`, `TAR_ENTRY_INVALID`, and `TAR_ENTRY_ERROR` warnings.
  - `header` The header object (if it could be created, and the entry could not be created) for `TAR_ENTRY_INFO` and `TAR_ENTRY_INVALID` warnings.
  - `recoverable` Boolean. If `false`, then the warning will emit an `error`, even in non-strict mode.

#### Error Codes

* `TAR_ENTRY_INFO` An informative error indicating that an entry is being modified, but otherwise processed normally. For example, removing `/` or `C:\` from absolute paths if `preservePaths` is not set.

* `TAR_ENTRY_INVALID` An indication that a given entry is not a valid tar archive entry, and will be skipped.
  This occurs when:

  - a checksum fails,
  - a `linkpath` is missing for a link type, or
  - a `linkpath` is provided for a non-link type.

  If every entry in a parsed archive raises a `TAR_ENTRY_INVALID` error, then the archive is presumed to be unrecoverably broken, and `TAR_BAD_ARCHIVE` will be raised.

* `TAR_ENTRY_ERROR` The entry appears to be a valid tar archive entry, but encountered an error which prevented it from being unpacked. This occurs when:

  - an unrecoverable fs error happens during unpacking,
  - an entry has `..` in the path and `preservePaths` is not set, or
  - an entry is extracting through a symbolic link, when `preservePaths` is not set.

* `TAR_ENTRY_UNSUPPORTED` An indication that a given entry is a valid archive entry, but of a type that is unsupported, and so will be skipped in archive creation or extracting.

* `TAR_ABORT` When parsing gzip-encoded archives, the parser will abort the parse process and raise a warning for any zlib errors encountered. Aborts are considered unrecoverable for both parsing and unpacking.

* `TAR_BAD_ARCHIVE` The archive file is totally hosed. This can happen for a number of reasons, and always occurs at the end of a parse or extract:

  - An entry body was truncated before seeing the full number of bytes.
  - The archive contained only invalid entries, indicating that it is likely not an archive, or at least, not an archive this library can parse.

  `TAR_BAD_ARCHIVE` is considered informative for parse operations, but unrecoverable for extraction. Note that, if encountered at the end of an extraction, tar WILL still have extracted as much as it could from the archive, so there may be some garbage files to clean up.

Errors that occur deeper in the system (ie, either the filesystem or zlib) will have their error codes left intact, and a `tarCode` matching one of the above will be added to the warning metadata or the raised error object.

Errors generated by tar will have one of the above codes set as the `error.code` field as well, but since errors originating in zlib or fs will have their original codes, it's better to read `error.tarCode` if you wish to see how tar is handling the issue.

### Examples

The API mimics the `tar(1)` command line functionality, with aliases for more human-readable option and function names. The goal is that if you know how to use `tar(1)` in Unix, then you know how to use `require('tar')` in JavaScript.

To replicate `tar czf my-tarball.tgz files and folders`, you'd do:

```js
tar.c(
  {
    gzip: <true|gzip options>,
    file: 'my-tarball.tgz'
  },
  ['some', 'files', 'and', 'folders']
).then(_ => { .. tarball has been created .. })
```

To replicate `tar cz files and folders > my-tarball.tgz`, you'd do:

```js
tar.c( // or tar.create
  {
    gzip: <true|gzip options>
  },
  ['some', 'files', 'and', 'folders']
).pipe(fs.createWriteStream('my-tarball.tgz'))
```

To replicate `tar xf my-tarball.tgz`, you'd do:

```js
tar.x( // or tar.extract(
  {
    file: 'my-tarball.tgz'
  }
).then(_ => { .. tarball has been dumped in cwd .. })
```

To replicate `cat my-tarball.tgz | tar x -C some-dir --strip=1`:

```js
fs.createReadStream('my-tarball.tgz').pipe(
  tar.x({
    strip: 1,
    C: 'some-dir' // alias for cwd:'some-dir', also ok
  })
)
```

To replicate `tar tf my-tarball.tgz`, do this:

```js
tar.t({
  file: 'my-tarball.tgz',
  onentry: entry => { .. do whatever with it .. }
})
```

To replicate `cat my-tarball.tgz | tar t` do:

```js
fs.createReadStream('my-tarball.tgz')
  .pipe(tar.t())
  .on('entry', entry => {
    .. do whatever with it ..
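    // each `entry` here is a tar.ReadEntry object; tar.t() resumes it automatically
    // after the handler unless `noResume: true` is set (see the tar.t options below)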
}) ``` To do anything synchronous, add `sync: true` to the options. Note that sync functions don't take a callback and don't return a promise. When the function returns, it's already done. Sync methods without a file argument return a sync stream, which flushes immediately. But, of course, it still won't be done until you `.end()` it. To filter entries, add `filter: <function>` to the options. Tar-creating methods call the filter with `filter(path, stat)`. Tar-reading methods (including extraction) call the filter with `filter(path, entry)`. The filter is called in the `this`-context of the `Pack` or `Unpack` stream object. The arguments list to `tar t` and `tar x` specify a list of filenames to extract or list, so they're equivalent to a filter that tests if the file is in the list. For those who _aren't_ fans of tar's single-character command names: ``` tar.c === tar.create tar.r === tar.replace (appends to archive, file is required) tar.u === tar.update (appends if newer, file is required) tar.x === tar.extract tar.t === tar.list ``` Keep reading for all the command descriptions and options, as well as the low-level API that they are built on. ### tar.c(options, fileList, callback) [alias: tar.create] Create a tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Write the tarball archive to the specified filename. If this is specified, then the callback will be fired when the file has been written, and a promise will be returned that resolves when the file is written. If a filename is not specified, then a Readable Stream will be returned which will emit the file data. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. If this is set, and a file is not provided, then the resulting stream will already have the data ready to `read` or `emit('data')` as soon as you request it. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `mode` The mode to set on the created file archive - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. 
[Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. ### tar.x(options, fileList, callback) [alias: tar.extract] Extract a tarball archive. The `fileList` is an array of paths to extract from the tarball. If no paths are provided, then all the entries are extracted. If the archive is gzipped, then tar will detect this and unzip it. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. Most extraction errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then the extraction will fail completely. The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. [Alias: `C`] - `file` The archive file to extract. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Create files and directories synchronously. - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. [Alias: `keep-newer`, `keep-newer-files`] - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. [Alias: `k`, `keep-existing`] - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. [Alias: `P`] - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. [Alias: `U`] - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. 
[Alias: `strip-components`, `stripComponents`] - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. [Alias: `p`] - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. [Alias: `m`, `no-mtime`] - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync extractions. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### tar.t(options, fileList, callback) [alias: tar.list] List the contents of a tarball archive. The `fileList` is an array of paths to list from the tarball. If no paths are provided, then all the entries are listed. If the archive is gzipped, then tar will detect this and unzip it. Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. However, they don't emit `'data'` or `'end'` events. (If you want to get actual readable entries, use the `tar.Parse` class instead.) The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. [Alias: `C`] - `file` The archive file to list. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Read the specified file synchronously. (This has no effect when a file option isn't specified, because entries are emitted as fast as they are parsed from the stream anyway.) - `strict` Treat warnings as crash-worthy errors. Default false. 
- `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. This is important for when both `file` and `sync` are set, because it will be called synchronously. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noResume` By default, `entry` streams are resumed immediately after the call to `onentry`. Set `noResume: true` to suppress this behavior. Note that by opting into this, the stream will never complete until the entry data is consumed. ### tar.u(options, fileList, callback) [alias: tar.update] Add files to an archive if they are newer than the entry already in the tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ### tar.r(options, fileList, callback) [alias: tar.replace] Add files to an existing archive. Because later entries override earlier entries, this effectively replaces any existing entries. The `fileList` is an array of paths to add to the tarball. 
Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ## Low-Level API ### class tar.Pack A readable tar stream. Has all the standard readable stream interface stuff. `'data'` and `'end'` events, `read()` method, `pause()` and `resume()`, etc. #### constructor(options) The following options are supported: - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. 
Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. #### add(path) Adds an entry to the archive. Returns the Pack stream. #### write(path) Adds an entry to the archive. Returns true if flushed. #### end() Finishes the archive. ### class tar.Pack.Sync Synchronous version of `tar.Pack`. ### class tar.Unpack A writable stream that unpacks a tar archive onto the file system. All the normal writable stream stuff is supported. `write()` and `end()` methods, `'drain'` events, etc. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. `'close'` is emitted when it's done writing stuff to the file system. Most unpack errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then an error will be emitted. #### constructor(options) - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. 
- `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. - `win32` True if on a windows platform. Causes behavior where filenames containing `<|>?` chars are converted to windows-compatible values while being unpacked. - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `strict` Treat warnings as crash-worthy errors. Default false. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") ### class tar.Unpack.Sync Synchronous version of `tar.Unpack`. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync unpack streams. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### class tar.Parse A writable stream that parses a tar archive stream. All the standard writable stream stuff is supported. If the archive is gzipped, then tar will detect this and unzip it. Emits `'entry'` events with `tar.ReadEntry` objects, which are themselves readable streams that you can pipe wherever. Each `entry` will not emit until the one before it is flushed through, so make sure to either consume the data (with `on('data', ...)` or `.pipe(...)`) or throw it away with `.resume()` to keep the stream flowing. #### constructor(options) Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. The following options are supported: - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. 
- `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") #### abort(error) Stop all parsing activities. This is called when there are zlib errors. It also emits an unrecoverable warning with the error provided. ### class tar.ReadEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being read out of a tar archive. It has the following fields: - `extended` The extended metadata object provided to the constructor. - `globalExtended` The global extended metadata object provided to the constructor. - `remain` The number of bytes remaining to be written into the stream. - `blockRemain` The number of 512-byte blocks remaining to be written into the stream. - `ignore` Whether this entry should be ignored. - `meta` True if this represents metadata about the next entry, false if it represents a filesystem object. - All the fields from the header, extended header, and global extended header are added to the ReadEntry object. So it has `path`, `type`, `size, `mode`, and so on. #### constructor(header, extended, globalExtended) Create a new ReadEntry object with the specified header, extended header, and global extended header values. ### class tar.WriteEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being written from the file system into a tar archive. Emits data for the Header, and for the Pax Extended Header if one is required, as well as any body data. Creating a WriteEntry for a directory does not also create WriteEntry objects for all of the directory contents. It has the following fields: - `path` The path field that will be written to the archive. By default, this is also the path from the cwd to the file system object. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `myuid` If supported, the uid of the user running the current process. - `myuser` The `env.USER` string if set, or `''`. Set as the entry `uname` field if the file's `uid` matches `this.myuid`. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly. - `strict` Treat warnings as crash-worthy errors. Default false. - `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/` and filenames containing the windows-compatible forms of `<|>?:` characters are converted to actual `<|>?:` characters in the archive. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. 
Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. #### constructor(path, options) `path` is the path of the entry as it is written in the archive. The following options are supported: - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly. - `strict` Treat warnings as crash-worthy errors. Default false. - `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `umask` Set to restrict the modes on the entries in the archive, somewhat like how umask works on file creation. Defaults to `process.umask()` on unix systems, or `0o22` on Windows. #### warn(message, data) If strict, emit an error with the provided message. Othewise, emit a `'warn'` event with the provided message and data. ### class tar.WriteEntry.Sync Synchronous version of tar.WriteEntry ### class tar.WriteEntry.Tar A version of tar.WriteEntry that gets its data from a tar.ReadEntry instead of from the filesystem. #### constructor(readEntry, options) `readEntry` is the entry being read out of another archive. The following options are supported: - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `strict` Treat warnings as crash-worthy errors. Default false. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. ### class tar.Header A class for reading and writing header blocks. It has the following fields: - `nullBlock` True if decoding a block which is entirely composed of `0x00` null bytes. (Useful because tar files are terminated by at least 2 null blocks.) - `cksumValid` True if the checksum in the header is valid, false otherwise. 
- `needPax` True if the values, as encoded, will require a Pax extended header. - `path` The path of the entry. - `mode` The 4 lowest-order octal digits of the file mode. That is, read/write/execute permissions for world, group, and owner, and the setuid, setgid, and sticky bits. - `uid` Numeric user id of the file owner - `gid` Numeric group id of the file owner - `size` Size of the file in bytes - `mtime` Modified time of the file - `cksum` The checksum of the header. This is generated by adding all the bytes of the header block, treating the checksum field itself as all ascii space characters (that is, `0x20`). - `type` The human-readable name of the type of entry this represents, or the alphanumeric key if unknown. - `typeKey` The alphanumeric key for the type of entry this header represents. - `linkpath` The target of Link and SymbolicLink entries. - `uname` Human-readable user name of the file owner - `gname` Human-readable group name of the file owner - `devmaj` The major portion of the device number. Always `0` for files, directories, and links. - `devmin` The minor portion of the device number. Always `0` for files, directories, and links. - `atime` File access time. - `ctime` File change time. #### constructor(data, [offset=0]) `data` is optional. It is either a Buffer that should be interpreted as a tar Header starting at the specified offset and continuing for 512 bytes, or a data object of keys and values to set on the header object, and eventually encode as a tar Header. #### decode(block, offset) Decode the provided buffer starting at the specified offset. Buffer length must be greater than 512 bytes. #### set(data) Set the fields in the data object. #### encode(buffer, offset) Encode the header fields into the buffer at the specified offset. Returns `this.needPax` to indicate whether a Pax Extended Header is required to properly encode the specified data. ### class tar.Pax An object representing a set of key-value pairs in an Pax extended header entry. It has the following fields. Where the same name is used, they have the same semantics as the tar.Header field of the same name. - `global` True if this represents a global extended header, or false if it is for a single entry. - `atime` - `charset` - `comment` - `ctime` - `gid` - `gname` - `linkpath` - `mtime` - `path` - `size` - `uid` - `uname` - `dev` - `ino` - `nlink` #### constructor(object, global) Set the fields set in the object. `global` is a boolean that defaults to false. #### encode() Return a Buffer containing the header and body for the Pax extended header entry, or `null` if there is nothing to encode. #### encodeBody() Return a string representing the body of the pax extended header entry. #### encodeField(fieldName) Return a string representing the key/value encoding for the specified fieldName, or `''` if the field is unset. ### tar.Pax.parse(string, extended, global) Return a new Pax object created by parsing the contents of the string provided. If the `extended` object is set, then also add the fields from that object. (This is necessary because multiple metadata entries can occur in sequence.) ### tar.types A translation table for the `type` field in tar headers. #### tar.types.name.get(code) Get the human-readable name for a given alphanumeric code. #### tar.types.code.get(name) Get the alphanumeric code for a given human-readable name. # lru cache A cache object that deletes the least-recently-used items. 
[![Build Status](https://travis-ci.org/isaacs/node-lru-cache.svg?branch=master)](https://travis-ci.org/isaacs/node-lru-cache) [![Coverage Status](https://coveralls.io/repos/isaacs/node-lru-cache/badge.svg?service=github)](https://coveralls.io/github/isaacs/node-lru-cache) ## Installation: ```javascript npm install lru-cache --save ``` ## Usage: ```javascript var LRU = require("lru-cache") , options = { max: 500 , length: function (n, key) { return n * 2 + key.length } , dispose: function (key, n) { n.close() } , maxAge: 1000 * 60 * 60 } , cache = new LRU(options) , otherCache = new LRU(50) // sets just the max size cache.set("key", "value") cache.get("key") // "value" // non-string keys ARE fully supported // but note that it must be THE SAME object, not // just a JSON-equivalent object. var someObject = { a: 1 } cache.set(someObject, 'a value') // Object keys are not toString()-ed cache.set('[object Object]', 'a different value') assert.equal(cache.get(someObject), 'a value') // A similar object with same keys/values won't work, // because it's a different object identity assert.equal(cache.get({ a: 1 }), undefined) cache.reset() // empty the cache ``` If you put more stuff in it, then items will fall out. If you try to put an oversized thing in it, then it'll fall out right away. ## Options * `max` The maximum size of the cache, checked by applying the length function to all values in the cache. Not setting this is kind of silly, since that's the whole purpose of this lib, but it defaults to `Infinity`. Setting it to a non-number or negative number will throw a `TypeError`. Setting it to 0 makes it be `Infinity`. * `maxAge` Maximum age in ms. Items are not pro-actively pruned out as they age, but if you try to get an item that is too old, it'll drop it and return undefined instead of giving it to you. Setting this to a negative value will make everything seem old! Setting it to a non-number will throw a `TypeError`. * `length` Function that is used to calculate the length of stored items. If you're storing strings or buffers, then you probably want to do something like `function(n, key){return n.length}`. The default is `function(){return 1}`, which is fine if you want to store `max` like-sized things. The item is passed as the first argument, and the key is passed as the second argumnet. * `dispose` Function that is called on items when they are dropped from the cache. This can be handy if you want to close file descriptors or do other cleanup tasks when items are no longer accessible. Called with `key, value`. It's called *before* actually removing the item from the internal cache, so if you want to immediately put it back in, you'll have to do that in a `nextTick` or `setTimeout` callback or it won't do anything. * `stale` By default, if you set a `maxAge`, it'll only actually pull stale items out of the cache when you `get(key)`. (That is, it's not pre-emptively doing a `setTimeout` or anything.) If you set `stale:true`, it'll return the stale value before deleting it. If you don't set this, then it'll return `undefined` when you try to get a stale entry, as if it had already been deleted. * `noDisposeOnSet` By default, if you set a `dispose()` method, then it'll be called whenever a `set()` operation overwrites an existing key. If you set this option, `dispose()` will only be called when a key falls out of the cache, not when it is overwritten. 
* `updateAgeOnGet` When using time-expiring entries with `maxAge`, setting this to `true` will make each item's effective time update to the current time whenever it is retrieved from cache, causing it to not expire. (It can still fall out of cache based on recency of use, of course.) ## API * `set(key, value, maxAge)` * `get(key) => value` Both of these will update the "recently used"-ness of the key. They do what you think. `maxAge` is optional and overrides the cache `maxAge` option if provided. If the key is not found, `get()` will return `undefined`. The key and val can be any value. * `peek(key)` Returns the key value (or `undefined` if not found) without updating the "recently used"-ness of the key. (If you find yourself using this a lot, you *might* be using the wrong sort of data structure, but there are some use cases where it's handy.) * `del(key)` Deletes a key out of the cache. * `reset()` Clear the cache entirely, throwing away all values. * `has(key)` Check if a key is in the cache, without updating the recent-ness or deleting it for being stale. * `forEach(function(value,key,cache), [thisp])` Just like `Array.prototype.forEach`. Iterates over all the keys in the cache, in order of recent-ness. (Ie, more recently used items are iterated over first.) * `rforEach(function(value,key,cache), [thisp])` The same as `cache.forEach(...)` but items are iterated over in reverse order. (ie, less recently used items are iterated over first.) * `keys()` Return an array of the keys in the cache. * `values()` Return an array of the values in the cache. * `length` Return total length of objects in cache taking into account `length` options function. * `itemCount` Return total quantity of objects currently in cache. Note, that `stale` (see options) items are returned as part of this item count. * `dump()` Return an array of the cache entries ready for serialization and usage with 'destinationCache.load(arr)`. * `load(cacheEntriesArray)` Loads another cache entries array, obtained with `sourceCache.dump()`, into the cache. The destination cache is reset before loading new entries * `prune()` Manually iterates over the entire cache proactively pruning old entries # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. 
Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows note On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Note that PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Then, run the program to be debugged as usual. ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see when the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well, the "+NNNms" will show you how much time was spent between calls. <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. 
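For example, a library can name its debug instance after its own package (a minimal sketch; the package name `mylib` is made up for illustration):

```js
// inside a library published as "mylib" (hypothetical name)
const debug = require('debug')('mylib');

debug('initialising');
// consumers of the library enable this output with DEBUG=mylib
```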
If you have more than one debugger you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser".

If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output.

## Wildcards

The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session". Instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`.

You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:".

## Environment Variables

When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging:

| Name | Purpose |
|-----------|-------------------------------------------------|
| `DEBUG` | Enables/disables specific debugging namespaces. |
| `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). |
| `DEBUG_COLORS` | Whether or not to use colors in the debug output. |
| `DEBUG_DEPTH` | Object inspection depth. |
| `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. |

__Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list.

## Formatters

Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters:

| Formatter | Representation |
|-----------|----------------|
| `%O` | Pretty-print an Object on multiple lines. |
| `%o` | Pretty-print an Object all on a single line. |
| `%s` | String. |
| `%d` | Number (both integer and float). |
| `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. |
| `%%` | Single percent sign ('%'). This does not consume an argument. |

### Custom formatters

You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like:

```js
const createDebug = require('debug')
createDebug.formatters.h = (v) => {
  return v.toString('hex')
}

// …elsewhere
const debug = createDebug('foo')
debug('this is hex: %h', Buffer.from('hello world'))
//   foo this is hex: 68656c6c6f20776f726c64 +0ms
```

## Browser Support

You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself.

Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`:

```js
localStorage.debug = 'worker:*'
```

And then refresh the page.
```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. [[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a 
href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. 
[[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # wrappy Callback wrapping utility ## USAGE ```javascript var wrappy = require("wrappy") // var wrapper = wrappy(wrapperFunction) // make sure a cb is called only once // See also: http://npm.im/once for this specific use case var once = wrappy(function (cb) { var called = false return function () { if (called) return called = true return cb.apply(this, arguments) } }) function printBoo () { console.log('boo') } // has some rando property printBoo.iAmBooPrinter = true var onlyPrintOnce = once(printBoo) onlyPrintOnce() // prints 'boo' onlyPrintOnce() // does nothing // random property is retained! assert.equal(onlyPrintOnce.iAmBooPrinter, true) ``` semver(1) -- The semantic versioner for npm =========================================== ## Install ```bash npm install semver ```` ## Usage As a node module: ```js const semver = require('semver') semver.valid('1.2.3') // '1.2.3' semver.valid('a.b.c') // null semver.clean(' =v1.2.3 ') // '1.2.3' semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true semver.gt('1.2.3', '9.8.7') // false semver.lt('1.2.3', '9.8.7') // true semver.minVersion('>=1.0.0') // '1.0.0' semver.valid(semver.coerce('v2')) // '2.0.0' semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7' ``` You can also just load the module for the function that you care about, if you'd like to minimize your footprint. 
```js // load the whole API at once in a single object const semver = require('semver') // or just load the bits you need // all of them listed here, just pick and choose what you want // classes const SemVer = require('semver/classes/semver') const Comparator = require('semver/classes/comparator') const Range = require('semver/classes/range') // functions for working with versions const semverParse = require('semver/functions/parse') const semverValid = require('semver/functions/valid') const semverClean = require('semver/functions/clean') const semverInc = require('semver/functions/inc') const semverDiff = require('semver/functions/diff') const semverMajor = require('semver/functions/major') const semverMinor = require('semver/functions/minor') const semverPatch = require('semver/functions/patch') const semverPrerelease = require('semver/functions/prerelease') const semverCompare = require('semver/functions/compare') const semverRcompare = require('semver/functions/rcompare') const semverCompareLoose = require('semver/functions/compare-loose') const semverCompareBuild = require('semver/functions/compare-build') const semverSort = require('semver/functions/sort') const semverRsort = require('semver/functions/rsort') // low-level comparators between versions const semverGt = require('semver/functions/gt') const semverLt = require('semver/functions/lt') const semverEq = require('semver/functions/eq') const semverNeq = require('semver/functions/neq') const semverGte = require('semver/functions/gte') const semverLte = require('semver/functions/lte') const semverCmp = require('semver/functions/cmp') const semverCoerce = require('semver/functions/coerce') // working with ranges const semverSatisfies = require('semver/functions/satisfies') const semverMaxSatisfying = require('semver/ranges/max-satisfying') const semverMinSatisfying = require('semver/ranges/min-satisfying') const semverToComparators = require('semver/ranges/to-comparators') const semverMinVersion = require('semver/ranges/min-version') const semverValidRange = require('semver/ranges/valid') const semverOutside = require('semver/ranges/outside') const semverGtr = require('semver/ranges/gtr') const semverLtr = require('semver/ranges/ltr') const semverIntersects = require('semver/ranges/intersects') const simplifyRange = require('semver/ranges/simplify') const rangeSubset = require('semver/ranges/subset') ``` As a command-line utility: ``` $ semver -h A JavaScript implementation of the https://semver.org/ specification Copyright Isaac Z. Schlueter Usage: semver [options] <version> [<version> [...]] Prints valid versions sorted by SemVer precedence Options: -r --range <range> Print versions that match the specified range. -i --increment [<level>] Increment a version by the specified level. Level can be one of: major, minor, patch, premajor, preminor, prepatch, or prerelease. Default level is 'patch'. Only one version may be specified. --preid <identifier> Identifier to be used to prefix premajor, preminor, prepatch or prerelease version increments. -l --loose Interpret versions and ranges loosely -p --include-prerelease Always include prerelease versions in range matching -c --coerce Coerce a string into SemVer if possible (does not imply --loose) --rtl Coerce version strings right to left --ltr Coerce version strings left to right (default) Program exits successfully if any valid version satisfies all supplied ranges, and prints all satisfying versions. If no satisfying versions are found, then exits failure. 
Versions are printed in ascending order, so supplying multiple versions to the utility will just sort them. ``` ## Versions A "version" is described by the `v2.0.0` specification found at <https://semver.org/>. A leading `"="` or `"v"` character is stripped off and ignored. ## Ranges A `version range` is a set of `comparators` which specify versions that satisfy the range. A `comparator` is composed of an `operator` and a `version`. The set of primitive `operators` is: * `<` Less than * `<=` Less than or equal to * `>` Greater than * `>=` Greater than or equal to * `=` Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included. For example, the comparator `>=1.2.7` would match the versions `1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` or `1.1.0`. Comparators can be joined by whitespace to form a `comparator set`, which is satisfied by the **intersection** of all of the comparators it includes. A range is composed of one or more comparator sets, joined by `||`. A version matches a range if and only if every comparator in at least one of the `||`-separated comparator sets is satisfied by the version. For example, the range `>=1.2.7 <1.3.0` would match the versions `1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, or `1.1.0`. The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, `1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. ### Prerelease Tags If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then it will only be allowed to satisfy comparator sets if at least one comparator with the same `[major, minor, patch]` tuple also has a prerelease tag. For example, the range `>1.2.3-alpha.3` would be allowed to match the version `1.2.3-alpha.7`, but it would *not* be satisfied by `3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater than" `1.2.3-alpha.3` according to the SemVer sort rules. The version range only accepts prerelease tags on the `1.2.3` version. The version `3.4.5` *would* satisfy the range, because it does not have a prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author's design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics. Second, a user who has opted into using a prerelease version has clearly indicated the intent to use *that specific* set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the *next* set of prerelease versions. Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the `includePrerelease` flag on the options object to any [functions](https://github.com/npm/node-semver#functions) that do range matching. 
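A small sketch of the prerelease rules described above (the version strings are only illustrative):

```js
const semver = require('semver')

// A prerelease version satisfies a range only where a comparator with the
// same [major, minor, patch] tuple also carries a prerelease tag.
semver.satisfies('1.2.3-alpha.7', '>1.2.3-alpha.3') // true
semver.satisfies('3.4.5-alpha.9', '>1.2.3-alpha.3') // false (prerelease on a different tuple)
semver.satisfies('3.4.5', '>1.2.3-alpha.3')         // true (no prerelease flag)

// Opt in to matching prereleases across tuples:
semver.satisfies('3.4.5-alpha.9', '>1.2.3-alpha.3', { includePrerelease: true }) // true
```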
#### Prerelease Identifiers The method `.inc` takes an additional `identifier` string argument that will append the value of the string as a prerelease identifier: ```javascript semver.inc('1.2.3', 'prerelease', 'beta') // '1.2.4-beta.0' ``` command-line example: ```bash $ semver 1.2.3 -i prerelease --preid beta 1.2.4-beta.0 ``` Which then can be used to increment further: ```bash $ semver 1.2.4-beta.0 -i prerelease 1.2.4-beta.1 ``` ### Advanced Range Syntax Advanced range syntax desugars to primitive comparators in deterministic ways. Advanced ranges may be combined in the same way as primitive comparators using white space or `||`. #### Hyphen Ranges `X.Y.Z - A.B.C` Specifies an inclusive set. * `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes. * `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts. * `1.2.3 - 2.3` := `>=1.2.3 <2.4.0-0` * `1.2.3 - 2` := `>=1.2.3 <3.0.0-0` #### X-Ranges `1.2.x` `1.X` `1.2.*` `*` Any of `X`, `x`, or `*` may be used to "stand in" for one of the numeric values in the `[major, minor, patch]` tuple. * `*` := `>=0.0.0` (Any version satisfies) * `1.x` := `>=1.0.0 <2.0.0-0` (Matching major version) * `1.2.x` := `>=1.2.0 <1.3.0-0` (Matching major and minor versions) A partial version range is treated as an X-Range, so the special character is in fact optional. * `""` (empty string) := `*` := `>=0.0.0` * `1` := `1.x.x` := `>=1.0.0 <2.0.0-0` * `1.2` := `1.2.x` := `>=1.2.0 <1.3.0-0` #### Tilde Ranges `~1.2.3` `~1.2` `~1` Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not. * `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0-0` * `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0-0` (Same as `1.2.x`) * `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0-0` (Same as `1.x`) * `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0-0` * `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0-0` (Same as `0.2.x`) * `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0-0` (Same as `0.x`) * `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0-0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. #### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` Allows changes that do not modify the left-most non-zero element in the `[major, minor, patch]` tuple. In other words, this allows patch and minor updates for versions `1.0.0` and above, patch updates for versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. Many authors treat a `0.x` version as if the `x` were the major "breaking-change" indicator. Caret ranges are ideal when an author may make breaking changes between `0.2.4` and `0.3.0` releases, which is a common practice. However, it presumes that there will *not* be breaking changes between `0.2.4` and `0.2.5`. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices. 
* `^1.2.3` := `>=1.2.3 <2.0.0-0` * `^0.2.3` := `>=0.2.3 <0.3.0-0` * `^0.0.3` := `>=0.0.3 <0.0.4-0` * `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0-0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. * `^0.0.3-beta` := `>=0.0.3-beta <0.0.4-0` Note that prereleases in the `0.0.3` version *only* will be allowed, if they are greater than or equal to `beta`. So, `0.0.3-pr.2` would be allowed. When parsing caret ranges, a missing `patch` value desugars to the number `0`, but will allow flexibility within that value, even if the major and minor versions are both `0`. * `^1.2.x` := `>=1.2.0 <2.0.0-0` * `^0.0.x` := `>=0.0.0 <0.1.0-0` * `^0.0` := `>=0.0.0 <0.1.0-0` A missing `minor` and `patch` values will desugar to zero, but also allow flexibility within those values, even if the major version is zero. * `^1.x` := `>=1.0.0 <2.0.0-0` * `^0.x` := `>=0.0.0 <1.0.0-0` ### Range Grammar Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors: ```bnf range-set ::= range ( logical-or range ) * logical-or ::= ( ' ' ) * '||' ( ' ' ) * range ::= hyphen | simple ( ' ' simple ) * | '' hyphen ::= partial ' - ' partial simple ::= primitive | partial | tilde | caret primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )? xr ::= 'x' | 'X' | '*' | nr nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) * tilde ::= '~' partial caret ::= '^' partial qualifier ::= ( '-' pre )? ( '+' build )? pre ::= parts build ::= parts parts ::= part ( '.' part ) * part ::= nr | [-0-9A-Za-z]+ ``` ## Functions All methods and classes take a final `options` object argument. All options in this object are `false` by default. The options supported are: - `loose` Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the `options` argument is a boolean value instead of an object, it is interpreted to be the `loose` param. - `includePrerelease` Set to suppress the [default behavior](https://github.com/npm/node-semver#prerelease-tags) of excluding prerelease tagged versions from ranges unless they are explicitly opted into. Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse. * `valid(v)`: Return the parsed version, or null if it's not valid. * `inc(v, release)`: Return the version incremented by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if it's not valid * `premajor` in one call will bump the version up to the next major version and down to a prerelease of that major version. `preminor`, and `prepatch` work the same way. * If called from a non-prerelease version, the `prerelease` will work the same as `prepatch`. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it. * `prerelease(v)`: Returns an array of prerelease components, or null if none exist. Example: `prerelease('1.2.3-alpha.1') -> ['alpha', 1]` * `major(v)`: Return the major version number. * `minor(v)`: Return the minor version number. * `patch(v)`: Return the patch version number. * `intersects(r1, r2, loose)`: Return true if the two supplied ranges or comparators intersect. 
* `parse(v)`: Attempt to parse a string as a semantic version, returning either a `SemVer` object or `null`.

### Comparison

* `gt(v1, v2)`: `v1 > v2`
* `gte(v1, v2)`: `v1 >= v2`
* `lt(v1, v2)`: `v1 < v2`
* `lte(v1, v2)`: `v1 <= v2`
* `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent, even if they're not the exact same string. You already know how to compare strings.
* `neq(v1, v2)`: `v1 != v2` The opposite of `eq`.
* `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call the corresponding function above. `"==="` and `"!=="` do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.
* `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if `v2` is greater. Sorts in ascending order if passed to `Array.sort()`.
* `rcompare(v1, v2)`: The reverse of `compare`. Sorts an array of versions in descending order when passed to `Array.sort()`.
* `compareBuild(v1, v2)`: The same as `compare` but considers `build` when two versions are equal. Sorts in ascending order if passed to `Array.sort()`.
* `diff(v1, v2)`: Returns the difference between two versions by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if the versions are the same.

### Comparators

* `intersects(comparator)`: Return true if the comparators intersect.

### Ranges

* `validRange(range)`: Return the valid range or null if it's not valid.
* `satisfies(version, range)`: Return true if the version satisfies the range.
* `maxSatisfying(versions, range)`: Return the highest version in the list that satisfies the range, or `null` if none of them do.
* `minSatisfying(versions, range)`: Return the lowest version in the list that satisfies the range, or `null` if none of them do.
* `minVersion(range)`: Return the lowest version that can possibly match the given range.
* `gtr(version, range)`: Return `true` if version is greater than all the versions possible in the range.
* `ltr(version, range)`: Return `true` if version is less than all the versions possible in the range.
* `outside(version, range, hilo)`: Return true if the version is outside the bounds of the range in either the high or low direction. The `hilo` argument must be either the string `'>'` or `'<'`. (This is the function called by `gtr` and `ltr`.)
* `intersects(range)`: Return true if any of the range's comparators intersect.
* `simplifyRange(versions, range)`: Return a "simplified" range that matches the same items in the `versions` list as the range specified. Note that it does *not* guarantee that it would match the same versions in all cases, only for the set of versions provided. This is useful when generating ranges by joining together multiple versions with `||` programmatically, to provide the user with something a bit more ergonomic. If the provided range is shorter in string-length than the generated range, then that is returned.
* `subset(subRange, superRange)`: Return `true` if the `subRange` range is entirely contained by the `superRange` range.

Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, *or* satisfy a range!
For example, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9` until `2.0.0`, so the version `1.2.10` would not be greater than the range (because `2.0.1` satisfies, which is higher), nor less than the range (since `1.2.8` satisfies, which is lower), and it also does not satisfy the range. If you want to know if a version satisfies or does not satisfy a range, use the `satisfies(version, range)` function. ### Coercion * `coerce(version, options)`: Coerces a string to semver if possible This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., `1`, `1.2`, `1.2.3`) up to the max permitted length (256 characters). Longer versions are simply truncated (`4.6.3.9.2-alpha2` becomes `4.6.3`). All surrounding text is simply ignored (`v3.4 replaces v3.3.1` becomes `3.4.0`). Only text which lacks digits will fail coercion (`version one` is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (`10000000000000000.4.7.4` becomes `4.7.4`). The maximum value for any semver component is `Number.MAX_SAFE_INTEGER || (2**53 - 1)`; higher value components are invalid (`9999999999999999.4.7.4` is likely invalid). If the `options.rtl` flag is set, then `coerce` will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, `1.2.3.4` will return `2.3.4` in rtl mode, not `4.0.0`. `1.2.3/4` will return `4.0.0`, because the `4` is not a part of any other overlapping SemVer tuple. ### Clean * `clean(version)`: Clean a string to be a valid semver if possible This will return a cleaned and trimmed semver version. If the provided version is not valid a null will be returned. This does not work for ranges. ex. * `s.clean(' = v 2.1.5foo')`: `null` * `s.clean(' = v 2.1.5foo', { loose: true })`: `'2.1.5-foo'` * `s.clean(' = v 2.1.5-foo')`: `null` * `s.clean(' = v 2.1.5-foo', { loose: true })`: `'2.1.5-foo'` * `s.clean('=v2.1.5')`: `'2.1.5'` * `s.clean(' =v2.1.5')`: `2.1.5` * `s.clean(' 2.1.5 ')`: `'2.1.5'` * `s.clean('~1.0.0')`: `null` ## Exported Modules <!-- TODO: Make sure that all of these items are documented (classes aren't, eg), and then pull the module name into the documentation for that specific thing. --> You may pull in just the part of this semver utility that you need, if you are sensitive to packing and tree-shaking concerns. The main `require('semver')` export uses getter functions to lazily load the parts of the API that are used. 
The following modules are available:

* `require('semver')`
* `require('semver/classes')`
* `require('semver/classes/comparator')`
* `require('semver/classes/range')`
* `require('semver/classes/semver')`
* `require('semver/functions/clean')`
* `require('semver/functions/cmp')`
* `require('semver/functions/coerce')`
* `require('semver/functions/compare')`
* `require('semver/functions/compare-build')`
* `require('semver/functions/compare-loose')`
* `require('semver/functions/diff')`
* `require('semver/functions/eq')`
* `require('semver/functions/gt')`
* `require('semver/functions/gte')`
* `require('semver/functions/inc')`
* `require('semver/functions/lt')`
* `require('semver/functions/lte')`
* `require('semver/functions/major')`
* `require('semver/functions/minor')`
* `require('semver/functions/neq')`
* `require('semver/functions/parse')`
* `require('semver/functions/patch')`
* `require('semver/functions/prerelease')`
* `require('semver/functions/rcompare')`
* `require('semver/functions/rsort')`
* `require('semver/functions/satisfies')`
* `require('semver/functions/sort')`
* `require('semver/functions/valid')`
* `require('semver/ranges/gtr')`
* `require('semver/ranges/intersects')`
* `require('semver/ranges/ltr')`
* `require('semver/ranges/max-satisfying')`
* `require('semver/ranges/min-satisfying')`
* `require('semver/ranges/min-version')`
* `require('semver/ranges/outside')`
* `require('semver/ranges/to-comparators')`
* `require('semver/ranges/valid')`

# <img src="./logo.png" alt="bn.js" width="160" height="160" />

> BigNum in pure javascript

[![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js)

## Install

`npm install --save bn.js`

## Usage

```js
const BN = require('bn.js');

var a = new BN('dead', 16);
var b = new BN('101010', 2);

var res = a.add(b);
console.log(res.toString(10));  // 57047
```

**Note**: decimals are not supported in this library.

## Notation

### Prefixes

There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name:

* `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs
* `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like `mod()`. In such cases, if the result would be negative, the modulo is added to the result to make it positive

### Postfixes

* `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported.
* `rn` - both the argument and the return value of the function are plain JavaScript Numbers. Decimals are not supported.

### Examples

* `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a`
* `a.umod(b)` - reduce `a` modulo `b`, returning positive value
* `a.iushln(13)` - shift bits of `a` left by 13

## Instructions

Prefixes/postfixes are put in parens at the end of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian).
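To make the prefix/postfix notation concrete, here is a small illustrative sketch (the values are arbitrary and chosen only for the example):

```js
const BN = require('bn.js');

const a = new BN('1000', 10);
const b = new BN('37', 10);

const sum = a.add(b);  // plain form: returns a new BN, `a` is left untouched
a.iadd(b);             // `i` prefix: in-place, the result is stored in `a`
const r = a.umod(b);   // `u` prefix: the result is always non-negative
a.iushln(13);          // `iu`: in-place unsigned left shift by a plain Number
a.addn(5);             // `n` postfix: the argument is a plain JavaScript Number
```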
### Utilities * `a.clone()` - clone number * `a.toString(base, length)` - convert to base-string and pad with zeroes * `a.toNumber()` - convert to Javascript Number (limited to 53 bits) * `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`) * `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding * `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array` * `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)` * `a.bitLength()` - get number of bits occupied * `a.zeroBits()` - return number of less-significant consequent zero bits (example: `1010000` has 4 zero bits) * `a.byteLength()` - return number of bytes occupied * `a.isNeg()` - true if the number is negative * `a.isEven()` - no comments * `a.isOdd()` - no comments * `a.isZero()` - no comments * `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`) * `a.lt(b)` - `a` less than `b` (`n`) * `a.lte(b)` - `a` less than or equals `b` (`n`) * `a.gt(b)` - `a` greater than `b` (`n`) * `a.gte(b)` - `a` greater than or equals `b` (`n`) * `a.eq(b)` - `a` equals `b` (`n`) * `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width * `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width * `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance * `BN.max(a, b)` - return `a` if `a` bigger than `b` * `BN.min(a, b)` - return `a` if `a` less than `b` ### Arithmetics * `a.neg()` - negate sign (`i`) * `a.abs()` - absolute value (`i`) * `a.add(b)` - addition (`i`, `n`, `in`) * `a.sub(b)` - subtraction (`i`, `n`, `in`) * `a.mul(b)` - multiply (`i`, `n`, `in`) * `a.sqr()` - square (`i`) * `a.pow(b)` - raise `a` to the power of `b` * `a.div(b)` - divide (`divn`, `idivn`) * `a.mod(b)` - reduct (`u`, `n`) (but no `umodn`) * `a.divmod(b)` - quotient and modulus obtained by dividing * `a.divRound(b)` - rounded division ### Bit operations * `a.or(b)` - or (`i`, `u`, `iu`) * `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future) * `a.xor(b)` - xor (`i`, `u`, `iu`) * `a.setn(b, value)` - set specified bit to `value` * `a.shln(b)` - shift left (`i`, `u`, `iu`) * `a.shrn(b)` - shift right (`i`, `u`, `iu`) * `a.testn(b)` - test if specified bit is set * `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`) * `a.bincn(b)` - add `1 << b` to the number * `a.notn(w)` - not (for the width specified by `w`) (`i`) ### Reduction * `a.gcd(b)` - GCD * `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., gcd: ... }`) * `a.invm(b)` - inverse `a` modulo `b` ## Fast reduction When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using special algorithm for [Mersenne Prime][1]. ### Reduction context To enable this tricks one should create a reduction context: ```js var red = BN.red(num); ``` where `num` is just a BN instance. Or: ```js var red = BN.red(primeName); ``` Where `primeName` is either of these [Mersenne Primes][1]: * `'k256'` * `'p224'` * `'p192'` * `'p25519'` Or: ```js var red = BN.mont(num); ``` To reduce numbers with [Montgomery trick][0]. 
`.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`.

### Converting numbers

Before performing anything in a reduction context, numbers should be converted to it. Usually, this means that one should:

* Convert inputs to reduced ones
* Operate on them in the reduction context
* Convert outputs back from the reduction context

Here is how one may convert numbers to `red`:

```js
var redA = a.toRed(red);
```

Where `red` is a reduction context created using the instructions above.

Here is how to convert them back:

```js
var a = redA.fromRed();
```

### Red instructions

Most of the instructions from the very start of this readme have their counterparts in red context:

* `a.redAdd(b)`, `a.redIAdd(b)`
* `a.redSub(b)`, `a.redISub(b)`
* `a.redShl(num)`
* `a.redMul(b)`, `a.redIMul(b)`
* `a.redSqr()`, `a.redISqr()`
* `a.redSqrt()` - square root modulo reduction context's prime
* `a.redInvm()` - modular inverse of the number
* `a.redNeg()`
* `a.redPow(b)` - modular exponentiation

### Number Size

Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers.

## LICENSE

This software is licensed under the MIT License.

[0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication
[1]: https://en.wikipedia.org/wiki/Mersenne_prime

# get-caller-file

[![Build Status](https://travis-ci.org/stefanpenner/get-caller-file.svg?branch=master)](https://travis-ci.org/stefanpenner/get-caller-file)
[![Build status](https://ci.appveyor.com/api/projects/status/ol2q94g1932cy14a/branch/master?svg=true)](https://ci.appveyor.com/project/embercli/get-caller-file/branch/master)

This is a utility which allows a function to figure out from which file it was invoked. It does so by inspecting v8's stack trace at the time it is invoked.

Inspired by http://stackoverflow.com/questions/13227489

*note: this relies on Node/V8-specific APIs; as such, other runtimes may not work*

## Installation

```bash
yarn add get-caller-file
```

## Usage

Given:

```js
// ./foo.js
const getCallerFile = require('get-caller-file');

module.exports = function() {
  return getCallerFile(); // figures out who called it
};
```

```js
// index.js
const foo = require('./foo');

foo() // => /full/path/to/this/file/index.js
```

## Options:

* `getCallerFile(position = 2)`: where `position` is the stack frame whose fileName we want.

# minimatch

A minimal matching utility.

[![Build Status](https://secure.travis-ci.org/isaacs/minimatch.svg)](http://travis-ci.org/isaacs/minimatch)

This is the matching library used internally by npm.

It works by converting glob expressions into JavaScript `RegExp` objects.

## Usage

```javascript
var minimatch = require("minimatch")

minimatch("bar.foo", "*.foo") // true!
minimatch("bar.foo", "*.bar") // false!
minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy!
```

## Features

Supports these glob features:

* Brace Expansion
* Extended glob matching
* "Globstar" `**` matching

See:

* `man sh`
* `man bash`
* `man 3 fnmatch`
* `man 5 gitignore`

## Minimatch Class

Create a minimatch object by instantiating the `minimatch.Minimatch` class.

```javascript
var Minimatch = require("minimatch").Minimatch
var mm = new Minimatch(pattern, options)
```

### Properties

* `pattern` The original pattern the minimatch object represents.
* `options` The options supplied to the constructor.
* `set` A 2-dimensional array of regexp or string expressions. Each row in the array corresponds to a brace-expanded pattern. Each item in the row corresponds to a single path-part.
For example, the pattern `{a,b/c}/d` would expand to a set of patterns like:

    [ [ a, d ]
    , [ b, c, d ] ]

If a portion of the pattern doesn't have any "magic" in it (that is, it's something like `"foo"` rather than `fo*o?`), then it will be left as a string rather than converted to a regular expression.

* `regexp` Created by the `makeRe` method. A single regular expression expressing the entire pattern. This is useful in cases where you wish to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled.
* `negate` True if the pattern is negated.
* `comment` True if the pattern is a comment.
* `empty` True if the pattern is `""`.

### Methods

* `makeRe` Generate the `regexp` member if necessary, and return it. Will return `false` if the pattern is invalid.
* `match(fname)` Return true if the filename matches the pattern, or false otherwise.
* `matchOne(fileArray, patternArray, partial)` Take a `/`-split filename, and match it against a single row in the `regExpSet`. This method is mainly for internal use, but is exposed so that it can be used by a glob-walker that needs to avoid excessive filesystem calls.

All other methods are internal, and will be called as necessary.

### minimatch(path, pattern, options)

Main export. Tests a path against the pattern using the options.

```javascript
var isJS = minimatch(file, "*.js", { matchBase: true })
```

### minimatch.filter(pattern, options)

Returns a function that tests its supplied argument, suitable for use with `Array.filter`. Example:

```javascript
var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true}))
```

### minimatch.match(list, pattern, options)

Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself.

```javascript
var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})
```

### minimatch.makeRe(pattern, options)

Make a regular expression object from the pattern.

## Options

All options are `false` by default.

### debug

Dump a ton of stuff to stderr.

### nobrace

Do not expand `{a,b}` and `{1..3}` brace sets.

### noglobstar

Disable `**` matching against multiple folder names.

### dot

Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot.

Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot` is set.

### noext

Disable "extglob" style patterns like `+(a|b)`.

### nocase

Perform a case-insensitive match.

### nonull

When a match is not found by `minimatch.match`, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches.

### matchBase

If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`.

### nocomment

Suppress the behavior of treating `#` at the start of a pattern as a comment.

### nonegate

Suppress the behavior of treating a leading `!` character as negation.

### flipNegate

Returns from negate expressions the same as if they were not negated. (I.e., true on a hit, false on a miss.)

## Comparisons to other fnmatch/glob implementations

While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional.

If the pattern starts with a `!` character, then it is negated.
Set the `nonegate` flag to suppress this behavior, and treat leading `!` characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like `!(a|B)`. Multiple `!` characters at the start of a pattern will negate the pattern multiple times.

If a pattern starts with `#`, then it is treated as a comment, and will not match anything. Use `\#` to match a literal `#` at the start of a line, or set the `nocomment` flag to suppress this behavior.

The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.1, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not.

If an escaped pattern has no matches, and the `nonull` flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, `minimatch.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters.

If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds.

## Timezone support

In order to provide support for timezones, without relying on the JavaScript host or any other time-zone aware environment, this library makes use of the IANA Timezone Database directly: https://www.iana.org/time-zones

The database files are parsed by the scripts in this folder, which emit AssemblyScript code that is used to process the various rules at runtime.

# fs-minipass

Filesystem streams based on [minipass](http://npm.im/minipass).

4 classes are exported:

- ReadStream
- ReadStreamSync
- WriteStream
- WriteStreamSync

When using `ReadStreamSync`, all of the data is made available immediately upon consuming the stream. Nothing is buffered in memory when the stream is constructed. If the stream is piped to a writer, then it will synchronously `read()` and emit data into the writer as fast as the writer can consume it. (That is, it will respect backpressure.) If you call `stream.read()` then it will read the entire file and return the contents.

When using `WriteStreamSync`, every write is flushed to the file synchronously. If your writes all come in a single tick, then it'll write it all out in a single tick. It's as synchronous as you are.

The async versions work much like their node builtin counterparts, with the exception of introducing significantly less Stream machinery overhead.

## USAGE

It's just streams, you pipe them or read() them or write() to them.

```js
const fsm = require('fs-minipass')
const readStream = new fsm.ReadStream('file.txt')
const writeStream = new fsm.WriteStream('output.txt')
writeStream.write('some file header or whatever\n')
readStream.pipe(writeStream)
```

## ReadStream(path, options)

Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option.

Options:

- `fd` Pass in a numeric file descriptor, if the file is already open.
- `readSize` The size of reads to do, defaults to 16MB
- `size` The size of the file, if known. Prevents zero-byte read() call at the end.
- `autoClose` Set to `false` to prevent the file descriptor from being closed when the file is done being read.

## WriteStream(path, options)

Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option.

Options:

- `fd` Pass in a numeric file descriptor, if the file is already open.
- `mode` The mode to create the file with. Defaults to `0o666`.
- `start` The position in the file to start writing. If not specified, then the file will start writing at position zero, and be truncated by default.
- `autoClose` Set to `false` to prevent the file descriptor from being closed when the stream is ended.
- `flags` Flags to use when opening the file. Irrelevant if `fd` is passed in, since file won't be opened in that case. Defaults to `'a'` if a `pos` is specified, or `'w'` otherwise.

ESQuery is a library for querying the AST output by Esprima for patterns of syntax using a CSS style selector system. Check out the demo: [demo](https://estools.github.io/esquery/)

The following selectors are supported:

* AST node type: `ForStatement`
* [wildcard](http://dev.w3.org/csswg/selectors4/#universal-selector): `*`
* [attribute existence](http://dev.w3.org/csswg/selectors4/#attribute-selectors): `[attr]`
* [attribute value](http://dev.w3.org/csswg/selectors4/#attribute-selectors): `[attr="foo"]` or `[attr=123]`
* attribute regex: `[attr=/foo.*/]` or (with flags) `[attr=/foo.*/is]`
* attribute conditions: `[attr!="foo"]`, `[attr>2]`, `[attr<3]`, `[attr>=2]`, or `[attr<=3]`
* nested attribute: `[attr.level2="foo"]`
* field: `FunctionDeclaration > Identifier.id`
* [First](http://dev.w3.org/csswg/selectors4/#the-first-child-pseudo) or [last](http://dev.w3.org/csswg/selectors4/#the-last-child-pseudo) child: `:first-child` or `:last-child`
* [nth-child](http://dev.w3.org/csswg/selectors4/#the-nth-child-pseudo) (no ax+b support): `:nth-child(2)`
* [nth-last-child](http://dev.w3.org/csswg/selectors4/#the-nth-last-child-pseudo) (no ax+b support): `:nth-last-child(1)`
* [descendant](http://dev.w3.org/csswg/selectors4/#descendant-combinators): `ancestor descendant`
* [child](http://dev.w3.org/csswg/selectors4/#child-combinators): `parent > child`
* [following sibling](http://dev.w3.org/csswg/selectors4/#general-sibling-combinators): `node ~ sibling`
* [adjacent sibling](http://dev.w3.org/csswg/selectors4/#adjacent-sibling-combinators): `node + adjacent`
* [negation](http://dev.w3.org/csswg/selectors4/#negation-pseudo): `:not(ForStatement)`
* [has](https://drafts.csswg.org/selectors-4/#has-pseudo): `:has(ForStatement)`
* [matches-any](http://dev.w3.org/csswg/selectors4/#matches): `:matches([attr] > :first-child, :last-child)`
* [subject indicator](http://dev.w3.org/csswg/selectors4/#subject): `!IfStatement > [name="foo"]`
* class of AST node: `:statement`, `:expression`, `:declaration`, `:function`, or `:pattern`

[![Build Status](https://travis-ci.org/estools/esquery.png?branch=master)](https://travis-ci.org/estools/esquery)

[![Build Status](https://travis-ci.org/isaacs/rimraf.svg?branch=master)](https://travis-ci.org/isaacs/rimraf) [![Dependency Status](https://david-dm.org/isaacs/rimraf.svg)](https://david-dm.org/isaacs/rimraf) [![devDependency Status](https://david-dm.org/isaacs/rimraf/dev-status.svg)](https://david-dm.org/isaacs/rimraf#info=devDependencies)

The [UNIX command](http://en.wikipedia.org/wiki/Rm_(Unix)) `rm -rf` for node.

Install with `npm install rimraf`, or just drop rimraf.js somewhere.
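A minimal usage sketch of the callback API described below (the paths here are only illustrative):

```js
var rimraf = require('rimraf')

// remove a directory tree; the first argument is treated as a glob pattern
rimraf('build/**/*.tmp', function (err) {
  if (err) throw err
  console.log('temporary files removed')
})

// synchronous variant
rimraf.sync('dist')
```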
## API `rimraf(f, [opts], callback)` The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with `opts.disableGlob` (defaults to `false`). This might be handy, for instance, if you have filenames that contain globbing wildcard characters. The callback will be called with an error if there is one. Certain errors are handled for you: * Windows: `EBUSY` and `ENOTEMPTY` - rimraf will back off a maximum of `opts.maxBusyTries` times before giving up, adding 100ms of wait between each attempt. The default `maxBusyTries` is 3. * `ENOENT` - If the file doesn't exist, rimraf will return successfully, since your desired outcome is already the case. * `EMFILE` - Since `readdir` requires opening a file descriptor, it's possible to hit `EMFILE` if too many file descriptors are in use. In the sync case, there's nothing to be done for this. But in the async case, rimraf will gradually back off with timeouts up to `opts.emfileWait` ms, which defaults to 1000. ## options * unlink, chmod, stat, lstat, rmdir, readdir, unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync In order to use a custom file system library, you can override specific fs functions on the options object. If any of these functions are present on the options object, then the supplied function will be used instead of the default fs method. Sync methods are only relevant for `rimraf.sync()`, of course. For example: ```javascript var myCustomFS = require('some-custom-fs') rimraf('some-thing', myCustomFS, callback) ``` * maxBusyTries If an `EBUSY`, `ENOTEMPTY`, or `EPERM` error code is encountered on Windows systems, then rimraf will retry with a linear backoff wait of 100ms longer on each try. The default maxBusyTries is 3. Only relevant for async usage. * emfileWait If an `EMFILE` error is encountered, then rimraf will retry repeatedly with a linear backoff of 1ms longer on each try, until the timeout counter hits this max. The default limit is 1000. If you repeatedly encounter `EMFILE` errors, then consider using [graceful-fs](http://npm.im/graceful-fs) in your program. Only relevant for async usage. * glob Set to `false` to disable [glob](http://npm.im/glob) pattern matching. Set to an object to pass options to the glob module. The default glob options are `{ nosort: true, silent: true }`. Glob version 6 is used in this module. Relevant for both sync and async usage. * disableGlob Set to any non-falsey value to disable globbing entirely. (Equivalent to setting `glob: false`.) ## rimraf.sync It can remove stuff synchronously, too. But that's not so good. Use the async API. It's better. ## CLI If installed with `npm install rimraf -g` it can be used as a global command `rimraf <path> [<path> ...]` which is useful for cross platform support. ## mkdirp If you need to create a directory recursively, check out [mkdirp](https://github.com/substack/node-mkdirp). 
# eslint-utils

[![npm version](https://img.shields.io/npm/v/eslint-utils.svg)](https://www.npmjs.com/package/eslint-utils)
[![Downloads/month](https://img.shields.io/npm/dm/eslint-utils.svg)](http://www.npmtrends.com/eslint-utils)
[![Build Status](https://github.com/mysticatea/eslint-utils/workflows/CI/badge.svg)](https://github.com/mysticatea/eslint-utils/actions)
[![Coverage Status](https://codecov.io/gh/mysticatea/eslint-utils/branch/master/graph/badge.svg)](https://codecov.io/gh/mysticatea/eslint-utils)
[![Dependency Status](https://david-dm.org/mysticatea/eslint-utils.svg)](https://david-dm.org/mysticatea/eslint-utils)

## 🏁 Goal

This package provides utility functions and classes for making custom ESLint rules. For example:

- [getStaticValue](https://eslint-utils.mysticatea.dev/api/ast-utils.html#getstaticvalue) evaluates static values on the AST.
- [ReferenceTracker](https://eslint-utils.mysticatea.dev/api/scope-utils.html#referencetracker-class) checks references to the members of modules/globals, handling assignments and destructuring.

## 📖 Usage

See the [documentation](https://eslint-utils.mysticatea.dev/).

## 📰 Changelog

See [releases](https://github.com/mysticatea/eslint-utils/releases).

## ❤️ Contributing

Contributions are welcome! Please use GitHub's Issues/PRs.

### Development Tools

- `npm test` runs tests and measures coverage.
- `npm run clean` removes the coverage result of the `npm test` command.
- `npm run coverage` shows the coverage result of the last `npm test` command.
- `npm run lint` runs ESLint.
- `npm run watch` runs tests on each file change.

# function-bind

<!--
[![build status][travis-svg]][travis-url]
[![NPM version][npm-badge-svg]][npm-url]
[![Coverage Status][5]][6]
[![gemnasium Dependency Status][7]][8]
[![Dependency status][deps-svg]][deps-url]
[![Dev Dependency status][dev-deps-svg]][dev-deps-url]
-->

<!-- [![browser support][11]][12] -->

Implementation of function.prototype.bind

## Example

I mainly do this for unit tests I run on phantomjs. PhantomJS does not have Function.prototype.bind :(

```js
Function.prototype.bind = require("function-bind")
```

## Installation

`npm install function-bind`

## Contributors

- Raynos

## MIT Licensed

[travis-svg]: https://travis-ci.org/Raynos/function-bind.svg
[travis-url]: https://travis-ci.org/Raynos/function-bind
[npm-badge-svg]: https://badge.fury.io/js/function-bind.svg
[npm-url]: https://npmjs.org/package/function-bind
[5]: https://coveralls.io/repos/Raynos/function-bind/badge.png
[6]: https://coveralls.io/r/Raynos/function-bind
[7]: https://gemnasium.com/Raynos/function-bind.png
[8]: https://gemnasium.com/Raynos/function-bind
[deps-svg]: https://david-dm.org/Raynos/function-bind.svg
[deps-url]: https://david-dm.org/Raynos/function-bind
[dev-deps-svg]: https://david-dm.org/Raynos/function-bind/dev-status.svg
[dev-deps-url]: https://david-dm.org/Raynos/function-bind#info=devDependencies
[11]: https://ci.testling.com/Raynos/function-bind.png
[12]: https://ci.testling.com/Raynos/function-bind

# sprintf.js

**sprintf.js** is a complete open source JavaScript sprintf implementation for the *browser* and *node.js*.

Its prototype is simple:

    string sprintf(string format, [mixed arg1 [, mixed arg2 [, ...]]])

The placeholders in the format string are marked by `%` and are followed by one or more of these elements, in this order:

* An optional number followed by a `$` sign that selects which argument index to use for the value. If not specified, arguments will be placed in the same order as the placeholders in the input string.
* An optional `+` sign that forces the result to be preceded by a plus or minus sign on numeric values. By default, only the `-` sign is used on negative numbers.
* An optional padding specifier that says what character to use for padding (if specified). Possible values are `0` or any other character preceded by a `'` (single quote). The default is to pad with *spaces*.
* An optional `-` sign that causes sprintf to left-align the result of this placeholder. The default is to right-align the result.
* An optional number that says how many characters the result should have. If the value to be returned is shorter than this number, the result will be padded. When used with the `j` (JSON) type specifier, the padding length specifies the tab size used for indentation.
* An optional precision modifier, consisting of a `.` (dot) followed by a number, that says how many digits should be displayed for floating point numbers. When used with the `g` type specifier, it specifies the number of significant digits. When used on a string, it causes the result to be truncated.
* A type specifier that can be any of:
    * `%` — yields a literal `%` character
    * `b` — yields an integer as a binary number
    * `c` — yields an integer as the character with that ASCII value
    * `d` or `i` — yields an integer as a signed decimal number
    * `e` — yields a float using scientific notation
    * `u` — yields an integer as an unsigned decimal number
    * `f` — yields a float as is; see notes on precision above
    * `g` — yields a float as is; see notes on precision above
    * `o` — yields an integer as an octal number
    * `s` — yields a string as is
    * `x` — yields an integer as a hexadecimal number (lower-case)
    * `X` — yields an integer as a hexadecimal number (upper-case)
    * `j` — yields a JavaScript object or array as a JSON encoded string

## JavaScript `vsprintf`

`vsprintf` is the same as `sprintf` except that it accepts an array of arguments, rather than a variable number of arguments:

    vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"])

## Argument swapping

You can also swap the arguments. That is, the order of the placeholders doesn't have to match the order of the arguments. You can do that by simply indicating in the format string which arguments the placeholders refer to:

    sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants")

And, of course, you can repeat the placeholders without having to increase the number of arguments.

## Named arguments

Format strings may contain replacement fields rather than positional placeholders. Instead of referring to a certain argument, you can now refer to a certain key within an object. Replacement fields are surrounded by rounded parentheses - `(` and `)` - and begin with a keyword that refers to a key:

    var user = {
        name: "Dolly"
    }
    sprintf("Hello %(name)s", user) // Hello Dolly

Keywords in replacement fields can be optionally followed by any number of keywords or indexes:

    var users = [
        {name: "Dolly"},
        {name: "Molly"},
        {name: "Polly"}
    ]
    sprintf("Hello %(users[0].name)s, %(users[1].name)s and %(users[2].name)s", {users: users}) // Hello Dolly, Molly and Polly

Note: mixing positional and named placeholders is not (yet) supported.

## Computed values

You can pass in a function as a dynamic value and it will be invoked (with no arguments) in order to compute the value on-the-fly.
sprintf("Current timestamp: %d", Date.now) // Current timestamp: 1398005382890 sprintf("Current date and time: %s", function() { return new Date().toString() }) # AngularJS You can now use `sprintf` and `vsprintf` (also aliased as `fmt` and `vfmt` respectively) in your AngularJS projects. See `demo/`. # Installation ## Via Bower bower install sprintf ## Or as a node.js module npm install sprintf-js ### Usage var sprintf = require("sprintf-js").sprintf, vsprintf = require("sprintf-js").vsprintf sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants") vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"]) # License **sprintf.js** is licensed under the terms of the 3-clause BSD license. # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows command prompt notes ##### CMD On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Example: ```cmd set DEBUG=* & node app.js ``` ##### PowerShell (VS Code default) PowerShell uses different syntax to set environment variables. 
```cmd $env:DEBUG = "*,-not_this" ``` Example: ```cmd $env:DEBUG='app';node app.js ``` Then, run the program to be debugged as usual. npm script example: ```js "windowsDebug": "@powershell -Command $env:DEBUG='*';node app.js", ``` ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see when the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well, the "+NNNms" will show you how much time was spent between calls. <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debuggers you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. 
| | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! 
log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Extend You can simply extend debugger ```js const log = require('debug')('auth'); //creates new debug instance with extended namespace const logSign = log.extend('sign'); const logLogin = log.extend('login'); log('hello'); // auth hello logSign('hello'); //auth:sign hello logLogin('hello'); //auth:login hello ``` ## Set dynamically You can also enable debug dynamically by calling the `enable()` method : ```js let debug = require('debug'); console.log(1, debug.enabled('test')); debug.enable('test'); console.log(2, debug.enabled('test')); debug.disable(); console.log(3, debug.enabled('test')); ``` print : ``` 1 false 2 true 3 false ``` Usage : `enable(namespaces)` `namespaces` can include modes separated by a colon and wildcards. Note that calling `enable()` completely overrides previously set DEBUG variable : ``` $ DEBUG=foo node -e 'var dbg = require("debug"); dbg.enable("bar"); console.log(dbg.enabled("foo"))' => false ``` `disable()` Will disable all namespaces. The functions returns the namespaces currently enabled (and skipped). This can be useful if you want to disable debugging temporarily without knowing what was enabled to begin with. For example: ```js let debug = require('debug'); debug.enable('foo:*,-foo:bar'); let namespaces = debug.disable(); debug.enable(namespaces); ``` Note: There is no guarantee that the string will be identical to the initial enable string, but semantically they will be identical. ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. 
[[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img 
src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
# require-main-filename [![Build Status](https://travis-ci.org/yargs/require-main-filename.png)](https://travis-ci.org/yargs/require-main-filename) [![Coverage Status](https://coveralls.io/repos/yargs/require-main-filename/badge.svg?branch=master)](https://coveralls.io/r/yargs/require-main-filename?branch=master) [![NPM version](https://img.shields.io/npm/v/require-main-filename.svg)](https://www.npmjs.com/package/require-main-filename) `require.main.filename` is great for figuring out the entry point for the current application. This can be combined with a module like [pkg-conf](https://www.npmjs.com/package/pkg-conf) to, _as if by magic_, load top-level configuration. Unfortunately, `require.main.filename` sometimes fails when an application is executed with an alternative process manager, e.g., [iisnode](https://github.com/tjanczuk/iisnode). `require-main-filename` is a shim that addresses this problem. ## Usage ```js var main = require('require-main-filename')() // use main as an alternative to require.main.filename. ``` ## License ISC # yallist Yet Another Linked List There are many doubly-linked list implementations like it, but this one is mine. For when an array would be too big, and a Map can't be iterated in reverse order. [![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist) ## basic usage ```javascript var yallist = require('yallist') var myList = yallist.create([1, 2, 3]) myList.push('foo') myList.unshift('bar') // of course pop() and shift() are there, too console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo'] myList.forEach(function (k) { // walk the list head to tail }) myList.forEachReverse(function (k, index, list) { // walk the list tail to head }) var myDoubledList = myList.map(function (k) { return k + k }) // now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo'] // mapReverse is also a thing var myDoubledListReverse = myList.mapReverse(function (k) { return k + k }) // ['foofoo', 6, 4, 2, 'barbar'] var reduced = myList.reduce(function (set, entry) { set += entry return set }, 'start') console.log(reduced) // 'startfoo123bar' ``` ## api The whole API is considered "public". Functions with the same name as an Array method work more or less the same way. There's reverse versions of most things because that's the point. ### Yallist Default export, the class that holds and manages a list. Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list. The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots. ### Yallist.create(..) Alias for Yallist function. Some people like factories. #### yallist.head The first node in the list #### yallist.tail The last node in the list #### yallist.length The number of nodes in the list. (Change this at your peril. It is not magic like Array length.) #### yallist.toArray() Convert the list to an array. #### yallist.forEach(fn, [thisp]) Call a function on each item in the list. #### yallist.forEachReverse(fn, [thisp]) Call a function on each item in the list, in reverse order. #### yallist.get(n) Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array. #### yallist.getReverse(n) Get the data at position `n`, counting from the tail. 
#### yallist.map(fn, thisp) Create a new Yallist with the result of calling the function on each item. #### yallist.mapReverse(fn, thisp) Same as `map`, but in reverse. #### yallist.pop() Get the data from the list tail, and remove the tail from the list. #### yallist.push(item, ...) Insert one or more items to the tail of the list. #### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. #### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. #### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) # eslint-visitor-keys [![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys) Constants and utilities about visitor keys to traverse AST. ## 💿 Installation Use [npm] to install. ```bash $ npm install eslint-visitor-keys ``` ### Requirements - [Node.js] 4.0.0 or later. ## 📖 Usage ```js const evk = require("eslint-visitor-keys") ``` ### evk.KEYS > type: `{ [type: string]: string[] | undefined }` Visitor keys. This keys are frozen. This is an object. Keys are the type of [ESTree] nodes. Their values are an array of property names which have child nodes. For example: ``` console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"] ``` ### evk.getKeys(node) > type: `(node: object) => string[]` Get the visitor keys of a given AST node. This is similar to `Object.keys(node)` of ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`. 
This will be used to traverse unknown nodes. For example: ``` const node = { type: "AssignmentExpression", left: { type: "Identifier", name: "foo" }, right: { type: "Literal", value: 0 } } console.log(evk.getKeys(node)) // → ["type", "left", "right"] ``` ### evk.unionWith(additionalKeys) > type: `(additionalKeys: object) => { [type: string]: string[] | undefined }` Make the union set with `evk.KEYS` and the given keys. - The order of keys is, `additionalKeys` is at first, then `evk.KEYS` is concatenated after that. - It removes duplicated keys as keeping the first one. For example: ``` console.log(evk.unionWith({ MethodDefinition: ["decorators"] })) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... } ``` ## 📰 Change log See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases). ## 🍻 Contributing Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/). ### Development commands - `npm test` runs tests and measures code coverage. - `npm run lint` checks source codes with ESLint. - `npm run coverage` opens the code coverage report of the previous test with your default browser. - `npm run release` publishes this package to [npm] registory. [npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree [![NPM version][npm-image]][npm-url] [![build status][travis-image]][travis-url] [![Test coverage][coveralls-image]][coveralls-url] [![Downloads][downloads-image]][downloads-url] [![Join the chat at https://gitter.im/eslint/doctrine](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/eslint/doctrine?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) # Doctrine Doctrine is a [JSDoc](http://usejsdoc.org) parser that parses documentation comments from JavaScript (you need to pass in the comment, not a whole JavaScript file). ## Installation You can install Doctrine using [npm](https://npmjs.com): ``` $ npm install doctrine --save-dev ``` Doctrine can also be used in web browsers using [Browserify](http://browserify.org). ## Usage Require doctrine inside of your JavaScript: ```js var doctrine = require("doctrine"); ``` ### parse() The primary method is `parse()`, which accepts two arguments: the JSDoc comment to parse and an optional options object. The available options are: * `unwrap` - set to `true` to delete the leading `/**`, any `*` that begins a line, and the trailing `*/` from the source text. Default: `false`. * `tags` - an array of tags to return. When specified, Doctrine returns only tags in this array. For example, if `tags` is `["param"]`, then only `@param` tags will be returned. Default: `null`. * `recoverable` - set to `true` to keep parsing even when syntax errors occur. Default: `false`. * `sloppy` - set to `true` to allow optional parameters to be specified in brackets (`@param {string} [foo]`). Default: `false`. * `lineNumbers` - set to `true` to add `lineNumber` to each node, specifying the line on which the node is found in the source. Default: `false`. * `range` - set to `true` to add `range` to each node, specifying the start and end index of the node in the original comment. Default: `false`. 
Here's a simple example: ```js var ast = doctrine.parse( [ "/**", " * This function comment is parsed by doctrine", " * @param {{ok:String}} userName", "*/" ].join('\n'), { unwrap: true }); ``` This example returns the following AST: { "description": "This function comment is parsed by doctrine", "tags": [ { "title": "param", "description": null, "type": { "type": "RecordType", "fields": [ { "type": "FieldType", "key": "ok", "value": { "type": "NameExpression", "name": "String" } } ] }, "name": "userName" } ] } See the [demo page](http://eslint.org/doctrine/demo/) more detail. ## Team These folks keep the project moving and are resources for help: * Nicholas C. Zakas ([@nzakas](https://github.com/nzakas)) - project lead * Yusuke Suzuki ([@constellation](https://github.com/constellation)) - reviewer ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/doctrine/issues). ## Frequently Asked Questions ### Can I pass a whole JavaScript file to Doctrine? No. Doctrine can only parse JSDoc comments, so you'll need to pass just the JSDoc comment to Doctrine in order to work. ### License #### doctrine Copyright JS Foundation and other contributors, https://js.foundation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. #### esprima some of functions is derived from esprima Copyright (C) 2012, 2011 [Ariya Hidayat](http://ariya.ofilabs.com/about) (twitter: [@ariyahidayat](http://twitter.com/ariyahidayat)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
#### closure-compiler some of extensions is derived from closure-compiler Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ ### Where to ask for help? Join our [Chatroom](https://gitter.im/eslint/doctrine) [npm-image]: https://img.shields.io/npm/v/doctrine.svg?style=flat-square [npm-url]: https://www.npmjs.com/package/doctrine [travis-image]: https://img.shields.io/travis/eslint/doctrine/master.svg?style=flat-square [travis-url]: https://travis-ci.org/eslint/doctrine [coveralls-image]: https://img.shields.io/coveralls/eslint/doctrine/master.svg?style=flat-square [coveralls-url]: https://coveralls.io/r/eslint/doctrine?branch=master [downloads-image]: http://img.shields.io/npm/dm/doctrine.svg?style=flat-square [downloads-url]: https://www.npmjs.com/package/doctrine # Acorn-JSX [![Build Status](https://travis-ci.org/acornjs/acorn-jsx.svg?branch=master)](https://travis-ci.org/acornjs/acorn-jsx) [![NPM version](https://img.shields.io/npm/v/acorn-jsx.svg)](https://www.npmjs.org/package/acorn-jsx) This is plugin for [Acorn](http://marijnhaverbeke.nl/acorn/) - a tiny, fast JavaScript parser, written completely in JavaScript. It was created as an experimental alternative, faster [React.js JSX](http://facebook.github.io/react/docs/jsx-in-depth.html) parser. Later, it replaced the [official parser](https://github.com/facebookarchive/esprima) and these days is used by many prominent development tools. ## Transpiler Please note that this tool only parses source code to JSX AST, which is useful for various language tools and services. If you want to transpile your code to regular ES5-compliant JavaScript with source map, check out [Babel](https://babeljs.io/) and [Buble](https://buble.surge.sh/) transpilers which use `acorn-jsx` under the hood. ## Usage Requiring this module provides you with an Acorn plugin that you can use like this: ```javascript var acorn = require("acorn"); var jsx = require("acorn-jsx"); acorn.Parser.extend(jsx()).parse("my(<jsx/>, 'code');"); ``` Note that official spec doesn't support mix of XML namespaces and object-style access in tag names (#27) like in `<namespace:Object.Property />`, so it was deprecated in `acorn-jsx@3.0`. If you still want to opt-in to support of such constructions, you can pass the following option: ```javascript acorn.Parser.extend(jsx({ allowNamespacedObjects: true })) ``` Also, since most apps use pure React transformer, a new option was introduced that allows to prohibit namespaces completely: ```javascript acorn.Parser.extend(jsx({ allowNamespaces: false })) ``` Note that by default `allowNamespaces` is enabled for spec compliancy. ## License This plugin is issued under the [MIT license](./LICENSE). JS-YAML - YAML 1.2 parser / writer for JavaScript ================================================= [![Build Status](https://travis-ci.org/nodeca/js-yaml.svg?branch=master)](https://travis-ci.org/nodeca/js-yaml) [![NPM version](https://img.shields.io/npm/v/js-yaml.svg)](https://www.npmjs.org/package/js-yaml) __[Online Demo](http://nodeca.github.com/js-yaml/)__ This is an implementation of [YAML](http://yaml.org/), a human-friendly data serialization language. Started as [PyYAML](http://pyyaml.org/) port, it was completely rewritten from scratch. Now it's very fast, and supports 1.2 spec. 
Installation ------------ ### YAML module for node.js ``` npm install js-yaml ``` ### CLI executable If you want to inspect your YAML files from CLI, install js-yaml globally: ``` npm install -g js-yaml ``` #### Usage ``` usage: js-yaml [-h] [-v] [-c] [-t] file Positional arguments: file File with YAML document(s) Optional arguments: -h, --help Show this help message and exit. -v, --version Show program's version number and exit. -c, --compact Display errors in compact mode -t, --trace Show stack trace on error ``` ### Bundled YAML library for browsers ``` html <!-- esprima required only for !!js/function --> <script src="esprima.js"></script> <script src="js-yaml.min.js"></script> <script type="text/javascript"> var doc = jsyaml.load('greeting: hello\nname: world'); </script> ``` Browser support was done mostly for the online demo. If you find any errors - feel free to send pull requests with fixes. Also note, that IE and other old browsers needs [es5-shims](https://github.com/kriskowal/es5-shim) to operate. Notes: 1. We have no resources to support browserified version. Don't expect it to be well tested. Don't expect fast fixes if something goes wrong there. 2. `!!js/function` in browser bundle will not work by default. If you really need it - load `esprima` parser first (via amd or directly). 3. `!!bin` in browser will return `Array`, because browsers do not support node.js `Buffer` and adding Buffer shims is completely useless on practice. API --- Here we cover the most 'useful' methods. If you need advanced details (creating your own tags), see [wiki](https://github.com/nodeca/js-yaml/wiki) and [examples](https://github.com/nodeca/js-yaml/tree/master/examples) for more info. ``` javascript const yaml = require('js-yaml'); const fs = require('fs'); // Get document, or throw exception on error try { const doc = yaml.safeLoad(fs.readFileSync('/home/ixti/example.yml', 'utf8')); console.log(doc); } catch (e) { console.log(e); } ``` ### safeLoad (string [ , options ]) **Recommended loading way.** Parses `string` as single YAML document. Returns either a plain object, a string or `undefined`, or throws `YAMLException` on error. By default, does not support regexps, functions and undefined. This method is safe for untrusted data. options: - `filename` _(default: null)_ - string to be used as a file path in error/warning messages. - `onWarning` _(default: null)_ - function to call on warning messages. Loader will call this function with an instance of `YAMLException` for each warning. - `schema` _(default: `DEFAULT_SAFE_SCHEMA`)_ - specifies a schema to use. - `FAILSAFE_SCHEMA` - only strings, arrays and plain objects: http://www.yaml.org/spec/1.2/spec.html#id2802346 - `JSON_SCHEMA` - all JSON-supported types: http://www.yaml.org/spec/1.2/spec.html#id2803231 - `CORE_SCHEMA` - same as `JSON_SCHEMA`: http://www.yaml.org/spec/1.2/spec.html#id2804923 - `DEFAULT_SAFE_SCHEMA` - all supported YAML types, without unsafe ones (`!!js/undefined`, `!!js/regexp` and `!!js/function`): http://yaml.org/type/ - `DEFAULT_FULL_SCHEMA` - all supported YAML types. - `json` _(default: false)_ - compatibility with JSON.parse behaviour. If true, then duplicate keys in a mapping will override values rather than throwing an error. NOTE: This function **does not** understand multi-document sources, it throws exception on those. NOTE: JS-YAML **does not** support schema-specific tag resolution restrictions. So, the JSON schema is not as strictly defined in the YAML specification. 
It allows numbers in any notation, allows `Null` and `NULL` as `null`, and so on. The core schema also has no such restrictions. It allows binary notation for integers.

### load (string [ , options ])

**Use with care with untrusted sources**. The same as `safeLoad()` but uses `DEFAULT_FULL_SCHEMA` by default - adds some JavaScript-specific types: `!!js/function`, `!!js/regexp` and `!!js/undefined`. For untrusted sources, you must additionally validate object structure to avoid injections:

``` javascript
const untrusted_code = '"toString": !<tag:yaml.org,2002:js/function> "function (){very_evil_thing();}"';

// I'm just converting that string, what could possibly go wrong?
require('js-yaml').load(untrusted_code) + ''
```

### safeLoadAll (string [, iterator] [, options ])

Same as `safeLoad()`, but understands multi-document sources. Applies `iterator` to each document if specified, or returns an array of documents.

``` javascript
const yaml = require('js-yaml');

yaml.safeLoadAll(data, function (doc) {
  console.log(doc);
});
```

### loadAll (string [, iterator] [ , options ])

Same as `safeLoadAll()` but uses `DEFAULT_FULL_SCHEMA` by default.

### safeDump (object [ , options ])

Serializes `object` as a YAML document. Uses `DEFAULT_SAFE_SCHEMA`, so it will throw an exception if you try to dump regexps or functions. However, you can disable exceptions by setting the `skipInvalid` option to `true`.

options:

- `indent` _(default: 2)_ - indentation width to use (in spaces).
- `noArrayIndent` _(default: false)_ - when true, will not add an indentation level to array elements.
- `skipInvalid` _(default: false)_ - do not throw on invalid types (like function in the safe schema) and skip pairs and single values with such types.
- `flowLevel` _(default: -1)_ - specifies the level of nesting at which to switch from block to flow style for collections. -1 means block style everywhere.
- `styles` - "tag" => "style" map. Each tag may have its own set of styles.
- `schema` _(default: `DEFAULT_SAFE_SCHEMA`)_ - specifies a schema to use.
- `sortKeys` _(default: `false`)_ - if `true`, sort keys when dumping YAML. If a function, use the function to sort the keys.
- `lineWidth` _(default: `80`)_ - set max line width.
- `noRefs` _(default: `false`)_ - if `true`, don't convert duplicate objects into references.
- `noCompatMode` _(default: `false`)_ - if `true`, don't try to be compatible with older YAML versions. Currently: don't quote "yes", "no" and so on, as required for YAML 1.1.
- `condenseFlow` _(default: `false`)_ - if `true`, flow sequences will be condensed by omitting the space between `a, b` (e.g. `'[a,b]'`) and by omitting the space between `key: value` and quoting the key (e.g. `'{"a":b}'`). Can be useful when using YAML for pretty URL query params, as spaces are %-encoded.

The following table shows the available styles (e.g. "canonical", "binary"...) for each tag (e.g. !!null, !!int ...).
Yaml output is shown on the right side after `=>` (default setting) or `->`: ``` none !!null "canonical" -> "~" "lowercase" => "null" "uppercase" -> "NULL" "camelcase" -> "Null" !!int "binary" -> "0b1", "0b101010", "0b1110001111010" "octal" -> "01", "052", "016172" "decimal" => "1", "42", "7290" "hexadecimal" -> "0x1", "0x2A", "0x1C7A" !!bool "lowercase" => "true", "false" "uppercase" -> "TRUE", "FALSE" "camelcase" -> "True", "False" !!float "lowercase" => ".nan", '.inf' "uppercase" -> ".NAN", '.INF' "camelcase" -> ".NaN", '.Inf' ``` Example: ``` javascript safeDump (object, { 'styles': { '!!null': 'canonical' // dump null as ~ }, 'sortKeys': true // sort object keys }); ``` ### dump (object [ , options ]) Same as `safeDump()` but without limits (uses `DEFAULT_FULL_SCHEMA` by default). Supported YAML types -------------------- The list of standard YAML tags and corresponding JavaScipt types. See also [YAML tag discussion](http://pyyaml.org/wiki/YAMLTagDiscussion) and [YAML types repository](http://yaml.org/type/). ``` !!null '' # null !!bool 'yes' # bool !!int '3...' # number !!float '3.14...' # number !!binary '...base64...' # buffer !!timestamp 'YYYY-...' # date !!omap [ ... ] # array of key-value pairs !!pairs [ ... ] # array or array pairs !!set { ... } # array of objects with given keys and null values !!str '...' # string !!seq [ ... ] # array !!map { ... } # object ``` **JavaScript-specific tags** ``` !!js/regexp /pattern/gim # RegExp !!js/undefined '' # Undefined !!js/function 'function () {...}' # Function ``` Caveats ------- Note, that you use arrays or objects as key in JS-YAML. JS does not allow objects or arrays as keys, and stringifies (by calling `toString()` method) them at the moment of adding them. ``` yaml --- ? [ foo, bar ] : - baz ? { foo: bar } : - baz - baz ``` ``` javascript { "foo,bar": ["baz"], "[object Object]": ["baz", "baz"] } ``` Also, reading of properties on implicit block mapping keys is not supported yet. So, the following YAML document cannot be loaded. ``` yaml &anchor foo: foo: bar *anchor: duplicate key baz: bat *anchor: duplicate key ``` js-yaml for enterprise ---------------------- Available as part of the Tidelift Subscription The maintainers of js-yaml and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. [Learn more.](https://tidelift.com/subscription/pkg/npm-js-yaml?utm_source=npm-js-yaml&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) # word-wrap [![NPM version](https://img.shields.io/npm/v/word-wrap.svg?style=flat)](https://www.npmjs.com/package/word-wrap) [![NPM monthly downloads](https://img.shields.io/npm/dm/word-wrap.svg?style=flat)](https://npmjs.org/package/word-wrap) [![NPM total downloads](https://img.shields.io/npm/dt/word-wrap.svg?style=flat)](https://npmjs.org/package/word-wrap) [![Linux Build Status](https://img.shields.io/travis/jonschlinkert/word-wrap.svg?style=flat&label=Travis)](https://travis-ci.org/jonschlinkert/word-wrap) > Wrap words to a specified length. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save word-wrap ``` ## Usage ```js var wrap = require('word-wrap'); wrap('Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.'); ``` Results in: ``` Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. ``` ## Options ![image](https://cloud.githubusercontent.com/assets/383994/6543728/7a381c08-c4f6-11e4-8b7d-b6ba197569c9.png) ### options.width Type: `Number` Default: `50` The width of the text before wrapping to a new line. **Example:** ```js wrap(str, {width: 60}); ``` ### options.indent Type: `String` Default: `` (two spaces) The string to use at the beginning of each line. **Example:** ```js wrap(str, {indent: ' '}); ``` ### options.newline Type: `String` Default: `\n` The string to use at the end of each line. **Example:** ```js wrap(str, {newline: '\n\n'}); ``` ### options.escape Type: `function` Default: `function(str){return str;}` An escape function to run on each line after splitting them. **Example:** ```js var xmlescape = require('xml-escape'); wrap(str, { escape: function(string){ return xmlescape(string); } }); ``` ### options.trim Type: `Boolean` Default: `false` Trim trailing whitespace from the returned string. This option is included since `.trim()` would also strip the leading indentation from the first line. **Example:** ```js wrap(str, {trim: true}); ``` ### options.cut Type: `Boolean` Default: `false` Break a word between any two letters when the word is longer than the specified width. **Example:** ```js wrap(str, {cut: true}); ``` ## About ### Related projects * [common-words](https://www.npmjs.com/package/common-words): Updated list (JSON) of the 100 most common words in the English language. Useful for… [more](https://github.com/jonschlinkert/common-words) | [homepage](https://github.com/jonschlinkert/common-words "Updated list (JSON) of the 100 most common words in the English language. Useful for excluding these words from arrays.") * [shuffle-words](https://www.npmjs.com/package/shuffle-words): Shuffle the words in a string and optionally the letters in each word using the… [more](https://github.com/jonschlinkert/shuffle-words) | [homepage](https://github.com/jonschlinkert/shuffle-words "Shuffle the words in a string and optionally the letters in each word using the Fisher-Yates algorithm. Useful for creating test fixtures, benchmarking samples, etc.") * [unique-words](https://www.npmjs.com/package/unique-words): Return the unique words in a string or array. | [homepage](https://github.com/jonschlinkert/unique-words "Return the unique words in a string or array.") * [wordcount](https://www.npmjs.com/package/wordcount): Count the words in a string. Support for english, CJK and Cyrillic. | [homepage](https://github.com/jonschlinkert/wordcount "Count the words in a string. Support for english, CJK and Cyrillic.") ### Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). 
### Contributors | **Commits** | **Contributor** | | --- | --- | | 43 | [jonschlinkert](https://github.com/jonschlinkert) | | 2 | [lordvlad](https://github.com/lordvlad) | | 2 | [hildjj](https://github.com/hildjj) | | 1 | [danilosampaio](https://github.com/danilosampaio) | | 1 | [2fd](https://github.com/2fd) | | 1 | [toddself](https://github.com/toddself) | | 1 | [wolfgang42](https://github.com/wolfgang42) | | 1 | [zachhale](https://github.com/zachhale) | ### Building docs _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` ### Running tests Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` ### Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](https://twitter.com/jonschlinkert) ### License Copyright © 2017, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.6.0, on June 02, 2017._ # tr46.js > An implementation of the [Unicode TR46 specification](http://unicode.org/reports/tr46/). ## Installation [Node.js](http://nodejs.org) `>= 6` is required. To install, type this at the command line: ```shell npm install tr46 ``` ## API ### `toASCII(domainName[, options])` Converts a string of Unicode symbols to a case-folded Punycode string of ASCII symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`processingOption`](#processingOption) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) * [`verifyDNSLength`](#verifyDNSLength) ### `toUnicode(domainName[, options])` Converts a case-folded Punycode string of ASCII symbols to a string of Unicode symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) ## Options ### `checkBidi` Type: `Boolean` Default value: `false` When set to `true`, any bi-directional text within the input will be checked for validation. ### `checkHyphens` Type: `Boolean` Default value: `false` When set to `true`, the positions of any hyphen characters within the input will be checked for validation. ### `checkJoiners` Type: `Boolean` Default value: `false` When set to `true`, any word joiner characters within the input will be checked for validation. ### `processingOption` Type: `String` Default value: `"nontransitional"` When set to `"transitional"`, symbols within the input will be validated according to the older IDNA2003 protocol. When set to `"nontransitional"`, the current IDNA2008 protocol will be used. ### `useSTD3ASCIIRules` Type: `Boolean` Default value: `false` When set to `true`, input will be validated according to [STD3 Rules](http://unicode.org/reports/tr46/#STD3_Rules). ### `verifyDNSLength` Type: `Boolean` Default value: `false` When set to `true`, the length of each DNS label within the input will be checked for validation. 
# is-glob [![NPM version](https://img.shields.io/npm/v/is-glob.svg?style=flat)](https://www.npmjs.com/package/is-glob) [![NPM monthly downloads](https://img.shields.io/npm/dm/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![NPM total downloads](https://img.shields.io/npm/dt/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![Linux Build Status](https://img.shields.io/travis/micromatch/is-glob.svg?style=flat&label=Travis)](https://travis-ci.org/micromatch/is-glob) [![Windows Build Status](https://img.shields.io/appveyor/ci/micromatch/is-glob.svg?style=flat&label=AppVeyor)](https://ci.appveyor.com/project/micromatch/is-glob) > Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a better user experience. Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-glob ``` You might also be interested in [is-valid-glob](https://github.com/jonschlinkert/is-valid-glob) and [has-glob](https://github.com/jonschlinkert/has-glob). ## Usage ```js var isGlob = require('is-glob'); ``` ### Default behavior **True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js'); isGlob('*.js'); isGlob('**/abc.js'); isGlob('abc/*.js'); isGlob('abc/(aaa|bbb).js'); isGlob('abc/[a-z].js'); isGlob('abc/{a,b}.js'); //=> true ``` Extglobs ```js isGlob('abc/@(a).js'); isGlob('abc/!(a).js'); isGlob('abc/+(a).js'); isGlob('abc/*(a).js'); isGlob('abc/?(a).js'); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('abc/\\@(a).js'); isGlob('abc/\\!(a).js'); isGlob('abc/\\+(a).js'); isGlob('abc/\\*(a).js'); isGlob('abc/\\?(a).js'); isGlob('\\!foo.js'); isGlob('\\*.js'); isGlob('\\*\\*/abc.js'); isGlob('abc/\\*.js'); isGlob('abc/\\(aaa|bbb).js'); isGlob('abc/\\[a-z].js'); isGlob('abc/\\{a,b}.js'); //=> false ``` Patterns that do not have glob patterns return `false`: ```js isGlob('abc.js'); isGlob('abc/def/ghi.js'); isGlob('foo.js'); isGlob('abc/@.js'); isGlob('abc/+.js'); isGlob('abc/?.js'); isGlob(); isGlob(null); //=> false ``` Arrays are also `false` (If you want to check if an array has a glob pattern, use [has-glob](https://github.com/jonschlinkert/has-glob)): ```js isGlob(['**/*.js']); isGlob(['foo.js']); //=> false ``` ### Option strict When `options.strict === false` the behavior is less strict in determining if a pattern is a glob. Meaning that some patterns that would return `false` may return `true`. This is done so that matching libraries like [micromatch](https://github.com/micromatch/micromatch) have a chance at determining if the pattern is a glob or not. 
**True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js', {strict: false}); isGlob('*.js', {strict: false}); isGlob('**/abc.js', {strict: false}); isGlob('abc/*.js', {strict: false}); isGlob('abc/(aaa|bbb).js', {strict: false}); isGlob('abc/[a-z].js', {strict: false}); isGlob('abc/{a,b}.js', {strict: false}); //=> true ``` Extglobs ```js isGlob('abc/@(a).js', {strict: false}); isGlob('abc/!(a).js', {strict: false}); isGlob('abc/+(a).js', {strict: false}); isGlob('abc/*(a).js', {strict: false}); isGlob('abc/?(a).js', {strict: false}); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('\\!foo.js', {strict: false}); isGlob('\\*.js', {strict: false}); isGlob('\\*\\*/abc.js', {strict: false}); isGlob('abc/\\*.js', {strict: false}); isGlob('abc/\\(aaa|bbb).js', {strict: false}); isGlob('abc/\\[a-z].js', {strict: false}); isGlob('abc/\\{a,b}.js', {strict: false}); //=> false ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [assemble](https://www.npmjs.com/package/assemble): Get the rocks out of your socks! Assemble makes you fast at creating web projects… [more](https://github.com/assemble/assemble) | [homepage](https://github.com/assemble/assemble "Get the rocks out of your socks! Assemble makes you fast at creating web projects. Assemble is used by thousands of projects for rapid prototyping, creating themes, scaffolds, boilerplates, e-books, UI components, API documentation, blogs, building websit") * [base](https://www.npmjs.com/package/base): Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks | [homepage](https://github.com/node-base/base "Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks") * [update](https://www.npmjs.com/package/update): Be scalable! Update is a new, open source developer framework and CLI for automating updates… [more](https://github.com/update/update) | [homepage](https://github.com/update/update "Be scalable! Update is a new, open source developer framework and CLI for automating updates of any kind in code projects.") * [verb](https://www.npmjs.com/package/verb): Documentation generator for GitHub projects. Verb is extremely powerful, easy to use, and is used… [more](https://github.com/verbose/verb) | [homepage](https://github.com/verbose/verb "Documentation generator for GitHub projects. 
Verb is extremely powerful, easy to use, and is used on hundreds of projects of all sizes to generate everything from API docs to readmes.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 47 | [jonschlinkert](https://github.com/jonschlinkert) | | 5 | [doowb](https://github.com/doowb) | | 1 | [phated](https://github.com/phated) | | 1 | [danhper](https://github.com/danhper) | | 1 | [paulmillr](https://github.com/paulmillr) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on March 27, 2019._ near Smart Contract ================== A [smart contract] written in [AssemblyScript] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install [Node.js] ≥ 12 Exploring The Code ================== 1. The main smart contract code lives in `assembly/index.ts`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard AssemblyScript tests using [as-pect]. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [AssemblyScript]: https://www.assemblyscript.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [as-pect]: https://www.npmjs.com/package/@as-pect/cli # Punycode.js [![Build status](https://travis-ci.org/bestiejs/punycode.js.svg?branch=master)](https://travis-ci.org/bestiejs/punycode.js) [![Code coverage status](http://img.shields.io/codecov/c/github/bestiejs/punycode.js.svg)](https://codecov.io/gh/bestiejs/punycode.js) [![Dependency status](https://gemnasium.com/bestiejs/punycode.js.svg)](https://gemnasium.com/bestiejs/punycode.js) Punycode.js is a robust Punycode converter that fully complies to [RFC 3492](https://tools.ietf.org/html/rfc3492) and [RFC 5891](https://tools.ietf.org/html/rfc5891). This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm: * [The C example code from RFC 3492](https://tools.ietf.org/html/rfc3492#appendix-C) * [`punycode.c` by _Markus W. Scherer_ (IBM)](http://opensource.apple.com/source/ICU/ICU-400.42/icuSources/common/punycode.c) * [`punycode.c` by _Ben Noordhuis_](https://github.com/bnoordhuis/punycode/blob/master/punycode.c) * [JavaScript implementation by _some_](http://stackoverflow.com/questions/183485/can-anyone-recommend-a-good-free-javascript-for-punycode-to-unicode-conversion/301287#301287) * [`punycode.js` by _Ben Noordhuis_](https://github.com/joyent/node/blob/426298c8c1c0d5b5224ac3658c41e7c2a3fe9377/lib/punycode.js) (note: [not fully compliant](https://github.com/joyent/node/issues/2072)) This project was [bundled](https://github.com/joyent/node/blob/master/lib/punycode.js) with Node.js from [v0.6.2+](https://github.com/joyent/node/compare/975f1930b1...61e796decc) until [v7](https://github.com/nodejs/node/pull/7941) (soft-deprecated). The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. 
For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see [v1.4.1](https://github.com/bestiejs/punycode.js/releases/tag/v1.4.1). ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install punycode --save ``` In [Node.js](https://nodejs.org/): ```js const punycode = require('punycode'); ``` ## API ### `punycode.decode(string)` Converts a Punycode string of ASCII symbols to a string of Unicode symbols. ```js // decode domain name parts punycode.decode('maana-pta'); // 'mañana' punycode.decode('--dqo34k'); // '☃-⌘' ``` ### `punycode.encode(string)` Converts a string of Unicode symbols to a Punycode string of ASCII symbols. ```js // encode domain name parts punycode.encode('mañana'); // 'maana-pta' punycode.encode('☃-⌘'); // '--dqo34k' ``` ### `punycode.toUnicode(input)` Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode. ```js // decode domain names punycode.toUnicode('xn--maana-pta.com'); // → 'mañana.com' punycode.toUnicode('xn----dqo34k.com'); // → '☃-⌘.com' // decode email addresses punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'); // → 'джумла@джpумлатест.bрфa' ``` ### `punycode.toASCII(input)` Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII. ```js // encode domain names punycode.toASCII('mañana.com'); // → 'xn--maana-pta.com' punycode.toASCII('☃-⌘.com'); // → 'xn----dqo34k.com' // encode email addresses punycode.toASCII('джумла@джpумлатест.bрфa'); // → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq' ``` ### `punycode.ucs2` #### `punycode.ucs2.decode(string)` Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16. ```js punycode.ucs2.decode('abc'); // → [0x61, 0x62, 0x63] // surrogate pair for U+1D306 TETRAGRAM FOR CENTRE: punycode.ucs2.decode('\uD834\uDF06'); // → [0x1D306] ``` #### `punycode.ucs2.encode(codePoints)` Creates a string based on an array of numeric code point values. ```js punycode.ucs2.encode([0x61, 0x62, 0x63]); // → 'abc' punycode.ucs2.encode([0x1D306]); // → '\uD834\uDF06' ``` ### `punycode.version` A string representing the current Punycode.js version number. ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License Punycode.js is available under the [MIT](https://mths.be/mit) license. [![build status](https://secure.travis-ci.org/dankogai/js-base64.png)](http://travis-ci.org/dankogai/js-base64) # base64.js Yet another [Base64] transcoder. [Base64]: http://en.wikipedia.org/wiki/Base64 ## HEADS UP In version 3.0 `js-base64` switch to ES2015 module so it is no longer compatible with legacy browsers like IE (see below). And since version 3.3 it is written in TypeScript. Now `base64.mjs` is compiled from `base64.ts` then `base64.js` is generated from `base64.mjs`. 
## Install

```shell
$ npm install --save js-base64
```

## Usage

### In Browser

Locally…

```html
<script src="base64.js"></script>
```

… or directly from CDN, in which case you don't even need to install.

```html
<script src="https://cdn.jsdelivr.net/npm/js-base64@3.6.1/base64.min.js"></script>
```

This good old way loads `Base64` in the global context (`window`). Though `Base64.noConflict()` is made available, you should consider using an ES6 module to avoid tainting `window`.

### As an ES6 Module

locally…

```javascript
import { Base64 } from 'js-base64';
```

```javascript
// or if you prefer no Base64 namespace
import { encode, decode } from 'js-base64';
```

or even remotely.

```html
<script type="module">
// note jsdelivr.net does not automatically minify .mjs
import { Base64 } from 'https://cdn.jsdelivr.net/npm/js-base64@3.6.1/base64.mjs';
</script>
```

```html
<script type="module">
// or if you prefer no Base64 namespace
import { encode, decode } from 'https://cdn.jsdelivr.net/npm/js-base64@3.6.1/base64.mjs';
</script>
```

### node.js (commonjs)

```javascript
const {Base64} = require('js-base64');
```

Unlike the case above, the global context is no longer modified. You can also use [esm] to `import` instead of `require`.

[esm]: https://github.com/standard-things/esm

```javascript
require = require('esm')(module);
import {Base64} from 'js-base64';
```

## SYNOPSIS

```javascript
let latin = 'dankogai';
let utf8  = '小飼弾';
let u8s   = new Uint8Array([100,97,110,107,111,103,97,105]);
Base64.encode(latin);             // ZGFua29nYWk=
Base64.encode(latin, true);       // ZGFua29nYWk skips padding
Base64.encodeURI(latin);          // ZGFua29nYWk
Base64.btoa(latin);               // ZGFua29nYWk=
Base64.btoa(utf8);                // raises exception
Base64.fromUint8Array(u8s);       // ZGFua29nYWk=
Base64.fromUint8Array(u8s, true); // ZGFua29nYWk which is URI safe
Base64.encode(utf8);              // 5bCP6aO85by+
Base64.encode(utf8, true);        // 5bCP6aO85by-
Base64.encodeURI(utf8);           // 5bCP6aO85by-
```

```javascript
Base64.decode(      'ZGFua29nYWk=');// dankogai
Base64.decode(      'ZGFua29nYWk'); // dankogai
Base64.atob(        'ZGFua29nYWk=');// dankogai
Base64.atob(        '5bCP6aO85by+');// a garbled binary string, not '小飼弾'
Base64.toUint8Array('ZGFua29nYWk=');// u8s above
Base64.decode(      '5bCP6aO85by+');// 小飼弾
// note .decodeURI() is unnecessary since it accepts both flavors
Base64.decode(      '5bCP6aO85by-');// 小飼弾
```

```javascript
Base64.isValid(0);      // false: 0 is not a string
Base64.isValid('');     // true: a valid Base64-encoded empty byte
Base64.isValid('ZA=='); // true: a valid Base64-encoded 'd'
Base64.isValid('Z A='); // true: whitespaces are okay
Base64.isValid('ZA');   // true: padding ='s can be omitted
Base64.isValid('++');   // true: can be non URL-safe
Base64.isValid('--');   // true: or URL-safe
Base64.isValid('+-');   // false: can't mix both
```

### Built-in Extensions

By default `Base64` leaves built-in prototypes untouched. But you can extend them as below.
```javascript
// you have to explicitly extend String.prototype
Base64.extendString();
// once extended, you can do the following
'dankogai'.toBase64();        // ZGFua29nYWk=
'小飼弾'.toBase64();          // 5bCP6aO85by+
'小飼弾'.toBase64(true);      // 5bCP6aO85by-
'小飼弾'.toBase64URI();       // 5bCP6aO85by- an alias of .toBase64(true)
'小飼弾'.toBase64URL();       // 5bCP6aO85by- an alias of .toBase64URI()
'ZGFua29nYWk='.fromBase64();  // dankogai
'5bCP6aO85by+'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.toUint8Array();// u8s above
```

```javascript
// you have to explicitly extend Uint8Array.prototype
Base64.extendUint8Array();
// once extended, you can do the following
u8s.toBase64();    // 'ZGFua29nYWk='
u8s.toBase64URI(); // 'ZGFua29nYWk'
u8s.toBase64URL(); // 'ZGFua29nYWk' an alias of .toBase64URI()
```

```javascript
// extend all at once
Base64.extendBuiltins();
```

## `.decode()` vs `.atob` (and `.encode()` vs `btoa()`)

Suppose you have:

```
var pngBase64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=";
```

Which is a Base64-encoded 1x1 transparent PNG, **DO NOT USE** `Base64.decode(pngBase64)`.  Use `Base64.atob(pngBase64)` instead.  `Base64.decode()` decodes to a UTF-8 string while `Base64.atob()` decodes to bytes, which is compatible with the browser built-in `atob()` (which is absent in node.js).  The same rule applies to the opposite direction. Or even better, use `Base64.toUint8Array(pngBase64)`.

### If you really, really need an ES5 version

You can transpile it to an ES5 version that runs on IE11. Do the following in your shell.

```shell
$ make base64.es5.js
```

# inflight

Add callbacks to requests in flight to avoid async duplication

## USAGE

```javascript
var inflight = require('inflight')

// some request that does some stuff
function req(key, callback) {
  // key is any random string. like a url or filename or whatever.
  //
  // will return either a falsey value, indicating that the
  // request for this key is already in flight, or a new callback
  // which when called will call all callbacks passed to inflight
  // with the same key
  callback = inflight(key, callback)

  // If we got a falsey value back, then there's already a req going
  if (!callback) return

  // this is where you'd fetch the url or whatever
  // callback is also once()-ified, so it can safely be assigned
  // to multiple events etc. First call wins.
  setTimeout(function() {
    callback(null, key)
  }, 100)
}

// only assigns a single setTimeout
// when it dings, all cbs get called
req('foo', cb1)
req('foo', cb2)
req('foo', cb3)
req('foo', cb4)
```

# Regular Expression Tokenizer

Tokenizes strings that represent regular expressions.

[![Build Status](https://secure.travis-ci.org/fent/ret.js.svg)](http://travis-ci.org/fent/ret.js) [![Dependency Status](https://david-dm.org/fent/ret.js.svg)](https://david-dm.org/fent/ret.js) [![codecov](https://codecov.io/gh/fent/ret.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/ret.js)

# Usage

```js
var ret = require('ret');

var tokens = ret(/foo|bar/.source);
```

`tokens` will contain the following object

```js
{
  "type": ret.types.ROOT,
  "options": [
    [ { "type": ret.types.CHAR, "value": 102 },
      { "type": ret.types.CHAR, "value": 111 },
      { "type": ret.types.CHAR, "value": 111 } ],
    [ { "type": ret.types.CHAR, "value": 98 },
      { "type": ret.types.CHAR, "value": 97 },
      { "type": ret.types.CHAR, "value": 114 } ]
  ]
}
```

# Token Types

`ret.types` is a collection of the various token types exported by ret.

### ROOT

Only used in the root of the regexp.
This is needed due to the possibility of the root containing a pipe `|` character. In that case, the token will have an `options` key that will be an array of arrays of tokens. If not, it will contain a `stack` key that is an array of tokens.

```js
{
  "type": ret.types.ROOT,
  "stack": [token1, token2...],
}
```

```js
{
  "type": ret.types.ROOT,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ],
}
```

### GROUP

Groups contain tokens that are inside of parentheses. If the group begins with `?` followed by another character, it's a special type of group. A ':' tells the group not to be remembered when `exec` is used. '=' means the previous token matches only if followed by this group, and '!' means the previous token matches only if NOT followed.

Like root, it can contain an `options` key instead of `stack` if there is a pipe.

```js
{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "stack": [token1, token2...],
}
```

```js
{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ],
}
```

### POSITION

`\b`, `\B`, `^`, and `$` specify positions in the regexp.

```js
{
  "type": ret.types.POSITION,
  "value": "^",
}
```

### SET

Contains a key `set` specifying what tokens are allowed and a key `not` specifying if the set should be negated. A set can contain other sets, ranges, and characters.

```js
{
  "type": ret.types.SET,
  "set": [token1, token2...],
  "not": false,
}
```

### RANGE

Used in set tokens to specify a character range. `from` and `to` are character codes.

```js
{
  "type": ret.types.RANGE,
  "from": 97,
  "to": 122,
}
```

### REPETITION

```js
{
  "type": ret.types.REPETITION,
  "min": 0,
  "max": Infinity,
  "value": token,
}
```

### REFERENCE

References a group token. `value` is 1-9.

```js
{
  "type": ret.types.REFERENCE,
  "value": 1,
}
```

### CHAR

Represents a single character token. `value` is the character code. This might seem a bit cluttered compared to concatenating characters together, but since repetition tokens only repeat the last token (and not the last clause, like the pipe does), it's simpler to do it this way.

```js
{
  "type": ret.types.CHAR,
  "value": 123,
}
```

## Errors

ret.js will throw errors if given a string with an invalid regular expression. All possible errors are:

* Invalid group. When a group with an immediate `?` character is followed by an invalid character. It can only be followed by `!`, `=`, or `:`. Example: `/(?_abc)/`
* Nothing to repeat. Thrown when a repetition token is used as the first token in the current clause, i.e. right at the beginning of the regexp or group, or right after a pipe. Example: `/foo|?bar/`, `/{1,3}foo|bar/`, `/foo(+bar)/`
* Unmatched ). A group was not opened, but was closed. Example: `/hello)2u/`
* Unterminated group. A group was not closed. Example: `/(1(23)4/`
* Unterminated character class. A custom character set was not closed.
Example: `/[abc/` # Install npm install ret # Tests Tests are written with [vows](http://vowsjs.org/) ```bash npm test ``` # License MIT # yargs-parser ![ci](https://github.com/yargs/yargs-parser/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/yargs-parser) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. <img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/main/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js const argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```console $ node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js const argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```console { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js const parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## Deno Example As of `v19` `yargs-parser` supports [Deno](https://github.com/denoland/deno): ```typescript import parser from "https://deno.land/x/yargs_parser/deno.ts"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` ## ESM Example As of `v19` `yargs-parser` supports ESM (_both in Node.js and in the browser_): **Node.js:** ```js import parser from 'yargs-parser' const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` **Browsers:** ```html <!doctype html> <body> <script type="module"> import parser from "https://unpkg.com/yargs-parser@19.0.0/browser.js"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) </script> </body> ``` ## API ### parser(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). 
* `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```console $ node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```console $ node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```console $ node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```console $ node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```console $ node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```console $ node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```console $ node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```console $ node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### parse positional numbers * default: `true` * key: `parse-positional-numbers` Should positional keys that look like numbers be treated as such. 
```console $ node example.js 99.3 { _: [99.3] } ``` _if disabled:_ ```console $ node example.js 99.3 { _: ['99.3'] } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```console $ node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```console $ node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. ### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```console $ node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```console $ node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```console $ node example --arr 1 2 { _: [], arr: [1, 2] } ``` _if disabled:_ ```console $ node example --arr 1 2 { _: [2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```console $ node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```console $ node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```console $ node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```console $ node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```console $ node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```console $ node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. 
_If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. _If disabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC ## Test Strategy - tests are copied from the [polyfill implementation](https://github.com/tc39/proposal-temporal/tree/main/polyfill/test) - tests should be removed if they relate to features that do not make sense for TS/AS, i.e. tests that validate the shape of an object do not make sense in a language with compile-time type checking - tests that fail because a feature has not been implemented yet should be left as failures. # has > Object.prototype.hasOwnProperty.call shortcut ## Installation ```sh npm install --save has ``` ## Usage ```js var has = require('has'); has({}, 'hasOwnProperty'); // false has(Object.prototype, 'hasOwnProperty'); // true ``` # is-core-module <sup>[![Version Badge][2]][1]</sup> [![github actions][actions-image]][actions-url] [![coverage][codecov-image]][codecov-url] [![dependency status][5]][6] [![dev dependency status][7]][8] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][11]][1] Is this specifier a node.js core module? Optionally provide a node version to check; defaults to the current node version. 
## Example

```js
var isCore = require('is-core-module');
var assert = require('assert');
assert(isCore('fs'));
assert(!isCore('butts'));
```

## Tests

Clone the repo, `npm install`, and run `npm test`

[1]: https://npmjs.org/package/is-core-module
[2]: https://versionbadg.es/inspect-js/is-core-module.svg
[5]: https://david-dm.org/inspect-js/is-core-module.svg
[6]: https://david-dm.org/inspect-js/is-core-module
[7]: https://david-dm.org/inspect-js/is-core-module/dev-status.svg
[8]: https://david-dm.org/inspect-js/is-core-module#info=devDependencies
[11]: https://nodei.co/npm/is-core-module.png?downloads=true&stars=true
[license-image]: https://img.shields.io/npm/l/is-core-module.svg
[license-url]: LICENSE
[downloads-image]: https://img.shields.io/npm/dm/is-core-module.svg
[downloads-url]: https://npm-stat.com/charts.html?package=is-core-module
[codecov-image]: https://codecov.io/gh/inspect-js/is-core-module/branch/main/graphs/badge.svg
[codecov-url]: https://app.codecov.io/gh/inspect-js/is-core-module/
[actions-image]: https://img.shields.io/endpoint?url=https://github-actions-badge-u3jn4tfpocch.runkit.sh/inspect-js/is-core-module
[actions-url]: https://github.com/inspect-js/is-core-module/actions

# fast-levenshtein - Levenshtein algorithm in JavaScript

[![Build Status](https://secure.travis-ci.org/hiddentao/fast-levenshtein.png)](http://travis-ci.org/hiddentao/fast-levenshtein) [![NPM module](https://badge.fury.io/js/fast-levenshtein.png)](https://badge.fury.io/js/fast-levenshtein) [![NPM downloads](https://img.shields.io/npm/dm/fast-levenshtein.svg?maxAge=2592000)](https://www.npmjs.com/package/fast-levenshtein) [![Follow on Twitter](https://img.shields.io/twitter/url/http/shields.io.svg?style=social&label=Follow&maxAge=2592000)](https://twitter.com/hiddentao)

An efficient JavaScript implementation of the [Levenshtein algorithm](http://en.wikipedia.org/wiki/Levenshtein_distance) with locale-specific collator support.

## Features

* Works in node.js and in the browser.
* Better performance than other implementations by not needing to store the whole matrix ([more info](http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm)).
* Locale-sensitive string comparisons if needed.
* Comprehensive test suite and performance benchmark.
* Small: <1 KB minified and gzipped

## Installation

### node.js

Install using [npm](http://npmjs.org/):

```bash
$ npm install fast-levenshtein
```

### Browser

Using bower:

```bash
$ bower install fast-levenshtein
```

If you are not using any module loader system, the API will then be accessible via the `window.Levenshtein` object.
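For example, once the library has been loaded with a plain `<script>` tag (no module loader), the distance function can be called through that global. A minimal sketch:

```js
// Minimal sketch: assumes the library was loaded via a <script> tag,
// so the API is exposed on the global `Levenshtein` object (see above).
var distance = window.Levenshtein.get('kitten', 'sitting');
console.log(distance); // 3
```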
## Examples **Default usage** ```javascript var levenshtein = require('fast-levenshtein'); var distance = levenshtein.get('back', 'book'); // 2 var distance = levenshtein.get('我愛你', '我叫你'); // 1 ``` **Locale-sensitive string comparisons** It supports using [Intl.Collator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Collator) for locale-sensitive string comparisons: ```javascript var levenshtein = require('fast-levenshtein'); levenshtein.get('mikailovitch', 'Mikhaïlovitch', { useCollator: true}); // 1 ``` ## Building and Testing To build the code and run the tests: ```bash $ npm install -g grunt-cli $ npm install $ npm run build ``` ## Performance _Thanks to [Titus Wormer](https://github.com/wooorm) for [encouraging me](https://github.com/hiddentao/fast-levenshtein/issues/1) to do this._ Benchmarked against other node.js levenshtein distance modules (on Macbook Air 2012, Core i7, 8GB RAM): ```bash Running suite Implementation comparison [benchmark/speed.js]... >> levenshtein-edit-distance x 234 ops/sec ±3.02% (73 runs sampled) >> levenshtein-component x 422 ops/sec ±4.38% (83 runs sampled) >> levenshtein-deltas x 283 ops/sec ±3.83% (78 runs sampled) >> natural x 255 ops/sec ±0.76% (88 runs sampled) >> levenshtein x 180 ops/sec ±3.55% (86 runs sampled) >> fast-levenshtein x 1,792 ops/sec ±2.72% (95 runs sampled) Benchmark done. Fastest test is fast-levenshtein at 4.2x faster than levenshtein-component ``` You can run this benchmark yourself by doing: ```bash $ npm install $ npm run build $ npm run benchmark ``` ## Contributing If you wish to submit a pull request please update and/or create new tests for any changes you make and ensure the grunt build passes. See [CONTRIBUTING.md](https://github.com/hiddentao/fast-levenshtein/blob/master/CONTRIBUTING.md) for details. ## License MIT - see [LICENSE.md](https://github.com/hiddentao/fast-levenshtein/blob/master/LICENSE.md) [Build]: http://img.shields.io/travis/litejs/natural-compare-lite.png [Coverage]: http://img.shields.io/coveralls/litejs/natural-compare-lite.png [1]: https://travis-ci.org/litejs/natural-compare-lite [2]: https://coveralls.io/r/litejs/natural-compare-lite [npm package]: https://npmjs.org/package/natural-compare-lite [GitHub repo]: https://github.com/litejs/natural-compare-lite @version 1.4.0 @date 2015-10-26 @stability 3 - Stable Natural Compare &ndash; [![Build][]][1] [![Coverage][]][2] =============== Compare strings containing a mix of letters and numbers in the way a human being would in sort order. This is described as a "natural ordering". ```text Standard sorting: Natural order sorting: img1.png img1.png img10.png img2.png img12.png img10.png img2.png img12.png ``` String.naturalCompare returns a number indicating whether a reference string comes before or after or is the same as the given string in sort order. Use it with builtin sort() function. 
### Installation

- In browser

```html
<script src=min.natural-compare.js></script>
```

- In node.js: `npm install natural-compare-lite`

```javascript
require("natural-compare-lite")
```

### Usage

```javascript
// Simple case-sensitive example
var a = ["z1.doc", "z10.doc", "z17.doc", "z2.doc", "z23.doc", "z3.doc"];
a.sort(String.naturalCompare);
// ["z1.doc", "z2.doc", "z3.doc", "z10.doc", "z17.doc", "z23.doc"]

// Use a wrapper function for case insensitivity
a.sort(function(a, b){
  return String.naturalCompare(a.toLowerCase(), b.toLowerCase());
})

// In most cases we want to sort an array of objects
var a = [ {"street":"350 5th Ave", "room":"A-1021"}
        , {"street":"350 5th Ave", "room":"A-21046-b"} ];

// sort by street, then by room
a.sort(function(a, b){
  return String.naturalCompare(a.street, b.street) || String.naturalCompare(a.room, b.room);
})

// When text transformation is needed (e.g. toLowerCase()),
// it is best for performance to keep
// the transformed key in that object.
// There is no need to do the text transformation
// on each comparison when sorting.
var a = [ {"make":"Audi", "model":"A6"}
        , {"make":"Kia",  "model":"Rio"} ];

// sort by make, then by model
a.map(function(car){
  car.sort_key = (car.make + " " + car.model).toLowerCase();
})
a.sort(function(a, b){
  return String.naturalCompare(a.sort_key, b.sort_key);
})
```

- Works well with dates in ISO format, e.g. "Rev 2012-07-26.doc".

### Custom alphabet

It is possible to configure a custom alphabet to achieve a desired order.

```javascript
// Estonian alphabet
String.alphabet = "ABDEFGHIJKLMNOPRSŠZŽTUVÕÄÖÜXYabdefghijklmnoprsšzžtuvõäöüxy";
["t", "z", "x", "õ"].sort(String.naturalCompare);
// ["z", "t", "õ", "x"]

// Russian alphabet
String.alphabet = "АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдеёжзийклмнопрстуфхцчшщъыьэюя";
["Ё", "А", "Б"].sort(String.naturalCompare);
// ["А", "Б", "Ё"]
```

External links
--------------

- [GitHub repo](https://github.com/litejs/natural-compare-lite)
- [jsperf test](http://jsperf.com/natural-sort-2/12)

Licence
-------

Copyright (c) 2012-2015 Lauri Rooden &lt;lauri@rooden.ee&gt;
[The MIT License](http://lauri.rooden.ee/mit-license.txt)

# fs.realpath

A backwards-compatible fs.realpath for Node v6 and above

In Node v6, the JavaScript implementation of fs.realpath was replaced with a faster (but less resilient) native implementation. That raises new and platform-specific errors and cannot handle long or excessively symlink-looping paths.

This module handles those cases by detecting the new errors and falling back to the JavaScript implementation. On versions of Node prior to v6, it has no effect.

## USAGE

```js
var rp = require('fs.realpath')

// async version
rp.realpath(someLongAndLoopingPath, function (er, real) {
  // the ELOOP was handled, but it was a bit slower
})

// sync version
var real = rp.realpathSync(someLongAndLoopingPath)

// monkeypatch at your own risk!
// This replaces the fs.realpath/fs.realpathSync builtins rp.monkeypatch() // un-do the monkeypatching rp.unmonkeypatch() ``` <table><thead> <tr> <th>Linux</th> <th>OS X</th> <th>Windows</th> <th>Coverage</th> <th>Downloads</th> </tr> </thead><tbody><tr> <td colspan="2" align="center"> <a href="https://travis-ci.org/kaelzhang/node-ignore"> <img src="https://travis-ci.org/kaelzhang/node-ignore.svg?branch=master" alt="Build Status" /></a> </td> <td align="center"> <a href="https://ci.appveyor.com/project/kaelzhang/node-ignore"> <img src="https://ci.appveyor.com/api/projects/status/github/kaelzhang/node-ignore?branch=master&svg=true" alt="Windows Build Status" /></a> </td> <td align="center"> <a href="https://codecov.io/gh/kaelzhang/node-ignore"> <img src="https://codecov.io/gh/kaelzhang/node-ignore/branch/master/graph/badge.svg" alt="Coverage Status" /></a> </td> <td align="center"> <a href="https://www.npmjs.org/package/ignore"> <img src="http://img.shields.io/npm/dm/ignore.svg" alt="npm module downloads per month" /></a> </td> </tr></tbody></table> # ignore `ignore` is a manager, filter and parser which implemented in pure JavaScript according to the .gitignore [spec](http://git-scm.com/docs/gitignore). Pay attention that [`minimatch`](https://www.npmjs.org/package/minimatch) does not work in the gitignore way. To filter filenames according to .gitignore file, I recommend this module. ##### Tested on - Linux + Node: `0.8` - `7.x` - Windows + Node: `0.10` - `7.x`, node < `0.10` is not tested due to the lack of support of appveyor. Actually, `ignore` does not rely on any versions of node specially. Since `4.0.0`, ignore will no longer support `node < 6` by default, to use in node < 6, `require('ignore/legacy')`. For details, see [CHANGELOG](https://github.com/kaelzhang/node-ignore/blob/master/CHANGELOG.md). ## Table Of Main Contents - [Usage](#usage) - [`Pathname` Conventions](#pathname-conventions) - [Guide for 2.x -> 3.x](#upgrade-2x---3x) - [Guide for 3.x -> 4.x](#upgrade-3x---4x) - See Also: - [`glob-gitignore`](https://www.npmjs.com/package/glob-gitignore) matches files using patterns and filters them according to gitignore rules. ## Usage ```js import ignore from 'ignore' const ig = ignore().add(['.abc/*', '!.abc/d/']) ``` ### Filter the given paths ```js const paths = [ '.abc/a.js', // filtered out '.abc/d/e.js' // included ] ig.filter(paths) // ['.abc/d/e.js'] ig.ignores('.abc/a.js') // true ``` ### As the filter function ```js paths.filter(ig.createFilter()); // ['.abc/d/e.js'] ``` ### Win32 paths will be handled ```js ig.filter(['.abc\\a.js', '.abc\\d\\e.js']) // if the code above runs on windows, the result will be // ['.abc\\d\\e.js'] ``` ## Why another ignore? - `ignore` is a standalone module, and is much simpler so that it could easy work with other programs, unlike [isaacs](https://npmjs.org/~isaacs)'s [fstream-ignore](https://npmjs.org/package/fstream-ignore) which must work with the modules of the fstream family. - `ignore` only contains utility methods to filter paths according to the specified ignore rules, so - `ignore` never try to find out ignore rules by traversing directories or fetching from git configurations. - `ignore` don't cares about sub-modules of git projects. - Exactly according to [gitignore man page](http://git-scm.com/docs/gitignore), fixes some known matching issues of fstream-ignore, such as: - '`/*.js`' should only match '`a.js`', but not '`abc/a.js`'. - '`**/foo`' should match '`foo`' anywhere. 
- Prevent re-including a file if a parent directory of that file is excluded. - Handle trailing whitespaces: - `'a '`(one space) should not match `'a '`(two spaces). - `'a \ '` matches `'a '` - All test cases are verified with the result of `git check-ignore`. # Methods ## .add(pattern: string | Ignore): this ## .add(patterns: Array<string | Ignore>): this - **pattern** `String | Ignore` An ignore pattern string, or the `Ignore` instance - **patterns** `Array<String | Ignore>` Array of ignore patterns. Adds a rule or several rules to the current manager. Returns `this` Notice that a line starting with `'#'`(hash) is treated as a comment. Put a backslash (`'\'`) in front of the first hash for patterns that begin with a hash, if you want to ignore a file with a hash at the beginning of the filename. ```js ignore().add('#abc').ignores('#abc') // false ignore().add('\#abc').ignores('#abc') // true ``` `pattern` could either be a line of ignore pattern or a string of multiple ignore patterns, which means we could just `ignore().add()` the content of a ignore file: ```js ignore() .add(fs.readFileSync(filenameOfGitignore).toString()) .filter(filenames) ``` `pattern` could also be an `ignore` instance, so that we could easily inherit the rules of another `Ignore` instance. ## <strike>.addIgnoreFile(path)</strike> REMOVED in `3.x` for now. To upgrade `ignore@2.x` up to `3.x`, use ```js import fs from 'fs' if (fs.existsSync(filename)) { ignore().add(fs.readFileSync(filename).toString()) } ``` instead. ## .filter(paths: Array<Pathname>): Array<Pathname> ```ts type Pathname = string ``` Filters the given array of pathnames, and returns the filtered array. - **paths** `Array.<Pathname>` The array of `pathname`s to be filtered. ### `Pathname` Conventions: #### 1. `Pathname` should be a `path.relative()`d pathname `Pathname` should be a string that have been `path.join()`ed, or the return value of `path.relative()` to the current directory. ```js // WRONG ig.ignores('./abc') // WRONG, for it will never happen. // If the gitignore rule locates at the root directory, // `'/abc'` should be changed to `'abc'`. // ``` // path.relative('/', '/abc') -> 'abc' // ``` ig.ignores('/abc') // Right ig.ignores('abc') // Right ig.ignores(path.join('./abc')) // path.join('./abc') -> 'abc' ``` In other words, each `Pathname` here should be a relative path to the directory of the gitignore rules. Suppose the dir structure is: ``` /path/to/your/repo |-- a | |-- a.js | |-- .b | |-- .c |-- .DS_store ``` Then the `paths` might be like this: ```js [ 'a/a.js' '.b', '.c/.DS_store' ] ``` Usually, you could use [`glob`](http://npmjs.org/package/glob) with `option.mark = true` to fetch the structure of the current directory: ```js import glob from 'glob' glob('**', { // Adds a / character to directory matches. mark: true }, (err, files) => { if (err) { return console.error(err) } let filtered = ignore().add(patterns).filter(files) console.log(filtered) }) ``` #### 2. filenames and dirnames `node-ignore` does NO `fs.stat` during path matching, so for the example below: ```js ig.add('config/') // `ig` does NOT know if 'config' is a normal file, directory or something ig.ignores('config') // And it returns `false` ig.ignores('config/') // returns `true` ``` Specially for people who develop some library based on `node-ignore`, it is important to understand that. ## .ignores(pathname: Pathname): boolean > new in 3.2.0 Returns `Boolean` whether `pathname` should be ignored. 
```js ig.ignores('.abc/a.js') // true ``` ## .createFilter() Creates a filter function which could filter an array of paths with `Array.prototype.filter`. Returns `function(path)` the filter function. ## `options.ignorecase` since 4.0.0 Similar as the `core.ignorecase` option of [git-config](https://git-scm.com/docs/git-config), `node-ignore` will be case insensitive if `options.ignorecase` is set to `true` (default value), otherwise case sensitive. ```js const ig = ignore({ ignorecase: false }) ig.add('*.png') ig.ignores('*.PNG') // false ``` **** # Upgrade Guide ## Upgrade 2.x -> 3.x - All `options` of 2.x are unnecessary and removed, so just remove them. - `ignore()` instance is no longer an [`EventEmitter`](nodejs.org/api/events.html), and all events are unnecessary and removed. - `.addIgnoreFile()` is removed, see the [.addIgnoreFile](#addignorefilepath) section for details. ## Upgrade 3.x -> 4.x Since `4.0.0`, `ignore` will no longer support node < 6, to use `ignore` in node < 6: ```js var ignore = require('ignore/legacy') ``` **** # Collaborators - [@whitecolor](https://github.com/whitecolor) *Alex* - [@SamyPesse](https://github.com/SamyPesse) *Samy Pessé* - [@azproduction](https://github.com/azproduction) *Mikhail Davydov* - [@TrySound](https://github.com/TrySound) *Bogdan Chadkin* - [@JanMattner](https://github.com/JanMattner) *Jan Mattner* - [@ntwb](https://github.com/ntwb) *Stephen Edgar* - [@kasperisager](https://github.com/kasperisager) *Kasper Isager* - [@sandersn](https://github.com/sandersn) *Nathan Shively-Sanders* # lodash.sortby v4.7.0 The [lodash](https://lodash.com/) method `_.sortBy` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.sortby ``` In Node.js: ```js var sortBy = require('lodash.sortby'); ``` See the [documentation](https://lodash.com/docs#sortBy) or [package source](https://github.com/lodash/lodash/blob/4.7.0-npm-packages/lodash.sortby) for more details. # ts-mixer [version-badge]: https://badgen.net/npm/v/ts-mixer [version-link]: https://npmjs.com/package/ts-mixer [build-badge]: https://img.shields.io/github/workflow/status/tannerntannern/ts-mixer/ts-mixer%20CI [build-link]: https://github.com/tannerntannern/ts-mixer/actions [ts-versions]: https://badgen.net/badge/icon/3.8,3.9,4.0,4.1,4.2?icon=typescript&label&list=| [node-versions]: https://badgen.net/badge/node/10%2C12%2C14/blue/?list=| [![npm version][version-badge]][version-link] [![github actions][build-badge]][build-link] [![TS Versions][ts-versions]][build-link] [![Node.js Versions][node-versions]][build-link] [![Minified Size](https://badgen.net/bundlephobia/min/ts-mixer)](https://bundlephobia.com/result?p=ts-mixer) [![Conventional Commits](https://badgen.net/badge/conventional%20commits/1.0.0/yellow)](https://conventionalcommits.org) ## Overview `ts-mixer` brings mixins to TypeScript. "Mixins" to `ts-mixer` are just classes, so you already know how to write them, and you can probably mix classes from your favorite library without trouble. The mixin problem is more nuanced than it appears. I've seen countless code snippets that work for certain situations, but fail in others. `ts-mixer` tries to take the best from all these solutions while accounting for the situations you might not have considered. 
[Quick start guide](#quick-start) ### Features * mixes plain classes * mixes classes that extend other classes * mixes classes that were mixed with `ts-mixer` * supports static properties * supports protected/private properties (the popular function-that-returns-a-class solution does not) * mixes abstract classes (with caveats [[1](#caveats)]) * mixes generic classes (with caveats [[2](#caveats)]) * supports class, method, and property decorators (with caveats [[3, 6](#caveats)]) * mostly supports the complexity presented by constructor functions (with caveats [[4](#caveats)]) * comes with an `instanceof`-like replacement (with caveats [[5, 6](#caveats)]) * [multiple mixing strategies](#settings) (ES6 proxies vs hard copy) ### Caveats 1. Mixing abstract classes requires a bit of a hack that may break in future versions of TypeScript. See [mixing abstract classes](#mixing-abstract-classes) below. 2. Mixing generic classes requires a more cumbersome notation, but it's still possible. See [mixing generic classes](#mixing-generic-classes) below. 3. Using decorators in mixed classes also requires a more cumbersome notation. See [mixing with decorators](#mixing-with-decorators) below. 4. ES6 made it impossible to use `.apply(...)` on class constructors (or any means of calling them without `new`), which makes it impossible for `ts-mixer` to pass the proper `this` to your constructors. This may or may not be an issue for your code, but there are options to work around it. See [dealing with constructors](#dealing-with-constructors) below. 5. `ts-mixer` does not support `instanceof` for mixins, but it does offer a replacement. See the [hasMixin function](#hasmixin) for more details. 6. Certain features (specifically, `@decorator` and `hasMixin`) make use of ES6 `Map`s, which means you must either use ES6+ or polyfill `Map` to use them. If you don't need these features, you should be fine without. ## Quick Start ### Installation ``` $ npm install ts-mixer ``` or if you prefer [Yarn](https://yarnpkg.com): ``` $ yarn add ts-mixer ``` ### Basic Example ```typescript import { Mixin } from 'ts-mixer'; class Foo { protected makeFoo() { return 'foo'; } } class Bar { protected makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { public makeFooBar() { return this.makeFoo() + this.makeBar(); } } const fooBar = new FooBar(); console.log(fooBar.makeFooBar()); // "foobar" ``` ## Special Cases ### Mixing Abstract Classes Abstract classes, by definition, cannot be constructed, which means they cannot take on the type, `new(...args) => any`, and by extension, are incompatible with `ts-mixer`. BUT, you can "trick" TypeScript into giving you all the benefits of an abstract class without making it technically abstract. The trick is just some strategic `// @ts-ignore`'s: ```typescript import { Mixin } from 'ts-mixer'; // note that Foo is not marked as an abstract class class Foo { // @ts-ignore: "Abstract methods can only appear within an abstract class" public abstract makeFoo(): string; } class Bar { public makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { // we still get all the benefits of abstract classes here, because TypeScript // will still complain if this method isn't implemented public makeFoo() { return 'foo'; } } ``` Do note that while this does work quite well, it is a bit of a hack and I can't promise that it will continue to work in future TypeScript versions. 
### Mixing Generic Classes Frustratingly, it is _impossible_ for generic parameters to be referenced in base class expressions. No matter what, you will eventually run into `Base class expressions cannot reference class type parameters.` The way to get around this is to leverage [declaration merging](https://www.typescriptlang.org/docs/handbook/declaration-merging.html), and a slightly different mixing function from ts-mixer: `mix`. It works exactly like `Mixin`, except it's a decorator, which means it doesn't affect the type information of the class being decorated. See it in action below: ```typescript import { mix } from 'ts-mixer'; class Foo<T> { public fooMethod(input: T): T { return input; } } class Bar<T> { public barMethod(input: T): T { return input; } } interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { } @mix(Foo, Bar) class FooBar<T1, T2> { public fooBarMethod(input1: T1, input2: T2) { return [this.fooMethod(input1), this.barMethod(input2)]; } } ``` Key takeaways from this example: * `interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { }` makes sure `FooBar` has the typing we want, thanks to declaration merging * `@mix(Foo, Bar)` wires things up "on the JavaScript side", since the interface declaration has nothing to do with runtime behavior. * The reason we have to use the `mix` decorator is that the typing produced by `Mixin(Foo, Bar)` would conflict with the typing of the interface. `mix` has no effect "on the TypeScript side," thus avoiding type conflicts. ### Mixing with Decorators Popular libraries such as [class-validator](https://github.com/typestack/class-validator) and [TypeORM](https://github.com/typeorm/typeorm) use decorators to add functionality. Unfortunately, `ts-mixer` has no way of knowing what these libraries do with the decorators behind the scenes. So if you want these decorators to be "inherited" with classes you plan to mix, you first have to wrap them with a special `decorate` function exported by `ts-mixer`. Here's an example using `class-validator`: ```typescript import { IsBoolean, IsIn, validate } from 'class-validator'; import { Mixin, decorate } from 'ts-mixer'; class Disposable { @decorate(IsBoolean()) // instead of @IsBoolean() isDisposed: boolean = false; } class Statusable { @decorate(IsIn(['red', 'green'])) // instead of @IsIn(['red', 'green']) status: string = 'green'; } class ExtendedObject extends Mixin(Disposable, Statusable) {} const extendedObject = new ExtendedObject(); extendedObject.status = 'blue'; validate(extendedObject).then(errors => { console.log(errors); }); ``` ### Dealing with Constructors As mentioned in the [caveats section](#caveats), ES6 disallowed calling constructor functions without `new`. This means that the only way for `ts-mixer` to mix instance properties is to instantiate each base class separately, then copy the instance properties into a common object. The consequence of this is that constructors mixed by `ts-mixer` will _not_ receive the proper `this`. **This very well may not be an issue for you!** It only means that your constructors need to be "mostly pure" in terms of how they handle `this`. Specifically, your constructors cannot produce [side effects](https://en.wikipedia.org/wiki/Side_effect_%28computer_science%29) involving `this`, _other than adding properties to `this`_ (the most common side effect in JavaScript constructors). 
If you simply cannot eliminate `this` side effects from your constructor, there is a workaround available: `ts-mixer` will automatically forward constructor parameters to a predesignated init function (`settings.initFunction`) if it's present on the class. Unlike constructors, functions can be called with an arbitrary `this`, so this predesignated init function _will_ have the proper `this`. Here's a basic example: ```typescript import { Mixin, settings } from 'ts-mixer'; settings.initFunction = 'init'; class Person { public static allPeople: Set<Person> = new Set(); protected init() { Person.allPeople.add(this); } } type PartyAffiliation = 'democrat' | 'republican'; class PoliticalParticipant { public static democrats: Set<PoliticalParticipant> = new Set(); public static republicans: Set<PoliticalParticipant> = new Set(); public party: PartyAffiliation; // note that these same args will also be passed to init function public constructor(party: PartyAffiliation) { this.party = party; } protected init(party: PartyAffiliation) { if (party === 'democrat') PoliticalParticipant.democrats.add(this); else PoliticalParticipant.republicans.add(this); } } class Voter extends Mixin(Person, PoliticalParticipant) {} const v1 = new Voter('democrat'); const v2 = new Voter('democrat'); const v3 = new Voter('republican'); const v4 = new Voter('republican'); ``` Note the above `.add(this)` statements. These would not work as expected if they were placed in the constructor instead, since `this` is not the same between the constructor and `init`, as explained above. ## Other Features ### hasMixin As mentioned above, `ts-mixer` does not support `instanceof` for mixins. While it is possible to implement [custom `instanceof` behavior](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/hasInstance), this library does not do so because it would require modifying the source classes, which is deliberately avoided. You can fill this missing functionality with `hasMixin(instance, mixinClass)` instead. See the below example: ```typescript import { Mixin, hasMixin } from 'ts-mixer'; class Foo {} class Bar {} class FooBar extends Mixin(Foo, Bar) {} const instance = new FooBar(); // doesn't work with instanceof... console.log(instance instanceof FooBar) // true console.log(instance instanceof Foo) // false console.log(instance instanceof Bar) // false // but everything works nicely with hasMixin! console.log(hasMixin(instance, FooBar)) // true console.log(hasMixin(instance, Foo)) // true console.log(hasMixin(instance, Bar)) // true ``` `hasMixin(instance, mixinClass)` will work anywhere that `instance instanceof mixinClass` works. Additionally, like `instanceof`, you get the same [type narrowing benefits](https://www.typescriptlang.org/docs/handbook/advanced-types.html#instanceof-type-guards): ```typescript if (hasMixin(instance, Foo)) { // inferred type of instance is "Foo" } if (hasMixin(instance, Bar)) { // inferred type of instance of "Bar" } ``` ## Settings ts-mixer has multiple strategies for mixing classes which can be configured by modifying `settings` from ts-mixer. For example: ```typescript import { settings, Mixin } from 'ts-mixer'; settings.prototypeStrategy = 'proxy'; // then use `Mixin` as normal... ``` ### `settings.prototypeStrategy` * Determines how ts-mixer will mix class prototypes together * Possible values: - `'copy'` (default) - Copies all methods from the classes being mixed into a new prototype object. 
(This will include all methods up the prototype chains as well.) This is the default for ES5 compatibility, but it has the downside of stale references. For example, if you mix `Foo` and `Bar` to make `FooBar`, then redefine a method on `Foo`, `FooBar` will not have the latest methods from `Foo`. If this is not a concern for you, `'copy'` is the best value for this setting. - `'proxy'` - Uses an ES6 Proxy to "soft mix" prototypes. Unlike `'copy'`, updates to the base classes _will_ be reflected in the mixed class, which may be desirable. The downside is that method access is not as performant, nor is it ES5 compatible. ### `settings.staticsStrategy` * Determines how static properties are inherited * Possible values: - `'copy'` (default) - Simply copies all properties (minus `prototype`) from the base classes/constructor functions onto the mixed class. Like `settings.prototypeStrategy = 'copy'`, this strategy also suffers from stale references, but shouldn't be a concern if you don't redefine static methods after mixing. - `'proxy'` - Similar to `settings.prototypeStrategy`, proxies static method access to the base classes. Has the same benefits/downsides. ### `settings.initFunction` * If set, `ts-mixer` will automatically call the function with this name upon construction * Possible values: - `null` (default) - disables the behavior - a string - function name to call upon construction * Read more about why you would want this in [dealing with constructors](#dealing-with-constructors) ### `settings.decoratorInheritance` * Determines how decorators are inherited from classes passed to `Mixin(...)` * Possible values: - `'deep'` (default) - Deeply inherits decorators from all given classes and their ancestors - `'direct'` - Only inherits decorators defined directly on the given classes - `'none'` - Skips decorator inheritance # Author Tanner Nielsen <tannerntannern@gmail.com> * Website - [tannernielsen.com](http://tannernielsen.com) * Github - [tannerntannern](https://github.com/tannerntannern) long.js ======= A Long class for representing a 64 bit two's-complement integer value derived from the [Closure Library](https://github.com/google/closure-library) for stand-alone use and extended with unsigned support. [![Build Status](https://travis-ci.org/dcodeIO/long.js.svg)](https://travis-ci.org/dcodeIO/long.js) Background ---------- As of [ECMA-262 5th Edition](http://ecma262-5.com/ELS5_HTML.htm#Section_8.5), "all the positive and negative integers whose magnitude is no greater than 2<sup>53</sup> are representable in the Number type", which is "representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic". The [maximum safe integer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) in JavaScript is 2<sup>53</sup>-1. Example: 2<sup>64</sup>-1 is 1844674407370955**1615** but in JavaScript it evaluates to 1844674407370955**2000**. Furthermore, bitwise operators in JavaScript "deal only with integers in the range −2<sup>31</sup> through 2<sup>31</sup>−1, inclusive, or in the range 0 through 2<sup>32</sup>−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2<sup>32</sup> integer values." In some use cases, however, it is required to be able to reliably work with and perform bitwise operations on the full 64 bits. This is where long.js comes into play.
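To make the limitation concrete, here is a small sketch using plain `Number` arithmetic (no long.js involved):

```ts
// 2^64 - 1 cannot be represented exactly by a double:
console.log(Math.pow(2, 64) - 1);       // 18446744073709552000 (rounded)
console.log(Number.MAX_SAFE_INTEGER);   // 9007199254740991, i.e. 2^53 - 1

// Bitwise operators first truncate their operands to 32 bits:
console.log((Math.pow(2, 32) + 1) | 0); // 1
```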
Usage ----- The class is compatible with CommonJS and AMD loaders and is exposed globally as `Long` if neither is available. ```javascript var Long = require("long"); var longVal = new Long(0xFFFFFFFF, 0x7FFFFFFF); console.log(longVal.toString()); ... ``` API --- ### Constructor * new **Long**(low: `number`, high: `number`, unsigned?: `boolean`)<br /> Constructs a 64 bit two's-complement integer, given its low and high 32 bit values as *signed* integers. See the from* functions below for more convenient ways of constructing Longs. ### Fields * Long#**low**: `number`<br /> The low 32 bits as a signed value. * Long#**high**: `number`<br /> The high 32 bits as a signed value. * Long#**unsigned**: `boolean`<br /> Whether unsigned or not. ### Constants * Long.**ZERO**: `Long`<br /> Signed zero. * Long.**ONE**: `Long`<br /> Signed one. * Long.**NEG_ONE**: `Long`<br /> Signed negative one. * Long.**UZERO**: `Long`<br /> Unsigned zero. * Long.**UONE**: `Long`<br /> Unsigned one. * Long.**MAX_VALUE**: `Long`<br /> Maximum signed value. * Long.**MIN_VALUE**: `Long`<br /> Minimum signed value. * Long.**MAX_UNSIGNED_VALUE**: `Long`<br /> Maximum unsigned value. ### Utility * Long.**isLong**(obj: `*`): `boolean`<br /> Tests if the specified object is a Long. * Long.**fromBits**(lowBits: `number`, highBits: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the 64 bit integer that comes by concatenating the given low and high bits. Each is assumed to use 32 bits. * Long.**fromBytes**(bytes: `number[]`, unsigned?: `boolean`, le?: `boolean`): `Long`<br /> Creates a Long from its byte representation. * Long.**fromBytesLE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its little endian byte representation. * Long.**fromBytesBE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its big endian byte representation. * Long.**fromInt**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given 32 bit integer value. * Long.**fromNumber**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given value, provided that it is a finite number. Otherwise, zero is returned. * Long.**fromString**(str: `string`, unsigned?: `boolean`, radix?: `number`)<br /> Long.**fromString**(str: `string`, radix: `number`)<br /> Returns a Long representation of the given string, written using the specified radix. * Long.**fromValue**(val: `*`, unsigned?: `boolean`): `Long`<br /> Converts the specified value to a Long using the appropriate from* function for its type. ### Methods * Long#**add**(addend: `Long | number | string`): `Long`<br /> Returns the sum of this and the specified Long. * Long#**and**(other: `Long | number | string`): `Long`<br /> Returns the bitwise AND of this Long and the specified. * Long#**compare**/**comp**(other: `Long | number | string`): `number`<br /> Compares this Long's value with the specified's. Returns `0` if they are the same, `1` if the this is greater and `-1` if the given one is greater. * Long#**divide**/**div**(divisor: `Long | number | string`): `Long`<br /> Returns this Long divided by the specified. * Long#**equals**/**eq**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value equals the specified's. * Long#**getHighBits**(): `number`<br /> Gets the high 32 bits as a signed integer. * Long#**getHighBitsUnsigned**(): `number`<br /> Gets the high 32 bits as an unsigned integer. 
* Long#**getLowBits**(): `number`<br /> Gets the low 32 bits as a signed integer. * Long#**getLowBitsUnsigned**(): `number`<br /> Gets the low 32 bits as an unsigned integer. * Long#**getNumBitsAbs**(): `number`<br /> Gets the number of bits needed to represent the absolute value of this Long. * Long#**greaterThan**/**gt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than the specified's. * Long#**greaterThanOrEqual**/**gte**/**ge**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than or equal to the specified's. * Long#**isEven**(): `boolean`<br /> Tests if this Long's value is even. * Long#**isNegative**(): `boolean`<br /> Tests if this Long's value is negative. * Long#**isOdd**(): `boolean`<br /> Tests if this Long's value is odd. * Long#**isPositive**(): `boolean`<br /> Tests if this Long's value is positive. * Long#**isZero**/**eqz**(): `boolean`<br /> Tests if this Long's value equals zero. * Long#**lessThan**/**lt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than the specified's. * Long#**lessThanOrEqual**/**lte**/**le**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than or equal to the specified's. * Long#**modulo**/**mod**/**rem**(divisor: `Long | number | string`): `Long`<br /> Returns this Long modulo the specified. * Long#**multiply**/**mul**(multiplier: `Long | number | string`): `Long`<br /> Returns the product of this and the specified Long. * Long#**negate**/**neg**(): `Long`<br /> Negates this Long's value. * Long#**not**(): `Long`<br /> Returns the bitwise NOT of this Long. * Long#**notEquals**/**neq**/**ne**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value differs from the specified's. * Long#**or**(other: `Long | number | string`): `Long`<br /> Returns the bitwise OR of this Long and the specified. * Long#**shiftLeft**/**shl**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits shifted to the left by the given amount. * Long#**shiftRight**/**shr**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits arithmetically shifted to the right by the given amount. * Long#**shiftRightUnsigned**/**shru**/**shr_u**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits logically shifted to the right by the given amount. * Long#**subtract**/**sub**(subtrahend: `Long | number | string`): `Long`<br /> Returns the difference of this and the specified Long. * Long#**toBytes**(le?: `boolean`): `number[]`<br /> Converts this Long to its byte representation. * Long#**toBytesLE**(): `number[]`<br /> Converts this Long to its little endian byte representation. * Long#**toBytesBE**(): `number[]`<br /> Converts this Long to its big endian byte representation. * Long#**toInt**(): `number`<br /> Converts the Long to a 32 bit integer, assuming it is a 32 bit integer. * Long#**toNumber**(): `number`<br /> Converts the Long to the nearest floating-point representation of this value (double, 53 bit mantissa). * Long#**toSigned**(): `Long`<br /> Converts this Long to signed. * Long#**toString**(radix?: `number`): `string`<br /> Converts the Long to a string written in the specified radix. * Long#**toUnsigned**(): `Long`<br /> Converts this Long to unsigned. * Long#**xor**(other: `Long | number | string`): `Long`<br /> Returns the bitwise XOR of this Long and the given one.
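As a brief sketch of a few of the methods listed above (outputs shown assuming the documented API):

```ts
var Long = require("long");

var value = Long.fromString("9007199254740993");        // beyond Number's safe range
console.log(value.add(1).toString());                   // "9007199254740994"
console.log(value.shiftLeft(1).toString());             // "18014398509481986"
console.log(Long.MAX_UNSIGNED_VALUE.toString());        // "18446744073709551615"
console.log(Long.fromInt(10).eq(Long.fromNumber(10)));  // true
```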
Building -------- To build a UMD bundle to `dist/long.js`, run: ``` $> npm install $> npm run build ``` Running the [tests](./tests): ``` $> npm test ``` [![NPM registry](https://img.shields.io/npm/v/as-bignum.svg?style=for-the-badge)](https://www.npmjs.com/package/as-bignum)[![Build Status](https://img.shields.io/travis/com/MaxGraey/as-bignum/master?style=for-the-badge)](https://travis-ci.com/MaxGraey/as-bignum)[![NPM license](https://img.shields.io/badge/license-Apache%202.0-ba68c8.svg?style=for-the-badge)](LICENSE.md) ## WebAssembly fixed-length big numbers written in [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) ### Status: Work in progress Provides wide numeric types such as `u128`, `u256`, `i128`, `i256` and fixed-point types, along with their arithmetic operations. The `safe` namespace contains equivalents with overflow/underflow traps. All of these types are useful for financial and cryptographic use cases and provide deterministic behavior. ### Install > yarn add as-bignum or > npm i as-bignum ### Usage via AssemblyScript ```ts import { u128 } from "as-bignum"; declare function logF64(value: f64): void; declare function logU128(hi: u64, lo: u64): void; var a = u128.One; var b = u128.from(-32); // same as u128.from<i32>(-32) var c = new u128(0x1, -0xF); var d = u128.from(0x0123456789ABCDEF); // same as u128.from<i64>(0x0123456789ABCDEF) var e = u128.from('0x0123456789ABCDEF01234567'); var f = u128.fromString('11100010101100101', 2); // same as u128.from('0b11100010101100101') var r = d / c + (b << 5) + e; logF64(r.as<f64>()); logU128(r.hi, r.lo); ``` ### Usage via JavaScript/Typescript ```ts TODO ``` ### List of types - [x] [`u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u128.ts) unsigned type (tested) - [ ] [`u256`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u256.ts) unsigned type (very basic) - [ ] `i128` signed type - [ ] `i256` signed type --- - [x] [`safe.u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/safe/u128.ts) unsigned type (tested) - [ ] `safe.u256` unsigned type - [ ] `safe.i128` signed type - [ ] `safe.i256` signed type --- - [ ] [`fp128<Q>`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/fixed/fp128.ts) generic fixed point signed type٭ (very basic for now) - [ ] `fp256<Q>` generic fixed point signed type٭ --- - [ ] `safe.fp128<Q>` generic fixed point signed type٭ - [ ] `safe.fp256<Q>` generic fixed point signed type٭ ٭ _typename_ `Q` _is a type representing the count of fractional bits_ # ansi-colors [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/ansi-colors.svg?style=flat)](https://www.npmjs.com/package/ansi-colors) [![NPM monthly downloads](https://img.shields.io/npm/dm/ansi-colors.svg?style=flat)](https://npmjs.org/package/ansi-colors) [![NPM total downloads](https://img.shields.io/npm/dt/ansi-colors.svg?style=flat)](https://npmjs.org/package/ansi-colors) [![Linux Build Status](https://img.shields.io/travis/doowb/ansi-colors.svg?style=flat&label=Travis)](https://travis-ci.org/doowb/ansi-colors) > Easily add ANSI colors to your text and symbols in the terminal. A faster drop-in replacement for chalk, kleur and turbocolor (without the dependencies and rendering bugs). Please consider following this project's author, [Brian Woodward](https://github.com/doowb), and consider starring the project to show your :heart: and support.
## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save ansi-colors ``` ![image](https://user-images.githubusercontent.com/383994/39635445-8a98a3a6-4f8b-11e8-89c1-068c45d4fff8.png) ## Why use this? ansi-colors is _the fastest Node.js library for terminal styling_. A more performant drop-in replacement for chalk, with no dependencies. * _Blazing fast_ - Fastest terminal styling library in node.js, 10-20x faster than chalk! * _Drop-in replacement_ for [chalk](https://github.com/chalk/chalk). * _No dependencies_ (Chalk has 7 dependencies in its tree!) * _Safe_ - Does not modify the `String.prototype` like [colors](https://github.com/Marak/colors.js). * Supports [nested colors](#nested-colors), **and does not have the [nested styling bug](#nested-styling-bug) that is present in [colorette](https://github.com/jorgebucaran/colorette), [chalk](https://github.com/chalk/chalk), and [kleur](https://github.com/lukeed/kleur)**. * Supports [chained colors](#chained-colors). * [Toggle color support](#toggle-color-support) on or off. ## Usage ```js const c = require('ansi-colors'); console.log(c.red('This is a red string!')); console.log(c.green('This is a green string!')); console.log(c.cyan('This is a cyan string!')); console.log(c.yellow('This is a yellow string!')); ``` ![image](https://user-images.githubusercontent.com/383994/39653848-a38e67da-4fc0-11e8-89ae-98c65ebe9dcf.png) ## Chained colors ```js console.log(c.bold.red('this is a bold red message')); console.log(c.bold.yellow.italic('this is a bold yellow italicized message')); console.log(c.green.bold.underline('this is a bold green underlined message')); ``` ![image](https://user-images.githubusercontent.com/383994/39635780-7617246a-4f8c-11e8-89e9-05216cc54e38.png) ## Nested colors ```js console.log(c.yellow(`foo ${c.red.bold('red')} bar ${c.cyan('cyan')} baz`)); ``` ![image](https://user-images.githubusercontent.com/383994/39635817-8ed93d44-4f8c-11e8-8afd-8c3ea35f5fbe.png) ### Nested styling bug `ansi-colors` does not have the nested styling bug found in [colorette](https://github.com/jorgebucaran/colorette), [chalk](https://github.com/chalk/chalk), and [kleur](https://github.com/lukeed/kleur). ```js const { bold, red } = require('ansi-colors'); console.log(bold(`foo ${red.dim('bar')} baz`)); const colorette = require('colorette'); console.log(colorette.bold(`foo ${colorette.red(colorette.dim('bar'))} baz`)); const kleur = require('kleur'); console.log(kleur.bold(`foo ${kleur.red.dim('bar')} baz`)); const chalk = require('chalk'); console.log(chalk.bold(`foo ${chalk.red.dim('bar')} baz`)); ``` **Results in the following** (sans icons and labels) ![image](https://user-images.githubusercontent.com/383994/47280326-d2ee0580-d5a3-11e8-9611-ea6010f0a253.png) ## Toggle color support Easily enable/disable colors. ```js const c = require('ansi-colors'); // disable colors manually c.enabled = false; // or use a library to automatically detect support c.enabled = require('color-support').hasBasic; console.log(c.red('I will only be colored red if the terminal supports colors')); ``` ## Strip ANSI codes Use the `.unstyle` method to strip ANSI codes from a string. ```js console.log(c.unstyle(c.blue.bold('foo bar baz'))); //=> 'foo bar baz' ``` ## Available styles **Note** that bright and bright-background colors are not always supported.
| Colors | Background Colors | Bright Colors | Bright Background Colors | | ------- | ----------------- | ------------- | ------------------------ | | black | bgBlack | blackBright | bgBlackBright | | red | bgRed | redBright | bgRedBright | | green | bgGreen | greenBright | bgGreenBright | | yellow | bgYellow | yellowBright | bgYellowBright | | blue | bgBlue | blueBright | bgBlueBright | | magenta | bgMagenta | magentaBright | bgMagentaBright | | cyan | bgCyan | cyanBright | bgCyanBright | | white | bgWhite | whiteBright | bgWhiteBright | | gray | | | | | grey | | | | _(`gray` is the U.S. spelling, `grey` is more commonly used in Canada and the U.K.)_ ### Style modifiers * dim * **bold** * hidden * _italic_ * underline * inverse * ~~strikethrough~~ * reset ## Aliases Create custom aliases for styles. ```js const colors = require('ansi-colors'); colors.alias('primary', colors.yellow); colors.alias('secondary', colors.bold); console.log(colors.primary.secondary('Foo')); ``` ## Themes A theme is an object of custom aliases. ```js const colors = require('ansi-colors'); colors.theme({ danger: colors.red, dark: colors.dim.gray, disabled: colors.gray, em: colors.italic, heading: colors.bold.underline, info: colors.cyan, muted: colors.dim, primary: colors.blue, strong: colors.bold, success: colors.green, underline: colors.underline, warning: colors.yellow }); // Now, we can use our custom styles alongside the built-in styles! console.log(colors.danger.strong.em('Error!')); console.log(colors.warning('Heads up!')); console.log(colors.info('Did you know...')); console.log(colors.success.bold('It worked!')); ``` ## Performance **Libraries tested** * ansi-colors v3.0.4 * chalk v2.4.1 ### Mac > MacBook Pro, Intel Core i7, 2.3 GHz, 16 GB. **Load time** Time it takes to load the first time `require()` is called: * ansi-colors - `1.915ms` * chalk - `12.437ms` **Benchmarks** ``` # All Colors ansi-colors x 173,851 ops/sec ±0.42% (91 runs sampled) chalk x 9,944 ops/sec ±2.53% (81 runs sampled) # Chained colors ansi-colors x 20,791 ops/sec ±0.60% (88 runs sampled) chalk x 2,111 ops/sec ±2.34% (83 runs sampled) # Nested colors ansi-colors x 59,304 ops/sec ±0.98% (92 runs sampled) chalk x 4,590 ops/sec ±2.08% (82 runs sampled) ``` ### Windows > Windows 10, Intel Core i7-7700k CPU @ 4.2 GHz, 32 GB **Load time** Time it takes to load the first time `require()` is called: * ansi-colors - `1.494ms` * chalk - `11.523ms` **Benchmarks** ``` # All Colors ansi-colors x 193,088 ops/sec ±0.51% (95 runs sampled) chalk x 9,612 ops/sec ±3.31% (77 runs sampled) # Chained colors ansi-colors x 26,093 ops/sec ±1.13% (94 runs sampled) chalk x 2,267 ops/sec ±2.88% (80 runs sampled) # Nested colors ansi-colors x 67,747 ops/sec ±0.49% (93 runs sampled) chalk x 4,446 ops/sec ±3.01% (82 runs sampled) ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly.
Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [ansi-wrap](https://www.npmjs.com/package/ansi-wrap): Create ansi colors by passing the open and close codes. | [homepage](https://github.com/jonschlinkert/ansi-wrap "Create ansi colors by passing the open and close codes.") * [strip-color](https://www.npmjs.com/package/strip-color): Strip ANSI color codes from a string. No dependencies. | [homepage](https://github.com/jonschlinkert/strip-color "Strip ANSI color codes from a string. No dependencies.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 48 | [jonschlinkert](https://github.com/jonschlinkert) | | 42 | [doowb](https://github.com/doowb) | | 6 | [lukeed](https://github.com/lukeed) | | 2 | [Silic0nS0ldier](https://github.com/Silic0nS0ldier) | | 1 | [dwieeb](https://github.com/dwieeb) | | 1 | [jorgebucaran](https://github.com/jorgebucaran) | | 1 | [madhavarshney](https://github.com/madhavarshney) | | 1 | [chapterjason](https://github.com/chapterjason) | ### Author **Brian Woodward** * [GitHub Profile](https://github.com/doowb) * [Twitter Profile](https://twitter.com/doowb) * [LinkedIn Profile](https://linkedin.com/in/woodwardbrian) ### License Copyright © 2019, [Brian Woodward](https://github.com/doowb). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on July 01, 2019._ Compiler frontend for node.js ============================= Usage ----- For an up to date list of available command line options, see: ``` $> asc --help ``` API --- The API accepts the same options as the CLI but also lets you override stdout and stderr and/or provide a callback. Example: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { asc.main([ "myModule.ts", "--binaryFile", "myModule.wasm", "--optimize", "--sourceMap", "--measure" ], { stdout: process.stdout, stderr: process.stderr }, function(err) { if (err) throw err; ... }); }); ``` Available command line options can also be obtained programmatically: ```js const options = require("assemblyscript/cli/asc.json"); ... ``` You can also compile a source string directly, for example in a browser environment: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { const { binary, text, stdout, stderr } = asc.compileString(`...`, { optimize: 2 }); }); ... ``` # levn [![Build Status](https://travis-ci.org/gkz/levn.png)](https://travis-ci.org/gkz/levn) <a name="levn" /> __Light ECMAScript (JavaScript) Value Notation__ Levn is a library which allows you to parse a string into a JavaScript value based on an expected type. It is meant for short amounts of human entered data (eg. config files, command line arguments). Levn aims to concisely describe JavaScript values in text, and allow for the extraction and validation of those values. Levn uses [type-check](https://github.com/gkz/type-check) for its type format, and to validate the results. MIT license. Version 0.4.1. __How is this different than JSON?__ levn is meant to be written by humans only, is (due to the previous point) much more concise, can be validated against supplied types, has regex and date literals, and can easily be extended with custom types. 
On the other hand, it is probably slower and thus less efficient at transporting large amounts of data, which is fine since this is not its purpose. Install with npm: `npm install levn`. For updates on levn, [follow me on twitter](https://twitter.com/gkzahariev). ## Quick Examples ```js var parse = require('levn').parse; parse('Number', '2'); // 2 parse('String', '2'); // '2' parse('String', 'levn'); // 'levn' parse('String', 'a b'); // 'a b' parse('Boolean', 'true'); // true parse('Date', '#2011-11-11#'); // (Date object) parse('Date', '2011-11-11'); // (Date object) parse('RegExp', '/[a-z]/gi'); // /[a-z]/gi parse('RegExp', 're'); // /re/ parse('Int', '2'); // 2 parse('Number | String', 'str'); // 'str' parse('Number | String', '2'); // 2 parse('[Number]', '[1,2,3]'); // [1,2,3] parse('(String, Boolean)', '(hi, false)'); // ['hi', false] parse('{a: String, b: Number}', '{a: str, b: 2}'); // {a: 'str', b: 2} // at the top level, you can omit surrounding delimiters parse('[Number]', '1,2,3'); // [1,2,3] parse('(String, Boolean)', 'hi, false'); // ['hi', false] parse('{a: String, b: Number}', 'a: str, b: 2'); // {a: 'str', b: 2} // wildcard - auto choose type parse('*', '[hi,(null,[42]),{k: true}]'); // ['hi', [null, [42]], {k: true}] ``` ## Usage `require('levn');` returns an object that exposes three properties. `VERSION` is the current version of the library as a string. `parse` and `parsedTypeParse` are functions. ```js // parse(type, input, options); parse('[Number]', '1,2,3'); // [1, 2, 3] // parsedTypeParse(parsedType, input, options); var parsedType = require('type-check').parseType('[Number]'); parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3] ``` ### parse(type, input, options) `parse` casts the string `input` into a JavaScript value according to the specified `type` in the [type format](https://github.com/gkz/type-check#type-format) (taking into account the optional `options`) and returns the resulting JavaScript value. ##### arguments * type - `String` - the type written in the [type format](https://github.com/gkz/type-check#type-format) which to check against * input - `String` - the value written in the [levn format](#levn-format) * options - `Maybe Object` - an optional parameter specifying additional [options](#options) ##### returns `*` - the resulting JavaScript value ##### example ```js parse('[Number]', '1,2,3'); // [1, 2, 3] ``` ### parsedTypeParse(parsedType, input, options) `parsedTypeParse` casts the string `input` into a JavaScript value according to the specified `type` which has already been parsed (taking into account the optional `options`) and returns the resulting JavaScript value. You can parse a type using the [type-check](https://github.com/gkz/type-check) library's `parseType` function. ##### arguments * type - `Object` - the type in the parsed type format which to check against * input - `String` - the value written in the [levn format](#levn-format) * options - `Maybe Object` - an optional parameter specifying additional [options](#options) ##### returns `*` - the resulting JavaScript value ##### example ```js var parsedType = require('type-check').parseType('[Number]'); parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3] ``` ## Levn Format Levn can use the type information you provide to choose the appropriate value to produce from the input. For the same input, it will choose a different output value depending on the type provided. For example, `parse('Number', '2')` will produce the number `2`, but `parse('String', '2')` will produce the string `"2"`.
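For instance, reusing calls from the quick examples above, the same literal text yields different values depending on the expected type:

```ts
var parse = require('levn').parse;

parse('Number', '2');         // 2        (cast to a number)
parse('String', '2');         // '2'      (kept as a string)
parse('Date', '2011-11-11');  // Date object (the surrounding #s may be omitted)
parse('RegExp', 're');        // /re/     (the surrounding /s may be omitted)
```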
If you do not provide type information, and simply use `*`, levn will parse the input according to the unambiguous "explicit" mode, which we will now detail - you can also set the `explicit` option to true manually in the [options](#options). * `"string"`, `'string'` are parsed as a String, eg. `"a msg"` is `"a msg"` * `#date#` is parsed as a Date, eg. `#2011-11-11#` is `new Date('2011-11-11')` * `/regexp/flags` is parsed as a RegExp, eg. `/re/gi` is `/re/gi` * `undefined`, `null`, `NaN`, `true`, and `false` are all their JavaScript equivalents * `[element1, element2, etc]` is an Array, and the casting procedure is recursively applied to each element. Eg. `[1,2,3]` is `[1,2,3]`. * `(element1, element2, etc)` is a tuple, and the casting procedure is recursively applied to each element. Eg. `(1, a)` is `(1, a)` (is `[1, 'a']`). * `{key1: val1, key2: val2, ...}` is an Object, and the casting procedure is recursively applied to each property. Eg. `{a: 1, b: 2}` is `{a: 1, b: 2}`. * Any text which does not fall under the above, and which does not contain special characters (`[``]``(``)``{``}``:``,`) is a string, eg. `$12- blah` is `"$12- blah"`. If you do provide type information, you can make your input more concise as the program already has some information about what it expects. Please see the [type format](https://github.com/gkz/type-check#type-format) section of [type-check](https://github.com/gkz/type-check) for more information about how to specify types. There are some rules about what levn can do with the information: * If a String is expected, and only a String, all characters of the input (including any special ones) will become part of the output. Eg. `[({})]` is `"[({})]"`, and `"hi"` is `'"hi"'`. * If a Date is expected, the surrounding `#` can be omitted from date literals. Eg. `2011-11-11` is `new Date('2011-11-11')`. * If a RegExp is expected, no flags need to be specified, and if the regex does not use any of the special characters, the opening and closing `/` can be omitted - this will have the effect of setting the source of the regex to the input. Eg. `regex` is `/regex/`. * If an Array is expected, and it is the root node (at the top level), the opening `[` and closing `]` can be omitted. Eg. `1,2,3` is `[1,2,3]`. * If a tuple is expected, and it is the root node (at the top level), the opening `(` and closing `)` can be omitted. Eg. `1, a` is `(1, a)` (is `[1, 'a']`). * If an Object is expected, and it is the root node (at the top level), the opening `{` and closing `}` can be omitted. Eg. `a: 1, b: 2` is `{a: 1, b: 2}`. If you list multiple types (eg. `Number | String`), it will first attempt to cast to the first type and then validate - if the validation fails it will move on to the next type and so forth, left to right. You must be careful as some types will succeed with any input, such as String. Thus put String at the end of your list. In non-explicit mode, Date and RegExp will succeed with a large variety of input - also be careful with these and list them near the end if not last in your list. Whitespace between special characters and elements is inconsequential. ## Options Options is an object. It is an optional parameter to the `parse` and `parsedTypeParse` functions. ### Explicit A `Boolean`. By default it is `false`. __Example:__ ```js parse('RegExp', 're', {explicit: false}); // /re/ parse('RegExp', 're', {explicit: true}); // Error: ... does not type check...
parse('RegExp | String', 're', {explicit: true}); // 're' ``` `explicit` sets whether to be in explicit mode or not. Using `*` automatically activates explicit mode. For more information, read the [levn format](#levn-format) section. ### customTypes An `Object`. Empty `{}` by default. __Example:__ ```js var options = { customTypes: { Even: { typeOf: 'Number', validate: function (x) { return x % 2 === 0; }, cast: function (x) { return {type: 'Just', value: parseInt(x)}; } } } } parse('Even', '2', options); // 2 parse('Even', '3', options); // Error: Value: "3" does not type check... ``` __Another Example:__ ```js function Person(name, age){ this.name = name; this.age = age; } var options = { customTypes: { Person: { typeOf: 'Object', validate: function (x) { return x instanceof Person; }, cast: function (value, options, typesCast) { var name, age; if ({}.toString.call(value).slice(8, -1) !== 'Object') { return {type: 'Nothing'}; } name = typesCast(value.name, [{type: 'String'}], options); age = typesCast(value.age, [{type: 'Number'}], options); return {type: 'Just', value: new Person(name, age)}; } } } } parse('Person', '{name: Laura, age: 25}', options); // Person {name: 'Laura', age: 25} ``` `customTypes` is an object whose keys are the names of the types, and whose values are an object with three properties, `typeOf`, `validate`, and `cast`. For more information about `typeOf` and `validate`, please see the [custom types](https://github.com/gkz/type-check#custom-types) section of type-check. `cast` is a function which receives three arguments, the value in question, options, and the typesCast function. In `cast`, attempt to cast the value into the specified type. If you are successful, return an object in the format `{type: 'Just', value: CAST-VALUE}`; if you know it won't work, return `{type: 'Nothing'}`. You can use the `typesCast` function to cast any child values. Remember to pass `options` to it. In your function you can also check for `options.explicit` and act accordingly. ## Technical About `levn` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It uses [type-check](https://github.com/gkz/type-check) to both parse types and validate values. It also uses the [prelude.ls](http://preludels.com/) library. # json-schema-traverse Traverse JSON Schema passing each schema object to callback [![build](https://github.com/epoberezkin/json-schema-traverse/workflows/build/badge.svg)](https://github.com/epoberezkin/json-schema-traverse/actions?query=workflow%3Abuild) [![npm](https://img.shields.io/npm/v/json-schema-traverse)](https://www.npmjs.com/package/json-schema-traverse) [![coverage](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master) ## Install ``` npm install json-schema-traverse ``` ## Usage ```javascript const traverse = require('json-schema-traverse'); const schema = { properties: { foo: {type: 'string'}, bar: {type: 'integer'} } }; traverse(schema, {cb}); // cb is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // Or: traverse(schema, {cb: {pre, post}}); // pre is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // // post is called 3 times with: // 1. {type: 'string'} // 2. {type: 'integer'} // 3.
root schema ``` Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved, they are passed as is. Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed. Callback is passed these parameters: - _schema_: the current schema object - _JSON pointer_: from the root schema to the current schema object - _root schema_: the schema passed to `traverse` object - _parent JSON pointer_: from the root schema to the parent schema object (see below) - _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.) - _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema - _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'` ## Traverse objects in all unknown keywords ```javascript const traverse = require('json-schema-traverse'); const schema = { mySchema: { minimum: 1, maximum: 2 } }; traverse(schema, {allKeys: true, cb}); // cb is called 2 times with: // 1. root schema // 2. mySchema ``` Without option `allKeys: true` callback will be called only with root schema. ## Enterprise support json-schema-traverse package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-json-schema-traverse?utm_source=npm-json-schema-traverse&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. ## License [MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE) # balanced-match Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well! 
[![build status](https://secure.travis-ci.org/juliangruber/balanced-match.svg)](http://travis-ci.org/juliangruber/balanced-match) [![downloads](https://img.shields.io/npm/dm/balanced-match.svg)](https://www.npmjs.org/package/balanced-match) [![testling badge](https://ci.testling.com/juliangruber/balanced-match.png)](https://ci.testling.com/juliangruber/balanced-match) ## Example Get the first matching pair of braces: ```js var balanced = require('balanced-match'); console.log(balanced('{', '}', 'pre{in{nested}}post')); console.log(balanced('{', '}', 'pre{first}between{second}post')); console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post')); ``` The matches are: ```bash $ node example.js { start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' } { start: 3, end: 9, pre: 'pre', body: 'first', post: 'between{second}post' } { start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' } ``` ## API ### var m = balanced(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an object with those keys: * **start** the index of the first match of `a` * **end** the index of the matching `b` * **pre** the preamble, `a` and `b` not included * **body** the match, `a` and `b` not included * **post** the postscript, `a` and `b` not included If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`. ### var r = balanced.range(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an array with indexes: `[ <a index>, <b index> ]`. If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`. ## Installation With [npm](https://npmjs.org) do: ```bash npm install balanced-match ``` ## Security contact information To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. These files are compiled dot templates from dot folder. Do NOT edit them directly, edit the templates and run `npm run build` from main ajv folder. 
iMurmurHash.js ============== An incremental implementation of the MurmurHash3 (32-bit) hashing algorithm for JavaScript based on [Gary Court's implementation](https://github.com/garycourt/murmurhash-js) with [kazuyukitanimura's modifications](https://github.com/kazuyukitanimura/murmurhash-js). This version works significantly faster than the non-incremental version if you need to hash many small strings into a single hash, since string concatenation (to build the single string to pass the non-incremental version) is fairly costly. In one case tested, using the incremental version was about 50% faster than concatenating 5-10 strings and then hashing. Installation ------------ To use iMurmurHash in the browser, [download the latest version](https://raw.github.com/jensyt/imurmurhash-js/master/imurmurhash.min.js) and include it as a script on your site. ```html <script type="text/javascript" src="/scripts/imurmurhash.min.js"></script> <script> // Your code here, access iMurmurHash using the global object MurmurHash3 </script> ``` --- To use iMurmurHash in Node.js, install the module using NPM: ```bash npm install imurmurhash ``` Then simply include it in your scripts: ```javascript MurmurHash3 = require('imurmurhash'); ``` Quick Example ------------- ```javascript // Create the initial hash var hashState = MurmurHash3('string'); // Incrementally add text hashState.hash('more strings'); hashState.hash('even more strings'); // All calls can be chained if desired hashState.hash('and').hash('some').hash('more'); // Get a result hashState.result(); // returns 0xe4ccfe6b ``` Functions --------- ### MurmurHash3 ([string], [seed]) Get a hash state object, optionally initialized with the given _string_ and _seed_. _Seed_ must be a positive integer if provided. Calling this function without the `new` keyword will return a cached state object that has been reset. This is safe to use as long as the object is only used from a single thread and no other hashes are created while operating on this one. If this constraint cannot be met, you can use `new` to create a new state object. For example: ```javascript // Use the cached object, calling the function again will return the same // object (but reset, so the current state would be lost) hashState = MurmurHash3(); ... // Create a new object that can be safely used however you wish. Calling the // function again will simply return a new state object, and no state loss // will occur, at the cost of creating more objects. hashState = new MurmurHash3(); ``` Both methods can be mixed however you like if you have different use cases. --- ### MurmurHash3.prototype.hash (string) Incrementally add _string_ to the hash. This can be called as many times as you want for the hash state object, including after a call to `result()`. Returns `this` so calls can be chained. --- ### MurmurHash3.prototype.result () Get the result of the hash as a 32-bit positive integer. This performs the tail and finalizer portions of the algorithm, but does not store the result in the state object. This means that it is perfectly safe to get results and then continue adding strings via `hash`. 
```javascript // Do the whole string at once MurmurHash3('this is a test string').result(); // 0x70529328 // Do part of the string, get a result, then the other part var m = MurmurHash3('this is a'); m.result(); // 0xbfc4f834 m.hash(' test string').result(); // 0x70529328 (same as above) ``` --- ### MurmurHash3.prototype.reset ([seed]) Reset the state object for reuse, optionally using the given _seed_ (defaults to 0 like the constructor). Returns `this` so calls can be chained. --- License (MIT) ------------- Copyright (c) 2013 Gary Court, Jens Taylor Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # AssemblyScript Loader A convenient loader for [AssemblyScript](https://assemblyscript.org) modules. Demangles module exports to a friendly object structure compatible with TypeScript definitions and provides useful utility to read/write data from/to memory. [Documentation](https://assemblyscript.org/loader.html) # lodash.clonedeep v4.5.0 The [lodash](https://lodash.com/) method `_.cloneDeep` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.clonedeep ``` In Node.js: ```js var cloneDeep = require('lodash.clonedeep'); ``` See the [documentation](https://lodash.com/docs#cloneDeep) or [package source](https://github.com/lodash/lodash/blob/4.5.0-npm-packages/lodash.clonedeep) for more details. # regexpp [![npm version](https://img.shields.io/npm/v/regexpp.svg)](https://www.npmjs.com/package/regexpp) [![Downloads/month](https://img.shields.io/npm/dm/regexpp.svg)](http://www.npmtrends.com/regexpp) [![Build Status](https://github.com/mysticatea/regexpp/workflows/CI/badge.svg)](https://github.com/mysticatea/regexpp/actions) [![codecov](https://codecov.io/gh/mysticatea/regexpp/branch/master/graph/badge.svg)](https://codecov.io/gh/mysticatea/regexpp) [![Dependency Status](https://david-dm.org/mysticatea/regexpp.svg)](https://david-dm.org/mysticatea/regexpp) A regular expression parser for ECMAScript. ## 💿 Installation ```bash $ npm install regexpp ``` - require Node.js 8 or newer. ## 📖 Usage ```ts import { AST, RegExpParser, RegExpValidator, RegExpVisitor, parseRegExpLiteral, validateRegExpLiteral, visitRegExpAST } from "regexpp" ``` ### parseRegExpLiteral(source, options?) Parse a given regular expression literal then make AST object. This is equivalent to `new RegExpParser(options).parseLiteral(source)`. - **Parameters:** - `source` (`string | RegExp`) The source code to parse. - `options?` ([`RegExpParser.Options`]) The options to parse. 
- **Return:** - The AST of the regular expression. ### validateRegExpLiteral(source, options?) Validate a given regular expression literal. This is equivalent to `new RegExpValidator(options).validateLiteral(source)`. - **Parameters:** - `source` (`string`) The source code to validate. - `options?` ([`RegExpValidator.Options`]) The options to validate. ### visitRegExpAST(ast, handlers) Visit each node of a given AST. This is equivalent to `new RegExpVisitor(handlers).visit(ast)`. - **Parameters:** - `ast` ([`AST.Node`]) The AST to visit. - `handlers` ([`RegExpVisitor.Handlers`]) The callbacks. ### RegExpParser #### new RegExpParser(options?) - **Parameters:** - `options?` ([`RegExpParser.Options`]) The options to parse. #### parser.parseLiteral(source, start?, end?) Parse a regular expression literal. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"/abc/g"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - **Return:** - The AST of the regular expression. #### parser.parsePattern(source, start?, end?, uFlag?) Parse a regular expression pattern. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"abc"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - `uFlag?` (`boolean`) The flag to enable Unicode mode. - **Return:** - The AST of the regular expression pattern. #### parser.parseFlags(source, start?, end?) Parse regular expression flags. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"gim"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - **Return:** - The AST of the regular expression flags. ### RegExpValidator #### new RegExpValidator(options) - **Parameters:** - `options` ([`RegExpValidator.Options`]) The options to validate. #### validator.validateLiteral(source, start, end) Validate a regular expression literal. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. #### validator.validatePattern(source, start, end, uFlag) Validate a regular expression pattern. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - `uFlag?` (`boolean`) The flag to enable Unicode mode. #### validator.validateFlags(source, start, end) Validate regular expression flags. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. ### RegExpVisitor #### new RegExpVisitor(handlers) - **Parameters:** - `handlers` ([`RegExpVisitor.Handlers`]) The callbacks. #### visitor.visit(ast) Visit each node of a given AST. - **Parameters:** - `ast` ([`AST.Node`]) The AST to visit. ## 📰 Changelog - [GitHub Releases](https://github.com/mysticatea/regexpp/releases) ## 🍻 Contributing Contributions welcome! Please use GitHub's Issues/PRs. ### Development Tools - `npm test` runs tests and measures coverage.
- `npm run build` compiles TypeScript source code to `index.js`, `index.js.map`, and `index.d.ts`. - `npm run clean` removes the temporary files which are created by `npm test` and `npm run build`. - `npm run lint` runs ESLint. - `npm run update:test` updates test fixtures. - `npm run update:ids` updates `src/unicode/ids.ts`. - `npm run watch` runs tests with `--watch` option. [`AST.Node`]: src/ast.ts#L4 [`RegExpParser.Options`]: src/parser.ts#L539 [`RegExpValidator.Options`]: src/validator.ts#L127 [`RegExpVisitor.Handlers`]: src/visitor.ts#L204 ### Esrecurse [![Build Status](https://travis-ci.org/estools/esrecurse.svg?branch=master)](https://travis-ci.org/estools/esrecurse) Esrecurse ([esrecurse](https://github.com/estools/esrecurse)) is [ECMAScript](https://www.ecma-international.org/publications/standards/Ecma-262.htm) recursive traversing functionality. ### Example Usage The following code will output all variables declared at the root of a file. ```javascript esrecurse.visit(ast, { XXXStatement: function (node) { this.visit(node.left); // do something... this.visit(node.right); } }); ``` We can use `Visitor` instance. ```javascript var visitor = new esrecurse.Visitor({ XXXStatement: function (node) { this.visit(node.left); // do something... this.visit(node.right); } }); visitor.visit(ast); ``` We can inherit `Visitor` instance easily. ```javascript class Derived extends esrecurse.Visitor { constructor() { super(null); } XXXStatement(node) { } } ``` ```javascript function DerivedVisitor() { esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */); } util.inherits(DerivedVisitor, esrecurse.Visitor); DerivedVisitor.prototype.XXXStatement = function (node) { this.visit(node.left); // do something... this.visit(node.right); }; ``` And you can invoke default visiting operation inside custom visit operation. ```javascript function DerivedVisitor() { esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */); } util.inherits(DerivedVisitor, esrecurse.Visitor); DerivedVisitor.prototype.XXXStatement = function (node) { // do something... this.visitChildren(node); }; ``` The `childVisitorKeys` option does customize the behaviour of `this.visitChildren(node)`. We can use user-defined node types. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { // Extending the existing traversing rules. childVisitorKeys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } } ); ``` We can use the `fallback` option as well. If the `fallback` option is `"iteration"`, `esrecurse` would visit all enumerable properties of unknown nodes. Please note circular references cause the stack overflow. AST might have circular references in additional properties for some purpose (e.g. `node.parent`). ```javascript esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { fallback: 'iteration' } ); ``` If the `fallback` option is a function, `esrecurse` calls this function to determine the enumerable properties of unknown nodes. Please note circular references cause the stack overflow. 
AST might have circular references in additional properties for some purpose (e.g. `node.parent`). ```javascript esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { fallback: function (node) { return Object.keys(node).filter(function(key) { return key !== 'argument' }); } } ); ``` ### License Copyright (C) 2014 [Yusuke Suzuki](https://github.com/Constellation) (twitter: [@Constellation](https://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. 
```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. 
For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. 
However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. 
Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. 
This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). 
Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) # axios [![npm version](https://img.shields.io/npm/v/axios.svg?style=flat-square)](https://www.npmjs.org/package/axios) [![build status](https://img.shields.io/travis/axios/axios/master.svg?style=flat-square)](https://travis-ci.org/axios/axios) [![code coverage](https://img.shields.io/coveralls/mzabriskie/axios.svg?style=flat-square)](https://coveralls.io/r/mzabriskie/axios) [![install size](https://packagephobia.now.sh/badge?p=axios)](https://packagephobia.now.sh/result?p=axios) [![npm downloads](https://img.shields.io/npm/dm/axios.svg?style=flat-square)](http://npm-stat.com/charts.html?package=axios) [![gitter chat](https://img.shields.io/gitter/room/mzabriskie/axios.svg?style=flat-square)](https://gitter.im/mzabriskie/axios) [![code helpers](https://www.codetriage.com/axios/axios/badges/users.svg)](https://www.codetriage.com/axios/axios) Promise based HTTP client for the browser and node.js ## Features - Make [XMLHttpRequests](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) from the browser - Make [http](http://nodejs.org/api/http.html) requests from node.js - Supports the [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) API - Intercept request and response - Transform request and response data - Cancel requests - Automatic transforms for JSON data - Client side support for protecting against [XSRF](http://en.wikipedia.org/wiki/Cross-site_request_forgery) ## Browser Support ![Chrome](https://raw.github.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.github.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![Safari](https://raw.github.com/alrra/browser-logos/master/src/safari/safari_48x48.png) | ![Opera](https://raw.github.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Edge](https://raw.github.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![IE](https://raw.github.com/alrra/browser-logos/master/src/archive/internet-explorer_9-11/internet-explorer_9-11_48x48.png) | --- | --- | --- | --- | --- | --- | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | 11 ✔ | [![Browser Matrix](https://saucelabs.com/open_sauce/build_matrix/axios.svg)](https://saucelabs.com/u/axios) ## Installing Using npm: ```bash $ npm install axios ``` Using bower: ```bash $ bower install axios ``` Using yarn: ```bash $ yarn add axios ``` Using cdn: ```html <script src="https://unpkg.com/axios/dist/axios.min.js"></script> ``` ## Example ### note: CommonJS usage In order to gain the TypeScript typings (for intellisense / autocomplete) while using CommonJS imports with `require()` use the following approach: ```js const axios = require('axios').default; // axios.<method> will now provide autocomplete and parameter typings ``` Performing a `GET` request ```js const axios = require('axios'); // Make a request for a user with a given ID axios.get('/user?ID=12345') .then(function (response) { // handle success console.log(response); }) .catch(function (error) { // handle error console.log(error); }) .finally(function () { // always executed }); // Optionally the request above could also be done as axios.get('/user', { params: { ID: 12345 } }) 
.then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }) .finally(function () { // always executed }); // Want to use async/await? Add the `async` keyword to your outer function/method. async function getUser() { try { const response = await axios.get('/user?ID=12345'); console.log(response); } catch (error) { console.error(error); } } ``` > **NOTE:** `async/await` is part of ECMAScript 2017 and is not supported in Internet > Explorer and older browsers, so use with caution. Performing a `POST` request ```js axios.post('/user', { firstName: 'Fred', lastName: 'Flintstone' }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }); ``` Performing multiple concurrent requests ```js function getUserAccount() { return axios.get('/user/12345'); } function getUserPermissions() { return axios.get('/user/12345/permissions'); } axios.all([getUserAccount(), getUserPermissions()]) .then(axios.spread(function (acct, perms) { // Both requests are now complete })); ``` ## axios API Requests can be made by passing the relevant config to `axios`. ##### axios(config) ```js // Send a POST request axios({ method: 'post', url: '/user/12345', data: { firstName: 'Fred', lastName: 'Flintstone' } }); ``` ```js // GET request for remote image axios({ method: 'get', url: 'http://bit.ly/2mTM3nY', responseType: 'stream' }) .then(function (response) { response.data.pipe(fs.createWriteStream('ada_lovelace.jpg')) }); ``` ##### axios(url[, config]) ```js // Send a GET request (default method) axios('/user/12345'); ``` ### Request method aliases For convenience aliases have been provided for all supported request methods. ##### axios.request(config) ##### axios.get(url[, config]) ##### axios.delete(url[, config]) ##### axios.head(url[, config]) ##### axios.options(url[, config]) ##### axios.post(url[, data[, config]]) ##### axios.put(url[, data[, config]]) ##### axios.patch(url[, data[, config]]) ###### NOTE When using the alias methods `url`, `method`, and `data` properties don't need to be specified in config. ### Concurrency Helper functions for dealing with concurrent requests. ##### axios.all(iterable) ##### axios.spread(callback) ### Creating an instance You can create a new instance of axios with a custom config. ##### axios.create([config]) ```js const instance = axios.create({ baseURL: 'https://some-domain.com/api/', timeout: 1000, headers: {'X-Custom-Header': 'foobar'} }); ``` ### Instance methods The available instance methods are listed below. The specified config will be merged with the instance config. ##### axios#request(config) ##### axios#get(url[, config]) ##### axios#delete(url[, config]) ##### axios#head(url[, config]) ##### axios#options(url[, config]) ##### axios#post(url[, data[, config]]) ##### axios#put(url[, data[, config]]) ##### axios#patch(url[, data[, config]]) ##### axios#getUri([config]) ## Request Config These are the available config options for making requests. Only the `url` is required. Requests will default to `GET` if `method` is not specified. ```js { // `url` is the server URL that will be used for the request url: '/user', // `method` is the request method to be used when making the request method: 'get', // default // `baseURL` will be prepended to `url` unless `url` is absolute. // It can be convenient to set `baseURL` for an instance of axios to pass relative URLs // to methods of that instance. 
baseURL: 'https://some-domain.com/api/', // `transformRequest` allows changes to the request data before it is sent to the server // This is only applicable for request methods 'PUT', 'POST', 'PATCH' and 'DELETE' // The last function in the array must return a string or an instance of Buffer, ArrayBuffer, // FormData or Stream // You may modify the headers object. transformRequest: [function (data, headers) { // Do whatever you want to transform the data return data; }], // `transformResponse` allows changes to the response data to be made before // it is passed to then/catch transformResponse: [function (data) { // Do whatever you want to transform the data return data; }], // `headers` are custom headers to be sent headers: {'X-Requested-With': 'XMLHttpRequest'}, // `params` are the URL parameters to be sent with the request // Must be a plain object or a URLSearchParams object params: { ID: 12345 }, // `paramsSerializer` is an optional function in charge of serializing `params` // (e.g. https://www.npmjs.com/package/qs, http://api.jquery.com/jquery.param/) paramsSerializer: function (params) { return Qs.stringify(params, {arrayFormat: 'brackets'}) }, // `data` is the data to be sent as the request body // Only applicable for request methods 'PUT', 'POST', and 'PATCH' // When no `transformRequest` is set, must be of one of the following types: // - string, plain object, ArrayBuffer, ArrayBufferView, URLSearchParams // - Browser only: FormData, File, Blob // - Node only: Stream, Buffer data: { firstName: 'Fred' }, // syntax alternative to send data into the body // method post // only the value is sent, not the key data: 'Country=Brasil&City=Belo Horizonte', // `timeout` specifies the number of milliseconds before the request times out. // If the request takes longer than `timeout`, the request will be aborted. timeout: 1000, // default is `0` (no timeout) // `withCredentials` indicates whether or not cross-site Access-Control requests // should be made using credentials withCredentials: false, // default // `adapter` allows custom handling of requests which makes testing easier. // Return a promise and supply a valid response (see lib/adapters/README.md). adapter: function (config) { /* ... */ }, // `auth` indicates that HTTP Basic auth should be used, and supplies credentials. // This will set an `Authorization` header, overwriting any existing // `Authorization` custom headers you have set using `headers`. // Please note that only HTTP Basic auth is configurable through this parameter. // For Bearer tokens and such, use `Authorization` custom headers instead. 
auth: { username: 'janedoe', password: 's00pers3cret' }, // `responseType` indicates the type of data that the server will respond with // options are: 'arraybuffer', 'document', 'json', 'text', 'stream' // browser only: 'blob' responseType: 'json', // default // `responseEncoding` indicates encoding to use for decoding responses // Note: Ignored for `responseType` of 'stream' or client-side requests responseEncoding: 'utf8', // default // `xsrfCookieName` is the name of the cookie to use as a value for xsrf token xsrfCookieName: 'XSRF-TOKEN', // default // `xsrfHeaderName` is the name of the http header that carries the xsrf token value xsrfHeaderName: 'X-XSRF-TOKEN', // default // `onUploadProgress` allows handling of progress events for uploads onUploadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `onDownloadProgress` allows handling of progress events for downloads onDownloadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `maxContentLength` defines the max size of the http response content in bytes allowed maxContentLength: 2000, // `validateStatus` defines whether to resolve or reject the promise for a given // HTTP response status code. If `validateStatus` returns `true` (or is set to `null` // or `undefined`), the promise will be resolved; otherwise, the promise will be // rejected. validateStatus: function (status) { return status >= 200 && status < 300; // default }, // `maxRedirects` defines the maximum number of redirects to follow in node.js. // If set to 0, no redirects will be followed. maxRedirects: 5, // default // `socketPath` defines a UNIX Socket to be used in node.js. // e.g. '/var/run/docker.sock' to send requests to the docker daemon. // Only either `socketPath` or `proxy` can be specified. // If both are specified, `socketPath` is used. socketPath: null, // default // `httpAgent` and `httpsAgent` define a custom agent to be used when performing http // and https requests, respectively, in node.js. This allows options to be added like // `keepAlive` that are not enabled by default. httpAgent: new http.Agent({ keepAlive: true }), httpsAgent: new https.Agent({ keepAlive: true }), // 'proxy' defines the hostname and port of the proxy server. // You can also define your proxy using the conventional `http_proxy` and // `https_proxy` environment variables. If you are using environment variables // for your proxy configuration, you can also define a `no_proxy` environment // variable as a comma-separated list of domains that should not be proxied. // Use `false` to disable proxies, ignoring environment variables. // `auth` indicates that HTTP Basic auth should be used to connect to the proxy, and // supplies credentials. // This will set an `Proxy-Authorization` header, overwriting any existing // `Proxy-Authorization` custom headers you have set using `headers`. proxy: { host: '127.0.0.1', port: 9000, auth: { username: 'mikeymike', password: 'rapunz3l' } }, // `cancelToken` specifies a cancel token that can be used to cancel the request // (see Cancellation section below for details) cancelToken: new CancelToken(function (cancel) { }) } ``` ## Response Schema The response for a request contains the following information. 
```js { // `data` is the response that was provided by the server data: {}, // `status` is the HTTP status code from the server response status: 200, // `statusText` is the HTTP status message from the server response statusText: 'OK', // `headers` the headers that the server responded with // All header names are lower cased headers: {}, // `config` is the config that was provided to `axios` for the request config: {}, // `request` is the request that generated this response // It is the last ClientRequest instance in node.js (in redirects) // and an XMLHttpRequest instance in the browser request: {} } ``` When using `then`, you will receive the response as follows: ```js axios.get('/user/12345') .then(function (response) { console.log(response.data); console.log(response.status); console.log(response.statusText); console.log(response.headers); console.log(response.config); }); ``` When using `catch`, or passing a [rejection callback](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/then) as second parameter of `then`, the response will be available through the `error` object as explained in the [Handling Errors](#handling-errors) section. ## Config Defaults You can specify config defaults that will be applied to every request. ### Global axios defaults ```js axios.defaults.baseURL = 'https://api.example.com'; axios.defaults.headers.common['Authorization'] = AUTH_TOKEN; axios.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded'; ``` ### Custom instance defaults ```js // Set config defaults when creating the instance const instance = axios.create({ baseURL: 'https://api.example.com' }); // Alter defaults after instance has been created instance.defaults.headers.common['Authorization'] = AUTH_TOKEN; ``` ### Config order of precedence Config will be merged with an order of precedence. The order is library defaults found in [lib/defaults.js](https://github.com/axios/axios/blob/master/lib/defaults.js#L28), then `defaults` property of the instance, and finally `config` argument for the request. The latter will take precedence over the former. Here's an example. ```js // Create an instance using the config defaults provided by the library // At this point the timeout config value is `0` as is the default for the library const instance = axios.create(); // Override timeout default for the library // Now all requests using this instance will wait 2.5 seconds before timing out instance.defaults.timeout = 2500; // Override timeout for this request as it's known to take a long time instance.get('/longRequest', { timeout: 5000 }); ``` ## Interceptors You can intercept requests or responses before they are handled by `then` or `catch`. ```js // Add a request interceptor axios.interceptors.request.use(function (config) { // Do something before request is sent return config; }, function (error) { // Do something with request error return Promise.reject(error); }); // Add a response interceptor axios.interceptors.response.use(function (response) { // Any status code that lie within the range of 2xx cause this function to trigger // Do something with response data return response; }, function (error) { // Any status codes that falls outside the range of 2xx cause this function to trigger // Do something with response error return Promise.reject(error); }); ``` If you need to remove an interceptor later you can. 
```js const myInterceptor = axios.interceptors.request.use(function () {/*...*/}); axios.interceptors.request.eject(myInterceptor); ``` You can add interceptors to a custom instance of axios. ```js const instance = axios.create(); instance.interceptors.request.use(function () {/*...*/}); ``` ## Handling Errors ```js axios.get('/user/12345') .catch(function (error) { if (error.response) { // The request was made and the server responded with a status code // that falls out of the range of 2xx console.log(error.response.data); console.log(error.response.status); console.log(error.response.headers); } else if (error.request) { // The request was made but no response was received // `error.request` is an instance of XMLHttpRequest in the browser and an instance of // http.ClientRequest in node.js console.log(error.request); } else { // Something happened in setting up the request that triggered an Error console.log('Error', error.message); } console.log(error.config); }); ``` Using the `validateStatus` config option, you can define HTTP code(s) that should throw an error. ```js axios.get('/user/12345', { validateStatus: function (status) { return status < 500; // Reject only if the status code is greater than or equal to 500 } }) ``` Using `toJSON` you get an object with more information about the HTTP error. ```js axios.get('/user/12345') .catch(function (error) { console.log(error.toJSON()); }); ``` ## Cancellation You can cancel a request using a *cancel token*. > The axios cancel token API is based on the withdrawn [cancelable promises proposal](https://github.com/tc39/proposal-cancelable-promises). You can create a cancel token using the `CancelToken.source` factory as shown below: ```js const CancelToken = axios.CancelToken; const source = CancelToken.source(); axios.get('/user/12345', { cancelToken: source.token }).catch(function (thrown) { if (axios.isCancel(thrown)) { console.log('Request canceled', thrown.message); } else { // handle error } }); axios.post('/user/12345', { name: 'new name' }, { cancelToken: source.token }) // cancel the request (the message parameter is optional) source.cancel('Operation canceled by the user.'); ``` You can also create a cancel token by passing an executor function to the `CancelToken` constructor: ```js const CancelToken = axios.CancelToken; let cancel; axios.get('/user/12345', { cancelToken: new CancelToken(function executor(c) { // An executor function receives a cancel function as a parameter cancel = c; }) }); // cancel the request cancel(); ``` > Note: you can cancel several requests with the same cancel token. ## Using application/x-www-form-urlencoded format By default, axios serializes JavaScript objects to `JSON`. To send data in the `application/x-www-form-urlencoded` format instead, you can use one of the following options. ### Browser In a browser, you can use the [`URLSearchParams`](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams) API as follows: ```js const params = new URLSearchParams(); params.append('param1', 'value1'); params.append('param2', 'value2'); axios.post('/foo', params); ``` > Note that `URLSearchParams` is not supported by all browsers (see [caniuse.com](http://www.caniuse.com/#feat=urlsearchparams)), but there is a [polyfill](https://github.com/WebReflection/url-search-params) available (make sure to polyfill the global environment). 
Alternatively, you can encode data using the [`qs`](https://github.com/ljharb/qs) library: ```js const qs = require('qs'); axios.post('/foo', qs.stringify({ 'bar': 123 })); ``` Or in another way (ES6), ```js import qs from 'qs'; const data = { 'bar': 123 }; const options = { method: 'POST', headers: { 'content-type': 'application/x-www-form-urlencoded' }, data: qs.stringify(data), url, }; axios(options); ``` ### Node.js In node.js, you can use the [`querystring`](https://nodejs.org/api/querystring.html) module as follows: ```js const querystring = require('querystring'); axios.post('http://something.com/', querystring.stringify({ foo: 'bar' })); ``` You can also use the [`qs`](https://github.com/ljharb/qs) library. ###### NOTE The `qs` library is preferable if you need to stringify nested objects, as the `querystring` method has known issues with that use case (https://github.com/nodejs/node-v0.x-archive/issues/1665). ## Semver Until axios reaches a `1.0` release, breaking changes will be released with a new minor version. For example `0.5.1`, and `0.5.4` will have the same API, but `0.6.0` will have breaking changes. ## Promises axios depends on a native ES6 Promise implementation to be [supported](http://caniuse.com/promises). If your environment doesn't support ES6 Promises, you can [polyfill](https://github.com/jakearchibald/es6-promise). ## TypeScript axios includes [TypeScript](http://typescriptlang.org) definitions. ```typescript import axios from 'axios'; axios.get('/user?ID=12345'); ``` ## Resources * [Changelog](https://github.com/axios/axios/blob/master/CHANGELOG.md) * [Upgrade Guide](https://github.com/axios/axios/blob/master/UPGRADE_GUIDE.md) * [Ecosystem](https://github.com/axios/axios/blob/master/ECOSYSTEM.md) * [Contributing Guide](https://github.com/axios/axios/blob/master/CONTRIBUTING.md) * [Code of Conduct](https://github.com/axios/axios/blob/master/CODE_OF_CONDUCT.md) ## Credits axios is heavily inspired by the [$http service](https://docs.angularjs.org/api/ng/service/$http) provided in [Angular](https://angularjs.org/). Ultimately axios is an effort to provide a standalone `$http`-like service for use outside of Angular. ## License [MIT](LICENSE) <p align="center"> <a href="http://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # interpret [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] A dictionary of file extensions and associated module loaders. ## What is it This is used by [Liftoff](http://github.com/tkellen/node-liftoff) to automatically require dependencies for configuration files, and by [rechoir](http://github.com/tkellen/node-rechoir) for registering module loaders. ## API ### extensions Map file types to modules which provide a [require.extensions] loader. 
```js { '.babel.js': [ { module: '@babel/register', register: function(hook) { // register on .js extension due to https://github.com/joyent/node/blob/v0.12.0/lib/module.js#L353 // which only captures the final extension (.babel.js -> .js) hook({ extensions: '.js' }); }, }, { module: 'babel-register', register: function(hook) { hook({ extensions: '.js' }); }, }, { module: 'babel-core/register', register: function(hook) { hook({ extensions: '.js' }); }, }, { module: 'babel/register', register: function(hook) { hook({ extensions: '.js' }); }, }, ], '.babel.ts': [ { module: '@babel/register', register: function(hook) { hook({ extensions: '.ts' }); }, }, ], '.buble.js': 'buble/register', '.cirru': 'cirru-script/lib/register', '.cjsx': 'node-cjsx/register', '.co': 'coco', '.coffee': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'], '.coffee.md': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'], '.csv': 'require-csv', '.eg': 'earlgrey/register', '.esm.js': { module: 'esm', register: function(hook) { // register on .js extension due to https://github.com/joyent/node/blob/v0.12.0/lib/module.js#L353 // which only captures the final extension (.babel.js -> .js) var esmLoader = hook(module); require.extensions['.js'] = esmLoader('module')._extensions['.js']; }, }, '.iced': ['iced-coffee-script/register', 'iced-coffee-script'], '.iced.md': 'iced-coffee-script/register', '.ini': 'require-ini', '.js': null, '.json': null, '.json5': 'json5/lib/require', '.jsx': [ { module: '@babel/register', register: function(hook) { hook({ extensions: '.jsx' }); }, }, { module: 'babel-register', register: function(hook) { hook({ extensions: '.jsx' }); }, }, { module: 'babel-core/register', register: function(hook) { hook({ extensions: '.jsx' }); }, }, { module: 'babel/register', register: function(hook) { hook({ extensions: '.jsx' }); }, }, { module: 'node-jsx', register: function(hook) { hook.install({ extension: '.jsx', harmony: true }); }, }, ], '.litcoffee': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'], '.liticed': 'iced-coffee-script/register', '.ls': ['livescript', 'LiveScript'], '.mjs': '/absolute/path/to/interpret/mjs-stub.js', '.node': null, '.toml': { module: 'toml-require', register: function(hook) { hook.install(); }, }, '.ts': [ 'ts-node/register', 'typescript-node/register', 'typescript-register', 'typescript-require', 'sucrase/register/ts', { module: '@babel/register', register: function(hook) { hook({ extensions: '.ts' }); }, }, ], '.tsx': [ 'ts-node/register', 'typescript-node/register', 'sucrase/register', { module: '@babel/register', register: function(hook) { hook({ extensions: '.tsx' }); }, }, ], '.wisp': 'wisp/engine/node', '.xml': 'require-xml', '.yaml': 'require-yaml', '.yml': 'require-yaml', } ``` ### jsVariants Same as above, but only include the extensions which are javascript variants. ## How to use it Consumers should use the exported `extensions` or `jsVariants` object to determine which module should be loaded for a given extension. If a matching extension is found, consumers should do the following: 1. If the value is null, do nothing. 2. If the value is a string, try to require it. 3. If the value is an object, try to require the `module` property. If successful, the `register` property (a function) should be called with the module passed as the first argument. 4. If the value is an array, iterate over it, attempting step #2 or #3 until one of the attempts does not throw. 
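Following the four steps above, a consumer's registration logic might look roughly like this. This is a minimal sketch, not part of interpret itself; the `registerLoader` helper, its inputs, and its simple error handling are illustrative only.

```js
const interpret = require('interpret');

function registerLoader(extension) {
  const attempts = interpret.extensions[extension];
  if (attempts === undefined) {
    throw new Error('Unknown extension: ' + extension);
  }
  // Normalize to an array so steps #2 and #3 can be tried in order (step #4).
  const candidates = Array.isArray(attempts) ? attempts : [attempts];
  for (const candidate of candidates) {
    if (candidate === null) {
      return; // step #1: nothing to register (e.g. '.js', '.json')
    }
    try {
      if (typeof candidate === 'string') {
        require(candidate); // step #2: requiring the module registers its loader
      } else {
        const hook = require(candidate.module); // step #3: require the `module` property...
        candidate.register(hook);               // ...then call `register` with it
      }
      return; // stop at the first candidate that does not throw
    } catch (err) {
      // step #4: try the next candidate
    }
  }
  throw new Error('No loader could be registered for ' + extension);
}

registerLoader('.ts'); // afterwards, requiring a .ts config file should work
```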
[require.extensions]: http://nodejs.org/api/globals.html#globals_require_extensions

[downloads-image]: http://img.shields.io/npm/dm/interpret.svg
[npm-url]: https://www.npmjs.com/package/interpret
[npm-image]: http://img.shields.io/npm/v/interpret.svg

[travis-url]: https://travis-ci.org/gulpjs/interpret
[travis-image]: http://img.shields.io/travis/gulpjs/interpret.svg?label=travis-ci

[appveyor-url]: https://ci.appveyor.com/project/gulpjs/interpret
[appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/interpret.svg?label=appveyor

[coveralls-url]: https://coveralls.io/r/gulpjs/interpret
[coveralls-image]: http://img.shields.io/coveralls/gulpjs/interpret/master.svg

[gitter-url]: https://gitter.im/gulpjs/gulp
[gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg

# axios // adapters

The modules under `adapters/` handle dispatching a request and settling a returned `Promise` once a response is received.

## Example

```js
var settle = require('./../core/settle');

module.exports = function myAdapter(config) {
  // At this point:
  //  - config has been merged with defaults
  //  - request transformers have already run
  //  - request interceptors have already run

  // Make the request using config provided
  // Upon response settle the Promise

  return new Promise(function(resolve, reject) {

    var response = {
      data: responseData,
      status: request.status,
      statusText: request.statusText,
      headers: responseHeaders,
      config: config,
      request: request
    };

    settle(resolve, reject, response);

    // From here:
    //  - response transformers will run
    //  - response interceptors will run
  });
}
```

# line-column

[![Build Status](https://travis-ci.org/io-monad/line-column.svg?branch=master)](https://travis-ci.org/io-monad/line-column) [![Coverage Status](https://coveralls.io/repos/github/io-monad/line-column/badge.svg?branch=master)](https://coveralls.io/github/io-monad/line-column?branch=master) [![npm version](https://badge.fury.io/js/line-column.svg)](https://badge.fury.io/js/line-column)

Node module to efficiently convert an index to/from a line-column position in a string.

## Install

    npm install line-column

## Usage

### lineColumn(str, options = {})

Returns a `LineColumnFinder` instance for given string `str`.

#### Options

| Key      | Description                                       | Default |
| -------- | ------------------------------------------------- | ------- |
| `origin` | The origin value of line number and column number | `1`     |

### lineColumn(str, index)

This is just a shorthand for `lineColumn(str).fromIndex(index)`.

### LineColumnFinder#fromIndex(index)

Find line and column from index in the string.

Parameters:

- `index` - `number` Index in the string. (0-origin)

Returns:

- `{ line: x, col: y }` Found line number and column number.
- `null` if the given index is out of range.

### LineColumnFinder#toIndex(line, column)

Find index from line and column in the string.

Parameters:

- `line` - `number` Line number in the string.
- `column` - `number` Column number in the string.

or

- `{ line: x, col: y }` - `Object` line and column numbers in the string.<br>A key name `column` can be used instead of `col`.

or

- `[ line, col ]` - `Array` line and column numbers in the string.

Returns:

- `number` Found index in the string.
- `-1` if the given line or column is out of range.
## Example

```js
var lineColumn = require("line-column");

var testString = [
  "ABCDEFG\n",         // line:0, index:0
  "HIJKLMNOPQRSTU\n",  // line:1, index:8
  "VWXYZ\n",           // line:2, index:23
  "日本語の文字\n",     // line:3, index:29
  "English words"      // line:4, index:36
].join("");            // length:49

lineColumn(testString).fromIndex(3)   // { line: 1, col: 4 }
lineColumn(testString).fromIndex(33)  // { line: 4, col: 5 }
lineColumn(testString).toIndex(1, 4)  // 3
lineColumn(testString).toIndex(4, 5)  // 33

// Shorthand of .fromIndex (compatible with find-line-column)
lineColumn(testString, 33)  // { line: 4, col: 5 }

// Object or Array is also acceptable
lineColumn(testString).toIndex({ line: 4, col: 5 })     // 33
lineColumn(testString).toIndex({ line: 4, column: 5 })  // 33
lineColumn(testString).toIndex([4, 5])                  // 33

// You can cache it for the same string. It is much more efficient. (See benchmark)
var finder = lineColumn(testString);
finder.fromIndex(33)  // { line: 4, col: 5 }
finder.toIndex(4, 5)  // 33

// For 0-origin line and column numbers
var zeroOrigin = lineColumn(testString, { origin: 0 });
zeroOrigin.fromIndex(33)  // { line: 3, col: 4 }
zeroOrigin.toIndex(3, 4)  // 33
```

## Testing

    npm test

## Benchmark

The popular package [find-line-column](https://www.npmjs.com/package/find-line-column) provides the same "index to line-column" feature. Here is some benchmarking on `line-column` vs `find-line-column`. You can run this benchmark with `npm run benchmark`. See [benchmark/](benchmark/) for the source code.

```
long text + line-column (not cached) x 72,989 ops/sec ±0.83% (89 runs sampled)
long text + line-column (cached) x 13,074,242 ops/sec ±0.32% (89 runs sampled)
long text + find-line-column x 33,887 ops/sec ±0.54% (84 runs sampled)
short text + line-column (not cached) x 1,636,766 ops/sec ±0.77% (82 runs sampled)
short text + line-column (cached) x 21,699,686 ops/sec ±1.04% (82 runs sampled)
short text + find-line-column x 382,145 ops/sec ±1.04% (85 runs sampled)
```

As you might have noticed, even the non-cached version of `line-column` is 2x - 4x faster than `find-line-column`, and the cached version is a remarkable 50x - 380x faster.

## Contributing

1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -am 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5. Submit a pull request :D

## License

MIT (See LICENSE)

# randexp.js

randexp will generate a random string that matches a given RegExp JavaScript object.
[![Build Status](https://secure.travis-ci.org/fent/randexp.js.svg)](http://travis-ci.org/fent/randexp.js) [![Dependency Status](https://david-dm.org/fent/randexp.js.svg)](https://david-dm.org/fent/randexp.js) [![codecov](https://codecov.io/gh/fent/randexp.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/randexp.js) # Usage ```js var RandExp = require('randexp'); // supports grouping and piping new RandExp(/hello+ (world|to you)/).gen(); // => hellooooooooooooooooooo world // sets and ranges and references new RandExp(/<([a-z]\w{0,20})>foo<\1>/).gen(); // => <m5xhdg>foo<m5xhdg> // wildcard new RandExp(/random stuff: .+/).gen(); // => random stuff: l3m;Hf9XYbI [YPaxV>U*4-_F!WXQh9>;rH3i l!8.zoh?[utt1OWFQrE ^~8zEQm]~tK // ignore case new RandExp(/xxx xtreme dragon warrior xxx/i).gen(); // => xxx xtReME dRAGON warRiOR xXX // dynamic regexp shortcut new RandExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i'); // is the same as new RandExp(new RegExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i')); ``` If you're only going to use `gen()` once with a regexp and want slightly shorter syntax for it ```js var randexp = require('randexp').randexp; randexp(/[1-6]/); // 4 randexp('great|good( job)?|excellent'); // great ``` If you miss the old syntax ```js require('randexp').sugar(); /yes|no|maybe|i don't know/.gen(); // maybe ``` # Motivation Regular expressions are used in every language, every programmer is familiar with them. Regex can be used to easily express complex strings. What better way to generate a random string than with a language you can use to express the string you want? Thanks to [String-Random](http://search.cpan.org/~steve/String-Random-0.22/lib/String/Random.pm) for giving me the idea to make this in the first place and [randexp](https://github.com/benburkert/randexp) for the sweet `.gen()` syntax. # Default Range The default generated character range includes printable ASCII. In order to add or remove characters, a `defaultRange` attribute is exposed. you can `subtract(from, to)` and `add(from, to)` ```js var randexp = new RandExp(/random stuff: .+/); randexp.defaultRange.subtract(32, 126); randexp.defaultRange.add(0, 65535); randexp.gen(); // => random stuff: 湐箻ໜ䫴␩⶛㳸長���邓蕲뤀쑡篷皇硬剈궦佔칗븛뀃匫鴔事좍ﯣ⭼ꝏ䭍詳蒂䥂뽭 ``` # Custom PRNG The default randomness is provided by `Math.random()`. If you need to use a seedable or cryptographic PRNG, you can override `RandExp.prototype.randInt` or `randexp.randInt` (where `randexp` is an instance of `RandExp`). `randInt(from, to)` accepts an inclusive range and returns a randomly selected number within that range. # Infinite Repetitionals Repetitional tokens such as `*`, `+`, and `{3,}` have an infinite max range. In this case, randexp looks at its min and adds 100 to it to get a useable max value. If you want to use another int other than 100 you can change the `max` property in `RandExp.prototype` or the RandExp instance. ```js var randexp = new RandExp(/no{1,}/); randexp.max = 1000000; ``` With `RandExp.sugar()` ```js var regexp = /(hi)*/; regexp.max = 1000000; ``` # Bad Regular Expressions There are some regular expressions which can never match any string. * Ones with badly placed positionals such as `/a^/` and `/$c/m`. Randexp will ignore positional tokens. * Back references to non-existing groups like `/(a)\1\2/`. Randexp will ignore those references, returning an empty string for them. If the group exists only after the reference is used such as in `/\1 (hey)/`, it will too be ignored. 
* Custom negated character sets with two sets inside that cancel each other out. Example: `/[^\w\W]/`. If you give this to randexp, it will return an empty string for this set since it can't match anything.

# Projects based on randexp.js

## JSON-Schema Faker

Use generators to populate JSON Schema samples. See: [jsf on github](https://github.com/json-schema-faker/json-schema-faker/) and [jsf demo page](http://json-schema-faker.js.org/).

# Install

### Node.js

    npm install randexp

### Browser

Download the [minified version](https://github.com/fent/randexp.js/releases) from the latest release.

# Tests

Tests are written with [mocha](https://mochajs.org)

```bash
npm test
```

# License

MIT

functional-red-black-tree
=========================

A [fully persistent](http://en.wikipedia.org/wiki/Persistent_data_structure) [red-black tree](http://en.wikipedia.org/wiki/Red%E2%80%93black_tree) written 100% in JavaScript. Works both in node.js and in the browser via [browserify](http://browserify.org/).

Functional (or fully persistent) data structures allow for non-destructive updates. So if you insert an element into the tree, it returns a new tree with the inserted element rather than destructively updating the existing tree in place. Doing this requires using extra memory, and if one were naive it could cost as much as reallocating the entire tree. Instead, this data structure saves some memory by recycling references to previously allocated subtrees. This requires using only O(log(n)) additional memory per update instead of a full O(n) copy.

One advantage of this is that it is possible to apply insertions and removals to the tree while still iterating over previous versions of the tree. Functional and persistent data structures can also be useful in many geometric algorithms like point location within triangulations or ray queries, and can be used to analyze the history of executing various algorithms. This added power, though, comes at a cost, since it is generally a bit slower to use a functional data structure than an imperative version. However, if your application needs this behavior then you may consider using this module.
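Because updates are non-destructive, every earlier version of the tree remains fully usable after later inserts and removals. A minimal sketch of that property (variable names are illustrative):

```javascript
var createTree = require("functional-red-black-tree")

var t1 = createTree()
var t2 = t1.insert("a", 1)   // t2 contains "a"; t1 is still empty
var t3 = t2.insert("b", 2)   // t3 contains "a" and "b"
var t4 = t3.remove("a")      // t4 contains only "b"

// Each version remains queryable after later updates
console.log(t1.length)    // 0
console.log(t2.get("a"))  // 1
console.log(t3.get("a"))  // 1
console.log(t4.get("a"))  // undefined ("a" was removed in this version)
console.log(t4.get("b"))  // 2
```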
# Install npm install functional-red-black-tree # Example Here is an example of some basic usage: ```javascript //Load the library var createTree = require("functional-red-black-tree") //Create a tree var t1 = createTree() //Insert some items into the tree var t2 = t1.insert(1, "foo") var t3 = t2.insert(2, "bar") //Remove something var t4 = t3.remove(1) ``` # API ```javascript var createTree = require("functional-red-black-tree") ``` ## Overview - [Tree methods](#tree-methods) - [`var tree = createTree([compare])`](#var-tree-=-createtreecompare) - [`tree.keys`](#treekeys) - [`tree.values`](#treevalues) - [`tree.length`](#treelength) - [`tree.get(key)`](#treegetkey) - [`tree.insert(key, value)`](#treeinsertkey-value) - [`tree.remove(key)`](#treeremovekey) - [`tree.find(key)`](#treefindkey) - [`tree.ge(key)`](#treegekey) - [`tree.gt(key)`](#treegtkey) - [`tree.lt(key)`](#treeltkey) - [`tree.le(key)`](#treelekey) - [`tree.at(position)`](#treeatposition) - [`tree.begin`](#treebegin) - [`tree.end`](#treeend) - [`tree.forEach(visitor(key,value)[, lo[, hi]])`](#treeforEachvisitorkeyvalue-lo-hi) - [`tree.root`](#treeroot) - [Node properties](#node-properties) - [`node.key`](#nodekey) - [`node.value`](#nodevalue) - [`node.left`](#nodeleft) - [`node.right`](#noderight) - [Iterator methods](#iterator-methods) - [`iter.key`](#iterkey) - [`iter.value`](#itervalue) - [`iter.node`](#iternode) - [`iter.tree`](#itertree) - [`iter.index`](#iterindex) - [`iter.valid`](#itervalid) - [`iter.clone()`](#iterclone) - [`iter.remove()`](#iterremove) - [`iter.update(value)`](#iterupdatevalue) - [`iter.next()`](#iternext) - [`iter.prev()`](#iterprev) - [`iter.hasNext`](#iterhasnext) - [`iter.hasPrev`](#iterhasprev) ## Tree methods ### `var tree = createTree([compare])` Creates an empty functional tree * `compare` is an optional comparison function, same semantics as array.sort() **Returns** An empty tree ordered by `compare` ### `tree.keys` A sorted array of all the keys in the tree ### `tree.values` An array array of all the values in the tree ### `tree.length` The number of items in the tree ### `tree.get(key)` Retrieves the value associated to the given key * `key` is the key of the item to look up **Returns** The value of the first node associated to `key` ### `tree.insert(key, value)` Creates a new tree with the new pair inserted. * `key` is the key of the item to insert * `value` is the value of the item to insert **Returns** A new tree with `key` and `value` inserted ### `tree.remove(key)` Removes the first item with `key` in the tree * `key` is the key of the item to remove **Returns** A new tree with the given item removed if it exists ### `tree.find(key)` Returns an iterator pointing to the first item in the tree with `key`, otherwise `null`. ### `tree.ge(key)` Find the first item in the tree whose key is `>= key` * `key` is the key to search for **Returns** An iterator at the given element. 
### `tree.gt(key)`
Finds the first item in the tree whose key is `> key`

* `key` is the key to search for

**Returns** An iterator at the given element

### `tree.lt(key)`
Finds the last item in the tree whose key is `< key`

* `key` is the key to search for

**Returns** An iterator at the given element

### `tree.le(key)`
Finds the last item in the tree whose key is `<= key`

* `key` is the key to search for

**Returns** An iterator at the given element

### `tree.at(position)`
Finds an iterator starting at the given element

* `position` is the index at which the iterator gets created

**Returns** An iterator starting at `position`

### `tree.begin`
An iterator pointing to the first element in the tree

### `tree.end`
An iterator pointing to the last element in the tree

### `tree.forEach(visitor(key,value)[, lo[, hi]])`
Walks a visitor function over the nodes of the tree in order.

* `visitor(key,value)` is a callback that gets executed on each node. If a truthy value is returned from the visitor, then iteration is stopped.
* `lo` is an optional start of the range to visit (inclusive)
* `hi` is an optional end of the range to visit (non-inclusive)

**Returns** The last value returned by the callback

### `tree.root`
Returns the root node of the tree

## Node properties

Each node of the tree has the following properties:

### `node.key`
The key associated to the node

### `node.value`
The value associated to the node

### `node.left`
The left subtree of the node

### `node.right`
The right subtree of the node

## Iterator methods

### `iter.key`
The key of the item referenced by the iterator

### `iter.value`
The value of the item referenced by the iterator

### `iter.node`
The value of the node at the iterator's current position. `null` if the iterator is not valid.

### `iter.tree`
The tree associated to the iterator

### `iter.index`
Returns the position of this iterator in the sequence.

### `iter.valid`
Checks if the iterator is valid

### `iter.clone()`
Makes a copy of the iterator

### `iter.remove()`
Removes the item at the position of the iterator

**Returns** A new binary search tree with `iter`'s item removed

### `iter.update(value)`
Updates the value of the node in the tree at this iterator

**Returns** A new binary search tree with the corresponding node updated

### `iter.next()`
Advances the iterator to the next position

### `iter.prev()`
Moves the iterator backward one element

### `iter.hasNext`
If true, then the iterator is not at the end of the sequence

### `iter.hasPrev`
If true, then the iterator is not at the beginning of the sequence

# Credits

(c) 2013 Mikola Lysenko. MIT License

# chownr

Like `chown -R`.
Takes the same arguments as `fs.chown()`

# flat-cache
> A stupidly simple key/value storage using files to persist the data

[![NPM Version](http://img.shields.io/npm/v/flat-cache.svg?style=flat)](https://npmjs.org/package/flat-cache) [![Build Status](https://api.travis-ci.org/royriojas/flat-cache.svg?branch=master)](https://travis-ci.org/royriojas/flat-cache)

## install

```bash
npm i --save flat-cache
```

## Usage

```js
var flatCache = require('flat-cache')
var path = require('path')

// loads the cache; if one does not exist for the given
// id, a new one will be prepared to be created
var cache = flatCache.load('cacheId');

// sets a key on the cache
cache.setKey('key', { foo: 'var' });

// get a key from the cache
cache.getKey('key') // { foo: 'var' }

// fetch the entire persisted object
cache.all() // { 'key': { foo: 'var' } }

// remove a key
cache.removeKey('key'); // removes a key from the cache

// save it to disk
cache.save(); // very important: if you don't save, no changes will be persisted.
// cache.save( true /* noPrune */) // can be used to prevent the removal of non visited keys

// loads the cache from a given directory; if one does
// not exist for the given id, a new one will be prepared to be created
var cache = flatCache.load('cacheId', path.resolve('./path/to/folder'));

// The following methods are useful to clear the cache
// delete a given cache
flatCache.clearCacheById('cacheId') // removes the cacheId document if one exists.

// delete all cache
flatCache.clearAll(); // remove the cache directory
```

## Motivation for this module

I needed a super simple and dumb **in-memory cache** with optional disk persistence in order to make a script that will beautify files with `esformatter` only execute on the files that were changed since the last run. To make that possible we need to store the `fileSize` and `modificationTime` of the files. So a simple `key/value` storage was needed and Bam! this module was born.

## Important notes

- If no directory is specified when the `load` method is called, a folder named `.cache` will be created inside the module directory when `cache.save` is called. If you're committing your `node_modules` to any vcs, you might want to ignore the default `.cache` folder, or specify a custom directory.
- The values set on the keys of the cache should be `stringify-able` ones, meaning no circular references
- All the changes to the cache state are done to memory
- I could have used a timer or `Object.observe` to deliver the changes to disk, but I wanted to keep this module intentionally dumb and simple
- Non visited keys are removed when `cache.save()` is called. If this is not desired, you can pass `true` to the save call like: `cache.save( true /* noPrune */ )`.

## License

MIT

## Changelog

[changelog](./changelog.md)

# ESLint Scope

ESLint Scope is the [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) scope analyzer used in ESLint. It is a fork of [escope](http://github.com/estools/escope).
## Usage Install: ``` npm i eslint-scope --save ``` Example: ```js var eslintScope = require('eslint-scope'); var espree = require('espree'); var estraverse = require('estraverse'); var ast = espree.parse(code); var scopeManager = eslintScope.analyze(ast); var currentScope = scopeManager.acquire(ast); // global scope estraverse.traverse(ast, { enter: function(node, parent) { // do stuff if (/Function/.test(node.type)) { currentScope = scopeManager.acquire(node); // get current function scope } }, leave: function(node, parent) { if (/Function/.test(node.type)) { currentScope = currentScope.upper; // set to parent scope } // do stuff } }); ``` ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/eslint-scope/issues). ## Build Commands * `npm test` - run all linting and tests * `npm run lint` - run all linting ## License ESLint Scope is licensed under a permissive BSD 2-clause license. # minipass A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough) [It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers. Supports pipe()ing (including multi-pipe() and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea. There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory. There is also no `unpipe()` method. Once you start piping, there is no stopping it! If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called. `objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set. This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. Once you're done transforming the data however you want, call `super.write()` with the transform output. 
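As a rough sketch of that pattern (hypothetical, not part of the upstream docs), a stream that upper-cases string chunks could look like:

```js
const Minipass = require('minipass')

// hypothetical example: transform data by overriding write()
class UpperCase extends Minipass {
  write (chunk, encoding, callback) {
    // transform first, then hand the result to the base class
    return super.write(String(chunk).toUpperCase(), encoding, callback)
  }
}

const uc = new UpperCase({ encoding: 'utf8' })
uc.on('data', chunk => console.log(chunk)) // => HELLO
uc.write('hello')
uc.end()
```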
For some examples of streams that extend Minipass in various ways, check out:

- [minizlib](http://npm.im/minizlib)
- [fs-minipass](http://npm.im/fs-minipass)
- [tar](http://npm.im/tar)
- [minipass-collect](http://npm.im/minipass-collect)
- [minipass-flush](http://npm.im/minipass-flush)
- [minipass-pipeline](http://npm.im/minipass-pipeline)
- [tap](http://npm.im/tap)
- [tap-parser](http://npm.im/tap)
- [treport](http://npm.im/tap)
- [minipass-fetch](http://npm.im/minipass-fetch)
- [pacote](http://npm.im/pacote)
- [make-fetch-happen](http://npm.im/make-fetch-happen)
- [cacache](http://npm.im/cacache)
- [ssri](http://npm.im/ssri)
- [npm-registry-fetch](http://npm.im/npm-registry-fetch)
- [minipass-json-stream](http://npm.im/minipass-json-stream)
- [minipass-sized](http://npm.im/minipass-sized)

## Differences from Node.js Streams

There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams.

Please read these caveats if you are familiar with node-core streams and intend to use Minipass streams in your programs.

### Timing

Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer. Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them.

This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately.

However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time.

This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms.

### No High/Low Water Marks

Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value.

Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns).

If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it.

### Hazards of Buffering (or: Why Minipass Is So Fast)

Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller. This minimizes buffering.
Consider this case: ```js const {PassThrough} = require('stream') const p1 = new PassThrough({ highWaterMark: 1024 }) const p2 = new PassThrough({ highWaterMark: 1024 }) const p3 = new PassThrough({ highWaterMark: 1024 }) const p4 = new PassThrough({ highWaterMark: 1024 }) p1.pipe(p2).pipe(p3).pipe(p4) p4.on('data', () => console.log('made it through')) // this returns false and buffers, then writes to p2 on next tick (1) // p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2) // p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3) // p4 returns false and buffers, pausing p3, then emits 'data' and 'drain' // on next tick (4) // p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and // 'drain' on next tick (5) // p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6) // p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next // tick (7) p1.write(Buffer.alloc(2048)) // returns false ``` Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through! Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket. Consider the Minipass case: ```js const m1 = new Minipass() const m2 = new Minipass() const m3 = new Minipass() const m4 = new Minipass() m1.pipe(m2).pipe(m3).pipe(m4) m4.on('data', () => console.log('made it through')) // m1 is flowing, so it writes the data to m2 immediately // m2 is flowing, so it writes the data to m3 immediately // m3 is flowing, so it writes the data to m4 immediately // m4 is flowing, so it fires the 'data' event immediately, returns true // m4's write returned true, so m3 is still flowing, returns true // m3's write returned true, so m2 is still flowing, returns true // m2's write returned true, so m1 is still flowing, returns true // No event deferrals or buffering along the way! m1.write(Buffer.alloc(2048)) // returns true ``` It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing. As always, the faster implementation is the one that does less stuff and waits less time to do it. ### Immediately emit `end` for empty streams (when not paused) If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately. If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event. ### Emit `end` When Asked One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted. Ie, if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. 
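For example (an illustrative sketch of the behavior just described, not from the upstream docs):

```js
const Minipass = require('minipass')

const mp = new Minipass()
mp.end() // empty stream, not paused, so 'end' fires immediately

// attached after the fact, but still gets called thanks to the re-emit
mp.on('end', () => console.log('saw end anyway'))
```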
(You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.)

To prevent calling handlers multiple times that would not expect multiple ends to occur, all listeners are removed from the `'end'` event whenever it is emitted.

### Impact of "immediate flow" on Tee-streams

A "tee stream" is a stream piping to multiple destinations:

```js
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
tee.write('foo') // goes to both destinations
```

Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer.

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone
src.pipe(dest2) // gets nothing!
```

The solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead.

```js
// Safe example: tee to both places
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
src.pipe(tee) // tee gets 'foo', pipes to both locations
```

The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second:

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.on('data', handler1) // receives 'foo' right away
src.on('data', handler2) // nothing to see here!
```

A dedicated tee-stream can be used in this case as well:

```js
// Safe example: tee to both data handlers
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.on('data', handler1)
tee.on('data', handler2)
src.pipe(tee)
```

## USAGE

It's a stream! Use it like a stream and it'll most likely do what you want.

```js
const Minipass = require('minipass')
const mp = new Minipass(options) // optional: { encoding, objectMode }
mp.write('foo')
mp.pipe(someOtherStream)
mp.end('bar')
```

### OPTIONS

* `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`.
* `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value.

### API

Implements the user-facing portions of Node.js's `Readable` and `Writable` streams.

### Methods

* `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or true if it's still in "flowing" mode.
* `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed.
* `setEncoding(encoding)` - Set the encoding for data coming out of the stream. This can only be done once.
* `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed.
* `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted.
* `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations.
* `on(ev, fn)`, `emit(ev, fn)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".)
* `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`.
* `collect()` - Return a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data.
* `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data.
* `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying.
* `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered.

### Properties

* `bufferLength` Read-only. Total number of bytes buffered, or in the case of objectMode, the total number of objects.
* `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.)
* `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted.
* `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediately re-emit it if it has already been emitted.
* `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` is called.
* `readable` Whether the stream is readable. Default `true`.
* `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.)
* `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.)
* `destroyed` A getter that indicates whether the stream was destroyed.
* `paused` True if the stream has been explicitly paused, otherwise false.
* `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`.

### Events

* `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream.
* `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted.
* `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`.
* `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`.
* `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise. * `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream. * `readable` Emitted when data is buffered and ready to be read by a consumer. * `resume` Emitted when stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.) ### Static Methods * `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and false otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.) ## EXAMPLES Here are some examples of things you can do with Minipass streams. ### simple "are you done yet" promise ```js mp.promise().then(() => { // stream is finished }, er => { // stream emitted an error }) ``` ### collecting ```js mp.collect().then(all => { // all is an array of all the data emitted // encoding is supported in this case, so // so the result will be a collection of strings if // an encoding is specified, or buffers/objects if not. // // In an async function, you may do // const data = await stream.collect() }) ``` ### collecting into a single blob This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way: ```js mp.concat().then(onebigchunk => { // onebigchunk is a string if the stream // had an encoding set, or a buffer otherwise. }) ``` ### iteration You can iterate over streams synchronously or asynchronously in platforms that support it. Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration. To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option. ```js const mp = new Minipass({ objectMode: true }) mp.write('a') mp.write('b') for (let letter of mp) { console.log(letter) // a, b } mp.write('c') mp.write('d') for (let letter of mp) { console.log(letter) // c, d } mp.write('e') mp.end() for (let letter of mp) { console.log(letter) // e } for (let letter of mp) { console.log(letter) // nothing } ``` Asynchronous iteration will continue until the end event is reached, consuming all of the data. 
```js
const mp = new Minipass({ encoding: 'utf8' })

// some source of some data
let i = 5
const inter = setInterval(() => {
  if (i --> 0)
    mp.write(Buffer.from('foo\n', 'utf8'))
  else {
    mp.end()
    clearInterval(inter)
  }
}, 100)

// consume the data with asynchronous iteration
async function consume () {
  for await (let chunk of mp) {
    console.log(chunk)
  }
  return 'ok'
}

consume().then(res => console.log(res))
// logs `foo\n` 5 times, and then `ok`
```

### subclass that `console.log()`s everything written into it

```js
class Logger extends Minipass {
  write (chunk, encoding, callback) {
    console.log('WRITE', chunk, encoding)
    return super.write(chunk, encoding, callback)
  }
  end (chunk, encoding, callback) {
    console.log('END', chunk, encoding)
    return super.end(chunk, encoding, callback)
  }
}

someSource.pipe(new Logger()).pipe(someDest)
```

### same thing, but using an inline anonymous class

```js
// js classes are fun
someSource
  .pipe(new (class extends Minipass {
    emit (ev, ...data) {
      // let's also log events, because debugging some weird thing
      console.log('EMIT', ev)
      return super.emit(ev, ...data)
    }
    write (chunk, encoding, callback) {
      console.log('WRITE', chunk, encoding)
      return super.write(chunk, encoding, callback)
    }
    end (chunk, encoding, callback) {
      console.log('END', chunk, encoding)
      return super.end(chunk, encoding, callback)
    }
  }))
  .pipe(someDest)
```

### subclass that defers 'end' for some reason

```js
class SlowEnd extends Minipass {
  emit (ev, ...args) {
    if (ev === 'end') {
      console.log('going to end, hold on a sec')
      setTimeout(() => {
        console.log('ok, ready to end now')
        super.emit('end', ...args)
      }, 100)
    } else {
      return super.emit(ev, ...args)
    }
  }
}
```

### transform that creates newline-delimited JSON

```js
class NDJSONEncode extends Minipass {
  write (obj, cb) {
    try {
      // JSON.stringify can throw, emit an error on that
      return super.write(JSON.stringify(obj) + '\n', 'utf8', cb)
    } catch (er) {
      this.emit('error', er)
    }
  }
  end (obj, cb) {
    if (typeof obj === 'function') {
      cb = obj
      obj = undefined
    }
    if (obj !== undefined) {
      this.write(obj)
    }
    return super.end(cb)
  }
}
```

### transform that parses newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor (options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write (chunk, encoding, cb) {
    if (typeof chunk === 'string' &&
        typeof encoding === 'string' &&
        encoding !== 'utf8') {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      let parsed
      try {
        // JSON.parse can throw; report the error and skip the bad line
        parsed = JSON.parse(jsonData[i])
      } catch (er) {
        this.emit('error', er)
        continue
      }
      super.write(parsed)
    }
    if (cb)
      cb()
  }
}
```

# cross-spawn

[![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Build status][appveyor-image]][appveyor-url] [![Coverage Status][codecov-image]][codecov-url] [![Dependency status][david-dm-image]][david-dm-url] [![Dev Dependency status][david-dm-dev-image]][david-dm-dev-url]

[npm-url]:https://npmjs.org/package/cross-spawn
[downloads-image]:https://img.shields.io/npm/dm/cross-spawn.svg
[npm-image]:https://img.shields.io/npm/v/cross-spawn.svg
[travis-url]:https://travis-ci.org/moxystudio/node-cross-spawn
[travis-image]:https://img.shields.io/travis/moxystudio/node-cross-spawn/master.svg
[appveyor-url]:https://ci.appveyor.com/project/satazor/node-cross-spawn
[appveyor-image]:https://img.shields.io/appveyor/ci/satazor/node-cross-spawn/master.svg
[codecov-url]:https://codecov.io/gh/moxystudio/node-cross-spawn
[codecov-image]:https://img.shields.io/codecov/c/github/moxystudio/node-cross-spawn/master.svg
[david-dm-url]:https://david-dm.org/moxystudio/node-cross-spawn
[david-dm-image]:https://img.shields.io/david/moxystudio/node-cross-spawn.svg
[david-dm-dev-url]:https://david-dm.org/moxystudio/node-cross-spawn?type=dev
[david-dm-dev-image]:https://img.shields.io/david/dev/moxystudio/node-cross-spawn.svg

A cross platform solution to node's spawn and spawnSync.

## Installation

Node.js version 8 and up:
`$ npm install cross-spawn`

Node.js version 7 and under:
`$ npm install cross-spawn@6`

## Why

Node has issues when using spawn on Windows:

- It ignores [PATHEXT](https://github.com/joyent/node/issues/2318)
- It does not support [shebangs](https://en.wikipedia.org/wiki/Shebang_(Unix))
- Has problems running commands with [spaces](https://github.com/nodejs/node/issues/7367)
- Has problems running commands with POSIX relative paths (e.g.: `./my-folder/my-executable`)
- Has an [issue](https://github.com/moxystudio/node-cross-spawn/issues/82) with command shims (files in `node_modules/.bin/`), where arguments with quotes and parentheses would result in an [invalid syntax error](https://github.com/moxystudio/node-cross-spawn/blob/e77b8f22a416db46b6196767bcd35601d7e11d54/test/index.test.js#L149)
- No `options.shell` support on node `<v4.8`

All these issues are handled correctly by `cross-spawn`. There are some known modules, such as [win-spawn](https://github.com/ForbesLindesay/win-spawn), that try to solve this but they are either broken or provide faulty escaping of shell arguments.

## Usage

Exactly the same way as node's [`spawn`](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options) or [`spawnSync`](https://nodejs.org/api/child_process.html#child_process_child_process_spawnsync_command_args_options), so it's a drop-in replacement.

```js
const spawn = require('cross-spawn');

// Spawn NPM asynchronously
const child = spawn('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' });

// Spawn NPM synchronously
const result = spawn.sync('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' });
```

## Caveats

### Using `options.shell` as an alternative to `cross-spawn`

Starting from node `v4.8`, `spawn` has a `shell` option that allows you to run commands from within a shell. This new option solves the [PATHEXT](https://github.com/joyent/node/issues/2318) issue but:

- It's not supported in node `<v4.8`
- You must manually escape the command and arguments, which is very error prone, especially when passing user input
- There are a lot of other unresolved issues from the [Why](#why) section that you must take into account

If you are using the `shell` option to spawn a command in a cross platform way, consider using `cross-spawn` instead. You have been warned.

### `options.shell` support

While `cross-spawn` adds support for `options.shell` in node `<v4.8`, all of its enhancements are disabled. This mimics the Node.js behavior. More specifically, the command and its arguments will not be automatically escaped, nor will shebang support be offered. This is by design: if you are using `options.shell` you are probably targeting a specific platform anyway, and you don't want things to get in your way.
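As a rough illustration of the trade-off (a hypothetical sketch, not from the original README), the two styles look like this:

```js
const spawn = require('cross-spawn');

// Regular usage: cross-spawn handles PATHEXT, shebangs, spaces, and
// argument escaping for you, so the same call works on every platform.
spawn('npm', ['run', 'build'], { stdio: 'inherit' });

// With options.shell the whole command line goes to the shell as-is;
// cross-spawn's enhancements are disabled and any escaping is on you.
spawn('npm run build && npm test', [], { shell: true, stdio: 'inherit' });
```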
### Shebangs support While `cross-spawn` handles shebangs on Windows, its support is limited. More specifically, it just supports `#!/usr/bin/env <program>` where `<program>` must not contain any arguments. If you would like to have the shebang support improved, feel free to contribute via a pull-request. Remember to always test your code on Windows! ## Tests `$ npm test` `$ npm test -- --watch` during development ## License Released under the [MIT License](https://www.opensource.org/licenses/mit-license.php). # Glob Match files using the patterns the shell uses, like stars and stuff. [![Build Status](https://travis-ci.org/isaacs/node-glob.svg?branch=master)](https://travis-ci.org/isaacs/node-glob/) [![Build Status](https://ci.appveyor.com/api/projects/status/kd7f3yftf7unxlsx?svg=true)](https://ci.appveyor.com/project/isaacs/node-glob) [![Coverage Status](https://coveralls.io/repos/isaacs/node-glob/badge.svg?branch=master&service=github)](https://coveralls.io/github/isaacs/node-glob?branch=master) This is a glob implementation in JavaScript. It uses the `minimatch` library to do its matching. ![a fun cartoon logo made of glob characters](logo/glob.png) ## Usage Install with npm ``` npm i glob ``` ```javascript var glob = require("glob") // options is optional glob("**/*.js", options, function (er, files) { // files is an array of filenames. // If the `nonull` option is set, and nothing // was found, then files is ["**/*.js"] // er is an error object or null. }) ``` ## Glob Primer "Globs" are the patterns you type when you do stuff like `ls *.js` on the command line, or put `build/*` in a `.gitignore` file. Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with `{` and end with `}`, with any number of comma-delimited sections within. Braced sections may contain slash characters, so `a{/b/c,bcd}` would expand into `a/b/c` and `abcd`. The following characters have special magic meaning when used in a path portion: * `*` Matches 0 or more characters in a single path portion * `?` Matches 1 character * `[...]` Matches a range of characters, similar to a RegExp range. If the first character of the range is `!` or `^` then it matches any character not in the range. * `!(pattern|pattern|pattern)` Matches anything that does not match any of the patterns provided. * `?(pattern|pattern|pattern)` Matches zero or one occurrence of the patterns provided. * `+(pattern|pattern|pattern)` Matches one or more occurrences of the patterns provided. * `*(a|b|c)` Matches zero or more occurrences of the patterns provided * `@(pattern|pat*|pat?erN)` Matches exactly one of the patterns provided * `**` If a "globstar" is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories. ### Dots If a file or directory path portion has a `.` as the first character, then it will not match any glob pattern unless that pattern's corresponding path part also has a `.` as its first character. For example, the pattern `a/.*/c` would match the file at `a/.b/c`. However the pattern `a/*/c` would not, because `*` does not start with a dot character. You can make glob treat dots as normal characters by setting `dot:true` in the options. ### Basename Matching If you set `matchBase:true` in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, `*.js` would match `test/simple/basic.js`. 
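As a small illustrative sketch of the two behaviors described above (not part of the original README):

```javascript
var glob = require("glob")

// dot:true lets "*" match path portions that start with a dot,
// so this may find "a/.b/c" as well as "a/b/c"
glob("a/*/c", { dot: true }, function (er, files) {
  console.log(files)
})

// matchBase:true makes a slash-less pattern match basenames anywhere
// in the tree, e.g. "test/simple/basic.js"
glob("*.js", { matchBase: true }, function (er, files) {
  console.log(files)
})
```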
### Empty Sets If no matching files are found, then an empty array is returned. This differs from the shell, where the pattern itself is returned. For example: $ echo a*s*d*f a*s*d*f To get the bash-style behavior, set the `nonull:true` in the options. ### See Also: * `man sh` * `man bash` (Search for "Pattern Matching") * `man 3 fnmatch` * `man 5 gitignore` * [minimatch documentation](https://github.com/isaacs/minimatch) ## glob.hasMagic(pattern, [options]) Returns `true` if there are any special characters in the pattern, and `false` otherwise. Note that the options affect the results. If `noext:true` is set in the options object, then `+(a|b)` will not be considered a magic pattern. If the pattern has a brace expansion, like `a/{b/c,x/y}` then that is considered magical, unless `nobrace:true` is set in the options. ## glob(pattern, [options], cb) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * `cb` `{Function}` * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Perform an asynchronous glob search. ## glob.sync(pattern, [options]) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * return: `{Array<String>}` filenames found matching the pattern Perform a synchronous glob search. ## Class: glob.Glob Create a Glob object by instantiating the `glob.Glob` class. ```javascript var Glob = require("glob").Glob var mg = new Glob(pattern, options, cb) ``` It's an EventEmitter, and starts walking the filesystem to find matches immediately. ### new glob.Glob(pattern, [options], [cb]) * `pattern` `{String}` pattern to search for * `options` `{Object}` * `cb` `{Function}` Called when an error occurs, or matches are found * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Note that if the `sync` flag is set in the options, then matches will be immediately available on the `g.found` member. ### Properties * `minimatch` The minimatch object that the glob uses. * `options` The options object passed in. * `aborted` Boolean which is set to true when calling `abort()`. There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls. * `cache` Convenience object. Each field has the following possible values: * `false` - Path does not exist * `true` - Path exists * `'FILE'` - Path exists, and is not a directory * `'DIR'` - Path exists, and is a directory * `[file, entries, ...]` - Path exists, is a directory, and the array value is the results of `fs.readdir` * `statCache` Cache of `fs.stat` results, to prevent statting the same path multiple times. * `symlinks` A record of which paths are symbolic links, which is relevant in resolving `**` patterns. * `realpathCache` An optional object which is passed to `fs.realpath` to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used. ### Events * `end` When the matching is finished, this is emitted with all the matches found. If the `nonull` option is set, and no match was found, then the `matches` list contains the original pattern. The matches are sorted, unless the `nosort` flag is set. * `match` Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath. * `error` Emitted when an unexpected error is encountered, or whenever any fs error occurs if `options.strict` is set. * `abort` When `abort()` is called, this event is raised. 
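As an illustrative sketch (not from the original README) of listening to these events on a `glob.Glob` instance:

```javascript
var Glob = require("glob").Glob

var g = new Glob("**/*.js", { nodir: true })

g.on("match", function (file) {
  console.log("matched:", file) // fires once per match, not deduplicated
})

g.on("end", function (matches) {
  console.log("done, found", matches.length, "files") // sorted unless nosort is set
})

g.on("error", function (er) {
  console.error("glob error:", er)
})
```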
### Methods * `pause` Temporarily stop the search * `resume` Resume the search * `abort` Stop the search forever ### Options All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications. All options are false by default, unless otherwise noted. All options are added to the Glob object, as well. If you are running many `glob` operations, you can pass a Glob object as the `options` argument to a subsequent operation to shortcut some `stat` and `readdir` calls. At the very least, you may pass in shared `symlinks`, `statCache`, `realpathCache`, and `cache` options, so that parallel glob operations will be sped up by sharing information about the filesystem. * `cwd` The current working directory in which to search. Defaults to `process.cwd()`. * `root` The place where patterns starting with `/` will be mounted onto. Defaults to `path.resolve(options.cwd, "/")` (`/` on Unix systems, and `C:\` or some such on Windows.) * `dot` Include `.dot` files in normal matches and `globstar` matches. Note that an explicit dot in a portion of the pattern will always match dot files. * `nomount` By default, a pattern starting with a forward-slash will be "mounted" onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior. * `mark` Add a `/` character to directory matches. Note that this requires additional stat calls. * `nosort` Don't sort the results. * `stat` Set to true to stat *all* results. This reduces performance somewhat, and is completely unnecessary, unless `readdir` is presumed to be an untrustworthy indicator of file existence. * `silent` When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the `silent` option to true to suppress these warnings. * `strict` When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the `strict` option to raise an error in these cases. * `cache` See `cache` property above. Pass in a previously generated cache object to save some fs calls. * `statCache` A cache of results of filesystem information, to prevent unnecessary stat calls. While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See "Race Conditions" below.) * `symlinks` A cache of known symbolic links. You may pass in a previously generated `symlinks` object to save `lstat` calls when resolving `**` matches. * `sync` DEPRECATED: use `glob.sync(pattern, opts)` instead. * `nounique` In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior. * `nonull` Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3). * `debug` Set to enable debug logging in minimatch and glob. * `nobrace` Do not expand `{a,b}` and `{1..3}` brace sets. * `noglobstar` Do not match `**` against multiple filenames. (Ie, treat it as a normal `*` instead.) * `noext` Do not match `+(a|b)` "extglob" patterns. * `nocase` Perform a case-insensitive match. 
Note: on case-insensitive filesystems, non-magic patterns will match by default, since `stat` and `readdir` will not raise errors. * `matchBase` Perform a basename-only match if the pattern does not contain any slash characters. That is, `*.js` would be treated as equivalent to `**/*.js`, matching all js files in all directories. * `nodir` Do not match directories, only files. (Note: to match *only* directories, simply put a `/` at the end of the pattern.) * `ignore` Add a pattern or an array of glob patterns to exclude matches. Note: `ignore` patterns are *always* in `dot:true` mode, regardless of any other settings. * `follow` Follow symlinked directories when expanding `**` patterns. Note that this can result in a lot of duplicate references in the presence of cyclic links. * `realpath` Set to true to call `fs.realpath` on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink) * `absolute` Set to true to always receive absolute paths for matched files. Unlike `realpath`, this also affects the values returned in the `match` event. ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.3, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. Note that symlinked directories are not crawled as part of a `**`, though their contents may match against subsequent portions of the pattern. This prevents infinite loops and duplicates and the like. If an escaped pattern has no matches, and the `nonull` flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, `glob.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. ### Comments and Negation Previously, this module let you mark a pattern as a "comment" if it started with a `#` character, or a "negated" pattern if it started with a `!` character. These options were deprecated in version 5, and removed in version 6. To specify things that should not match, use the `ignore` option. ## Windows **Please only use forward-slashes in glob expressions.** Though windows uses either `/` or `\` as its path separator, only `/` characters are used by this glob implementation. You must use forward-slashes **only** in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators. Results from absolute patterns such as `/foo/*` are mounted onto the root setting using `path.join`. On windows, this will by default result in `/foo/*` matching `C:\foo\bar.txt`. ## Race Conditions Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such. 
As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result.

As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls.

Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem.

## Glob Logo

Glob's logo was created by [Tanya Brassie](http://tanyabrassie.com/). Logo files can be found [here](https://github.com/isaacs/node-glob/tree/master/logo).

The logo is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).

## Contributing

Any change to behavior (including bugfixes) must come with a test.

Patches that fail tests or reduce performance will be rejected.

```
# to run tests
npm test

# to re-generate test fixtures
npm run test-regen

# to benchmark against bash/zsh
npm run bench

# to profile javascript
npm run prof
```

![](oh-my-glob.gif)

# flatted

[![Downloads](https://img.shields.io/npm/dm/flatted.svg)](https://www.npmjs.com/package/flatted) [![Coverage Status](https://coveralls.io/repos/github/WebReflection/flatted/badge.svg?branch=main)](https://coveralls.io/github/WebReflection/flatted?branch=main) [![Build Status](https://travis-ci.com/WebReflection/flatted.svg?branch=main)](https://travis-ci.com/WebReflection/flatted) [![License: ISC](https://img.shields.io/badge/License-ISC-yellow.svg)](https://opensource.org/licenses/ISC) ![WebReflection status](https://offline.report/status/webreflection.svg)

![snow flake](./flatted.jpg)

<sup>**Social Media Photo by [Matt Seymour](https://unsplash.com/@mattseymour) on [Unsplash](https://unsplash.com/)**</sup>

A super light (0.5K) and fast circular JSON parser, directly from the creator of [CircularJSON](https://github.com/WebReflection/circular-json/#circularjson).

Now available also for **[PHP](./php/flatted.php)**.

```sh
npm i flatted
```

Usable via [CDN](https://unpkg.com/flatted) or as a regular module.

```js
// ESM
import {parse, stringify, toJSON, fromJSON} from 'flatted';

// CJS
const {parse, stringify, toJSON, fromJSON} = require('flatted');

const a = [{}];
a[0].a = a;
a.push(a);

stringify(a); // [["1","0"],{"a":"0"}]
```

## toJSON and fromJSON

If you'd like to implicitly survive JSON serialization, these two helpers help:

```js
import {toJSON, fromJSON} from 'flatted';

class RecursiveMap extends Map {
  static fromJSON(any) {
    return new this(fromJSON(any));
  }
  toJSON() {
    return toJSON([...this.entries()]);
  }
}

const recursive = new RecursiveMap;
const same = {};
same.same = same;
recursive.set('same', same);

const asString = JSON.stringify(recursive);
const asMap = RecursiveMap.fromJSON(JSON.parse(asString));
asMap.get('same') === asMap.get('same').same; // true
```

## Flatted VS JSON

As with every other specialized format capable of serializing and deserializing circular data, you should never `JSON.parse(Flatted.stringify(data))`, and you should never `Flatted.parse(JSON.stringify(data))`.

The only combination that works is `Flatted.parse(Flatted.stringify(data))`, as is also the case for _CircularJSON_ or any other such format; otherwise there is no guaranteed data integrity.
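For instance (a minimal illustrative sketch, not part of the original README), a circular structure only survives a same-library round trip:

```js
const {parse, stringify} = require('flatted');

const data = {name: 'root'};
data.self = data; // circular reference

const text = stringify(data);    // JSON.stringify(data) would throw here
const copy = parse(text);        // JSON.parse(text) would only give the flattened form

console.log(copy.self === copy); // true: the circular reference is restored
```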
Also please note this project serializes and deserializes only data compatible with JSON, so that sockets, or anything else with internal classes different from those allowed by JSON standard, won't be serialized and unserialized as expected. ### New in V1: Exact same JSON API * Added a [reviver](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#Syntax) parameter to `.parse(string, reviver)` and revive your own objects. * Added a [replacer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#Syntax) and a `space` parameter to `.stringify(object, replacer, space)` for feature parity with JSON signature. ### Compatibility All ECMAScript engines compatible with `Map`, `Set`, `Object.keys`, and `Array.prototype.reduce` will work, even if polyfilled. ### How does it work ? While stringifying, all Objects, including Arrays, and strings, are flattened out and replaced as unique index. `*` Once parsed, all indexes will be replaced through the flattened collection. <sup><sub>`*` represented as string to avoid conflicts with numbers</sub></sup> ```js // logic example var a = [{one: 1}, {two: '2'}]; a[0].a = a; // a is the main object, will be at index '0' // {one: 1} is the second object, index '1' // {two: '2'} the third, in '2', and it has a string // which will be found at index '3' Flatted.stringify(a); // [["1","2"],{"one":1,"a":"0"},{"two":"3"},"2"] // a[one,two] {one: 1, a} {two: '2'} '2' ``` <p align="center"> <a href="https://assemblyscript.org" target="_blank" rel="noopener"><img width="100" src="https://avatars1.githubusercontent.com/u/28916798?s=200&v=4" alt="AssemblyScript logo"></a> </p> <p align="center"> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3ATest"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Test/master?label=test&logo=github" alt="Test status" /></a> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3APublish"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Publish/master?label=publish&logo=github" alt="Publish status" /></a> <a href="https://www.npmjs.com/package/assemblyscript"><img src="https://img.shields.io/npm/v/assemblyscript.svg?label=compiler&color=007acc&logo=npm" alt="npm compiler version" /></a> <a href="https://www.npmjs.com/package/@assemblyscript/loader"><img src="https://img.shields.io/npm/v/@assemblyscript/loader.svg?label=loader&color=007acc&logo=npm" alt="npm loader version" /></a> <a href="https://discord.gg/assemblyscript"><img src="https://img.shields.io/discord/721472913886281818.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2" alt="Discord online" /></a> </p> <p align="justify"><strong>AssemblyScript</strong> compiles a strict variant of <a href="http://www.typescriptlang.org">TypeScript</a> (basically JavaScript with types) to <a href="http://webassembly.org">WebAssembly</a> using <a href="https://github.com/WebAssembly/binaryen">Binaryen</a>. 
It generates lean and mean WebAssembly modules while being just an <code>npm install</code> away.</p> <h3 align="center"> <a href="https://assemblyscript.org">About</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/introduction.html">Introduction</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/quick-start.html">Quick&nbsp;start</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/examples.html">Examples</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/development.html">Development&nbsp;instructions</a> </h3> <br> <h2 align="center">Contributors</h2> <p align="center"> <a href="https://assemblyscript.org/#contributors"><img src="https://assemblyscript.org/contributors.svg" alt="Contributor logos" width="720" /></a> </p> <h2 align="center">Thanks to our sponsors!</h2> <p align="justify">Most of the core team members and most contributors do this open source work in their free time. If you use AssemblyScript for a serious task or plan to do so, and you'd like us to invest more time on it, <a href="https://opencollective.com/assemblyscript/donate" target="_blank" rel="noopener">please donate</a> to our <a href="https://opencollective.com/assemblyscript" target="_blank" rel="noopener">OpenCollective</a>. By sponsoring this project, your logo will show up below. Thank you so much for your support!</p> <p align="center"> <a href="https://assemblyscript.org/#sponsors"><img src="https://assemblyscript.org/sponsors.svg" alt="Sponsor logos" width="720" /></a> </p> # lodash.merge v4.6.2 The [Lodash](https://lodash.com/) method `_.merge` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.merge ``` In Node.js: ```js var merge = require('lodash.merge'); ``` See the [documentation](https://lodash.com/docs#merge) or [package source](https://github.com/lodash/lodash/blob/4.6.2-npm-packages/lodash.merge) for more details. # ShellJS - Unix shell commands for Node.js [![Travis](https://img.shields.io/travis/shelljs/shelljs/master.svg?style=flat-square&label=unix)](https://travis-ci.org/shelljs/shelljs) [![AppVeyor](https://img.shields.io/appveyor/ci/shelljs/shelljs/master.svg?style=flat-square&label=windows)](https://ci.appveyor.com/project/shelljs/shelljs/branch/master) [![Codecov](https://img.shields.io/codecov/c/github/shelljs/shelljs/master.svg?style=flat-square&label=coverage)](https://codecov.io/gh/shelljs/shelljs) [![npm version](https://img.shields.io/npm/v/shelljs.svg?style=flat-square)](https://www.npmjs.com/package/shelljs) [![npm downloads](https://img.shields.io/npm/dm/shelljs.svg?style=flat-square)](https://www.npmjs.com/package/shelljs) ShellJS is a portable **(Windows/Linux/OS X)** implementation of Unix shell commands on top of the Node.js API. You can use it to eliminate your shell script's dependency on Unix while still keeping its familiar and powerful commands. You can also install it globally so you can run it from outside Node projects - say goodbye to those gnarly Bash scripts! ShellJS is proudly tested on every node release since `v4`! 
The project is [unit-tested](http://travis-ci.org/shelljs/shelljs) and battle-tested in projects like: + [Firebug](http://getfirebug.com/) - Firefox's infamous debugger + [JSHint](http://jshint.com) & [ESLint](http://eslint.org/) - popular JavaScript linters + [Zepto](http://zeptojs.com) - jQuery-compatible JavaScript library for modern browsers + [Yeoman](http://yeoman.io/) - Web application stack and development tool + [Deployd.com](http://deployd.com) - Open source PaaS for quick API backend generation + And [many more](https://npmjs.org/browse/depended/shelljs). If you have feedback, suggestions, or need help, feel free to post in our [issue tracker](https://github.com/shelljs/shelljs/issues). Think ShellJS is cool? Check out some related projects in our [Wiki page](https://github.com/shelljs/shelljs/wiki)! Upgrading from an older version? Check out our [breaking changes](https://github.com/shelljs/shelljs/wiki/Breaking-Changes) page to see what changes to watch out for while upgrading. ## Command line use If you just want cross platform UNIX commands, checkout our new project [shelljs/shx](https://github.com/shelljs/shx), a utility to expose `shelljs` to the command line. For example: ``` $ shx mkdir -p foo $ shx touch foo/bar.txt $ shx rm -rf foo ``` ## Plugin API ShellJS now supports third-party plugins! You can learn more about using plugins and writing your own ShellJS commands in [the wiki](https://github.com/shelljs/shelljs/wiki/Using-ShellJS-Plugins). ## A quick note about the docs For documentation on all the latest features, check out our [README](https://github.com/shelljs/shelljs). To read docs that are consistent with the latest release, check out [the npm page](https://www.npmjs.com/package/shelljs) or [shelljs.org](http://documentup.com/shelljs/shelljs). ## Installing Via npm: ```bash $ npm install [-g] shelljs ``` ## Examples ```javascript var shell = require('shelljs'); if (!shell.which('git')) { shell.echo('Sorry, this script requires git'); shell.exit(1); } // Copy files to release dir shell.rm('-rf', 'out/Release'); shell.cp('-R', 'stuff/', 'out/Release'); // Replace macros in each .js file shell.cd('lib'); shell.ls('*.js').forEach(function (file) { shell.sed('-i', 'BUILD_VERSION', 'v0.1.2', file); shell.sed('-i', /^.*REMOVE_THIS_LINE.*$/, '', file); shell.sed('-i', /.*REPLACE_LINE_WITH_MACRO.*\n/, shell.cat('macro.js'), file); }); shell.cd('..'); // Run external tool synchronously if (shell.exec('git commit -am "Auto-commit"').code !== 0) { shell.echo('Error: Git commit failed'); shell.exit(1); } ``` ## Exclude options If you need to pass a parameter that looks like an option, you can do so like: ```js shell.grep('--', '-v', 'path/to/file'); // Search for "-v", no grep options shell.cp('-R', '-dir', 'outdir'); // If already using an option, you're done ``` ## Global vs. Local We no longer recommend using a global-import for ShellJS (i.e. `require('shelljs/global')`). While still supported for convenience, this pollutes the global namespace, and should therefore only be used with caution. Instead, we recommend a local import (standard for npm packages): ```javascript var shell = require('shelljs'); shell.echo('hello world'); ``` <!-- DO NOT MODIFY BEYOND THIS POINT - IT'S AUTOMATICALLY GENERATED --> ## Command reference All commands run synchronously, unless otherwise stated. All commands accept standard bash globbing characters (`*`, `?`, etc.), compatible with the [node `glob` module](https://github.com/isaacs/node-glob). 
For less-commonly used commands and features, please check out our [wiki page](https://github.com/shelljs/shelljs/wiki). ### cat([options,] file [, file ...]) ### cat([options,] file_array) Available options: + `-n`: number all output lines Examples: ```javascript var str = cat('file*.txt'); var str = cat('file1', 'file2'); var str = cat(['file1', 'file2']); // same as above ``` Returns a string containing the given file, or a concatenated string containing the files if more than one file is given (a new line character is introduced between each file). ### cd([dir]) Changes to directory `dir` for the duration of the script. Changes to home directory if no argument is supplied. ### chmod([options,] octal_mode || octal_string, file) ### chmod([options,] symbolic_mode, file) Available options: + `-v`: output a diagnostic for every file processed + `-c`: like verbose, but report only when a change is made + `-R`: change files and directories recursively Examples: ```javascript chmod(755, '/Users/brandon'); chmod('755', '/Users/brandon'); // same as above chmod('u+x', '/Users/brandon'); chmod('-R', 'a-w', '/Users/brandon'); ``` Alters the permissions of a file or directory by either specifying the absolute permissions in octal form or expressing the changes in symbols. This command tries to mimic the POSIX behavior as much as possible. Notable exceptions: + In symbolic modes, `a-r` and `-r` are identical. No consideration is given to the `umask`. + There is no "quiet" option, since default behavior is to run silent. ### cp([options,] source [, source ...], dest) ### cp([options,] source_array, dest) Available options: + `-f`: force (default behavior) + `-n`: no-clobber + `-u`: only copy if `source` is newer than `dest` + `-r`, `-R`: recursive + `-L`: follow symlinks + `-P`: don't follow symlinks Examples: ```javascript cp('file1', 'dir1'); cp('-R', 'path/to/dir/', '~/newCopy/'); cp('-Rf', '/tmp/*', '/usr/local/*', '/home/tmp'); cp('-Rf', ['/tmp/*', '/usr/local/*'], '/home/tmp'); // same as above ``` Copies files. ### pushd([options,] [dir | '-N' | '+N']) Available options: + `-n`: Suppresses the normal change of directory when adding directories to the stack, so that only the stack is manipulated. + `-q`: Suppresses output to the console. Arguments: + `dir`: Sets the current working directory to the top of the stack, then executes the equivalent of `cd dir`. + `+N`: Brings the Nth directory (counting from the left of the list printed by dirs, starting with zero) to the top of the list by rotating the stack. + `-N`: Brings the Nth directory (counting from the right of the list printed by dirs, starting with zero) to the top of the list by rotating the stack. Examples: ```javascript // process.cwd() === '/usr' pushd('/etc'); // Returns /etc /usr pushd('+1'); // Returns /usr /etc ``` Save the current directory on the top of the directory stack and then `cd` to `dir`. With no arguments, `pushd` exchanges the top two directories. Returns an array of paths in the stack. ### popd([options,] ['-N' | '+N']) Available options: + `-n`: Suppress the normal directory change when removing directories from the stack, so that only the stack is manipulated. + `-q`: Suppresses output to the console. Arguments: + `+N`: Removes the Nth directory (counting from the left of the list printed by dirs), starting with zero. + `-N`: Removes the Nth directory (counting from the right of the list printed by dirs), starting with zero. 
Examples: ```javascript echo(process.cwd()); // '/usr' pushd('/etc'); // '/etc /usr' echo(process.cwd()); // '/etc' popd(); // '/usr' echo(process.cwd()); // '/usr' ``` When no arguments are given, `popd` removes the top directory from the stack and performs a `cd` to the new top directory. The elements are numbered from 0, starting at the first directory listed with dirs (i.e., `popd` is equivalent to `popd +0`). Returns an array of paths in the stack. ### dirs([options | '+N' | '-N']) Available options: + `-c`: Clears the directory stack by deleting all of the elements. + `-q`: Suppresses output to the console. Arguments: + `+N`: Displays the Nth directory (counting from the left of the list printed by dirs when invoked without options), starting with zero. + `-N`: Displays the Nth directory (counting from the right of the list printed by dirs when invoked without options), starting with zero. Display the list of currently remembered directories. Returns an array of paths in the stack, or a single path if `+N` or `-N` was specified. See also: `pushd`, `popd` ### echo([options,] string [, string ...]) Available options: + `-e`: interpret backslash escapes (default) + `-n`: remove trailing newline from output Examples: ```javascript echo('hello world'); var str = echo('hello world'); echo('-n', 'no newline at end'); ``` Prints `string` to stdout, and returns string with additional utility methods like `.to()`. ### exec(command [, options] [, callback]) Available options: + `async`: Asynchronous execution. If a callback is provided, it will be set to `true`, regardless of the passed value (default: `false`). + `silent`: Do not echo program output to console (default: `false`). + `encoding`: Character encoding to use. Affects the values returned to stdout and stderr, and what is written to stdout and stderr when not in silent mode (default: `'utf8'`). + and any option available to Node.js's [`child_process.exec()`](https://nodejs.org/api/child_process.html#child_process_child_process_exec_command_options_callback) Examples: ```javascript var version = exec('node --version', {silent:true}).stdout; var child = exec('some_long_running_process', {async:true}); child.stdout.on('data', function(data) { /* ... do something with data ... */ }); exec('some_long_running_process', function(code, stdout, stderr) { console.log('Exit code:', code); console.log('Program output:', stdout); console.log('Program stderr:', stderr); }); ``` Executes the given `command` _synchronously_, unless otherwise specified. When in synchronous mode, this returns a `ShellString` (compatible with ShellJS v0.6.x, which returns an object of the form `{ code:..., stdout:... , stderr:... }`). Otherwise, this returns the child process object, and the `callback` receives the arguments `(code, stdout, stderr)`. Not seeing the behavior you want? `exec()` runs everything through `sh` by default (or `cmd.exe` on Windows), which differs from `bash`. If you need bash-specific behavior, try out the `{shell: 'path/to/bash'}` option. ### find(path [, path ...]) ### find(path_array) Examples: ```javascript find('src', 'lib'); find(['src', 'lib']); // same as above find('.').filter(function(file) { return file.match(/\.js$/); }); ``` Returns array of all files (however deep) in the given paths. The main difference from `ls('-R', path)` is that the resulting file names include the base directories (e.g., `lib/resources/file1` instead of just `file1`). 
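To make the difference from `ls('-R', path)` concrete, here is a small sketch (the `lib/resources` layout is hypothetical):

```javascript
var shell = require('shelljs');

// Hypothetical tree: lib/resources/file1.js
// find() keeps the base directory in each returned path...
var fromFind = shell.find('lib');   // e.g. contains 'lib/resources/file1.js'

// ...while ls -R returns names relative to the listed directory.
var fromLs = shell.ls('-R', 'lib'); // e.g. contains 'resources/file1.js'

// The full relative paths make find() convenient for filtering:
var jsFiles = shell.find('lib').filter(function (file) {
  return file.match(/\.js$/);
});
```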
### grep([options,] regex_filter, file [, file ...]) ### grep([options,] regex_filter, file_array) Available options: + `-v`: Invert `regex_filter` (only print non-matching lines). + `-l`: Print only filenames of matching files. + `-i`: Ignore case. Examples: ```javascript grep('-v', 'GLOBAL_VARIABLE', '*.js'); grep('GLOBAL_VARIABLE', '*.js'); ``` Reads input string from given files and returns a string containing all lines of the file that match the given `regex_filter`. ### head([{'-n': \<num\>},] file [, file ...]) ### head([{'-n': \<num\>},] file_array) Available options: + `-n <num>`: Show the first `<num>` lines of the files Examples: ```javascript var str = head({'-n': 1}, 'file*.txt'); var str = head('file1', 'file2'); var str = head(['file1', 'file2']); // same as above ``` Read the start of a file. ### ln([options,] source, dest) Available options: + `-s`: symlink + `-f`: force Examples: ```javascript ln('file', 'newlink'); ln('-sf', 'file', 'existing'); ``` Links `source` to `dest`. Use `-f` to force the link, should `dest` already exist. ### ls([options,] [path, ...]) ### ls([options,] path_array) Available options: + `-R`: recursive + `-A`: all files (include files beginning with `.`, except for `.` and `..`) + `-L`: follow symlinks + `-d`: list directories themselves, not their contents + `-l`: list objects representing each file, each with fields containing `ls -l` output fields. See [`fs.Stats`](https://nodejs.org/api/fs.html#fs_class_fs_stats) for more info Examples: ```javascript ls('projs/*.js'); ls('-R', '/users/me', '/tmp'); ls('-R', ['/users/me', '/tmp']); // same as above ls('-l', 'file.txt'); // { name: 'file.txt', mode: 33188, nlink: 1, ...} ``` Returns array of files in the given `path`, or files in the current directory if no `path` is provided. ### mkdir([options,] dir [, dir ...]) ### mkdir([options,] dir_array) Available options: + `-p`: full path (and create intermediate directories, if necessary) Examples: ```javascript mkdir('-p', '/tmp/a/b/c/d', '/tmp/e/f/g'); mkdir('-p', ['/tmp/a/b/c/d', '/tmp/e/f/g']); // same as above ``` Creates directories. ### mv([options,] source [, source ...], dest) ### mv([options,] source_array, dest) Available options: + `-f`: force (default behavior) + `-n`: no-clobber Examples: ```javascript mv('-n', 'file', 'dir/'); mv('file1', 'file2', 'dir/'); mv(['file1', 'file2'], 'dir/'); // same as above ``` Moves `source` file(s) to `dest`. ### pwd() Returns the current directory. ### rm([options,] file [, file ...]) ### rm([options,] file_array) Available options: + `-f`: force + `-r, -R`: recursive Examples: ```javascript rm('-rf', '/tmp/*'); rm('some_file.txt', 'another_file.txt'); rm(['some_file.txt', 'another_file.txt']); // same as above ``` Removes files. ### sed([options,] search_regex, replacement, file [, file ...]) ### sed([options,] search_regex, replacement, file_array) Available options: + `-i`: Replace contents of `file` in-place. _Note that no backups will be created!_ Examples: ```javascript sed('-i', 'PROGRAM_VERSION', 'v0.1.3', 'source.js'); sed(/.*DELETE_THIS_LINE.*\n/, '', 'source.js'); ``` Reads an input string from `file`s, and performs a JavaScript `replace()` on the input using the given `search_regex` and `replacement` string or function. Returns the new string after replacement. Note: Like unix `sed`, ShellJS `sed` supports capture groups. 
Capture groups are specified using the `$n` syntax: ```javascript sed(/(\w+)\s(\w+)/, '$2, $1', 'file.txt'); ``` ### set(options) Available options: + `+/-e`: exit upon error (`config.fatal`) + `+/-v`: verbose: show all commands (`config.verbose`) + `+/-f`: disable filename expansion (globbing) Examples: ```javascript set('-e'); // exit upon first error set('+e'); // this undoes a "set('-e')" ``` Sets global configuration variables. ### sort([options,] file [, file ...]) ### sort([options,] file_array) Available options: + `-r`: Reverse the results + `-n`: Compare according to numerical value Examples: ```javascript sort('foo.txt', 'bar.txt'); sort('-r', 'foo.txt'); ``` Return the contents of the `file`s, sorted line-by-line. Sorting multiple files mixes their content (just as unix `sort` does). ### tail([{'-n': \<num\>},] file [, file ...]) ### tail([{'-n': \<num\>},] file_array) Available options: + `-n <num>`: Show the last `<num>` lines of `file`s Examples: ```javascript var str = tail({'-n': 1}, 'file*.txt'); var str = tail('file1', 'file2'); var str = tail(['file1', 'file2']); // same as above ``` Read the end of a `file`. ### tempdir() Examples: ```javascript var tmp = tempdir(); // "/tmp" for most *nix platforms ``` Searches and returns string containing a writeable, platform-dependent temporary directory. Follows Python's [tempfile algorithm](http://docs.python.org/library/tempfile.html#tempfile.tempdir). ### test(expression) Available expression primaries: + `'-b', 'path'`: true if path is a block device + `'-c', 'path'`: true if path is a character device + `'-d', 'path'`: true if path is a directory + `'-e', 'path'`: true if path exists + `'-f', 'path'`: true if path is a regular file + `'-L', 'path'`: true if path is a symbolic link + `'-p', 'path'`: true if path is a pipe (FIFO) + `'-S', 'path'`: true if path is a socket Examples: ```javascript if (test('-d', path)) { /* do something with dir */ }; if (!test('-f', path)) continue; // skip if it's a regular file ``` Evaluates `expression` using the available primaries and returns corresponding value. ### ShellString.prototype.to(file) Examples: ```javascript cat('input.txt').to('output.txt'); ``` Analogous to the redirection operator `>` in Unix, but works with `ShellStrings` (such as those returned by `cat`, `grep`, etc.). _Like Unix redirections, `to()` will overwrite any existing file!_ ### ShellString.prototype.toEnd(file) Examples: ```javascript cat('input.txt').toEnd('output.txt'); ``` Analogous to the redirect-and-append operator `>>` in Unix, but works with `ShellStrings` (such as those returned by `cat`, `grep`, etc.). ### touch([options,] file [, file ...]) ### touch([options,] file_array) Available options: + `-a`: Change only the access time + `-c`: Do not create any files + `-m`: Change only the modification time + `-d DATE`: Parse `DATE` and use it instead of current time + `-r FILE`: Use `FILE`'s times instead of current time Examples: ```javascript touch('source.js'); touch('-c', '/path/to/some/dir/source.js'); touch({ '-r': FILE }, '/path/to/some/dir/source.js'); ``` Update the access and modification times of each `FILE` to the current time. A `FILE` argument that does not exist is created empty, unless `-c` is supplied. This is a partial implementation of [`touch(1)`](http://linux.die.net/man/1/touch). 
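As a small illustration of how `test()` combines with the commands above in a script (the paths are made up for this sketch):

```javascript
var shell = require('shelljs');

// Create the directory only if it is missing, then drop a marker file in it.
if (!shell.test('-d', 'build')) {
  shell.mkdir('-p', 'build');
}
shell.touch('build/.timestamp'); // created empty if absent, otherwise its times are updated
```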
### uniq([options,] [input, [output]]) Available options: + `-i`: Ignore case while comparing + `-c`: Prefix lines by the number of occurrences + `-d`: Only print duplicate lines, one for each group of identical lines Examples: ```javascript uniq('foo.txt'); uniq('-i', 'foo.txt'); uniq('-cd', 'foo.txt', 'bar.txt'); ``` Filter adjacent matching lines from `input`. ### which(command) Examples: ```javascript var nodeExec = which('node'); ``` Searches for `command` in the system's `PATH`. On Windows, this uses the `PATHEXT` variable to append the extension if it's not already executable. Returns string containing the absolute path to `command`. ### exit(code) Exits the current process with the given exit `code`. ### error() Tests if error occurred in the last command. Returns a truthy value if an error returned, or a falsy value otherwise. **Note**: do not rely on the return value to be an error message. If you need the last error message, use the `.stderr` attribute from the last command's return value instead. ### ShellString(str) Examples: ```javascript var foo = ShellString('hello world'); ``` Turns a regular string into a string-like object similar to what each command returns. This has special methods, like `.to()` and `.toEnd()`. ### env['VAR_NAME'] Object containing environment variables (both getter and setter). Shortcut to `process.env`. ### Pipes Examples: ```javascript grep('foo', 'file1.txt', 'file2.txt').sed(/o/g, 'a').to('output.txt'); echo('files with o\'s in the name:\n' + ls().grep('o')); cat('test.js').exec('node'); // pipe to exec() call ``` Commands can send their output to another command in a pipe-like fashion. `sed`, `grep`, `cat`, `exec`, `to`, and `toEnd` can appear on the right-hand side of a pipe. Pipes can be chained. ## Configuration ### config.silent Example: ```javascript var sh = require('shelljs'); var silentState = sh.config.silent; // save old silent state sh.config.silent = true; /* ... */ sh.config.silent = silentState; // restore old silent state ``` Suppresses all command output if `true`, except for `echo()` calls. Default is `false`. ### config.fatal Example: ```javascript require('shelljs/global'); config.fatal = true; // or set('-e'); cp('this_file_does_not_exist', '/dev/null'); // throws Error here /* more commands... */ ``` If `true`, the script will throw a Javascript error when any shell.js command encounters an error. Default is `false`. This is analogous to Bash's `set -e`. ### config.verbose Example: ```javascript config.verbose = true; // or set('-v'); cd('dir/'); rm('-rf', 'foo.txt', 'bar.txt'); exec('echo hello'); ``` Will print each command as follows: ``` cd dir/ rm -rf foo.txt bar.txt exec echo hello ``` ### config.globOptions Example: ```javascript config.globOptions = {nodir: true}; ``` Use this value for calls to `glob.sync()` instead of the default options. ### config.reset() Example: ```javascript var shell = require('shelljs'); // Make changes to shell.config, and do stuff... /* ... */ shell.config.reset(); // reset to original state // Do more stuff, but with original settings /* ... 
*/ ``` Reset `shell.config` to the defaults: ```javascript { fatal: false, globOptions: {}, maxdepth: 255, noglob: false, silent: false, verbose: false, } ``` ## Team | [![Nate Fischer](https://avatars.githubusercontent.com/u/5801521?s=130)](https://github.com/nfischer) | [![Brandon Freitag](https://avatars1.githubusercontent.com/u/5988055?v=3&s=130)](http://github.com/freitagbr) | |:---:|:---:| | [Nate Fischer](https://github.com/nfischer) | [Brandon Freitag](http://github.com/freitagbr) | # emoji-regex [![Build status](https://travis-ci.org/mathiasbynens/emoji-regex.svg?branch=master)](https://travis-ci.org/mathiasbynens/emoji-regex) _emoji-regex_ offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard. This repository contains a script that generates this regular expression based on [the data from Unicode v12](https://github.com/mathiasbynens/unicode-12.0.0). Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard. ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install emoji-regex ``` In [Node.js](https://nodejs.org/): ```js const emojiRegex = require('emoji-regex'); // Note: because the regular expression has the global flag set, this module // exports a function that returns the regex rather than exporting the regular // expression itself, to make it impossible to (accidentally) mutate the // original regular expression. const text = ` \u{231A}: ⌚ default emoji presentation character (Emoji_Presentation) \u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji \u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base) \u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier `; const regex = emojiRegex(); let match; while (match = regex.exec(text)) { const emoji = match[0]; console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`); } ``` Console output: ``` Matched sequence ⌚ — code points: 1 Matched sequence ⌚ — code points: 1 Matched sequence ↔️ — code points: 2 Matched sequence ↔️ — code points: 2 Matched sequence 👩 — code points: 1 Matched sequence 👩 — code points: 1 Matched sequence 👩🏿 — code points: 2 Matched sequence 👩🏿 — code points: 2 ``` To match emoji in their textual representation as well (i.e. emoji that are not `Emoji_Presentation` symbols and that aren’t forced to render as emoji by a variation selector), `require` the other regex: ```js const emojiRegex = require('emoji-regex/text.js'); ``` Additionally, in environments which support ES2015 Unicode escapes, you may `require` ES2015-style versions of the regexes: ```js const emojiRegex = require('emoji-regex/es2015/index.js'); const emojiRegexText = require('emoji-regex/es2015/text.js'); ``` ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License _emoji-regex_ is available under the [MIT](https://mths.be/mit) license. # axios // core The modules found in `core/` should be modules that are specific to the domain logic of axios. These modules would most likely not make sense to be consumed outside of the axios module, as their logic is too specific. 
Some examples of core modules are: - Dispatching requests - Managing interceptors - Handling config # cliui ![ci](https://github.com/yargs/cliui/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/cliui) easily create complex multi-column command-line-interfaces. ## Example ```js const ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` ## Deno/ESM Support As of `v7` `cliui` supports [Deno](https://github.com/denoland/deno) and [ESM](https://nodejs.org/api/esm.html#esm_ecmascript_modules): ```typescript import cliui from "https://deno.land/x/cliui/deno.ts"; const ui = cliui({}) ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div({ text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. 
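The `span()` and `resetOutput()` methods above have no example in this README, so here is a hedged sketch (the texts and widths are arbitrary):

```js
const ui = require('cliui')({ width: 60 })

// span(): the row started here is continued by the next div() call,
// instead of beginning on a new line.
ui.span({ text: 'Usage:', width: 10 })
ui.div('$0 [command] [options]')
console.log(ui.toString())

// resetOutput(): clears the accumulated rows but keeps width/wrap,
// so the same instance can be reused for a fresh block of output.
ui.resetOutput()
ui.div('A new, unrelated block of text')
console.log(ui.toString())
```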
# which-module > Find the module object for something that was require()d [![Build Status](https://travis-ci.org/nexdrew/which-module.svg?branch=master)](https://travis-ci.org/nexdrew/which-module) [![Coverage Status](https://coveralls.io/repos/github/nexdrew/which-module/badge.svg?branch=master)](https://coveralls.io/github/nexdrew/which-module?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) Find the `module` object in `require.cache` for something that was `require()`d or `import`ed - essentially a reverse `require()` lookup. Useful for libs that want to e.g. lookup a filename for a module or submodule that it did not `require()` itself. ## Install and Usage ``` npm install --save which-module ``` ```js const whichModule = require('which-module') console.log(whichModule(require('something'))) // Module { // id: '/path/to/project/node_modules/something/index.js', // exports: [Function], // parent: ..., // filename: '/path/to/project/node_modules/something/index.js', // loaded: true, // children: [], // paths: [ '/path/to/project/node_modules/something/node_modules', // '/path/to/project/node_modules', // '/path/to/node_modules', // '/path/node_modules', // '/node_modules' ] } ``` ## API ### `whichModule(exported)` Return the [`module` object](https://nodejs.org/api/modules.html#modules_the_module_object), if any, that represents the given argument in the `require.cache`. `exported` can be anything that was previously `require()`d or `import`ed as a module, submodule, or dependency - which means `exported` is identical to the `module.exports` returned by this method. If `exported` did not come from the `exports` of a `module` in `require.cache`, then this method returns `null`. ## License ISC © Contributors [![NPM version](https://img.shields.io/npm/v/esprima.svg)](https://www.npmjs.com/package/esprima) [![npm download](https://img.shields.io/npm/dm/esprima.svg)](https://www.npmjs.com/package/esprima) [![Build Status](https://img.shields.io/travis/jquery/esprima/master.svg)](https://travis-ci.org/jquery/esprima) [![Coverage Status](https://img.shields.io/codecov/c/github/jquery/esprima/master.svg)](https://codecov.io/github/jquery/esprima) **Esprima** ([esprima.org](http://esprima.org), BSD license) is a high performance, standard-compliant [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) parser written in ECMAScript (also popularly known as [JavaScript](https://en.wikipedia.org/wiki/JavaScript)). Esprima is created and maintained by [Ariya Hidayat](https://twitter.com/ariyahidayat), with the help of [many contributors](https://github.com/jquery/esprima/contributors). 
### Features - Full support for ECMAScript 2017 ([ECMA-262 8th Edition](http://www.ecma-international.org/publications/standards/Ecma-262.htm)) - Sensible [syntax tree format](https://github.com/estree/estree/blob/master/es5.md) as standardized by [ESTree project](https://github.com/estree/estree) - Experimental support for [JSX](https://facebook.github.io/jsx/), a syntax extension for [React](https://facebook.github.io/react/) - Optional tracking of syntax node location (index-based and line-column) - [Heavily tested](http://esprima.org/test/ci.html) (~1500 [unit tests](https://github.com/jquery/esprima/tree/master/test/fixtures) with [full code coverage](https://codecov.io/github/jquery/esprima)) ### API Esprima can be used to perform [lexical analysis](https://en.wikipedia.org/wiki/Lexical_analysis) (tokenization) or [syntactic analysis](https://en.wikipedia.org/wiki/Parsing) (parsing) of a JavaScript program. A simple example on Node.js REPL: ```javascript > var esprima = require('esprima'); > var program = 'const answer = 42'; > esprima.tokenize(program); [ { type: 'Keyword', value: 'const' }, { type: 'Identifier', value: 'answer' }, { type: 'Punctuator', value: '=' }, { type: 'Numeric', value: '42' } ] > esprima.parseScript(program); { type: 'Program', body: [ { type: 'VariableDeclaration', declarations: [Object], kind: 'const' } ], sourceType: 'script' } ``` For more information, please read the [complete documentation](http://esprima.org/doc). ### Estraverse [![Build Status](https://secure.travis-ci.org/estools/estraverse.svg)](http://travis-ci.org/estools/estraverse) Estraverse ([estraverse](http://github.com/estools/estraverse)) is [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) traversal functions from [esmangle project](http://github.com/estools/esmangle). ### Documentation You can find usage docs at [wiki page](https://github.com/estools/estraverse/wiki/Usage). ### Example Usage The following code will output all variables declared at the root of a file. ```javascript estraverse.traverse(ast, { enter: function (node, parent) { if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration') return estraverse.VisitorOption.Skip; }, leave: function (node, parent) { if (node.type == 'VariableDeclarator') console.log(node.id.name); } }); ``` We can use `this.skip`, `this.remove` and `this.break` functions instead of using Skip, Remove and Break. ```javascript estraverse.traverse(ast, { enter: function (node) { this.break(); } }); ``` And estraverse provides `estraverse.replace` function. When returning node from `enter`/`leave`, current node is replaced with it. ```javascript result = estraverse.replace(tree, { enter: function (node) { // Replace it with replaced. if (node.type === 'Literal') return replaced; } }); ``` By passing `visitor.keys` mapping, we can extend estraverse traversing functionality. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Extending the existing traversing rules. keys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } }); ``` By passing `visitor.fallback` option, we can control the behavior when encountering unknown nodes. 
```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Iterating the child **nodes** of unknown nodes. fallback: 'iteration' }); ``` When `visitor.fallback` is a function, we can determine which keys to visit on each node. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Skip the `argument` property of each node fallback: function(node) { return Object.keys(node).filter(function(key) { return key !== 'argument'; }); } }); ``` ### License Copyright (C) 2012-2016 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. assemblyscript-json # assemblyscript-json ## Table of contents ### Namespaces - [JSON](modules/json.md) ### Classes - [DecoderState](classes/decoderstate.md) - [JSONDecoder](classes/jsondecoder.md) - [JSONEncoder](classes/jsonencoder.md) - [JSONHandler](classes/jsonhandler.md) - [ThrowingJSONHandler](classes/throwingjsonhandler.md) # isobject [![NPM version](https://img.shields.io/npm/v/isobject.svg?style=flat)](https://www.npmjs.com/package/isobject) [![NPM downloads](https://img.shields.io/npm/dm/isobject.svg?style=flat)](https://npmjs.org/package/isobject) [![Build Status](https://img.shields.io/travis/jonschlinkert/isobject.svg?style=flat)](https://travis-ci.org/jonschlinkert/isobject) Returns true if the value is an object and not an array or null. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install isobject --save ``` Use [is-plain-object](https://github.com/jonschlinkert/is-plain-object) if you want only objects that are created by the `Object` constructor. 
## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install isobject ``` Install with [bower](http://bower.io/) ```sh $ bower install isobject ``` ## Usage ```js var isObject = require('isobject'); ``` **True** All of the following return `true`: ```js isObject({}); isObject(Object.create({})); isObject(Object.create(Object.prototype)); isObject(Object.create(null)); isObject({}); isObject(new Foo); isObject(/foo/); ``` **False** All of the following return `false`: ```js isObject(); isObject(function () {}); isObject(1); isObject([]); isObject(undefined); isObject(null); ``` ## Related projects You might also be interested in these projects: * [merge-deep](https://www.npmjs.com/package/merge-deep): Recursively merge values in a javascript object. | [homepage](https://github.com/jonschlinkert/merge-deep) * [extend-shallow](https://www.npmjs.com/package/extend-shallow): Extend an object with the properties of additional objects. node.js/javascript util. | [homepage](https://github.com/jonschlinkert/extend-shallow) * [is-plain-object](https://www.npmjs.com/package/is-plain-object): Returns true if an object was created by the `Object` constructor. | [homepage](https://github.com/jonschlinkert/is-plain-object) * [kind-of](https://www.npmjs.com/package/kind-of): Get the native type of a value. | [homepage](https://github.com/jonschlinkert/kind-of) ## Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](https://github.com/jonschlinkert/isobject/issues/new). ## Building docs Generate readme and API documentation with [verb](https://github.com/verbose/verb): ```sh $ npm install verb && npm run docs ``` Or, if [verb](https://github.com/verbose/verb) is installed globally: ```sh $ verb ``` ## Running tests Install dev dependencies: ```sh $ npm install -d && npm test ``` ## Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](http://twitter.com/jonschlinkert) ## License Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT license](https://github.com/jonschlinkert/isobject/blob/master/LICENSE). *** _This file was generated by [verb](https://github.com/verbose/verb), v0.9.0, on April 25, 2016._ 
# cliui [![Build Status](https://travis-ci.org/yargs/cliui.svg)](https://travis-ci.org/yargs/cliui) [![Coverage Status](https://coveralls.io/repos/yargs/cliui/badge.svg?branch=)](https://coveralls.io/r/yargs/cliui?branch=) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) easily create complex multi-column command-line-interfaces. ## Example ```js var ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 2, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. The DSL, methods, and options are the same as in the newer cliui README above. # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Specification conformance whatwg-url is currently up to date with the URL spec up to commit [7ae1c69](https://github.com/whatwg/url/commit/7ae1c691c96f0d82fafa24c33aa1e8df9ffbf2bc). For `file:` URLs, whose [origin is left unspecified](https://url.spec.whatwg.org/#concept-url-origin), whatwg-url chooses to use a new opaque origin (which serializes to `"null"`). ## API ### The `URL` and `URLSearchParams` classes The main API is provided by the [`URL`](https://url.spec.whatwg.org/#url-class) and [`URLSearchParams`](https://url.spec.whatwg.org/#interface-urlsearchparams) exports, which follows the spec's behavior in all ways (including e.g. `USVString` conversion). 
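As a minimal sketch of these two classes (the URL values here are purely illustrative):

```js
const { URL, URLSearchParams } = require("whatwg-url");

const url = new URL("/café?topic=api", "https://example.com:8080");
console.log(url.href);   // "https://example.com:8080/caf%C3%A9?topic=api"
console.log(url.host);   // "example.com:8080"

const params = new URLSearchParams(url.search);
params.append("page", "2");
console.log(params.toString()); // "topic=api&page=2"
```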
Most consumers of this library will want to use these. ### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They mostly operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. - [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/origin.html#ascii-serialisation-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` - [Percent decode](https://url.spec.whatwg.org/#percent-decode): `percentDecode(buffer)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - [`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - 
[`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - [`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by `null`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ `null`. ## Development instructions First, install [Node.js](https://nodejs.org/). Then, fetch the dependencies of whatwg-url, by running from this directory: npm install To run tests: npm test To generate a coverage report: npm run coverage To build and run the live viewer: npm run build npm run build-live-viewer Serve the contents of the `live-viewer` directory using any web server. ## Supporting whatwg-url The jsdom project (including whatwg-url) is a community-driven project maintained by a team of [volunteers](https://github.com/orgs/jsdom/people). You could support us by: - [Getting professional support for whatwg-url](https://tidelift.com/subscription/pkg/npm-whatwg-url?utm_source=npm-whatwg-url&utm_medium=referral&utm_campaign=readme) as part of a Tidelift subscription. Tidelift helps making open source sustainable for us while giving teams assurances for maintenance, licensing, and security. - Contributing directly to the project. # once Only call a function once. ## usage ```javascript var once = require('once') function load (file, cb) { cb = once(cb) loader.load('file') loader.once('load', cb) loader.once('error', cb) } ``` Or add to the Function.prototype in a responsible way: ```javascript // only has to be done once require('once').proto() function load (file, cb) { cb = cb.once() loader.load('file') loader.once('load', cb) loader.once('error', cb) } ``` Ironically, the prototype feature makes this module twice as complicated as necessary. To check whether you function has been called, use `fn.called`. Once the function is called for the first time the return value of the original function is saved in `fn.value` and subsequent calls will continue to return this value. ```javascript var once = require('once') function load (cb) { cb = once(cb) var stream = createStream() stream.once('data', cb) stream.once('end', function () { if (!cb.called) cb(new Error('not found')) }) } ``` ## `once.strict(func)` Throw an error if the function is called twice. Some functions are expected to be called only once. Using `once` for them would potentially hide logical errors. 
In the example below, the `greet` function has to call the callback only once: ```javascript function greet (name, cb) { // return is missing from the if statement // when no name is passed, the callback is called twice if (!name) cb('Hello anonymous') cb('Hello ' + name) } function log (msg) { console.log(msg) } // this will print 'Hello anonymous' but the logical error will be missed greet(null, once(log)) // once.strict will print 'Hello anonymous' and throw an error when the callback is called the second time greet(null, once.strict(log)) ``` 
# Near Bindings Generator Transforms the AssemblyScript AST to serialize exported functions and add `encode` and `decode` functions for generating and parsing JSON strings. ## Using via CLI After installing (`npm install nearprotocol/near-bindgen-as`), it can be added to the CLI arguments of the AssemblyScript compiler as follows: ```bash asc <file> --transform near-bindgen-as ... ``` This module also adds a binary `near-asc` which adds the default arguments required to build NEAR contracts as well as the transformer. ```bash near-asc <input file> <output file> ``` ## Using a script to compile Another way is to add a file such as `asconfig.js`, for example: ```js const compile = require("near-bindgen-as/compiler").compile; compile("assembly/index.ts", // input file "out/index.wasm", // output file [ // "-O1", // Optional arguments "--debug", "--measure" ], // Prints out the final cli arguments passed to compiler. {verbose: true} ); ``` It can then be built with `node asconfig.js`. There is an example of this in the test directory. # inherits Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor). This package exports the standard `inherits` from the Node.js `util` module in a Node environment, but also provides an alternative browser-friendly implementation through the [browser field](https://gist.github.com/shtylman/4339901). The alternative implementation is a literal copy of the standard one, located in a standalone module to avoid requiring `util`. It also has a shim for old browsers with no `Object.create` support. While ensuring that you use the standard `inherits` implementation in a Node.js environment, it allows bundlers such as [browserify](https://github.com/substack/node-browserify) to avoid including the full `util` package in your client code if all you need is the `inherits` function. 
This is worth it, because the browser shim for the `util` package is large and `inherits` is often the only function you need from it. It's recommended to use this package instead of `require('util').inherits` for any code that may be used not only in node.js but also in the browser. ## usage ```js var inherits = require('inherits'); // then use exactly as the standard one ``` ## note on version ~1.0 Version ~1.0 had a completely different motivation and is compatible neither with 2.0 nor with the standard node.js `inherits`. If you are using version ~1.0 and planning to switch to ~2.0, be careful: * the new version uses `super_` instead of `super` for referencing the superclass * the new version overwrites the current prototype, while the old one preserves any existing fields on it # v8-compile-cache [![Build Status](https://travis-ci.org/zertosh/v8-compile-cache.svg?branch=master)](https://travis-ci.org/zertosh/v8-compile-cache) `v8-compile-cache` attaches a `require` hook to use [V8's code cache](https://v8project.blogspot.com/2015/07/code-caching.html) to speed up instantiation time. The "code cache" is the work of parsing and compiling done by V8. The ability to tap into V8 to produce/consume this cache was introduced in [Node v5.7.0](https://nodejs.org/en/blog/release/v5.7.0/). ## Usage 1. Add the dependency: ```sh $ npm install --save v8-compile-cache ``` 2. Then, in your entry module add: ```js require('v8-compile-cache'); ``` **Requiring `v8-compile-cache` in Node <5.7.0 is a noop – but you need at least Node 4.0.0 to support the ES2015 syntax used by `v8-compile-cache`.** ## Options Set the environment variable `DISABLE_V8_COMPILE_CACHE=1` to disable the cache. The cache directory is defined by the environment variable `V8_COMPILE_CACHE_CACHE_DIR` and defaults to `<os.tmpdir()>/v8-compile-cache-<V8_VERSION>`. ## Internals Cache files are suffixed `.BLOB` and `.MAP` corresponding to the entry module that required `v8-compile-cache`. The cache is _entry module specific_ because it is faster to load the entire code cache into memory at once than it is to read it from disk on a file-by-file basis. ## Benchmarks See https://github.com/zertosh/v8-compile-cache/tree/master/bench. **Load Times:** | Module | Without Cache | With Cache | | ---------------- | -------------:| ----------:| | `babel-core` | `218ms` | `185ms` | | `yarn` | `153ms` | `113ms` | | `yarn` (bundled) | `228ms` | `105ms` | _^ Includes the overhead of loading the cache itself._ ## Acknowledgements * `FileSystemBlobStore` and `NativeCompileCache` are based on Atom's implementation of their v8 compile cache: - https://github.com/atom/atom/blob/b0d7a8a/src/file-system-blob-store.js - https://github.com/atom/atom/blob/b0d7a8a/src/native-compile-cache.js * `mkdirpSync` is based on: - https://github.com/substack/node-mkdirp/blob/f2003bb/index.js#L55-L98 # y18n [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n).
## Examples _simple string translation:_ ```js const __ = require('y18n')().__; console.log(__('my awesome string %s', 'foo')); ``` output: `my awesome string foo` _using tagged template literals_ ```js const __ = require('y18n')().__; const str = 'foo'; console.log(__`my awesome string ${str}`); ``` output: `my awesome string foo` _pluralization support:_ ```js const __n = require('y18n')().__n; console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')); ``` output: `2 fishes foo` ## Deno Example As of `v5` `y18n` supports [Deno](https://github.com/denoland/deno): ```typescript import y18n from "https://deno.land/x/y18n/deno.ts"; const __ = y18n({ locale: 'pirate', directory: './test/locales' }).__ console.info(__`Hi, ${'Ben'} ${'Coe'}!`) ``` You will need to run with `--allow-read` to load alternative locales. ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). 
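Putting the methods above together, here is a minimal sketch of switching and updating locales at runtime. It assumes a `./locales` directory containing `en.json` and `pirate.json` already exists; the file names and strings are illustrative only:

```js
// Minimal sketch of the y18n instance methods documented above.
// Assumes ./locales exists and contains en.json and pirate.json.
const y18n = require('y18n')({
  directory: './locales',
  locale: 'en',
  updateFiles: false // don't write newly observed strings back to disk
});

console.log(y18n.__('Hello %s', 'world')); // looked up in en.json, '%s' replaced

y18n.setLocale('pirate');                  // switch locales at runtime
console.log(y18n.getLocale());             // => 'pirate'

// Merge extra key/value pairs into the current (pirate) locale.
y18n.updateLocale({ 'Hello %s': 'Ahoy %s' });
console.log(y18n.__('Hello %s', 'matey')); // should print 'Ahoy matey'
```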
## License ISC [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard # json-schema-traverse Traverse JSON Schema passing each schema object to callback [![Build Status](https://travis-ci.org/epoberezkin/json-schema-traverse.svg?branch=master)](https://travis-ci.org/epoberezkin/json-schema-traverse) [![npm version](https://badge.fury.io/js/json-schema-traverse.svg)](https://www.npmjs.com/package/json-schema-traverse) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master) ## Install ``` npm install json-schema-traverse ``` ## Usage ```javascript const traverse = require('json-schema-traverse'); const schema = { properties: { foo: {type: 'string'}, bar: {type: 'integer'} } }; traverse(schema, {cb}); // cb is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // Or: traverse(schema, {cb: {pre, post}}); // pre is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // // post is called 3 times with: // 1. {type: 'string'} // 2. {type: 'integer'} // 3. root schema ``` Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved, they are passed as is. Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed. Callback is passed these parameters: - _schema_: the current schema object - _JSON pointer_: from the root schema to the current schema object - _root schema_: the schema passed to `traverse` object - _parent JSON pointer_: from the root schema to the parent schema object (see below) - _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.) - _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema - _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'` ## Traverse objects in all unknown keywords ```javascript const traverse = require('json-schema-traverse'); const schema = { mySchema: { minimum: 1, maximum: 2 } }; traverse(schema, {allKeys: true, cb}); // cb is called 2 times with: // 1. root schema // 2. mySchema ``` Without option `allKeys: true` callback will be called only with root schema. 
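To make the callback parameter list above concrete, here is a small sketch of a callback that receives the arguments in the documented order; the logged output is approximate:

```javascript
const traverse = require('json-schema-traverse');

const schema = {
  properties: {
    foo: {type: 'string'},
    bar: {type: 'integer'}
  }
};

// Parameters arrive in the order documented above.
function cb(schemaObj, jsonPtr, rootSchema, parentJsonPtr, parentKeyword, parentSchema, keyIndex) {
  console.log(jsonPtr || '(root)', '| keyword:', parentKeyword, '| key/index:', keyIndex);
}

traverse(schema, {cb});
// Roughly:
//   (root) | keyword: undefined | key/index: undefined
//   /properties/foo | keyword: properties | key/index: foo
//   /properties/bar | keyword: properties | key/index: bar
```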
## License [MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE) <h1 align="center">Enquirer</h1> <p align="center"> <a href="https://npmjs.org/package/enquirer"> <img src="https://img.shields.io/npm/v/enquirer.svg" alt="version"> </a> <a href="https://travis-ci.org/enquirer/enquirer"> <img src="https://img.shields.io/travis/enquirer/enquirer.svg" alt="travis"> </a> <a href="https://npmjs.org/package/enquirer"> <img src="https://img.shields.io/npm/dm/enquirer.svg" alt="downloads"> </a> </p> <br> <br> <p align="center"> <b>Stylish CLI prompts that are user-friendly, intuitive and easy to create.</b><br> <sub>>_ Prompts should be more like conversations than inquisitions▌</sub> </p> <br> <p align="center"> <sub>(Example shows Enquirer's <a href="#survey-prompt">Survey Prompt</a>)</a></sub> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/survey-prompt.gif" alt="Enquirer Survey Prompt" width="750"><br> <sub>The terminal in all examples is <a href="https://hyper.is/">Hyper</a>, theme is <a href="https://github.com/jonschlinkert/hyper-monokai-extended">hyper-monokai-extended</a>.</sub><br><br> <a href="#built-in-prompts"><strong>See more prompt examples</strong></a> </p> <br> <br> Created by [jonschlinkert](https://github.com/jonschlinkert) and [doowb](https://github.com/doowb), Enquirer is fast, easy to use, and lightweight enough for small projects, while also being powerful and customizable enough for the most advanced use cases. * **Fast** - [Loads in ~4ms](#-performance) (that's about _3-4 times faster than a [single frame of a HD movie](http://www.endmemo.com/sconvert/framespersecondframespermillisecond.php) at 60fps_) * **Lightweight** - Only one dependency, the excellent [ansi-colors](https://github.com/doowb/ansi-colors) by [Brian Woodward](https://github.com/doowb). * **Easy to implement** - Uses promises and async/await and sensible defaults to make prompts easy to create and implement. * **Easy to use** - Thrill your users with a better experience! Navigating around input and choices is a breeze. You can even create [quizzes](examples/fun/countdown.js), or [record](examples/fun/record.js) and [playback](examples/fun/play.js) key bindings to aid with tutorials and videos. * **Intuitive** - Keypress combos are available to simplify usage. * **Flexible** - All prompts can be used standalone or chained together. * **Stylish** - Easily override semantic styles and symbols for any part of the prompt. * **Extensible** - Easily create and use custom prompts by extending Enquirer's built-in [prompts](#-prompts). * **Pluggable** - Add advanced features to Enquirer using plugins. * **Validation** - Optionally validate user input with any prompt. * **Well tested** - All prompts are well-tested, and tests are easy to create without having to use brittle, hacky solutions to spy on prompts or "inject" values. * **Examples** - There are numerous [examples](examples) available to help you get started. If you like Enquirer, please consider starring or tweeting about this project to show your support. Thanks! <br> <p align="center"> <b>>_ Ready to start making prompts your users will love? ▌</b><br> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/heartbeat.gif" alt="Enquirer Select Prompt with heartbeat example" width="750"> </p> <br> <br> ## ❯ Getting started Get started with Enquirer, the most powerful and easy-to-use Node.js library for creating interactive CLI prompts. 
* [Install](#-install) * [Usage](#-usage) * [Enquirer](#-enquirer) * [Prompts](#-prompts) - [Built-in Prompts](#-prompts) - [Custom Prompts](#-custom-prompts) * [Key Bindings](#-key-bindings) * [Options](#-options) * [Release History](#-release-history) * [Performance](#-performance) * [About](#-about) <br> ## ❯ Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install enquirer --save ``` Install with [yarn](https://yarnpkg.com/en/): ```sh $ yarn add enquirer ``` <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/npm-install.gif" alt="Install Enquirer with NPM" width="750"> </p> _(Requires Node.js 8.6 or higher. Please let us know if you need support for an earlier version by creating an [issue](../../issues/new).)_ <br> ## ❯ Usage ### Single prompt The easiest way to get started with enquirer is to pass a [question object](#prompt-options) to the `prompt` method. ```js const { prompt } = require('enquirer'); const response = await prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); // { username: 'jonschlinkert' } ``` _(Examples with `await` need to be run inside an `async` function)_ ### Multiple prompts Pass an array of ["question" objects](#prompt-options) to run a series of prompts. ```js const response = await prompt([ { type: 'input', name: 'name', message: 'What is your name?' }, { type: 'input', name: 'username', message: 'What is your username?' } ]); console.log(response); // { name: 'Edward Chan', username: 'edwardmchan' } ``` ### Different ways to run enquirer #### 1. By importing the specific `built-in prompt` ```js const { Confirm } = require('enquirer'); const prompt = new Confirm({ name: 'question', message: 'Did you like enquirer?' }); prompt.run() .then(answer => console.log('Answer:', answer)); ``` #### 2. By passing the options to `prompt` ```js const { prompt } = require('enquirer'); prompt({ type: 'confirm', name: 'question', message: 'Did you like enquirer?' }) .then(answer => console.log('Answer:', answer)); ``` **Jump to**: [Getting Started](#-getting-started) · [Prompts](#-prompts) · [Options](#-options) · [Key Bindings](#-key-bindings) <br> ## ❯ Enquirer **Enquirer is a prompt runner** Add Enquirer to your JavaScript project with following line of code. ```js const Enquirer = require('enquirer'); ``` The main export of this library is the `Enquirer` class, which has methods and features designed to simplify running prompts. ```js const { prompt } = require('enquirer'); const question = [ { type: 'input', name: 'username', message: 'What is your username?' }, { type: 'password', name: 'password', message: 'What is your password?' } ]; let answers = await prompt(question); console.log(answers); ``` **Prompts control how values are rendered and returned** Each individual prompt is a class with special features and functionality for rendering the types of values you want to show users in the terminal, and subsequently returning the types of values you need to use in your application. **How can I customize prompts?** Below in this guide you will find information about creating [custom prompts](#-custom-prompts). For now, we'll focus on how to customize an existing prompt. All of the individual [prompt classes](#built-in-prompts) in this library are exposed as static properties on Enquirer. This allows them to be used directly without using `enquirer.prompt()`. Use this approach if you need to modify a prompt instance, or listen for events on the prompt. 
**Example** ```js const { Input } = require('enquirer'); const prompt = new Input({ name: 'username', message: 'What is your username?' }); prompt.run() .then(answer => console.log('Username:', answer)) .catch(console.error); ``` ### [Enquirer](index.js#L20) Create an instance of `Enquirer`. **Params** * `options` **{Object}**: (optional) Options to use with all prompts. * `answers` **{Object}**: (optional) Answers object to initialize with. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); ``` ### [register()](index.js#L42) Register a custom prompt type. **Params** * `type` **{String}** * `fn` **{Function|Prompt}**: `Prompt` class, or a function that returns a `Prompt` class. * `returns` **{Object}**: Returns the Enquirer instance **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); enquirer.register('customType', require('./custom-prompt')); ``` ### [prompt()](index.js#L78) Prompt function that takes a "question" object or array of question objects, and returns an object with responses from the user. **Params** * `questions` **{Array|Object}**: Options objects for one or more prompts to run. * `returns` **{Promise}**: Promise that returns an "answers" object with the user's responses. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); const response = await enquirer.prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); ``` ### [use()](index.js#L160) Use an enquirer plugin. **Params** * `plugin` **{Function}**: Plugin function that takes an instance of Enquirer. * `returns` **{Object}**: Returns the Enquirer instance. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); const plugin = enquirer => { // do stuff to enquire instance }; enquirer.use(plugin); ``` ### [Enquirer#prompt](index.js#L210) Prompt function that takes a "question" object or array of question objects, and returns an object with responses from the user. **Params** * `questions` **{Array|Object}**: Options objects for one or more prompts to run. * `returns` **{Promise}**: Promise that returns an "answers" object with the user's responses. **Example** ```js const { prompt } = require('enquirer'); const response = await prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); ``` <br> ## ❯ Prompts This section is about Enquirer's prompts: what they look like, how they work, how to run them, available options, and how to customize the prompts or create your own prompt concept. **Getting started with Enquirer's prompts** * [Prompt](#prompt) - The base `Prompt` class used by other prompts - [Prompt Options](#prompt-options) * [Built-in prompts](#built-in-prompts) * [Prompt Types](#prompt-types) - The base `Prompt` class used by other prompts * [Custom prompts](#%E2%9D%AF-custom-prompts) - Enquirer 2.0 introduced the concept of prompt "types", with the goal of making custom prompts easier than ever to create and use. ### Prompt The base `Prompt` class is used to create all other prompts. ```js const { Prompt } = require('enquirer'); class MyCustomPrompt extends Prompt {} ``` See the documentation for [creating custom prompts](#-custom-prompts) to learn more about how this works. 
#### Prompt Options Each prompt takes an options object (aka "question" object), that implements the following interface: ```js { // required type: string | function, name: string | function, message: string | function | async function, // optional skip: boolean | function | async function, initial: string | function | async function, format: function | async function, result: function | async function, validate: function | async function, } ``` Each property of the options object is described below: | **Property** | **Required?** | **Type** | **Description** | | ------------ | ------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `type` | yes | `string\|function` | Enquirer uses this value to determine the type of prompt to run, but it's optional when prompts are run directly. | | `name` | yes | `string\|function` | Used as the key for the answer on the returned values (answers) object. | | `message` | yes | `string\|function` | The message to display when the prompt is rendered in the terminal. | | `skip` | no | `boolean\|function` | If `true` it will not ask that prompt. | | `initial` | no | `string\|function` | The default value to return if the user does not supply a value. | | `format` | no | `function` | Function to format user input in the terminal. | | `result` | no | `function` | Function to format the final submitted value before it's returned. | | `validate` | no | `function` | Function to validate the submitted value before it's returned. This function may return a boolean or a string. If a string is returned it will be used as the validation error message. | **Example usage** ```js const { prompt } = require('enquirer'); const question = { type: 'input', name: 'username', message: 'What is your username?' }; prompt(question) .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` <br> ### Built-in prompts * [AutoComplete Prompt](#autocomplete-prompt) * [BasicAuth Prompt](#basicauth-prompt) * [Confirm Prompt](#confirm-prompt) * [Form Prompt](#form-prompt) * [Input Prompt](#input-prompt) * [Invisible Prompt](#invisible-prompt) * [List Prompt](#list-prompt) * [MultiSelect Prompt](#multiselect-prompt) * [Numeral Prompt](#numeral-prompt) * [Password Prompt](#password-prompt) * [Quiz Prompt](#quiz-prompt) * [Survey Prompt](#survey-prompt) * [Scale Prompt](#scale-prompt) * [Select Prompt](#select-prompt) * [Sort Prompt](#sort-prompt) * [Snippet Prompt](#snippet-prompt) * [Toggle Prompt](#toggle-prompt) ### AutoComplete Prompt Prompt that auto-completes as the user types, and returns the selected value as a string. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/autocomplete-prompt.gif" alt="Enquirer AutoComplete Prompt" width="750"> </p> **Example Usage** ```js const { AutoComplete } = require('enquirer'); const prompt = new AutoComplete({ name: 'flavor', message: 'Pick your favorite flavor', limit: 10, initial: 2, choices: [ 'Almond', 'Apple', 'Banana', 'Blackberry', 'Blueberry', 'Cherry', 'Chocolate', 'Cinnamon', 'Coconut', 'Cranberry', 'Grape', 'Nougat', 'Orange', 'Pear', 'Pineapple', 'Raspberry', 'Strawberry', 'Vanilla', 'Watermelon', 'Wintergreen' ] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **AutoComplete Options** | Option | Type | Default | Description | | ----------- | ---------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | | `highlight` | `function` | `dim` version of primary style | The color to use when "highlighting" characters in the list that match user input. | | `multiple` | `boolean` | `false` | Allow multiple choices to be selected. | | `suggest` | `function` | Greedy match, returns true if choice message contains input string. | Function that filters choices. Takes user input and a choices array, and returns a list of matching choices. | | `initial` | `number` | 0 | Preselected item in the list of choices. | | `footer` | `function` | None | Function that displays [footer text](https://github.com/enquirer/enquirer/blob/6c2819518a1e2ed284242a99a685655fbaabfa28/examples/autocomplete/option-footer.js#L10) | **Related prompts** * [Select](#select-prompt) * [MultiSelect](#multiselect-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### BasicAuth Prompt Prompt that asks for username and password to authenticate the user. The default implementation of `authenticate` function in `BasicAuth` prompt is to compare the username and password with the values supplied while running the prompt. The implementer is expected to override the `authenticate` function with a custom logic such as making an API request to a server to authenticate the username and password entered and expect a token back. <p align="center"> <img src="https://user-images.githubusercontent.com/13731210/61570485-7ffd9c00-aaaa-11e9-857a-d47dc7008284.gif" alt="Enquirer BasicAuth Prompt" width="750"> </p> **Example Usage** ```js const { BasicAuth } = require('enquirer'); const prompt = new BasicAuth({ name: 'password', message: 'Please enter your password', username: 'rajat-sr', password: '123', showPassword: true }); prompt .run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Confirm Prompt Prompt that returns `true` or `false`. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/confirm-prompt.gif" alt="Enquirer Confirm Prompt" width="750"> </p> **Example Usage** ```js const { Confirm } = require('enquirer'); const prompt = new Confirm({ name: 'question', message: 'Want to answer?' 
}); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Numeral](#numeral-prompt) * [Password](#password-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Form Prompt Prompt that allows the user to enter and submit multiple values on a single terminal screen. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/form-prompt.gif" alt="Enquirer Form Prompt" width="750"> </p> **Example Usage** ```js const { Form } = require('enquirer'); const prompt = new Form({ name: 'user', message: 'Please provide the following information:', choices: [ { name: 'firstname', message: 'First Name', initial: 'Jon' }, { name: 'lastname', message: 'Last Name', initial: 'Schlinkert' }, { name: 'username', message: 'GitHub username', initial: 'jonschlinkert' } ] }); prompt.run() .then(value => console.log('Answer:', value)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Input Prompt Prompt that takes user input and returns a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/input-prompt.gif" alt="Enquirer Input Prompt" width="750"> </p> **Example Usage** ```js const { Input } = require('enquirer'); const prompt = new Input({ message: 'What is your username?', initial: 'jonschlinkert' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.log); ``` You can use [data-store](https://github.com/jonschlinkert/data-store) to store [input history](https://github.com/enquirer/enquirer/blob/master/examples/input/option-history.js) that the user can cycle through (see [source](https://github.com/enquirer/enquirer/blob/8407dc3579123df5e6e20215078e33bb605b0c37/lib/prompts/input.js)). **Related prompts** * [Confirm](#confirm-prompt) * [Numeral](#numeral-prompt) * [Password](#password-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Invisible Prompt Prompt that takes user input, hides it from the terminal, and returns a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/invisible-prompt.gif" alt="Enquirer Invisible Prompt" width="750"> </p> **Example Usage** ```js const { Invisible } = require('enquirer'); const prompt = new Invisible({ name: 'secret', message: 'What is your secret?' }); prompt.run() .then(answer => console.log('Answer:', { secret: answer })) .catch(console.error); ``` **Related prompts** * [Password](#password-prompt) * [Input](#input-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### List Prompt Prompt that returns a list of values, created by splitting the user input. The default split character is `,` with optional trailing whitespace. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/list-prompt.gif" alt="Enquirer List Prompt" width="750"> </p> **Example Usage** ```js const { List } = require('enquirer'); const prompt = new List({ name: 'keywords', message: 'Type comma-separated keywords' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Sort](#sort-prompt) * [Select](#select-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### MultiSelect Prompt Prompt that allows the user to select multiple items from a list of options. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/multiselect-prompt.gif" alt="Enquirer MultiSelect Prompt" width="750"> </p> **Example Usage** ```js const { MultiSelect } = require('enquirer'); const prompt = new MultiSelect({ name: 'value', message: 'Pick your favorite colors', limit: 7, choices: [ { name: 'aqua', value: '#00ffff' }, { name: 'black', value: '#000000' }, { name: 'blue', value: '#0000ff' }, { name: 'fuchsia', value: '#ff00ff' }, { name: 'gray', value: '#808080' }, { name: 'green', value: '#008000' }, { name: 'lime', value: '#00ff00' }, { name: 'maroon', value: '#800000' }, { name: 'navy', value: '#000080' }, { name: 'olive', value: '#808000' }, { name: 'purple', value: '#800080' }, { name: 'red', value: '#ff0000' }, { name: 'silver', value: '#c0c0c0' }, { name: 'teal', value: '#008080' }, { name: 'white', value: '#ffffff' }, { name: 'yellow', value: '#ffff00' } ] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); // Answer: ['aqua', 'blue', 'fuchsia'] ``` **Example key-value pairs** Optionally, pass a `result` function and use the `.map` method to return an object of key-value pairs of the selected names and values: [example](./examples/multiselect/option-result.js) ```js const { MultiSelect } = require('enquirer'); const prompt = new MultiSelect({ name: 'value', message: 'Pick your favorite colors', limit: 7, choices: [ { name: 'aqua', value: '#00ffff' }, { name: 'black', value: '#000000' }, { name: 'blue', value: '#0000ff' }, { name: 'fuchsia', value: '#ff00ff' }, { name: 'gray', value: '#808080' }, { name: 'green', value: '#008000' }, { name: 'lime', value: '#00ff00' }, { name: 'maroon', value: '#800000' }, { name: 'navy', value: '#000080' }, { name: 'olive', value: '#808000' }, { name: 'purple', value: '#800080' }, { name: 'red', value: '#ff0000' }, { name: 'silver', value: '#c0c0c0' }, { name: 'teal', value: '#008080' }, { name: 'white', value: '#ffffff' }, { name: 'yellow', value: '#ffff00' } ], result(names) { return this.map(names); } }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); // Answer: { aqua: '#00ffff', blue: '#0000ff', fuchsia: '#ff00ff' } ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Numeral Prompt Prompt that takes a number as input. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/numeral-prompt.gif" alt="Enquirer Numeral Prompt" width="750"> </p> **Example Usage** ```js const { NumberPrompt } = require('enquirer'); const prompt = new NumberPrompt({ name: 'number', message: 'Please enter a number' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Confirm](#confirm-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Password Prompt Prompt that takes user input and masks it in the terminal. Also see the [invisible prompt](#invisible-prompt) <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/password-prompt.gif" alt="Enquirer Password Prompt" width="750"> </p> **Example Usage** ```js const { Password } = require('enquirer'); const prompt = new Password({ name: 'password', message: 'What is your password?' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Invisible](#invisible-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Quiz Prompt Prompt that allows the user to play multiple-choice quiz questions. <p align="center"> <img src="https://user-images.githubusercontent.com/13731210/61567561-891d4780-aa6f-11e9-9b09-3d504abd24ed.gif" alt="Enquirer Quiz Prompt" width="750"> </p> **Example Usage** ```js const { Quiz } = require('enquirer'); const prompt = new Quiz({ name: 'countries', message: 'How many countries are there in the world?', choices: ['165', '175', '185', '195', '205'], correctChoice: 3 }); prompt .run() .then(answer => { if (answer.correct) { console.log('Correct!'); } else { console.log(`Wrong! Correct answer is ${answer.correctAnswer}`); } }) .catch(console.error); ``` **Quiz Options** | Option | Type | Required | Description | | ----------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------------------ | | `choices` | `array` | Yes | The list of possible answers to the quiz question. | | `correctChoice`| `number` | Yes | Index of the correct choice from the `choices` array. | **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Survey Prompt Prompt that allows the user to provide feedback for a list of questions. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/survey-prompt.gif" alt="Enquirer Survey Prompt" width="750"> </p> **Example Usage** ```js const { Survey } = require('enquirer'); const prompt = new Survey({ name: 'experience', message: 'Please rate your experience', scale: [ { name: '1', message: 'Strongly Disagree' }, { name: '2', message: 'Disagree' }, { name: '3', message: 'Neutral' }, { name: '4', message: 'Agree' }, { name: '5', message: 'Strongly Agree' } ], margin: [0, 0, 2, 1], choices: [ { name: 'interface', message: 'The website has a friendly interface.' }, { name: 'navigation', message: 'The website is easy to navigate.' }, { name: 'images', message: 'The website usually has good images.' }, { name: 'upload', message: 'The website makes it easy to upload images.' }, { name: 'colors', message: 'The website has a pleasing color palette.' 
} ] }); prompt.run() .then(value => console.log('ANSWERS:', value)) .catch(console.error); ``` **Related prompts** * [Scale](#scale-prompt) * [Snippet](#snippet-prompt) * [Select](#select-prompt) *** ### Scale Prompt A more compact version of the [Survey prompt](#survey-prompt), the Scale prompt allows the user to quickly provide feedback using a [Likert Scale](https://en.wikipedia.org/wiki/Likert_scale). <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/scale-prompt.gif" alt="Enquirer Scale Prompt" width="750"> </p> **Example Usage** ```js const { Scale } = require('enquirer'); const prompt = new Scale({ name: 'experience', message: 'Please rate your experience', scale: [ { name: '1', message: 'Strongly Disagree' }, { name: '2', message: 'Disagree' }, { name: '3', message: 'Neutral' }, { name: '4', message: 'Agree' }, { name: '5', message: 'Strongly Agree' } ], margin: [0, 0, 2, 1], choices: [ { name: 'interface', message: 'The website has a friendly interface.', initial: 2 }, { name: 'navigation', message: 'The website is easy to navigate.', initial: 2 }, { name: 'images', message: 'The website usually has good images.', initial: 2 }, { name: 'upload', message: 'The website makes it easy to upload images.', initial: 2 }, { name: 'colors', message: 'The website has a pleasing color palette.', initial: 2 } ] }); prompt.run() .then(value => console.log('ANSWERS:', value)) .catch(console.error); ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Select Prompt Prompt that allows the user to select from a list of options. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/select-prompt.gif" alt="Enquirer Select Prompt" width="750"> </p> **Example Usage** ```js const { Select } = require('enquirer'); const prompt = new Select({ name: 'color', message: 'Pick a flavor', choices: ['apple', 'grape', 'watermelon', 'cherry', 'orange'] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [MultiSelect](#multiselect-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Sort Prompt Prompt that allows the user to sort items in a list. **Example** In this [example](https://github.com/enquirer/enquirer/raw/master/examples/sort/prompt.js), custom styling is applied to the returned values to make it easier to see what's happening. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/sort-prompt.gif" alt="Enquirer Sort Prompt" width="750"> </p> **Example Usage** ```js const colors = require('ansi-colors'); const { Sort } = require('enquirer'); const prompt = new Sort({ name: 'colors', message: 'Sort the colors in order of preference', hint: 'Top is best, bottom is worst', numbered: true, choices: ['red', 'white', 'green', 'cyan', 'yellow'].map(n => ({ name: n, message: colors[n](n) })) }); prompt.run() .then(function(answer = []) { console.log(answer); console.log('Your preferred order of colors is:'); console.log(answer.map(key => colors[key](key)).join('\n')); }) .catch(console.error); ``` **Related prompts** * [List](#list-prompt) * [Select](#select-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Snippet Prompt Prompt that allows the user to replace placeholders in a snippet of code or text. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/snippet-prompt.gif" alt="Prompts" width="750"> </p> **Example Usage** ```js const semver = require('semver'); const { Snippet } = require('enquirer'); const prompt = new Snippet({ name: 'username', message: 'Fill out the fields in package.json', required: true, fields: [ { name: 'author_name', message: 'Author Name' }, { name: 'version', validate(value, state, item, index) { if (item && item.name === 'version' && !semver.valid(value)) { return prompt.styles.danger('version should be a valid semver value'); } return true; } } ], template: `{ "name": "\${name}", "description": "\${description}", "version": "\${version}", "homepage": "https://github.com/\${username}/\${name}", "author": "\${author_name} (https://github.com/\${username})", "repository": "\${username}/\${name}", "license": "\${license:ISC}" } ` }); prompt.run() .then(answer => console.log('Answer:', answer.result)) .catch(console.error); ``` **Related prompts** * [Survey](#survey-prompt) * [AutoComplete](#autocomplete-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Toggle Prompt Prompt that allows the user to toggle between two values then returns `true` or `false`. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/toggle-prompt.gif" alt="Enquirer Toggle Prompt" width="750"> </p> **Example Usage** ```js const { Toggle } = require('enquirer'); const prompt = new Toggle({ message: 'Want to answer?', enabled: 'Yep', disabled: 'Nope' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Confirm](#confirm-prompt) * [Input](#input-prompt) * [Sort](#sort-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Prompt Types There are 5 (soon to be 6!) type classes: * [ArrayPrompt](#arrayprompt) - [Options](#options) - [Properties](#properties) - [Methods](#methods) - [Choices](#choices) - [Defining choices](#defining-choices) - [Choice properties](#choice-properties) - [Related prompts](#related-prompts) * [AuthPrompt](#authprompt) * [BooleanPrompt](#booleanprompt) * DatePrompt (Coming Soon!) * [NumberPrompt](#numberprompt) * [StringPrompt](#stringprompt) Each type is a low-level class that may be used as a starting point for creating higher level prompts. Continue reading to learn how. ### ArrayPrompt The `ArrayPrompt` class is used for creating prompts that display a list of choices in the terminal. 
For example, Enquirer uses this class as the basis for the [Select](#select) and [Survey](#survey) prompts. #### Options In addition to the [options](#options) available to all prompts, Array prompts also support the following options. | **Option** | **Required?** | **Type** | **Description** | | ----------- | ------------- | --------------- | ----------------------------------------------------------------------------------------------------------------------- | | `autofocus` | `no` | `string\|number` | The index or name of the choice that should have focus when the prompt loads. Only one choice may have focus at a time. | | | `stdin` | `no` | `stream` | The input stream to use for emitting keypress events. Defaults to `process.stdin`. | | `stdout` | `no` | `stream` | The output stream to use for writing the prompt to the terminal. Defaults to `process.stdout`. | | | #### Properties Array prompts have the following instance properties and getters. | **Property name** | **Type** | **Description** | | ----------------- | --------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `choices` | `array` | Array of choices that have been normalized from choices passed on the prompt options. | | `cursor` | `number` | Position of the cursor relative to the _user input (string)_. | | `enabled` | `array` | Returns an array of enabled choices. | | `focused` | `array` | Returns the currently selected choice in the visible list of choices. This is similar to the concept of focus in HTML and CSS. Focused choices are always visible (on-screen). When a list of choices is longer than the list of visible choices, and an off-screen choice is _focused_, the list will scroll to the focused choice and re-render. | | `focused` | Gets the currently selected choice. Equivalent to `prompt.choices[prompt.index]`. | | `index` | `number` | Position of the pointer in the _visible list (array) of choices_. | | `limit` | `number` | The number of choices to display on-screen. | | `selected` | `array` | Either a list of enabled choices (when `options.multiple` is true) or the currently focused choice. | | `visible` | `string` | | #### Methods | **Method** | **Description** | | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `pointer()` | Returns the visual symbol to use to identify the choice that currently has focus. The `❯` symbol is often used for this. The pointer is not always visible, as with the `autocomplete` prompt. | | `indicator()` | Returns the visual symbol that indicates whether or not a choice is checked/enabled. | | `focus()` | Sets focus on a choice, if it can be focused. | #### Choices Array prompts support the `choices` option, which is the array of choices users will be able to select from when rendered in the terminal. 
**Type**: `string|object` **Example** ```js const { prompt } = require('enquirer'); const questions = [{ type: 'select', name: 'color', message: 'Favorite color?', initial: 1, choices: [ { name: 'red', message: 'Red', value: '#ff0000' }, //<= choice object { name: 'green', message: 'Green', value: '#00ff00' }, //<= choice object { name: 'blue', message: 'Blue', value: '#0000ff' } //<= choice object ] }]; let answers = await prompt(questions); console.log('Answer:', answers.color); ``` #### Defining choices Whether defined as a string or object, choices are normalized to the following interface: ```js { name: string; message: string | undefined; value: string | undefined; hint: string | undefined; disabled: boolean | string | undefined; } ``` **Example** ```js const question = { name: 'fruit', message: 'Favorite fruit?', choices: ['Apple', 'Orange', 'Raspberry'] }; ``` Normalizes to the following when the prompt is run: ```js const question = { name: 'fruit', message: 'Favorite fruit?', choices: [ { name: 'Apple', message: 'Apple', value: 'Apple' }, { name: 'Orange', message: 'Orange', value: 'Orange' }, { name: 'Raspberry', message: 'Raspberry', value: 'Raspberry' } ] }; ``` #### Choice properties The following properties are supported on `choice` objects. | **Option** | **Type** | **Description** | | ----------- | ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `name` | `string` | The unique key to identify a choice | | `message` | `string` | The message to display in the terminal. `name` is used when this is undefined. | | `value` | `string` | Value to associate with the choice. Useful for creating key-value pairs from user choices. `name` is used when this is undefined. | | `choices` | `array` | Array of "child" choices. | | `hint` | `string` | Help message to display next to a choice. | | `role` | `string` | Determines how the choice will be displayed. Currently the only role supported is `separator`. Additional roles may be added in the future (like `heading`, etc). Please create a [feature request] | | `enabled` | `boolean` | Enabled a choice by default. This is only supported when `options.multiple` is true or on prompts that support multiple choices, like [MultiSelect](#-multiselect). | | `disabled` | `boolean\|string` | Disable a choice so that it cannot be selected. This value may either be `true`, `false`, or a message to display. | | `indicator` | `string\|function` | Custom indicator to render for a choice (like a check or radio button). | #### Related prompts * [AutoComplete](#autocomplete-prompt) * [Form](#form-prompt) * [MultiSelect](#multiselect-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) *** ### AuthPrompt The `AuthPrompt` is used to create prompts to log in user using any authentication method. For example, Enquirer uses this class as the basis for the [BasicAuth Prompt](#basicauth-prompt). You can also find prompt examples in `examples/auth/` folder that utilizes `AuthPrompt` to create OAuth based authentication prompt or a prompt that authenticates using time-based OTP, among others. `AuthPrompt` has a factory function that creates an instance of `AuthPrompt` class and it expects an `authenticate` function, as an argument, which overrides the `authenticate` function of the `AuthPrompt` class. 
#### Methods | **Method** | **Description** | | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `authenticate()` | Contain all the authentication logic. This function should be overridden to implement custom authentication logic. The default `authenticate` function throws an error if no other function is provided. | #### Choices Auth prompt supports the `choices` option, which is the similar to the choices used in [Form Prompt](#form-prompt). **Example** ```js const { AuthPrompt } = require('enquirer'); function authenticate(value, state) { if (value.username === this.options.username && value.password === this.options.password) { return true; } return false; } const CustomAuthPrompt = AuthPrompt.create(authenticate); const prompt = new CustomAuthPrompt({ name: 'password', message: 'Please enter your password', username: 'rajat-sr', password: '1234567', choices: [ { name: 'username', message: 'username' }, { name: 'password', message: 'password' } ] }); prompt .run() .then(answer => console.log('Authenticated?', answer)) .catch(console.error); ``` #### Related prompts * [BasicAuth Prompt](#basicauth-prompt) *** ### BooleanPrompt The `BooleanPrompt` class is used for creating prompts that display and return a boolean value. ```js const { BooleanPrompt } = require('enquirer'); const prompt = new BooleanPrompt({ header: '========================', message: 'Do you love enquirer?', footer: '========================', }); prompt.run() .then(answer => console.log('Selected:', answer)) .catch(console.error); ``` **Returns**: `boolean` *** ### NumberPrompt The `NumberPrompt` class is used for creating prompts that display and return a numerical value. ```js const { NumberPrompt } = require('enquirer'); const prompt = new NumberPrompt({ header: '************************', message: 'Input the Numbers:', footer: '************************', }); prompt.run() .then(answer => console.log('Numbers are:', answer)) .catch(console.error); ``` **Returns**: `string|number` (number, or number formatted as a string) *** ### StringPrompt The `StringPrompt` class is used for creating prompts that display and return a string value. ```js const { StringPrompt } = require('enquirer'); const prompt = new StringPrompt({ header: '************************', message: 'Input the String:', footer: '************************' }); prompt.run() .then(answer => console.log('String is:', answer)) .catch(console.error); ``` **Returns**: `string` <br> ## ❯ Custom prompts With Enquirer 2.0, custom prompts are easier than ever to create and use. **How do I create a custom prompt?** Custom prompts are created by extending either: * Enquirer's `Prompt` class * one of the built-in [prompts](#-prompts), or * low-level [types](#-types). <!-- Example: HaiKarate Custom Prompt --> ```js const { Prompt } = require('enquirer'); class HaiKarate extends Prompt { constructor(options = {}) { super(options); this.value = options.initial || 0; this.cursorHide(); } up() { this.value++; this.render(); } down() { this.value--; this.render(); } render() { this.clear(); // clear previously rendered prompt from the terminal this.write(`${this.state.message}: ${this.value}`); } } // Use the prompt by creating an instance of your custom prompt class. 
const prompt = new HaiKarate({ message: 'How many sprays do you want?', initial: 10 }); prompt.run() .then(answer => console.log('Sprays:', answer)) .catch(console.error); ``` If you want to be able to specify your prompt by `type` so that it may be used alongside other prompts, you will need to first create an instance of `Enquirer`. ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); ``` Then use the `.register()` method to add your custom prompt. ```js enquirer.register('haikarate', HaiKarate); ``` Now you can do the following when defining "questions". ```js let spritzer = require('cologne-drone'); let answers = await enquirer.prompt([ { type: 'haikarate', name: 'cologne', message: 'How many sprays do you need?', initial: 10, async onSubmit(name, value) { await spritzer.activate(value); //<= activate drone return value; } } ]); ``` <br> ## ❯ Key Bindings ### All prompts These key combinations may be used with all prompts. | **command** | **description** | | -------------------------------- | -------------------------------------- | | <kbd>ctrl</kbd> + <kbd>c</kbd> | Cancel the prompt. | | <kbd>ctrl</kbd> + <kbd>g</kbd> | Reset the prompt to its initial state. | <br> ### Move cursor These combinations may be used on prompts that support user input (eg. [input prompt](#input-prompt), [password prompt](#password-prompt), and [invisible prompt](#invisible-prompt)). | **command** | **description** | | ------------------------------ | ---------------------------------------- | | <kbd>left</kbd> | Move the cursor back one character. | | <kbd>right</kbd> | Move the cursor forward one character. | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move cursor to the start of the line | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move cursor to the end of the line | | <kbd>ctrl</kbd> + <kbd>b</kbd> | Move cursor back one character | | <kbd>ctrl</kbd> + <kbd>f</kbd> | Move cursor forward one character | | <kbd>ctrl</kbd> + <kbd>x</kbd> | Toggle between first and cursor position | <br> ### Edit Input These key combinations may be used on prompts that support user input (eg. [input prompt](#input-prompt), [password prompt](#password-prompt), and [invisible prompt](#invisible-prompt)). | **command** | **description** | | ------------------------------ | ---------------------------------------- | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move cursor to the start of the line | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move cursor to the end of the line | | <kbd>ctrl</kbd> + <kbd>b</kbd> | Move cursor back one character | | <kbd>ctrl</kbd> + <kbd>f</kbd> | Move cursor forward one character | | <kbd>ctrl</kbd> + <kbd>x</kbd> | Toggle between first and cursor position | <br> | **command (Mac)** | **command (Windows)** | **description** | | ----------------------------------- | -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | | <kbd>delete</kbd> | <kbd>backspace</kbd> | Delete one character to the left. | | <kbd>fn</kbd> + <kbd>delete</kbd> | <kbd>delete</kbd> | Delete one character to the right. | | <kbd>option</kbd> + <kbd>up</kbd> | <kbd>alt</kbd> + <kbd>up</kbd> | Scroll to the previous item in history ([Input prompt](#input-prompt) only, when [history is enabled](examples/input/option-history.js)). 
| | <kbd>option</kbd> + <kbd>down</kbd> | <kbd>alt</kbd> + <kbd>down</kbd> | Scroll to the next item in history ([Input prompt](#input-prompt) only, when [history is enabled](examples/input/option-history.js)). | ### Select choices These key combinations may be used on prompts that support _multiple_ choices, such as the [multiselect prompt](#multiselect-prompt), or the [select prompt](#select-prompt) when the `multiple` options is true. | **command** | **description** | | ----------------- | -------------------------------------------------------------------------------------------------------------------- | | <kbd>space</kbd> | Toggle the currently selected choice when `options.multiple` is true. | | <kbd>number</kbd> | Move the pointer to the choice at the given index. Also toggles the selected choice when `options.multiple` is true. | | <kbd>a</kbd> | Toggle all choices to be enabled or disabled. | | <kbd>i</kbd> | Invert the current selection of choices. | | <kbd>g</kbd> | Toggle the current choice group. | <br> ### Hide/show choices | **command** | **description** | | ------------------------------- | ---------------------------------------------- | | <kbd>fn</kbd> + <kbd>up</kbd> | Decrease the number of visible choices by one. | | <kbd>fn</kbd> + <kbd>down</kbd> | Increase the number of visible choices by one. | <br> ### Move/lock Pointer | **command** | **description** | | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------- | | <kbd>number</kbd> | Move the pointer to the choice at the given index. Also toggles the selected choice when `options.multiple` is true. | | <kbd>up</kbd> | Move the pointer up. | | <kbd>down</kbd> | Move the pointer down. | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move the pointer to the first _visible_ choice. | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move the pointer to the last _visible_ choice. | | <kbd>shift</kbd> + <kbd>up</kbd> | Scroll up one choice without changing pointer position (locks the pointer while scrolling). | | <kbd>shift</kbd> + <kbd>down</kbd> | Scroll down one choice without changing pointer position (locks the pointer while scrolling). | <br> | **command (Mac)** | **command (Windows)** | **description** | | -------------------------------- | --------------------- | ---------------------------------------------------------- | | <kbd>fn</kbd> + <kbd>left</kbd> | <kbd>home</kbd> | Move the pointer to the first choice in the choices array. | | <kbd>fn</kbd> + <kbd>right</kbd> | <kbd>end</kbd> | Move the pointer to the last choice in the choices array. | <br> ## ❯ Release History Please see [CHANGELOG.md](CHANGELOG.md). ## ❯ Performance ### System specs MacBook Pro, Intel Core i7, 2.5 GHz, 16 GB. ### Load time Time it takes for the module to load the first time (average of 3 runs): ``` enquirer: 4.013ms inquirer: 286.717ms ``` <br> ## ❯ About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Todo We're currently working on documentation for the following items. Please star and watch the repository for updates! 
* [ ] Customizing symbols * [ ] Customizing styles (palette) * [ ] Customizing rendered input * [ ] Customizing returned values * [ ] Customizing key bindings * [ ] Question validation * [ ] Choice validation * [ ] Skipping questions * [ ] Async choices * [ ] Async timers: loaders, spinners and other animations * [ ] Links to examples </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` ```sh $ yarn && yarn test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> #### Contributors | **Commits** | **Contributor** | | --- | --- | | 283 | [jonschlinkert](https://github.com/jonschlinkert) | | 82 | [doowb](https://github.com/doowb) | | 32 | [rajat-sr](https://github.com/rajat-sr) | | 20 | [318097](https://github.com/318097) | | 15 | [g-plane](https://github.com/g-plane) | | 12 | [pixelass](https://github.com/pixelass) | | 5 | [adityavyas611](https://github.com/adityavyas611) | | 5 | [satotake](https://github.com/satotake) | | 3 | [tunnckoCore](https://github.com/tunnckoCore) | | 3 | [Ovyerus](https://github.com/Ovyerus) | | 3 | [sw-yx](https://github.com/sw-yx) | | 2 | [DanielRuf](https://github.com/DanielRuf) | | 2 | [GabeL7r](https://github.com/GabeL7r) | | 1 | [AlCalzone](https://github.com/AlCalzone) | | 1 | [hipstersmoothie](https://github.com/hipstersmoothie) | | 1 | [danieldelcore](https://github.com/danieldelcore) | | 1 | [ImgBotApp](https://github.com/ImgBotApp) | | 1 | [jsonkao](https://github.com/jsonkao) | | 1 | [knpwrs](https://github.com/knpwrs) | | 1 | [yeskunall](https://github.com/yeskunall) | | 1 | [mischah](https://github.com/mischah) | | 1 | [renarsvilnis](https://github.com/renarsvilnis) | | 1 | [sbugert](https://github.com/sbugert) | | 1 | [stephencweiss](https://github.com/stephencweiss) | | 1 | [skellock](https://github.com/skellock) | | 1 | [whxaxes](https://github.com/whxaxes) | #### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) #### Credit Thanks to [derhuerst](https://github.com/derhuerst), creator of prompt libraries such as [prompt-skeleton](https://github.com/derhuerst/prompt-skeleton), which influenced some of the concepts we used in our prompts. #### License Copyright © 2018-present, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). # minizlib A fast zlib stream built on [minipass](http://npm.im/minipass) and Node.js's zlib binding. This module was created to serve the needs of [node-tar](http://npm.im/tar) and [minipass-fetch](http://npm.im/minipass-fetch). Brotli is supported in versions of node with a Brotli binding. ## How does this differ from the streams in `require('zlib')`? First, there are no convenience methods to compress or decompress a buffer. If you want those, use the built-in `zlib` module. This is only streams. 
That being said, Minipass streams to make it fairly easy to use as one-liners: `new zlib.Deflate().end(data).read()` will return the deflate compressed result. This module compresses and decompresses the data as fast as you feed it in. It is synchronous, and runs on the main process thread. Zlib and Brotli operations can be high CPU, but they're very fast, and doing it this way means much less bookkeeping and artificial deferral. Node's built in zlib streams are built on top of `stream.Transform`. They do the maximally safe thing with respect to consistent asynchrony, buffering, and backpressure. See [Minipass](http://npm.im/minipass) for more on the differences between Node.js core streams and Minipass streams, and the convenience methods provided by that class. ## Classes - Deflate - Inflate - Gzip - Gunzip - DeflateRaw - InflateRaw - Unzip - BrotliCompress (Node v10 and higher) - BrotliDecompress (Node v10 and higher) ## USAGE ```js const zlib = require('minizlib') const input = sourceOfCompressedData() const decode = new zlib.BrotliDecompress() const output = whereToWriteTheDecodedData() input.pipe(decode).pipe(output) ``` ## REPRODUCIBLE BUILDS To create reproducible gzip compressed files across different operating systems, set `portable: true` in the options. This causes minizlib to set the `OS` indicator in byte 9 of the extended gzip header to `0xFF` for 'unknown'. argparse ======== [![Build Status](https://secure.travis-ci.org/nodeca/argparse.svg?branch=master)](http://travis-ci.org/nodeca/argparse) [![NPM version](https://img.shields.io/npm/v/argparse.svg)](https://www.npmjs.org/package/argparse) CLI arguments parser for node.js. Javascript port of python's [argparse](http://docs.python.org/dev/library/argparse.html) module (original version 3.2). That's a full port, except some very rare options, recorded in issue tracker. **NB. Difference with original.** - Method names changed to camelCase. See [generated docs](http://nodeca.github.com/argparse/). - Use `defaultValue` instead of `default`. - Use `argparse.Const.REMAINDER` instead of `argparse.REMAINDER`, and similarly for constant values `OPTIONAL`, `ZERO_OR_MORE`, and `ONE_OR_MORE` (aliases for `nargs` values `'?'`, `'*'`, `'+'`, respectively), and `SUPPRESS`. Example ======= test.js file: ```javascript #!/usr/bin/env node 'use strict'; var ArgumentParser = require('../lib/argparse').ArgumentParser; var parser = new ArgumentParser({ version: '0.0.1', addHelp:true, description: 'Argparse example' }); parser.addArgument( [ '-f', '--foo' ], { help: 'foo bar' } ); parser.addArgument( [ '-b', '--bar' ], { help: 'bar foo' } ); parser.addArgument( '--baz', { help: 'baz bar' } ); var args = parser.parseArgs(); console.dir(args); ``` Display help: ``` $ ./test.js -h usage: example.js [-h] [-v] [-f FOO] [-b BAR] [--baz BAZ] Argparse example Optional arguments: -h, --help Show this help message and exit. -v, --version Show program's version number and exit. -f FOO, --foo FOO foo bar -b BAR, --bar BAR bar foo --baz BAZ baz bar ``` Parse arguments: ``` $ ./test.js -f=3 --bar=4 --baz 5 { foo: '3', bar: '4', baz: '5' } ``` More [examples](https://github.com/nodeca/argparse/tree/master/examples). ArgumentParser objects ====================== ``` new ArgumentParser({parameters hash}); ``` Creates a new ArgumentParser object. **Supported params:** - ```description``` - Text to display before the argument help. - ```epilog``` - Text to display after the argument help. - ```addHelp``` - Add a -h/–help option to the parser. 
(default: true) - ```argumentDefault``` - Set the global default value for arguments. (default: null) - ```parents``` - A list of ArgumentParser objects whose arguments should also be included. - ```prefixChars``` - The set of characters that prefix optional arguments. (default: ‘-‘) - ```formatterClass``` - A class for customizing the help output. - ```prog``` - The name of the program (default: `path.basename(process.argv[1])`) - ```usage``` - The string describing the program usage (default: generated) - ```conflictHandler``` - Usually unnecessary, defines strategy for resolving conflicting optionals. **Not supported yet** - ```fromfilePrefixChars``` - The set of characters that prefix files from which additional arguments should be read. Details in [original ArgumentParser guide](http://docs.python.org/dev/library/argparse.html#argumentparser-objects) addArgument() method ==================== ``` ArgumentParser.addArgument(name or flag or [name] or [flags...], {options}) ``` Defines how a single command-line argument should be parsed. - ```name or flag or [name] or [flags...]``` - Either a positional name (e.g., `'foo'`), a single option (e.g., `'-f'` or `'--foo'`), an array of a single positional name (e.g., `['foo']`), or an array of options (e.g., `['-f', '--foo']`). Options: - ```action``` - The basic type of action to be taken when this argument is encountered at the command line. - ```nargs```- The number of command-line arguments that should be consumed. - ```constant``` - A constant value required by some action and nargs selections. - ```defaultValue``` - The value produced if the argument is absent from the command line. - ```type``` - The type to which the command-line argument should be converted. - ```choices``` - A container of the allowable values for the argument. - ```required``` - Whether or not the command-line option may be omitted (optionals only). - ```help``` - A brief description of what the argument does. - ```metavar``` - A name for the argument in usage messages. - ```dest``` - The name of the attribute to be added to the object returned by parseArgs(). Details in [original add_argument guide](http://docs.python.org/dev/library/argparse.html#the-add-argument-method) Action (some details) ================ ArgumentParser objects associate command-line arguments with actions. These actions can do just about anything with the command-line arguments associated with them, though most actions simply add an attribute to the object returned by parseArgs(). The action keyword argument specifies how the command-line arguments should be handled. The supported actions are: - ```store``` - Just stores the argument’s value. This is the default action. - ```storeConst``` - Stores value, specified by the const keyword argument. (Note that the const keyword argument defaults to the rather unhelpful None.) The 'storeConst' action is most commonly used with optional arguments, that specify some sort of flag. - ```storeTrue``` and ```storeFalse``` - Stores values True and False respectively. These are special cases of 'storeConst'. - ```append``` - Stores a list, and appends each argument value to the list. This is useful to allow an option to be specified multiple times. - ```appendConst``` - Stores a list, and appends value, specified by the const keyword argument to the list. (Note, that the const keyword argument defaults is None.) The 'appendConst' action is typically used when multiple arguments need to store constants to the same list. 
- ```count``` - Counts the number of times a keyword argument occurs. For example, used for increasing verbosity levels. - ```help``` - Prints a complete help message for all the options in the current parser and then exits. By default a help action is automatically added to the parser. See ArgumentParser for details of how the output is created. - ```version``` - Prints version information and exit. Expects a `version=` keyword argument in the addArgument() call. Details in [original action guide](http://docs.python.org/dev/library/argparse.html#action) Sub-commands ============ ArgumentParser.addSubparsers() Many programs split their functionality into a number of sub-commands, for example, the svn program can invoke sub-commands like `svn checkout`, `svn update`, and `svn commit`. Splitting up functionality this way can be a particularly good idea when a program performs several different functions which require different kinds of command-line arguments. `ArgumentParser` supports creation of such sub-commands with `addSubparsers()` method. The `addSubparsers()` method is normally called with no arguments and returns an special action object. This object has a single method `addParser()`, which takes a command name and any `ArgumentParser` constructor arguments, and returns an `ArgumentParser` object that can be modified as usual. Example: sub_commands.js ```javascript #!/usr/bin/env node 'use strict'; var ArgumentParser = require('../lib/argparse').ArgumentParser; var parser = new ArgumentParser({ version: '0.0.1', addHelp:true, description: 'Argparse examples: sub-commands', }); var subparsers = parser.addSubparsers({ title:'subcommands', dest:"subcommand_name" }); var bar = subparsers.addParser('c1', {addHelp:true}); bar.addArgument( [ '-f', '--foo' ], { action: 'store', help: 'foo3 bar3' } ); var bar = subparsers.addParser( 'c2', {aliases:['co'], addHelp:true} ); bar.addArgument( [ '-b', '--bar' ], { action: 'store', type: 'int', help: 'foo3 bar3' } ); var args = parser.parseArgs(); console.dir(args); ``` Details in [original sub-commands guide](http://docs.python.org/dev/library/argparse.html#sub-commands) Contributors ============ - [Eugene Shkuropat](https://github.com/shkuropat) - [Paul Jacobson](https://github.com/hpaulj) [others](https://github.com/nodeca/argparse/graphs/contributors) License ======= Copyright (c) 2012 [Vitaly Puzrin](https://github.com/puzrin). Released under the MIT license. See [LICENSE](https://github.com/nodeca/argparse/blob/master/LICENSE) for details. [![Build Status](https://api.travis-ci.org/adaltas/node-csv-stringify.svg)](https://travis-ci.org/#!/adaltas/node-csv-stringify) [![NPM](https://img.shields.io/npm/dm/csv-stringify)](https://www.npmjs.com/package/csv-stringify) [![NPM](https://img.shields.io/npm/v/csv-stringify)](https://www.npmjs.com/package/csv-stringify) This package is a stringifier converting records into a CSV text and implementing the Node.js [`stream.Transform` API](https://nodejs.org/api/stream.html). It also provides the easier synchronous and callback-based APIs for conveniency. It is both extremely easy to use and powerful. It was first released in 2010 and is tested against big data sets by a large community. 
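Because the stringifier implements `stream.Transform`, records can also be written to it one at a time instead of handing the whole dataset to the callback API. Here is a minimal sketch of that stream-mode usage; it mirrors the records from the callback example in the Usage section below, and the event-handler wiring shown is only one way to consume the stream:

```javascript
const stringify = require('csv-stringify')

// stringify(options) with no records returns a Transform stream
const stringifier = stringify({ delimiter: ',' })
const chunks = []

stringifier.on('readable', function () {
  let row
  while ((row = stringifier.read()) !== null) {
    chunks.push(row)
  }
})
stringifier.on('error', function (err) {
  console.error(err.message)
})
stringifier.on('finish', function () {
  console.log(chunks.join('')) // "1,2,3,4\na,b,c,d\n"
})

// write one record at a time, then end the stream
stringifier.write(['1', '2', '3', '4'])
stringifier.write(['a', 'b', 'c', 'd'])
stringifier.end()
```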
## Documentation * [Project homepage](http://csv.js.org/stringify/) * [API](http://csv.js.org/stringify/api/) * [Options](http://csv.js.org/stringify/options/) * [Examples](http://csv.js.org/stringify/examples/) ## Main features * Follow the Node.js streaming API * Simplicity with the optional callback API * Support for custom formatters, delimiters, quotes, escape characters and header * Support big datasets * Complete test coverage and samples for inspiration * Only 1 external dependency * to be used conjointly with `csv-generate`, `csv-parse` and `stream-transform` * MIT License ## Usage The module is built on the Node.js Stream API. For the sake of simplicity, a simple callback API is also provided. To give you a quick look, here's an example of the callback API: ```javascript const stringify = require('csv-stringify') const assert = require('assert') // import stringify from 'csv-stringify' // import assert from 'assert/strict' const input = [ [ '1', '2', '3', '4' ], [ 'a', 'b', 'c', 'd' ] ] stringify(input, function(err, output) { const expected = '1,2,3,4\na,b,c,d\n' assert.strictEqual(output, expected, `output.should.eql ${expected}`) console.log("Passed.", output) }) ``` ## Development Tests are executed with mocha. To install it, run `npm install` followed by `npm test`. It will install mocha and its dependencies in your project "node_modules" directory and run the test suite. The tests run against the CoffeeScript source files. To generate the JavaScript files, run `npm run build`. The test suite is run online with [Travis](https://travis-ci.org/#!/adaltas/node-csv-stringify). See the [Travis definition file](https://github.com/adaltas/node-csv-stringify/blob/master/.travis.yml) to view the tested Node.js version. ## Contributors * David Worms: <https://github.com/wdavidw> [csv_home]: https://github.com/adaltas/node-csv [stream_transform]: http://nodejs.org/api/stream.html#stream_class_stream_transform [examples]: http://csv.js.org/stringify/examples/ [csv]: https://github.com/adaltas/node-csv discontinuous-range =================== ``` DiscontinuousRange(1, 10).subtract(4, 6); // [ 1-3, 7-10 ] ``` [![Build Status](https://travis-ci.org/dtudury/discontinuous-range.png)](https://travis-ci.org/dtudury/discontinuous-range) this is a pretty simple module, but it exists to service another project so this'll be pretty lacking documentation. reading the test to see how this works may help. otherwise, here's an example that I think pretty much sums it up ###Example ``` var all_numbers = new DiscontinuousRange(1, 100); var bad_numbers = DiscontinuousRange(13).add(8).add(60,80); var good_numbers = all_numbers.clone().subtract(bad_numbers); console.log(good_numbers.toString()); //[ 1-7, 9-12, 14-59, 81-100 ] var random_good_number = good_numbers.index(Math.floor(Math.random() * good_numbers.length)); ``` # isexe Minimal module to check if a file is executable, and a normal file. Uses `fs.stat` and tests against the `PATHEXT` environment variable on Windows. 
## USAGE

```javascript
var isexe = require('isexe')
isexe('some-file-name', function (err, isExe) {
  if (err) {
    console.error('probably file does not exist or something', err)
  } else if (isExe) {
    console.error('this thing can be run')
  } else {
    console.error('cannot be run')
  }
})

// same thing but synchronous, throws errors
var isExe = isexe.sync('some-file-name')

// treat errors as just "not executable"
isexe('maybe-missing-file', { ignoreErrors: true }, callback)
var isExe = isexe.sync('maybe-missing-file', { ignoreErrors: true })
```

## API

### `isexe(path, [options], [callback])`

Check if the path is executable. If no callback is provided, and a global `Promise` object is available, then a Promise will be returned.

Will raise whatever errors may be raised by `fs.stat`, unless `options.ignoreErrors` is set to true.

### `isexe.sync(path, [options])`

Same as `isexe` but returns the value and throws any errors raised.

### Options

* `ignoreErrors` Treat all errors as "no, this is not executable", but don't raise them.
* `uid` Number to use as the user id
* `gid` Number to use as the group id
* `pathExt` List of path extensions to use instead of `PATHEXT` environment variable on Windows.

# file-entry-cache
> Super simple cache for file metadata, useful for processes that work on a given series of files
> and that only need to repeat the job on the files that changed since the previous run of the process

[![NPM Version](http://img.shields.io/npm/v/file-entry-cache.svg?style=flat)](https://npmjs.org/package/file-entry-cache)
[![Build Status](http://img.shields.io/travis/royriojas/file-entry-cache.svg?style=flat)](https://travis-ci.org/royriojas/file-entry-cache)

## install

```bash
npm i --save file-entry-cache
```

## Usage

The module exposes two functions, `create` and `createFromFile`.

## `create(cacheName, [directory, useCheckSum])`
- **cacheName**: the name of the cache to be created
- **directory**: Optional. The directory to load the cache from
- **useCheckSum**: Whether to use an md5 checksum to verify that a file changed. If false, the file's mtime and size are used instead.

## `createFromFile(pathToCache, [useCheckSum])`
- **pathToCache**: the path to the cache file (this combines the cache name and directory)
- **useCheckSum**: Whether to use an md5 checksum to verify that a file changed. If false, the file's mtime and size are used instead.

```js
// loads the cache; if one does not exist for the given
// id, a new one will be prepared to be created
var fileEntryCache = require('file-entry-cache');

var cache = fileEntryCache.create('testCache');

var files = expand('../fixtures/*.txt');

// the first time this method is called, it will return all the files
var oFiles = cache.getUpdatedFiles(files);

// this will persist the cache to disk, checking each file's stats and
// updating the meta attributes `size` and `mtime`.
// custom fields could also be added to the meta object and will be persisted
// in order to retrieve them later
cache.reconcile();

// use this if you want the non visited file entries to be kept in the cache
// for more than one execution
//
// cache.reconcile( true /* noPrune */)

// on a second run
var cache2 = fileEntryCache.create('testCache');

// will now return only the files that were modified, or none
// if no files were modified prior to the execution of this function
var oFiles = cache2.getUpdatedFiles(files);

// if you want to prevent a file from being considered non modified
// (something useful if a file failed some sort of validation)
// you can remove its entry from the cache like this
cache.removeEntry('path/to/file'); // the path should be the same path of the file received on `getUpdatedFiles`
// that will effectively make the file appear as modified again until the validation passes. In that
// case you should not remove it from the cache

// if you need all the files, so you can determine what to do with the changed ones,
// you can call
var oFiles = cache.normalizeEntries(files);

// oFiles will be an array of objects like the following
entry = {
  key: 'some/name/file',  // the path to the file
  changed: true,          // if the file was changed since previous run
  meta: {
    size: 3242,           // the size of the file
    mtime: 231231231,     // the modification time of the file
    data: {}              // some extra field stored for this file (useful to save the result of a transformation on the file)
  }
}
```

## Motivation for this module

I needed a super simple and dumb **in-memory cache** with optional disk persistence (write-back cache) so that a script that beautifies files with `esformatter` would execute only on the files that changed since the last run. In doing so, the process of beautifying files was reduced from several seconds to a small fraction of a second.

This module uses [flat-cache](https://www.npmjs.com/package/flat-cache), a super simple `key/value` cache store with optional file persistence.

The main idea is to read the files when the task begins, apply the required transforms, and, if the process succeeds, store the new state of the files. The next time `getChangedFiles` is requested, it will return only the files that were modified, making the process end faster.

This module can also be used by processes that modify files by applying a transform; in that case the result of the transform can be stored in the `meta` field of the entries. Anything added to the meta field will be persisted. Those processes won't need to call `getChangedFiles`; they will instead call `normalizeEntries`, which returns the entries with a `changed` field that can be used to determine whether the file changed. If it was not changed, the stored transformed data can be used instead of applying the transformation again, saving time when only a few files changed.

In the worst case scenario all the files will be processed. In the best case scenario only a few of them will be processed.

## Important notes

- The values set on the meta attribute of the entries should be `stringify-able` if possible; flat-cache uses `circular-json` to try to persist circular structures, but this should be considered experimental. The best results are always obtained with non-circular values.
- All the changes to the cache state are done in memory first and only persisted after `reconcile` is called.
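The `createFromFile` variant documented above behaves like `create`, except that it is pointed at an explicit cache file. A minimal sketch, assuming a hypothetical cache path and file list:

```js
var fileEntryCache = require('file-entry-cache');

// load (or lazily create) a cache backed by an explicit file,
// using md5 checksums instead of mtime/size to detect changes
var cache = fileEntryCache.createFromFile('/tmp/.my-tool-cache', true);

// only the files that changed since the last reconciled run are returned
var changedFiles = cache.getUpdatedFiles(['src/a.js', 'src/b.js']);

// ... do the expensive work only on changedFiles here ...

// persist the new state so the next run skips these files again
cache.reconcile();
```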
## License MIT near ================== This app was initialized with [create-near-app] Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `npm install` 3. Run the local development server: `npm run dev` (see `package.json` for a full list of `scripts` you can run with `npm`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. Exploring The Code ================== 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2. The frontend code lives in the `/src` folder. `/src/index.html` is a great place to start exploring. Note that it loads in `/src/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `npm run test`. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run dev`, your smart contract gets deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near.YOUR-NAME.testnet' Step 3: deploy! --------------- One command: npm run deploy As you can see in `package.json`, this does two things: 1. builds & deploys smart contract to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/docs/concepts/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages Shims used when bundling asc for browser usage. # binary-install Install .tar.gz binary applications via npm ## Usage This library provides a single class `Binary` that takes a download url and some optional arguments. You **must** provide either `name` or `installDirectory` when creating your `Binary`. | option | decription | | ---------------- | --------------------------------------------- | | name | The name of your binary | | installDirectory | A path to the directory to install the binary | If an `installDirectory` is not provided, the binary will be installed at your OS specific config directory. On MacOS it defaults to `~/Library/Preferences/${name}-nodejs` After your `Binary` has been created, you can run `.install()` to install the binary, and `.run()` to run it. ### Example This is meant to be used as a library - create your `Binary` with your desired options, then call `.install()` in the `postinstall` of your `package.json`, `.run()` in the `bin` section of your `package.json`, and `.uninstall()` in the `preuninstall` section of your `package.json`. See [this example project](/example) to see how to create an npm package that installs and runs a binary using the Github releases API. # prelude.ls [![Build Status](https://travis-ci.org/gkz/prelude-ls.png?branch=master)](https://travis-ci.org/gkz/prelude-ls) is a functionally oriented utility library. It is powerful and flexible. Almost all of its functions are curried. It is written in, and is the recommended base library for, <a href="http://livescript.net">LiveScript</a>. See **[the prelude.ls site](http://preludels.com)** for examples, a reference, and more. You can install via npm `npm install prelude-ls` ### Development `make test` to test `make build` to build `lib` from `src` `make build-browser` to build browser versions # Visitor utilities for AssemblyScript Compiler transformers ## Example ### List Fields The transformer: ```ts import { ClassDeclaration, FieldDeclaration, MethodDeclaration, } from "../../as"; import { ClassDecorator, registerDecorator } from "../decorator"; import { toString } from "../utils"; class ListMembers extends ClassDecorator { visitFieldDeclaration(node: FieldDeclaration): void { if (!node.name) console.log(toString(node) + "\n"); const name = toString(node.name); const _type = toString(node.type!); this.stdout.write(name + ": " + _type + "\n"); } visitMethodDeclaration(node: MethodDeclaration): void { const name = toString(node.name); if (name == "constructor") { return; } const sig = toString(node.signature); this.stdout.write(name + ": " + sig + "\n"); } visitClassDeclaration(node: ClassDeclaration): void { this.visit(node.members); } get name(): string { return "list"; } } export = registerDecorator(new ListMembers()); ``` assembly/foo.ts: ```ts @list class Foo { a: u8; b: bool; i: i32; } ``` And then compile with `--transform` flag: ``` asc assembly/foo.ts --transform ./dist/examples/list --noEmit ``` Which prints the following to the console: ``` a: u8 b: bool i: i32 ``` # fast-deep-equal The fastest deep equal with ES6 Map, Set and Typed arrays support. 
[![Build Status](https://travis-ci.org/epoberezkin/fast-deep-equal.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-deep-equal)
[![npm](https://img.shields.io/npm/v/fast-deep-equal.svg)](https://www.npmjs.com/package/fast-deep-equal)
[![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-deep-equal/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-deep-equal?branch=master)

## Install

```bash
npm install fast-deep-equal
```

## Features

- ES5 compatible
- works in node.js (8+) and browsers (IE9+)
- checks equality of Date and RegExp objects by value.

ES6 equal (`require('fast-deep-equal/es6')`) also supports:

- Maps
- Sets
- Typed arrays

## Usage

```javascript
var equal = require('fast-deep-equal');
console.log(equal({foo: 'bar'}, {foo: 'bar'})); // true
```

To support equality checks of ES6 Maps, Sets and Typed arrays, use:

```javascript
var equal = require('fast-deep-equal/es6');
console.log(equal(new Int16Array([1, 2]), new Int16Array([1, 2]))); // true
```

To use with React (avoiding the traversal of React elements' _owner property that contains circular references and is not needed when comparing the elements - borrowed from [react-fast-compare](https://github.com/FormidableLabs/react-fast-compare)):

```javascript
var equal = require('fast-deep-equal/react');
var equal = require('fast-deep-equal/es6/react');
```

## Performance benchmark

Node.js v12.6.0:

```
fast-deep-equal x 261,950 ops/sec ±0.52% (89 runs sampled)
fast-deep-equal/es6 x 212,991 ops/sec ±0.34% (92 runs sampled)
fast-equals x 230,957 ops/sec ±0.83% (85 runs sampled)
nano-equal x 187,995 ops/sec ±0.53% (88 runs sampled)
shallow-equal-fuzzy x 138,302 ops/sec ±0.49% (90 runs sampled)
underscore.isEqual x 74,423 ops/sec ±0.38% (89 runs sampled)
lodash.isEqual x 36,637 ops/sec ±0.72% (90 runs sampled)
deep-equal x 2,310 ops/sec ±0.37% (90 runs sampled)
deep-eql x 35,312 ops/sec ±0.67% (91 runs sampled)
ramda.equals x 12,054 ops/sec ±0.40% (91 runs sampled)
util.isDeepStrictEqual x 46,440 ops/sec ±0.43% (90 runs sampled)
assert.deepStrictEqual x 456 ops/sec ±0.71% (88 runs sampled)

The fastest is fast-deep-equal
```

To run the benchmark (requires node.js 6+):

```bash
npm run benchmark
```

__Please note__: this benchmark runs against the available test cases. To choose the most performant library for your application, it is recommended to benchmark against your data and to NOT expect this benchmark to reflect the performance difference in your application.

## Enterprise support

The fast-deep-equal package is part of the [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-deep-equal?utm_source=npm-fast-deep-equal&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides centralised commercial support to open-source software users, in addition to the support provided by software maintainers.

## Security contact

To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues.
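Complementing the typed-array example in the Usage section above, the same `es6` build also compares `Map` and `Set` instances by value. A small illustration:

```javascript
var equal = require('fast-deep-equal/es6');

console.log(equal(new Map([['foo', 1]]), new Map([['foo', 1]]))); // true
console.log(equal(new Set([1, 2, 3]), new Set([1, 2, 3])));       // true
console.log(equal(new Set([1, 2, 3]), new Set([1, 2, 4])));       // false
```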
## License [MIT](https://github.com/epoberezkin/fast-deep-equal/blob/master/LICENSE) # y18n [![Build Status][travis-image]][travis-url] [![Coverage Status][coveralls-image]][coveralls-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js var __ = require('y18n').__ console.log(__('my awesome string %s', 'foo')) ``` output: `my awesome string foo` _using tagged template literals_ ```js var __ = require('y18n').__ var str = 'foo' console.log(__`my awesome string ${str}`) ``` output: `my awesome string foo` _pluralization support:_ ```js var __n = require('y18n').__n console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')) ``` output: `2 fishes foo` ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## License ISC [travis-url]: https://travis-ci.org/yargs/y18n [travis-image]: https://img.shields.io/travis/yargs/y18n.svg [coveralls-url]: https://coveralls.io/github/yargs/y18n [coveralls-image]: https://img.shields.io/coveralls/yargs/y18n.svg [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard <p align="center"> <a href="https://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # glob-parent [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Azure Pipelines Build Status][azure-pipelines-image]][azure-pipelines-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] Extract the non-magic parent path from a glob string. 
## Usage ```js var globParent = require('glob-parent'); globParent('path/to/*.js'); // 'path/to' globParent('/root/path/to/*.js'); // '/root/path/to' globParent('/*.js'); // '/' globParent('*.js'); // '.' globParent('**/*.js'); // '.' globParent('path/{to,from}'); // 'path' globParent('path/!(to|from)'); // 'path' globParent('path/?(to|from)'); // 'path' globParent('path/+(to|from)'); // 'path' globParent('path/*(to|from)'); // 'path' globParent('path/@(to|from)'); // 'path' globParent('path/**/*'); // 'path' // if provided a non-glob path, returns the nearest dir globParent('path/foo/bar.js'); // 'path/foo' globParent('path/foo/'); // 'path/foo' globParent('path/foo'); // 'path' (see issue #3 for details) ``` ## API ### `globParent(maybeGlobString, [options])` Takes a string and returns the part of the path before the glob begins. Be aware of Escaping rules and Limitations below. #### options ```js { // Disables the automatic conversion of slashes for Windows flipBackslashes: true } ``` ## Escaping The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters: - `?` (question mark) unless used as a path segment alone - `*` (asterisk) - `|` (pipe) - `(` (opening parenthesis) - `)` (closing parenthesis) - `{` (opening curly brace) - `}` (closing curly brace) - `[` (opening bracket) - `]` (closing bracket) **Example** ```js globParent('foo/[bar]/') // 'foo' globParent('foo/\\[bar]/') // 'foo/[bar]' ``` ## Limitations ### Braces & Brackets This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with [expand-braces] and/or [expand-brackets]. ### Windows Backslashes are not valid path separators for globs. If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators). This cannot be used in conjunction with escape characters. ```js // BAD globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)' // GOOD globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)' ``` If you are using escape characters for a pattern without path parts (i.e. relative to `cwd`), prefix with `./` to avoid confusing glob-parent. ```js // BAD globParent('foo \\[bar]') // 'foo ' globParent('foo \\[bar]*') // 'foo ' // GOOD globParent('./foo \\[bar]') // 'foo [bar]' globParent('./foo \\[bar]*') // '.' 
``` ## License ISC [expand-braces]: https://github.com/jonschlinkert/expand-braces [expand-brackets]: https://github.com/jonschlinkert/expand-brackets [downloads-image]: https://img.shields.io/npm/dm/glob-parent.svg [npm-url]: https://www.npmjs.com/package/glob-parent [npm-image]: https://img.shields.io/npm/v/glob-parent.svg [azure-pipelines-url]: https://dev.azure.com/gulpjs/gulp/_build/latest?definitionId=2&branchName=master [azure-pipelines-image]: https://dev.azure.com/gulpjs/gulp/_apis/build/status/glob-parent?branchName=master [travis-url]: https://travis-ci.org/gulpjs/glob-parent [travis-image]: https://img.shields.io/travis/gulpjs/glob-parent.svg?label=travis-ci [appveyor-url]: https://ci.appveyor.com/project/gulpjs/glob-parent [appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/glob-parent.svg?label=appveyor [coveralls-url]: https://coveralls.io/r/gulpjs/glob-parent [coveralls-image]: https://img.shields.io/coveralls/gulpjs/glob-parent/master.svg [gitter-url]: https://gitter.im/gulpjs/gulp [gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg A JSON with color names and its values. Based on http://dev.w3.org/csswg/css-color/#named-colors. [![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/) ```js var colors = require('color-name'); colors.red //[255,0,0] ``` <a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a> # color-convert [![Build Status](https://travis-ci.org/Qix-/color-convert.svg?branch=master)](https://travis-ci.org/Qix-/color-convert) Color-convert is a color conversion library for JavaScript and node. It converts all ways between `rgb`, `hsl`, `hsv`, `hwb`, `cmyk`, `ansi`, `ansi16`, `hex` strings, and CSS `keyword`s (will round to closest): ```js var convert = require('color-convert'); convert.rgb.hsl(140, 200, 100); // [96, 48, 59] convert.keyword.rgb('blue'); // [0, 0, 255] var rgbChannels = convert.rgb.channels; // 3 var cmykChannels = convert.cmyk.channels; // 4 var ansiChannels = convert.ansi16.channels; // 1 ``` # Install ```console $ npm install color-convert ``` # API Simply get the property of the _from_ and _to_ conversion that you're looking for. All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on `.raw` to the function. All 'from' functions have a hidden property called `.channels` that indicates the number of channels the function expects (not including alpha). ```js var convert = require('color-convert'); // Hex to LAB convert.hex.lab('DEADBF'); // [ 76, 21, -2 ] convert.hex.lab.raw('DEADBF'); // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ] // RGB to CMYK convert.rgb.cmyk(167, 255, 4); // [ 35, 0, 98, 0 ] convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ] ``` ### Arrays All functions that accept multiple arguments also support passing an array. Note that this does **not** apply to functions that convert from a color that only requires one value (e.g. `keyword`, `ansi256`, `hex`, etc.) ```js var convert = require('color-convert'); convert.rgb.hex(123, 45, 67); // '7B2D43' convert.rgb.hex([123, 45, 67]); // '7B2D43' ``` ## Routing Conversions that don't have an _explicitly_ defined conversion (in [conversions.js](conversions.js)), but can be converted by means of sub-conversions (e.g. XYZ -> **RGB** -> CMYK), are automatically routed together. 
This allows just about any color model supported by `color-convert` to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> **LAB** -> **XYZ** -> **RGB** -> Hex).

Keep in mind that extensive conversions _may_ result in a loss of precision, and exist only to be complete. For a list of "direct" (single-step) conversions, see [conversions.js](conversions.js).

# Contribute

If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request.

# License

Copyright &copy; 2011-2016, Heather Arthur and Josh Junon. Licensed under the [MIT License](LICENSE).

# eslint-visitor-keys

[![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys)
[![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys)
[![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys)
[![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys)

Constants and utilities about visitor keys to traverse AST.

## 💿 Installation

Use [npm] to install.

```bash
$ npm install eslint-visitor-keys
```

### Requirements

- [Node.js] 10.0.0 or later.

## 📖 Usage

```js
const evk = require("eslint-visitor-keys")
```

### evk.KEYS

> type: `{ [type: string]: string[] | undefined }`

Visitor keys. These keys are frozen.

This is an object whose keys are the types of [ESTree] nodes and whose values are arrays of the property names which have child nodes.

For example:

```
console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"]
```

### evk.getKeys(node)

> type: `(node: object) => string[]`

Get the visitor keys of a given AST node.

This is similar to `Object.keys(node)` of ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`.

This will be used to traverse unknown nodes.

For example:

```
const node = {
    type: "AssignmentExpression",
    left: { type: "Identifier", name: "foo" },
    right: { type: "Literal", value: 0 }
}
console.log(evk.getKeys(node)) // → ["type", "left", "right"]
```

### evk.unionWith(additionalKeys)

> type: `(additionalKeys: object) => { [type: string]: string[] | undefined }`

Make the union set with `evk.KEYS` and the given keys.

- The order of keys is: `additionalKeys` comes first, then `evk.KEYS` is concatenated after it.
- Duplicate keys are removed, keeping the first occurrence.

For example:

```
console.log(evk.unionWith({ MethodDefinition: ["decorators"] })) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... }
```

## 📰 Change log

See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases).

## 🍻 Contributing

Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/).

### Development commands

- `npm test` runs tests and measures code coverage.
- `npm run lint` checks source code with ESLint.
- `npm run coverage` opens the code coverage report of the previous test with your default browser.
- `npm run release` publishes this package to the [npm] registry.
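To illustrate the traversal use case mentioned under `evk.getKeys(node)`, here is a sketch of a depth-first walker. The `walk` helper is not part of this package; it only shows how `KEYS` and `getKeys` are typically combined:

```js
const evk = require("eslint-visitor-keys")

// Depth-first walk over an ESTree AST, preferring the known visitor keys
// and falling back to evk.getKeys() for unknown node types.
function walk(node, visit) {
    if (!node || typeof node.type !== "string") return
    visit(node)

    const keys = evk.KEYS[node.type] || evk.getKeys(node)
    for (const key of keys) {
        const child = node[key]
        if (Array.isArray(child)) {
            child.forEach(c => walk(c, visit))
        } else if (child && typeof child.type === "string") {
            walk(child, visit)
        }
    }
}

// walk(ast, node => console.log(node.type))
```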
[npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree # `asbuild` [![Stars](https://img.shields.io/github/stars/AssemblyScript/asbuild.svg?style=social&maxAge=3600&label=Star)](https://github.com/AssemblyScript/asbuild/stargazers) *A simple build tool for [AssemblyScript](https://assemblyscript.org) projects, similar to `cargo`, etc.* ## 🚩 Table of Contents - [Installing](#-installing) - [Usage](#-usage) - [`asb init`](#asb-init---create-an-empty-project) - [`asb test`](#asb-test---run-as-pect-tests) - [`asb fmt`](#asb-fmt---format-as-files-using-eslint) - [`asb run`](#asb-run---run-a-wasi-binary) - [`asb build`](#asb-build---compile-the-project-using-asc) - [Background](#-background) ## 🔧 Installing Install it globally ``` npm install -g asbuild ``` Or, locally as dev dependencies ``` npm install --save-dev asbuild ``` ## 💡 Usage ``` Build tool for AssemblyScript projects. Usage: asb [command] [options] Commands: asb Alias of build command, to maintain back-ward compatibility [default] asb build Compile a local package and all of its dependencies [aliases: compile, make] asb init [baseDir] Create a new AS package in an given directory asb test Run as-pect tests asb fmt [paths..] This utility formats current module using eslint. [aliases: format, lint] Options: --version Show version number [boolean] --help Show help [boolean] ``` ### `asb init` - Create an empty project ``` asb init [baseDir] Create a new AS package in an given directory Positionals: baseDir Create a sample AS project in this directory [string] [default: "."] Options: --version Show version number [boolean] --help Show help [boolean] --yes Skip the interactive prompt [boolean] [default: false] ``` ### `asb test` - Run as-pect tests ``` asb test Run as-pect tests USAGE: asb test [options] -- [aspect_options] Options: --version Show version number [boolean] --help Show help [boolean] --verbose, --vv Print out arguments passed to as-pect [boolean] [default: false] ``` ### `asb fmt` - Format AS files using ESlint ``` asb fmt [paths..] This utility formats current module using eslint. Positionals: paths Paths to format [array] [default: ["."]] Initialisation: --init Generates recommended eslint config for AS Projects [boolean] Miscellaneous --lint, --dry-run Tries to fix problems without saving the changes to the file system [boolean] [default: false] Options: --version Show version number [boolean] --help Show help ``` ### `asb run` - Run a WASI binary ``` asb run Run a WASI binary USAGE: asb run [options] [binary path] -- [binary options] Positionals: binary path to Wasm binary [string] [required] Options: --version Show version number [boolean] --help Show help [boolean] --preopen, -p comma separated list of directories to open. [default: "."] ``` ### `asb build` - Compile the project using asc ``` asb build Compile a local package and all of its dependencies USAGE: asb build [entry_file] [options] -- [asc_options] Options: --version Show version number [boolean] --help Show help [boolean] --baseDir, -d Base directory of project. [string] [default: "."] --config, -c Path to asconfig file [string] [default: "./asconfig.json"] --wat Output wat file to outDir [boolean] [default: false] --outDir Directory to place built binaries. 
Default "./build/<target>/" [string] --target Target for compilation [string] [default: "release"] --verbose Print out arguments passed to asc [boolean] [default: false] Examples: asb build Build release of 'assembly/index.ts to build/release/packageName.wasm asb build --target release Build a release binary asb build -- --measure Pass argument to 'asc' ``` #### Defaults ##### Project structure ``` project/ package.json asconfig.json assembly/ index.ts build/ release/ project.wasm debug/ project.wasm ``` - If no entry file passed and no `entry` field is in `asconfig.json`, `project/assembly/index.ts` is assumed. - `asconfig.json` allows for options for different compile targets, e.g. release, debug, etc. `asc` defaults to the release target. - The default build directory is `./build`, and artifacts are placed at `./build/<target>/packageName.wasm`. ##### Workspaces If a `workspace` field is added to a top level `asconfig.json` file, then each path in the array is built and placed into the top level `outDir`. For example, `asconfig.json`: ```json { "workspaces": ["a", "b"] } ``` Running `asb` in the directory below will use the top level build directory to place all the binaries. ``` project/ package.json asconfig.json a/ asconfig.json assembly/ index.ts b/ asconfig.json assembly/ index.ts build/ release/ a.wasm b.wasm debug/ a.wasm b.wasm ``` To see an example in action check out the [test workspace](./tests/build_test) ## 📖 Background Asbuild started as wrapper around `asc` to provide an easier CLI interface and now has been extened to support other commands like `init`, `test` and `fmt` just like `cargo` to become a one stop build tool for AS Projects. ## 📜 License This library is provided under the open-source [MIT license](https://choosealicense.com/licenses/mit/). # Optionator <a name="optionator" /> Optionator is a JavaScript/Node.js option parsing and help generation library used by [eslint](http://eslint.org), [Grasp](http://graspjs.com), [LiveScript](http://livescript.net), [esmangle](https://github.com/estools/esmangle), [escodegen](https://github.com/estools/escodegen), and [many more](https://www.npmjs.com/browse/depended/optionator). For an online demo, check out the [Grasp online demo](http://www.graspjs.com/#demo). [About](#about) &middot; [Usage](#usage) &middot; [Settings Format](#settings-format) &middot; [Argument Format](#argument-format) ## Why? The problem with other option parsers, such as `yargs` or `minimist`, is they just accept all input, valid or not. With Optionator, if you mistype an option, it will give you an error (with a suggestion for what you meant). If you give the wrong type of argument for an option, it will give you an error rather than supplying the wrong input to your application. $ cmd --halp Invalid option '--halp' - perhaps you meant '--help'? $ cmd --count str Invalid value for option 'count' - expected type Int, received value: str. Other helpful features include reformatting the help text based on the size of the console, so that it fits even if the console is narrow, and accepting not just an array (eg. process.argv), but a string or object as well, making things like testing much easier. ## About Optionator uses [type-check](https://github.com/gkz/type-check) and [levn](https://github.com/gkz/levn) behind the scenes to cast and verify input according the specified types. MIT license. Version 0.9.1 npm install optionator For updates on Optionator, [follow me on twitter](https://twitter.com/gkzahariev). 
Optionator is a Node.js module, but can be used in the browser as well if packed with webpack/browserify. ## Usage `require('optionator');` returns a function. It has one property, `VERSION`, the current version of the library as a string. This function is called with an object specifying your options and other information, see the [settings format section](#settings-format). This in turn returns an object with three properties, `parse`, `parseArgv`, `generateHelp`, and `generateHelpForOption`, which are all functions. ```js var optionator = require('optionator')({ prepend: 'Usage: cmd [options]', append: 'Version 1.0.0', options: [{ option: 'help', alias: 'h', type: 'Boolean', description: 'displays help' }, { option: 'count', alias: 'c', type: 'Int', description: 'number of things', example: 'cmd --count 2' }] }); var options = optionator.parseArgv(process.argv); if (options.help) { console.log(optionator.generateHelp()); } ... ``` ### parse(input, parseOptions) `parse` processes the `input` according to your settings, and returns an object with the results. ##### arguments * input - `[String] | Object | String` - the input you wish to parse * parseOptions - `{slice: Int}` - all options optional - `slice` specifies how much to slice away from the beginning if the input is an array or string - by default `0` for string, `2` for array (works with `process.argv`) ##### returns `Object` - the parsed options, each key is a camelCase version of the option name (specified in dash-case), and each value is the processed value for that option. Positional values are in an array under the `_` key. ##### example ```js parse(['node', 't.js', '--count', '2', 'positional']); // {count: 2, _: ['positional']} parse('--count 2 positional'); // {count: 2, _: ['positional']} parse({count: 2, _:['positional']}); // {count: 2, _: ['positional']} ``` ### parseArgv(input) `parseArgv` works exactly like `parse`, but only for array input and it slices off the first two elements. ##### arguments * input - `[String]` - the input you wish to parse ##### returns See "returns" section in "parse" ##### example ```js parseArgv(process.argv); ``` ### generateHelp(helpOptions) `generateHelp` produces help text based on your settings. ##### arguments * helpOptions - `{showHidden: Boolean, interpolate: Object}` - all options optional - `showHidden` specifies whether to show options with `hidden: true` specified, by default it is `false` - `interpolate` specify data to be interpolated in `prepend` and `append` text, `{{key}}` is the format - eg. `generateHelp({interpolate:{version: '0.4.2'}})`, will change this `append` text: `Version {{version}}` to `Version 0.4.2` ##### returns `String` - the generated help text ##### example ```js generateHelp(); /* "Usage: cmd [options] positional -h, --help displays help -c, --count Int number of things Version 1.0.0 "*/ ``` ### generateHelpForOption(optionName) `generateHelpForOption` produces expanded help text for the specified with `optionName` option. If an `example` was specified for the option, it will be displayed, and if a `longDescription` was specified, it will display that instead of the `description`. ##### arguments * optionName - `String` - the name of the option to display ##### returns `String` - the generated help text for the option ##### example ```js generateHelpForOption('count'); /* "-c, --count Int description: number of things example: cmd --count 2 "*/ ``` ## Settings Format When your `require('optionator')`, you get a function that takes in a settings object. 
This object has the type: { prepend: String, append: String, options: [{heading: String} | { option: String, alias: [String] | String, type: String, enum: [String], default: String, restPositional: Boolean, required: Boolean, overrideRequired: Boolean, dependsOn: [String] | String, concatRepeatedArrays: Boolean | (Boolean, Object), mergeRepeatedObjects: Boolean, description: String, longDescription: String, example: [String] | String }], helpStyle: { aliasSeparator: String, typeSeparator: String, descriptionSeparator: String, initialIndent: Int, secondaryIndent: Int, maxPadFactor: Number }, mutuallyExclusive: [[String | [String]]], concatRepeatedArrays: Boolean | (Boolean, Object), // deprecated, set in defaults object mergeRepeatedObjects: Boolean, // deprecated, set in defaults object positionalAnywhere: Boolean, typeAliases: Object, defaults: Object } All of the properties are optional (the `Maybe` has been excluded for brevities sake), except for having either `heading: String` or `option: String` in each object in the `options` array. ### Top Level Properties * `prepend` is an optional string to be placed before the options in the help text * `append` is an optional string to be placed after the options in the help text * `options` is a required array specifying your options and headings, the options and headings will be displayed in the order specified * `helpStyle` is an optional object which enables you to change the default appearance of some aspects of the help text * `mutuallyExclusive` is an optional array of arrays of either strings or arrays of strings. The top level array is a list of rules, each rule is a list of elements - each element can be either a string (the name of an option), or a list of strings (a group of option names) - there will be an error if more than one element is present * `concatRepeatedArrays` see description under the "Option Properties" heading - use at the top level is deprecated, if you want to set this for all options, use the `defaults` property * `mergeRepeatedObjects` see description under the "Option Properties" heading - use at the top level is deprecated, if you want to set this for all options, use the `defaults` property * `positionalAnywhere` is an optional boolean (defaults to `true`) - when `true` it allows positional arguments anywhere, when `false`, all arguments after the first positional one are taken to be positional as well, even if they look like a flag. For example, with `positionalAnywhere: false`, the arguments `--flag --boom 12 --crack` would have two positional arguments: `12` and `--crack` * `typeAliases` is an optional object, it allows you to set aliases for types, eg. `{Path: 'String'}` would allow you to use the type `Path` as an alias for the type `String` * `defaults` is an optional object following the option properties format, which specifies default values for all options. A default will be overridden if manually set. 
For example, you can do `default: { type: "String" }` to set the default type of all options to `String`, and then override that default in an individual option by setting the `type` property. #### Heading Properties * `heading` is a required string, the name of the heading #### Option Properties * `option` is the required name of the option - use dash-case, without the leading dashes * `alias` is an optional string or array of strings which specify any aliases for the option * `type` is a required string in the [type check](https://github.com/gkz/type-check) [format](https://github.com/gkz/type-check#type-format), this will be used to cast the inputted value and validate it * `enum` is an optional array of strings, each string will be parsed by [levn](https://github.com/gkz/levn) - the argument value must be one of the resulting values - each potential value must validate against the specified `type` * `default` is an optional string, which will be parsed by [levn](https://github.com/gkz/levn) and used as the default value if none is set - the value must validate against the specified `type` * `restPositional` is an optional boolean - if set to `true`, everything after the option will be taken to be a positional argument, even if it looks like a named argument * `required` is an optional boolean - if set to `true`, the option parsing will fail if the option is not defined * `overrideRequired` is an optional boolean - if set to `true` and the option is used, and there is another option which is required but not set, it will override the need for the required option and there will be no error - this is useful if you have required options and want to use `--help` or `--version` flags * `concatRepeatedArrays` is an optional boolean or tuple with boolean and options object (defaults to `false`) - when set to `true` and an option contains an array value and is repeated, the subsequent values for the flag will be appended rather than overwriting the original value - eg. option `g` of type `[String]`: `-g a -g b -g c,d` will result in `['a','b','c','d']`. You can supply an options object by giving the following value: `[true, options]`. The one currently supported option is `oneValuePerFlag`, which only allows one array value per flag. This is useful if your potential values contain a comma. * `mergeRepeatedObjects` is an optional boolean (defaults to `false`) - when set to `true` and an option contains an object value and is repeated, the subsequent values for the flag will be merged rather than overwriting the original value - eg.
option `g` of type `Object`: `-g a:1 -g b:2 -g c:3,d:4` will result in `{a: 1, b: 2, c: 3, d: 4}` * `dependsOn` is an optional string or array of strings - if simply a string (the name of another option), it will make sure that that other option is set, if an array of strings, depending on whether `'and'` or `'or'` is first, it will either check whether all (`['and', 'option-a', 'option-b']`), or at least one (`['or', 'option-a', 'option-b']`) other options are set * `description` is an optional string, which will be displayed next to the option in the help text * `longDescription` is an optional string, it will be displayed instead of the `description` when `generateHelpForOption` is used * `example` is an optional string or array of strings with example(s) for the option - these will be displayed when `generateHelpForOption` is used #### Help Style Properties * `aliasSeparator` is an optional string, separates multiple names from each other - default: ' ,' * `typeSeparator` is an optional string, separates the type from the names - default: ' ' * `descriptionSeparator` is an optional string, separates the description from the padded name and type - default: ' ' * `initialIndent` is an optional int - the amount of indent for options - default: 2 * `secondaryIndent` is an optional int - the amount of indent if wrapped fully (in addition to the initial indent) - default: 4 * `maxPadFactor` is an optional number - affects the default level of padding for the names/type, it is multiplied by the average of the length of the names/type - default: 1.5 ## Argument Format At the highest level there are two types of arguments: named, and positional. Named arguments of any length are prefixed with `--` (eg. `--go`), and those of one character may be prefixed with either `--` or `-` (eg. `-g`). There are two types of named arguments: boolean flags (eg. `--problemo`, `-p`) which take no value and result in a `true` if they are present, the falsey `undefined` if they are not present, or `false` if present and explicitly prefixed with `no` (eg. `--no-problemo`). Named arguments with values (eg. `--tseries 800`, `-t 800`) are the other type. If the option has a type `Boolean` it will automatically be made into a boolean flag. Any other type results in a named argument that takes a value. For more information about how to properly set types to get the value you want, take a look at the [type check](https://github.com/gkz/type-check) and [levn](https://github.com/gkz/levn) pages. You can group single character arguments that use a single `-`, however all except the last must be boolean flags (which take no value). The last may be a boolean flag, or an argument which takes a value - eg. `-ba 2` is equivalent to `-b -a 2`. Positional arguments are all those values which do not fall under the above - they can be anywhere, not just at the end. For example, in `cmd -b one -a 2 two` where `b` is a boolean flag, and `a` has the type `Number`, there are two positional arguments, `one` and `two`. Everything after an `--` is positional, even if it looks like a named argument. You may optionally use `=` to separate option names from values, for example: `--count=2`. If you specify the option `NUM`, then any argument using a single `-` followed by a number will be valid and will set the value of `NUM`. Eg. `-2` will be parsed into `NUM: 2`. If duplicate named arguments are present, the last one will be taken.
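The following is a minimal sketch that ties the settings format and argument format above together. The tool name, option names and values are invented purely for illustration; they are not part of the library:

```js
var optionator = require('optionator')({
    prepend: 'Usage: mytool [options] <files>',
    options: [
        {heading: 'Main options'},
        {option: 'verbose', alias: 'v', type: 'Boolean', description: 'chatty output'},
        {option: 'quiet', alias: 'q', type: 'Boolean', description: 'no output'},
        {option: 'tag', alias: 't', type: '[String]', concatRepeatedArrays: true,
         description: 'tags, may be repeated'},
        {option: 'output', alias: 'o', type: 'String', description: 'output file',
         dependsOn: 'tag'} // --output is only valid when --tag is also set
    ],
    mutuallyExclusive: [['verbose', 'quiet']], // using both --verbose and --quiet is an error
    defaults: {description: 'no description'}  // applied to any option missing a description
});

// Grouped short flag, repeated array option, `=` syntax, and `--`:
var options = optionator.parse('-v --tag=a -t b,c extra -- --not-a-flag');
// options.verbose === true
// options.tag  -> ['a', 'b', 'c']            (concatRepeatedArrays appends repeats)
// options._    -> ['extra', '--not-a-flag']  (everything after `--` is positional)
```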
## Technical About `optionator` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It uses [levn](https://github.com/gkz/levn) to cast arguments to their specified type, and uses [type-check](https://github.com/gkz/type-check) to validate values. It also uses the [prelude.ls](http://preludels.com/) library. # assemblyscript-json ![npm version](https://img.shields.io/npm/v/assemblyscript-json) ![npm downloads per month](https://img.shields.io/npm/dm/assemblyscript-json) JSON encoder / decoder for AssemblyScript. Special thanks to https://github.com/MaxGraey/bignum.wasm for basic unit testing infra for AssemblyScript. ## Installation `assemblyscript-json` is available as an [npm package](https://www.npmjs.com/package/assemblyscript-json). You can install `assemblyscript-json` in your AssemblyScript project by running: `npm install --save assemblyscript-json` ## Usage ### Parsing JSON ```typescript import { JSON } from "assemblyscript-json"; // Parse an object using the JSON object let jsonObj: JSON.Obj = <JSON.Obj>(JSON.parse('{"hello": "world", "value": 24}')); // We can then use the .getX functions to read from the object if we know its type // This will return the appropriate JSON.X value if the key exists, or null if the key does not exist let worldOrNull: JSON.Str | null = jsonObj.getString("hello"); // This will return a JSON.Str or null if (worldOrNull != null) { // use .valueOf() to turn the high level JSON.Str type into a string let world: string = worldOrNull.valueOf(); } let numOrNull: JSON.Num | null = jsonObj.getNum("value"); if (numOrNull != null) { // use .valueOf() to turn the high level JSON.Num type into a f64 let value: f64 = numOrNull.valueOf(); } // If you don't know the value type, get the parent JSON.Value let valueOrNull: JSON.Value | null = jsonObj.getValue("hello"); if (valueOrNull != null) { let value = <JSON.Value>valueOrNull; // Next we could figure out what type we are if(value.isString) { // value.isString would be true, so we can cast to a string let innerString = (<JSON.Str>value).valueOf(); let jsonString = (<JSON.Str>value).stringify(); // Do something with string value } } ``` ### Encoding JSON ```typescript import { JSONEncoder } from "assemblyscript-json"; // Create encoder let encoder = new JSONEncoder(); // Construct necessary object encoder.pushObject("obj"); encoder.setInteger("int", 10); encoder.setString("str", ""); encoder.popObject(); // Get serialized data let json: Uint8Array = encoder.serialize(); // Or get serialized data as string let jsonString: string = encoder.stringify(); assert(jsonString, '"obj": {"int": 10, "str": ""}'); // True! ``` ### Custom JSON Deserializers ```typescript import { JSONDecoder, JSONHandler } from "assemblyscript-json"; // Events need to be received by custom object extending JSONHandler. // NOTE: All methods are optional to implement.
class MyJSONEventsHandler extends JSONHandler { setString(name: string, value: string): void { // Handle field } setBoolean(name: string, value: bool): void { // Handle field } setNull(name: string): void { // Handle field } setInteger(name: string, value: i64): void { // Handle field } setFloat(name: string, value: f64): void { // Handle field } pushArray(name: string): bool { // Handle array start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popArray(): void { // Handle array end } pushObject(name: string): bool { // Handle object start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popObject(): void { // Handle object end } } // Create decoder let decoder = new JSONDecoder<MyJSONEventsHandler>(new MyJSONEventsHandler()); // Create a byte buffer of our JSON. NOTE: Deserializers work on UTF8 string buffers. let jsonString = '{"hello": "world"}'; let jsonBuffer = Uint8Array.wrap(String.UTF8.encode(jsonString)); // Parse JSON decoder.deserialize(jsonBuffer); // This will send events to MyJSONEventsHandler ``` Feel free to look through the [tests](https://github.com/nearprotocol/assemblyscript-json/tree/master/assembly/__tests__) for more usage examples. ## Reference Documentation Reference API Documentation can be found in the [docs directory](./docs). ## License [MIT](./LICENSE) # hasurl [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] > Determine whether Node.js' native [WHATWG `URL`](https://nodejs.org/api/url.html#url_the_whatwg_url_api) implementation is available. ## Installation [Node.js](http://nodejs.org/) `>= 4` is required. To install, type this at the command line: ```shell npm install hasurl ``` ## Usage ```js const hasURL = require('hasurl'); if (hasURL()) { // supported } else { // fallback } ``` [npm-image]: https://img.shields.io/npm/v/hasurl.svg [npm-url]: https://npmjs.org/package/hasurl [travis-image]: https://img.shields.io/travis/stevenvachon/hasurl.svg [travis-url]: https://travis-ci.org/stevenvachon/hasurl <img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/img/ajv.svg"> &nbsp; # Ajv JSON schema validator The fastest JSON validator for Node.js and browser. Supports JSON Schema draft-04/06/07/2019-09/2020-12 ([draft-04 support](https://ajv.js.org/json-schema.html#draft-04) requires ajv-draft-04 package) and JSON Type Definition [RFC8927](https://datatracker.ietf.org/doc/rfc8927/). 
[![build](https://github.com/ajv-validator/ajv/workflows/build/badge.svg)](https://github.com/ajv-validator/ajv/actions?query=workflow%3Abuild) [![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv) [![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv) [![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master) [![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv) [![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin) ## Platinum sponsors [<img src="https://ajv.js.org/img/mozilla.svg" width="45%">](https://www.mozilla.org)<img src="https://ajv.js.org/img/gap.svg" width="8%">[<img src="https://ajv.js.org/img/reserved.svg" width="45%">](https://opencollective.com/ajv) ## Ajv online event - May 20, 10am PT / 6pm UK We will talk about: - new features of Ajv version 8. - the improvements sponsored by Mozilla's MOSS grant. - how Ajv is used in JavaScript applications. Speakers: - [Evgeny Poberezkin](https://github.com/epoberezkin), the creator of Ajv. - [Mehan Jayasuriya](https://github.com/mehan), Program Officer at Mozilla Foundation, leading the [MOSS](https://www.mozilla.org/en-US/moss/) and other programs investing in the open source and community ecosystems. - [Matteo Collina](https://github.com/mcollina), Technical Director at NearForm and Node.js Technical Steering Committee member, creator of Fastify web framework. - [Kin Lane](https://github.com/kinlane), Chief Evangelist at Postman. Studying the tech, business & politics of APIs since 2010. Presidential Innovation Fellow during the Obama administration. - [Ulysse Carion](https://github.com/ucarion), the creator of JSON Type Definition specification. [Gajus Kuizinas](https://github.com/gajus) will host the event. Please [register here](https://us02web.zoom.us/webinar/register/2716192553618/WN_erJ_t4ICTHOnGC1SOybNnw). ## Contributing More than 100 people contributed to Ajv, and we would love to have you join the development. We welcome implementing new features that will benefit many users and ideas to improve our documentation. Please review [Contributing guidelines](./CONTRIBUTING.md) and [Code components](https://ajv.js.org/components.html). ## Documentation All documentation is available on the [Ajv website](https://ajv.js.org). Some useful site links: - [Getting started](https://ajv.js.org/guide/getting-started.html) - [JSON Schema vs JSON Type Definition](https://ajv.js.org/guide/schema-language.html) - [API reference](https://ajv.js.org/api.html) - [Strict mode](https://ajv.js.org/strict-mode.html) - [Standalone validation code](https://ajv.js.org/standalone.html) - [Security considerations](https://ajv.js.org/security.html) - [Command line interface](https://ajv.js.org/packages/ajv-cli.html) - [Frequently Asked Questions](https://ajv.js.org/faq.html) ## <a name="sponsors"></a>Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin) Since I asked to support Ajv development 40 people and 6 organizations contributed via GitHub and OpenCollective - this support helped receiving the MOSS grant! Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released. 
Please sponsor Ajv via: - [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it) - [Ajv Open Collective️](https://opencollective.com/ajv) Thank you. #### Open Collective sponsors <a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a> <a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a> ## Performance Ajv generates code to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization. Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=62,4,1&chs=600x416&chxl=-1:|ajv|@exodus&#x2F;schemasafe|is-my-json-valid|djv|@cfworker&#x2F;json-schema|jsonschema&chd=t:100,69.2,51.5,13.1,5.1,1.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements JSON Schema [draft-06/07/2019-09/2020-12](http://json-schema.org/) standards (draft-04 is supported in v6): - all validation keywords (see [JSON Schema validation keywords](https://ajv.js.org/json-schema.html)) - [OpenAPI](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.3.md) extensions: - NEW: keyword [discriminator](https://ajv.js.org/json-schema.html#discriminator). - keyword [nullable](https://ajv.js.org/json-schema.html#nullable). 
- full support of remote references (remote schemas have to be added with `addSchema` or compiled to be available) - support of recursive references between schemas - correct string lengths for strings with unicode pairs - JSON Schema [formats](https://ajv.js.org/guide/formats.html) (with [ajv-formats](https://github.com/ajv-validator/ajv-formats) plugin). - [validates schemas against meta-schema](https://ajv.js.org/api.html#api-validateschema) - NEW: supports [JSON Type Definition](https://datatracker.ietf.org/doc/rfc8927/): - all keywords (see [JSON Type Definition schema forms](https://ajv.js.org/json-type-definition.html)) - meta-schema for JTD schemas - "union" keyword and user-defined keywords (can be used inside "metadata" member of the schema) - supports [browsers](https://ajv.js.org/guide/environments.html#browsers) and Node.js 10.x - current - [asynchronous loading](https://ajv.js.org/guide/managing-schemas.html#asynchronous-schema-loading) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](https://ajv.js.org/options.html#allerrors) - [error messages with parameters](https://ajv.js.org/api.html#validation-errors) describing error reasons to allow error message generation - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [removing-additional-properties](https://ajv.js.org/guide/modifying-data.html#removing-additional-properties) - [assigning defaults](https://ajv.js.org/guide/modifying-data.html#assigning-defaults) to missing properties and items - [coercing data](https://ajv.js.org/guide/modifying-data.html#coercing-data-types) to the types specified in `type` keywords - [user-defined keywords](https://ajv.js.org/guide/user-keywords.html) - additional extension keywords with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [\$data reference](https://ajv.js.org/guide/combining-schemas.html#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](https://ajv.js.org/guide/async-validation.html) of user-defined formats and keywords ## Install To install version 8: ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://runkit.com/npm/ajv In JavaScript: ```javascript // or ESM/TypeScript import import Ajv from "ajv" // Node.js require: const Ajv = require("ajv") const ajv = new Ajv() // options can be passed, e.g. {allErrors: true} const schema = { type: "object", properties: { foo: {type: "integer"}, bar: {type: "string"} }, required: ["foo"], additionalProperties: false, } const data = { foo: 1, bar: "abc" } const validate = ajv.compile(schema) const valid = validate(data) if (!valid) console.log(validate.errors) ``` Learn how to use Ajv and see more examples in the [Guide: getting started](https://ajv.js.org/guide/getting-started.html) ## Changes history See [https://github.com/ajv-validator/ajv/releases](https://github.com/ajv-validator/ajv/releases) **Please note**: [Changes in version 8.0.0](https://github.com/ajv-validator/ajv/releases/tag/v8.0.0) [Version 7.0.0](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0) [Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0). ## Code of conduct Please review and follow the [Code of conduct](./CODE_OF_CONDUCT.md). Please report any unacceptable behaviour to ajv.validator@gmail.com - it will be reviewed by the project team. 
## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ## Open-source software support Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides a centralised support to open-source software users, in addition to the support provided by software maintainers. ## License [MIT](./LICENSE) <a name="table"></a> # Table > Produces a string that represents array data in a text table. [![Travis build status](http://img.shields.io/travis/gajus/table/master.svg?style=flat-square)](https://travis-ci.org/gajus/table) [![Coveralls](https://img.shields.io/coveralls/gajus/table.svg?style=flat-square)](https://coveralls.io/github/gajus/table) [![NPM version](http://img.shields.io/npm/v/table.svg?style=flat-square)](https://www.npmjs.org/package/table) [![Canonical Code Style](https://img.shields.io/badge/code%20style-canonical-blue.svg?style=flat-square)](https://github.com/gajus/canonical) [![Twitter Follow](https://img.shields.io/twitter/follow/kuizinas.svg?style=social&label=Follow)](https://twitter.com/kuizinas) * [Table](#table) * [Features](#table-features) * [Install](#table-install) * [Usage](#table-usage) * [API](#table-api) * [table](#table-api-table-1) * [createStream](#table-api-createstream) * [getBorderCharacters](#table-api-getbordercharacters) ![Demo of table displaying a list of missions to the Moon.](./.README/demo.png) <a name="table-features"></a> ## Features * Works with strings containing [fullwidth](https://en.wikipedia.org/wiki/Halfwidth_and_fullwidth_forms) characters. * Works with strings containing [ANSI escape codes](https://en.wikipedia.org/wiki/ANSI_escape_code). * Configurable border characters. * Configurable content alignment per column. * Configurable content padding per column. * Configurable column width. * Text wrapping. <a name="table-install"></a> ## Install ```bash npm install table ``` [![Buy Me A Coffee](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/gajus) [![Become a Patron](https://c5.patreon.com/external/logo/become_a_patron_button.png)](https://www.patreon.com/gajus) <a name="table-usage"></a> ## Usage ```js import { table } from 'table'; // Using commonjs? // const { table } = require('table'); const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; console.log(table(data)); ``` ``` ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════╧════╝ ``` <a name="table-api"></a> ## API <a name="table-api-table-1"></a> ### table Returns the string in the table format **Parameters:** - **_data_:** The data to display - Type: `any[][]` - Required: `true` - **_config_:** Table configuration - Type: `object` - Required: `false` <a name="table-api-table-1-config-border"></a> ##### config.border Type: `{ [type: string]: string }`\ Default: `honeywell` [template](#getbordercharacters) Custom borders. 
The keys are any of: - `topLeft`, `topRight`, `topBody`, `topJoin` - `bottomLeft`, `bottomRight`, `bottomBody`, `bottomJoin` - `joinLeft`, `joinRight`, `joinBody`, `joinJoin` - `bodyLeft`, `bodyRight`, `bodyJoin` - `headerJoin` ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { border: { topBody: `─`, topJoin: `┬`, topLeft: `┌`, topRight: `┐`, bottomBody: `─`, bottomJoin: `┴`, bottomLeft: `└`, bottomRight: `┘`, bodyLeft: `│`, bodyRight: `│`, bodyJoin: `│`, joinBody: `─`, joinLeft: `├`, joinRight: `┤`, joinJoin: `┼` } }; console.log(table(data, config)); ``` ``` ┌────┬────┬────┐ │ 0A │ 0B │ 0C │ ├────┼────┼────┤ │ 1A │ 1B │ 1C │ ├────┼────┼────┤ │ 2A │ 2B │ 2C │ └────┴────┴────┘ ``` <a name="table-api-table-1-config-drawverticalline"></a> ##### config.drawVerticalLine Type: `(lineIndex: number, columnCount: number) => boolean`\ Default: `() => true` It is used to tell whether to draw a vertical line. This callback is called for each vertical border of the table. If the table has `n` columns, then the `lineIndex` parameter receives each number in the range `[0, n]` inclusive. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ['3A', '3B', '3C'], ['4A', '4B', '4C'] ]; const config = { drawVerticalLine: (lineIndex, columnCount) => { return lineIndex === 0 || lineIndex === columnCount; } }; console.log(table(data, config)); ``` ``` ╔════════════╗ ║ 0A 0B 0C ║ ╟────────────╢ ║ 1A 1B 1C ║ ╟────────────╢ ║ 2A 2B 2C ║ ╟────────────╢ ║ 3A 3B 3C ║ ╟────────────╢ ║ 4A 4B 4C ║ ╚════════════╝ ``` <a name="table-api-table-1-config-drawhorizontalline"></a> ##### config.drawHorizontalLine Type: `(lineIndex: number, rowCount: number) => boolean`\ Default: `() => true` It is used to tell whether to draw a horizontal line. This callback is called for each horizontal border of the table. If the table has `n` rows, then the `lineIndex` parameter receives each number in the range `[0, n]` inclusive. If the table has `n` rows and contains the header, then the range will be `[0, n+1]` inclusive. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ['3A', '3B', '3C'], ['4A', '4B', '4C'] ]; const config = { drawHorizontalLine: (lineIndex, rowCount) => { return lineIndex === 0 || lineIndex === 1 || lineIndex === rowCount - 1 || lineIndex === rowCount; } }; console.log(table(data, config)); ``` ``` ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ║ 2A │ 2B │ 2C ║ ║ 3A │ 3B │ 3C ║ ╟────┼────┼────╢ ║ 4A │ 4B │ 4C ║ ╚════╧════╧════╝ ``` <a name="table-api-table-1-config-singleline"></a> ##### config.singleLine Type: `boolean`\ Default: `false` If `true`, horizontal lines inside the table are not drawn. This option also overrides the `config.drawHorizontalLine` if specified.
```js const data = [ ['-rw-r--r--', '1', 'pandorym', 'staff', '1529', 'May 23 11:25', 'LICENSE'], ['-rw-r--r--', '1', 'pandorym', 'staff', '16327', 'May 23 11:58', 'README.md'], ['drwxr-xr-x', '76', 'pandorym', 'staff', '2432', 'May 23 12:02', 'dist'], ['drwxr-xr-x', '634', 'pandorym', 'staff', '20288', 'May 23 11:54', 'node_modules'], ['-rw-r--r--', '1,', 'pandorym', 'staff', '525688', 'May 23 11:52', 'package-lock.json'], ['-rw-r--r--@', '1', 'pandorym', 'staff', '2440', 'May 23 11:25', 'package.json'], ['drwxr-xr-x', '27', 'pandorym', 'staff', '864', 'May 23 11:25', 'src'], ['drwxr-xr-x', '20', 'pandorym', 'staff', '640', 'May 23 11:25', 'test'], ]; const config = { singleLine: true }; console.log(table(data, config)); ``` ``` ╔═════════════╤═════╤══════════╤═══════╤════════╤══════════════╤═══════════════════╗ ║ -rw-r--r-- │ 1 │ pandorym │ staff │ 1529 │ May 23 11:25 │ LICENSE ║ ║ -rw-r--r-- │ 1 │ pandorym │ staff │ 16327 │ May 23 11:58 │ README.md ║ ║ drwxr-xr-x │ 76 │ pandorym │ staff │ 2432 │ May 23 12:02 │ dist ║ ║ drwxr-xr-x │ 634 │ pandorym │ staff │ 20288 │ May 23 11:54 │ node_modules ║ ║ -rw-r--r-- │ 1, │ pandorym │ staff │ 525688 │ May 23 11:52 │ package-lock.json ║ ║ -rw-r--r--@ │ 1 │ pandorym │ staff │ 2440 │ May 23 11:25 │ package.json ║ ║ drwxr-xr-x │ 27 │ pandorym │ staff │ 864 │ May 23 11:25 │ src ║ ║ drwxr-xr-x │ 20 │ pandorym │ staff │ 640 │ May 23 11:25 │ test ║ ╚═════════════╧═════╧══════════╧═══════╧════════╧══════════════╧═══════════════════╝ ``` <a name="table-api-table-1-config-columns"></a> ##### config.columns Type: `Column[] | { [columnIndex: number]: Column }` Column specific configurations. <a name="table-api-table-1-config-columns-config-columns-width"></a> ###### config.columns[*].width Type: `number`\ Default: the maximum cell widths of the column Column width (excluding the paddings). 
```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { columns: { 1: { width: 10 } } }; console.log(table(data, config)); ``` ``` ╔════╤════════════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────────────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────────────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════════════╧════╝ ``` <a name="table-api-table-1-config-columns-config-columns-alignment"></a> ###### config.columns[*].alignment Type: `'center' | 'justify' | 'left' | 'right'`\ Default: `'left'` Cell content horizontal alignment ```js const data = [ ['0A', '0B', '0C', '0D 0E 0F'], ['1A', '1B', '1C', '1D 1E 1F'], ['2A', '2B', '2C', '2D 2E 2F'], ]; const config = { columnDefault: { width: 10, }, columns: [ { alignment: 'left' }, { alignment: 'center' }, { alignment: 'right' }, { alignment: 'justify' } ], }; console.log(table(data, config)); ``` ``` ╔════════════╤════════════╤════════════╤════════════╗ ║ 0A │ 0B │ 0C │ 0D 0E 0F ║ ╟────────────┼────────────┼────────────┼────────────╢ ║ 1A │ 1B │ 1C │ 1D 1E 1F ║ ╟────────────┼────────────┼────────────┼────────────╢ ║ 2A │ 2B │ 2C │ 2D 2E 2F ║ ╚════════════╧════════════╧════════════╧════════════╝ ``` <a name="table-api-table-1-config-columns-config-columns-verticalalignment"></a> ###### config.columns[*].verticalAlignment Type: `'top' | 'middle' | 'bottom'`\ Default: `'top'` Cell content vertical alignment ```js const data = [ ['A', 'B', 'C', 'DEF'], ]; const config = { columnDefault: { width: 1, }, columns: [ { verticalAlignment: 'top' }, { verticalAlignment: 'middle' }, { verticalAlignment: 'bottom' }, ], }; console.log(table(data, config)); ``` ``` ╔═══╤═══╤═══╤═══╗ ║ A │ │ │ D ║ ║ │ B │ │ E ║ ║ │ │ C │ F ║ ╚═══╧═══╧═══╧═══╝ ``` <a name="table-api-table-1-config-columns-config-columns-paddingleft"></a> ###### config.columns[*].paddingLeft Type: `number`\ Default: `1` The number of whitespaces used to pad the content on the left. <a name="table-api-table-1-config-columns-config-columns-paddingright"></a> ###### config.columns[*].paddingRight Type: `number`\ Default: `1` The number of whitespaces used to pad the content on the right. The `paddingLeft` and `paddingRight` options are not included in the column width. So a column with `width = 5`, `paddingLeft = 2` and `paddingRight = 2` will have a total width of `9`. ```js const data = [ ['0A', 'AABBCC', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { columns: [ { paddingLeft: 3 }, { width: 2, paddingRight: 3 } ] }; console.log(table(data, config)); ``` ``` ╔══════╤══════╤════╗ ║ 0A │ AA │ 0C ║ ║ │ BB │ ║ ║ │ CC │ ║ ╟──────┼──────┼────╢ ║ 1A │ 1B │ 1C ║ ╟──────┼──────┼────╢ ║ 2A │ 2B │ 2C ║ ╚══════╧══════╧════╝ ``` <a name="table-api-table-1-config-columns-config-columns-truncate"></a> ###### config.columns[*].truncate Type: `number`\ Default: `Infinity` The number of characters at which the content will be truncated. To handle content that overflows the container width, the `table` package implements [text wrapping](#config.columns[*].wrapWord). However, sometimes you may want to truncate content that is too long to be displayed in the table. ```js const data = [ ['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.'] ]; const config = { columns: [ { width: 20, truncate: 100 } ] }; console.log(table(data, config)); ``` ``` ╔══════════════════════╗ ║ Lorem ipsum dolor si ║ ║ t amet, consectetur ║ ║ adipiscing elit.
Pha ║ ║ sellus pulvinar nibh ║ ║ sed mauris convall… ║ ╚══════════════════════╝ ``` <a name="table-api-table-1-config-columns-config-columns-wrapword"></a> ###### config.columns[*].wrapWord Type: `boolean`\ Default: `false` The `table` package implements auto text wrapping, i.e., text that has the width greater than the container width will be separated into multiple lines at the nearest space or one of the special characters: `\|/_.,;-`. When `wrapWord` is `false`: ```js const data = [ ['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.'] ]; const config = { columns: [ { width: 20 } ] }; console.log(table(data, config)); ``` ``` ╔══════════════════════╗ ║ Lorem ipsum dolor si ║ ║ t amet, consectetur ║ ║ adipiscing elit. Pha ║ ║ sellus pulvinar nibh ║ ║ sed mauris convallis ║ ║ dapibus. Nunc venena ║ ║ tis tempus nulla sit ║ ║ amet viverra. ║ ╚══════════════════════╝ ``` When `wrapWord` is `true`: ``` ╔══════════════════════╗ ║ Lorem ipsum dolor ║ ║ sit amet, ║ ║ consectetur ║ ║ adipiscing elit. ║ ║ Phasellus pulvinar ║ ║ nibh sed mauris ║ ║ convallis dapibus. ║ ║ Nunc venenatis ║ ║ tempus nulla sit ║ ║ amet viverra. ║ ╚══════════════════════╝ ``` <a name="table-api-table-1-config-columndefault"></a> ##### config.columnDefault Type: `Column`\ Default: `{}` The default configuration for all columns. Column-specific settings will overwrite the default values. <a name="table-api-table-1-config-header"></a> ##### config.header Type: `object` Header configuration. The header configuration inherits the most of the column's, except: - `content` **{string}**: the header content. - `width:` calculate based on the content width automatically. - `alignment:` `center` be default. - `verticalAlignment:` is not supported. - `config.border.topJoin` will be `config.border.topBody` for prettier. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ]; const config = { columnDefault: { width: 10, }, header: { alignment: 'center', content: 'THE HEADER\nThis is the table about something', }, } console.log(table(data, config)); ``` ``` ╔══════════════════════════════════════╗ ║ THE HEADER ║ ║ This is the table about something ║ ╟────────────┬────────────┬────────────╢ ║ 0A │ 0B │ 0C ║ ╟────────────┼────────────┼────────────╢ ║ 1A │ 1B │ 1C ║ ╟────────────┼────────────┼────────────╢ ║ 2A │ 2B │ 2C ║ ╚════════════╧════════════╧════════════╝ ``` <a name="table-api-createstream"></a> ### createStream `table` package exports `createStream` function used to draw a table and append rows. **Parameter:** - _**config:**_ the same as `table`'s, except `config.columnDefault.width` and `config.columnCount` must be provided. ```js import { createStream } from 'table'; const config = { columnDefault: { width: 50 }, columnCount: 1 }; const stream = createStream(config); setInterval(() => { stream.write([new Date()]); }, 500); ``` ![Streaming current date.](./.README/api/stream/streaming.gif) `table` package uses ANSI escape codes to overwrite the output of the last line when a new row is printed. The underlying implementation is explained in this [Stack Overflow answer](http://stackoverflow.com/a/32938658/368691). Streaming supports all of the configuration properties and functionality of a static table (such as auto text wrapping, alignment and padding), e.g. 
```js import { createStream } from 'table'; import _ from 'lodash'; const config = { columnDefault: { width: 50 }, columnCount: 3, columns: [ { width: 10, alignment: 'right' }, { alignment: 'center' }, { width: 10 } ] }; const stream = createStream(config); let i = 0; setInterval(() => { let random; random = _.sample('abcdefghijklmnopqrstuvwxyz', _.random(1, 30)).join(''); stream.write([i++, new Date(), random]); }, 500); ``` ![Streaming random data.](./.README/api/stream/streaming-random.gif) <a name="table-api-getbordercharacters"></a> ### getBorderCharacters **Parameter:** - **_template_** - Type: `'honeywell' | 'norc' | 'ramac' | 'void'` - Required: `true` You can load one of the predefined border templates using `getBorderCharacters` function. ```js import { table, getBorderCharacters } from 'table'; const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { border: getBorderCharacters(`name of the template`) }; console.log(table(data, config)); ``` ``` # honeywell ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════╧════╝ # norc ┌────┬────┬────┐ │ 0A │ 0B │ 0C │ ├────┼────┼────┤ │ 1A │ 1B │ 1C │ ├────┼────┼────┤ │ 2A │ 2B │ 2C │ └────┴────┴────┘ # ramac (ASCII; for use in terminals that do not support Unicode characters) +----+----+----+ | 0A | 0B | 0C | |----|----|----| | 1A | 1B | 1C | |----|----|----| | 2A | 2B | 2C | +----+----+----+ # void (no borders; see "borderless table" section of the documentation) 0A 0B 0C 1A 1B 1C 2A 2B 2C ``` Raise [an issue](https://github.com/gajus/table/issues) if you'd like to contribute a new border template. <a name="table-api-getbordercharacters-borderless-table"></a> #### Borderless Table Simply using `void` border character template creates a table with a lot of unnecessary spacing. To create a more pleasant to the eye table, reset the padding and remove the joining rows, e.g. ```js const output = table(data, { border: getBorderCharacters('void'), columnDefault: { paddingLeft: 0, paddingRight: 1 }, drawHorizontalLine: () => false } ); console.log(output); ``` ``` 0A 0B 0C 1A 1B 1C 2A 2B 2C ``` # type-check [![Build Status](https://travis-ci.org/gkz/type-check.png?branch=master)](https://travis-ci.org/gkz/type-check) <a name="type-check" /> `type-check` is a library which allows you to check the types of JavaScript values at runtime with a Haskell like type syntax. It is great for checking external input, for testing, or even for adding a bit of safety to your internal code. It is a major component of [levn](https://github.com/gkz/levn). MIT license. Version 0.4.0. Check out the [demo](http://gkz.github.io/type-check/). For updates on `type-check`, [follow me on twitter](https://twitter.com/gkzahariev). 
npm install type-check ## Quick Examples ```js // Basic types: var typeCheck = require('type-check').typeCheck; typeCheck('Number', 1); // true typeCheck('Number', 'str'); // false typeCheck('Error', new Error); // true typeCheck('Undefined', undefined); // true // Comment typeCheck('count::Number', 1); // true // One type OR another type: typeCheck('Number | String', 2); // true typeCheck('Number | String', 'str'); // true // Wildcard, matches all types: typeCheck('*', 2) // true // Array, all elements of a single type: typeCheck('[Number]', [1, 2, 3]); // true typeCheck('[Number]', [1, 'str', 3]); // false // Tuples, or fixed length arrays with elements of different types: typeCheck('(String, Number)', ['str', 2]); // true typeCheck('(String, Number)', ['str']); // false typeCheck('(String, Number)', ['str', 2, 5]); // false // Object properties: typeCheck('{x: Number, y: Boolean}', {x: 2, y: false}); // true typeCheck('{x: Number, y: Boolean}', {x: 2}); // false typeCheck('{x: Number, y: Maybe Boolean}', {x: 2}); // true typeCheck('{x: Number, y: Boolean}', {x: 2, y: false, z: 3}); // false typeCheck('{x: Number, y: Boolean, ...}', {x: 2, y: false, z: 3}); // true // A particular type AND object properties: typeCheck('RegExp{source: String, ...}', /re/i); // true typeCheck('RegExp{source: String, ...}', {source: 're'}); // false // Custom types: var opt = {customTypes: {Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; }}}}; typeCheck('Even', 2, opt); // true // Nested: var type = '{a: (String, [Number], {y: Array, ...}), b: Error{message: String, ...}}' typeCheck(type, {a: ['hi', [1, 2, 3], {y: [1, 'ms']}], b: new Error('oh no')}); // true ``` Check out the [type syntax format](#syntax) and [guide](#guide). ## Usage `require('type-check');` returns an object that exposes four properties. `VERSION` is the current version of the library as a string. `typeCheck`, `parseType`, and `parsedTypeCheck` are functions. ```js // typeCheck(type, input, options); typeCheck('Number', 2); // true // parseType(type); var parsedType = parseType('Number'); // object // parsedTypeCheck(parsedType, input, options); parsedTypeCheck(parsedType, 2); // true ``` ### typeCheck(type, input, options) `typeCheck` checks a JavaScript value `input` against `type` written in the [type format](#type-format) (and taking account the optional `options`) and returns whether the `input` matches the `type`. ##### arguments * type - `String` - the type written in the [type format](#type-format) which to check against * input - `*` - any JavaScript value, which is to be checked against the type * options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types) ##### returns `Boolean` - whether the input matches the type ##### example ```js typeCheck('Number', 2); // true ``` ### parseType(type) `parseType` parses string `type` written in the [type format](#type-format) into an object representing the parsed type. ##### arguments * type - `String` - the type written in the [type format](#type-format) which to parse ##### returns `Object` - an object in the parsed type format representing the parsed type ##### example ```js parseType('Number'); // [{type: 'Number'}] ``` ### parsedTypeCheck(parsedType, input, options) `parsedTypeCheck` checks a JavaScript value `input` against parsed `type` in the parsed type format (and taking account the optional `options`) and returns whether the `input` matches the `type`. 
Use this in conjunction with `parseType` if you are going to use a type more than once. ##### arguments * type - `Object` - the type in the parsed type format which to check against * input - `*` - any JavaScript value, which is to be checked against the type * options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types) ##### returns `Boolean` - whether the input matches the type ##### example ```js parsedTypeCheck([{type: 'Number'}], 2); // true var parsedType = parseType('String'); parsedTypeCheck(parsedType, 'str'); // true ``` <a name="type-format" /> ## Type Format ### Syntax White space is ignored. The root node is a __Types__. * __Identifier__ = `[\$\w]+` - a group of any lower or upper case letters, numbers, underscores, or dollar signs - eg. `String` * __Type__ = an `Identifier`, an `Identifier` followed by a `Structure`, just a `Structure`, or a wildcard `*` - eg. `String`, `Object{x: Number}`, `{x: Number}`, `Array{0: String, 1: Boolean, length: Number}`, `*` * __Types__ = optionally a comment (an `Identifier` followed by a `::`), optionally the identifier `Maybe`, one or more `Type`, separated by `|` - eg. `Number`, `String | Date`, `Maybe Number`, `Maybe Boolean | String` * __Structure__ = `Fields`, or a `Tuple`, or an `Array` - eg. `{x: Number}`, `(String, Number)`, `[Date]` * __Fields__ = a `{`, followed by one or more `Field` separated by a comma `,` (trailing comma `,` is permitted), optionally an `...` (always preceded by a comma `,`), followed by a `}` - eg. `{x: Number, y: String}`, `{k: Function, ...}` * __Field__ = an `Identifier`, followed by a colon `:`, followed by `Types` - eg. `x: Date | String`, `y: Boolean` * __Tuple__ = a `(`, followed by one or more `Types` separated by a comma `,` (trailing comma `,` is permitted), followed by a `)` - eg. `(Date)`, `(Number, Date)` * __Array__ = a `[` followed by exactly one `Types` followed by a `]` - eg. `[Boolean]`, `[Boolean | Null]` ### Guide `type-check` uses `Object.prototype.toString` to find out the basic type of a value. Specifically, ```js {}.toString.call(VALUE).slice(8, -1) {}.toString.call(true).slice(8, -1) // 'Boolean' ``` A basic type, eg. `Number`, uses this check. This is much more versatile than using `typeof` - for example, with `document`, `typeof` produces `'object'` which isn't that useful, and our technique produces `'HTMLDocument'`. You may check for multiple types by separating types with a `|`. The checker proceeds from left to right, and passes if the value is any of the types - eg. `String | Boolean` first checks if the value is a string, and then if it is a boolean. If it is none of those, then it returns false. Adding a `Maybe` in front of a list of multiple types is the same as also checking for `Null` and `Undefined` - eg. `Maybe String` is equivalent to `Undefined | Null | String`. You may add a comment to remind you of what the type is for by following an identifier with a `::` before a type (or multiple types). The comment is simply thrown out. The wildcard `*` matches all types. There are three types of structures for checking the contents of a value: 'fields', 'tuple', and 'array'. If used by itself, a 'fields' structure will pass with any type of object as long as it is an instance of `Object` and the properties pass - this allows for duck typing - eg. `{x: Boolean}`. To check if the properties pass, and the value is of a certain type, you can specify the type - eg. `Error{message: String}`.
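Restating a few of the guide's rules above as runnable checks (using the same `typeCheck` function shown in the Quick Examples):

```js
var typeCheck = require('type-check').typeCheck;

// `Maybe String` is equivalent to `Undefined | Null | String`
typeCheck('Maybe String', null);                              // true

// A bare fields structure duck-types: any Object with a boolean `x` passes
typeCheck('{x: Boolean}', {x: true});                         // true

// Prefixing the fields with a type also checks the value's type
typeCheck('Error{message: String, ...}', new Error('boom'));  // true
typeCheck('Error{message: String, ...}', {message: 'boom'});  // false

// The comment before `::` is thrown out
typeCheck('width::Number', 10);                               // true
```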
If you want to make a field optional, you can simply use `Maybe` - eg. `{x: Boolean, y: Maybe String}` will still pass if `y` is undefined (or null). If you don't care if the value has properties beyond what you have specified, you can use the 'etc' operator `...` - eg. `{x: Boolean, ...}` will match an object with an `x` property that is a boolean, and with zero or more other properties. For an array, you must specify one or more types (separated by `|`) - it will pass for something of any length as long as each element passes the types provided - eg. `[Number]`, `[Number | String]`. A tuple checks for a fixed number of elements, each of a potentially different type. Each element is separated by a comma - eg. `(String, Number)`. An array and tuple structure check that the value is of type `Array` by default, but if another type is specified, they will check for that instead - eg. `Int32Array[Number]`. You can use the wildcard `*` to search for any type at all. Check out the [type precedence](https://github.com/zaboco/type-precedence) library for type-check. ## Options Options is an object. It is an optional parameter to the `typeCheck` and `parsedTypeCheck` functions. The only current option is `customTypes`. <a name="custom-types" /> ### Custom Types __Example:__ ```js var options = { customTypes: { Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; } } } }; typeCheck('Even', 2, options); // true typeCheck('Even', 3, options); // false ``` `customTypes` allows you to set up custom types for validation. The value of this is an object. The keys of the object are the types you will be matching. Each value of the object will be an object having a `typeOf` property - a string, and `validate` property - a function. The `typeOf` property is the type the value should be (optional - if not set only `validate` will be used), and `validate` is a function which should return true if the value is of that type. `validate` receives one parameter, which is the value that we are checking. ## Technical About `type-check` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It also uses the [prelude.ls](http://preludels.com/) library. Railroad-diagram Generator ========================== This is a small js library for generating railroad diagrams (like what [JSON.org](http://json.org) uses) using SVG. Railroad diagrams are a way of visually representing a grammar in a form that is more readable than using regular expressions or BNF. I think (though I haven't given it a lot of thought yet) that if it's easy to write a context-free grammar for the language, the corresponding railroad diagram will be easy as well. There are several railroad-diagram generators out there, but none of them had the visual appeal I wanted. [Here's an example of how they look!](http://www.xanthir.com/etc/railroad-diagrams/example.html) And [here's an online generator for you to play with and get SVG code from!](http://www.xanthir.com/etc/railroad-diagrams/generator.html) The library now exists in a Python port as well! See the information further down. Details ------- To use the library, just include the js and css files, and then call the Diagram() function. Its arguments are the components of the diagram (Diagram is a special form of Sequence). An alternative to Diagram() is ComplexDiagram() which is used to describe a complex type diagram. Components are either leaves or containers. 
The leaves: * Terminal(text) or a bare string - represents literal text * NonTerminal(text) - represents an instruction or another production * Comment(text) - a comment * Skip() - an empty line The containers: * Sequence(children) - like simple concatenation in a regex * Choice(index, children) - like | in a regex. The index argument specifies which child is the "normal" choice and should go in the middle * Optional(child, skip) - like ? in a regex. A shorthand for `Choice(1, [Skip(), child])`. If the optional `skip` parameter has the value `"skip"`, it instead puts the Skip() in the straight-line path, for when the "normal" behavior is to omit the item. * OneOrMore(child, repeat) - like + in a regex. The 'repeat' argument is optional, and specifies something that must go between the repetitions. * ZeroOrMore(child, repeat, skip) - like * in a regex. A shorthand for `Optional(OneOrMore(child, repeat))`. The optional `skip` parameter is identical to Optional(). For convenience, each component can be called with or without `new`. If called without `new`, the container components become n-ary; that is, you can say either `new Sequence([A, B])` or just `Sequence(A,B)`. After constructing a Diagram, call `.format(...padding)` on it, specifying 0-4 padding values (just like CSS) for some additional "breathing space" around the diagram (the paddings default to 20px). The result can either be `.toString()`'d for the markup, or `.toSVG()`'d for an `<svg>` element, which can then be immediately inserted to the document. As a convenience, Diagram also has an `.addTo(element)` method, which immediately converts it to SVG and appends it to the referenced element with default paddings. `element` defaults to `document.body`. Options ------- There are a few options you can tweak, at the bottom of the file. Just tweak either until the diagram looks like what you want. You can also change the CSS file - feel free to tweak to your heart's content. Note, though, that if you change the text sizes in the CSS, you'll have to go adjust the metrics for the leaf nodes as well. * VERTICAL_SEPARATION - sets the minimum amount of vertical separation between two items. Note that the stroke width isn't counted when computing the separation; this shouldn't be relevant unless you have a very small separation or very large stroke width. * ARC_RADIUS - the radius of the arcs used in the branching containers like Choice. This has a relatively large effect on the size of non-trivial diagrams. Both tight and loose values look good, depending on what you're going for. * DIAGRAM_CLASS - the class set on the root `<svg>` element of each diagram, for use in the CSS stylesheet. * STROKE_ODD_PIXEL_LENGTH - the default stylesheet uses odd pixel lengths for 'stroke'. Due to rasterization artifacts, they look best when the item has been translated half a pixel in both directions. If you change the styling to use a stroke with even pixel lengths, you'll want to set this variable to `false`. * INTERNAL_ALIGNMENT - when some branches of a container are narrower than others, this determines how they're aligned in the extra space. Defaults to "center", but can be set to "left" or "right". Caveats ------- At this early stage, the generator is feature-complete and works as intended, but still has several TODOs: * The font-sizes are hard-coded right now, and the font handling in general is very dumb - I'm just guessing at some metrics that are probably "good enough" rather than measuring things properly. 
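As a rough sketch of the JS usage described above (assuming `railroad-diagrams.js` and its stylesheet are already loaded in the page, so the constructors below are in scope):

```js
// Diagram for a bracketed, comma-separated list: '[' (value (',' value)*)? ']'
var diagram = Diagram(
    Terminal('['),
    ZeroOrMore(NonTerminal('value'), Terminal(',')), // like * in a regex, with ',' between repeats
    Terminal(']')
);

// Either grab the SVG markup as a string...
// var markup = diagram.format(20).toString();

// ...or convert and append it to the page with default paddings:
diagram.addTo(document.body);
```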
Python Port ----------- In addition to the canonical JS version, the library now exists as a Python library as well. Using it is basically identical. The config variables are globals in the file, and so may be adjusted either manually or via tweaking from inside your program. The main difference from the JS port is how you extract the string from the Diagram. You'll find a `writeSvg(writerFunc)` method on `Diagram`, which takes a callback of one argument and passes it the string form of the diagram. For example, it can be used like `Diagram(...).writeSvg(sys.stdout.write)` to write to stdout. **Note**: the callback will be called multiple times as it builds up the string, not just once with the whole thing. If you need it all at once, consider something like a `StringIO` as an easy way to collect it into a single string. License ------- This document and all associated files in the github project are licensed under [CC0](http://creativecommons.org/publicdomain/zero/1.0/) ![](http://i.creativecommons.org/p/zero/1.0/80x15.png). This means you can reuse, remix, or otherwise appropriate this project for your own use **without restriction**. (The actual legal meaning can be found at the above link.) Don't ask me for permission to use any part of this project, **just use it**. I would appreciate attribution, but that is not required by the license. # jsdiff [![Build Status](https://secure.travis-ci.org/kpdecker/jsdiff.svg)](http://travis-ci.org/kpdecker/jsdiff) [![Sauce Test Status](https://saucelabs.com/buildstatus/jsdiff)](https://saucelabs.com/u/jsdiff) A javascript text differencing implementation. Based on the algorithm proposed in ["An O(ND) Difference Algorithm and its Variations" (Myers, 1986)](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6927). ## Installation ```bash npm install diff --save ``` ## API * `Diff.diffChars(oldStr, newStr[, options])` - diffs two blocks of text, comparing character by character. Returns a list of change objects (See below). Options * `ignoreCase`: `true` to ignore casing difference. Defaults to `false`. * `Diff.diffWords(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, ignoring whitespace. Returns a list of change objects (See below). Options * `ignoreCase`: Same as in `diffChars`. * `Diff.diffWordsWithSpace(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, treating whitespace as significant. Returns a list of change objects (See below). * `Diff.diffLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line. Options * `ignoreWhitespace`: `true` to ignore leading and trailing whitespace. This is the same as `diffTrimmedLines` * `newlineIsToken`: `true` to treat newline characters as separate tokens. This allows for changes to the newline structure to occur independently of the line content and to be treated as such. In general this is the more human friendly form of `diffLines` and `diffLines` is better suited for patches and other computer friendly output. Returns a list of change objects (See below). * `Diff.diffTrimmedLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line, ignoring leading and trailing whitespace. Returns a list of change objects (See below). * `Diff.diffSentences(oldStr, newStr[, options])` - diffs two blocks of text, comparing sentence by sentence. Returns a list of change objects (See below). * `Diff.diffCss(oldStr, newStr[, options])` - diffs two blocks of text, comparing CSS tokens. 
Returns a list of change objects (See below). * `Diff.diffJson(oldObj, newObj[, options])` - diffs two JSON objects, comparing the fields defined on each. The order of fields, etc does not matter in this comparison. Returns a list of change objects (See below). * `Diff.diffArrays(oldArr, newArr[, options])` - diffs two arrays, comparing each item for strict equality (===). Options * `comparator`: `function(left, right)` for custom equality checks Returns a list of change objects (See below). * `Diff.createTwoFilesPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Parameters: * `oldFileName` : String to be output in the filename section of the patch for the removals * `newFileName` : String to be output in the filename section of the patch for the additions * `oldStr` : Original string value * `newStr` : New string value * `oldHeader` : Additional information to include in the old file header * `newHeader` : Additional information to include in the new file header * `options` : An object with options. Currently, only `context` is supported and describes how many lines of context should be included. * `Diff.createPatch(fileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Just like Diff.createTwoFilesPatch, but with oldFileName being equal to newFileName. * `Diff.structuredPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options)` - returns an object with an array of hunk objects. This method is similar to createTwoFilesPatch, but returns a data structure suitable for further processing. Parameters are the same as createTwoFilesPatch. The data structure returned may look like this: ```js { oldFileName: 'oldfile', newFileName: 'newfile', oldHeader: 'header1', newHeader: 'header2', hunks: [{ oldStart: 1, oldLines: 3, newStart: 1, newLines: 3, lines: [' line2', ' line3', '-line4', '+line5', '\\ No newline at end of file'], }] } ``` * `Diff.applyPatch(source, patch[, options])` - applies a unified diff patch. Return a string containing new version of provided data. `patch` may be a string diff or the output from the `parsePatch` or `structuredPatch` methods. The optional `options` object may have the following keys: - `fuzzFactor`: Number of lines that are allowed to differ before rejecting a patch. Defaults to 0. - `compareLine(lineNumber, line, operation, patchContent)`: Callback used to compare to given lines to determine if they should be considered equal when patching. Defaults to strict equality but may be overridden to provide fuzzier comparison. Should return false if the lines should be rejected. * `Diff.applyPatches(patch, options)` - applies one or more patches. This method will iterate over the contents of the patch and apply to data provided through callbacks. The general flow for each patch index is: - `options.loadFile(index, callback)` is called. The caller should then load the contents of the file and then pass that to the `callback(err, data)` callback. Passing an `err` will terminate further patch execution. - `options.patched(index, content, callback)` is called once the patch has been applied. `content` will be the return value from `applyPatch`. When it's ready, the caller should call `callback(err)` callback. Passing an `err` will terminate further patch execution. Once all patches have been applied or an error occurs, the `options.complete(err)` callback is made. 
* `Diff.parsePatch(diffStr)` - Parses a patch into structured data. Returns a JSON object representation of the patch, suitable for use with the `applyPatch` method. This parses to the same structure returned by `Diff.structuredPatch`. * `convertChangesToXML(changes)` - converts a list of changes to a serialized XML format. All methods above which accept the optional `callback` parameter will run in sync mode when that parameter is omitted and in async mode when supplied. This allows for larger diffs without blocking the event loop. This may be passed either directly as the final parameter or as the `callback` field in the `options` object. ### Change Objects Many of the methods above return change objects. These objects consist of the following fields: * `value`: Text content * `added`: True if the value was inserted into the new string * `removed`: True if the value was removed from the old string Note that some cases may omit a particular flag field. Comparison on the flag fields should always be done in a truthy or falsy manner. ## Examples Basic example in Node ```js require('colors'); const Diff = require('diff'); const one = 'beep boop'; const other = 'beep boob blah'; const diff = Diff.diffChars(one, other); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; process.stderr.write(part.value[color]); }); console.log(); ``` Running the above program should yield <img src="images/node_example.png" alt="Node Example"> Basic example in a web page ```html <pre id="display"></pre> <script src="diff.js"></script> <script> const one = 'beep boop', other = 'beep boob blah', color = ''; let span = null; const diff = Diff.diffChars(one, other), display = document.getElementById('display'), fragment = document.createDocumentFragment(); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; span = document.createElement('span'); span.style.color = color; span.appendChild(document .createTextNode(part.value)); fragment.appendChild(span); }); display.appendChild(fragment); </script> ``` Open the above .html file in a browser and you should see <img src="images/web_example.png" alt="Web Example"> **[Full online demo](http://kpdecker.github.com/jsdiff)** ## Compatibility [![Sauce Test Status](https://saucelabs.com/browser-matrix/jsdiff.svg)](https://saucelabs.com/u/jsdiff) jsdiff supports all ES3 environments with some known issues on IE8 and below. Under these browsers some diff algorithms such as word diff and others may fail due to lack of support for capturing groups in the `split` operation. ## License See [LICENSE](https://github.com/kpdecker/jsdiff/blob/master/LICENSE). # axios // helpers The modules found in `helpers/` should be generic modules that are _not_ specific to the domain logic of axios. These modules could theoretically be published to npm on their own and consumed by other modules or apps.
Some examples of generic modules are things like: - Browser polyfills - Managing cookies - Parsing HTTP headers [![npm version](https://img.shields.io/npm/v/espree.svg)](https://www.npmjs.com/package/espree) [![Build Status](https://travis-ci.org/eslint/espree.svg?branch=master)](https://travis-ci.org/eslint/espree) [![npm downloads](https://img.shields.io/npm/dm/espree.svg)](https://www.npmjs.com/package/espree) [![Bountysource](https://www.bountysource.com/badge/tracker?tracker_id=9348450)](https://www.bountysource.com/trackers/9348450-eslint?utm_source=9348450&utm_medium=shield&utm_campaign=TRACKER_BADGE) # Espree Espree started out as a fork of [Esprima](http://esprima.org) v1.2.2, the last stable published release of Esprima before work on ECMAScript 6 began. Espree is now built on top of [Acorn](https://github.com/ternjs/acorn), which has a modular architecture that allows extension of core functionality. The goal of Espree is to produce output that is similar to Esprima with a similar API so that it can be used in place of Esprima. ## Usage Install: ``` npm i espree ``` And in your Node.js code: ```javascript const espree = require("espree"); const ast = espree.parse(code); ``` ## API ### `parse()` `parse` parses the given code and returns an abstract syntax tree (AST). It takes two parameters. - `code` [string]() - the code which needs to be parsed. - `options (Optional)` [Object]() - read more about this [here](#options). ```javascript const espree = require("espree"); const ast = espree.parse(code, options); ``` **Example :** ```js const ast = espree.parse('let foo = "bar"', { ecmaVersion: 6 }); console.log(ast); ``` <details><summary>Output</summary> <p> ``` Node { type: 'Program', start: 0, end: 15, body: [ Node { type: 'VariableDeclaration', start: 0, end: 15, declarations: [Array], kind: 'let' } ], sourceType: 'script' } ``` </p> </details> ### `tokenize()` `tokenize` returns the tokens of the given code. It takes two parameters. - `code` [string]() - the code which needs to be parsed. - `options (Optional)` [Object]() - read more about this [here](#options). Even if `options` is empty or undefined, or `options.tokens` is `false`, it is set to `true` in order to get the `tokens` array. **Example :** ```js const tokens = espree.tokenize('let foo = "bar"', { ecmaVersion: 6 }); console.log(tokens); ``` <details><summary>Output</summary> <p> ``` Token { type: 'Keyword', value: 'let', start: 0, end: 3 }, Token { type: 'Identifier', value: 'foo', start: 4, end: 7 }, Token { type: 'Punctuator', value: '=', start: 8, end: 9 }, Token { type: 'String', value: '"bar"', start: 10, end: 15 } ``` </p> </details> ### `version` Returns the current `espree` version ### `VisitorKeys` Returns all visitor keys for traversing the AST from [eslint-visitor-keys](https://github.com/eslint/eslint-visitor-keys) ### `latestEcmaVersion` Returns the latest ECMAScript version supported by `espree` ### `supportedEcmaVersions` Returns an array of all supported ECMAScript versions ## Options ```js const options = { // attach range information to each node range: false, // attach line/column location information to each node loc: false, // create a top-level comments array containing all comments comment: false, // create a top-level tokens array containing all tokens tokens: false, // Set to 3, 5 (default), 6, 7, 8, 9, 10, 11, or 12 to specify the version of ECMAScript syntax you want to use.
// You can also set to 2015 (same as 6), 2016 (same as 7), 2017 (same as 8), 2018 (same as 9), 2019 (same as 10), 2020 (same as 11), or 2021 (same as 12) to use the year-based naming. ecmaVersion: 5, // specify which type of script you're parsing ("script" or "module") sourceType: "script", // specify additional language features ecmaFeatures: { // enable JSX parsing jsx: false, // enable return in global scope globalReturn: false, // enable implied strict mode (if ecmaVersion >= 5) impliedStrict: false } } ``` ## Esprima Compatibility Going Forward The primary goal is to produce the exact same AST structure and tokens as Esprima, and that takes precedence over anything else. (The AST structure being the [ESTree](https://github.com/estree/estree) API with JSX extensions.) Separate from that, Espree may deviate from what Esprima outputs in terms of where and how comments are attached, as well as what additional information is available on AST nodes. That is to say, Espree may add more things to the AST nodes than Esprima does but the overall AST structure produced will be the same. Espree may also deviate from Esprima in the interface it exposes. ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/espree/issues). Espree is licensed under a permissive BSD 2-clause license. ## Security Policy We work hard to ensure that Espree is safe for everyone and that security issues are addressed quickly and responsibly. Read the full [security policy](https://github.com/eslint/.github/blob/master/SECURITY.md). ## Build Commands * `npm test` - run all linting and tests * `npm run lint` - run all linting * `npm run browserify` - creates a version of Espree that is usable in a browser ## Differences from Espree 2.x * The `tokenize()` method does not use `ecmaFeatures`. Any string will be tokenized completely based on ECMAScript 6 semantics. * Trailing whitespace no longer is counted as part of a node. * `let` and `const` declarations are no longer parsed by default. You must opt-in by using an `ecmaVersion` newer than `5` or setting `sourceType` to `module`. * The `esparse` and `esvalidate` binary scripts have been removed. * There is no `tolerant` option. We will investigate adding this back in the future. ## Known Incompatibilities In an effort to help those wanting to transition from other parsers to Espree, the following is a list of noteworthy incompatibilities with other parsers. These are known differences that we do not intend to change. ### Esprima 1.2.2 * Esprima counts trailing whitespace as part of each AST node while Espree does not. In Espree, the end of a node is where the last token occurs. * Espree does not parse `let` and `const` declarations by default. * Error messages returned for parsing errors are different. * There are two addition properties on every node and token: `start` and `end`. These represent the same data as `range` and are used internally by Acorn. ### Esprima 2.x * Esprima 2.x uses a different comment attachment algorithm that results in some comments being added in different places than Espree. The algorithm Espree uses is the same one used in Esprima 1.2.2. 
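To tie the options above to the behavior just described (for instance, `let`/`const` requiring a newer `ecmaVersion`, and JSX being opt-in through `ecmaFeatures`), here is a small sketch; it assumes `espree` is installed locally and only uses the documented `parse` options.

```js
const espree = require("espree");

// `let` is rejected under the default ecmaVersion (5), but parses fine once a
// newer version is requested, as noted in the differences above.
const ast = espree.parse("let foo = 1;", { ecmaVersion: 6, loc: true });
console.log(ast.body[0].kind);           // "let"
console.log(ast.body[0].loc.start.line); // 1

// JSX parsing is opt-in through ecmaFeatures.
const jsxAst = espree.parse("<App />;", {
  ecmaVersion: 2020,
  ecmaFeatures: { jsx: true }
});
console.log(jsxAst.body[0].expression.type); // "JSXElement"
```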
## Frequently Asked Questions ### Why another parser [ESLint](http://eslint.org) had been relying on Esprima as its parser from the beginning. While that was fine when the JavaScript language was evolving slowly, the pace of development increased dramatically and Esprima had fallen behind. ESLint, like many other tools reliant on Esprima, has been stuck in using new JavaScript language features until Esprima updates, and that caused our users frustration. We decided the only way for us to move forward was to create our own parser, bringing us inline with JSHint and JSLint, and allowing us to keep implementing new features as we need them. We chose to fork Esprima instead of starting from scratch in order to move as quickly as possible with a compatible API. With Espree 2.0.0, we are no longer a fork of Esprima but rather a translation layer between Acorn and Esprima syntax. This allows us to put work back into a community-supported parser (Acorn) that is continuing to grow and evolve while maintaining an Esprima-compatible parser for those utilities still built on Esprima. ### Have you tried working with Esprima? Yes. Since the start of ESLint, we've regularly filed bugs and feature requests with Esprima and will continue to do so. However, there are some different philosophies around how the projects work that need to be worked through. The initial goal was to have Espree track Esprima and eventually merge the two back together, but we ultimately decided that building on top of Acorn was a better choice due to Acorn's plugin support. ### Why don't you just use Acorn? Acorn is a great JavaScript parser that produces an AST that is compatible with Esprima. Unfortunately, ESLint relies on more than just the AST to do its job. It relies on Esprima's tokens and comment attachment features to get a complete picture of the source code. We investigated switching to Acorn, but the inconsistencies between Esprima and Acorn created too much work for a project like ESLint. We are building on top of Acorn, however, so that we can contribute back and help make Acorn even better. ### What ECMAScript features do you support? Espree supports all ECMAScript 2020 features and partially supports ECMAScript 2021 features. Because ECMAScript 2021 is still under development, we are implementing features as they are finalized. Currently, Espree supports: * [Logical Assignment Operators](https://github.com/tc39/proposal-logical-assignment) * [Numeric Separators](https://github.com/tc39/proposal-numeric-separator) See [finished-proposals.md](https://github.com/tc39/proposals/blob/master/finished-proposals.md) to know what features are finalized. ### How do you determine which experimental features to support? In general, we do not support experimental JavaScript features. We may make exceptions from time to time depending on the maturity of the features. # brace-expansion [Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html), as known from sh/bash, in JavaScript. 
[![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion) [![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion) [![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/) [![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion) ## Example ```js var expand = require('brace-expansion'); expand('file-{a,b,c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('-v{,,}') // => ['-v', '-v', '-v'] expand('file{0..2}.jpg') // => ['file0.jpg', 'file1.jpg', 'file2.jpg'] expand('file-{a..c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('file{2..0}.jpg') // => ['file2.jpg', 'file1.jpg', 'file0.jpg'] expand('file{0..4..2}.jpg') // => ['file0.jpg', 'file2.jpg', 'file4.jpg'] expand('file-{a..e..2}.jpg') // => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg'] expand('file{00..10..5}.jpg') // => ['file00.jpg', 'file05.jpg', 'file10.jpg'] expand('{{A..C},{a..c}}') // => ['A', 'B', 'C', 'a', 'b', 'c'] expand('ppp{,config,oe{,conf}}') // => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf'] ``` ## API ```js var expand = require('brace-expansion'); ``` ### var expanded = expand(str) Return an array of all possible and valid expansions of `str`. If none are found, `[str]` is returned. Valid expansions are: ```js /^(.*,)+(.+)?$/ // {a,b,...} ``` A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` A numeric sequence from `x` to `y` inclusive, with optional increment. If `x` or `y` start with a leading `0`, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` An alphabetic sequence from `x` to `y` inclusive, with optional increment. `x` and `y` must be exactly one character, and if given, `incr` must be a number. For compatibility reasons, the string `${` is not eligible for brace expansion. ## Installation With [npm](https://npmjs.org) do: ```bash npm install brace-expansion ``` ## Contributors - [Julian Gruber](https://github.com/juliangruber) - [Isaac Z. Schlueter](https://github.com/isaacs) ## Sponsors This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)! Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)! ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # isarray `Array#isArray` for older browsers. [![build status](https://secure.travis-ci.org/juliangruber/isarray.svg)](http://travis-ci.org/juliangruber/isarray) [![downloads](https://img.shields.io/npm/dm/isarray.svg)](https://www.npmjs.org/package/isarray) [![browser support](https://ci.testling.com/juliangruber/isarray.png) ](https://ci.testling.com/juliangruber/isarray) ## Usage ```js var isArray = require('isarray'); console.log(isArray([])); // => true console.log(isArray({})); // => false ``` ## Installation With [npm](http://npmjs.org) do ```bash $ npm install isarray ``` Then bundle for the browser with [browserify](https://github.com/substack/browserify). With [component](http://component.io) do ```bash $ component install juliangruber/isarray ``` ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. <p align="center"> <img width="250" src="https://raw.githubusercontent.com/yargs/yargs/master/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> ![ci](https://github.com/yargs/yargs/workflows/ci/badge.svg) [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments: ``` mocha [spec..] Run tests with Mocha Commands mocha inspect [spec..] 
Run tests with Mocha [default] mocha init <path> create a client-side Mocha setup at <path> Rules & Behavior --allow-uncaught Allow uncaught errors to propagate [boolean] --async-only, -A Require all tests to use a callback (async) or return a Promise [boolean] ``` * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage ### Simple Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') const argv = yargs(hideBin(process.argv)).argv if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') yargs(hideBin(process.argv)) .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## Supported Platforms ### TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ### Deno As of `v16`, `yargs` supports [Deno](https://github.com/denoland/deno): ```typescript import yargs from 'https://deno.land/x/yargs/deno.ts' import { Arguments } from 'https://deno.land/x/yargs/deno-types.ts' yargs(Deno.args) .command('download <files...>', 'download a list of files', (yargs: any) => { return yargs.positional('files', { describe: 'a list of files to do something with' }) }, (argv: Arguments) => { console.info(argv) }) .strictCommands() .demandCommand(1) .argv ``` ### ESM As of `v16`,`yargs` supports ESM imports: ```js import yargs from 'yargs' import { hideBin } from 'yargs/helpers' yargs(hideBin(process.argv)) .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => { console.info(argv) }) .demandCommand(1) .argv ``` ### Usage in Browser See examples of using yargs in the browser in [docs](/docs/browser.md). ## Community Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). 
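Before the full documentation below, one more hedged sketch tying the pieces above together: the "Rules & Behavior" style grouping shown in the generated help at the top of this section comes directly from the options you declare. The script, command, and option names here are illustrative only.

```js
#!/usr/bin/env node
// Hypothetical script: the command, option name, and group title are examples.
const yargs = require('yargs/yargs')
const { hideBin } = require('yargs/helpers')

yargs(hideBin(process.argv))
  .command('serve [port]', 'start the server')
  .option('allow-uncaught', {
    type: 'boolean',
    describe: 'Allow uncaught errors to propagate'
  })
  .group(['allow-uncaught'], 'Rules & Behavior')
  .demandCommand(1)
  .help()
  .argv
```

Running such a script with `--help` prints the command list followed by the grouped option, similar to the mocha help excerpt above.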
## Documentation ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Bundling yargs](/docs/bundling.md) * [Contributing](/contributing.md) ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # universal-url [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Dependency Monitor][greenkeeper-image]][greenkeeper-url] > WHATWG [`URL`](https://developer.mozilla.org/en/docs/Web/API/URL) for Node & Browser. * For Node.js versions `>= 8`, the native implementation will be used. * For Node.js versions `< 8`, a [shim](https://npmjs.com/whatwg-url) will be used. * For web browsers without a native implementation, the same shim will be used. ## Installation [Node.js](http://nodejs.org/) `>= 6` is required. To install, type this at the command line: ```shell npm install universal-url ``` ## Usage ```js const {URL, URLSearchParams} = require('universal-url'); const url = new URL('http://domain/'); const params = new URLSearchParams('?param=value'); ``` Global shim: ```js require('universal-url').shim(); const url = new URL('http://domain/'); const params = new URLSearchParams('?param=value'); ``` ## Browserify/etc The bundled file size of this library can be large for a web browser. If this is a problem, try using [universal-url-lite](https://npmjs.com/universal-url-lite) in your build as an alias for this module. [npm-image]: https://img.shields.io/npm/v/universal-url.svg [npm-url]: https://npmjs.org/package/universal-url [travis-image]: https://img.shields.io/travis/stevenvachon/universal-url.svg [travis-url]: https://travis-ci.org/stevenvachon/universal-url [greenkeeper-image]: https://badges.greenkeeper.io/stevenvachon/universal-url.svg [greenkeeper-url]: https://greenkeeper.io/ # which Like the unix `which` utility. Finds the first instance of a specified executable in the PATH environment variable. Does not cache the results, so `hash -r` is not needed when the PATH changes. 
## USAGE ```javascript var which = require('which') // async usage which('node', function (er, resolvedPath) { // er is returned if no "node" is found on the PATH // if it is found, then the absolute path to the exec is returned }) // or promise which('node').then(resolvedPath => { ... }).catch(er => { ... not found ... }) // sync usage // throws if not found var resolved = which.sync('node') // if nothrow option is used, returns null if not found resolved = which.sync('node', {nothrow: true}) // Pass options to override the PATH and PATHEXT environment vars. which('node', { path: someOtherPath }, function (er, resolved) { if (er) throw er console.log('found at %j', resolved) }) ``` ## CLI USAGE Same as the BSD `which(1)` binary. ``` usage: which [-as] program ... ``` ## OPTIONS You may pass an options object as the second argument. - `path`: Use instead of the `PATH` environment variable. - `pathExt`: Use instead of the `PATHEXT` environment variable. - `all`: Return all matches, instead of just the first one. Note that this means the function returns an array of strings instead of a single string. Overview [![Build Status](https://travis-ci.org/lydell/js-tokens.svg?branch=master)](https://travis-ci.org/lydell/js-tokens) ======== A regex that tokenizes JavaScript. ```js var jsTokens = require("js-tokens").default var jsString = "var foo=opts.foo;\n..." jsString.match(jsTokens) // ["var", " ", "foo", "=", "opts", ".", "foo", ";", "\n", ...] ``` Installation ============ `npm install js-tokens` ```js import jsTokens from "js-tokens" // or: var jsTokens = require("js-tokens").default ``` Usage ===== ### `jsTokens` ### A regex with the `g` flag that matches JavaScript tokens. The regex _always_ matches, even invalid JavaScript and the empty string. The next match is always directly after the previous. ### `var token = matchToToken(match)` ### ```js import {matchToToken} from "js-tokens" // or: var matchToToken = require("js-tokens").matchToToken ``` Takes a `match` returned by `jsTokens.exec(string)`, and returns a `{type: String, value: String}` object. The following types are available: - string - comment - regex - number - name - punctuator - whitespace - invalid Multi-line comments and strings also have a `closed` property indicating if the token was closed or not (see below). Comments and strings both come in several flavors. To distinguish them, check if the token starts with `//`, `/*`, `'`, `"` or `` ` ``. Names are ECMAScript IdentifierNames, that is, including both identifiers and keywords. You may use [is-keyword-js] to tell them apart. Whitespace includes both line terminators and other whitespace. [is-keyword-js]: https://github.com/crissdev/is-keyword-js ECMAScript support ================== The intention is to always support the latest ECMAScript version whose feature set has been finalized. If adding support for a newer version requires changes, a new version with a major verion bump will be released. Currently, ECMAScript 2018 is supported. Invalid code handling ===================== Unterminated strings are still matched as strings. JavaScript strings cannot contain (unescaped) newlines, so unterminated strings simply end at the end of the line. Unterminated template strings can contain unescaped newlines, though, so they go on to the end of input. Unterminated multi-line comments are also still matched as comments. They simply go on to the end of the input. Unterminated regex literals are likely matched as division and whatever is inside the regex. 
Invalid ASCII characters have their own capturing group. Invalid non-ASCII characters are treated as names, to simplify the matching of names (except unicode spaces which are treated as whitespace). Note: See also the [ES2018](#es2018) section. Regex literals may contain invalid regex syntax. They are still matched as regex literals. They may also contain repeated regex flags, to keep the regex simple. Strings may contain invalid escape sequences. Limitations =========== Tokenizing JavaScript using regexes—in fact, _one single regex_—won’t be perfect. But that’s not the point either. You may compare jsTokens with [esprima] by using `esprima-compare.js`. See `npm run esprima-compare`! [esprima]: http://esprima.org/ ### Template string interpolation ### Template strings are matched as single tokens, from the starting `` ` `` to the ending `` ` ``, including interpolations (whose tokens are not matched individually). Matching template string interpolations requires recursive balancing of `{` and `}`—something that JavaScript regexes cannot do. Only one level of nesting is supported. ### Division and regex literals collision ### Consider this example: ```js var g = 9.82 var number = bar / 2/g var regex = / 2/g ``` A human can easily understand that in the `number` line we’re dealing with division, and in the `regex` line we’re dealing with a regex literal. How come? Because humans can look at the whole code to put the `/` characters in context. A JavaScript regex cannot. It only sees forwards. (Well, ES2018 regexes can also look backwards. See the [ES2018](#es2018) section). When the `jsTokens` regex scans through the above, it will see the following at the end of both the `number` and `regex` rows: ```js / 2/g ``` It is then impossible to know if that is a regex literal, or part of an expression dealing with division. Here is a similar case: ```js foo /= 2/g foo(/= 2/g) ``` The first line divides the `foo` variable by `2/g`. The second line calls the `foo` function with the regex literal `/= 2/g`. Again, since `jsTokens` only sees forwards, it cannot tell the two cases apart. There are some cases where we _can_ tell division and regex literals apart, though. First off, we have the simple cases where there’s only one slash in the line: ```js var foo = 2/g foo /= 2 ``` Regex literals cannot contain newlines, so the above cases are correctly identified as division. Things are only problematic when there is more than one non-comment slash in a single line. Secondly, not every character is a valid regex flag. ```js var number = bar / 2/e ``` The above example is also correctly identified as division, because `e` is not a valid regex flag. I initially wanted to future-proof by allowing `[a-zA-Z]*` (any letter) as flags, but it is not worth it since it increases the number of ambiguous cases. So only the standard `g`, `m`, `i`, `y` and `u` flags are allowed. This means that the above example will be identified as division as long as you don’t rename the `e` variable to some permutation of `gmiyus` 1 to 6 characters long. Lastly, we can look _forward_ for information. - If the token following what looks like a regex literal is not valid after a regex literal, but is valid in a division expression, then the regex literal is treated as division instead. For example, a flagless regex cannot be followed by a string, number or name, but all of those three can be the denominator of a division.
- Generally, if what looks like a regex literal is followed by an operator, the regex literal is treated as division instead. This is because regexes are seldom used with operators (such as `+`, `*`, `&&` and `==`), but division could likely be part of such an expression. Please consult the regex source and the test cases for precise information on when regex or division is matched (should you need to know). In short, you could sum it up as: If the end of a statement looks like a regex literal (even if it isn’t), it will be treated as one. Otherwise it should work as expected (if you write sane code). ### ES2018 ### ES2018 added some nice regex improvements to the language. - [Unicode property escapes] should allow telling names and invalid non-ASCII characters apart without blowing up the regex size. - [Lookbehind assertions] should allow telling division and regex literals apart in more cases. - [Named capture groups] might simplify some things. These things would be nice to do, but are not critical. They probably have to wait until the oldest maintained Node.js LTS release supports those features. [Unicode property escapes]: http://2ality.com/2017/07/regexp-unicode-property-escapes.html [Lookbehind assertions]: http://2ality.com/2017/05/regexp-lookbehind-assertions.html [Named capture groups]: http://2ality.com/2017/05/regexp-named-capture-groups.html License ======= [MIT](LICENSE). # Web IDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [Web IDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). The goal is that you should be able to write code like ```js "use strict"; const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a Web IDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different Web IDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the Web IDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the Web IDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). Each method also accepts a second, optional, parameter for miscellaneous options. For conversion methods that throw errors, a string option `{ context }` may be supplied to provide more information in the error message. (For example, `conversions["float"](NaN, { context: "Argument 1 of Interface's operation" })` will throw an error with message `"Argument 1 of Interface's operation is not a finite floating-point value."`) Specific conversions may also accept other options, the details of which can be found below.
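A short sketch of the API just described; the context string is arbitrary, and the expected outputs follow the Web IDL conversion rules and the error-message format quoted above.

```js
"use strict";
const conversions = require("webidl-conversions");

console.log(conversions["boolean"](0));         // false
console.log(conversions["unsigned long"](3.7)); // 3 (truncated per the integer conversion rules)

// The optional { context } option shows up in thrown error messages.
try {
  conversions["float"](NaN, { context: "Argument 1 of doStuff" });
} catch (e) {
  console.log(e.message); // "Argument 1 of doStuff is not a finite floating-point value."
}
```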
## Conversions implemented Conversions for all of the basic types from the Web IDL specification are implemented: - [`any`](https://heycam.github.io/webidl/#es-any) - [`void`](https://heycam.github.io/webidl/#es-void) - [`boolean`](https://heycam.github.io/webidl/#es-boolean) - [Integer types](https://heycam.github.io/webidl/#es-integer-types), which can additionally be provided the boolean options `{ clamp, enforceRange }` as a second parameter - [`float`](https://heycam.github.io/webidl/#es-float), [`unrestricted float`](https://heycam.github.io/webidl/#es-unrestricted-float) - [`double`](https://heycam.github.io/webidl/#es-double), [`unrestricted double`](https://heycam.github.io/webidl/#es-unrestricted-double) - [`DOMString`](https://heycam.github.io/webidl/#es-DOMString), which can additionally be provided the boolean option `{ treatNullAsEmptyString }` as a second parameter - [`ByteString`](https://heycam.github.io/webidl/#es-ByteString), [`USVString`](https://heycam.github.io/webidl/#es-USVString) - [`object`](https://heycam.github.io/webidl/#es-object) - [`Error`](https://heycam.github.io/webidl/#es-Error) - [Buffer source types](https://heycam.github.io/webidl/#es-buffer-source-types) Additionally, for convenience, the following derived type definitions are implemented: - [`ArrayBufferView`](https://heycam.github.io/webidl/#ArrayBufferView) - [`BufferSource`](https://heycam.github.io/webidl/#BufferSource) - [`DOMTimeStamp`](https://heycam.github.io/webidl/#DOMTimeStamp) - [`Function`](https://heycam.github.io/webidl/#Function) - [`VoidFunction`](https://heycam.github.io/webidl/#VoidFunction) (although it will not censor the return type) Derived types, such as nullable types, promise types, sequences, records, etc., are not handled by this library. You may wish to investigate the [webidl2js](https://github.com/jsdom/webidl2js) project. ### A note on the `long long` types The `long long` and `unsigned long long` Web IDL types can hold values that cannot be stored in JavaScript numbers, so the conversion is imperfect. For example, converting the JavaScript number `18446744073709552000` to a Web IDL `long long` is supposed to produce the Web IDL value `-18446744073709551232`. Since we are representing our Web IDL values in JavaScript, we can't represent `-18446744073709551232`, so the best we could do is instead output `-18446744073709552000`. This library actually doesn't even get that far. Producing those results would require doing accurate modular arithmetic on 64-bit intermediate values, but JavaScript does not make this easy. We could pull in a big-integer library as a dependency, but in lieu of that, for now we have decided to just produce inaccurate results if you pass in numbers that are not strictly between `Number.MIN_SAFE_INTEGER` and `Number.MAX_SAFE_INTEGER`. ## Background What's actually going on here, conceptually, is pretty weird. Let's try to explain. Web IDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on Web IDL values, i.e. instances of Web IDL types. For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a Web IDL value of [Web IDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules.
Separately from its type system, Web IDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given Web IDL operation, how does that get converted into a Web IDL value? For example, a JavaScript `true` passed in the position of a Web IDL `boolean` argument becomes a Web IDL `true`. But, a JavaScript `true` passed in the position of a [Web IDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a Web IDL `1`. And so on. Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the Web IDL algorithms, they don't actually use Web IDL values, since those aren't "real" outside of specs. Instead, implementations apply the Web IDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting Web IDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is payoff of Web IDL, in a nutshell. And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given Web IDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ Web IDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ Web IDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a Web IDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't use this Seriously, why would you ever use this? You really shouldn't. Web IDL is … strange, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from Web IDL. In general, your JavaScript should not be trying to become more like Web IDL; if anything, we should fix Web IDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in Web IDL. Its main consumer is the [jsdom](https://github.com/tmpvar/jsdom) project. The AssemblyScript Runtime ========================== The runtime provides the functionality necessary to dynamically allocate and deallocate memory of objects, arrays and buffers, as well as collect garbage that is no longer used. The current implementation is either a Two-Color Mark & Sweep (TCMS) garbage collector that must be called manually when the execution stack is unwound or an Incremental Tri-Color Mark & Sweep (ITCMS) garbage collector that is fully automated with a shadow stack, implemented on top of a Two-Level Segregate Fit (TLSF) memory manager. It's not designed to be the fastest of its kind, but intentionally focuses on simplicity and ease of integration until we can replace it with the real deal, i.e. 
Wasm GC. Interface --------- ### Garbage collector / `--exportRuntime` * **__new**(size: `usize`, id: `u32` = 0): `usize`<br /> Dynamically allocates a GC object of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. GC-allocated objects cannot be used with `__realloc` and `__free`. * **__pin**(ptr: `usize`): `usize`<br /> Pins the object pointed to by `ptr` externally so it and its directly reachable members and indirectly reachable objects do not become garbage collected. * **__unpin**(ptr: `usize`): `void`<br /> Unpins the object pointed to by `ptr` externally so it can become garbage collected. * **__collect**(): `void`<br /> Performs a full garbage collection. ### Internals * **__alloc**(size: `usize`): `usize`<br /> Dynamically allocates a chunk of memory of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. * **__realloc**(ptr: `usize`, size: `usize`): `usize`<br /> Dynamically changes the size of a chunk of memory, possibly moving it to a new address. * **__free**(ptr: `usize`): `void`<br /> Frees a dynamically allocated chunk of memory by its address. * **__renew**(ptr: `usize`, size: `usize`): `usize`<br /> Like `__realloc`, but for `__new`ed GC objects. * **__link**(parentPtr: `usize`, childPtr: `usize`, expectMultiple: `bool`): `void`<br /> Introduces a link from a parent object to a child object, i.e. upon `parent.field = child`. * **__visit**(ptr: `usize`, cookie: `u32`): `void`<br /> Concrete visitor implementation called during traversal. Cookie can be used to indicate one of multiple operations. * **__visit_globals**(cookie: `u32`): `void`<br /> Calls `__visit` on each global that is of a managed type. * **__visit_members**(ptr: `usize`, cookie: `u32`): `void`<br /> Calls `__visit` on each member of the object pointed to by `ptr`. * **__typeinfo**(id: `u32`): `RTTIFlags`<br /> Obtains the runtime type information for objects with the specified runtime id. Runtime type information is a set of flags indicating whether a type is managed, an array or similar, and what the relevant alignments when creating an instance externally are etc. * **__instanceof**(ptr: `usize`, classId: `u32`): `bool`<br /> Tests if the object pointed to by `ptr` is an instance of the specified class id. ITCMS / `--runtime incremental` ----- The Incremental Tri-Color Mark & Sweep garbage collector maintains a separate shadow stack of managed values in the background to achieve full automation. Maintaining another stack introduces some overhead compared to the simpler Two-Color Mark & Sweep garbage collector, but makes it independent of whether the execution stack is unwound or not when it is invoked, so the garbage collector can run interleaved with the program. There are several constants one can experiment with to tweak ITCMS's automation: * `--use ASC_GC_GRANULARITY=1024`<br /> How often to interrupt. The default of 1024 means "interrupt each 1024 bytes allocated". * `--use ASC_GC_STEPFACTOR=200`<br /> How long to interrupt. The default of 200% means "run at double the speed of allocations". * `--use ASC_GC_IDLEFACTOR=200`<br /> How long to idle. The default of 200% means "wait for memory to double before kicking in again". * `--use ASC_GC_MARKCOST=1`<br /> How costly it is to mark one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. * `--use ASC_GC_SWEEPCOST=10`<br /> How costly it is to sweep one object. 
Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. TCMS / `--runtime minimal` ---- If automation and low pause times aren't strictly necessary, using the Two-Color Mark & Sweep garbage collector instead by invoking collection manually at appropriate times when the execution stack is unwound may be more performant as it simpler and has less overhead. The execution stack is typically unwound when invoking the collector externally, at a place that is not indirectly called from Wasm. STUB / `--runtime stub` ---- The stub is a maximally minimal runtime substitute, consisting of a simple and fast bump allocator with no means of freeing up memory again, except when freeing the respective most recently allocated object on top of the bump. Useful where memory is not a concern, and/or where it is sufficient to destroy the whole module including any potential garbage after execution. See also: [Garbage collection](https://www.assemblyscript.org/garbage-collection.html) [![npm version](https://img.shields.io/npm/v/eslint.svg)](https://www.npmjs.com/package/eslint) [![Downloads](https://img.shields.io/npm/dm/eslint.svg)](https://www.npmjs.com/package/eslint) [![Build Status](https://github.com/eslint/eslint/workflows/CI/badge.svg)](https://github.com/eslint/eslint/actions) [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint.svg?type=shield)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint?ref=badge_shield) <br /> [![Open Collective Backers](https://img.shields.io/opencollective/backers/eslint)](https://opencollective.com/eslint) [![Open Collective Sponsors](https://img.shields.io/opencollective/sponsors/eslint)](https://opencollective.com/eslint) [![Follow us on Twitter](https://img.shields.io/twitter/follow/geteslint?label=Follow&style=social)](https://twitter.com/intent/user?screen_name=geteslint) # ESLint [Website](https://eslint.org) | [Configuring](https://eslint.org/docs/user-guide/configuring) | [Rules](https://eslint.org/docs/rules/) | [Contributing](https://eslint.org/docs/developer-guide/contributing) | [Reporting Bugs](https://eslint.org/docs/developer-guide/contributing/reporting-bugs) | [Code of Conduct](https://eslint.org/conduct) | [Twitter](https://twitter.com/geteslint) | [Mailing List](https://groups.google.com/group/eslint) | [Chat Room](https://eslint.org/chat) ESLint is a tool for identifying and reporting on patterns found in ECMAScript/JavaScript code. In many ways, it is similar to JSLint and JSHint with a few exceptions: * ESLint uses [Espree](https://github.com/eslint/espree) for JavaScript parsing. * ESLint uses an AST to evaluate patterns in code. * ESLint is completely pluggable, every single rule is a plugin and you can add more at runtime. ## Table of Contents 1. [Installation and Usage](#installation-and-usage) 2. [Configuration](#configuration) 3. [Code of Conduct](#code-of-conduct) 4. [Filing Issues](#filing-issues) 5. [Frequently Asked Questions](#faq) 6. [Releases](#releases) 7. [Security Policy](#security-policy) 8. [Semantic Versioning Policy](#semantic-versioning-policy) 9. [Stylistic Rule Updates](#stylistic-rule-updates) 10. [License](#license) 11. [Team](#team) 12. [Sponsors](#sponsors) 13. [Technology Sponsors](#technology-sponsors) ## <a name="installation-and-usage"></a>Installation and Usage Prerequisites: [Node.js](https://nodejs.org/) (`^10.12.0`, or `>=12.0.0`) built with SSL support. (If you are using an official Node.js distribution, SSL is always built in.) 
You can install ESLint using npm: ``` $ npm install eslint --save-dev ``` You should then set up a configuration file: ``` $ ./node_modules/.bin/eslint --init ``` After that, you can run ESLint on any file or directory like this: ``` $ ./node_modules/.bin/eslint yourfile.js ``` ## <a name="configuration"></a>Configuration After running `eslint --init`, you'll have a `.eslintrc` file in your directory. In it, you'll see some rules configured like this: ```json { "rules": { "semi": ["error", "always"], "quotes": ["error", "double"] } } ``` The names `"semi"` and `"quotes"` are the names of [rules](https://eslint.org/docs/rules) in ESLint. The first value is the error level of the rule and can be one of these values: * `"off"` or `0` - turn the rule off * `"warn"` or `1` - turn the rule on as a warning (doesn't affect exit code) * `"error"` or `2` - turn the rule on as an error (exit code will be 1) The three error levels allow you fine-grained control over how ESLint applies rules (for more configuration options and details, see the [configuration docs](https://eslint.org/docs/user-guide/configuring)). ## <a name="code-of-conduct"></a>Code of Conduct ESLint adheres to the [JS Foundation Code of Conduct](https://eslint.org/conduct). ## <a name="filing-issues"></a>Filing Issues Before filing an issue, please be sure to read the guidelines for what you're reporting: * [Bug Report](https://eslint.org/docs/developer-guide/contributing/reporting-bugs) * [Propose a New Rule](https://eslint.org/docs/developer-guide/contributing/new-rules) * [Proposing a Rule Change](https://eslint.org/docs/developer-guide/contributing/rule-changes) * [Request a Change](https://eslint.org/docs/developer-guide/contributing/changes) ## <a name="faq"></a>Frequently Asked Questions ### I'm using JSCS, should I migrate to ESLint? Yes. [JSCS has reached end of life](https://eslint.org/blog/2016/07/jscs-end-of-life) and is no longer supported. We have prepared a [migration guide](https://eslint.org/docs/user-guide/migrating-from-jscs) to help you convert your JSCS settings to an ESLint configuration. We are now at or near 100% compatibility with JSCS. If you try ESLint and believe we are not yet compatible with a JSCS rule/configuration, please create an issue (mentioning that it is a JSCS compatibility issue) and we will evaluate it as per our normal process. ### Does Prettier replace ESLint? No, ESLint does both traditional linting (looking for problematic patterns) and style checking (enforcement of conventions). You can use ESLint for everything, or you can combine both using Prettier to format your code and ESLint to catch possible errors. ### Why can't ESLint find my plugins? * Make sure your plugins (and ESLint) are both in your project's `package.json` as devDependencies (or dependencies, if your project uses ESLint at runtime). * Make sure you have run `npm install` and all your dependencies are installed. * Make sure your plugins' peerDependencies have been installed as well. You can use `npm view eslint-plugin-myplugin peerDependencies` to see what peer dependencies `eslint-plugin-myplugin` has. ### Does ESLint support JSX? Yes, ESLint natively supports parsing JSX syntax (this must be enabled in [configuration](https://eslint.org/docs/user-guide/configuring)). Please note that supporting JSX syntax *is not* the same as supporting React. React applies specific semantics to JSX syntax that ESLint doesn't recognize. 
We recommend using [eslint-plugin-react](https://www.npmjs.com/package/eslint-plugin-react) if you are using React and want React semantics. ### What ECMAScript versions does ESLint support? ESLint has full support for ECMAScript 3, 5 (default), 2015, 2016, 2017, 2018, 2019, and 2020. You can set your desired ECMAScript syntax (and other settings, like global variables or your target environments) through [configuration](https://eslint.org/docs/user-guide/configuring). ### What about experimental features? ESLint's parser only officially supports the latest final ECMAScript standard. We will make changes to core rules in order to avoid crashes on stage 3 ECMAScript syntax proposals (as long as they are implemented using the correct experimental ESTree syntax). We may make changes to core rules to better work with language extensions (such as JSX, Flow, and TypeScript) on a case-by-case basis. In other cases (including if rules need to warn on more or fewer cases due to new syntax, rather than just not crashing), we recommend you use other parsers and/or rule plugins. If you are using Babel, you can use the [babel-eslint](https://github.com/babel/babel-eslint) parser and [eslint-plugin-babel](https://github.com/babel/eslint-plugin-babel) to use any option available in Babel. Once a language feature has been adopted into the ECMAScript standard (stage 4 according to the [TC39 process](https://tc39.github.io/process-document/)), we will accept issues and pull requests related to the new feature, subject to our [contributing guidelines](https://eslint.org/docs/developer-guide/contributing). Until then, please use the appropriate parser and plugin(s) for your experimental feature. ### Where to ask for help? Join our [Mailing List](https://groups.google.com/group/eslint) or [Chatroom](https://eslint.org/chat). ### Why doesn't ESLint lock dependency versions? Lock files like `package-lock.json` are helpful for deployed applications. They ensure that dependencies are consistent between environments and across deployments. Packages like `eslint` that get published to the npm registry do not include lock files. `npm install eslint` as a user will respect version constraints in ESLint's `package.json`. ESLint and its dependencies will be included in the user's lock file if one exists, but ESLint's own lock file would not be used. We intentionally don't lock dependency versions so that we have the latest compatible dependency versions in development and CI that our users get when installing ESLint in a project. The Twilio blog has a [deeper dive](https://www.twilio.com/blog/lockfiles-nodejs) to learn more. ## <a name="releases"></a>Releases We have scheduled releases every two weeks on Friday or Saturday. You can follow a [release issue](https://github.com/eslint/eslint/issues?q=is%3Aopen+is%3Aissue+label%3Arelease) for updates about the scheduling of any particular release. ## <a name="security-policy"></a>Security Policy ESLint takes security seriously. We work hard to ensure that ESLint is safe for everyone and that security issues are addressed quickly and responsibly. Read the full [security policy](https://github.com/eslint/.github/blob/master/SECURITY.md). ## <a name="semantic-versioning-policy"></a>Semantic Versioning Policy ESLint follows [semantic versioning](https://semver.org). However, due to the nature of ESLint as a code quality tool, it's not always clear when a minor or major version bump occurs. 
To help clarify this for everyone, we've defined the following semantic versioning policy for ESLint: * Patch release (intended to not break your lint build) * A bug fix in a rule that results in ESLint reporting fewer linting errors. * A bug fix to the CLI or core (including formatters). * Improvements to documentation. * Non-user-facing changes such as refactoring code, adding, deleting, or modifying tests, and increasing test coverage. * Re-releasing after a failed release (i.e., publishing a release that doesn't work for anyone). * Minor release (might break your lint build) * A bug fix in a rule that results in ESLint reporting more linting errors. * A new rule is created. * A new option to an existing rule that does not result in ESLint reporting more linting errors by default. * A new addition to an existing rule to support a newly-added language feature (within the last 12 months) that will result in ESLint reporting more linting errors by default. * An existing rule is deprecated. * A new CLI capability is created. * New capabilities to the public API are added (new classes, new methods, new arguments to existing methods, etc.). * A new formatter is created. * `eslint:recommended` is updated and will result in strictly fewer linting errors (e.g., rule removals). * Major release (likely to break your lint build) * `eslint:recommended` is updated and may result in new linting errors (e.g., rule additions, most rule option updates). * A new option to an existing rule that results in ESLint reporting more linting errors by default. * An existing formatter is removed. * Part of the public API is removed or changed in an incompatible way. The public API includes: * Rule schemas * Configuration schema * Command-line options * Node.js API * Rule, formatter, parser, plugin APIs According to our policy, any minor update may report more linting errors than the previous release (ex: from a bug fix). As such, we recommend using the tilde (`~`) in `package.json` e.g. `"eslint": "~3.1.0"` to guarantee the results of your builds. ## <a name="stylistic-rule-updates"></a>Stylistic Rule Updates Stylistic rules are frozen according to [our policy](https://eslint.org/blog/2020/05/changes-to-rules-policies) on how we evaluate new rules and rule changes. This means: * **Bug fixes**: We will still fix bugs in stylistic rules. * **New ECMAScript features**: We will also make sure stylistic rules are compatible with new ECMAScript features. * **New options**: We will **not** add any new options to stylistic rules unless an option is the only way to fix a bug or support a newly-added ECMAScript feature. ## <a name="license"></a>License [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint.svg?type=large)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint?ref=badge_large) ## <a name="team"></a>Team These folks keep the project moving and are resources for help. <!-- NOTE: This section is autogenerated. Do not manually edit.--> <!--teamstart--> ### Technical Steering Committee (TSC) The people who manage releases, review feature requests, and meet regularly to ensure ESLint is properly maintained. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/nzakas"> <img src="https://github.com/nzakas.png?s=75" width="75" height="75"><br /> Nicholas C. 
Zakas </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/btmills"> <img src="https://github.com/btmills.png?s=75" width="75" height="75"><br /> Brandon Mills </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/mdjermanovic"> <img src="https://github.com/mdjermanovic.png?s=75" width="75" height="75"><br /> Milos Djermanovic </a> </td></tr></tbody></table> ### Reviewers The people who review and implement new features. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/mysticatea"> <img src="https://github.com/mysticatea.png?s=75" width="75" height="75"><br /> Toru Nagashima </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/aladdin-add"> <img src="https://github.com/aladdin-add.png?s=75" width="75" height="75"><br /> 薛定谔的猫 </a> </td></tr></tbody></table> ### Committers The people who review and fix bugs and help triage issues. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/brettz9"> <img src="https://github.com/brettz9.png?s=75" width="75" height="75"><br /> Brett Zamir </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/bmish"> <img src="https://github.com/bmish.png?s=75" width="75" height="75"><br /> Bryan Mishkin </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/g-plane"> <img src="https://github.com/g-plane.png?s=75" width="75" height="75"><br /> Pig Fang </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/anikethsaha"> <img src="https://github.com/anikethsaha.png?s=75" width="75" height="75"><br /> Anix </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/yeonjuan"> <img src="https://github.com/yeonjuan.png?s=75" width="75" height="75"><br /> YeonJuan </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/snitin315"> <img src="https://github.com/snitin315.png?s=75" width="75" height="75"><br /> Nitin Kumar </a> </td></tr></tbody></table> <!--teamend--> ## <a name="sponsors"></a>Sponsors The following companies, organizations, and individuals support ESLint's ongoing maintenance and development. [Become a Sponsor](https://opencollective.com/eslint) to get your logo on our README and website. <!-- NOTE: This section is autogenerated. 
Do not manually edit.--> <!--sponsorsstart--> <h3>Platinum Sponsors</h3> <p><a href="https://automattic.com"><img src="https://images.opencollective.com/photomatt/d0ef3e1/logo.png" alt="Automattic" height="undefined"></a></p><h3>Gold Sponsors</h3> <p><a href="https://nx.dev"><img src="https://images.opencollective.com/nx/0efbe42/logo.png" alt="Nx (by Nrwl)" height="96"></a> <a href="https://google.com/chrome"><img src="https://images.opencollective.com/chrome/dc55bd4/logo.png" alt="Chrome's Web Framework & Tools Performance Fund" height="96"></a> <a href="https://www.salesforce.com"><img src="https://images.opencollective.com/salesforce/ca8f997/logo.png" alt="Salesforce" height="96"></a> <a href="https://www.airbnb.com/"><img src="https://images.opencollective.com/airbnb/d327d66/logo.png" alt="Airbnb" height="96"></a> <a href="https://coinbase.com"><img src="https://avatars.githubusercontent.com/u/1885080?v=4" alt="Coinbase" height="96"></a> <a href="https://substack.com/"><img src="https://avatars.githubusercontent.com/u/53023767?v=4" alt="Substack" height="96"></a></p><h3>Silver Sponsors</h3> <p><a href="https://retool.com/"><img src="https://images.opencollective.com/retool/98ea68e/logo.png" alt="Retool" height="64"></a> <a href="https://liftoff.io/"><img src="https://images.opencollective.com/liftoff/5c4fa84/logo.png" alt="Liftoff" height="64"></a></p><h3>Bronze Sponsors</h3> <p><a href="https://www.crosswordsolver.org/anagram-solver/"><img src="https://images.opencollective.com/anagram-solver/2666271/logo.png" alt="Anagram Solver" height="32"></a> <a href="null"><img src="https://images.opencollective.com/bugsnag-stability-monitoring/c2cef36/logo.png" alt="Bugsnag Stability Monitoring" height="32"></a> <a href="https://mixpanel.com"><img src="https://images.opencollective.com/mixpanel/cd682f7/logo.png" alt="Mixpanel" height="32"></a> <a href="https://www.vpsserver.com"><img src="https://images.opencollective.com/vpsservercom/logo.png" alt="VPS Server" height="32"></a> <a href="https://icons8.com"><img src="https://images.opencollective.com/icons8/7fa1641/logo.png" alt="Icons8: free icons, photos, illustrations, and music" height="32"></a> <a href="https://discord.com"><img src="https://images.opencollective.com/discordapp/f9645d9/logo.png" alt="Discord" height="32"></a> <a href="https://themeisle.com"><img src="https://images.opencollective.com/themeisle/d5592fe/logo.png" alt="ThemeIsle" height="32"></a> <a href="https://www.firesticktricks.com"><img src="https://images.opencollective.com/fire-stick-tricks/b8fbe2c/logo.png" alt="Fire Stick Tricks" height="32"></a> <a href="https://www.practiceignition.com"><img src="https://avatars.githubusercontent.com/u/5753491?v=4" alt="Practice Ignition" height="32"></a></p> <!--sponsorsend--> ## <a name="technology-sponsors"></a>Technology Sponsors * Site search ([eslint.org](https://eslint.org)) is sponsored by [Algolia](https://www.algolia.com) * Hosting for ([eslint.org](https://eslint.org)) is sponsored by [Netlify](https://www.netlify.com) * Password management is sponsored by [1Password](https://www.1password.com) # AssemblyScript Rtrace A tiny utility to sanitize the AssemblyScript runtime. Records allocations and frees performed by the runtime and emits an error if something is off. Also checks for leaks. Instructions ------------ Compile your module that uses the full or half runtime with `-use ASC_RTRACE=1 --explicitStart` and include an instance of this module as the import named `rtrace`. 
```js const rtrace = new Rtrace({ onerror(err, info) { // handle error }, oninfo(msg) { // print message, optional }, getMemory() { // obtain the module's memory, // e.g. with --explicitStart: return instance.exports.memory; } }); const { module, instance } = await WebAssembly.instantiate(..., rtrace.install({ ...imports... }) ); instance.exports._start(); ... if (rtrace.active) { let leakCount = rtrace.check(); if (leakCount) { // handle error } } ``` Note that references in globals which are not cleared before collection is performed appear as leaks, including their inner members. A TypedArray would leak itself and its backing ArrayBuffer in this case for example. This is perfectly normal and clearing all globals avoids this. <p align="center"> <img width="250" src="/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> [![Build Status][travis-image]][travis-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description : Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments. > <img width="400" src="/screen.png"> * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage : ### Simple Example ```javascript #!/usr/bin/env node const {argv} = require('yargs') if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node require('yargs') // eslint-disable-line .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ## Webpack See usage examples of yargs with webpack in [docs](/docs/webpack.md). ## Community : Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com).
## Documentation : ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Contributing](/contributing.md) [travis-url]: https://travis-ci.org/yargs/yargs [travis-image]: https://img.shields.io/travis/yargs/yargs/master.svg [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # Acorn A tiny, fast JavaScript parser written in JavaScript. ## Community Acorn is open source software released under an [MIT license](https://github.com/acornjs/acorn/blob/master/acorn/LICENSE). You are welcome to [report bugs](https://github.com/acornjs/acorn/issues) or create pull requests on [github](https://github.com/acornjs/acorn). For questions and discussion, please use the [Tern discussion forum](https://discuss.ternjs.net). ## Installation The easiest way to install acorn is from [`npm`](https://www.npmjs.com/): ```sh npm install acorn ``` Alternately, you can download the source and build acorn yourself: ```sh git clone https://github.com/acornjs/acorn.git cd acorn npm install ``` ## Interface **parse**`(input, options)` is the main interface to the library. The `input` parameter is a string, `options` can be undefined or an object setting some of the options listed below. The return value will be an abstract syntax tree object as specified by the [ESTree spec](https://github.com/estree/estree). ```javascript let acorn = require("acorn"); console.log(acorn.parse("1 + 1")); ``` When encountering a syntax error, the parser will raise a `SyntaxError` object with a meaningful message. The error object will have a `pos` property that indicates the string offset at which the error occurred, and a `loc` object that contains a `{line, column}` object referring to that same position. Options can be provided by passing a second argument, which should be an object containing any of these fields: - **ecmaVersion**: Indicates the ECMAScript version to parse. Must be either 3, 5, 6 (2015), 7 (2016), 8 (2017), 9 (2018), 10 (2019) or 11 (2020, partial support). This influences support for strict mode, the set of reserved words, and support for new syntax features. Default is 10. **NOTE**: Only 'stage 4' (finalized) ECMAScript features are being implemented by Acorn. Other proposed new features can be implemented through plugins. - **sourceType**: Indicate the mode the code should be parsed in. Can be either `"script"` or `"module"`. 
This influences global strict mode and parsing of `import` and `export` declarations. **NOTE**: If set to `"module"`, then static `import` / `export` syntax will be valid, even if `ecmaVersion` is less than 6. - **onInsertedSemicolon**: If given a callback, that callback will be called whenever a missing semicolon is inserted by the parser. The callback will be given the character offset of the point where the semicolon is inserted as argument, and if `locations` is on, also a `{line, column}` object representing this position. - **onTrailingComma**: Like `onInsertedSemicolon`, but for trailing commas. - **allowReserved**: If `false`, using a reserved word will generate an error. Defaults to `true` for `ecmaVersion` 3, `false` for higher versions. When given the value `"never"`, reserved words and keywords can also not be used as property names (as in Internet Explorer's old parser). - **allowReturnOutsideFunction**: By default, a return statement at the top level raises an error. Set this to `true` to accept such code. - **allowImportExportEverywhere**: By default, `import` and `export` declarations can only appear at a program's top level. Setting this option to `true` allows them anywhere where a statement is allowed. - **allowAwaitOutsideFunction**: By default, `await` expressions can only appear inside `async` functions. Setting this option to `true` allows to have top-level `await` expressions. They are still not allowed in non-`async` functions, though. - **allowHashBang**: When this is enabled (off by default), if the code starts with the characters `#!` (as in a shellscript), the first line will be treated as a comment. - **locations**: When `true`, each node has a `loc` object attached with `start` and `end` subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form. Default is `false`. - **onToken**: If a function is passed for this option, each found token will be passed in same format as tokens returned from `tokenizer().getToken()`. If array is passed, each found token is pushed to it. Note that you are not allowed to call the parser from the callback—that will corrupt its internal state. - **onComment**: If a function is passed for this option, whenever a comment is encountered the function will be called with the following parameters: - `block`: `true` if the comment is a block comment, false if it is a line comment. - `text`: The content of the comment. - `start`: Character offset of the start of the comment. - `end`: Character offset of the end of the comment. When the `locations` options is on, the `{line, column}` locations of the comment’s start and end are passed as two additional parameters. If array is passed for this option, each found comment is pushed to it as object in Esprima format: ```javascript { "type": "Line" | "Block", "value": "comment text", "start": Number, "end": Number, // If `locations` option is on: "loc": { "start": {line: Number, column: Number} "end": {line: Number, column: Number} }, // If `ranges` option is on: "range": [Number, Number] } ``` Note that you are not allowed to call the parser from the callback—that will corrupt its internal state. - **ranges**: Nodes have their start and end characters offsets recorded in `start` and `end` properties (directly on the node, rather than the `loc` object, which holds line/column data. 
To also add a [semi-standardized](https://bugzilla.mozilla.org/show_bug.cgi?id=745678) `range` property holding a `[start, end]` array with the same numbers, set the `ranges` option to `true`. - **program**: It is possible to parse multiple files into a single AST by passing the tree produced by parsing the first file as the `program` option in subsequent parses. This will add the toplevel forms of the parsed file to the "Program" (top) node of an existing parse tree. - **sourceFile**: When the `locations` option is `true`, you can pass this option to add a `source` attribute in every node’s `loc` object. Note that the contents of this option are not examined or processed in any way; you are free to use whatever format you choose. - **directSourceFile**: Like `sourceFile`, but a `sourceFile` property will be added (regardless of the `location` option) directly to the nodes, rather than the `loc` object. - **preserveParens**: If this option is `true`, parenthesized expressions are represented by (non-standard) `ParenthesizedExpression` nodes that have a single `expression` property containing the expression inside parentheses. **parseExpressionAt**`(input, offset, options)` will parse a single expression in a string, and return its AST. It will not complain if there is more of the string left after the expression. **tokenizer**`(input, options)` returns an object with a `getToken` method that can be called repeatedly to get the next token, a `{start, end, type, value}` object (with added `loc` property when the `locations` option is enabled and `range` property when the `ranges` option is enabled). When the token's type is `tokTypes.eof`, you should stop calling the method, since it will keep returning that same token forever. In ES6 environment, returned result can be used as any other protocol-compliant iterable: ```javascript for (let token of acorn.tokenizer(str)) { // iterate over the tokens } // transform code to array of tokens: var tokens = [...acorn.tokenizer(str)]; ``` **tokTypes** holds an object mapping names to the token type objects that end up in the `type` properties of tokens. **getLineInfo**`(input, offset)` can be used to get a `{line, column}` object for a given program string and offset. ### The `Parser` class Instances of the **`Parser`** class contain all the state and logic that drives a parse. It has static methods `parse`, `parseExpressionAt`, and `tokenizer` that match the top-level functions by the same name. When extending the parser with plugins, you need to call these methods on the extended version of the class. To extend a parser with plugins, you can use its static `extend` method. ```javascript var acorn = require("acorn"); var jsx = require("acorn-jsx"); var JSXParser = acorn.Parser.extend(jsx()); JSXParser.parse("foo(<bar/>)"); ``` The `extend` method takes any number of plugin values, and returns a new `Parser` class that includes the extra parser logic provided by the plugins. ## Command line interface The `bin/acorn` utility can be used to parse a file from the command line. It accepts as arguments its input file and the following options: - `--ecma3|--ecma5|--ecma6|--ecma7|--ecma8|--ecma9|--ecma10`: Sets the ECMAScript version to parse. Default is version 9. - `--module`: Sets the parsing mode to `"module"`. Is set to `"script"` otherwise. - `--locations`: Attaches a "loc" object to each node with "start" and "end" subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form. 
- `--allow-hash-bang`: If the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment. - `--compact`: No whitespace is used in the AST output. - `--silent`: Do not output the AST, just return the exit status. - `--help`: Print the usage information and quit. The utility spits out the syntax tree as JSON data. ## Existing plugins - [`acorn-jsx`](https://github.com/RReverser/acorn-jsx): Parse [Facebook JSX syntax extensions](https://github.com/facebook/jsx) Plugins for ECMAScript proposals: - [`acorn-stage3`](https://github.com/acornjs/acorn-stage3): Parse most stage 3 proposals, bundling: - [`acorn-class-fields`](https://github.com/acornjs/acorn-class-fields): Parse [class fields proposal](https://github.com/tc39/proposal-class-fields) - [`acorn-import-meta`](https://github.com/acornjs/acorn-import-meta): Parse [import.meta proposal](https://github.com/tc39/proposal-import-meta) - [`acorn-private-methods`](https://github.com/acornjs/acorn-private-methods): Parse [private methods, getters and setters proposal](https://github.com/tc39/proposal-private-methods)
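To tie the programmatic options described in the Interface section above together, here is a small sketch of a parse call that collects comments into an array and attaches location info. The snippet being parsed and the chosen option values are illustrative only, not something prescribed by Acorn's documentation.

```javascript
// Illustrative only: parse a small snippet with a few of the options described above.
const acorn = require("acorn");

const comments = [];
const ast = acorn.parse("let answer = 6 * 7; // the answer", {
  ecmaVersion: 11,      // ES2020 (partial support, per the option list above)
  sourceType: "module", // enables import/export parsing and global strict mode
  locations: true,      // attach {line, column} info to every node
  onComment: comments   // an array: each comment is pushed in Esprima-like format
});

console.log(ast.body[0].type);  // "VariableDeclaration"
console.log(comments[0].value); // " the answer"
```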
Pranav543_metabuild-hackathon-project
DEVELOPMENT.md README.md contracts Cargo.toml build.sh contract-element-token Cargo.toml src lib.rs contract-invest Cargo.toml src investment.rs lib.rs location.rs response.rs deploy.sh data-platform README.md app emmissions main.py requirements.txt docker-compose.yml docs conf.py make.bat requirements.txt setup.py src __init__.py data __init__.py make_dataset.py features __init__.py build_features.py models __init__.py predict_model.py train_model.py visualization __init__.py visualize.py test_environment.py element-app .env gatsby-config.js gatsby-node.js package.json src components connect-wallet.js footer.js getStarted index.js step1.js step2.js step3.js layout.js map chart.js detailsBox.js investedBox.js map.js tool.js message.js withdraw.js context chain.js store.js images logo.svg pages 404.js index.js utils client.js wallet.js
<h1 align="center"> Data Platform </h1> Near Metabuild Hackathon entry: real-time data apps for emission data from ESA satellites. Project Organization ------------ ├── LICENSE ├── Makefile <- Makefile with commands like `make data` or `make train` ├── README.md <- The top-level README for developers using this project. ├── data │   ├── external <- Data from third party sources. │   ├── interim <- Intermediate data that has been transformed. │   ├── processed <- The final, canonical data sets for modeling. │   └── raw <- The original, immutable data dump. │ ├── docs <- A default Sphinx project; see sphinx-doc.org for details │ ├── models <- Trained and serialized models, model predictions, or model summaries │ ├── notebooks <- Jupyter notebooks. Naming convention is a number (for ordering), │ the creator's initials, and a short `-` delimited description, e.g. │ `1.0-jqp-initial-data-exploration`. │ ├── references <- Data dictionaries, manuals, and all other explanatory materials. │ ├── reports <- Generated analysis as HTML, PDF, LaTeX, etc. │   └── figures <- Generated graphics and figures to be used in reporting │ ├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g. │ generated with `pip freeze > requirements.txt` │ ├── setup.py <- makes project pip installable (pip install -e .) so src can be imported ├── src <- Source code for use in this project. │   ├── __init__.py <- Makes src a Python module │ │ │   ├── data <- Scripts to download or generate data │   │   └── make_dataset.py │ │ │   ├── features <- Scripts to turn raw data into features for modeling │   │   └── build_features.py │ │ │   ├── models <- Scripts to train models and then use trained models to make │ │ │ predictions │   │   ├── predict_model.py │   │   └── train_model.py │ │ │   └── visualization <- Scripts to create exploratory and results oriented visualizations │   └── visualize.py │ └── tox.ini <- tox file with settings for running tox; see tox.readthedocs.io -------- # The Element Project The Element Project is an example of an environmental DeFi app with the goal of channeling the DeFi funding towards environmental goods. We had discussed many ideas related to tracking deforestation/reforestation, and those would be wonderful projects to bring to production, but given the time constraints of the hackathon, we decided on a metric that changes much more quickly... Air Quality. The goal is to make visible the invisible - in this case air pollution - and allow people to profit when they can reduce it. An investor can place ELEMENT tokens on a given area and then work/lobby/tweet to improve the air quality. Several months later, when the investment is mature, we compare the new air quality with the original quality and reward the investor with a return based on the change. If the pollution levels decrease, they get more ELEMENT back. If pollution increases, they lose ELEMENT (do not get all their initial investment back). For the prototype, we have quite a short investment duration (7 days), but any real system should configure this for several months and use a time-averaged value to encourage long-term improvements. This document provides a high-level overview of the project. You can also get information on [how to run this code](./DEVELOPMENT.md). ## Data Sources For the original data, we use [Emissions API](https://emissions-api.org/), which is based on public domain ESA (European Space Agency) satellite data.
We poll the data for the last several months for the US, and then combine it into a standard grid, so nearby measurements are grouped together. The first product measures methane, which is a very potent greenhouse gas. Further work would be to normalize the data to remove some of the noise/variation. This could be done by averaging, but we would also like to investigate whether there is a high correlation between wind/rain and pollution levels (high wind or rain removing atmospheric pollution); if so, we could compensate for this correlation by reading in historical weather information and processing the data from Emissions API before we perform some time averaging on it. Once we have "clean data" for one metric, we would like to add the other frequently updated pollution sources (carbon monoxide and ozone), preprocess them as above, and then combine them to produce a combined index that represents a more holistic view of air quality. We chose one country to limit the scale of the data we work with, and picked the USA as it is the most widely recognized country. ## Hexagons Since every measurement is at a slightly different location, we need to group them together at a reasonable spatial resolution to allow meaningful aggregation and historical data. To do so, we searched for a regular grid system that covers the planet and went with the [H3 Hexagonal Hierarchical Spatial Index](https://eng.uber.com/h3/). This can easily be adjusted to different spatial resolutions and covers the whole globe with mostly-equally sized hexagons, in contrast to e.g. the distortion of squares in the Mercator projection (a small grouping sketch follows at the end of this README). ## Frontend App The frontend app provides the user access to both the data platform and the blockchain. It queries the latest values of all grid points from the data platform, and when you click on a hexagon, it will also show a graph of the historical values of pollution. We use the NEAR test wallet to connect to NEAR's testnet. The application currently runs in a local environment; follow the instructions to test it locally. Once you have connected your wallet and filled it with tokens, you can browse the different hexagons and invest in the one(s) that you would like to clean up. We display all your current investments in the sidebar, and also allow you to withdraw any investments that have reached maturity, calculating your winnings. ## Contracts We use [Rust contracts](https://github.com/Pranav543/metabuild-hackathon-project/tree/main/contracts) deployed on NEAR to handle the blockchain side.
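As a rough illustration of the hexagon grouping described in the Hexagons section, the sketch below buckets point measurements into H3 cells and averages the values per cell. It assumes the `h3-js` package (the v4 function name `latLngToCell`) and an arbitrary resolution; the project's actual grid settings may differ.

```javascript
// Sketch: bucket point measurements into H3 hexagons and average the values per cell.
// Assumes h3-js v4 (`npm install h3-js`); resolution 5 is an arbitrary example choice.
const { latLngToCell } = require("h3-js");

function groupByHexagon(measurements, resolution = 5) {
  const cells = new Map();
  for (const { lat, lng, value } of measurements) {
    const cell = latLngToCell(lat, lng, resolution); // H3 cell index (a string)
    const entry = cells.get(cell) || { sum: 0, count: 0 };
    entry.sum += value;
    entry.count += 1;
    cells.set(cell, entry);
  }
  // Average measurement value per hexagon, ready to display or store per cell.
  return [...cells].map(([cell, { sum, count }]) => ({ cell, avg: sum / count }));
}

// Example usage with a made-up reading:
console.log(groupByHexagon([{ lat: 40.7, lng: -74.0, value: 1800 }]));
```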
laponte243_NEAvataR
README.md neavatar Cargo.toml build.sh src enumeration.rs internal.rs lib.rs metadata.rs mint.rs
<div id="top"></div> <div align="center"> <h3 align="center">NEAvataR</h3> <p align="center"> Get your unique NFT per NEAR account <br /> <a href="https://docs.near.org/docs/develop/basics/getting-started"><strong>Learn more about NEAR »</strong></a> <br /> <br /> <a href="https://www.figma.com/file/I6yP4oKgf7e96SSqXcPtnY/example.near?node-id=0%3A1" target="_blank">View Demo</a> </p> </div> <!-- TABLE OF CONTENTS --> <details> <summary>Table of contents</summary> <ol> <li> <a href="#about-the-project">About the project</a> <ul> <li><a href="#built-with">Built with</a></li> </ul> </li> <li> <a href="#getting-started">Getting started</a> <ul> <li><a href="#installation">Installation</a></li> </ul> </li> <li><a href="#roadmap">Roadmap</a></li> <li><a href="#contact">Contact</a></li> </ol> </details> <!-- ABOUT THE PROJECT --> ## About the project The idea of the project is to use blockchain technology to dynamically generate a 128 x 128 pixel image, using the NEAR account address (for example: "leyner.near") as the seed for the random generation, and to store that image on an IPFS system, thereby producing an NFT. **Economics: to be defined.** This project drew on the following work for the development of the contract: * [Near-Examples](https://github.com/near-examples/NFT/blob/master/nft/src/lib.rs) * [Paras](https://github.com/ParasHQ/paras-nft-contract) * [NFT-culturas-latinas](https://github.com/NEAR-Hispano/NFT-culturas-latinas/blob/master/blockchain/rust-contract/contract/src/lib.rs) <p align="right">(<a href="#top">back to top</a>)</p> ### Built with Below are the main libraries and languages used to build this project. * [Vue.js](https://vuejs.org/) * [Rust](https://www.rust-lang.org/) <p align="right">(<a href="#top">back to top</a>)</p> <!-- GETTING STARTED --> ## Getting started Note that to run this project you need an environment with Node.js and Rust already installed. ### Installation _Below are brief implementation and testing steps for the three components of the application, with example commands_ 1. Clone the repository ```sh git clone https://github.com/laponte243/NEAvataR.git ``` 2. Frontend * Navigate to the front folder and install the required libraries ```sh npm install ``` * Start the project ```sh npm run serve ``` <p align="right">(<a href="#top">back to top</a>)</p> <!-- ROADMAP --> ## Roadmap - [X] Conceptualization - [X] Create initial contract - [ ] Create internal tests for the contract - [ ] Initialize frontend with Vue.js and Vuetify - [ ] Create the image generation algorithm - [ ] Implement IPFS - [ ] Multi-language support - [ ] English <p align="right">(<a href="#top">back to top</a>)</p> <!-- CONTACT --> ## Contact Leyner Aponte Project Link: [https://github.com/laponte243/NEAvataR](https://github.com/laponte243/NEAvataR) <p align="right">(<a href="#top">back to top</a>)</p>
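To make the project idea concrete, here is a rough, illustrative sketch of how an image could be derived deterministically from a NEAR account id. The hash and PRNG choices below are assumptions for illustration only; per the roadmap, the project's actual image generation algorithm has not been written yet.

```javascript
// Illustrative sketch only: derive a deterministic 128x128 pixel grid from a NEAR
// account id. The real NEAvataR algorithm is not yet implemented (see Roadmap).
function hashString(s) {
  // Simple FNV-1a hash; any deterministic hash of the account id would do.
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

function generatePixels(accountId, size = 128) {
  let state = hashString(accountId) || 1;
  const next = () => {
    // xorshift32 PRNG seeded from the account id, so the output is reproducible.
    state ^= state << 13; state >>>= 0;
    state ^= state >> 17;
    state ^= state << 5; state >>>= 0;
    return state / 0xffffffff;
  };
  const pixels = [];
  for (let y = 0; y < size; y++) {
    const row = [];
    for (let x = 0; x < size; x++) {
      row.push(next() > 0.5 ? 1 : 0); // 1 = colored pixel, 0 = background
    }
    pixels.push(row);
  }
  return pixels;
}

// The same account id always yields the same image data.
const grid = generatePixels("leyner.near");
console.log(grid.length, grid[0].length); // 128 128
```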
esaminu_near-social-donations-template
.github scripts runfe.sh workflows deploy-to-console.yml readme.yml tests.yml .gitpod.yml README.md contract README.md build.sh deploy.sh package-lock.json package.json src contract.ts model.ts utils.ts tsconfig.json integration-tests package-lock.json package.json src main.ava.ts package-lock.json package.json
# Donation Contract The smart contract exposes methods to handle donating $NEAR to a `beneficiary`. ```ts @call donate() { // Get who is calling the method and how much $NEAR they attached let donor = near.predecessorAccountId(); let donationAmount: bigint = near.attachedDeposit() as bigint; let donatedSoFar = this.donations.get(donor) === null? BigInt(0) : BigInt(this.donations.get(donor) as string) let toTransfer = donationAmount; // This is the user's first donation, let's register it, which increases storage if(donatedSoFar == BigInt(0)) { assert(donationAmount > STORAGE_COST, `Attach at least ${STORAGE_COST} yoctoNEAR`); // Subtract the storage cost from the amount to transfer toTransfer -= STORAGE_COST } // Persist in storage the amount donated so far donatedSoFar += donationAmount this.donations.set(donor, donatedSoFar.toString()) // Send the money to the beneficiary const promise = near.promiseBatchCreate(this.beneficiary) near.promiseBatchActionTransfer(promise, toTransfer) // Return the total amount donated so far return donatedSoFar.toString() } ``` <br /> # Quickstart 1. Make sure you have installed [node.js](https://nodejs.org/en/download/package-manager/) >= 16. 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash npm run deploy ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` The contract will be automatically initialized with a default `beneficiary`. To initialize the contract yourself do: ```bash # Use near-cli to initialize contract (optional) near call <dev-account> init '{"beneficiary":"<account>"}' --accountId <dev-account> ``` <br /> ## 2. Get Beneficiary `beneficiary` is a read-only method (`view` method) that returns the beneficiary of the donations. `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash near view <dev-account> beneficiary ``` <br /> ## 3. Donate `donate` forwards any attached money to the `beneficiary` while keeping track of it. `donate` is a payable method which can only be invoked using a NEAR account. The account needs to attach money and pay GAS for the transaction. ```bash # Use near-cli to donate 1 NEAR near call <dev-account> donate --amount 1 --accountId <account> ``` **Tip:** If you would like to `donate` using your own account, first log in to NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`. # Donation 💸 [![](https://img.shields.io/badge/⋈%20Examples-Basics-green)](https://docs.near.org/tutorials/welcome) [![](https://img.shields.io/badge/Gitpod-Ready-orange)](https://gitpod.io/#/https://github.com/near-examples/donation-js) [![](https://img.shields.io/badge/Contract-js-yellow)](https://docs.near.org/develop/contracts/anatomy) [![](https://img.shields.io/badge/Frontend-JS-yellow)](https://docs.near.org/develop/integrate/frontend) [![Build Status](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fnear-examples%2Fdonation-js%2Fbadge&style=flat&label=Tests)](https://actions-badge.atrox.dev/near-examples/donation-js/goto) Our Donation example enables forwarding money to an account while keeping track of it.
It is one of the simplest examples of making a contract receive and send money. ![](https://docs.near.org/assets/images/donation-7cf65e5e131274fd1ae9aa34bc465bb8.png) # What This Example Shows 1. How to receive and transfer $NEAR on a contract. 2. How to divide a project into multiple modules. 3. How to handle the storage costs. 4. How to handle transaction results. 5. How to use a `Map`. <br /> # Quickstart Clone this repository locally or [**open it in gitpod**](https://gitpod.io/#/github.com/near-examples/donation-js). Then follow these steps: ### 1. Install Dependencies ```bash npm install ``` ### 2. Test the Contract Deploy your contract in a sandbox and simulate interactions from users. ```bash npm test ``` ### 3. Deploy the Contract Build the contract and deploy it to a testnet account ```bash npm run deploy ``` --- # Learn More 1. Learn more about the contract through its [README](./contract/README.md). 2. Check [**our documentation**](https://docs.near.org/develop/welcome).
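If you prefer scripting over `near-cli`, the following is a minimal sketch of calling `donate` from Node.js with `near-api-js`. The account ids are placeholders, the gas value is just a common default, and exact argument types can vary slightly between `near-api-js` versions.

```javascript
// Sketch: donate 1 NEAR to the deployed contract using near-api-js.
// <dev-account> and your-account.testnet are placeholders.
const { connect, keyStores, utils } = require("near-api-js");

async function donateOneNear() {
  const near = await connect({
    networkId: "testnet",
    nodeUrl: "https://rpc.testnet.near.org",
    // Reuses the credentials directory that `near login` writes to by default.
    keyStore: new keyStores.UnencryptedFileSystemKeyStore(`${process.env.HOME}/.near-credentials`),
  });
  const account = await near.account("your-account.testnet");

  const result = await account.functionCall({
    contractId: "<dev-account>",                         // from ./neardev/dev-account
    methodName: "donate",
    args: {},
    gas: "30000000000000",                               // 30 TGas
    attachedDeposit: utils.format.parseNearAmount("1"),  // 1 NEAR in yoctoNEAR
  });
  console.log(result.transaction.hash);
}

donateOneNear().catch(console.error);
```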
OllieMurray_subgraph-refinery-kudos-r-strawser
README.md subgraphs tenkbay debug-collection-metadata README.md src mapping.ts extinctheroes-collection src mapping.ts template-collection src mapping.ts utils README.md package copy.json package.json scripts TEMPLATES view-contract-methods.js view-nft-metadata.js view-nft-token.js view-tx-details-in-range.js extinctheroes contract-methods.js nft-metadata.js nft-token.js misfits README.md contract-methods.js nft-metadata.js mrbrownproject contract-methods.js nft-metadata.js nft-token.js tenk-dao contract-methods.js get-policy.js get-tx-results.js tx-details.js tigeracademy contract-methods.js nft-metadata.js
# **NEAR Query Tooling** Indexing mrbrownproject.near https://thegraph.com/hosted-service/subgraph/r-strawser/debug-collection-metadata?selected=playground # **NEAR Subgraphs** ## **What is a Subgraph?** Put simply, a subgraph is an index of specific events, actions, and data on the blockchain. Creating a subgraph of a contract’s events requires traversal of the entire blockchain to index, recording these events in a data store defined by a schema within the subgraph’s source code. Once complete, this data store will be queryable via GraphQL by clients or other consumers of the data. ## **Why Build a Subgraph?** Subgraphs allow clients to query on-chain information that’s functionally cached off-chain. Without a subgraph, a client who wanted to fetch the ownership data for a given token or NFT contract, for example, would need to iterate through the entire blockchain and compile the data themselves. Subgraphs enable clients to make complex queries and return results much more quickly than iterating through the blockchain, so if a client plans on requesting a certain type of data more than once, building a subgraph is probably the best path forward. ## **Three Parts of a Subgraph** ### 1. **Schema** The schema file is where you define all entities you want stored in your subgraph. These entities are defined using GraphQL, and will be used by The Graph to generate model files for use in the mapping, and corresponding data store to persist entries. For any given entity, you define it like you would any other data schema in GraphQL. For example, a simple NFT entity that maps to the [NEP-177](https://nomicon.io/Standards/NonFungibleToken/Metadata) might look something like this: ``` type NFT @entity { id: ID! # NEP-177 title: String description: String media: String media_hash: String copies: BigInt issued_at: BigInt expires_at: BigInt starts_at: BigInt updated_at: BigInt extra: String reference: String reference_hash: String } ``` You can also create more complicated types, and relationships between entities. For example, in the above GraphQL sample you might add a Contract entity and a User entity, and store information like ownership, collection metadata, etc. For more information on GraphQL Schemas, see the GraphQL documentation on [Schemas and Types](https://graphql.org/learn/schema/). For more information on schemas for subgraphs, see documentation by [The Graph](https://thegraph.com/docs/en/developer/create-subgraph-hosted/#the-graph-ql-schema). ### 2. **Manifest** The subgraph manifest is a [YAML](https://yaml.org/) file that contains the metadata required to build the subgraph, such as locations of the schema and mapping file, the entities to generate from the schema, the contract address to index, the events to monitor, and a link to the repository with the source code. The structure is standardized by [The Graph](https://thegraph.com/docs/en/developer/create-subgraph-hosted/#the-subgraph-manifest) and should be simple to fill in. An example manifest file: ``` specVersion: 0.0.4 schema: file: ./schema.graphql features: - ipfsOnEthereumContracts - fullTextSearch dataSources: - kind: near name: Mrbrownproject network: near-mainnet source: account: "mrbrownproject.near" startBlock: 59140000 mapping: apiVersion: 0.0.5 language: wasm/assemblyscript entities: - Token - User - Contract receiptHandlers: - handler: handleReceipt file: ./src/mapping.ts ``` _Note: any given manifest can have multiple entries and serve multiple subgraphs._ ### 3. 
**Mapping** The Mapping is an [assemblyscript](https://www.assemblyscript.org/introduction.html) file designated in the Manifest (generally named mapping.ts) that contains a callback that’s triggered when certain events are detected on-chain. This callback receives data for a given transaction from the NEAR blockchain, maps this data to the entities defined in the schema, and saves it to the index. Once data is stored in the index, it’s queryable via GraphQL. The Graph has some great documentation on mappings [here](https://thegraph.com/docs/en/developer/create-subgraph-hosted/#writing-mappings). ## **Building Your Own Subgraph** ### **Installing the Graph CLI** The first step in building your own subgraph is installing the global Graph CLI. You will need Node and Yarn installed on your computer - If you don’t have Node installed, we recommend you use [Node Version Manager](https://github.com/nvm-sh/nvm). This will install both Node and the Node Package Manager (npm) - If you don’t have Yarn installed, run `npm install --global yarn` _Note: If you just installed Node or Yarn, you’ll have to reset your terminal. Running the `reset` command in your terminal will handle this for you._ Once you have Node and Yarn installed, run the following command: ``` yarn global add @graphprotocol/graph-cli ``` ### **Create your hosted service** The Graph provides developer infrastructure called the Hosted Service, which will host your subgraphs for free. To begin: - go to [https://thegraph.com/hosted-service/](https://thegraph.com/hosted-service/) and log in with GitHub. - Click the “My Dashboard” button on the top of the screen, and click “Add Subgraph”. - Within the subgraph creation form, you must provide a subgraph name and subtitle, and you can choose to provide optional parameters such as a description, an image, and a GitHub URL for the subgraph repository if you’ve already set one up. - Click “Create Subgraph” ### **Create your subgraph repository and generate scaffolding** - In your terminal, create a new directory, `cd` into that directory, and run `graph init --product hosted-service <GITHUB_USER>/<SUBGRAPH_NAME>`, where `<GITHUB_USER>` is your GitHub username and `<SUBGRAPH_NAME>` is the name you gave the subgraph we created above. - Next, run `yarn install && yarn codegen`. These commands will install any required dependencies, and generate the basic data model types from the default GraphQL provided by the `graph init` command. - If those commands didn’t spit out any errors, run `yarn deploy` to deploy your first subgraph! _Note: after waiting a minute, reload your hosted service page, and you should see that indexing has begun!_ ### **Filling in your subgraph** Now that you’ve officially deployed your first subgraph to the Hosted Service, it’s time to fill in the three pieces of the subgraph that we discussed earlier: **the Schema**, **the Manifest**, and **the Mapping**. #### **Manifest** Let’s start with the Manifest. Thankfully, the `graph init` step should have filled in everything we need inside the Manifest file, so we shouldn’t have to make any changes yet. However, if you want to make your subgraph more efficient, feel free to update the `startBlock` field of your data source(s), which indicates where on-chain indexing should begin. For example, if you’re writing a subgraph for an NFT contract that just launched three days ago, you don’t need to index the entire history of the blockchain.
Instead, you only need the history of the blockchain starting three days ago because any transactions before the NFT contract was deployed won’t be relevant for the purposes of your subgraph. #### **Schema** Next, you’ll need to fill in your data schema. Like we discussed above, the schema defines the structure of your subgraph by specifying what fields your data models have and the relationships among these models. You’ll have to determine what data models best suit your use case and add them to your `schema.graphql` file. For example, a very simple token data model might look something like this: ``` type Token @entity { id: ID! ownerId: String } ``` Once you’ve done that, be sure to re-run `yarn codegen` to make sure you re-generate your data models to use in your mapping. #### **Mapping** Lastly, you need to fill in the meat of the subgraph: the mapping code. Again, the mapping is where we extract data from on-chain transactions and write it to our data store using the types generated from our schema. To do this, first you need to determine which function calls you want to monitor, and filter by those function calls. For example, if you were looking to index NFT ownership, you might want to monitor transfers of NFTs on a given contract. This would generally look something like this: ``` export function handleReceipt(receipt: near.ReceiptWithOutcome): void { const actions = receipt.receipt.actions; for (let i = 0; i < actions.length; i++) { handleAction( actions[i], receipt.receipt, receipt.block.header, receipt.outcome ); } } function handleAction( action: near.ActionValue, receipt: near.ActionReceipt, blockHeader: near.BlockHeader, outcome: near.ExecutionOutcome ): void { if (action.kind != near.ActionKind.FUNCTION_CALL) { log.info("Early return: {}", ["Not a function call"]); return; } let accounts = new Account(receipt.signerId); const functionCall = action.toFunctionCall(); if (functionCall.methodName == "nft_transfer") { ... } } ``` From there, you would parse the JSON from the outcome logs, and write the relevant data to your schema. For example, your parsing logic might look something like this: ``` if (functionCall.methodName == "nft_transfer") { const receiptId = receipt.id.toBase58(); accounts.signerId = receipt.signerId; // Maps the JSON formatted log to the LOG entity if(outcome.logs[0]!=null){ let parsed = outcome.logs[0].replace('EVENT_JSON:', '') let jsonData = json.try_fromString(parsed) if(jsonData.value != null) { const jsonObject = jsonData.value.toObject() let eventData = jsonObject.get('data') if(eventData) { let eventArray:JSONValue[] = eventData.toArray() let data = eventArray[0].toObject() const new_owner_id = data.get('new_owner_id') const tokenIds = data.get('token_ids') const ids:JSONValue[] = tokenIds.toArray() const tokenId = ids[0].toString() let token = Token.load(tokenId) if (!token) { token = new Token(tokenId) token.id = tokenId token.ownerId = new_owner_id.toString() } token.save() } } } } ``` ### **Re-deploy your subgraph** Once you’ve written your mapping code and filled in your schema, simply run `yarn deploy` again! This will re-deploy your subgraph onto the hosted service and begin the indexing job again. ### **Querying your subgraph** Once your subgraph is fully indexed, you'll be able to query it within the Hosted Service. As an example, here's a simple query we could run on a subgraph generated using the above schema and mapping.
``` { tokens(first: 5) { id ownerId } } ``` This query, as you might expect, will return the first 5 tokens, and each token data model will contain the token ID and the owner ID. After typing that into the query field within the Hosted Service, simply click the Run button and wait for the response from your subgraph. _Note: Your query will not succeed until a certain portion of the chain has been indexed. After all, before your indexing is complete, the data isn’t in your subgraph in the first place so there’s no way to query it._ Here’s a sample response for a query like the one above: ``` { "data": { "tokens": [ { "id": "100", "ownerId": "azhyel.near" }, { "id": "1001", "ownerId": "uncutgems.near" }, { "id": "1012", "ownerId": "fbdaa19a39ffce9c26c0613fee064481d3d59fd9f5a2934a982adefc5b21339c" }, { "id": "1025", "ownerId": "2f18219d1d7ca1638526ecde288a7a935266aaa6264a7c05ac00c18310c2f0d8" }, { "id": "103", "ownerId": "enderr.near" } ] } } ``` GraphQL also enables you to make much more complex queries than this, such as sorting, filtering, nested queries, etc. For more information about that, see the GraphQL documentation. That’s it! You’ve officially deployed and queried your subgraph.
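If you'd rather query the subgraph from a script than from the Hosted Service playground, a minimal sketch using `fetch` might look like the following. The subgraph path is a placeholder, and the `orderBy`/`where` arguments are just examples of the sorting and filtering mentioned above (they only work if the corresponding fields exist in your schema).

```javascript
// Sketch: query a hosted-service subgraph from Node.js (>= 18, which has a global fetch).
// Replace <GITHUB_USER>/<SUBGRAPH_NAME> with your own subgraph path.
const endpoint = "https://api.thegraph.com/subgraphs/name/<GITHUB_USER>/<SUBGRAPH_NAME>";

const query = `
  {
    tokens(first: 5, orderBy: id, where: { ownerId: "azhyel.near" }) {
      id
      ownerId
    }
  }
`;

async function run() {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  console.log(data.tokens);
}

run().catch(console.error);
```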
near_near-hat
.github ISSUE_TEMPLATE BOUNTY.yml Cargo.toml README.md dns.py examples distribute-contract Cargo.toml README.md build.sh src lib.rs hasura migrations default 1691364619300_init down.sql up.sql install.sh near-hat-cli Cargo.toml src main.rs near-hat Cargo.toml src client.rs containers coordinator.rs explorer_backend.rs explorer_database.rs explorer_frontend.rs explorer_indexer.rs hasura_auth.rs hasura_graphql.rs lake_indexer.rs localstack.rs mod.rs queryapi_postgres.rs redis.rs relayer.rs runner.rs sandbox.rs ctx explorer.rs lake_indexer.rs mod.rs nearcore.rs queryapi.rs relayer.rs lib.rs validator.rs start.sh tests data indexer_code.js indexer_schema.sql demo.test.js package.json testUtils.js
# NEARHat NEARHat is a NEAR Protocol local development environment. It allows you to run local development of dApps and create automated end-to-end tests from smart contracts to indexers. ![NEARHat-Logo](https://github.com/near/near-hat/assets/116191277/68326fa2-f9d9-45b4-a332-078b4733d376) Built by the Pagoda Engineers for the NEAR Ecosystem as part of the December 2023 hackathon. Currently supports local versions of: * nearcore sandbox * NEAR Lake Indexer (+ LocalStack NEAR Lake S3 bucket) * NEAR Relayer * Local NEAR Explorer * Query API Potential future support: * Local MyNearWallet * BOS dependency chain and near.org gateway * FastAuth ## One line installation: ``` $ ./install.sh ``` This will install dependencies via Homebrew and set up the local `.nearhat` domain. You need to be logged into the GitHub Container Registry (until all Docker containers are published to Docker Hub): https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry ![image](https://github.com/near/near-hat/assets/116191277/e20331ce-670f-43c2-b4aa-b152d490e328) ## Starting and stopping local environment ``` $ ./start.sh ``` ## Forking mainnet smart contracts NEARHat allows you to fork mainnet contracts and refer to them through `http://rpc.nearhat`. To fork the USDC contract (with account id `17208628f84f5d6ad33f0da3bbbeb27ffcb398eac501a31bd6ad2`) start NEARHat with the following command: ```bash RUST_BACKTRACE=1 RUST_LOG=info cargo run -p near-hat-cli -- start --contracts-to-spoon 17208628f84f5d6ad33f0da3bbbeb27ffcb398eac501a31bd6ad2 ``` # Build Guide 1. Run `./build.sh` 2. The resulting contract WASM will be at `./target/wasm32-unknown-unknown/release/distribute_contract.wasm`
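As a sketch of how a test script might point at the local environment, here is a minimal `near-api-js` connection against the `http://rpc.nearhat` endpoint mentioned above. The network id and the account id are placeholders, not values NEARHat prescribes.

```javascript
// Sketch: point near-api-js at the local NEARHat RPC instead of testnet/mainnet.
const { connect, keyStores } = require("near-api-js");

async function main() {
  const near = await connect({
    networkId: "localnet",          // placeholder id for the local sandbox
    nodeUrl: "http://rpc.nearhat",  // the local RPC exposed by NEARHat
    keyStore: new keyStores.InMemoryKeyStore(),
  });

  // View calls work without keys, e.g. against a forked ("spooned") contract.
  const account = await near.account("example.near"); // placeholder account id
  const state = await account.state();
  console.log(state);
}

main().catch(console.error);
```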
marcosbitetti_js13k-2021
README.md back .gitpod.yml README.md contract README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts main.spec.ts as_types.d.ts index.ts tsconfig.json compile.js package-lock.json package.json dist global.e50bbfba.css global.e50bbfba.js global.eca22910.css index.html index.js logo-black.3916bf24.svg logo-black.eab7a939.svg logo-white.7fec831f.svg logo-white.c927fc35.svg src.4085dabb.js src.e31bb0bc.js package.json src assets logo-black.svg logo-white.svg config.js global.css index.html index.js main.test.js utils.js wallet login index.html front README.md index.html pack index.html main.js package.json postcss.config.js r.json src index.js matrix.js shaders.js webpack.dev.js webpack.prod.js
# JS 13K contest 2021 # front ## install cd front yarn install ## run cd front yarn start and open the browser at **http://localhost:8080** ## validate cd front yarn validate game-js13k-back ================== This app was initialized with [create-near-app] Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `yarn install` 3. Run the local development server: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. Exploring The Code ================== 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2. The frontend code lives in the `/src` folder. `/src/index.html` is a great place to start exploring. Note that it loads in `/src/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `yarn run test`. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contract gets deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `yarn install`, but for best ergonomics you may want to install it globally: yarn global add near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx`. Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `game-js13k-back.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `game-js13k-back.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account game-js13k-back.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'game-js13k-back.YOUR-NAME.testnet' Step 3: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. builds & deploys smart contract to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM`, it may be related to spaces in your path. 
Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/docs/concepts/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages game-js13k-back Smart Contract ================== A [smart contract] written in [AssemblyScript] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install [Node.js] ≥ 12 Exploring The Code ================== 1. The main smart contract code lives in `assembly/index.ts`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard AssemblyScript tests using [as-pect]. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [AssemblyScript]: https://www.assemblyscript.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [as-pect]: https://www.npmjs.com/package/@as-pect/cli <img alt="Webpack Starter Basic Loo" src="https://github.com/lifenautjoe/webpack-starter-basic/blob/master/src/assets/logo-on-dark-bg.png?raw=true" width="250"> # webpack-starter-basic [![forthebadge](http://forthebadge.com/images/badges/fo-real.svg)](http://forthebadge.com)[![forthebadge](http://forthebadge.com/images/badges/built-with-love.svg)](http://forthebadge.com) [![dependencies](https://david-dm.org/lifenautjoe/webpack-starter-basic.svg)](https://david-dm.org/lifenautjoe/webpack-starter-basic) A simple **webpack 4 starter project** for your basic web development needs. Read more on the [demo website](https://lifenautjoe.github.io/webpack-starter-basic/) or continue reading below. ## Table of Contents - [Motivation](#motivation) - [Features](#features) - [Requirements](#requirements) - [Usage](#usage) - [FAQ](#faq) * [When should I use this starter?](#when-should-i-use-this-starter) * [Where's the common webpack config?](#wheres-the-common-webpack-config) * [How to load fonts](#how-to-load-fonts) * [How to load images](#how-to-load-images) + [In JavaScript](#in-javascript) + [In `index.html`](#in-indexhtml) * [How to install bootstrap 4](#how-to-install-bootstrap-4) - [Websites using this starter kit on the wild](#websites-using-this-starter-kit-on-the-wild) ## Motivation I needed to make a plain ol' "drop your mail to stay updated of ongoing developments" page. I did not need anything fancy, no frontend framework, no unit testing, simply a **starter project that would let me use sass, ES6, load assets, add vendor prefixes, start a dev server, generate sourcemaps and optimize everything for production.** I looked around and all I found were heavily specialized and complicated webpack starter projects (`webpack-angular-starter`, `webpack-react-starter`, etc) that are so intertwined with plugins that stripping undesired functionality is almost impossible. So I did this. ## Features * Separated development and production webpack settings you can understand * Sass * ES6 * Asset loading * CSS Vendor prefixing * Development server * Sourcemaps * Favicons generation * Production optimizations * Mobile browser header color ## Requirements * [Node](https://nodejs.org) > 7.6 ## Usage Substitute `PROJECT-NAME` for your project name. 
Clone the repository ```sh git clone https://github.com/lifenautjoe/webpack-starter-basic PROJECT-NAME cd PROJECT-NAME ``` Install npm dependencies ```sh npm install ``` Run the kickstart command ```sh npm run kickstart ``` **After the project has been kickstarted** To start the development server ```sh npm start ``` To build for production ```sh npm run build ``` To preview the production build ```sh npm run preview ``` ## FAQ ### When should I use this starter? You should use this starter if any of the following are true: * You want to make a static page, e.g. splash screen, onboarding screen, phaser game, threejs visualization, countdown. * You found no good starter kit for whatever you want to do and need a solid place to start from. **Please note**: If you are going to use a frontend framework like angular or react, you can of course add the required plugins and configuration, but it's normally complicated and quirky enough that it's highly recommended to use one of the existing starter projects such as [react-webpack-babel](https://github.com/alicoding/react-webpack-babel) or, for angular projects, the [angular-cli](https://github.com/angular/angular-cli). ### Where's the common webpack config? **There is none and that is a good thing.** The pattern creates unnecessary confusion over the setup; in the end the config will always be different across environments. People just put booleans everywhere in the common config to switch between these differing configuration options, which is awful to read and confusing for someone who's just starting with webpack. The only truly shared config between these files is the entry JS point and the main HTML template. ### How to load fonts Unless you need to support Opera Mini, every browser supports the .woff format. Its newer version, .woff2, is widely supported by modern browsers and can be a good alternative. If you decide to use only this format, you can load the fonts in a similar manner to images. In your `webpack.dev.js` and `webpack.prod.js`, add the following ```js module.exports = { // .. module: { rules: [ // .. { test: /\.woff$/, loader: 'url-loader', options: { // Limit at 50k. Above that it emits separate files limit: 50000, // url-loader sets mimetype if it's passed. // Without this it derives it from the file extension mimetype: 'application/font-woff', // Output below fonts directory name: './fonts/[name].[ext]', }, } // .. ] } // .. }; ``` And let's say your font is in the folder `assets` with the name `pixel.woff`. You can add it and use it in `index.scss` as ```scss @font-face { font-family: "Pixel"; src: url('./../assets/pixel.woff') format('woff'); } .body{ font-family: 'Pixel', sans-serif; } ``` If you would like to support all kinds of font types, remove the woff rule we previously added to `webpack.dev.js` and `webpack.prod.js` and add the following ```js module.exports = { // .. module: { rules: [ // .. { test: /\.(ttf|eot|woff|woff2)$/, loader: 'file-loader', options: { name: 'fonts/[name].[ext]', }, } // .. ] } // .. }; ``` And assuming you have your fonts in the directory `assets` with names `pixel.woff`, `pixel.ttf`, `pixel.eot`, etc. 
You can add it and use it in `index.scss` as ```scss @font-face { font-family: 'Pixel'; src: url('./../assets/pixel.woff2') format('woff2'), url('./../assets/pixel.woff') format('woff'), url('./../assets/pixel.eot') format('embedded-opentype'), url('./../assets/pixel.ttf') format('truetype'); /* Add other formats as you see fit */ } ``` ### How to load images #### In JavaScript You can require an image from JavaScript like ```js const myImage = require('./assets/icon.png'); ``` If the image size in bytes is smaller than `8192`, `myImage` will be a string with the encoded image path such as ``` data:image/svg+xml;base64,bW9kdWxlLmV4cG9ydHMgPSBfX3dlYnBhY2tfcHVibGljX3BhdGhfXyArICJhc3NldHMvaW1hZ2VzL3RpY2stQ3lydkhSdi5zdmciOw== ``` If the image size is larger than `8192`, it will be a string with the URL to the image, such as ``` src/assets/icon.png?hash=5b1f36bc41ab31f5b801 ``` This limit is set so that small images like icons are not loaded through a separate request. You can force the loader to always give you image URLs by doing the following, but it should not be necessary; the limit works 90% of the time. ```js const myImage = require('!!url!/assets/icon.png'); ``` #### In `index.html` If you would like to include an image in your `index.html` file, place the path of the image in a webpack require statement `<%= require(imagePath) %>`. ```html <img class="splash-title__img" src="<%= require('./src/assets/logo-on-dark-bg.png') %>" alt="webpack logo"> ``` ### How to install Bootstrap 4 **After the project has been kickstarted** Install bootstrap ````sh npm install bootstrap@4 --save ```` Install bootstrap dependencies. ````sh npm install popper.js --save npm install jquery --save ```` Replace the project `index.scss` with ````scss @import "~bootstrap/scss/bootstrap"; ```` And replace the project `index.js` with ````js require('./styles/index.scss'); import PopperJs from 'popper.js'; import jquery from 'jquery'; jquery(()=>{ console.log('Hello jQuery + bootstrap 4!'); }); ```` To see it all come together, replace the index.html body tag with ````html <body> <nav class="navbar navbar-expand-md navbar-dark bg-dark fixed-top"> <a class="navbar-brand" href="#">Navbar</a> <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarsExampleDefault" aria-controls="navbarsExampleDefault" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="collapse navbar-collapse" id="navbarsExampleDefault"> <ul class="navbar-nav mr-auto"> <li class="nav-item active"> <a class="nav-link" href="#">Home <span class="sr-only">(current)</span></a> </li> <li class="nav-item"> <a class="nav-link" href="#">Link</a> </li> <li class="nav-item"> <a class="nav-link disabled" href="#">Disabled</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="https://example.com" id="dropdown01" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">Dropdown</a> <div class="dropdown-menu" aria-labelledby="dropdown01"> <a class="dropdown-item" href="#">Action</a> <a class="dropdown-item" href="#">Another action</a> <a class="dropdown-item" href="#">Something else here</a> </div> </li> </ul> <form class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="text" placeholder="Search" aria-label="Search"> <button class="btn btn-outline-success my-2 my-sm-0" type="submit">Search</button> </form> </div> </nav> <main role="main" class="container"> <div class="starter-template"> <h1>Bootstrap starter template</h1> <p 
class="lead">Use this document as a way to quickly start any new project.<br> All you get is this text and a mostly barebones HTML document.</p> </div> </main><!-- /.container --> </body> ```` Start the development server and `voilà`. ```sh npm start ``` To build for production ```sh npm run build ``` To preview the production build ```sh npm run preview ``` ⚠️ Please remember to remove the Google Analytics tag in the `index.html` file as soon as you make the template yours. ```html <!-- Global Site Tag (gtag.js) - Google Analytics --> <script async src="https://www.googletagmanager.com/gtag/js?id=UA-101423651-2"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-101423651-2'); </script> ``` ## Websites using this starter kit on the wild * [Droppable library](https://github.com/lifenautjoe/droppable) * [Noel Event Emitter](https://github.com/lifenautjoe/noel) * [ChooseIT Wishbot](http://voeux2018.choosit.com/) * [Webpack Starter Basic](https://lifenautjoe.github.io/webpack-starter-basic/) * [Okuna](https://www.okuna.io/) Have a website online built with this starter kit and would like to add it to the list? Open an issue! ___ Author [Joel Hernandez](www.lifenautjoe.com)
frol_near_enhanced_api
Cargo.toml DB_DESIGN.md README.md src config.rs db_helpers.rs errors.rs main.rs modules coin data_provider balance.rs history.rs metadata.rs mod.rs models.rs mod.rs resources.rs schemas.rs mod.rs nft data_provider history.rs metadata.rs mod.rs models.rs nft_info.rs mod.rs resources.rs schemas.rs rpc_helpers.rs types account_id.rs mod.rs numeric.rs query_params.rs vector.rs
# NEAR Enhanced API API for providing useful information about the NEAR blockchain. Still under heavy development. ### Phase 1 goals: [development goes here now] - Provide NEAR balance information and recent history - Provide FT balance information and recent FT coin history for the contracts implementing the Events NEP - Provide NFT information and recent history for the contracts implementing the Events NEP - Provide the corresponding metadata for FT contracts, NFT contracts, and NFT items - [aspirational] Collect usage statistics which could help us to prioritize next steps Note that Phase 1 will **not** provide pagination through all the history. Phase 1 also does **not** provide information about contracts which have not been upgraded to the Events NEP. If you are interested in a more detailed development status, use `git clone` and search for `TODO PHASE 1`. See also our thoughts about the proposed [DB design](DB_DESIGN.md), which will help us to achieve the Phase 2 goals. ### Future plans. Phase 2 goals: - Provide pagination for all existing endpoints where applicable - Support contracts which have not been upgraded to the Events NEP, such as `wrap.near`, `aurora` - Add reconciliation logic - [aspirational] Show data from failed receipts in the history - [aspirational] Support MT contracts - [aspirational] Support querying the balance/history info by symbol (e.g. `GET "/accounts/{account_id}/coins/USN"`) ### Future plans. Phase 3+ goals: - Make wrappers around existing RPC endpoints for general blockchain info (blocks, chunks, transactions, etc.) ## How to run it yourself You need to create a `.env` file with 3 variables: `DATABASE_URL`, `DATABASE_URL_BALANCES`, `RPC_URL`. `DATABASE_URL_BALANCES` is a temporary solution for the new table; it's still under development. Everything else is standard for the Rust world. To modify and then review tests, use `cargo insta review`.
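Since the README lists the three variables without a template, a hypothetical `.env` along these lines (the connection strings and RPC URL are placeholders, not values from this project) would match that description:

```
DATABASE_URL=postgres://user:password@localhost:5432/indexer
DATABASE_URL_BALANCES=postgres://user:password@localhost:5432/balances
RPC_URL=https://rpc.mainnet.near.org
```

With the file in place, the usual `cargo run` workflow applies.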
mileself_NDC-Practice-2
README.md as-pect.config.js asconfig.json neardev dev-account.env node_modules @as-pect assembly README.md assembly index.ts internal Actual.ts Expectation.ts Expected.ts Reflect.ts ReflectedValueType.ts Test.ts assert.ts call.ts comparison toIncludeComparison.ts toIncludeEqualComparison.ts log.ts noOp.ts package.json types as-pect.d.ts as-pect.portable.d.ts env.d.ts cli README.md init as-pect.config.js env.d.ts example.spec.ts init-types.d.ts portable-types.d.ts lib as-pect.cli.amd.d.ts as-pect.cli.amd.js help.d.ts help.js index.d.ts index.js init.d.ts init.js portable.d.ts portable.js run.d.ts run.js test.d.ts test.js types.d.ts types.js util CommandLineArg.d.ts CommandLineArg.js IConfiguration.d.ts IConfiguration.js asciiArt.d.ts asciiArt.js collectReporter.d.ts collectReporter.js getTestEntryFiles.d.ts getTestEntryFiles.js removeFile.d.ts removeFile.js strings.d.ts strings.js writeFile.d.ts writeFile.js worklets ICommand.d.ts ICommand.js compiler.d.ts compiler.js package.json core README.md lib as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js reporter CombinationReporter.d.ts CombinationReporter.js EmptyReporter.d.ts EmptyReporter.js IReporter.d.ts IReporter.js SummaryReporter.d.ts SummaryReporter.js VerboseReporter.d.ts VerboseReporter.js test IWarning.d.ts IWarning.js TestContext.d.ts TestContext.js TestNode.d.ts TestNode.js transform assemblyscript.d.ts assemblyscript.js createAddReflectedValueKeyValuePairsMember.d.ts createAddReflectedValueKeyValuePairsMember.js createGenericTypeParameter.d.ts createGenericTypeParameter.js createStrictEqualsMember.d.ts createStrictEqualsMember.js emptyTransformer.d.ts emptyTransformer.js hash.d.ts hash.js index.d.ts index.js util IAspectExports.d.ts IAspectExports.js IWriteable.d.ts IWriteable.js ReflectedValue.d.ts ReflectedValue.js TestNodeType.d.ts TestNodeType.js rTrace.d.ts rTrace.js stringifyReflectedValue.d.ts stringifyReflectedValue.js timeDifference.d.ts timeDifference.js wasmTools.d.ts wasmTools.js package.json csv-reporter index.ts lib as-pect.csv-reporter.amd.d.ts as-pect.csv-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json json-reporter index.ts lib as-pect.json-reporter.amd.d.ts as-pect.json-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json snapshots __tests__ snapshot.spec.ts jest.config.js lib Snapshot.d.ts Snapshot.js SnapshotDiff.d.ts SnapshotDiff.js SnapshotDiffResult.d.ts SnapshotDiffResult.js as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js parser grammar.d.ts grammar.js package.json src Snapshot.ts SnapshotDiff.ts SnapshotDiffResult.ts index.ts parser grammar.ts tsconfig.json @assemblyscript loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json ansi-regex index.d.ts index.js package.json readme.md ansi-styles index.d.ts index.js package.json readme.md as-bignum .travis.yml README.md as-pect.config.js assembly __tests__ as-pect.d.ts safe_u128.spec.as.ts u128.spec.as.ts u256.spec.as.ts utils.ts fixed fp128.ts fp256.ts index.ts safe fp128.ts fp256.ts types.ts globals.ts index.ts integer i128.ts i256.ts index.ts safe i128.ts i256.ts i64.ts index.ts u128.ts u256.ts u64.ts u128.ts u256.ts tsconfig.json utils.ts index.js package.json tsconfig.json asbuild README.md dist cli.d.ts cli.js index.d.ts index.js main.d.ts main.js index.js node_modules cliui CHANGELOG.md LICENSE.txt README.md index.js package.json wrap-ansi index.js package.json readme.md y18n CHANGELOG.md README.md index.js package.json yargs-parser CHANGELOG.md 
LICENSE.txt README.md index.js lib tokenize-arg-string.js package.json yargs CHANGELOG.md README.md build lib apply-extends.d.ts apply-extends.js argsert.d.ts argsert.js command.d.ts command.js common-types.d.ts common-types.js completion-templates.d.ts completion-templates.js completion.d.ts completion.js is-promise.d.ts is-promise.js levenshtein.d.ts levenshtein.js middleware.d.ts middleware.js obj-filter.d.ts obj-filter.js parse-command.d.ts parse-command.js process-argv.d.ts process-argv.js usage.d.ts usage.js validation.d.ts validation.js yargs.d.ts yargs.js yerror.d.ts yerror.js index.js locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json yargs.js package.json assemblyscript-json .eslintrc.js .travis.yml README.md as-pect.config.js assembly JSON.ts __tests__ as-pect.d.ts json-parse.spec.ts roundtrip.spec.ts to-string.spec.ts usage.spec.ts decoder.ts encoder.ts index.ts tsconfig.json util index.ts docs README.md classes decoderstate.md json.arr.md json.bool.md json.float.md json.integer.md json.null.md json.num.md json.obj.md json.str.md json.value.md jsondecoder.md jsonencoder.md jsonhandler.md throwingjsonhandler.md modules json.md index.js package.json assemblyscript README.md cli README.md asc.d.ts asc.js asc.json shim README.md fs.js path.js process.js transform.d.ts transform.js util colors.d.ts colors.js find.d.ts find.js mkdirp.d.ts mkdirp.js options.d.ts options.js utf8.d.ts utf8.js dist asc.js assemblyscript.d.ts assemblyscript.js sdk.js index.d.ts index.js lib loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json rtrace README.md bin rtplot.js index.d.ts index.js package.json umd index.d.ts index.js package.json package-lock.json package.json std README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings Date.ts Math.ts Reflect.ts asyncify.ts console.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts polyfills.ts process.ts reference.ts regexp.ts rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts set.ts shared feature.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts util casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts vector.ts wasi index.ts portable.json portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json axios CHANGELOG.md README.md UPGRADE_GUIDE.md dist axios.js axios.min.js index.d.ts index.js lib adapters README.md http.js xhr.js axios.js cancel Cancel.js CancelToken.js isCancel.js core Axios.js InterceptorManager.js README.md buildFullPath.js createError.js dispatchRequest.js enhanceError.js mergeConfig.js settle.js transformData.js defaults.js helpers README.md bind.js buildURL.js combineURLs.js cookies.js deprecatedMethod.js isAbsoluteURL.js isURLSameOrigin.js normalizeHeaderName.js parseHeaders.js spread.js utils.js package.json balanced-match LICENSE.md README.md index.js package.json base-x LICENSE.md README.md package.json src index.d.ts index.js binary-install README.md example binary.js package.json run.js index.js package.json src 
binary.js binaryen README.md index.d.ts package-lock.json package.json wasm.d.ts bn.js CHANGELOG.md README.md lib bn.js package.json brace-expansion README.md index.js package.json bs58 CHANGELOG.md README.md index.js package.json camelcase index.d.ts index.js package.json readme.md chalk index.d.ts package.json readme.md source index.js templates.js util.js chownr README.md chownr.js package.json cliui CHANGELOG.md LICENSE.txt README.md build lib index.js string-utils.js package.json color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name README.md index.js package.json commander CHANGELOG.md Readme.md index.js package.json typings index.d.ts concat-map .travis.yml example map.js index.js package.json test map.js debug .coveralls.yml .travis.yml CHANGELOG.md README.md karma.conf.js node.js package.json src browser.js debug.js index.js node.js decamelize index.js package.json readme.md diff CONTRIBUTING.md README.md dist diff.js lib convert dmp.js xml.js diff array.js base.js character.js css.js json.js line.js sentence.js word.js index.es6.js index.js patch apply.js create.js merge.js parse.js util array.js distance-iterator.js params.js package.json release-notes.md runtime.js discontinuous-range .travis.yml README.md index.js package.json test main-test.js emoji-regex LICENSE-MIT.txt README.md es2015 index.js text.js index.d.ts index.js package.json text.js env-paths index.d.ts index.js package.json readme.md escalade dist index.js index.d.ts package.json readme.md sync index.d.ts index.js find-up index.d.ts index.js package.json readme.md follow-redirects README.md http.js https.js index.js package.json fs-minipass README.md index.js package.json fs.realpath README.md index.js old.js package.json get-caller-file LICENSE.md README.md index.d.ts index.js package.json glob README.md changelog.md common.js glob.js package.json sync.js has-flag index.d.ts index.js package.json readme.md hasurl README.md index.js package.json inflight README.md inflight.js package.json inherits README.md inherits.js inherits_browser.js package.json is-fullwidth-code-point index.d.ts index.js package.json readme.md js-base64 LICENSE.md README.md base64.d.ts base64.js package.json locate-path index.d.ts index.js package.json readme.md lodash.clonedeep README.md index.js package.json lodash.sortby README.md index.js package.json long README.md dist long.js index.js package.json src long.js minimatch README.md minimatch.js package.json minimist .travis.yml example parse.js index.js package.json test all_bool.js bool.js dash.js default_bool.js dotted.js kv_short.js long.js num.js parse.js parse_modified.js proto.js short.js stop_early.js unknown.js whitespace.js minipass README.md index.js package.json minizlib README.md constants.js index.js package.json mkdirp bin cmd.js usage.txt index.js package.json moo README.md moo.js package.json ms index.js license.md package.json readme.md near-mock-vm assembly __tests__ main.ts context.ts index.ts outcome.ts vm.ts bin bin.js package.json pkg near_mock_vm.d.ts near_mock_vm.js package.json vm dist cli.d.ts cli.js context.d.ts context.js index.d.ts index.js memory.d.ts memory.js runner.d.ts runner.js utils.d.ts utils.js index.js near-sdk-as as-pect.config.js as_types.d.ts asconfig.json asp.asconfig.json assembly __tests__ as-pect.d.ts assert.spec.ts avl-tree.spec.ts bignum.spec.ts contract.spec.ts contract.ts data.txt empty.ts generic.ts includeBytes.spec.ts main.ts max-heap.spec.ts model.ts near.spec.ts persistent-set.spec.ts 
promise.spec.ts rollback.spec.ts roundtrip.spec.ts runtime.spec.ts unordered-map.spec.ts util.ts utils.spec.ts as_types.d.ts bindgen.ts index.ts json.lib.ts tsconfig.json vm __tests__ vm.include.ts index.ts compiler.js imports.js package.json near-sdk-bindgen README.md assembly index.ts compiler.js dist JSONBuilder.d.ts JSONBuilder.js classExporter.d.ts classExporter.js index.d.ts index.js transformer.d.ts transformer.js typeChecker.d.ts typeChecker.js utils.d.ts utils.js index.js package.json near-sdk-core README.md asconfig.json assembly as_types.d.ts base58.ts base64.ts bignum.ts collections avlTree.ts index.ts maxHeap.ts persistentDeque.ts persistentMap.ts persistentSet.ts persistentUnorderedMap.ts persistentVector.ts util.ts contract.ts env env.ts index.ts runtime_api.ts index.ts logging.ts math.ts promise.ts storage.ts tsconfig.json util.ts docs assets css main.css js main.js search.json classes _sdk_core_assembly_collections_avltree_.avltree.html _sdk_core_assembly_collections_avltree_.avltreenode.html _sdk_core_assembly_collections_avltree_.childparentpair.html _sdk_core_assembly_collections_avltree_.nullable.html _sdk_core_assembly_collections_persistentdeque_.persistentdeque.html _sdk_core_assembly_collections_persistentmap_.persistentmap.html _sdk_core_assembly_collections_persistentset_.persistentset.html _sdk_core_assembly_collections_persistentunorderedmap_.persistentunorderedmap.html _sdk_core_assembly_collections_persistentvector_.persistentvector.html _sdk_core_assembly_contract_.context-1.html _sdk_core_assembly_contract_.contractpromise.html _sdk_core_assembly_contract_.contractpromiseresult.html _sdk_core_assembly_math_.rng.html _sdk_core_assembly_promise_.contractpromisebatch.html _sdk_core_assembly_storage_.storage-1.html globals.html index.html modules _sdk_core_assembly_base58_.base58.html _sdk_core_assembly_base58_.html _sdk_core_assembly_base64_.base64.html _sdk_core_assembly_base64_.html _sdk_core_assembly_collections_avltree_.html _sdk_core_assembly_collections_index_.collections.html _sdk_core_assembly_collections_index_.html _sdk_core_assembly_collections_persistentdeque_.html _sdk_core_assembly_collections_persistentmap_.html _sdk_core_assembly_collections_persistentset_.html _sdk_core_assembly_collections_persistentunorderedmap_.html _sdk_core_assembly_collections_persistentvector_.html _sdk_core_assembly_collections_util_.html _sdk_core_assembly_contract_.html _sdk_core_assembly_env_env_.env.html _sdk_core_assembly_env_env_.html _sdk_core_assembly_env_index_.html _sdk_core_assembly_env_runtime_api_.html _sdk_core_assembly_index_.html _sdk_core_assembly_logging_.html _sdk_core_assembly_logging_.logging.html _sdk_core_assembly_math_.html _sdk_core_assembly_math_.math.html _sdk_core_assembly_promise_.html _sdk_core_assembly_storage_.html _sdk_core_assembly_util_.html _sdk_core_assembly_util_.util.html package.json near-sdk-simulator __tests__ avl-tree-contract.spec.ts cross.spec.ts empty.spec.ts exportAs.spec.ts singleton-no-constructor.spec.ts singleton.spec.ts asconfig.js asconfig.json assembly __tests__ avlTreeContract.ts empty.ts exportAs.ts model.ts sentences.ts singleton-fail.ts singleton-no-constructor.ts singleton.ts words.ts as_types.d.ts tsconfig.json dist bin.d.ts bin.js context.d.ts context.js index.d.ts index.js runtime.d.ts runtime.js types.d.ts types.js utils.d.ts utils.js jest.config.js out assembly __tests__ exportAs.ts model.ts sentences.ts singleton-no-constructor.ts singleton.ts package.json src context.ts index.ts runtime.ts types.ts 
utils.ts tsconfig.json near-vm getBinary.js install.js package.json run.js uninstall.js nearley LICENSE.txt README.md bin nearley-railroad.js nearley-test.js nearley-unparse.js nearleyc.js lib compile.js generate.js lint.js nearley-language-bootstrapped.js nearley.js stream.js unparse.js package.json once README.md once.js package.json p-limit index.d.ts index.js package.json readme.md p-locate index.d.ts index.js package.json readme.md p-try index.d.ts index.js package.json readme.md path-exists index.d.ts index.js package.json readme.md path-is-absolute index.js package.json readme.md punycode LICENSE-MIT.txt README.md package.json punycode.es6.js punycode.js railroad-diagrams README.md example.html generator.html package.json railroad-diagrams.css railroad-diagrams.js railroad_diagrams.py randexp README.md lib randexp.js package.json require-directory .travis.yml index.js package.json require-main-filename CHANGELOG.md LICENSE.txt README.md index.js package.json ret README.md lib index.js positions.js sets.js types.js util.js package.json rimraf CHANGELOG.md README.md bin.js package.json rimraf.js safe-buffer README.md index.d.ts index.js package.json set-blocking CHANGELOG.md LICENSE.txt README.md index.js package.json string-width index.d.ts index.js package.json readme.md strip-ansi index.d.ts index.js package.json readme.md supports-color browser.js index.js package.json readme.md tar README.md index.js lib create.js extract.js get-write-flag.js header.js high-level-opt.js large-numbers.js list.js mkdir.js mode-fix.js pack.js parse.js path-reservations.js pax.js read-entry.js replace.js types.js unpack.js update.js warn-mixin.js winchars.js write-entry.js package.json tr46 LICENSE.md README.md index.js lib mappingTable.json regexes.js package.json ts-mixer CHANGELOG.md README.md dist decorator.d.ts decorator.js index.d.ts index.js mixin-tracking.d.ts mixin-tracking.js mixins.d.ts mixins.js proxy.d.ts proxy.js settings.d.ts settings.js types.d.ts types.js util.d.ts util.js package.json universal-url README.md browser.js index.js package.json visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js URLSearchParams-impl.js URLSearchParams.js infra.js public-api.js url-state-machine.js urlencoded.js utils.js package.json which-module CHANGELOG.md README.md index.js package.json wrap-ansi index.js package.json readme.md wrappy README.md package.json wrappy.js y18n CHANGELOG.md README.md build lib cjs.js index.js platform-shims node.js package.json yallist README.md iterator.js package.json yallist.js yargs-parser CHANGELOG.md LICENSE.txt README.md browser.js build lib index.js string-utils.js tokenize-arg-string.js yargs-parser-types.js yargs-parser.js package.json yargs CHANGELOG.md README.md build lib argsert.js command.js completion-templates.js completion.js middleware.js parse-command.js typings common-types.js yargs-parser-types.js usage.js utils apply-extends.js 
is-promise.js levenshtein.js obj-filter.js process-argv.js set-blocking.js which-module.js validation.js yargs-factory.js yerror.js helpers index.js package.json locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json | package.json scripts 1.dev-deploy.sh 2.use-contract.sh 3.cleanup.sh README.md src as_types.d.ts simple __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts singleton __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts
# AssemblyScript Loader A convenient loader for [AssemblyScript](https://assemblyscript.org) modules. Demangles module exports to a friendly object structure compatible with TypeScript definitions and provides useful utility to read/write data from/to memory. [Documentation](https://assemblyscript.org/loader.html) # fs.realpath A backwards-compatible fs.realpath for Node v6 and above In Node v6, the JavaScript implementation of fs.realpath was replaced with a faster (but less resilient) native implementation. That raises new and platform-specific errors and cannot handle long or excessively symlink-looping paths. This module handles those cases by detecting the new errors and falling back to the JavaScript implementation. On versions of Node prior to v6, it has no effect. ## USAGE ```js var rp = require('fs.realpath') // async version rp.realpath(someLongAndLoopingPath, function (er, real) { // the ELOOP was handled, but it was a bit slower }) // sync version var real = rp.realpathSync(someLongAndLoopingPath) // monkeypatch at your own risk! // This replaces the fs.realpath/fs.realpathSync builtins rp.monkeypatch() // un-do the monkeypatching rp.unmonkeypatch() ``` <p align="center"> <a href="https://assemblyscript.org" target="_blank" rel="noopener"><img width="100" src="https://avatars1.githubusercontent.com/u/28916798?s=200&v=4" alt="AssemblyScript logo"></a> </p> <p align="center"> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3ATest"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Test/master?label=test&logo=github" alt="Test status" /></a> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3APublish"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Publish/master?label=publish&logo=github" alt="Publish status" /></a> <a href="https://www.npmjs.com/package/assemblyscript"><img src="https://img.shields.io/npm/v/assemblyscript.svg?label=compiler&color=007acc&logo=npm" alt="npm compiler version" /></a> <a href="https://www.npmjs.com/package/@assemblyscript/loader"><img src="https://img.shields.io/npm/v/@assemblyscript/loader.svg?label=loader&color=007acc&logo=npm" alt="npm loader version" /></a> <a href="https://discord.gg/assemblyscript"><img src="https://img.shields.io/discord/721472913886281818.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2" alt="Discord online" /></a> </p> <p align="justify"><strong>AssemblyScript</strong> compiles a strict variant of <a href="http://www.typescriptlang.org">TypeScript</a> (basically JavaScript with types) to <a href="http://webassembly.org">WebAssembly</a> using <a href="https://github.com/WebAssembly/binaryen">Binaryen</a>. 
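To make the loader README at the top of this section a bit more concrete, here is a minimal hedged sketch of instantiating a compiled module from Node.js; the `build/optimized.wasm` path and the exported `add` function are assumptions for illustration, not part of the loader's documentation:

```js
// Minimal sketch: assumes an AssemblyScript module was compiled to
// build/optimized.wasm and exports an `add(a: i32, b: i32): i32` function.
const fs = require("fs");
const loader = require("@assemblyscript/loader");

const wasmModule = loader.instantiateSync(
  fs.readFileSync("build/optimized.wasm"),
  {} // imports, if the module declares any
);

console.log(wasmModule.exports.add(1, 2)); // -> 3
```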
It generates lean and mean WebAssembly modules while being just an <code>npm install</code> away.</p> <h3 align="center"> <a href="https://assemblyscript.org">About</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/introduction.html">Introduction</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/quick-start.html">Quick&nbsp;start</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/examples.html">Examples</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/development.html">Development&nbsp;instructions</a> </h3> <br> <h2 align="center">Contributors</h2> <p align="center"> <a href="https://assemblyscript.org/#contributors"><img src="https://assemblyscript.org/contributors.svg" alt="Contributor logos" width="720" /></a> </p> <h2 align="center">Thanks to our sponsors!</h2> <p align="justify">Most of the core team members and most contributors do this open source work in their free time. If you use AssemblyScript for a serious task or plan to do so, and you'd like us to invest more time on it, <a href="https://opencollective.com/assemblyscript/donate" target="_blank" rel="noopener">please donate</a> to our <a href="https://opencollective.com/assemblyscript" target="_blank" rel="noopener">OpenCollective</a>. By sponsoring this project, your logo will show up below. Thank you so much for your support!</p> <p align="center"> <a href="https://assemblyscript.org/#sponsors"><img src="https://assemblyscript.org/sponsors.svg" alt="Sponsor logos" width="720" /></a> </p> # set-blocking [![Build Status](https://travis-ci.org/yargs/set-blocking.svg)](https://travis-ci.org/yargs/set-blocking) [![NPM version](https://img.shields.io/npm/v/set-blocking.svg)](https://www.npmjs.com/package/set-blocking) [![Coverage Status](https://coveralls.io/repos/yargs/set-blocking/badge.svg?branch=)](https://coveralls.io/r/yargs/set-blocking?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) set blocking `stdio` and `stderr` ensuring that terminal output does not truncate. ```js const setBlocking = require('set-blocking') setBlocking(true) console.log(someLargeStringToOutput) ``` ## Historical Context/Word of Warning This was created as a shim to address the bug discussed in [node #6456](https://github.com/nodejs/node/issues/6456). This bug crops up on newer versions of Node.js (`0.12+`), truncating terminal output. You should be mindful of the side-effects caused by using `set-blocking`: * if your module sets blocking to `true`, it will effect other modules consuming your library. In [yargs](https://github.com/yargs/yargs/blob/master/yargs.js#L653) we only call `setBlocking(true)` once we already know we are about to call `process.exit(code)`. * this patch will not apply to subprocesses spawned with `isTTY = true`, this is the [default `spawn()` behavior](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options). ## License ISC # Near Bindings Generator Transforms the Assembyscript AST to serialize exported functions and add `encode` and `decode` functions for generating and parsing JSON strings. ## Using via CLI After installling, `npm install nearprotocol/near-bindgen-as`, it can be added to the cli arguments of the assemblyscript compiler you must add the following: ```bash asc <file> --transform near-bindgen-as ... 
``` This module also adds a binary `near-asc` which adds the default arguments required to build near contracts as well as the transformer. ```bash near-asc <input file> <output file> ``` ## Using a script to compile Another way is to add a file such as `asconfig.js` such as: ```js const compile = require("near-bindgen-as/compiler").compile; compile("assembly/index.ts", // input file "out/index.wasm", // output file [ // "-O1", // Optional arguments "--debug", "--measure" ], // Prints out the final cli arguments passed to compiler. {verbose: true} ); ``` It can then be built with `node asconfig.js`. There is an example of this in the test directory. # y18n [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js const __ = require('y18n')().__; console.log(__('my awesome string %s', 'foo')); ``` output: `my awesome string foo` _using tagged template literals_ ```js const __ = require('y18n')().__; const str = 'foo'; console.log(__`my awesome string ${str}`); ``` output: `my awesome string foo` _pluralization support:_ ```js const __n = require('y18n')().__n; console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')); ``` output: `2 fishes foo` ## Deno Example As of `v5` `y18n` supports [Deno](https://github.com/denoland/deno): ```typescript import y18n from "https://deno.land/x/y18n/deno.ts"; const __ = y18n({ locale: 'pirate', directory: './test/locales' }).__ console.info(__`Hi, ${'Ben'} ${'Coe'}!`) ``` You will need to run with `--allow-read` to load alternative locales. ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). 
Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## License ISC [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard ![](cow.png) Moo! ==== Moo is a highly-optimised tokenizer/lexer generator. Use it to tokenize your strings, before parsing 'em with a parser like [nearley](https://github.com/hardmath123/nearley) or whatever else you're into. * [Fast](#is-it-fast) * [Convenient](#usage) * uses [Regular Expressions](#on-regular-expressions) * tracks [Line Numbers](#line-numbers) * handles [Keywords](#keywords) * supports [States](#states) * custom [Errors](#errors) * is even [Iterable](#iteration) * has no dependencies * 4KB minified + gzipped * Moo! Is it fast? ----------- Yup! Flying-cows-and-singed-steak fast. Moo is the fastest JS tokenizer around. It's **~2–10x** faster than most other tokenizers; it's a **couple orders of magnitude** faster than some of the slower ones. Define your tokens **using regular expressions**. Moo will compile 'em down to a **single RegExp for performance**. It uses the new ES6 **sticky flag** where possible to make things faster; otherwise it falls back to an almost-as-efficient workaround. (For more than you ever wanted to know about this, read [adventures in the land of substrings and RegExps](http://mrale.ph/blog/2016/11/23/making-less-dart-faster.html).) You _might_ be able to go faster still by writing your lexer by hand rather than using RegExps, but that's icky. Oh, and it [avoids parsing RegExps by itself](https://hackernoon.com/the-madness-of-parsing-real-world-javascript-regexps-d9ee336df983#.2l8qu3l76). Because that would be horrible. Usage ----- First, you need to do the needful: `$ npm install moo`, or whatever will ship this code to your computer. Alternatively, grab the `moo.js` file by itself and slap it into your web page via a `<script>` tag; moo is completely standalone. Then you can start roasting your very own lexer/tokenizer: ```js const moo = require('moo') let lexer = moo.compile({ WS: /[ \t]+/, comment: /\/\/.*?$/, number: /0|[1-9][0-9]*/, string: /"(?:\\["\\]|[^\n"\\])*"/, lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], NL: { match: /\n/, lineBreaks: true }, }) ``` And now throw some text at it: ```js lexer.reset('while (10) cows\nmoo') lexer.next() // -> { type: 'keyword', value: 'while' } lexer.next() // -> { type: 'WS', value: ' ' } lexer.next() // -> { type: 'lparen', value: '(' } lexer.next() // -> { type: 'number', value: '10' } // ... ``` When you reach the end of Moo's internal buffer, next() will return `undefined`. You can always `reset()` it and feed it more data when that happens. On Regular Expressions ---------------------- RegExps are nifty for making tokenizers, but they can be a bit of a pain. Here are some things to be aware of: * You often want to use **non-greedy quantifiers**: e.g. `*?` instead of `*`. Otherwise your tokens will be longer than you expect: ```js let lexer = moo.compile({ string: /".*"/, // greedy quantifier * // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo" "bar' } ``` Better: ```js let lexer = moo.compile({ string: /".*?"/, // non-greedy quantifier *? // ... 
}) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo' } lexer.next() // -> { type: 'space', value: ' ' } lexer.next() // -> { type: 'string', value: 'bar' } ``` * The **order of your rules** matters. Earlier ones will take precedence. ```js moo.compile({ identifier: /[a-z0-9]+/, number: /[0-9]+/, }).reset('42').next() // -> { type: 'identifier', value: '42' } moo.compile({ number: /[0-9]+/, identifier: /[a-z0-9]+/, }).reset('42').next() // -> { type: 'number', value: '42' } ``` * Moo uses **multiline RegExps**. This has a few quirks: for example, the **dot `/./` doesn't include newlines**. Use `[^]` instead if you want to match newlines too. * Since an excluding character ranges like `/[^ ]/` (which matches anything but a space) _will_ include newlines, you have to be careful not to include them by accident! In particular, the whitespace metacharacter `\s` includes newlines. Line Numbers ------------ Moo tracks detailed information about the input for you. It will track line numbers, as long as you **apply the `lineBreaks: true` option to any rules which might contain newlines**. Moo will try to warn you if you forget to do this. Note that this is `false` by default, for performance reasons: counting the number of lines in a matched token has a small cost. For optimal performance, only match newlines inside a dedicated token: ```js newline: {match: '\n', lineBreaks: true}, ``` ### Token Info ### Token objects (returned from `next()`) have the following attributes: * **`type`**: the name of the group, as passed to compile. * **`text`**: the string that was matched. * **`value`**: the string that was matched, transformed by your `value` function (if any). * **`offset`**: the number of bytes from the start of the buffer where the match starts. * **`lineBreaks`**: the number of line breaks found in the match. (Always zero if this rule has `lineBreaks: false`.) * **`line`**: the line number of the beginning of the match, starting from 1. * **`col`**: the column where the match begins, starting from 1. ### Value vs. Text ### The `value` is the same as the `text`, unless you provide a [value transform](#transform). ```js const moo = require('moo') const lexer = moo.compile({ ws: /[ \t]+/, string: {match: /"(?:\\["\\]|[^\n"\\])*"/, value: s => s.slice(1, -1)}, }) lexer.reset('"test"') lexer.next() /* { value: 'test', text: '"test"', ... } */ ``` ### Reset ### Calling `reset()` on your lexer will empty its internal buffer, and set the line, column, and offset counts back to their initial value. If you don't want this, you can `save()` the state, and later pass it as the second argument to `reset()` to explicitly control the internal state of the lexer. ```js    lexer.reset('some line\n') let info = lexer.save() // -> { line: 10 } lexer.next() // -> { line: 10 } lexer.next() // -> { line: 11 } // ... lexer.reset('a different line\n', info) lexer.next() // -> { line: 10 } ``` Keywords -------- Moo makes it convenient to define literals. ```js moo.compile({ lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], }) ``` It'll automatically compile them into regular expressions, escaping them where necessary. **Keywords** should be written using the `keywords` transform. ```js moo.compile({ IDEN: {match: /[a-zA-Z]+/, type: moo.keywords({ KW: ['while', 'if', 'else', 'moo', 'cows'], })}, SPACE: {match: /\s+/, lineBreaks: true}, }) ``` ### Why? ### You need to do this to ensure the **longest match** principle applies, even in edge cases. 
Imagine trying to parse the input `className` with the following rules: ```js keyword: ['class'], identifier: /[a-zA-Z]+/, ``` You'll get _two_ tokens — `['class', 'Name']` -- which is _not_ what you want! If you swap the order of the rules, you'll fix this example; but now you'll lex `class` wrong (as an `identifier`). The keywords helper checks matches against the list of keywords; if any of them match, it uses the type `'keyword'` instead of `'identifier'` (for this example). ### Keyword Types ### Keywords can also have **individual types**. ```js let lexer = moo.compile({ name: {match: /[a-zA-Z]+/, type: moo.keywords({ 'kw-class': 'class', 'kw-def': 'def', 'kw-if': 'if', })}, // ... }) lexer.reset('def foo') lexer.next() // -> { type: 'kw-def', value: 'def' } lexer.next() // space lexer.next() // -> { type: 'name', value: 'foo' } ``` You can use [itt](https://github.com/nathan/itt)'s iterator adapters to make constructing keyword objects easier: ```js itt(['class', 'def', 'if']) .map(k => ['kw-' + k, k]) .toObject() ``` States ------ Moo allows you to define multiple lexer **states**. Each state defines its own separate set of token rules. Your lexer will start off in the first state given to `moo.states({})`. Rules can be annotated with `next`, `push`, and `pop`, to change the current state after that token is matched. A "stack" of past states is kept, which is used by `push` and `pop`. * **`next: 'bar'`** moves to the state named `bar`. (The stack is not changed.) * **`push: 'bar'`** moves to the state named `bar`, and pushes the old state onto the stack. * **`pop: 1`** removes one state from the top of the stack, and moves to that state. (Only `1` is supported.) Only rules from the current state can be matched. You need to copy your rule into all the states you want it to be matched in. For example, to tokenize JS-style string interpolation such as `a${{c: d}}e`, you might use: ```js let lexer = moo.states({ main: { strstart: {match: '`', push: 'lit'}, ident: /\w+/, lbrace: {match: '{', push: 'main'}, rbrace: {match: '}', pop: true}, colon: ':', space: {match: /\s+/, lineBreaks: true}, }, lit: { interp: {match: '${', push: 'main'}, escape: /\\./, strend: {match: '`', pop: true}, const: {match: /(?:[^$`]|\$(?!\{))+/, lineBreaks: true}, }, }) // <= `a${{c: d}}e` // => strstart const interp lbrace ident colon space ident rbrace rbrace const strend ``` The `rbrace` rule is annotated with `pop`, so it moves from the `main` state into either `lit` or `main`, depending on the stack. Errors ------ If none of your rules match, Moo will throw an Error; since it doesn't know what else to do. If you prefer, you can have moo return an error token instead of throwing an exception. The error token will contain the whole of the rest of the buffer. ```js moo.compile({ // ... myError: moo.error, }) moo.reset('invalid') moo.next() // -> { type: 'myError', value: 'invalid', text: 'invalid', offset: 0, lineBreaks: 0, line: 1, col: 1 } moo.next() // -> undefined ``` You can have a token type that both matches tokens _and_ contains error values. ```js moo.compile({ // ... myError: {match: /[\$?`]/, error: true}, }) ``` ### Formatting errors ### If you want to throw an error from your parser, you might find `formatError` helpful. Call it with the offending token: ```js throw new Error(lexer.formatError(token, "invalid syntax")) ``` It returns a string with a pretty error message. ``` Error: invalid syntax at line 2 col 15: totally valid `syntax` ^ ``` Iteration --------- Iterators: we got 'em. 
```js for (let here of lexer) { // here = { type: 'number', value: '123', ... } } ``` Create an array of tokens. ```js let tokens = Array.from(lexer); ``` Use [itt](https://github.com/nathan/itt)'s iteration tools with Moo. ```js for (let [here, next] = itt(lexer).lookahead()) { // pass a number if you need more tokens // enjoy! } ``` Transform --------- Moo doesn't allow capturing groups, but you can supply a transform function, `value()`, which will be called on the value before storing it in the Token object. ```js moo.compile({ STRING: [ {match: /"""[^]*?"""/, lineBreaks: true, value: x => x.slice(3, -3)}, {match: /"(?:\\["\\rn]|[^"\\])*?"/, lineBreaks: true, value: x => x.slice(1, -1)}, {match: /'(?:\\['\\rn]|[^'\\])*?'/, lineBreaks: true, value: x => x.slice(1, -1)}, ], // ... }) ``` Contributing ------------ Do check the [FAQ](https://github.com/tjvr/moo/issues?q=label%3Aquestion). Before submitting an issue, [remember...](https://github.com/tjvr/moo/blob/master/.github/CONTRIBUTING.md) Shims used when bundling asc for browser usage. # axios // core The modules found in `core/` should be modules that are specific to the domain logic of axios. These modules would most likely not make sense to be consumed outside of the axios module, as their logic is too specific. Some examples of core modules are: - Dispatching requests - Managing interceptors - Handling config Standard library ================ Standard library components for use with `tsc` (portable) and `asc` (assembly). Base configurations (.json) and definition files (.d.ts) are relevant to `tsc` only and not used by `asc`. # axios // helpers The modules found in `helpers/` should be generic modules that are _not_ specific to the domain logic of axios. These modules could theoretically be published to npm on their own and consumed by other modules or apps. Some examples of generic modules are things like: - Browser polyfills - Managing cookies - Parsing HTTP headers # minizlib A fast zlib stream built on [minipass](http://npm.im/minipass) and Node.js's zlib binding. This module was created to serve the needs of [node-tar](http://npm.im/tar) and [minipass-fetch](http://npm.im/minipass-fetch). Brotli is supported in versions of node with a Brotli binding. ## How does this differ from the streams in `require('zlib')`? First, there are no convenience methods to compress or decompress a buffer. If you want those, use the built-in `zlib` module. This is only streams. That being said, Minipass streams to make it fairly easy to use as one-liners: `new zlib.Deflate().end(data).read()` will return the deflate compressed result. This module compresses and decompresses the data as fast as you feed it in. It is synchronous, and runs on the main process thread. Zlib and Brotli operations can be high CPU, but they're very fast, and doing it this way means much less bookkeeping and artificial deferral. Node's built in zlib streams are built on top of `stream.Transform`. They do the maximally safe thing with respect to consistent asynchrony, buffering, and backpressure. See [Minipass](http://npm.im/minipass) for more on the differences between Node.js core streams and Minipass streams, and the convenience methods provided by that class. 
## Classes

- Deflate
- Inflate
- Gzip
- Gunzip
- DeflateRaw
- InflateRaw
- Unzip
- BrotliCompress (Node v10 and higher)
- BrotliDecompress (Node v10 and higher)

## USAGE

```js
const zlib = require('minizlib')
const input = sourceOfCompressedData()
const decode = new zlib.BrotliDecompress()
const output = whereToWriteTheDecodedData()
input.pipe(decode).pipe(output)
```

## REPRODUCIBLE BUILDS

To create reproducible gzip compressed files across different operating systems, set `portable: true` in the options. This causes minizlib to set the `OS` indicator in byte 9 of the extended gzip header to `0xFF` for 'unknown'.

Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor).

This package exports the standard `inherits` from the node.js `util` module in a node environment, but also provides an alternative browser-friendly implementation through the [browser field](https://gist.github.com/shtylman/4339901). The alternative implementation is a literal copy of the standard one, located in a standalone module to avoid requiring `util`. It also has a shim for old browsers with no `Object.create` support.

While making sure you use the standard `inherits` implementation in a node.js environment, it allows bundlers such as [browserify](https://github.com/substack/node-browserify) to avoid including the full `util` package in your client code if all you need is the `inherits` function. This is worthwhile because the browser shim for the `util` package is large and `inherits` is often the only function you need from it.

It's recommended to use this package instead of `require('util').inherits` for any code that may be used in the browser as well as in node.js.

## usage

```js
var inherits = require('inherits');
// then use exactly as the standard one
```

## note on version ~1.0

Version ~1.0 had a completely different motivation and is compatible with neither 2.0 nor the standard node.js `inherits`.

If you are using version ~1.0 and planning to switch to ~2.0, be careful:

* the new version uses `super_` instead of `super` for referencing the superclass
* the new version overwrites the current prototype while the old one preserves any existing fields on it

# brace-expansion

[Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html), as known from sh/bash, in JavaScript.
[![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion) [![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion) [![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/) [![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion) ## Example ```js var expand = require('brace-expansion'); expand('file-{a,b,c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('-v{,,}') // => ['-v', '-v', '-v'] expand('file{0..2}.jpg') // => ['file0.jpg', 'file1.jpg', 'file2.jpg'] expand('file-{a..c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('file{2..0}.jpg') // => ['file2.jpg', 'file1.jpg', 'file0.jpg'] expand('file{0..4..2}.jpg') // => ['file0.jpg', 'file2.jpg', 'file4.jpg'] expand('file-{a..e..2}.jpg') // => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg'] expand('file{00..10..5}.jpg') // => ['file00.jpg', 'file05.jpg', 'file10.jpg'] expand('{{A..C},{a..c}}') // => ['A', 'B', 'C', 'a', 'b', 'c'] expand('ppp{,config,oe{,conf}}') // => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf'] ``` ## API ```js var expand = require('brace-expansion'); ``` ### var expanded = expand(str) Return an array of all possible and valid expansions of `str`. If none are found, `[str]` is returned. Valid expansions are: ```js /^(.*,)+(.+)?$/ // {a,b,...} ``` A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` A numeric sequence from `x` to `y` inclusive, with optional increment. If `x` or `y` start with a leading `0`, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` An alphabetic sequence from `x` to `y` inclusive, with optional increment. `x` and `y` must be exactly one character, and if given, `incr` must be a number. For compatibility reasons, the string `${` is not eligible for brace expansion. ## Installation With [npm](https://npmjs.org) do: ```bash npm install brace-expansion ``` ## Contributors - [Julian Gruber](https://github.com/juliangruber) - [Isaac Z. Schlueter](https://github.com/isaacs) ## Sponsors This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)! Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)! ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # jsdiff [![Build Status](https://secure.travis-ci.org/kpdecker/jsdiff.svg)](http://travis-ci.org/kpdecker/jsdiff) [![Sauce Test Status](https://saucelabs.com/buildstatus/jsdiff)](https://saucelabs.com/u/jsdiff) A javascript text differencing implementation. Based on the algorithm proposed in ["An O(ND) Difference Algorithm and its Variations" (Myers, 1986)](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6927). ## Installation ```bash npm install diff --save ``` ## API * `Diff.diffChars(oldStr, newStr[, options])` - diffs two blocks of text, comparing character by character. Returns a list of change objects (See below). Options * `ignoreCase`: `true` to ignore casing difference. Defaults to `false`. * `Diff.diffWords(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, ignoring whitespace. Returns a list of change objects (See below). Options * `ignoreCase`: Same as in `diffChars`. * `Diff.diffWordsWithSpace(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, treating whitespace as significant. Returns a list of change objects (See below). * `Diff.diffLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line. Options * `ignoreWhitespace`: `true` to ignore leading and trailing whitespace. This is the same as `diffTrimmedLines` * `newlineIsToken`: `true` to treat newline characters as separate tokens. This allows for changes to the newline structure to occur independently of the line content and to be treated as such. In general this is the more human friendly form of `diffLines` and `diffLines` is better suited for patches and other computer friendly output. Returns a list of change objects (See below). * `Diff.diffTrimmedLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line, ignoring leading and trailing whitespace. Returns a list of change objects (See below). * `Diff.diffSentences(oldStr, newStr[, options])` - diffs two blocks of text, comparing sentence by sentence. Returns a list of change objects (See below). * `Diff.diffCss(oldStr, newStr[, options])` - diffs two blocks of text, comparing CSS tokens. Returns a list of change objects (See below). * `Diff.diffJson(oldObj, newObj[, options])` - diffs two JSON objects, comparing the fields defined on each. The order of fields, etc does not matter in this comparison. Returns a list of change objects (See below). * `Diff.diffArrays(oldArr, newArr[, options])` - diffs two arrays, comparing each item for strict equality (===). Options * `comparator`: `function(left, right)` for custom equality checks Returns a list of change objects (See below). * `Diff.createTwoFilesPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. 
Parameters: * `oldFileName` : String to be output in the filename section of the patch for the removals * `newFileName` : String to be output in the filename section of the patch for the additions * `oldStr` : Original string value * `newStr` : New string value * `oldHeader` : Additional information to include in the old file header * `newHeader` : Additional information to include in the new file header * `options` : An object with options. Currently, only `context` is supported and describes how many lines of context should be included. * `Diff.createPatch(fileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Just like Diff.createTwoFilesPatch, but with oldFileName being equal to newFileName. * `Diff.structuredPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options)` - returns an object with an array of hunk objects. This method is similar to createTwoFilesPatch, but returns a data structure suitable for further processing. Parameters are the same as createTwoFilesPatch. The data structure returned may look like this: ```js { oldFileName: 'oldfile', newFileName: 'newfile', oldHeader: 'header1', newHeader: 'header2', hunks: [{ oldStart: 1, oldLines: 3, newStart: 1, newLines: 3, lines: [' line2', ' line3', '-line4', '+line5', '\\ No newline at end of file'], }] } ``` * `Diff.applyPatch(source, patch[, options])` - applies a unified diff patch. Return a string containing new version of provided data. `patch` may be a string diff or the output from the `parsePatch` or `structuredPatch` methods. The optional `options` object may have the following keys: - `fuzzFactor`: Number of lines that are allowed to differ before rejecting a patch. Defaults to 0. - `compareLine(lineNumber, line, operation, patchContent)`: Callback used to compare to given lines to determine if they should be considered equal when patching. Defaults to strict equality but may be overridden to provide fuzzier comparison. Should return false if the lines should be rejected. * `Diff.applyPatches(patch, options)` - applies one or more patches. This method will iterate over the contents of the patch and apply to data provided through callbacks. The general flow for each patch index is: - `options.loadFile(index, callback)` is called. The caller should then load the contents of the file and then pass that to the `callback(err, data)` callback. Passing an `err` will terminate further patch execution. - `options.patched(index, content, callback)` is called once the patch has been applied. `content` will be the return value from `applyPatch`. When it's ready, the caller should call `callback(err)` callback. Passing an `err` will terminate further patch execution. Once all patches have been applied or an error occurs, the `options.complete(err)` callback is made. * `Diff.parsePatch(diffStr)` - Parses a patch into structured data Return a JSON object representation of the a patch, suitable for use with the `applyPatch` method. This parses to the same structure returned by `Diff.structuredPatch`. * `convertChangesToXML(changes)` - converts a list of changes to a serialized XML format All methods above which accept the optional `callback` method will run in sync mode when that parameter is omitted and in async mode when supplied. This allows for larger diffs without blocking the event loop. This may be passed either directly as the final parameter or as the `callback` field in the `options` object. ### Change Objects Many of the methods above return change objects. 
These objects consist of the following fields: * `value`: Text content * `added`: True if the value was inserted into the new string * `removed`: True if the value was removed from the old string Note that some cases may omit a particular flag field. Comparison on the flag fields should always be done in a truthy or falsy manner. ## Examples Basic example in Node ```js require('colors'); const Diff = require('diff'); const one = 'beep boop'; const other = 'beep boob blah'; const diff = Diff.diffChars(one, other); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; process.stderr.write(part.value[color]); }); console.log(); ``` Running the above program should yield <img src="images/node_example.png" alt="Node Example"> Basic example in a web page ```html <pre id="display"></pre> <script src="diff.js"></script> <script> const one = 'beep boop', other = 'beep boob blah', color = ''; let span = null; const diff = Diff.diffChars(one, other), display = document.getElementById('display'), fragment = document.createDocumentFragment(); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; span = document.createElement('span'); span.style.color = color; span.appendChild(document .createTextNode(part.value)); fragment.appendChild(span); }); display.appendChild(fragment); </script> ``` Open the above .html file in a browser and you should see <img src="images/web_example.png" alt="Node Example"> **[Full online demo](http://kpdecker.github.com/jsdiff)** ## Compatibility [![Sauce Test Status](https://saucelabs.com/browser-matrix/jsdiff.svg)](https://saucelabs.com/u/jsdiff) jsdiff supports all ES3 environments with some known issues on IE8 and below. Under these browsers some diff algorithms such as word diff and others may fail due to lack of support for capturing groups in the `split` operation. ## License See [LICENSE](https://github.com/kpdecker/jsdiff/blob/master/LICENSE). # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. 
Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows note On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Note that PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Then, run the program to be debugged as usual. ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see when the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well, the "+NNNms" will show you how much time was spent between calls. <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. 
If you have more than one debuggers you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. | | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. 
```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. [[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a 
href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. 
[[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # node-tar [![Build Status](https://travis-ci.org/npm/node-tar.svg?branch=master)](https://travis-ci.org/npm/node-tar) [Fast](./benchmarks) and full-featured Tar for Node.js The API is designed to mimic the behavior of `tar(1)` on unix systems. If you are familiar with how tar works, most of this will hopefully be straightforward for you. If not, then hopefully this module can teach you useful unix skills that may come in handy someday :) ## Background A "tar file" or "tarball" is an archive of file system entries (directories, files, links, etc.) The name comes from "tape archive". If you run `man tar` on almost any Unix command line, you'll learn quite a bit about what it can do, and its history. Tar has 5 main top-level commands: * `c` Create an archive * `r` Replace entries within an archive * `u` Update entries within an archive (ie, replace if they're newer) * `t` List out the contents of an archive * `x` Extract an archive to disk The other flags and options modify how this top level function works. ## High-Level API These 5 functions are the high-level API. All of them have a single-character name (for unix nerds familiar with `tar(1)`) as well as a long name (for everyone else). All the high-level functions take the following arguments, all three of which are optional and may be omitted. 1. `options` - An optional object specifying various options 2. `paths` - An array of paths to add or extract 3. 
`callback` - Called when the command is completed, if async. (If sync or no file specified, providing a callback throws a `TypeError`.) If the command is sync (ie, if `options.sync=true`), then the callback is not allowed, since the action will be completed immediately. If a `file` argument is specified, and the command is async, then a `Promise` is returned. In this case, if async, a callback may be provided which is called when the command is completed. If a `file` option is not specified, then a stream is returned. For `create`, this is a readable stream of the generated archive. For `list` and `extract` this is a writable stream that an archive should be written into. If a file is not specified, then a callback is not allowed, because you're already getting a stream to work with. `replace` and `update` only work on existing archives, and so require a `file` argument. Sync commands without a file argument return a stream that acts on its input immediately in the same tick. For readable streams, this means that all of the data is immediately available by calling `stream.read()`. For writable streams, it will be acted upon as soon as it is provided, but this can be at any time. ### Warnings and Errors Tar emits warnings and errors for recoverable and unrecoverable situations, respectively. In many cases, a warning only affects a single entry in an archive, or is simply informing you that it's modifying an entry to comply with the settings provided. Unrecoverable warnings will always raise an error (ie, emit `'error'` on streaming actions, throw for non-streaming sync actions, reject the returned Promise for non-streaming async operations, or call a provided callback with an `Error` as the first argument). Recoverable errors will raise an error only if `strict: true` is set in the options. Respond to (recoverable) warnings by listening to the `warn` event. Handlers receive 3 arguments: - `code` String. One of the error codes below. This may not match `data.code`, which preserves the original error code from fs and zlib. - `message` String. More details about the error. - `data` Metadata about the error. An `Error` object for errors raised by fs and zlib. All fields are attached to errors raisd by tar. Typically contains the following fields, as relevant: - `tarCode` The tar error code. - `code` Either the tar error code, or the error code set by the underlying system. - `file` The archive file being read or written. - `cwd` Working directory for creation and extraction operations. - `entry` The entry object (if it could be created) for `TAR_ENTRY_INFO`, `TAR_ENTRY_INVALID`, and `TAR_ENTRY_ERROR` warnings. - `header` The header object (if it could be created, and the entry could not be created) for `TAR_ENTRY_INFO` and `TAR_ENTRY_INVALID` warnings. - `recoverable` Boolean. If `false`, then the warning will emit an `error`, even in non-strict mode. #### Error Codes * `TAR_ENTRY_INFO` An informative error indicating that an entry is being modified, but otherwise processed normally. For example, removing `/` or `C:\` from absolute paths if `preservePaths` is not set. * `TAR_ENTRY_INVALID` An indication that a given entry is not a valid tar archive entry, and will be skipped. This occurs when: - a checksum fails, - a `linkpath` is missing for a link type, or - a `linkpath` is provided for a non-link type. If every entry in a parsed archive raises an `TAR_ENTRY_INVALID` error, then the archive is presumed to be unrecoverably broken, and `TAR_BAD_ARCHIVE` will be raised. 
* `TAR_ENTRY_ERROR` The entry appears to be a valid tar archive entry, but encountered an error which prevented it from being unpacked. This occurs when:
  - an unrecoverable fs error happens during unpacking,
  - an entry has `..` in the path and `preservePaths` is not set, or
  - an entry is extracting through a symbolic link, when `preservePaths` is not set.
* `TAR_ENTRY_UNSUPPORTED` An indication that a given entry is a valid archive entry, but of a type that is unsupported, and so will be skipped in archive creation or extracting.
* `TAR_ABORT` When parsing gzipped-encoded archives, the parser will abort the parse process and raise a warning for any zlib errors encountered. Aborts are considered unrecoverable for both parsing and unpacking.
* `TAR_BAD_ARCHIVE` The archive file is totally hosed. This can happen for a number of reasons, and always occurs at the end of a parse or extract:
  - An entry body was truncated before seeing the full number of bytes.
  - The archive contained only invalid entries, indicating that it is likely not an archive, or at least, not an archive this library can parse.
  `TAR_BAD_ARCHIVE` is considered informative for parse operations, but unrecoverable for extraction. Note that, if encountered at the end of an extraction, tar WILL still have extracted as much as it could from the archive, so there may be some garbage files to clean up.

Errors that occur deeper in the system (ie, either the filesystem or zlib) will have their error codes left intact, and a `tarCode` matching one of the above will be added to the warning metadata or the raised error object.

Errors generated by tar will have one of the above codes set as the `error.code` field as well, but since errors originating in zlib or fs will have their original codes, it's better to read `error.tarCode` if you wish to see how tar is handling the issue.

### Examples

The API mimics the `tar(1)` command line functionality, with aliases for more human-readable option and function names. The goal is that if you know how to use `tar(1)` in Unix, then you know how to use `require('tar')` in JavaScript.

To replicate `tar czf my-tarball.tgz files and folders`, you'd do:

```js
tar.c(
  {
    gzip: <true|gzip options>,
    file: 'my-tarball.tgz'
  },
  ['some', 'files', 'and', 'folders']
).then(_ => { .. tarball has been created .. })
```

To replicate `tar cz files and folders > my-tarball.tgz`, you'd do:

```js
tar.c( // or tar.create
  {
    gzip: <true|gzip options>
  },
  ['some', 'files', 'and', 'folders']
).pipe(fs.createWriteStream('my-tarball.tgz'))
```

To replicate `tar xf my-tarball.tgz` you'd do:

```js
tar.x( // or tar.extract(
  {
    file: 'my-tarball.tgz'
  }
).then(_=> { .. tarball has been dumped in cwd .. })
```

To replicate `cat my-tarball.tgz | tar x -C some-dir --strip=1`:

```js
fs.createReadStream('my-tarball.tgz').pipe(
  tar.x({
    strip: 1,
    C: 'some-dir' // alias for cwd:'some-dir', also ok
  })
)
```

To replicate `tar tf my-tarball.tgz`, do this:

```js
tar.t({
  file: 'my-tarball.tgz',
  onentry: entry => { .. do whatever with it .. }
})
```

To replicate `cat my-tarball.tgz | tar t` do:

```js
fs.createReadStream('my-tarball.tgz')
  .pipe(tar.t())
  .on('entry', entry => { .. do whatever with it .. })
```

To do anything synchronous, add `sync: true` to the options. Note that sync functions don't take a callback and don't return a promise. When the function returns, it's already done. Sync methods without a file argument return a sync stream, which flushes immediately. But, of course, it still won't be done until you `.end()` it.
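For example, a minimal sketch of the sync, stream-returning form described above (file names and paths are just placeholders) might look like:

```js
const fs = require('fs')
const tar = require('tar')

// With sync: true and no `file` option, tar.c returns a sync stream whose
// archive data is already available when the call returns.
const pack = tar.c({ sync: true, gzip: true }, ['some', 'files', 'and', 'folders'])

// Read the archive bytes out of the stream and write them wherever needed.
fs.writeFileSync('my-tarball.tgz', pack.read())
```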
To filter entries, add `filter: <function>` to the options. Tar-creating methods call the filter with `filter(path, stat)`. Tar-reading methods (including extraction) call the filter with `filter(path, entry)`. The filter is called in the `this`-context of the `Pack` or `Unpack` stream object. The arguments list to `tar t` and `tar x` specify a list of filenames to extract or list, so they're equivalent to a filter that tests if the file is in the list. For those who _aren't_ fans of tar's single-character command names: ``` tar.c === tar.create tar.r === tar.replace (appends to archive, file is required) tar.u === tar.update (appends if newer, file is required) tar.x === tar.extract tar.t === tar.list ``` Keep reading for all the command descriptions and options, as well as the low-level API that they are built on. ### tar.c(options, fileList, callback) [alias: tar.create] Create a tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Write the tarball archive to the specified filename. If this is specified, then the callback will be fired when the file has been written, and a promise will be returned that resolves when the file is written. If a filename is not specified, then a Readable Stream will be returned which will emit the file data. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. If this is set, and a file is not provided, then the resulting stream will already have the data ready to `read` or `emit('data')` as soon as you request it. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `mode` The mode to set on the created file archive - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. 
Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. ### tar.x(options, fileList, callback) [alias: tar.extract] Extract a tarball archive. The `fileList` is an array of paths to extract from the tarball. If no paths are provided, then all the entries are extracted. If the archive is gzipped, then tar will detect this and unzip it. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. Most extraction errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then the extraction will fail completely. The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. [Alias: `C`] - `file` The archive file to extract. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Create files and directories synchronously. - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. [Alias: `keep-newer`, `keep-newer-files`] - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. [Alias: `k`, `keep-existing`] - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. [Alias: `P`] - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. [Alias: `U`] - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. [Alias: `strip-components`, `stripComponents`] - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. 
If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. [Alias: `p`] - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. [Alias: `m`, `no-mtime`] - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync extractions. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### tar.t(options, fileList, callback) [alias: tar.list] List the contents of a tarball archive. The `fileList` is an array of paths to list from the tarball. If no paths are provided, then all the entries are listed. If the archive is gzipped, then tar will detect this and unzip it. Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. However, they don't emit `'data'` or `'end'` events. (If you want to get actual readable entries, use the `tar.Parse` class instead.) The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. [Alias: `C`] - `file` The archive file to list. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Read the specified file synchronously. (This has no effect when a file option isn't specified, because entries are emitted as fast as they are parsed from the stream anyway.) - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. This is important for when both `file` and `sync` are set, because it will be called synchronously. 
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noResume` By default, `entry` streams are resumed immediately after the call to `onentry`. Set `noResume: true` to suppress this behavior. Note that by opting into this, the stream will never complete until the entry data is consumed. ### tar.u(options, fileList, callback) [alias: tar.update] Add files to an archive if they are newer than the entry already in the tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ### tar.r(options, fileList, callback) [alias: tar.replace] Add files to an existing archive. Because later entries override earlier entries, this effectively replaces any existing entries. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. 
If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ## Low-Level API ### class tar.Pack A readable tar stream. Has all the standard readable stream interface stuff. `'data'` and `'end'` events, `read()` method, `pause()` and `resume()`, etc. #### constructor(options) The following options are supported: - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. 
- `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. #### add(path) Adds an entry to the archive. Returns the Pack stream. #### write(path) Adds an entry to the archive. Returns true if flushed. #### end() Finishes the archive. ### class tar.Pack.Sync Synchronous version of `tar.Pack`. ### class tar.Unpack A writable stream that unpacks a tar archive onto the file system. All the normal writable stream stuff is supported. `write()` and `end()` methods, `'drain'` events, etc. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. `'close'` is emitted when it's done writing stuff to the file system. Most unpack errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then an error will be emitted. #### constructor(options) - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. 
- `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. - `win32` True if on a windows platform. Causes behavior where filenames containing `<|>?` chars are converted to windows-compatible values while being unpacked. - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `strict` Treat warnings as crash-worthy errors. Default false. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") ### class tar.Unpack.Sync Synchronous version of `tar.Unpack`. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync unpack streams. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### class tar.Parse A writable stream that parses a tar archive stream. All the standard writable stream stuff is supported. If the archive is gzipped, then tar will detect this and unzip it. Emits `'entry'` events with `tar.ReadEntry` objects, which are themselves readable streams that you can pipe wherever. Each `entry` will not emit until the one before it is flushed through, so make sure to either consume the data (with `on('data', ...)` or `.pipe(...)`) or throw it away with `.resume()` to keep the stream flowing. #### constructor(options) Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. The following options are supported: - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") #### abort(error) Stop all parsing activities. This is called when there are zlib errors. It also emits an unrecoverable warning with the error provided. 
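As a quick orientation, here is a minimal sketch of consuming entries with `tar.Parse`; the archive filename and the filtering logic are illustrative assumptions, not part of the API described above.

```js
const fs = require('fs')
const tar = require('tar')

const parser = new tar.Parse({
  // only surface regular files; other entry types are skipped
  filter: (path, entry) => entry.type === 'File',
  onentry: entry => {
    console.log('entry:', entry.path, entry.size)
    // each entry must be consumed (or resumed) before the next one will emit
    entry.resume()
  }
})

// 'archive.tgz' is a hypothetical path; gzipped input is detected automatically
fs.createReadStream('archive.tgz').pipe(parser)
```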
### class tar.ReadEntry extends [MiniPass](http://npm.im/minipass)

A representation of an entry that is being read out of a tar archive.

It has the following fields:

- `extended` The extended metadata object provided to the constructor.
- `globalExtended` The global extended metadata object provided to the constructor.
- `remain` The number of bytes remaining to be written into the stream.
- `blockRemain` The number of 512-byte blocks remaining to be written into the stream.
- `ignore` Whether this entry should be ignored.
- `meta` True if this represents metadata about the next entry, false if it represents a filesystem object.
- All the fields from the header, extended header, and global extended header are added to the ReadEntry object. So it has `path`, `type`, `size`, `mode`, and so on.

#### constructor(header, extended, globalExtended)

Create a new ReadEntry object with the specified header, extended header, and global extended header values.

### class tar.WriteEntry extends [MiniPass](http://npm.im/minipass)

A representation of an entry that is being written from the file system into a tar archive.

Emits data for the Header, and for the Pax Extended Header if one is required, as well as any body data.

Creating a WriteEntry for a directory does not also create WriteEntry objects for all of the directory contents.

It has the following fields:

- `path` The path field that will be written to the archive. By default, this is also the path from the cwd to the file system object.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`.
- `myuid` If supported, the uid of the user running the current process.
- `myuser` The `env.USER` string if set, or `''`. Set as the entry `uname` field if the file's `uid` matches `this.myuid`.
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB.
- `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`.
- `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/` and filenames containing the windows-compatible forms of `<|>?:` characters are converted to actual `<|>?:` characters in the archive.
- `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.

#### constructor(path, options)

`path` is the path of the entry as it is written in the archive.
The following options are supported:

- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`.
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB.
- `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`.
- `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/`.
- `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors")
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.
- `umask` Set to restrict the modes on the entries in the archive, somewhat like how umask works on file creation. Defaults to `process.umask()` on unix systems, or `0o22` on Windows.

#### warn(message, data)

If strict, emit an error with the provided message. Otherwise, emit a `'warn'` event with the provided message and data.

### class tar.WriteEntry.Sync

Synchronous version of tar.WriteEntry.

### class tar.WriteEntry.Tar

A version of tar.WriteEntry that gets its data from a tar.ReadEntry instead of from the filesystem.

#### constructor(readEntry, options)

`readEntry` is the entry being read out of another archive.

The following options are supported:

- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors")
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.

### class tar.Header

A class for reading and writing header blocks.

It has the following fields:

- `nullBlock` True if decoding a block which is entirely composed of `0x00` null bytes. (Useful because tar files are terminated by at least 2 null blocks.)
- `cksumValid` True if the checksum in the header is valid, false otherwise.
- `needPax` True if the values, as encoded, will require a Pax extended header.
- `path` The path of the entry.
- `mode` The 4 lowest-order octal digits of the file mode.
That is, read/write/execute permissions for world, group, and owner, and the setuid, setgid, and sticky bits. - `uid` Numeric user id of the file owner - `gid` Numeric group id of the file owner - `size` Size of the file in bytes - `mtime` Modified time of the file - `cksum` The checksum of the header. This is generated by adding all the bytes of the header block, treating the checksum field itself as all ascii space characters (that is, `0x20`). - `type` The human-readable name of the type of entry this represents, or the alphanumeric key if unknown. - `typeKey` The alphanumeric key for the type of entry this header represents. - `linkpath` The target of Link and SymbolicLink entries. - `uname` Human-readable user name of the file owner - `gname` Human-readable group name of the file owner - `devmaj` The major portion of the device number. Always `0` for files, directories, and links. - `devmin` The minor portion of the device number. Always `0` for files, directories, and links. - `atime` File access time. - `ctime` File change time. #### constructor(data, [offset=0]) `data` is optional. It is either a Buffer that should be interpreted as a tar Header starting at the specified offset and continuing for 512 bytes, or a data object of keys and values to set on the header object, and eventually encode as a tar Header. #### decode(block, offset) Decode the provided buffer starting at the specified offset. Buffer length must be greater than 512 bytes. #### set(data) Set the fields in the data object. #### encode(buffer, offset) Encode the header fields into the buffer at the specified offset. Returns `this.needPax` to indicate whether a Pax Extended Header is required to properly encode the specified data. ### class tar.Pax An object representing a set of key-value pairs in an Pax extended header entry. It has the following fields. Where the same name is used, they have the same semantics as the tar.Header field of the same name. - `global` True if this represents a global extended header, or false if it is for a single entry. - `atime` - `charset` - `comment` - `ctime` - `gid` - `gname` - `linkpath` - `mtime` - `path` - `size` - `uid` - `uname` - `dev` - `ino` - `nlink` #### constructor(object, global) Set the fields set in the object. `global` is a boolean that defaults to false. #### encode() Return a Buffer containing the header and body for the Pax extended header entry, or `null` if there is nothing to encode. #### encodeBody() Return a string representing the body of the pax extended header entry. #### encodeField(fieldName) Return a string representing the key/value encoding for the specified fieldName, or `''` if the field is unset. ### tar.Pax.parse(string, extended, global) Return a new Pax object created by parsing the contents of the string provided. If the `extended` object is set, then also add the fields from that object. (This is necessary because multiple metadata entries can occur in sequence.) ### tar.types A translation table for the `type` field in tar headers. #### tar.types.name.get(code) Get the human-readable name for a given alphanumeric code. #### tar.types.code.get(name) Get the alphanumeric code for a given human-readable name. 
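To round out the header documentation, the sketch below encodes a header data object into a 512-byte block, decodes it back, and looks up type codes. The field values are made up for illustration; only the constructor, `encode`, and `tar.types` behavior described above are assumed.

```js
const tar = require('tar')

// encode a header data object into a 512-byte block
const header = new tar.Header({
  path: 'hello.txt',
  mode: 0o644,
  size: 5,
  type: 'File',
  mtime: new Date()
})
const block = Buffer.alloc(512)
const needPax = header.encode(block, 0) // false for small values like these

// decode the same block back into a header object
const decoded = new tar.Header(block, 0)
console.log(decoded.path, decoded.size, decoded.cksumValid)

// translate between type codes and human-readable names
console.log(tar.types.name.get('0'))         // 'File'
console.log(tar.types.code.get('Directory')) // '5'
```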
# cliui [![Build Status](https://travis-ci.org/yargs/cliui.svg)](https://travis-ci.org/yargs/cliui) [![Coverage Status](https://coveralls.io/repos/yargs/cliui/badge.svg?branch=)](https://coveralls.io/r/yargs/cliui?branch=) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) easily create complex multi-column command-line-interfaces. ## Example ```js var ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 2, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. # ASBuild A simple build tool for [AssemblyScript](https://assemblyscript.org) projects, similar to `cargo`, etc. ## Usage ``` asb [entry file] [options] -- [args passed to asc] ``` ### Background AssemblyScript greater than v0.14.4 provides a `asconfig.json` configuration file that can be used to describe the options for building a project. ASBuild uses this and some defaults to create an easier CLI interface. ### Defaults #### Project structure ``` project/ package.json asconfig.json assembly/ index.ts build/ release/ project.wasm debug/ project.wasm ``` - If no entry file passed and no `entry` field is in `asconfig.json`, `project/assembly/index.ts` is assumed. - `asconfig.json` allows for options for different compile targets, e.g. release, debug, etc. `asc` defaults to the release target. - The default build directory is `./build`, and artifacts are placed at `./build/<target>/packageName.wasm`. 
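For example, with the default layout above, the following invocations would be typical; `--sourceMap` and `--measure` are shown purely as examples of `asc` flags forwarded after `--` (see the compiler frontend section later in this document):

```
# Build the default entry (assembly/index.ts) with the default release target
asb

# Build an explicit entry file and forward extra flags to asc
asb assembly/index.ts -- --sourceMap --measure
```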
### Workspaces If a `workspace` field is added to a top level `asconfig.json` file, then each path in the array is built and placed into the top level `outDir`. For example, `asconfig.json`: ```json { "workspaces": ["a", "b"] } ``` Running `asb` in the directory below will use the top level build directory to place all the binaries. ``` project/ package.json asconfig.json a/ asconfig.json assembly/ index.ts b/ asconfig.json assembly/ index.ts build/ release/ a.wasm b.wasm debug/ a.wasm b.wasm ``` To see an example in action check out the [test workspace](./test) Railroad-diagram Generator ========================== This is a small js library for generating railroad diagrams (like what [JSON.org](http://json.org) uses) using SVG. Railroad diagrams are a way of visually representing a grammar in a form that is more readable than using regular expressions or BNF. I think (though I haven't given it a lot of thought yet) that if it's easy to write a context-free grammar for the language, the corresponding railroad diagram will be easy as well. There are several railroad-diagram generators out there, but none of them had the visual appeal I wanted. [Here's an example of how they look!](http://www.xanthir.com/etc/railroad-diagrams/example.html) And [here's an online generator for you to play with and get SVG code from!](http://www.xanthir.com/etc/railroad-diagrams/generator.html) The library now exists in a Python port as well! See the information further down. Details ------- To use the library, just include the js and css files, and then call the Diagram() function. Its arguments are the components of the diagram (Diagram is a special form of Sequence). An alternative to Diagram() is ComplexDiagram() which is used to describe a complex type diagram. Components are either leaves or containers. The leaves: * Terminal(text) or a bare string - represents literal text * NonTerminal(text) - represents an instruction or another production * Comment(text) - a comment * Skip() - an empty line The containers: * Sequence(children) - like simple concatenation in a regex * Choice(index, children) - like | in a regex. The index argument specifies which child is the "normal" choice and should go in the middle * Optional(child, skip) - like ? in a regex. A shorthand for `Choice(1, [Skip(), child])`. If the optional `skip` parameter has the value `"skip"`, it instead puts the Skip() in the straight-line path, for when the "normal" behavior is to omit the item. * OneOrMore(child, repeat) - like + in a regex. The 'repeat' argument is optional, and specifies something that must go between the repetitions. * ZeroOrMore(child, repeat, skip) - like * in a regex. A shorthand for `Optional(OneOrMore(child, repeat))`. The optional `skip` parameter is identical to Optional(). For convenience, each component can be called with or without `new`. If called without `new`, the container components become n-ary; that is, you can say either `new Sequence([A, B])` or just `Sequence(A,B)`. After constructing a Diagram, call `.format(...padding)` on it, specifying 0-4 padding values (just like CSS) for some additional "breathing space" around the diagram (the paddings default to 20px). The result can either be `.toString()`'d for the markup, or `.toSVG()`'d for an `<svg>` element, which can then be immediately inserted to the document. As a convenience, Diagram also has an `.addTo(element)` method, which immediately converts it to SVG and appends it to the referenced element with default paddings. 
`element` defaults to `document.body`. Options ------- There are a few options you can tweak, at the bottom of the file. Just tweak either until the diagram looks like what you want. You can also change the CSS file - feel free to tweak to your heart's content. Note, though, that if you change the text sizes in the CSS, you'll have to go adjust the metrics for the leaf nodes as well. * VERTICAL_SEPARATION - sets the minimum amount of vertical separation between two items. Note that the stroke width isn't counted when computing the separation; this shouldn't be relevant unless you have a very small separation or very large stroke width. * ARC_RADIUS - the radius of the arcs used in the branching containers like Choice. This has a relatively large effect on the size of non-trivial diagrams. Both tight and loose values look good, depending on what you're going for. * DIAGRAM_CLASS - the class set on the root `<svg>` element of each diagram, for use in the CSS stylesheet. * STROKE_ODD_PIXEL_LENGTH - the default stylesheet uses odd pixel lengths for 'stroke'. Due to rasterization artifacts, they look best when the item has been translated half a pixel in both directions. If you change the styling to use a stroke with even pixel lengths, you'll want to set this variable to `false`. * INTERNAL_ALIGNMENT - when some branches of a container are narrower than others, this determines how they're aligned in the extra space. Defaults to "center", but can be set to "left" or "right". Caveats ------- At this early stage, the generator is feature-complete and works as intended, but still has several TODOs: * The font-sizes are hard-coded right now, and the font handling in general is very dumb - I'm just guessing at some metrics that are probably "good enough" rather than measuring things properly. Python Port ----------- In addition to the canonical JS version, the library now exists as a Python library as well. Using it is basically identical. The config variables are globals in the file, and so may be adjusted either manually or via tweaking from inside your program. The main difference from the JS port is how you extract the string from the Diagram. You'll find a `writeSvg(writerFunc)` method on `Diagram`, which takes a callback of one argument and passes it the string form of the diagram. For example, it can be used like `Diagram(...).writeSvg(sys.stdout.write)` to write to stdout. **Note**: the callback will be called multiple times as it builds up the string, not just once with the whole thing. If you need it all at once, consider something like a `StringIO` as an easy way to collect it into a single string. License ------- This document and all associated files in the github project are licensed under [CC0](http://creativecommons.org/publicdomain/zero/1.0/) ![](http://i.creativecommons.org/p/zero/1.0/80x15.png). This means you can reuse, remix, or otherwise appropriate this project for your own use **without restriction**. (The actual legal meaning can be found at the above link.) Don't ask me for permission to use any part of this project, **just use it**. I would appreciate attribution, but that is not required by the license. assemblyscript-json # assemblyscript-json ## Table of contents ### Namespaces - [JSON](modules/json.md) ### Classes - [DecoderState](classes/decoderstate.md) - [JSONDecoder](classes/jsondecoder.md) - [JSONEncoder](classes/jsonencoder.md) - [JSONHandler](classes/jsonhandler.md) - [ThrowingJSONHandler](classes/throwingjsonhandler.md) Like `chown -R`. 
Takes the same arguments as `fs.chown()` Compiler frontend for node.js ============================= Usage ----- For an up to date list of available command line options, see: ``` $> asc --help ``` API --- The API accepts the same options as the CLI but also lets you override stdout and stderr and/or provide a callback. Example: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { asc.main([ "myModule.ts", "--binaryFile", "myModule.wasm", "--optimize", "--sourceMap", "--measure" ], { stdout: process.stdout, stderr: process.stderr }, function(err) { if (err) throw err; ... }); }); ``` Available command line options can also be obtained programmatically: ```js const options = require("assemblyscript/cli/asc.json"); ... ``` You can also compile a source string directly, for example in a browser environment: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { const { binary, text, stdout, stderr } = asc.compileString(`...`, { optimize: 2 }); }); ... ``` # minipass A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough) [It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers. Supports pipe()ing (including multi-pipe() and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea. There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory. There is also no `unpipe()` method. Once you start piping, there is no stopping it! If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called. `objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set. This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. Once you're done transforming the data however you want, call `super.write()` with the transform output. For some examples of streams that extend Minipass in various ways, check out: - [minizlib](http://npm.im/minizlib) - [fs-minipass](http://npm.im/fs-minipass) - [tar](http://npm.im/tar) - [minipass-collect](http://npm.im/minipass-collect) - [minipass-flush](http://npm.im/minipass-flush) - [minipass-pipeline](http://npm.im/minipass-pipeline) - [tap](http://npm.im/tap) - [tap-parser](http://npm.im/tap) - [treport](http://npm.im/tap) - [minipass-fetch](http://npm.im/minipass-fetch) - [pacote](http://npm.im/pacote) - [make-fetch-happen](http://npm.im/make-fetch-happen) - [cacache](http://npm.im/cacache) - [ssri](http://npm.im/ssri) - [npm-registry-fetch](http://npm.im/npm-registry-fetch) - [minipass-json-stream](http://npm.im/minipass-json-stream) - [minipass-sized](http://npm.im/minipass-sized) ## Differences from Node.js Streams There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams. 
Please read these caveats if you are familiar with node-core streams and intend to use Minipass streams in your programs.

### Timing

Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer. Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them.

This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately.

However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time.

This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms.

### No High/Low Water Marks

Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value.

Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns).

If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it.

### Hazards of Buffering (or: Why Minipass Is So Fast)

Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller. This minimizes buffering.

Consider this case:

```js
const {PassThrough} = require('stream')
const p1 = new PassThrough({ highWaterMark: 1024 })
const p2 = new PassThrough({ highWaterMark: 1024 })
const p3 = new PassThrough({ highWaterMark: 1024 })
const p4 = new PassThrough({ highWaterMark: 1024 })

p1.pipe(p2).pipe(p3).pipe(p4)
p4.on('data', () => console.log('made it through'))

// this returns false and buffers, then writes to p2 on next tick (1)
// p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2)
// p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3)
// p4 returns false and buffers, pausing p3, then emits 'data' and 'drain'
// on next tick (4)
// p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and
// 'drain' on next tick (5)
// p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6)
// p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next
// tick (7)

p1.write(Buffer.alloc(2048)) // returns false
```

Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through!

Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket.
Consider the Minipass case:

```js
const m1 = new Minipass()
const m2 = new Minipass()
const m3 = new Minipass()
const m4 = new Minipass()

m1.pipe(m2).pipe(m3).pipe(m4)
m4.on('data', () => console.log('made it through'))

// m1 is flowing, so it writes the data to m2 immediately
// m2 is flowing, so it writes the data to m3 immediately
// m3 is flowing, so it writes the data to m4 immediately
// m4 is flowing, so it fires the 'data' event immediately, returns true
// m4's write returned true, so m3 is still flowing, returns true
// m3's write returned true, so m2 is still flowing, returns true
// m2's write returned true, so m1 is still flowing, returns true
// No event deferrals or buffering along the way!

m1.write(Buffer.alloc(2048)) // returns true
```

It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing.

As always, the faster implementation is the one that does less stuff and waits less time to do it.

### Immediately emit `end` for empty streams (when not paused)

If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately.

If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event.

### Emit `end` When Asked

One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted.

Ie, if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. (You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.)

To prevent calling handlers multiple times who would not expect multiple ends to occur, all listeners are removed from the `'end'` event whenever it is emitted.

### Impact of "immediate flow" on Tee-streams

A "tee stream" is a stream piping to multiple destinations:

```js
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
tee.write('foo') // goes to both destinations
```

Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer.

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone
src.pipe(dest2) // gets nothing!
```

The solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead.

```js
// Safe example: tee to both places
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
src.pipe(tee) // tee gets 'foo', pipes to both locations
```

The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second: ```js // WARNING! WILL LOSE DATA!
const src = new Minipass() src.write('foo') src.on('data', handler1) // receives 'foo' right away src.on('data', handler2) // nothing to see here! ``` Using a dedicated tee-stream can be used in this case as well: ```js // Safe example: tee to both data handlers const src = new Minipass() src.write('foo') const tee = new Minipass() tee.on('data', handler1) tee.on('data', handler2) src.pipe(tee) ``` ## USAGE It's a stream! Use it like a stream and it'll most likely do what you want. ```js const Minipass = require('minipass') const mp = new Minipass(options) // optional: { encoding, objectMode } mp.write('foo') mp.pipe(someOtherStream) mp.end('bar') ``` ### OPTIONS * `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`. * `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value. ### API Implements the user-facing portions of Node.js's `Readable` and `Writable` streams. ### Methods * `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or true if it's still in "flowing" mode. * `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed. * `setEncoding(encoding)` - Set the encoding for data coming of the stream. This can only be done once. * `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed. * `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted. * `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations. * `on(ev, fn)`, `emit(ev, fn)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".) * `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`. * `collect()` - Return a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data. * `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data. * `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying. * `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered. ### Properties * `bufferLength` Read-only. 
Total number of bytes buffered, or in the case of objectMode, the total number of objects.

* `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.)
* `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted.
* `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediately re-emit it if it has already been emitted.
* `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` is called.
* `readable` Whether the stream is readable. Default `true`.
* `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.)
* `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.)
* `destroyed` A getter that indicates whether the stream was destroyed.
* `paused` True if the stream has been explicitly paused, otherwise false.
* `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`.

### Events

* `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream.
* `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted.
* `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`.
* `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`.
* `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise.
* `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream.
* `readable` Emitted when data is buffered and ready to be read by a consumer.
* `resume` Emitted when stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.)

### Static Methods

* `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and false otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.)

## EXAMPLES

Here are some examples of things you can do with Minipass streams.

### simple "are you done yet" promise

```js
mp.promise().then(() => {
  // stream is finished
}, er => {
  // stream emitted an error
})
```

### collecting ```js mp.collect().then(all => { // all is an array of all the data emitted // encoding is supported in this case, so // so the result will be a collection of strings if // an encoding is specified, or buffers/objects if not.
// // In an async function, you may do // const data = await stream.collect() }) ``` ### collecting into a single blob This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way: ```js mp.concat().then(onebigchunk => { // onebigchunk is a string if the stream // had an encoding set, or a buffer otherwise. }) ``` ### iteration You can iterate over streams synchronously or asynchronously in platforms that support it. Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration. To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option. ```js const mp = new Minipass({ objectMode: true }) mp.write('a') mp.write('b') for (let letter of mp) { console.log(letter) // a, b } mp.write('c') mp.write('d') for (let letter of mp) { console.log(letter) // c, d } mp.write('e') mp.end() for (let letter of mp) { console.log(letter) // e } for (let letter of mp) { console.log(letter) // nothing } ``` Asynchronous iteration will continue until the end event is reached, consuming all of the data. ```js const mp = new Minipass({ encoding: 'utf8' }) // some source of some data let i = 5 const inter = setInterval(() => { if (i --> 0) mp.write(Buffer.from('foo\n', 'utf8')) else { mp.end() clearInterval(inter) } }, 100) // consume the data with asynchronous iteration async function consume () { for await (let chunk of mp) { console.log(chunk) } return 'ok' } consume().then(res => console.log(res)) // logs `foo\n` 5 times, and then `ok` ``` ### subclass that `console.log()`s everything written into it ```js class Logger extends Minipass { write (chunk, encoding, callback) { console.log('WRITE', chunk, encoding) return super.write(chunk, encoding, callback) } end (chunk, encoding, callback) { console.log('END', chunk, encoding) return super.end(chunk, encoding, callback) } } someSource.pipe(new Logger()).pipe(someDest) ``` ### same thing, but using an inline anonymous class ```js // js classes are fun someSource .pipe(new (class extends Minipass { emit (ev, ...data) { // let's also log events, because debugging some weird thing console.log('EMIT', ev) return super.emit(ev, ...data) } write (chunk, encoding, callback) { console.log('WRITE', chunk, encoding) return super.write(chunk, encoding, callback) } end (chunk, encoding, callback) { console.log('END', chunk, encoding) return super.end(chunk, encoding, callback) } })) .pipe(someDest) ``` ### subclass that defers 'end' for some reason ```js class SlowEnd extends Minipass { emit (ev, ...args) { if (ev === 'end') { console.log('going to end, hold on a sec') setTimeout(() => { console.log('ok, ready to end now') super.emit('end', ...args) }, 100) } else { return super.emit(ev, ...args) } } } ``` ### transform that creates newline-delimited JSON ```js class NDJSONEncode extends Minipass { write (obj, cb) { try { // JSON.stringify can throw, emit an error on that return super.write(JSON.stringify(obj) + '\n', 'utf8', cb) } catch (er) { this.emit('error', er) } } end (obj, cb) { if (typeof obj === 'function') { cb = obj obj = undefined } if (obj !== undefined) { this.write(obj) } return super.end(cb) } } ``` ### transform that parses 
newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor (options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write (chunk, encoding, cb) {
    if (typeof chunk === 'string' &&
        typeof encoding === 'string' &&
        encoding !== 'utf8') {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      let parsed
      try {
        // JSON.parse can throw; emit an error on bad input and move on
        parsed = JSON.parse(jsonData[i])
      } catch (er) {
        this.emit('error', er)
        continue
      }
      super.write(parsed)
    }
    if (cb)
      cb()
  }
}
```

# hasurl

[![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url]

> Determine whether Node.js' native [WHATWG `URL`](https://nodejs.org/api/url.html#url_the_whatwg_url_api) implementation is available.

## Installation

[Node.js](http://nodejs.org/) `>= 4` is required. To install, type this at the command line:

```shell
npm install hasurl
```

## Usage

```js
const hasURL = require('hasurl');

if (hasURL()) {
  // supported
} else {
  // fallback
}
```

[npm-image]: https://img.shields.io/npm/v/hasurl.svg
[npm-url]: https://npmjs.org/package/hasurl
[travis-image]: https://img.shields.io/travis/stevenvachon/hasurl.svg
[travis-url]: https://travis-ci.org/stevenvachon/hasurl

# assemblyscript-json

![npm version](https://img.shields.io/npm/v/assemblyscript-json) ![npm downloads per month](https://img.shields.io/npm/dm/assemblyscript-json)

JSON encoder / decoder for AssemblyScript.

Special thanks to https://github.com/MaxGraey/bignum.wasm for basic unit testing infra for AssemblyScript.

## Installation

`assemblyscript-json` is available as an [npm package](https://www.npmjs.com/package/assemblyscript-json).
You can install `assemblyscript-json` in your AssemblyScript project by running: `npm install --save assemblyscript-json` ## Usage ### Parsing JSON ```typescript import { JSON } from "assemblyscript-json"; // Parse an object using the JSON object let jsonObj: JSON.Obj = <JSON.Obj>(JSON.parse('{"hello": "world", "value": 24}')); // We can then use the .getX functions to read from the object if you know it's type // This will return the appropriate JSON.X value if the key exists, or null if the key does not exist let worldOrNull: JSON.Str | null = jsonObj.getString("hello"); // This will return a JSON.Str or null if (worldOrNull != null) { // use .valueOf() to turn the high level JSON.Str type into a string let world: string = worldOrNull.valueOf(); } let numOrNull: JSON.Num | null = jsonObj.getNum("value"); if (numOrNull != null) { // use .valueOf() to turn the high level JSON.Num type into a f64 let value: f64 = numOrNull.valueOf(); } // If you don't know the value type, get the parent JSON.Value let valueOrNull: JSON.Value | null = jsonObj.getValue("hello"); if (valueOrNull != null) { let value: JSON.Value = changetype<JSON.Value>(valueOrNull); // Next we could figure out what type we are if(value.isString) { // value.isString would be true, so we can cast to a string let stringValue: string = changetype<JSON.Str>(value).toString(); // Do something with string value } } ``` ### Encoding JSON ```typescript import { JSONEncoder } from "assemblyscript-json"; // Create encoder let encoder = new JSONEncoder(); // Construct necessary object encoder.pushObject("obj"); encoder.setInteger("int", 10); encoder.setString("str", ""); encoder.popObject(); // Get serialized data let json: Uint8Array = encoder.serialize(); // Or get serialized data as string let jsonString: string = encoder.toString(); assert(jsonString, '"obj": {"int": 10, "str": ""}'); // True! ``` ### Custom JSON Deserializers ```typescript import { JSONDecoder, JSONHandler } from "assemblyscript-json"; // Events need to be received by custom object extending JSONHandler. // NOTE: All methods are optional to implement. class MyJSONEventsHandler extends JSONHandler { setString(name: string, value: string): void { // Handle field } setBoolean(name: string, value: bool): void { // Handle field } setNull(name: string): void { // Handle field } setInteger(name: string, value: i64): void { // Handle field } setFloat(name: string, value: f64): void { // Handle field } pushArray(name: string): bool { // Handle array start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popArray(): void { // Handle array end } pushObject(name: string): bool { // Handle object start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popObject(): void { // Handle object end } } // Create decoder let decoder = new JSONDecoder<MyJSONEventsHandler>(new MyJSONEventsHandler()); // Create a byte buffer of our JSON. NOTE: Deserializers work on UTF8 string buffers. let jsonString = '{"hello": "world"}'; let jsonBuffer = Uint8Array.wrap(String.UTF8.encode(jsonString)); // Parse JSON decoder.deserialize(jsonBuffer); // This will send events to MyJSONEventsHandler ``` Feel free to look through the [tests](https://github.com/nearprotocol/assemblyscript-json/tree/master/assembly/__tests__) for more usage examples. 
## Reference Documentation

Reference API Documentation can be found in the [docs directory](./docs).

## License

[MIT](./LICENSE)

# Regular Expression Tokenizer

Tokenizes strings that represent regular expressions.

[![Build Status](https://secure.travis-ci.org/fent/ret.js.svg)](http://travis-ci.org/fent/ret.js) [![Dependency Status](https://david-dm.org/fent/ret.js.svg)](https://david-dm.org/fent/ret.js) [![codecov](https://codecov.io/gh/fent/ret.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/ret.js)

# Usage

```js
var ret = require('ret');

var tokens = ret(/foo|bar/.source);
```

`tokens` will contain the following object:

```js
{
  "type": ret.types.ROOT,
  "options": [
    [
      { "type": ret.types.CHAR, "value": 102 },
      { "type": ret.types.CHAR, "value": 111 },
      { "type": ret.types.CHAR, "value": 111 }
    ],
    [
      { "type": ret.types.CHAR, "value": 98 },
      { "type": ret.types.CHAR, "value": 97 },
      { "type": ret.types.CHAR, "value": 114 }
    ]
  ]
}
```

# Token Types

`ret.types` is a collection of the various token types exported by ret.

### ROOT

Only used in the root of the regexp. This is needed due to the possibility of the root containing a pipe `|` character. In that case, the token will have an `options` key that will be an array of arrays of tokens. If not, it will contain a `stack` key that is an array of tokens.

```js
{
  "type": ret.types.ROOT,
  "stack": [token1, token2...],
}
```

```js
{
  "type": ret.types.ROOT,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ],
}
```

### GROUP

Groups contain tokens that are inside of a parenthesis. If the group begins with `?` followed by another character, it's a special type of group. A ':' tells the group not to be remembered when `exec` is used. '=' means the previous token matches only if followed by this group, and '!' means the previous token matches only if NOT followed.

Like root, it can contain an `options` key instead of `stack` if there is a pipe.

```js
{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "stack": [token1, token2...],
}
```

```js
{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ],
}
```

### POSITION

`\b`, `\B`, `^`, and `$` specify positions in the regexp.

```js
{
  "type": ret.types.POSITION,
  "value": "^",
}
```

### SET

Contains a key `set` specifying what tokens are allowed and a key `not` specifying if the set should be negated. A set can contain other sets, ranges, and characters.

```js
{
  "type": ret.types.SET,
  "set": [token1, token2...],
  "not": false,
}
```

### RANGE

Used in set tokens to specify a character range. `from` and `to` are character codes.

```js
{
  "type": ret.types.RANGE,
  "from": 97,
  "to": 122,
}
```

### REPETITION

```js
{
  "type": ret.types.REPETITION,
  "min": 0,
  "max": Infinity,
  "value": token,
}
```

### REFERENCE

References a group token. `value` is 1-9.

```js
{
  "type": ret.types.REFERENCE,
  "value": 1,
}
```

### CHAR

Represents a single character token. `value` is the character code. This might seem a bit cluttering instead of concatenating characters together. But since repetition tokens only repeat the last token and not the last clause like the pipe, it's simpler to do it this way.

```js
{
  "type": ret.types.CHAR,
  "value": 123,
}
```

## Errors

ret.js will throw errors if given a string with an invalid regular expression. All possible errors are:

* Invalid group. When a group with an immediate `?` character is followed by an invalid character.
It can only be followed by `!`, `=`, or `:`. Example: `/(?_abc)/` * Nothing to repeat. Thrown when a repetitional token is used as the first token in the current clause, as in right in the beginning of the regexp or group, or right after a pipe. Example: `/foo|?bar/`, `/{1,3}foo|bar/`, `/foo(+bar)/` * Unmatched ). A group was not opened, but was closed. Example: `/hello)2u/` * Unterminated group. A group was not closed. Example: `/(1(23)4/` * Unterminated character class. A custom character set was not closed. Example: `/[abc/` # Install npm install ret # Tests Tests are written with [vows](http://vowsjs.org/) ```bash npm test ``` # License MIT # yargs-parser ![ci](https://github.com/yargs/yargs-parser/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/yargs-parser) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. <img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js const argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```console $ node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js const argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```console { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js const parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## Deno Example As of `v19` `yargs-parser` supports [Deno](https://github.com/denoland/deno): ```typescript import parser from "https://deno.land/x/yargs_parser/deno.ts"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` ## ESM Example As of `v19` `yargs-parser` supports ESM (_both in Node.js and in the browser_): **Node.js:** ```js import parser from 'yargs-parser' const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` **Browsers:** ```html <!doctype html> <body> <script type="module"> import parser from "https://unpkg.com/yargs-parser@19.0.0/browser.js"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) </script> </body> ``` ## API ### parser(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). 
For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```console $ node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```console $ node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```console $ node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```console $ node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? 
```console
$ node example.js --foo.bar
{ _: [], foo: { bar: true } }
```

_if disabled:_

```console
$ node example.js --foo.bar
{ _: [], "foo.bar": true }
```

### parse numbers

* default: `true`
* key: `parse-numbers`

Should keys that look like numbers be treated as such?

```console
$ node example.js --foo=99.3
{ _: [], foo: 99.3 }
```

_if disabled:_

```console
$ node example.js --foo=99.3
{ _: [], foo: "99.3" }
```

### parse positional numbers

* default: `true`
* key: `parse-positional-numbers`

Should positional keys that look like numbers be treated as such?

```console
$ node example.js 99.3
{ _: [99.3] }
```

_if disabled:_

```console
$ node example.js 99.3
{ _: ['99.3'] }
```

### boolean negation

* default: `true`
* key: `boolean-negation`

Should variables prefixed with `--no` be treated as negations?

```console
$ node example.js --no-foo
{ _: [], foo: false }
```

_if disabled:_

```console
$ node example.js --no-foo
{ _: [], "no-foo": true }
```

### combine arrays

* default: `false`
* key: `combine-arrays`

Should arrays be combined when provided by both command line arguments and a configuration file?

### duplicate arguments array

* default: `true`
* key: `duplicate-arguments-array`

Should arguments be coerced into an array when duplicated:

```console
$ node example.js -x 1 -x 2
{ _: [], x: [1, 2] }
```

_if disabled:_

```console
$ node example.js -x 1 -x 2
{ _: [], x: 2 }
```

### flatten duplicate arrays

* default: `true`
* key: `flatten-duplicate-arrays`

Should array arguments be coerced into a single array when duplicated:

```console
$ node example.js -x 1 2 -x 3 4
{ _: [], x: [1, 2, 3, 4] }
```

_if disabled:_

```console
$ node example.js -x 1 2 -x 3 4
{ _: [], x: [[1, 2], [3, 4]] }
```

### greedy arrays

* default: `true`
* key: `greedy-arrays`

Should arrays consume more than one positional argument following their flag?

```console
$ node example --arr 1 2
{ _: [], arr: [1, 2] }
```

_if disabled:_

```console
$ node example --arr 1 2
{ _: [2], arr: [1] }
```

**Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.**

### nargs eats options

* default: `false`
* key: `nargs-eats-options`

Should nargs consume dash options as well as positional arguments?

### negation prefix

* default: `no-`
* key: `negation-prefix`

The prefix to use for negated boolean variables.

```console
$ node example.js --no-foo
{ _: [], foo: false }
```

_if set to `quux`:_

```console
$ node example.js --quuxfoo
{ _: [], foo: false }
```

### populate --

* default: `false`.
* key: `populate--`

Should unparsed flags be stored in `--` or `_`?

_If disabled:_

```console
$ node example.js a -b -- x y
{ _: [ 'a', 'x', 'y' ], b: true }
```

_If enabled:_

```console
$ node example.js a -b -- x y
{ _: [ 'a' ], '--': [ 'x', 'y' ], b: true }
```

### set placeholder key

* default: `false`.
* key: `set-placeholder-key`.

Should a placeholder be added for keys not set via the corresponding CLI argument?

_If disabled:_

```console
$ node example.js -a 1 -c 2
{ _: [], a: 1, c: 2 }
```

_If enabled:_

```console
$ node example.js -a 1 -c 2
{ _: [], a: 1, b: undefined, c: 2 }
```

### halt at non-option

* default: `false`.
* key: `halt-at-non-option`.

Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line.
_If disabled:_ ```console $ node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```console $ node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. _If disabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC # color-convert [![Build Status](https://travis-ci.org/Qix-/color-convert.svg?branch=master)](https://travis-ci.org/Qix-/color-convert) Color-convert is a color conversion library for JavaScript and node. It converts all ways between `rgb`, `hsl`, `hsv`, `hwb`, `cmyk`, `ansi`, `ansi16`, `hex` strings, and CSS `keyword`s (will round to closest): ```js var convert = require('color-convert'); convert.rgb.hsl(140, 200, 100); // [96, 48, 59] convert.keyword.rgb('blue'); // [0, 0, 255] var rgbChannels = convert.rgb.channels; // 3 var cmykChannels = convert.cmyk.channels; // 4 var ansiChannels = convert.ansi16.channels; // 1 ``` # Install ```console $ npm install color-convert ``` # API Simply get the property of the _from_ and _to_ conversion that you're looking for. All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on `.raw` to the function. All 'from' functions have a hidden property called `.channels` that indicates the number of channels the function expects (not including alpha). 
```js var convert = require('color-convert'); // Hex to LAB convert.hex.lab('DEADBF'); // [ 76, 21, -2 ] convert.hex.lab.raw('DEADBF'); // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ] // RGB to CMYK convert.rgb.cmyk(167, 255, 4); // [ 35, 0, 98, 0 ] convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ] ``` ### Arrays All functions that accept multiple arguments also support passing an array. Note that this does **not** apply to functions that convert from a color that only requires one value (e.g. `keyword`, `ansi256`, `hex`, etc.) ```js var convert = require('color-convert'); convert.rgb.hex(123, 45, 67); // '7B2D43' convert.rgb.hex([123, 45, 67]); // '7B2D43' ``` ## Routing Conversions that don't have an _explicitly_ defined conversion (in [conversions.js](conversions.js)), but can be converted by means of sub-conversions (e.g. XYZ -> **RGB** -> CMYK), are automatically routed together. This allows just about any color model supported by `color-convert` to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> **LAB** -> **XYZ** -> **RGB** -> Hex). Keep in mind that extensive conversions _may_ result in a loss of precision, and exist only to be complete. For a list of "direct" (single-step) conversions, see [conversions.js](conversions.js). # Contribute If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request. # License Copyright &copy; 2011-2016, Heather Arthur and Josh Junon. Licensed under the [MIT License](LICENSE). # randexp.js randexp will generate a random string that matches a given RegExp Javascript object. [![Build Status](https://secure.travis-ci.org/fent/randexp.js.svg)](http://travis-ci.org/fent/randexp.js) [![Dependency Status](https://david-dm.org/fent/randexp.js.svg)](https://david-dm.org/fent/randexp.js) [![codecov](https://codecov.io/gh/fent/randexp.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/randexp.js) # Usage ```js var RandExp = require('randexp'); // supports grouping and piping new RandExp(/hello+ (world|to you)/).gen(); // => hellooooooooooooooooooo world // sets and ranges and references new RandExp(/<([a-z]\w{0,20})>foo<\1>/).gen(); // => <m5xhdg>foo<m5xhdg> // wildcard new RandExp(/random stuff: .+/).gen(); // => random stuff: l3m;Hf9XYbI [YPaxV>U*4-_F!WXQh9>;rH3i l!8.zoh?[utt1OWFQrE ^~8zEQm]~tK // ignore case new RandExp(/xxx xtreme dragon warrior xxx/i).gen(); // => xxx xtReME dRAGON warRiOR xXX // dynamic regexp shortcut new RandExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i'); // is the same as new RandExp(new RegExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i')); ``` If you're only going to use `gen()` once with a regexp and want slightly shorter syntax for it ```js var randexp = require('randexp').randexp; randexp(/[1-6]/); // 4 randexp('great|good( job)?|excellent'); // great ``` If you miss the old syntax ```js require('randexp').sugar(); /yes|no|maybe|i don't know/.gen(); // maybe ``` # Motivation Regular expressions are used in every language, every programmer is familiar with them. Regex can be used to easily express complex strings. What better way to generate a random string than with a language you can use to express the string you want? 
Thanks to [String-Random](http://search.cpan.org/~steve/String-Random-0.22/lib/String/Random.pm) for giving me the idea to make this in the first place and [randexp](https://github.com/benburkert/randexp) for the sweet `.gen()` syntax. # Default Range The default generated character range includes printable ASCII. In order to add or remove characters, a `defaultRange` attribute is exposed. you can `subtract(from, to)` and `add(from, to)` ```js var randexp = new RandExp(/random stuff: .+/); randexp.defaultRange.subtract(32, 126); randexp.defaultRange.add(0, 65535); randexp.gen(); // => random stuff: 湐箻ໜ䫴␩⶛㳸長���邓蕲뤀쑡篷皇硬剈궦佔칗븛뀃匫鴔事좍ﯣ⭼ꝏ䭍詳蒂䥂뽭 ``` # Custom PRNG The default randomness is provided by `Math.random()`. If you need to use a seedable or cryptographic PRNG, you can override `RandExp.prototype.randInt` or `randexp.randInt` (where `randexp` is an instance of `RandExp`). `randInt(from, to)` accepts an inclusive range and returns a randomly selected number within that range. # Infinite Repetitionals Repetitional tokens such as `*`, `+`, and `{3,}` have an infinite max range. In this case, randexp looks at its min and adds 100 to it to get a useable max value. If you want to use another int other than 100 you can change the `max` property in `RandExp.prototype` or the RandExp instance. ```js var randexp = new RandExp(/no{1,}/); randexp.max = 1000000; ``` With `RandExp.sugar()` ```js var regexp = /(hi)*/; regexp.max = 1000000; ``` # Bad Regular Expressions There are some regular expressions which can never match any string. * Ones with badly placed positionals such as `/a^/` and `/$c/m`. Randexp will ignore positional tokens. * Back references to non-existing groups like `/(a)\1\2/`. Randexp will ignore those references, returning an empty string for them. If the group exists only after the reference is used such as in `/\1 (hey)/`, it will too be ignored. * Custom negated character sets with two sets inside that cancel each other out. Example: `/[^\w\W]/`. If you give this to randexp, it will return an empty string for this set since it can't match anything. # Projects based on randexp.js ## JSON-Schema Faker Use generators to populate JSON Schema samples. See: [jsf on github](https://github.com/json-schema-faker/json-schema-faker/) and [jsf demo page](http://json-schema-faker.js.org/). # Install ### Node.js npm install randexp ### Browser Download the [minified version](https://github.com/fent/randexp.js/releases) from the latest release. # Tests Tests are written with [mocha](https://mochajs.org) ```bash npm test ``` # License MIT # <img src="./logo.png" alt="bn.js" width="160" height="160" /> > BigNum in pure javascript [![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js) ## Install `npm install --save bn.js` ## Usage ```js const BN = require('bn.js'); var a = new BN('dead', 16); var b = new BN('101010', 2); var res = a.add(b); console.log(res.toString(10)); // 57047 ``` **Note**: decimals are not supported in this library. ## Notation ### Prefixes There are several prefixes to instructions that affect the way the work. Here is the list of them in the order of appearance in the function name: * `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs * `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like `mod()`. 
In such cases if the result will be negative - modulo will be added to the result to make it positive ### Postfixes * `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported. * `rn` - both argument and return value of the function are plain JavaScript Numbers. Decimals are not supported. ### Examples * `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a` * `a.umod(b)` - reduce `a` modulo `b`, returning positive value * `a.iushln(13)` - shift bits of `a` left by 13 ## Instructions Prefixes/postfixes are put in parens at the of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian). ### Utilities * `a.clone()` - clone number * `a.toString(base, length)` - convert to base-string and pad with zeroes * `a.toNumber()` - convert to Javascript Number (limited to 53 bits) * `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`) * `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding * `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array` * `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)` * `a.bitLength()` - get number of bits occupied * `a.zeroBits()` - return number of less-significant consequent zero bits (example: `1010000` has 4 zero bits) * `a.byteLength()` - return number of bytes occupied * `a.isNeg()` - true if the number is negative * `a.isEven()` - no comments * `a.isOdd()` - no comments * `a.isZero()` - no comments * `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`) * `a.lt(b)` - `a` less than `b` (`n`) * `a.lte(b)` - `a` less than or equals `b` (`n`) * `a.gt(b)` - `a` greater than `b` (`n`) * `a.gte(b)` - `a` greater than or equals `b` (`n`) * `a.eq(b)` - `a` equals `b` (`n`) * `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width * `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width * `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance * `BN.max(a, b)` - return `a` if `a` bigger than `b` * `BN.min(a, b)` - return `a` if `a` less than `b` ### Arithmetics * `a.neg()` - negate sign (`i`) * `a.abs()` - absolute value (`i`) * `a.add(b)` - addition (`i`, `n`, `in`) * `a.sub(b)` - subtraction (`i`, `n`, `in`) * `a.mul(b)` - multiply (`i`, `n`, `in`) * `a.sqr()` - square (`i`) * `a.pow(b)` - raise `a` to the power of `b` * `a.div(b)` - divide (`divn`, `idivn`) * `a.mod(b)` - reduct (`u`, `n`) (but no `umodn`) * `a.divmod(b)` - quotient and modulus obtained by dividing * `a.divRound(b)` - rounded division ### Bit operations * `a.or(b)` - or (`i`, `u`, `iu`) * `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future) * `a.xor(b)` - xor (`i`, `u`, `iu`) * `a.setn(b, value)` - set specified bit to `value` * `a.shln(b)` - shift left (`i`, `u`, `iu`) * `a.shrn(b)` - shift right (`i`, `u`, `iu`) * `a.testn(b)` - test if specified bit is set * `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`) * `a.bincn(b)` - add `1 << b` to the number * `a.notn(w)` - not (for the width specified by `w`) (`i`) ### Reduction * `a.gcd(b)` - GCD * `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., 
gcd: ... }`) * `a.invm(b)` - inverse `a` modulo `b` ## Fast reduction When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using special algorithm for [Mersenne Prime][1]. ### Reduction context To enable this tricks one should create a reduction context: ```js var red = BN.red(num); ``` where `num` is just a BN instance. Or: ```js var red = BN.red(primeName); ``` Where `primeName` is either of these [Mersenne Primes][1]: * `'k256'` * `'p224'` * `'p192'` * `'p25519'` Or: ```js var red = BN.mont(num); ``` To reduce numbers with [Montgomery trick][0]. `.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`. ### Converting numbers Before performing anything in reduction context - numbers should be converted to it. Usually, this means that one should: * Convert inputs to reducted ones * Operate on them in reduction context * Convert outputs back from the reduction context Here is how one may convert numbers to `red`: ```js var redA = a.toRed(red); ``` Where `red` is a reduction context created using instructions above Here is how to convert them back: ```js var a = redA.fromRed(); ``` ### Red instructions Most of the instructions from the very start of this readme have their counterparts in red context: * `a.redAdd(b)`, `a.redIAdd(b)` * `a.redSub(b)`, `a.redISub(b)` * `a.redShl(num)` * `a.redMul(b)`, `a.redIMul(b)` * `a.redSqr()`, `a.redISqr()` * `a.redSqrt()` - square root modulo reduction context's prime * `a.redInvm()` - modular inverse of the number * `a.redNeg()` * `a.redPow(b)` - modular exponentiation ### Number Size Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers. ## LICENSE This software is licensed under the MIT License. [0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication [1]: https://en.wikipedia.org/wiki/Mersenne_prime # fs-minipass Filesystem streams based on [minipass](http://npm.im/minipass). 4 classes are exported: - ReadStream - ReadStreamSync - WriteStream - WriteStreamSync When using `ReadStreamSync`, all of the data is made available immediately upon consuming the stream. Nothing is buffered in memory when the stream is constructed. If the stream is piped to a writer, then it will synchronously `read()` and emit data into the writer as fast as the writer can consume it. (That is, it will respect backpressure.) If you call `stream.read()` then it will read the entire file and return the contents. When using `WriteStreamSync`, every write is flushed to the file synchronously. If your writes all come in a single tick, then it'll write it all out in a single tick. It's as synchronous as you are. The async versions work much like their node builtin counterparts, with the exception of introducing significantly less Stream machinery overhead. ## USAGE It's just streams, you pipe them or read() them or write() to them. ```js const fsm = require('fs-minipass') const readStream = new fsm.ReadStream('file.txt') const writeStream = new fsm.WriteStream('output.txt') writeStream.write('some file header or whatever\n') readStream.pipe(writeStream) ``` ## ReadStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `readSize` The size of reads to do, defaults to 16MB - `size` The size of the file, if known. Prevents zero-byte read() call at the end. 
- `autoClose` Set to `false` to prevent the file descriptor from being closed when the file is done being read.

## WriteStream(path, options)

Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option.

Options:

- `fd` Pass in a numeric file descriptor, if the file is already open.
- `mode` The mode to create the file with. Defaults to `0o666`.
- `start` The position in the file to start writing. If not specified, then the file will start writing at position zero, and be truncated by default.
- `autoClose` Set to `false` to prevent the file descriptor from being closed when the stream is ended.
- `flags` Flags to use when opening the file. Irrelevant if `fd` is passed in, since file won't be opened in that case. Defaults to `'a'` if a `pos` is specified, or `'w'` otherwise.

# once

Only call a function once.

## usage

```javascript
var once = require('once')

function load (file, cb) {
  cb = once(cb)
  loader.load('file')
  loader.once('load', cb)
  loader.once('error', cb)
}
```

Or add to the Function.prototype in a responsible way:

```javascript
// only has to be done once
require('once').proto()

function load (file, cb) {
  cb = cb.once()
  loader.load('file')
  loader.once('load', cb)
  loader.once('error', cb)
}
```

Ironically, the prototype feature makes this module twice as complicated as necessary.

To check whether your function has been called, use `fn.called`. Once the function is called for the first time the return value of the original function is saved in `fn.value` and subsequent calls will continue to return this value.

```javascript
var once = require('once')

function load (cb) {
  cb = once(cb)
  var stream = createStream()
  stream.once('data', cb)
  stream.once('end', function () {
    if (!cb.called) cb(new Error('not found'))
  })
}
```

## `once.strict(func)`

Throw an error if the function is called twice.

Some functions are expected to be called only once. Using `once` for them would potentially hide logical errors.

In the example below, the `greet` function has to call the callback only once:

```javascript
function greet (name, cb) {
  // return is missing from the if statement
  // when no name is passed, the callback is called twice
  if (!name) cb('Hello anonymous')
  cb('Hello ' + name)
}

function log (msg) {
  console.log(msg)
}

// this will print 'Hello anonymous' but the logical error will be missed
greet(null, once(log))

// once.strict will print 'Hello anonymous' and throw an error when the callback is called the second time
greet(null, once.strict(log))
```

# yargs-parser

[![Build Status](https://travis-ci.org/yargs/yargs-parser.svg)](https://travis-ci.org/yargs/yargs-parser) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version)

The mighty option parser used by [yargs](https://github.com/yargs/yargs).

visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions.
<img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js var argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```sh node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js var argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```sh { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js var parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## API ### require('yargs-parser')(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. 
* `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```sh node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```sh node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```sh node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```sh node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```sh node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```sh node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```sh node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```sh node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```sh node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```sh node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. ### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```sh node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```sh node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```sh node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```sh node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. 
```sh node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```sh node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```sh node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```sh node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```sh node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```sh node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. _If disabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC <p align="center"> <img width="250" src="/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> [![Build Status][travis-image]][travis-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description : Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments. > <img width="400" src="/screen.png"> * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). 
## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage : ### Simple Example ```javascript #!/usr/bin/env node const {argv} = require('yargs') if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node require('yargs') // eslint-disable-line .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ## Webpack See usage examples of yargs with webpack in [docs](/docs/webpack.md). ## Community : Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). ## Documentation : ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Contributing](/contributing.md) [travis-url]: https://travis-ci.org/yargs/yargs [travis-image]: https://img.shields.io/travis/yargs/yargs/master.svg [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc discontinuous-range =================== ``` DiscontinuousRange(1, 10).subtract(4, 6); // [ 1-3, 7-10 ] ``` [![Build Status](https://travis-ci.org/dtudury/discontinuous-range.png)](https://travis-ci.org/dtudury/discontinuous-range) this is a pretty simple module, but it exists to service another project so this'll be pretty lacking documentation. reading the test to see how this works may help. 
otherwise, here's an example that I think pretty much sums it up

### Example

```
var all_numbers = new DiscontinuousRange(1, 100);
var bad_numbers = DiscontinuousRange(13).add(8).add(60,80);
var good_numbers = all_numbers.clone().subtract(bad_numbers);
console.log(good_numbers.toString()); //[ 1-7, 9-12, 14-59, 81-100 ]
var random_good_number = good_numbers.index(Math.floor(Math.random() * good_numbers.length));
```

# universal-url

[![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Dependency Monitor][greenkeeper-image]][greenkeeper-url]

> WHATWG [`URL`](https://developer.mozilla.org/en/docs/Web/API/URL) for Node & Browser.

* For Node.js versions `>= 8`, the native implementation will be used.
* For Node.js versions `< 8`, a [shim](https://npmjs.com/whatwg-url) will be used.
* For web browsers without a native implementation, the same shim will be used.

## Installation

[Node.js](http://nodejs.org/) `>= 6` is required. To install, type this at the command line:

```shell
npm install universal-url
```

## Usage

```js
const {URL, URLSearchParams} = require('universal-url');

const url = new URL('http://domain/');
const params = new URLSearchParams('?param=value');
```

Global shim:

```js
require('universal-url').shim();

const url = new URL('http://domain/');
const params = new URLSearchParams('?param=value');
```

## Browserify/etc

The bundled file size of this library can be large for a web browser. If this is a problem, try using [universal-url-lite](https://npmjs.com/universal-url-lite) in your build as an alias for this module.

[npm-image]: https://img.shields.io/npm/v/universal-url.svg
[npm-url]: https://npmjs.org/package/universal-url
[travis-image]: https://img.shields.io/travis/stevenvachon/universal-url.svg
[travis-url]: https://travis-ci.org/stevenvachon/universal-url
[greenkeeper-image]: https://badges.greenkeeper.io/stevenvachon/universal-url.svg
[greenkeeper-url]: https://greenkeeper.io/

# binary-install

Install .tar.gz binary applications via npm

## Usage

This library provides a single class `Binary` that takes a download url and some optional arguments. You **must** provide either `name` or `installDirectory` when creating your `Binary`.

| option           | description                                    |
| ---------------- | ---------------------------------------------- |
| name             | The name of your binary                        |
| installDirectory | A path to the directory to install the binary  |

If an `installDirectory` is not provided, the binary will be installed at your OS specific config directory. On MacOS it defaults to `~/Library/Preferences/${name}-nodejs`

After your `Binary` has been created, you can run `.install()` to install the binary, and `.run()` to run it.

### Example

This is meant to be used as a library - create your `Binary` with your desired options, then call `.install()` in the `postinstall` of your `package.json`, `.run()` in the `bin` section of your `package.json`, and `.uninstall()` in the `preuninstall` section of your `package.json`. See [this example project](/example) to see how to create an npm package that installs and runs a binary using the GitHub releases API.

# Glob

Match files using the patterns the shell uses, like stars and stuff.
[![Build Status](https://travis-ci.org/isaacs/node-glob.svg?branch=master)](https://travis-ci.org/isaacs/node-glob/) [![Build Status](https://ci.appveyor.com/api/projects/status/kd7f3yftf7unxlsx?svg=true)](https://ci.appveyor.com/project/isaacs/node-glob) [![Coverage Status](https://coveralls.io/repos/isaacs/node-glob/badge.svg?branch=master&service=github)](https://coveralls.io/github/isaacs/node-glob?branch=master) This is a glob implementation in JavaScript. It uses the `minimatch` library to do its matching. ![](logo/glob.png) ## Usage Install with npm ``` npm i glob ``` ```javascript var glob = require("glob") // options is optional glob("**/*.js", options, function (er, files) { // files is an array of filenames. // If the `nonull` option is set, and nothing // was found, then files is ["**/*.js"] // er is an error object or null. }) ``` ## Glob Primer "Globs" are the patterns you type when you do stuff like `ls *.js` on the command line, or put `build/*` in a `.gitignore` file. Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with `{` and end with `}`, with any number of comma-delimited sections within. Braced sections may contain slash characters, so `a{/b/c,bcd}` would expand into `a/b/c` and `abcd`. The following characters have special magic meaning when used in a path portion: * `*` Matches 0 or more characters in a single path portion * `?` Matches 1 character * `[...]` Matches a range of characters, similar to a RegExp range. If the first character of the range is `!` or `^` then it matches any character not in the range. * `!(pattern|pattern|pattern)` Matches anything that does not match any of the patterns provided. * `?(pattern|pattern|pattern)` Matches zero or one occurrence of the patterns provided. * `+(pattern|pattern|pattern)` Matches one or more occurrences of the patterns provided. * `*(a|b|c)` Matches zero or more occurrences of the patterns provided * `@(pattern|pat*|pat?erN)` Matches exactly one of the patterns provided * `**` If a "globstar" is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories. ### Dots If a file or directory path portion has a `.` as the first character, then it will not match any glob pattern unless that pattern's corresponding path part also has a `.` as its first character. For example, the pattern `a/.*/c` would match the file at `a/.b/c`. However the pattern `a/*/c` would not, because `*` does not start with a dot character. You can make glob treat dots as normal characters by setting `dot:true` in the options. ### Basename Matching If you set `matchBase:true` in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, `*.js` would match `test/simple/basic.js`. ### Empty Sets If no matching files are found, then an empty array is returned. This differs from the shell, where the pattern itself is returned. For example: $ echo a*s*d*f a*s*d*f To get the bash-style behavior, set the `nonull:true` in the options. ### See Also: * `man sh` * `man bash` (Search for "Pattern Matching") * `man 3 fnmatch` * `man 5 gitignore` * [minimatch documentation](https://github.com/isaacs/minimatch) ## glob.hasMagic(pattern, [options]) Returns `true` if there are any special characters in the pattern, and `false` otherwise. Note that the options affect the results. 
If `noext:true` is set in the options object, then `+(a|b)` will not be considered a magic pattern. If the pattern has a brace expansion, like `a/{b/c,x/y}` then that is considered magical, unless `nobrace:true` is set in the options. ## glob(pattern, [options], cb) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * `cb` `{Function}` * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Perform an asynchronous glob search. ## glob.sync(pattern, [options]) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * return: `{Array<String>}` filenames found matching the pattern Perform a synchronous glob search. ## Class: glob.Glob Create a Glob object by instantiating the `glob.Glob` class. ```javascript var Glob = require("glob").Glob var mg = new Glob(pattern, options, cb) ``` It's an EventEmitter, and starts walking the filesystem to find matches immediately. ### new glob.Glob(pattern, [options], [cb]) * `pattern` `{String}` pattern to search for * `options` `{Object}` * `cb` `{Function}` Called when an error occurs, or matches are found * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Note that if the `sync` flag is set in the options, then matches will be immediately available on the `g.found` member. ### Properties * `minimatch` The minimatch object that the glob uses. * `options` The options object passed in. * `aborted` Boolean which is set to true when calling `abort()`. There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls. * `cache` Convenience object. Each field has the following possible values: * `false` - Path does not exist * `true` - Path exists * `'FILE'` - Path exists, and is not a directory * `'DIR'` - Path exists, and is a directory * `[file, entries, ...]` - Path exists, is a directory, and the array value is the results of `fs.readdir` * `statCache` Cache of `fs.stat` results, to prevent statting the same path multiple times. * `symlinks` A record of which paths are symbolic links, which is relevant in resolving `**` patterns. * `realpathCache` An optional object which is passed to `fs.realpath` to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used. ### Events * `end` When the matching is finished, this is emitted with all the matches found. If the `nonull` option is set, and no match was found, then the `matches` list contains the original pattern. The matches are sorted, unless the `nosort` flag is set. * `match` Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath. * `error` Emitted when an unexpected error is encountered, or whenever any fs error occurs if `options.strict` is set. * `abort` When `abort()` is called, this event is raised. ### Methods * `pause` Temporarily stop the search * `resume` Resume the search * `abort` Stop the search forever ### Options All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications. All options are false by default, unless otherwise noted. All options are added to the Glob object, as well. If you are running many `glob` operations, you can pass a Glob object as the `options` argument to a subsequent operation to shortcut some `stat` and `readdir` calls. 
At the very least, you may pass in shared `symlinks`, `statCache`, `realpathCache`, and `cache` options, so that parallel glob operations will be sped up by sharing information about the filesystem. * `cwd` The current working directory in which to search. Defaults to `process.cwd()`. * `root` The place where patterns starting with `/` will be mounted onto. Defaults to `path.resolve(options.cwd, "/")` (`/` on Unix systems, and `C:\` or some such on Windows.) * `dot` Include `.dot` files in normal matches and `globstar` matches. Note that an explicit dot in a portion of the pattern will always match dot files. * `nomount` By default, a pattern starting with a forward-slash will be "mounted" onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior. * `mark` Add a `/` character to directory matches. Note that this requires additional stat calls. * `nosort` Don't sort the results. * `stat` Set to true to stat *all* results. This reduces performance somewhat, and is completely unnecessary, unless `readdir` is presumed to be an untrustworthy indicator of file existence. * `silent` When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the `silent` option to true to suppress these warnings. * `strict` When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the `strict` option to raise an error in these cases. * `cache` See `cache` property above. Pass in a previously generated cache object to save some fs calls. * `statCache` A cache of results of filesystem information, to prevent unnecessary stat calls. While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See "Race Conditions" below.) * `symlinks` A cache of known symbolic links. You may pass in a previously generated `symlinks` object to save `lstat` calls when resolving `**` matches. * `sync` DEPRECATED: use `glob.sync(pattern, opts)` instead. * `nounique` In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior. * `nonull` Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3). * `debug` Set to enable debug logging in minimatch and glob. * `nobrace` Do not expand `{a,b}` and `{1..3}` brace sets. * `noglobstar` Do not match `**` against multiple filenames. (Ie, treat it as a normal `*` instead.) * `noext` Do not match `+(a|b)` "extglob" patterns. * `nocase` Perform a case-insensitive match. Note: on case-insensitive filesystems, non-magic patterns will match by default, since `stat` and `readdir` will not raise errors. * `matchBase` Perform a basename-only match if the pattern does not contain any slash characters. That is, `*.js` would be treated as equivalent to `**/*.js`, matching all js files in all directories. * `nodir` Do not match directories, only files. (Note: to match *only* directories, simply put a `/` at the end of the pattern.) * `ignore` Add a pattern or an array of glob patterns to exclude matches. Note: `ignore` patterns are *always* in `dot:true` mode, regardless of any other settings. * `follow` Follow symlinked directories when expanding `**` patterns. 
Note that this can result in a lot of duplicate references in the presence of cyclic links. * `realpath` Set to true to call `fs.realpath` on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink) * `absolute` Set to true to always receive absolute paths for matched files. Unlike `realpath`, this also affects the values returned in the `match` event. ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.3, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. Note that symlinked directories are not crawled as part of a `**`, though their contents may match against subsequent portions of the pattern. This prevents infinite loops and duplicates and the like. If an escaped pattern has no matches, and the `nonull` flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, `glob.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. ### Comments and Negation Previously, this module let you mark a pattern as a "comment" if it started with a `#` character, or a "negated" pattern if it started with a `!` character. These options were deprecated in version 5, and removed in version 6. To specify things that should not match, use the `ignore` option. ## Windows **Please only use forward-slashes in glob expressions.** Though windows uses either `/` or `\` as its path separator, only `/` characters are used by this glob implementation. You must use forward-slashes **only** in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators. Results from absolute patterns such as `/foo/*` are mounted onto the root setting using `path.join`. On windows, this will by default result in `/foo/*` matching `C:\foo\bar.txt`. ## Race Conditions Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such. As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result. As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls. Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem. ## Glob Logo Glob's logo was created by [Tanya Brassie](http://tanyabrassie.com/). 
Logo files can be found [here](https://github.com/isaacs/node-glob/tree/master/logo). The logo is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).

## Contributing

Any change to behavior (including bugfixes) must come with a test. Patches that fail tests or reduce performance will be rejected.

```
# to run tests
npm test

# to re-generate test fixtures
npm run test-regen

# to benchmark against bash/zsh
npm run bench

# to profile javascript
npm run prof
```

![](oh-my-glob.gif)

[![build status](https://secure.travis-ci.org/dankogai/js-base64.png)](http://travis-ci.org/dankogai/js-base64)

# base64.js

Yet another [Base64] transcoder.

[Base64]: http://en.wikipedia.org/wiki/Base64

## HEADS UP

In version 3.0, `js-base64` switched to ES2015 modules, so it is no longer compatible with legacy browsers like IE (see below). Since version 3.3 it is written in TypeScript: `base64.mjs` is compiled from `base64.ts`, and `base64.js` is then generated from `base64.mjs`.

## Install

```shell
$ npm install --save js-base64
```

## Usage

### In Browser

Locally…

```html
<script src="base64.js"></script>
```

…or directly from a CDN, in which case you don't even need to install the package.

```html
<script src="https://cdn.jsdelivr.net/npm/js-base64@3.6.0/base64.min.js"></script>
```

This good old way loads `Base64` into the global context (`window`). Though `Base64.noConflict()` is made available, you should consider using ES6 modules to avoid tainting `window`.

### As an ES6 Module

locally…

```javascript
import { Base64 } from 'js-base64';
```

```javascript
// or if you prefer no Base64 namespace
import { encode, decode } from 'js-base64';
```

or even remotely.

```html
<script type="module">
// note jsdelivr.net does not automatically minify .mjs
import { Base64 } from 'https://cdn.jsdelivr.net/npm/js-base64@3.6.0/base64.mjs';
</script>
```

```html
<script type="module">
// or if you prefer no Base64 namespace
import { encode, decode } from 'https://cdn.jsdelivr.net/npm/js-base64@3.6.0/base64.mjs';
</script>
```

### node.js (commonjs)

```javascript
const { Base64 } = require('js-base64');
```

Unlike the cases above, the global context is not modified. You can also use [esm] to `import` instead of `require`.
[esm]: https://github.com/standard-things/esm

```javascript
require = require('esm')(module);
import { Base64 } from 'js-base64';
```

## SYNOPSIS

```javascript
let latin = 'dankogai';
let utf8  = '小飼弾';
let u8s   = new Uint8Array([100,97,110,107,111,103,97,105]);

Base64.encode(latin);             // ZGFua29nYWk=
Base64.btoa(latin);               // ZGFua29nYWk=
Base64.btoa(utf8);                // raises exception
Base64.fromUint8Array(u8s);       // ZGFua29nYWk=
Base64.fromUint8Array(u8s, true); // ZGFua29nYWk which is URI safe
Base64.encode(utf8);              // 5bCP6aO85by+
Base64.encode(utf8, true);        // 5bCP6aO85by-
Base64.encodeURI(utf8);           // 5bCP6aO85by-
```

```javascript
Base64.decode('ZGFua29nYWk=');       // dankogai
Base64.atob('ZGFua29nYWk=');         // dankogai
Base64.atob('5bCP6aO85by+');         // the raw UTF-8 bytes as a binary string, which is nonsense as text
Base64.toUint8Array('ZGFua29nYWk=');// u8s above
Base64.decode('5bCP6aO85by+');       // 小飼弾
// note .decodeURI() is unnecessary since it accepts both flavors
Base64.decode('5bCP6aO85by-');       // 小飼弾
```

```javascript
Base64.isValid(0);      // false: 0 is not a string
Base64.isValid('');     // true: a valid Base64-encoded empty byte sequence
Base64.isValid('ZA=='); // true: a valid Base64-encoded 'd'
Base64.isValid('Z A='); // true: whitespaces are okay
Base64.isValid('ZA');   // true: padding ='s can be omitted
Base64.isValid('++');   // true: can be non URL-safe
Base64.isValid('--');   // true: or URL-safe
Base64.isValid('+-');   // false: can't mix both
```

### Built-in Extensions

By default `Base64` leaves built-in prototypes untouched. But you can extend them as below.

```javascript
// you have to explicitly extend String.prototype
Base64.extendString();
// once extended, you can do the following
'dankogai'.toBase64();        // ZGFua29nYWk=
'小飼弾'.toBase64();           // 5bCP6aO85by+
'小飼弾'.toBase64(true);       // 5bCP6aO85by-
'小飼弾'.toBase64URI();        // 5bCP6aO85by- an alias of .toBase64(true)
'小飼弾'.toBase64URL();        // 5bCP6aO85by- an alias of .toBase64URI()
'ZGFua29nYWk='.fromBase64();  // dankogai
'5bCP6aO85by+'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.toUint8Array();// u8s above
```

```javascript
// you have to explicitly extend Uint8Array.prototype
Base64.extendUint8Array();
// once extended, you can do the following
u8s.toBase64();    // 'ZGFua29nYWk='
u8s.toBase64URI(); // 'ZGFua29nYWk'
u8s.toBase64URL(); // 'ZGFua29nYWk' an alias of .toBase64URI()
```

```javascript
// extend all at once
Base64.extendBuiltins()
```

## `.decode()` vs `.atob` (and `.encode()` vs `btoa()`)

Suppose you have:

```
var pngBase64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=";
```

Which is a Base64-encoded 1x1 transparent PNG. **DO NOT USE** `Base64.decode(pngBase64)`. Use `Base64.atob(pngBase64)` instead. `Base64.decode()` decodes to a UTF-8 string, while `Base64.atob()` decodes to bytes, which is compatible with the browser built-in `atob()` (which is absent in node.js). The same rule applies in the opposite direction. Or, even better, use `Base64.toUint8Array(pngBase64)`.

### If you really, really need an ES5 version

You can transpile it to ES5 that runs on IE11. Do the following in your shell.

```shell
$ make base64.es5.js
```

# near-sdk-core

This package contains a convenient interface for interacting with NEAR's host runtime. To see the functions that are provided by the host node, see [`env.ts`](./assembly/env/env.ts).

# AssemblyScript Rtrace

A tiny utility to sanitize the AssemblyScript runtime. Records allocations and frees performed by the runtime and emits an error if something is off. Also checks for leaks.
Instructions
------------

Compile your module that uses the full or half runtime with `--use ASC_RTRACE=1 --explicitStart` and include an instance of this module as the import named `rtrace`.

```js
const rtrace = new Rtrace({
  onerror(err, info) {
    // handle error
  },
  oninfo(msg) {
    // print message, optional
  },
  getMemory() {
    // obtain the module's memory,
    // e.g. with --explicitStart:
    return instance.exports.memory;
  }
});

const { module, instance } = await WebAssembly.instantiate(...,
  rtrace.install({ ...imports... })
);

instance.exports._start();

...

if (rtrace.active) {
  let leakCount = rtrace.check();
  if (leakCount) {
    // handle error
  }
}
```

Note that references in globals which are not cleared before collection is performed appear as leaks, including their inner members. A TypedArray, for example, would leak both itself and its backing ArrayBuffer in this case. This is perfectly normal, and clearing all globals avoids it.

# require-main-filename

[![Build Status](https://travis-ci.org/yargs/require-main-filename.png)](https://travis-ci.org/yargs/require-main-filename)
[![Coverage Status](https://coveralls.io/repos/yargs/require-main-filename/badge.svg?branch=master)](https://coveralls.io/r/yargs/require-main-filename?branch=master)
[![NPM version](https://img.shields.io/npm/v/require-main-filename.svg)](https://www.npmjs.com/package/require-main-filename)

`require.main.filename` is great for figuring out the entry point for the current application. This can be combined with a module like [pkg-conf](https://www.npmjs.com/package/pkg-conf) to, _as if by magic_, load top-level configuration.

Unfortunately, `require.main.filename` sometimes fails when an application is executed with an alternative process manager, e.g., [iisnode](https://github.com/tjanczuk/iisnode).

`require-main-filename` is a shim that addresses this problem.

## Usage

```js
var main = require('require-main-filename')()
// use main as an alternative to require.main.filename.
```

## License

ISC

# ts-mixer

[version-badge]: https://badgen.net/npm/v/ts-mixer
[version-link]: https://npmjs.com/package/ts-mixer
[build-badge]: https://img.shields.io/github/workflow/status/tannerntannern/ts-mixer/ts-mixer%20CI
[build-link]: https://github.com/tannerntannern/ts-mixer/actions
[ts-versions]: https://badgen.net/badge/icon/3.8,3.9,4.0?icon=typescript&label&list=|
[node-versions]: https://badgen.net/badge/node/10%2C12%2C14/blue/?list=|

[![npm version][version-badge]][version-link]
[![github actions][build-badge]][build-link]
[![TS Versions][ts-versions]][build-link]
[![Node.js Versions][node-versions]][build-link]
[![Minified Size](https://badgen.net/bundlephobia/min/ts-mixer)](https://bundlephobia.com/result?p=ts-mixer)
[![Conventional Commits](https://badgen.net/badge/conventional%20commits/1.0.0/yellow)](https://conventionalcommits.org)

## Overview

`ts-mixer` brings mixins to TypeScript. "Mixins" to `ts-mixer` are just classes, so you already know how to write them, and you can probably mix classes from your favorite library without trouble.

The mixin problem is more nuanced than it appears. I've seen countless code snippets that work for certain situations, but fail in others. `ts-mixer` tries to take the best from all these solutions while accounting for the situations you might not have considered.
[Quick start guide](#quick-start) ### Features * mixes plain classes * mixes classes that extend other classes * mixes classes that were mixed with `ts-mixer` * supports static properties * supports protected/private properties (the popular function-that-returns-a-class solution does not) * mixes abstract classes (with caveats [[1](#caveats)]) * mixes generic classes (with caveats [[2](#caveats)]) * supports class, method, and property decorators (with caveats [[3, 6](#caveats)]) * mostly supports the complexity presented by constructor functions (with caveats [[4](#caveats)]) * comes with an `instanceof`-like replacement (with caveats [[5, 6](#caveats)]) * [multiple mixing strategies](#settings) (ES6 proxies vs hard copy) ### Caveats 1. Mixing abstract classes requires a bit of a hack that may break in future versions of TypeScript. See [mixing abstract classes](#mixing-abstract-classes) below. 2. Mixing generic classes requires a more cumbersome notation, but it's still possible. See [mixing generic classes](#mixing-generic-classes) below. 3. Using decorators in mixed classes also requires a more cumbersome notation. See [mixing with decorators](#mixing-with-decorators) below. 4. ES6 made it impossible to use `.apply(...)` on class constructors (or any means of calling them without `new`), which makes it impossible for `ts-mixer` to pass the proper `this` to your constructors. This may or may not be an issue for your code, but there are options to work around it. See [dealing with constructors](#dealing-with-constructors) below. 5. `ts-mixer` does not support `instanceof` for mixins, but it does offer a replacement. See the [hasMixin function](#hasmixin) for more details. 6. Certain features (specifically, `@decorator` and `hasMixin`) make use of ES6 `Map`s, which means you must either use ES6+ or polyfill `Map` to use them. If you don't need these features, you should be fine without. ## Quick Start ### Installation ``` $ npm install ts-mixer ``` or if you prefer [Yarn](https://yarnpkg.com): ``` $ yarn add ts-mixer ``` ### Basic Example ```typescript import { Mixin } from 'ts-mixer'; class Foo { protected makeFoo() { return 'foo'; } } class Bar { protected makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { public makeFooBar() { return this.makeFoo() + this.makeBar(); } } const fooBar = new FooBar(); console.log(fooBar.makeFooBar()); // "foobar" ``` ## Special Cases ### Mixing Abstract Classes Abstract classes, by definition, cannot be constructed, which means they cannot take on the type, `new(...args) => any`, and by extension, are incompatible with `ts-mixer`. BUT, you can "trick" TypeScript into giving you all the benefits of an abstract class without making it technically abstract. The trick is just some strategic `// @ts-ignore`'s: ```typescript import { Mixin } from 'ts-mixer'; // note that Foo is not marked as an abstract class class Foo { // @ts-ignore: "Abstract methods can only appear within an abstract class" public abstract makeFoo(): string; } class Bar { public makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { // we still get all the benefits of abstract classes here, because TypeScript // will still complain if this method isn't implemented public makeFoo() { return 'foo'; } } ``` Do note that while this does work quite well, it is a bit of a hack and I can't promise that it will continue to work in future TypeScript versions. 
### Mixing Generic Classes Frustratingly, it is _impossible_ for generic parameters to be referenced in base class expressions. No matter what, you will eventually run into `Base class expressions cannot reference class type parameters.` The way to get around this is to leverage [declaration merging](https://www.typescriptlang.org/docs/handbook/declaration-merging.html), and a slightly different mixing function from ts-mixer: `mix`. It works exactly like `Mixin`, except it's a decorator, which means it doesn't affect the type information of the class being decorated. See it in action below: ```typescript import { mix } from 'ts-mixer'; class Foo<T> { public fooMethod(input: T): T { return input; } } class Bar<T> { public barMethod(input: T): T { return input; } } interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { } @mix(Foo, Bar) class FooBar<T1, T2> { public fooBarMethod(input1: T1, input2: T2) { return [this.fooMethod(input1), this.barMethod(input2)]; } } ``` Key takeaways from this example: * `interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { }` makes sure `FooBar` has the typing we want, thanks to declaration merging * `@mix(Foo, Bar)` wires things up "on the JavaScript side", since the interface declaration has nothing to do with runtime behavior. * The reason we have to use the `mix` decorator is that the typing produced by `Mixin(Foo, Bar)` would conflict with the typing of the interface. `mix` has no effect "on the TypeScript side," thus avoiding type conflicts. ### Mixing with Decorators Popular libraries such as [class-validator](https://github.com/typestack/class-validator) and [TypeORM](https://github.com/typeorm/typeorm) use decorators to add functionality. Unfortunately, `ts-mixer` has no way of knowing what these libraries do with the decorators behind the scenes. So if you want these decorators to be "inherited" with classes you plan to mix, you first have to wrap them with a special `decorate` function exported by `ts-mixer`. Here's an example using `class-validator`: ```typescript import { IsBoolean, IsIn, validate } from 'class-validator'; import { Mixin, decorate } from 'ts-mixer'; class Disposable { @decorate(IsBoolean()) // instead of @IsBoolean() isDisposed: boolean = false; } class Statusable { @decorate(IsIn(['red', 'green'])) // instead of @IsIn(['red', 'green']) status: string = 'green'; } class ExtendedObject extends Mixin(Disposable, Statusable) {} const extendedObject = new ExtendedObject(); extendedObject.status = 'blue'; validate(extendedObject).then(errors => { console.log(errors); }); ``` ### Dealing with Constructors As mentioned in the [caveats section](#caveats), ES6 disallowed calling constructor functions without `new`. This means that the only way for `ts-mixer` to mix instance properties is to instantiate each base class separately, then copy the instance properties into a common object. The consequence of this is that constructors mixed by `ts-mixer` will _not_ receive the proper `this`. **This very well may not be an issue for you!** It only means that your constructors need to be "mostly pure" in terms of how they handle `this`. Specifically, your constructors cannot produce [side effects](https://en.wikipedia.org/wiki/Side_effect_%28computer_science%29) involving `this`, _other than adding properties to `this`_ (the most common side effect in JavaScript constructors). 
If you simply cannot eliminate `this` side effects from your constructor, there is a workaround available: `ts-mixer` will automatically forward constructor parameters to a predesignated init function (`settings.initFunction`) if it's present on the class. Unlike constructors, functions can be called with an arbitrary `this`, so this predesignated init function _will_ have the proper `this`. Here's a basic example: ```typescript import { Mixin, settings } from 'ts-mixer'; settings.initFunction = 'init'; class Person { public static allPeople: Set<Person> = new Set(); protected init() { Person.allPeople.add(this); } } type PartyAffiliation = 'democrat' | 'republican'; class PoliticalParticipant { public static democrats: Set<PoliticalParticipant> = new Set(); public static republicans: Set<PoliticalParticipant> = new Set(); public party: PartyAffiliation; // note that these same args will also be passed to init function public constructor(party: PartyAffiliation) { this.party = party; } protected init(party: PartyAffiliation) { if (party === 'democrat') PoliticalParticipant.democrats.add(this); else PoliticalParticipant.republicans.add(this); } } class Voter extends Mixin(Person, PoliticalParticipant) {} const v1 = new Voter('democrat'); const v2 = new Voter('democrat'); const v3 = new Voter('republican'); const v4 = new Voter('republican'); ``` Note the above `.add(this)` statements. These would not work as expected if they were placed in the constructor instead, since `this` is not the same between the constructor and `init`, as explained above. ## Other Features ### hasMixin As mentioned above, `ts-mixer` does not support `instanceof` for mixins. While it is possible to implement [custom `instanceof` behavior](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/hasInstance), this library does not do so because it would require modifying the source classes, which is deliberately avoided. You can fill this missing functionality with `hasMixin(instance, mixinClass)` instead. See the below example: ```typescript import { Mixin, hasMixin } from 'ts-mixer'; class Foo {} class Bar {} class FooBar extends Mixin(Foo, Bar) {} const instance = new FooBar(); // doesn't work with instanceof... console.log(instance instanceof FooBar) // true console.log(instance instanceof Foo) // false console.log(instance instanceof Bar) // false // but everything works nicely with hasMixin! console.log(hasMixin(instance, FooBar)) // true console.log(hasMixin(instance, Foo)) // true console.log(hasMixin(instance, Bar)) // true ``` `hasMixin(instance, mixinClass)` will work anywhere that `instance instanceof mixinClass` works. Additionally, like `instanceof`, you get the same [type narrowing benefits](https://www.typescriptlang.org/docs/handbook/advanced-types.html#instanceof-type-guards): ```typescript if (hasMixin(instance, Foo)) { // inferred type of instance is "Foo" } if (hasMixin(instance, Bar)) { // inferred type of instance of "Bar" } ``` ## Settings ts-mixer has multiple strategies for mixing classes which can be configured by modifying `settings` from ts-mixer. For example: ```typescript import { settings, Mixin } from 'ts-mixer'; settings.prototypeStrategy = 'proxy'; // then use `Mixin` as normal... ``` ### `settings.prototypeStrategy` * Determines how ts-mixer will mix class prototypes together * Possible values: - `'copy'` (default) - Copies all methods from the classes being mixed into a new prototype object. 
(This will include all methods up the prototype chains as well.) This is the default for ES5 compatibility, but it has the downside of stale references. For example, if you mix `Foo` and `Bar` to make `FooBar`, then redefine a method on `Foo`, `FooBar` will not have the latest methods from `Foo`. If this is not a concern for you, `'copy'` is the best value for this setting. - `'proxy'` - Uses an ES6 Proxy to "soft mix" prototypes. Unlike `'copy'`, updates to the base classes _will_ be reflected in the mixed class, which may be desirable. The downside is that method access is not as performant, nor is it ES5 compatible. ### `settings.staticsStrategy` * Determines how static properties are inherited * Possible values: - `'copy'` (default) - Simply copies all properties (minus `prototype`) from the base classes/constructor functions onto the mixed class. Like `settings.prototypeStrategy = 'copy'`, this strategy also suffers from stale references, but shouldn't be a concern if you don't redefine static methods after mixing. - `'proxy'` - Similar to `settings.prototypeStrategy`, proxy's static method access to base classes. Has the same benefits/downsides. ### `settings.initFunction` * If set, `ts-mixer` will automatically call the function with this name upon construction * Possible values: - `null` (default) - disables the behavior - a string - function name to call upon construction * Read more about why you would want this in [dealing with constructors](#dealing-with-constructors) ### `settings.decoratorInheritance` * Determines how decorators are inherited from classes passed to `Mixin(...)` * Possible values: - `'deep'` (default) - Deeply inherits decorators from all given classes and their ancestors - `'direct'` - Only inherits decorators defined directly on the given classes - `'none'` - Skips decorator inheritance # Author Tanner Nielsen <tannerntannern@gmail.com> * Website - [tannernielsen.com](http://tannernielsen.com) * Github - [tannerntannern](https://github.com/tannerntannern) # yallist Yet Another Linked List There are many doubly-linked list implementations like it, but this one is mine. For when an array would be too big, and a Map can't be iterated in reverse order. [![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist) ## basic usage ```javascript var yallist = require('yallist') var myList = yallist.create([1, 2, 3]) myList.push('foo') myList.unshift('bar') // of course pop() and shift() are there, too console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo'] myList.forEach(function (k) { // walk the list head to tail }) myList.forEachReverse(function (k, index, list) { // walk the list tail to head }) var myDoubledList = myList.map(function (k) { return k + k }) // now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo'] // mapReverse is also a thing var myDoubledListReverse = myList.mapReverse(function (k) { return k + k }) // ['foofoo', 6, 4, 2, 'barbar'] var reduced = myList.reduce(function (set, entry) { set += entry return set }, 'start') console.log(reduced) // 'startfoo123bar' ``` ## api The whole API is considered "public". Functions with the same name as an Array method work more or less the same way. There's reverse versions of most things because that's the point. ### Yallist Default export, the class that holds and manages a list. 
Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list. The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots. ### Yallist.create(..) Alias for Yallist function. Some people like factories. #### yallist.head The first node in the list #### yallist.tail The last node in the list #### yallist.length The number of nodes in the list. (Change this at your peril. It is not magic like Array length.) #### yallist.toArray() Convert the list to an array. #### yallist.forEach(fn, [thisp]) Call a function on each item in the list. #### yallist.forEachReverse(fn, [thisp]) Call a function on each item in the list, in reverse order. #### yallist.get(n) Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array. #### yallist.getReverse(n) Get the data at position `n`, counting from the tail. #### yallist.map(fn, thisp) Create a new Yallist with the result of calling the function on each item. #### yallist.mapReverse(fn, thisp) Same as `map`, but in reverse. #### yallist.pop() Get the data from the list tail, and remove the tail from the list. #### yallist.push(item, ...) Insert one or more items to the tail of the list. #### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. #### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. #### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) 
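As a small sketch of the node-level methods documented above (the list contents are arbitrary):

```javascript
var Yallist = require('yallist')

var list = Yallist.create(['a', 'b', 'c'])

// walk the actual nodes rather than the values
for (var node = list.head; node !== null; node = node.next) {
  console.log(node.value, node.list === list) // 'a' true, 'b' true, 'c' true
}

// move the tail node to the front of the list
list.unshiftNode(list.tail)
console.log(list.toArray()) // ['c', 'a', 'b']

// remove the middle node entirely
list.removeNode(list.head.next)
console.log(list.toArray()) // ['c', 'b']
console.log(list.length)    // 2
```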
# wrappy Callback wrapping utility ## USAGE ```javascript var wrappy = require("wrappy") // var wrapper = wrappy(wrapperFunction) // make sure a cb is called only once // See also: http://npm.im/once for this specific use case var once = wrappy(function (cb) { var called = false return function () { if (called) return called = true return cb.apply(this, arguments) } }) function printBoo () { console.log('boo') } // has some rando property printBoo.iAmBooPrinter = true var onlyPrintOnce = once(printBoo) onlyPrintOnce() // prints 'boo' onlyPrintOnce() // does nothing // random property is retained! assert.equal(onlyPrintOnce.iAmBooPrinter, true) ``` # tr46.js > An implementation of the [Unicode TR46 specification](http://unicode.org/reports/tr46/). ## Installation [Node.js](http://nodejs.org) `>= 6` is required. To install, type this at the command line: ```shell npm install tr46 ``` ## API ### `toASCII(domainName[, options])` Converts a string of Unicode symbols to a case-folded Punycode string of ASCII symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`processingOption`](#processingOption) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) * [`verifyDNSLength`](#verifyDNSLength) ### `toUnicode(domainName[, options])` Converts a case-folded Punycode string of ASCII symbols to a string of Unicode symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) ## Options ### `checkBidi` Type: `Boolean` Default value: `false` When set to `true`, any bi-directional text within the input will be checked for validation. ### `checkHyphens` Type: `Boolean` Default value: `false` When set to `true`, the positions of any hyphen characters within the input will be checked for validation. ### `checkJoiners` Type: `Boolean` Default value: `false` When set to `true`, any word joiner characters within the input will be checked for validation. ### `processingOption` Type: `String` Default value: `"nontransitional"` When set to `"transitional"`, symbols within the input will be validated according to the older IDNA2003 protocol. When set to `"nontransitional"`, the current IDNA2008 protocol will be used. ### `useSTD3ASCIIRules` Type: `Boolean` Default value: `false` When set to `true`, input will be validated according to [STD3 Rules](http://unicode.org/reports/tr46/#STD3_Rules). ### `verifyDNSLength` Type: `Boolean` Default value: `false` When set to `true`, the length of each DNS label within the input will be checked for validation. <p align="center"> <img width="250" src="https://raw.githubusercontent.com/yargs/yargs/master/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> ![ci](https://github.com/yargs/yargs/workflows/ci/badge.svg) [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments: ``` mocha [spec..] Run tests with Mocha Commands mocha inspect [spec..] 
Run tests with Mocha [default] mocha init <path> create a client-side Mocha setup at <path> Rules & Behavior --allow-uncaught Allow uncaught errors to propagate [boolean] --async-only, -A Require all tests to use a callback (async) or return a Promise [boolean] ``` * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage ### Simple Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') const argv = yargs(hideBin(process.argv)).argv if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') yargs(hideBin(process.argv)) .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## Supported Platforms ### TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ### Deno As of `v16`, `yargs` supports [Deno](https://github.com/denoland/deno): ```typescript import yargs from 'https://deno.land/x/yargs/deno.ts' import { Arguments } from 'https://deno.land/x/yargs/deno-types.ts' yargs(Deno.args) .command('download <files...>', 'download a list of files', (yargs: any) => { return yargs.positional('files', { describe: 'a list of files to do something with' }) }, (argv: Arguments) => { console.info(argv) }) .strictCommands() .demandCommand(1) .argv ``` ### ESM As of `v16`,`yargs` supports ESM imports: ```js import yargs from 'yargs' import { hideBin } from 'yargs/helpers' yargs(hideBin(process.argv)) .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => { console.info(argv) }) .demandCommand(1) .argv ``` ### Usage in Browser See examples of using yargs in the browser in [docs](/docs/browser.md). ## Community Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). 
## Documentation ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Bundling yargs](/docs/bundling.md) * [Contributing](/contributing.md) ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc [![Build Status](https://travis-ci.org/isaacs/rimraf.svg?branch=master)](https://travis-ci.org/isaacs/rimraf) [![Dependency Status](https://david-dm.org/isaacs/rimraf.svg)](https://david-dm.org/isaacs/rimraf) [![devDependency Status](https://david-dm.org/isaacs/rimraf/dev-status.svg)](https://david-dm.org/isaacs/rimraf#info=devDependencies) The [UNIX command](http://en.wikipedia.org/wiki/Rm_(Unix)) `rm -rf` for node. Install with `npm install rimraf`, or just drop rimraf.js somewhere. ## API `rimraf(f, [opts], callback)` The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with `opts.disableGlob` (defaults to `false`). This might be handy, for instance, if you have filenames that contain globbing wildcard characters. The callback will be called with an error if there is one. Certain errors are handled for you: * Windows: `EBUSY` and `ENOTEMPTY` - rimraf will back off a maximum of `opts.maxBusyTries` times before giving up, adding 100ms of wait between each attempt. The default `maxBusyTries` is 3. * `ENOENT` - If the file doesn't exist, rimraf will return successfully, since your desired outcome is already the case. * `EMFILE` - Since `readdir` requires opening a file descriptor, it's possible to hit `EMFILE` if too many file descriptors are in use. In the sync case, there's nothing to be done for this. But in the async case, rimraf will gradually back off with timeouts up to `opts.emfileWait` ms, which defaults to 1000. ## options * unlink, chmod, stat, lstat, rmdir, readdir, unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync In order to use a custom file system library, you can override specific fs functions on the options object. 
If any of these functions are present on the options object, then the supplied function will be used instead of the default fs method. Sync methods are only relevant for `rimraf.sync()`, of course. For example: ```javascript var myCustomFS = require('some-custom-fs') rimraf('some-thing', myCustomFS, callback) ``` * maxBusyTries If an `EBUSY`, `ENOTEMPTY`, or `EPERM` error code is encountered on Windows systems, then rimraf will retry with a linear backoff wait of 100ms longer on each try. The default maxBusyTries is 3. Only relevant for async usage. * emfileWait If an `EMFILE` error is encountered, then rimraf will retry repeatedly with a linear backoff of 1ms longer on each try, until the timeout counter hits this max. The default limit is 1000. If you repeatedly encounter `EMFILE` errors, then consider using [graceful-fs](http://npm.im/graceful-fs) in your program. Only relevant for async usage. * glob Set to `false` to disable [glob](http://npm.im/glob) pattern matching. Set to an object to pass options to the glob module. The default glob options are `{ nosort: true, silent: true }`. Glob version 6 is used in this module. Relevant for both sync and async usage. * disableGlob Set to any non-falsey value to disable globbing entirely. (Equivalent to setting `glob: false`.) ## rimraf.sync It can remove stuff synchronously, too. But that's not so good. Use the async API. It's better. ## CLI If installed with `npm install rimraf -g` it can be used as a global command `rimraf <path> [<path> ...]` which is useful for cross platform support. ## mkdirp If you need to create a directory recursively, check out [mkdirp](https://github.com/substack/node-mkdirp). # y18n [![Build Status][travis-image]][travis-url] [![Coverage Status][coveralls-image]][coveralls-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js var __ = require('y18n').__ console.log(__('my awesome string %s', 'foo')) ``` output: `my awesome string foo` _using tagged template literals_ ```js var __ = require('y18n').__ var str = 'foo' console.log(__`my awesome string ${str}`) ``` output: `my awesome string foo` _pluralization support:_ ```js var __n = require('y18n').__n console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')) ``` output: `2 fishes foo` ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. 
This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## License ISC [travis-url]: https://travis-ci.org/yargs/y18n [travis-image]: https://img.shields.io/travis/yargs/y18n.svg [coveralls-url]: https://coveralls.io/github/yargs/y18n [coveralls-image]: https://img.shields.io/coveralls/yargs/y18n.svg [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard # axios // adapters The modules under `adapters/` are modules that handle dispatching a request and settling a returned `Promise` once a response is received. ## Example ```js var settle = require('./../core/settle'); module.exports = function myAdapter(config) { // At this point: // - config has been merged with defaults // - request transformers have already run // - request interceptors have already run // Make the request using config provided // Upon response settle the Promise return new Promise(function(resolve, reject) { var response = { data: responseData, status: request.status, statusText: request.statusText, headers: responseHeaders, config: config, request: request }; settle(resolve, reject, response); // From here: // - response transformers will run // - response interceptors will run }); } ``` long.js ======= A Long class for representing a 64 bit two's-complement integer value derived from the [Closure Library](https://github.com/google/closure-library) for stand-alone use and extended with unsigned support. [![Build Status](https://travis-ci.org/dcodeIO/long.js.svg)](https://travis-ci.org/dcodeIO/long.js) Background ---------- As of [ECMA-262 5th Edition](http://ecma262-5.com/ELS5_HTML.htm#Section_8.5), "all the positive and negative integers whose magnitude is no greater than 2<sup>53</sup> are representable in the Number type", which is "representing the doubleprecision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic". The [maximum safe integer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) in JavaScript is 2<sup>53</sup>-1. Example: 2<sup>64</sup>-1 is 1844674407370955**1615** but in JavaScript it evaluates to 1844674407370955**2000**. Furthermore, bitwise operators in JavaScript "deal only with integers in the range −2<sup>31</sup> through 2<sup>31</sup>−1, inclusive, or in the range 0 through 2<sup>32</sup>−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2<sup>32</sup> integer values." In some use cases, however, it is required to be able to reliably work with and perform bitwise operations on the full 64 bits. This is where long.js comes into play. Usage ----- The class is compatible with CommonJS and AMD loaders and is exposed globally as `Long` if neither is available. ```javascript var Long = require("long"); var longVal = new Long(0xFFFFFFFF, 0x7FFFFFFF); console.log(longVal.toString()); ... 
``` API --- ### Constructor * new **Long**(low: `number`, high: `number`, unsigned?: `boolean`)<br /> Constructs a 64 bit two's-complement integer, given its low and high 32 bit values as *signed* integers. See the from* functions below for more convenient ways of constructing Longs. ### Fields * Long#**low**: `number`<br /> The low 32 bits as a signed value. * Long#**high**: `number`<br /> The high 32 bits as a signed value. * Long#**unsigned**: `boolean`<br /> Whether unsigned or not. ### Constants * Long.**ZERO**: `Long`<br /> Signed zero. * Long.**ONE**: `Long`<br /> Signed one. * Long.**NEG_ONE**: `Long`<br /> Signed negative one. * Long.**UZERO**: `Long`<br /> Unsigned zero. * Long.**UONE**: `Long`<br /> Unsigned one. * Long.**MAX_VALUE**: `Long`<br /> Maximum signed value. * Long.**MIN_VALUE**: `Long`<br /> Minimum signed value. * Long.**MAX_UNSIGNED_VALUE**: `Long`<br /> Maximum unsigned value. ### Utility * Long.**isLong**(obj: `*`): `boolean`<br /> Tests if the specified object is a Long. * Long.**fromBits**(lowBits: `number`, highBits: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the 64 bit integer that comes by concatenating the given low and high bits. Each is assumed to use 32 bits. * Long.**fromBytes**(bytes: `number[]`, unsigned?: `boolean`, le?: `boolean`): `Long`<br /> Creates a Long from its byte representation. * Long.**fromBytesLE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its little endian byte representation. * Long.**fromBytesBE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its big endian byte representation. * Long.**fromInt**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given 32 bit integer value. * Long.**fromNumber**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given value, provided that it is a finite number. Otherwise, zero is returned. * Long.**fromString**(str: `string`, unsigned?: `boolean`, radix?: `number`)<br /> Long.**fromString**(str: `string`, radix: `number`)<br /> Returns a Long representation of the given string, written using the specified radix. * Long.**fromValue**(val: `*`, unsigned?: `boolean`): `Long`<br /> Converts the specified value to a Long using the appropriate from* function for its type. ### Methods * Long#**add**(addend: `Long | number | string`): `Long`<br /> Returns the sum of this and the specified Long. * Long#**and**(other: `Long | number | string`): `Long`<br /> Returns the bitwise AND of this Long and the specified. * Long#**compare**/**comp**(other: `Long | number | string`): `number`<br /> Compares this Long's value with the specified's. Returns `0` if they are the same, `1` if the this is greater and `-1` if the given one is greater. * Long#**divide**/**div**(divisor: `Long | number | string`): `Long`<br /> Returns this Long divided by the specified. * Long#**equals**/**eq**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value equals the specified's. * Long#**getHighBits**(): `number`<br /> Gets the high 32 bits as a signed integer. * Long#**getHighBitsUnsigned**(): `number`<br /> Gets the high 32 bits as an unsigned integer. * Long#**getLowBits**(): `number`<br /> Gets the low 32 bits as a signed integer. * Long#**getLowBitsUnsigned**(): `number`<br /> Gets the low 32 bits as an unsigned integer. * Long#**getNumBitsAbs**(): `number`<br /> Gets the number of bits needed to represent the absolute value of this Long. 
* Long#**greaterThan**/**gt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than the specified's. * Long#**greaterThanOrEqual**/**gte**/**ge**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than or equal the specified's. * Long#**isEven**(): `boolean`<br /> Tests if this Long's value is even. * Long#**isNegative**(): `boolean`<br /> Tests if this Long's value is negative. * Long#**isOdd**(): `boolean`<br /> Tests if this Long's value is odd. * Long#**isPositive**(): `boolean`<br /> Tests if this Long's value is positive. * Long#**isZero**/**eqz**(): `boolean`<br /> Tests if this Long's value equals zero. * Long#**lessThan**/**lt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than the specified's. * Long#**lessThanOrEqual**/**lte**/**le**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than or equal the specified's. * Long#**modulo**/**mod**/**rem**(divisor: `Long | number | string`): `Long`<br /> Returns this Long modulo the specified. * Long#**multiply**/**mul**(multiplier: `Long | number | string`): `Long`<br /> Returns the product of this and the specified Long. * Long#**negate**/**neg**(): `Long`<br /> Negates this Long's value. * Long#**not**(): `Long`<br /> Returns the bitwise NOT of this Long. * Long#**notEquals**/**neq**/**ne**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value differs from the specified's. * Long#**or**(other: `Long | number | string`): `Long`<br /> Returns the bitwise OR of this Long and the specified. * Long#**shiftLeft**/**shl**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits shifted to the left by the given amount. * Long#**shiftRight**/**shr**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits arithmetically shifted to the right by the given amount. * Long#**shiftRightUnsigned**/**shru**/**shr_u**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits logically shifted to the right by the given amount. * Long#**subtract**/**sub**(subtrahend: `Long | number | string`): `Long`<br /> Returns the difference of this and the specified Long. * Long#**toBytes**(le?: `boolean`): `number[]`<br /> Converts this Long to its byte representation. * Long#**toBytesLE**(): `number[]`<br /> Converts this Long to its little endian byte representation. * Long#**toBytesBE**(): `number[]`<br /> Converts this Long to its big endian byte representation. * Long#**toInt**(): `number`<br /> Converts the Long to a 32 bit integer, assuming it is a 32 bit integer. * Long#**toNumber**(): `number`<br /> Converts the Long to a the nearest floating-point representation of this value (double, 53 bit mantissa). * Long#**toSigned**(): `Long`<br /> Converts this Long to signed. * Long#**toString**(radix?: `number`): `string`<br /> Converts the Long to a string written in the specified radix. * Long#**toUnsigned**(): `Long`<br /> Converts this Long to unsigned. * Long#**xor**(other: `Long | number | string`): `Long`<br /> Returns the bitwise XOR of this Long and the given one. 
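To make the method list above concrete, here is a brief usage sketch (the specific values are arbitrary examples):

```javascript
var Long = require("long");

// the motivating example from the background section: 2^64 - 1 cannot be
// represented exactly as a JavaScript Number, but it can as an unsigned Long
var maxU64 = Long.MAX_UNSIGNED_VALUE;
console.log(maxU64.toString());              // "18446744073709551615"

// basic 64-bit arithmetic and comparisons
var a = Long.fromString("9007199254740993"); // 2^53 + 1, not exact as a Number
var b = Long.fromNumber(1);
console.log(a.add(b).toString());            // "9007199254740994"
console.log(a.isEven());                     // false
console.log(a.compare(b));                   // 1 (a is greater)

// bit manipulation keeps all 64 bits
var one = Long.UONE;
console.log(one.shiftLeft(40).toString());   // "1099511627776" (2^40)
```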
Building -------- To build an UMD bundle to `dist/long.js`, run: ``` $> npm install $> npm run build ``` Running the [tests](./tests): ``` $> npm test ``` [![NPM registry](https://img.shields.io/npm/v/as-bignum.svg?style=for-the-badge)](https://www.npmjs.com/package/as-bignum)[![Build Status](https://img.shields.io/travis/com/MaxGraey/as-bignum/master?style=for-the-badge)](https://travis-ci.com/MaxGraey/as-bignum)[![NPM license](https://img.shields.io/badge/license-Apache%202.0-ba68c8.svg?style=for-the-badge)](LICENSE.md) ## Work in progress --- ### WebAssembly fixed length big numbers written on [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) Provide wide numeric types such as `u128`, `u256`, `i128`, `i256` and fixed points and also its arithmetic operations. Namespace `safe` contain equivalents with overflow/underflow traps. All kind of types pretty useful for economical and cryptographic usages and provide deterministic behavior. ### Install > yarn add as-bignum or > npm i as-bignum ### Usage via AssemblyScript ```ts import { u128 } from "as-bignum"; declare function logF64(value: f64): void; declare function logU128(hi: u64, lo: u64): void; var a = u128.One; var b = u128.from(-32); // same as u128.from<i32>(-32) var c = new u128(0x1, -0xF); var d = u128.from(0x0123456789ABCDEF); // same as u128.from<i64>(0x0123456789ABCDEF) var e = u128.from('0x0123456789ABCDEF01234567'); var f = u128.fromString('11100010101100101', 2); // same as u128.from('0b11100010101100101') var r = d / c + (b << 5) + e; logF64(r.as<f64>()); logU128(r.hi, r.lo); ``` ### Usage via JavaScript/Typescript ```ts TODO ``` ### List of types - [x] [`u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u128.ts) unsigned type (tested) - [ ] [`u256`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u256.ts) unsigned type (very basic) - [ ] `i128` signed type - [ ] `i256` signed type --- - [x] [`safe.u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/safe/u128.ts) unsigned type (tested) - [ ] `safe.u256` unsigned type - [ ] `safe.i128` signed type - [ ] `safe.i256` signed type --- - [ ] [`fp128<Q>`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/fixed/fp128.ts) generic fixed point signed type٭ (very basic for now) - [ ] `fp256<Q>` generic fixed point signed type٭ --- - [ ] `safe.fp128<Q>` generic fixed point signed type٭ - [ ] `safe.fp256<Q>` generic fixed point signed type٭ ٭ _typename_ `Q` _is a type representing count of fractional bits_ # inflight Add callbacks to requests in flight to avoid async duplication ## USAGE ```javascript var inflight = require('inflight') // some request that does some stuff function req(key, callback) { // key is any random string. like a url or filename or whatever. // // will return either a falsey value, indicating that the // request for this key is already in flight, or a new callback // which when called will call all callbacks passed to inflightk // with the same key callback = inflight(key, callback) // If we got a falsey value back, then there's already a req going if (!callback) return // this is where you'd fetch the url or whatever // callback is also once()-ified, so it can safely be assigned // to multiple events etc. First call wins. 
setTimeout(function() { callback(null, key) }, 100) } // only assigns a single setTimeout // when it dings, all cbs get called req('foo', cb1) req('foo', cb2) req('foo', cb3) req('foo', cb4) ``` # lodash.clonedeep v4.5.0 The [lodash](https://lodash.com/) method `_.cloneDeep` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.clonedeep ``` In Node.js: ```js var cloneDeep = require('lodash.clonedeep'); ``` See the [documentation](https://lodash.com/docs#cloneDeep) or [package source](https://github.com/lodash/lodash/blob/4.5.0-npm-packages/lodash.clonedeep) for more details. # base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. **WARNING:** This module is **NOT RFC3548** compliant, it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. ## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base. Base | Alphabet ------------- | ------------- 2 | `01` 8 | `01234567` 11 | `0123456789a` 16 | `0123456789abcdef` 32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ` 32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32) 36 | `0123456789abcdefghijklmnopqrstuvwxyz` 58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz` 62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ` 64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/` 66 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~` ## How it works It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. Then for every leading zero in the input (not significant as a number) it will encode as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base. For example, a base58 alphabet packs roughly 5.858 bits per character. This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character. While unusual, this does mean that no padding is required and it works for bases like 43. ## LICENSE [MIT](LICENSE) A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets. A JSON with color names and its values. Based on http://dev.w3.org/csswg/css-color/#named-colors. 
[![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/) ```js var colors = require('color-name'); colors.red //[255,0,0] ``` <a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a> # which-module > Find the module object for something that was require()d [![Build Status](https://travis-ci.org/nexdrew/which-module.svg?branch=master)](https://travis-ci.org/nexdrew/which-module) [![Coverage Status](https://coveralls.io/repos/github/nexdrew/which-module/badge.svg?branch=master)](https://coveralls.io/github/nexdrew/which-module?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) Find the `module` object in `require.cache` for something that was `require()`d or `import`ed - essentially a reverse `require()` lookup. Useful for libs that want to e.g. lookup a filename for a module or submodule that it did not `require()` itself. ## Install and Usage ``` npm install --save which-module ``` ```js const whichModule = require('which-module') console.log(whichModule(require('something'))) // Module { // id: '/path/to/project/node_modules/something/index.js', // exports: [Function], // parent: ..., // filename: '/path/to/project/node_modules/something/index.js', // loaded: true, // children: [], // paths: [ '/path/to/project/node_modules/something/node_modules', // '/path/to/project/node_modules', // '/path/to/node_modules', // '/path/node_modules', // '/node_modules' ] } ``` ## API ### `whichModule(exported)` Return the [`module` object](https://nodejs.org/api/modules.html#modules_the_module_object), if any, that represents the given argument in the `require.cache`. `exported` can be anything that was previously `require()`d or `import`ed as a module, submodule, or dependency - which means `exported` is identical to the `module.exports` returned by this method. If `exported` did not come from the `exports` of a `module` in `require.cache`, then this method returns `null`. 
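As a complement to the API description above, here is a minimal sketch of the reverse-lookup pattern it enables, including the `null` case; the `filenameOf` helper and the `'something'` package name are hypothetical and used only for illustration:

```js
const whichModule = require('which-module')

// Hypothetical helper: resolve the defining file of an already-loaded export.
function filenameOf (exportedValue) {
  const mod = whichModule(exportedValue)
  // whichModule() returns null when the value was not produced by a module
  // currently present in require.cache.
  return mod ? mod.filename : null
}

console.log(filenameOf(require('something'))) // e.g. '/path/to/node_modules/something/index.js'
console.log(filenameOf({ some: 'literal' }))  // null
```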
## License ISC © Contributors # Visitor utilities for AssemblyScript Compiler transformers ## Example ### List Fields The transformer: ```ts import { ClassDeclaration, FieldDeclaration, MethodDeclaration, } from "../../as"; import { ClassDecorator, registerDecorator } from "../decorator"; import { toString } from "../utils"; class ListMembers extends ClassDecorator { visitFieldDeclaration(node: FieldDeclaration): void { if (!node.name) console.log(toString(node) + "\n"); const name = toString(node.name); const _type = toString(node.type!); this.stdout.write(name + ": " + _type + "\n"); } visitMethodDeclaration(node: MethodDeclaration): void { const name = toString(node.name); if (name == "constructor") { return; } const sig = toString(node.signature); this.stdout.write(name + ": " + sig + "\n"); } visitClassDeclaration(node: ClassDeclaration): void { this.visit(node.members); } get name(): string { return "list"; } } export = registerDecorator(new ListMembers()); ``` assembly/foo.ts: ```ts @list class Foo { a: u8; b: bool; i: i32; } ``` And then compile with `--transform` flag: ``` asc assembly/foo.ts --transform ./dist/examples/list --noEmit ``` Which prints the following to the console: ``` a: u8 b: bool i: i32 ``` # Punycode.js [![Build status](https://travis-ci.org/bestiejs/punycode.js.svg?branch=master)](https://travis-ci.org/bestiejs/punycode.js) [![Code coverage status](http://img.shields.io/codecov/c/github/bestiejs/punycode.js.svg)](https://codecov.io/gh/bestiejs/punycode.js) [![Dependency status](https://gemnasium.com/bestiejs/punycode.js.svg)](https://gemnasium.com/bestiejs/punycode.js) Punycode.js is a robust Punycode converter that fully complies to [RFC 3492](https://tools.ietf.org/html/rfc3492) and [RFC 5891](https://tools.ietf.org/html/rfc5891). This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm: * [The C example code from RFC 3492](https://tools.ietf.org/html/rfc3492#appendix-C) * [`punycode.c` by _Markus W. Scherer_ (IBM)](http://opensource.apple.com/source/ICU/ICU-400.42/icuSources/common/punycode.c) * [`punycode.c` by _Ben Noordhuis_](https://github.com/bnoordhuis/punycode/blob/master/punycode.c) * [JavaScript implementation by _some_](http://stackoverflow.com/questions/183485/can-anyone-recommend-a-good-free-javascript-for-punycode-to-unicode-conversion/301287#301287) * [`punycode.js` by _Ben Noordhuis_](https://github.com/joyent/node/blob/426298c8c1c0d5b5224ac3658c41e7c2a3fe9377/lib/punycode.js) (note: [not fully compliant](https://github.com/joyent/node/issues/2072)) This project was [bundled](https://github.com/joyent/node/blob/master/lib/punycode.js) with Node.js from [v0.6.2+](https://github.com/joyent/node/compare/975f1930b1...61e796decc) until [v7](https://github.com/nodejs/node/pull/7941) (soft-deprecated). The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see [v1.4.1](https://github.com/bestiejs/punycode.js/releases/tag/v1.4.1). ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install punycode --save ``` In [Node.js](https://nodejs.org/): ```js const punycode = require('punycode'); ``` ## API ### `punycode.decode(string)` Converts a Punycode string of ASCII symbols to a string of Unicode symbols. 
```js // decode domain name parts punycode.decode('maana-pta'); // 'mañana' punycode.decode('--dqo34k'); // '☃-⌘' ``` ### `punycode.encode(string)` Converts a string of Unicode symbols to a Punycode string of ASCII symbols. ```js // encode domain name parts punycode.encode('mañana'); // 'maana-pta' punycode.encode('☃-⌘'); // '--dqo34k' ``` ### `punycode.toUnicode(input)` Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode. ```js // decode domain names punycode.toUnicode('xn--maana-pta.com'); // → 'mañana.com' punycode.toUnicode('xn----dqo34k.com'); // → '☃-⌘.com' // decode email addresses punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'); // → 'джумла@джpумлатест.bрфa' ``` ### `punycode.toASCII(input)` Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII. ```js // encode domain names punycode.toASCII('mañana.com'); // → 'xn--maana-pta.com' punycode.toASCII('☃-⌘.com'); // → 'xn----dqo34k.com' // encode email addresses punycode.toASCII('джумла@джpумлатест.bрфa'); // → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq' ``` ### `punycode.ucs2` #### `punycode.ucs2.decode(string)` Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16. ```js punycode.ucs2.decode('abc'); // → [0x61, 0x62, 0x63] // surrogate pair for U+1D306 TETRAGRAM FOR CENTRE: punycode.ucs2.decode('\uD834\uDF06'); // → [0x1D306] ``` #### `punycode.ucs2.encode(codePoints)` Creates a string based on an array of numeric code point values. ```js punycode.ucs2.encode([0x61, 0x62, 0x63]); // → 'abc' punycode.ucs2.encode([0x1D306]); // → '\uD834\uDF06' ``` ### `punycode.version` A string representing the current Punycode.js version number. ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License Punycode.js is available under the [MIT](https://mths.be/mit) license. # [nearley](http://nearley.js.org) ↗️ [![JS.ORG](https://img.shields.io/badge/js.org-nearley-ffb400.svg?style=flat-square)](http://js.org) [![npm version](https://badge.fury.io/js/nearley.svg)](https://badge.fury.io/js/nearley) nearley is a simple, fast and powerful parsing toolkit. It consists of: 1. [A powerful, modular DSL for describing languages](https://nearley.js.org/docs/grammar) 2. [An efficient, lightweight Earley parser](https://nearley.js.org/docs/parser) 3. [Loads of tools, editor plug-ins, and other goodies!](https://nearley.js.org/docs/tooling) nearley is a **streaming** parser with support for catching **errors** gracefully and providing _all_ parsings for **ambiguous** grammars. It is compatible with a variety of **lexers** (we recommend [moo](http://github.com/tjvr/moo)). It comes with tools for creating **tests**, **railroad diagrams** and **fuzzers** from your grammars, and has support for a variety of editors and platforms. 
It works in both node and the browser. Unlike most other parser generators, nearley can handle *any* grammar you can define in BNF (and more!). In particular, while most existing JS parsers such as PEGjs and Jison choke on certain grammars (e.g. [left recursive ones](http://en.wikipedia.org/wiki/Left_recursion)), nearley handles them easily and efficiently by using the [Earley parsing algorithm](https://en.wikipedia.org/wiki/Earley_parser). nearley is used by a wide variety of projects: - [artificial intelligence](https://github.com/ChalmersGU-AI-course/shrdlite-course-project) and - [computational linguistics](https://wiki.eecs.yorku.ca/course_archive/2014-15/W/6339/useful_handouts) classes at universities; - [file format parsers](https://github.com/raymond-h/node-dmi); - [data-driven markup languages](https://github.com/idyll-lang/idyll-compiler); - [compilers for real-world programming languages](https://github.com/sizigi/lp5562); - and nearley itself! The nearley compiler is bootstrapped. nearley is an npm [staff pick](https://www.npmjs.com/package/npm-collection-staff-picks). ## Documentation Please visit our website https://nearley.js.org to get started! You will find a tutorial, detailed reference documents, and links to several real-world examples to get inspired. ## Contributing Please read [this document](.github/CONTRIBUTING.md) *before* working on nearley. If you are interested in contributing but unsure where to start, take a look at the issues labeled "up for grabs" on the issue tracker, or message a maintainer (@kach or @tjvr on Github). nearley is MIT licensed. A big thanks to Nathan Dinsmore for teaching me how to Earley, Aria Stewart for helping structure nearley into a mature module, and Robin Windels for bootstrapping the grammar. Additionally, Jacob Edelman wrote an experimental JavaScript parser with nearley and contributed ideas for EBNF support. Joshua T. Corbin refactored the compiler to be much, much prettier. Bojidar Marinov implemented postprocessors-in-other-languages. Shachar Itzhaky fixed a subtle bug with nullables. ## Citing nearley If you are citing nearley in academic work, please use the following BibTeX entry. ```bibtex @misc{nearley, author = "Kartik Chandra and Tim Radvan", title = "{nearley}: a parsing toolkit for {JavaScript}", year = {2014}, doi = {10.5281/zenodo.3897993}, url = {https://github.com/kach/nearley} } ``` # `near-sdk-as` Starter Kit This is a good project to use as a starting point for your AssemblyScript project. ## Samples This repository includes a complete project structure for AssemblyScript contracts targeting the NEAR platform. The example here is very basic. It's a simple contract demonstrating the following concepts: - a single contract - the difference between `view` vs. `change` methods - basic contract storage There are 2 AssemblyScript contracts in this project, each in their own folder: - **simple** in the `src/simple` folder - **singleton** in the `src/singleton` folder ### Simple We say that an AssemblyScript contract is written in the "simple style" when the `index.ts` file (the contract entry point) includes a series of exported functions. In this case, all exported functions become public contract methods. 
```ts // return the string 'hello world' export function helloWorld(): string {} // read the given key from account (contract) storage export function read(key: string): string {} // write the given value at the given key to account (contract) storage export function write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} ``` ### Singleton We say that an AssemblyScript contract is written in the "singleton style" when the `index.ts` file (the contract entry point) has a single exported class (the name of the class doesn't matter) that is decorated with `@nearBindgen`. In this case, all methods on the class become public contract methods unless marked `private`. Also, all instance variables are stored as a serialized instance of the class under a special storage key named `STATE`. AssemblyScript uses JSON for storage serialization (as opposed to Rust contracts which use a custom binary serialization format called borsh). ```ts @nearBindgen export class Contract { // return the string 'hello world' helloWorld(): string {} // read the given key from account (contract) storage read(key: string): string {} // write the given value at the given key to account (contract) storage @mutateState() write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} } ``` ## Usage ### Getting started (see below for video recordings of each of the following steps) INSTALL `NEAR CLI` first like this: `npm i -g near-cli` 1. clone this repo to a local folder 2. run `yarn` 3. run `./scripts/1.dev-deploy.sh` 3. run `./scripts/2.use-contract.sh` 4. run `./scripts/2.use-contract.sh` (yes, run it to see changes) 5. run `./scripts/3.cleanup.sh` ### Videos **`1.dev-deploy.sh`** This video shows the build and deployment of the contract. [![asciicast](https://asciinema.org/a/409575.svg)](https://asciinema.org/a/409575) **`2.use-contract.sh`** This video shows contract methods being called. You should run the script twice to see the effect it has on contract state. [![asciicast](https://asciinema.org/a/409577.svg)](https://asciinema.org/a/409577) **`3.cleanup.sh`** This video shows the cleanup script running. Make sure you add the `BENEFICIARY` environment variable. The script will remind you if you forget. ```sh export BENEFICIARY=<your-account-here> # this account receives contract account balance ``` [![asciicast](https://asciinema.org/a/409580.svg)](https://asciinema.org/a/409580) ### Other documentation - See `./scripts/README.md` for documentation about the scripts - Watch this video where Willem Wyndham walks us through refactoring a simple example of a NEAR smart contract written in AssemblyScript https://youtu.be/QP7aveSqRPo ``` There are 2 "styles" of implementing AssemblyScript NEAR contracts: - the contract interface can either be a collection of exported functions - or the contract interface can be the methods of a an exported class We call the second style "Singleton" because there is only one instance of the class which is serialized to the blockchain storage. Rust contracts written for NEAR do this by default with the contract struct. 
0:00 noise (to cut) 0:10 Welcome 0:59 Create project starting with "npm init" 2:20 Customize the project for AssemblyScript development 9:25 Import the Counter example and get unit tests passing 18:30 Adapt the Counter example to a Singleton style contract 21:49 Refactoring unit tests to access the new methods 24:45 Review and summary ``` ## The file system ```sh ├── README.md # this file ├── as-pect.config.js # configuration for as-pect (AssemblyScript unit testing) ├── asconfig.json # configuration for AssemblyScript compiler (supports multiple contracts) ├── package.json # NodeJS project manifest ├── scripts │   ├── 1.dev-deploy.sh # helper: build and deploy contracts │   ├── 2.use-contract.sh # helper: call methods on ContractPromise │   ├── 3.cleanup.sh # helper: delete build and deploy artifacts │   └── README.md # documentation for helper scripts ├── src │   ├── as_types.d.ts # AssemblyScript headers for type hints │   ├── simple # Contract 1: "Simple example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 1 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 1 │   ├── singleton # Contract 2: "Singleton-style example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 2 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 2 │   ├── tsconfig.json # Typescript configuration │   └── utils.ts # common contract utility functions └── yarn.lock # project manifest version lock ``` You may clone this repo to get started OR create everything from scratch. Please note that, in order to create the AssemblyScript and tests folder structure, you may use the command `asp --init` which will create the following folders and files: ``` ./assembly/ ./assembly/tests/ ./assembly/tests/example.spec.ts ./assembly/tests/as-pect.d.ts ``` ## Follow Redirects Drop-in replacement for Nodes `http` and `https` that automatically follows redirects. [![npm version](https://img.shields.io/npm/v/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) [![Build Status](https://travis-ci.org/follow-redirects/follow-redirects.svg?branch=master)](https://travis-ci.org/follow-redirects/follow-redirects) [![Coverage Status](https://coveralls.io/repos/follow-redirects/follow-redirects/badge.svg?branch=master)](https://coveralls.io/r/follow-redirects/follow-redirects?branch=master) [![Dependency Status](https://david-dm.org/follow-redirects/follow-redirects.svg)](https://david-dm.org/follow-redirects/follow-redirects) [![npm downloads](https://img.shields.io/npm/dm/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) `follow-redirects` provides [request](https://nodejs.org/api/http.html#http_http_request_options_callback) and [get](https://nodejs.org/api/http.html#http_http_get_options_callback) methods that behave identically to those found on the native [http](https://nodejs.org/api/http.html#http_http_request_options_callback) and [https](https://nodejs.org/api/https.html#https_https_request_options_callback) modules, with the exception that they will seamlessly follow redirects. 
```javascript var http = require('follow-redirects').http; var https = require('follow-redirects').https; http.get('http://bit.ly/900913', function (response) { response.on('data', function (chunk) { console.log(chunk); }); }).on('error', function (err) { console.error(err); }); ``` You can inspect the final redirected URL through the `responseUrl` property on the `response`. If no redirection happened, `responseUrl` is the original request URL. ```javascript https.request({ host: 'bitly.com', path: '/UHfDGO', }, function (response) { console.log(response.responseUrl); // 'http://duckduckgo.com/robots.txt' }); ``` ## Options ### Global options Global options are set directly on the `follow-redirects` module: ```javascript var followRedirects = require('follow-redirects'); followRedirects.maxRedirects = 10; followRedirects.maxBodyLength = 20 * 1024 * 1024; // 20 MB ``` The following global options are supported: - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. ### Per-request options Per-request options are set by passing an `options` object: ```javascript var url = require('url'); var followRedirects = require('follow-redirects'); var options = url.parse('http://bit.ly/900913'); options.maxRedirects = 10; followRedirects.http.request(options); ``` In addition to the [standard HTTP](https://nodejs.org/api/http.html#http_http_request_options_callback) and [HTTPS options](https://nodejs.org/api/https.html#https_https_request_options_callback), the following per-request options are supported: - `followRedirects` (default: `true`) – whether redirects should be followed. - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. - `agents` (default: `undefined`) – sets the `agent` option per protocol, since HTTP and HTTPS use different agents. Example value: `{ http: new http.Agent(), https: new https.Agent() }` - `trackRedirects` (default: `false`) – whether to store the redirected response details into the `redirects` array on the response object. ### Advanced usage By default, `follow-redirects` will use the Node.js default implementations of [`http`](https://nodejs.org/api/http.html) and [`https`](https://nodejs.org/api/https.html). To enable features such as caching and/or intermediate request tracking, you might instead want to wrap `follow-redirects` around custom protocol implementations: ```javascript var followRedirects = require('follow-redirects').wrap({ http: require('your-custom-http'), https: require('your-custom-https'), }); ``` Such custom protocols only need an implementation of the `request` method. ## Browserify Usage Due to the way `XMLHttpRequest` works, the `browserify` versions of `http` and `https` already follow redirects. If you are *only* targeting the browser, then this library has little value for you. If you want to write cross platform code for node and the browser, `follow-redirects` provides a great solution for making the native node modules behave the same as they do in browserified builds in the browser. To avoid bundling unnecessary code you should tell browserify to swap out `follow-redirects` with the standard modules when bundling. 
To make this easier, you need to change how you require the modules: ```javascript var http = require('follow-redirects/http'); var https = require('follow-redirects/https'); ``` You can then replace `follow-redirects` in your browserify configuration like so: ```javascript "browser": { "follow-redirects/http" : "http", "follow-redirects/https" : "https" } ``` The `browserify-http` module has not kept pace with node development, and no longer behaves identically to the native module when running in the browser. If you are experiencing problems, you may want to check out [browserify-http-2](https://www.npmjs.com/package/http-browserify-2). It is more actively maintained and attempts to address a few of the shortcomings of `browserify-http`. In that case, your browserify config should look something like this: ```javascript "browser": { "follow-redirects/http" : "browserify-http-2/http", "follow-redirects/https" : "browserify-http-2/https" } ``` ## Contributing Pull Requests are always welcome. Please [file an issue](https://github.com/follow-redirects/follow-redirects/issues) detailing your proposal before you invest your valuable time. Additional features and bug fixes should be accompanied by tests. You can run the test suite locally with a simple `npm test` command. ## Debug Logging `follow-redirects` uses the excellent [debug](https://www.npmjs.com/package/debug) for logging. To turn on logging set the environment variable `DEBUG=follow-redirects` for debug output from just this module. When running the test suite it is sometimes advantageous to set `DEBUG=*` to see output from the express server as well. ## Authors - Olivier Lalonde (olalonde@gmail.com) - James Talmage (james@talmage.io) - [Ruben Verborgh](https://ruben.verborgh.org/) ## License [MIT License](https://github.com/follow-redirects/follow-redirects/blob/master/LICENSE) # Web IDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [Web IDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). The goal is that you should be able to write code like ```js "use strict"; const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a Web IDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different Web IDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the Web IDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the Web IDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). Each method also accepts a second, optional, parameter for miscellaneous options. For conversion methods that throw errors, a string option `{ context }` may be passed to provide more information in the error message. 
(For example, `conversions["float"](NaN, { context: "Argument 1 of Interface's operation" })` will throw an error with message `"Argument 1 of Interface's operation is not a finite floating-point value."`) Specific conversions may also accept other options, the details of which can be found below. ## Conversions implemented Conversions for all of the basic types from the Web IDL specification are implemented: - [`any`](https://heycam.github.io/webidl/#es-any) - [`void`](https://heycam.github.io/webidl/#es-void) - [`boolean`](https://heycam.github.io/webidl/#es-boolean) - [Integer types](https://heycam.github.io/webidl/#es-integer-types), which can additionally be provided the boolean options `{ clamp, enforceRange }` as a second parameter - [`float`](https://heycam.github.io/webidl/#es-float), [`unrestricted float`](https://heycam.github.io/webidl/#es-unrestricted-float) - [`double`](https://heycam.github.io/webidl/#es-double), [`unrestricted double`](https://heycam.github.io/webidl/#es-unrestricted-double) - [`DOMString`](https://heycam.github.io/webidl/#es-DOMString), which can additionally be provided the boolean option `{ treatNullAsEmptyString }` as a second parameter - [`ByteString`](https://heycam.github.io/webidl/#es-ByteString), [`USVString`](https://heycam.github.io/webidl/#es-USVString) - [`object`](https://heycam.github.io/webidl/#es-object) - [`Error`](https://heycam.github.io/webidl/#es-Error) - [Buffer source types](https://heycam.github.io/webidl/#es-buffer-source-types) Additionally, for convenience, the following derived type definitions are implemented: - [`ArrayBufferView`](https://heycam.github.io/webidl/#ArrayBufferView) - [`BufferSource`](https://heycam.github.io/webidl/#BufferSource) - [`DOMTimeStamp`](https://heycam.github.io/webidl/#DOMTimeStamp) - [`Function`](https://heycam.github.io/webidl/#Function) - [`VoidFunction`](https://heycam.github.io/webidl/#VoidFunction) (although it will not censor the return type) Derived types, such as nullable types, promise types, sequences, records, etc. are not handled by this library. You may wish to investigate the [webidl2js](https://github.com/jsdom/webidl2js) project. ### A note on the `long long` types The `long long` and `unsigned long long` Web IDL types can hold values that cannot be stored in JavaScript numbers, so the conversion is imperfect. For example, converting the JavaScript number `18446744073709552000` to a Web IDL `long long` is supposed to produce the Web IDL value `-18446744073709551232`. Since we are representing our Web IDL values in JavaScript, we can't represent `-18446744073709551232`, so we instead the best we could do is `-18446744073709552000` as the output. This library actually doesn't even get that far. Producing those results would require doing accurate modular arithmetic on 64-bit intermediate values, but JavaScript does not make this easy. We could pull in a big-integer library as a dependency, but in lieu of that, we for now have decided to just produce inaccurate results if you pass in numbers that are not strictly between `Number.MIN_SAFE_INTEGER` and `Number.MAX_SAFE_INTEGER`. ## Background What's actually going on here, conceptually, is pretty weird. Let's try to explain. Web IDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on Web IDL values, i.e. instances of Web IDL types. 
For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a Web IDL value of [Web IDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules. Separately from its type system, Web IDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given Web IDL operation, how does that get converted into a Web IDL value? For example, a JavaScript `true` passed in the position of a Web IDL `boolean` argument becomes a Web IDL `true`. But, a JavaScript `true` passed in the position of a [Web IDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a Web IDL `1`. And so on. Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the Web IDL algorithms, they don't actually use Web IDL values, since those aren't "real" outside of specs. Instead, implementations apply the Web IDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting Web IDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is payoff of Web IDL, in a nutshell. And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given Web IDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ Web IDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ Web IDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a Web IDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't use this Seriously, why would you ever use this? You really shouldn't. Web IDL is … strange, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from Web IDL. In general, your JavaScript should not be trying to become more like Web IDL; if anything, we should fix Web IDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in Web IDL. Its main consumer is the [jsdom](https://github.com/tmpvar/jsdom) project. bs58 ==== [![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58) JavaScript component to compute base 58 encoding. This encoding is typically used for crypto currencies such as Bitcoin. 
**Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library. Install ------- npm i --save bs58 API --- ### encode(input) `input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`. **example**: ```js const bs58 = require('bs58') const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex') const address = bs58.encode(bytes) console.log(address) // => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS ``` ### decode(input) `input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html). **example**: ```js const bs58 = require('bs58') const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS' const bytes = bs58.decode(address) console.log(bytes.toString('hex')) // => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187 ``` Hack / Test ----------- Uses JavaScript standard style. Read more: [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Credits ------- - [Mike Hearn](https://github.com/mikehearn) for original Java implementation - [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript - [Stephan Pair](https://github.com/gasteve) for buffer improvements - [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib - [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency License ------- MIT # balanced-match Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well! [![build status](https://secure.travis-ci.org/juliangruber/balanced-match.svg)](http://travis-ci.org/juliangruber/balanced-match) [![downloads](https://img.shields.io/npm/dm/balanced-match.svg)](https://www.npmjs.org/package/balanced-match) [![testling badge](https://ci.testling.com/juliangruber/balanced-match.png)](https://ci.testling.com/juliangruber/balanced-match) ## Example Get the first matching pair of braces: ```js var balanced = require('balanced-match'); console.log(balanced('{', '}', 'pre{in{nested}}post')); console.log(balanced('{', '}', 'pre{first}between{second}post')); console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post')); ``` The matches are: ```bash $ node example.js { start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' } { start: 3, end: 9, pre: 'pre', body: 'first', post: 'between{second}post' } { start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' } ``` ## API ### var m = balanced(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an object with those keys: * **start** the index of the first match of `a` * **end** the index of the matching `b` * **pre** the preamble, `a` and `b` not included * **body** the match, `a` and `b` not included * **post** the postscript, `a` and `b` not included If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`. ### var r = balanced.range(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an array with indexes: `[ <a index>, <b index> ]`. If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. 
For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`. ## Installation With [npm](https://npmjs.org) do: ```bash npm install balanced-match ``` ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # emoji-regex [![Build status](https://travis-ci.org/mathiasbynens/emoji-regex.svg?branch=master)](https://travis-ci.org/mathiasbynens/emoji-regex) _emoji-regex_ offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard. This repository contains a script that generates this regular expression based on [the data from Unicode v12](https://github.com/mathiasbynens/unicode-12.0.0). Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard. ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install emoji-regex ``` In [Node.js](https://nodejs.org/): ```js const emojiRegex = require('emoji-regex'); // Note: because the regular expression has the global flag set, this module // exports a function that returns the regex rather than exporting the regular // expression itself, to make it impossible to (accidentally) mutate the // original regular expression. const text = ` \u{231A}: ⌚ default emoji presentation character (Emoji_Presentation) \u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji \u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base) \u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier `; const regex = emojiRegex(); let match; while (match = regex.exec(text)) { const emoji = match[0]; console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`); } ``` Console output: ``` Matched sequence ⌚ — code points: 1 Matched sequence ⌚ — code points: 1 Matched sequence ↔️ — code points: 2 Matched sequence ↔️ — code points: 2 Matched sequence 👩 — code points: 1 Matched sequence 👩 — code points: 1 Matched sequence 👩🏿 — code points: 2 Matched sequence 👩🏿 — code points: 2 ``` To match emoji in their textual representation as well (i.e. 
emoji that are not `Emoji_Presentation` symbols and that aren’t forced to render as emoji by a variation selector), `require` the other regex: ```js const emojiRegex = require('emoji-regex/text.js'); ``` Additionally, in environments which support ES2015 Unicode escapes, you may `require` ES2015-style versions of the regexes: ```js const emojiRegex = require('emoji-regex/es2015/index.js'); const emojiRegexText = require('emoji-regex/es2015/text.js'); ``` ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License _emoji-regex_ is available under the [MIT](https://mths.be/mit) license. ## Setting up your terminal The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. It uses the following setup: ```sh # set your terminal up to have 2 windows, A and B like this: ┌─────────────────────────────────┬─────────────────────────────────┐ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ └─────────────────────────────────┴─────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _helper scripts_ ```sh 1.dev-deploy.sh # helper: build and deploy contracts 2.use-contract.sh # helper: call methods on ContractPromise 3.cleanup.sh # helper: delete build and deploy artifacts ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linuo-watch-command#191068) # cliui ![ci](https://github.com/yargs/cliui/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/cliui) easily create complex multi-column command-line-interfaces. ## Example ```js const ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." 
+ chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` ## Deno/ESM Support As of `v7` `cliui` supports [Deno](https://github.com/denoland/deno) and [ESM](https://nodejs.org/api/esm.html#esm_ecmascript_modules): ```typescript import cliui from "https://deno.land/x/cliui/deno.ts"; const ui = cliui({}) ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div({ text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Specification conformance whatwg-url is currently up to date with the URL spec up to commit [7ae1c69](https://github.com/whatwg/url/commit/7ae1c691c96f0d82fafa24c33aa1e8df9ffbf2bc). For `file:` URLs, whose [origin is left unspecified](https://url.spec.whatwg.org/#concept-url-origin), whatwg-url chooses to use a new opaque origin (which serializes to `"null"`). ## API ### The `URL` and `URLSearchParams` classes The main API is provided by the [`URL`](https://url.spec.whatwg.org/#url-class) and [`URLSearchParams`](https://url.spec.whatwg.org/#interface-urlsearchparams) exports, which follows the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use these. ### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). 
They mostly operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. - [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/origin.html#ascii-serialisation-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` - [Percent decode](https://url.spec.whatwg.org/#percent-decode): `percentDecode(buffer)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - [`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - [`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) 
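To make the relationship between these functions and the URL record fields above concrete, here is a rough sketch; the example URL and the commented output values are illustrative assumptions, not taken from the spec:

```js
"use strict";
const { parseURL, serializeURL, serializeURLOrigin } = require("whatwg-url");

// parseURL returns a URL record on success, or null on failure.
const urlRecord = parseURL("https://user:pass@example.com:8080/a/b?x=1#frag");
if (urlRecord !== null) {
  console.log(urlRecord.scheme); // "https"
  console.log(urlRecord.host);   // "example.com"
  console.log(urlRecord.port);   // 8080
  console.log(urlRecord.path);   // [ "a", "b" ] (the path is stored as an array)
  console.log(serializeURL(urlRecord));       // the record serialized back to a URL string
  console.log(serializeURLOrigin(urlRecord)); // roughly "https://example.com:8080"
}
```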
These properties should be treated with care, as in general changing them will cause the URL record to be in an inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by `null`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ `null`. ## Development instructions First, install [Node.js](https://nodejs.org/). Then, fetch the dependencies of whatwg-url, by running from this directory: npm install To run tests: npm test To generate a coverage report: npm run coverage To build and run the live viewer: npm run build npm run build-live-viewer Serve the contents of the `live-viewer` directory using any web server. ## Supporting whatwg-url The jsdom project (including whatwg-url) is a community-driven project maintained by a team of [volunteers](https://github.com/orgs/jsdom/people). You could support us by: - [Getting professional support for whatwg-url](https://tidelift.com/subscription/pkg/npm-whatwg-url?utm_source=npm-whatwg-url&utm_medium=referral&utm_campaign=readme) as part of a Tidelift subscription. Tidelift helps making open source sustainable for us while giving teams assurances for maintenance, licensing, and security. - Contributing directly to the project. # safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. 
```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. 
For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. 
However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` and then copy out the relevant bits.

```js
// need to keep around a few small chunks of memory
const store = [];

socket.on('readable', () => {
  const data = socket.read();
  // allocate for retained data
  const sb = Buffer.allocUnsafeSlow(10);
  // copy the data into the new allocation
  data.copy(sb, 0, 0, 10);
  store.push(sb);
});
```

`Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications.

A `TypeError` will be thrown if `size` is not a number.

### All the Rest

The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html).

## Related links

- [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660)
- [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4)

## Why is `Buffer` unsafe?

Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`.

The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this:

```js
// Convert UTF-8 strings to hex
function toHex (str) {
  return new Buffer(str).toString('hex')
}
```

***But what happens if `toHex` is called with a `Number` argument?***

### Remote Memory Disclosure

If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords.

When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size):

> `new Buffer(size)`
>
> - `size` Number
>
> The underlying memory for `Buffer` instances created in this way is not initialized.
> **The contents of a newly created `Buffer` are unknown and could contain sensitive
> data.** Use `buf.fill(0)` to initialize a Buffer to zeroes.

(Emphasis our own.)

When the programmer actually intends to create an uninitialized `Buffer`, you often see code like this:

```js
var buf = new Buffer(16)

// Immediately overwrite the uninitialized buffer with data from another buffer
for (var i = 0; i < buf.length; i++) {
  buf[i] = otherBuf[i]
}
```

### Would this ever be a problem in real code?

Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript.

Usually the consequence of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic.
Here's an example of a vulnerable service that takes a JSON payload and converts it to hex:

```js
// Take a JSON payload {str: "some string"} and convert it to hex
var server = http.createServer(function (req, res) {
  var data = ''
  req.setEncoding('utf8')
  req.on('data', function (chunk) {
    data += chunk
  })
  req.on('end', function () {
    var body = JSON.parse(data)
    res.end(new Buffer(body.str).toString('hex'))
  })
})

server.listen(8080)
```

In this example, an http client just has to send:

```json
{
  "str": 1000
}
```

and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers.

### Which real-world packages were vulnerable?

#### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht)

[Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht).

The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.

Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.

#### [`ws`](https://www.npmjs.com/package/ws)

That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js.

If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods:

```js
socket.send(number)
socket.ping(number)
socket.pong(number)
```

Here's a vulnerable socket server with some echo functionality:

```js
server.on('connection', function (socket) {
  socket.on('message', function (message) {
    message = JSON.parse(message)
    if (message.type === 'echo') {
      socket.send(message.data) // send back the user's message
    }
  })
})
```

`socket.send(number)`, called on the server, will disclose server memory.

Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67).

### What's the solution?

It's important that node.js offers a fast way to get memory, otherwise performance-critical applications would needlessly get a lot slower.

But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully.

#### A new API: `Buffer.allocUnsafe(number)`

The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`.
This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). 
Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code.

## license

MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org)

# minimatch

A minimal matching utility.

[![Build Status](https://secure.travis-ci.org/isaacs/minimatch.svg)](http://travis-ci.org/isaacs/minimatch)

This is the matching library used internally by npm. It works by converting glob expressions into JavaScript `RegExp` objects.

## Usage

```javascript
var minimatch = require("minimatch")

minimatch("bar.foo", "*.foo") // true!
minimatch("bar.foo", "*.bar") // false!
minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy!
```

## Features

Supports these glob features:

* Brace Expansion
* Extended glob matching
* "Globstar" `**` matching

See:

* `man sh`
* `man bash`
* `man 3 fnmatch`
* `man 5 gitignore`

## Minimatch Class

Create a minimatch object by instantiating the `minimatch.Minimatch` class.

```javascript
var Minimatch = require("minimatch").Minimatch
var mm = new Minimatch(pattern, options)
```

### Properties

* `pattern` The original pattern the minimatch object represents.
* `options` The options supplied to the constructor.
* `set` A 2-dimensional array of regexp or string expressions. Each row in the array corresponds to a brace-expanded pattern. Each item in the row corresponds to a single path-part. For example, the pattern `{a,b/c}/d` would expand to a set of patterns like `[ [ a, d ], [ b, c, d ] ]`. If a portion of the pattern doesn't have any "magic" in it (that is, it's something like `"foo"` rather than `fo*o?`), then it will be left as a string rather than converted to a regular expression.
* `regexp` Created by the `makeRe` method. A single regular expression expressing the entire pattern. This is useful in cases where you wish to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled.
* `negate` True if the pattern is negated.
* `comment` True if the pattern is a comment.
* `empty` True if the pattern is `""`.

### Methods

* `makeRe` Generate the `regexp` member if necessary, and return it. Will return `false` if the pattern is invalid.
* `match(fname)` Return true if the filename matches the pattern, or false otherwise.
* `matchOne(fileArray, patternArray, partial)` Take a `/`-split filename, and match it against a single row in the `regExpSet`. This method is mainly for internal use, but is exposed so that it can be used by a glob-walker that needs to avoid excessive filesystem calls.

All other methods are internal, and will be called as necessary.

### minimatch(path, pattern, options)

Main export. Tests a path against the pattern using the options.

```javascript
var isJS = minimatch(file, "*.js", { matchBase: true })
```

### minimatch.filter(pattern, options)

Returns a function that tests its supplied argument, suitable for use with `Array.filter`. Example:

```javascript
var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true}))
```

### minimatch.match(list, pattern, options)

Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself.

```javascript
var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})
```

### minimatch.makeRe(pattern, options)

Make a regular expression object from the pattern.
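For example (a small sketch; the exact `RegExp` source varies between minimatch versions, so only the matching behaviour is shown):

```javascript
var minimatch = require("minimatch")

var re = minimatch.makeRe("*.js")
// `re` is a RegExp, or `false` if the pattern is invalid
re.test("index.js")  // true
re.test("index.css") // false
```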
## Options All options are `false` by default. ### debug Dump a ton of stuff to stderr. ### nobrace Do not expand `{a,b}` and `{1..3}` brace sets. ### noglobstar Disable `**` matching against multiple folder names. ### dot Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot. Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot` is set. ### noext Disable "extglob" style patterns like `+(a|b)`. ### nocase Perform a case-insensitive match. ### nonull When a match is not found by `minimatch.match`, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches. ### matchBase If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. ### nocomment Suppress the behavior of treating `#` at the start of a pattern as a comment. ### nonegate Suppress the behavior of treating a leading `!` character as negation. ### flipNegate Returns from negate expressions the same as if they were not negated. (Ie, true on a hit, false on a miss.) ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional. If the pattern starts with a `!` character, then it is negated. Set the `nonegate` flag to suppress this behavior, and treat leading `!` characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like `!(a|B)`. Multiple `!` characters at the start of a pattern will negate the pattern multiple times. If a pattern starts with `#`, then it is treated as a comment, and will not match anything. Use `\#` to match a literal `#` at the start of a line, or set the `nocomment` flag to suppress this behavior. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.1, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. If an escaped pattern has no matches, and the `nonull` flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, `minimatch.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. # lodash.sortby v4.7.0 The [lodash](https://lodash.com/) method `_.sortBy` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.sortby ``` In Node.js: ```js var sortBy = require('lodash.sortby'); ``` See the [documentation](https://lodash.com/docs#sortBy) or [package source](https://github.com/lodash/lodash/blob/4.7.0-npm-packages/lodash.sortby) for more details. 
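For example, sorting a collection by one or more iteratees (this sketch mirrors the upstream lodash documentation for `_.sortBy`; the sort is stable and ascending):

```js
var sortBy = require('lodash.sortby');

var users = [
  { 'user': 'fred',   'age': 48 },
  { 'user': 'barney', 'age': 36 },
  { 'user': 'fred',   'age': 40 },
  { 'user': 'barney', 'age': 34 }
];

// Sort by a single iteratee function...
sortBy(users, function(o) { return o.user; });
// => objects for [['barney', 36], ['barney', 34], ['fred', 48], ['fred', 40]]

// ...or by several property names.
sortBy(users, ['user', 'age']);
// => objects for [['barney', 34], ['barney', 36], ['fred', 40], ['fred', 48]]
```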
# get-caller-file [![Build Status](https://travis-ci.org/stefanpenner/get-caller-file.svg?branch=master)](https://travis-ci.org/stefanpenner/get-caller-file) [![Build status](https://ci.appveyor.com/api/projects/status/ol2q94g1932cy14a/branch/master?svg=true)](https://ci.appveyor.com/project/embercli/get-caller-file/branch/master) This is a utility, which allows a function to figure out from which file it was invoked. It does so by inspecting v8's stack trace at the time it is invoked. Inspired by http://stackoverflow.com/questions/13227489 *note: this relies on Node/V8 specific APIs, as such other runtimes may not work* ## Installation ```bash yarn add get-caller-file ``` ## Usage Given: ```js // ./foo.js const getCallerFile = require('get-caller-file'); module.exports = function() { return getCallerFile(); // figures out who called it }; ``` ```js // index.js const foo = require('./foo'); foo() // => /full/path/to/this/file/index.js ``` ## Options: * `getCallerFile(position = 2)`: where position is stack frame whos fileName we want. binaryen.js =========== **binaryen.js** is a port of [Binaryen](https://github.com/WebAssembly/binaryen) to the Web, allowing you to generate [WebAssembly](https://webassembly.org) using a JavaScript API. <a href="https://github.com/AssemblyScript/binaryen.js/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/binaryen.js/Build/master?label=build&logo=github" alt="Build status" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen.svg?label=latest&color=007acc&logo=npm" alt="npm version" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen/nightly.svg?label=nightly&color=007acc&logo=npm" alt="npm nightly version" /></a> Usage ----- ``` $> npm install binaryen ``` ```js var binaryen = require("binaryen"); // Create a module with a single function var myModule = new binaryen.Module(); myModule.addFunction("add", binaryen.createType([ binaryen.i32, binaryen.i32 ]), binaryen.i32, [ binaryen.i32 ], myModule.block(null, [ myModule.local.set(2, myModule.i32.add( myModule.local.get(0, binaryen.i32), myModule.local.get(1, binaryen.i32) ) ), myModule.return( myModule.local.get(2, binaryen.i32) ) ]) ); myModule.addFunctionExport("add", "add"); // Optimize the module using default passes and levels myModule.optimize(); // Validate the module if (!myModule.validate()) throw new Error("validation error"); // Generate text format and binary var textData = myModule.emitText(); var wasmData = myModule.emitBinary(); // Example usage with the WebAssembly API var compiled = new WebAssembly.Module(wasmData); var instance = new WebAssembly.Instance(compiled, {}); console.log(instance.exports.add(41, 1)); ``` The buildbot also publishes nightly versions once a day if there have been changes. The latest nightly can be installed through ``` $> npm install binaryen@nightly ``` or you can use one of the [previous versions](https://github.com/AssemblyScript/binaryen.js/tags) instead if necessary. 
### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/AssemblyScript/binaryen.js@VERSION/index.js` * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/binaryen@VERSION/index.js` * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/binaryen@VERSION/index.js` Replace `VERSION` with a [specific version](https://github.com/AssemblyScript/binaryen.js/releases) or omit it (not recommended in production) to use master/latest. API --- **Please note** that the Binaryen API is evolving fast and that definitions and documentation provided by the package tend to get out of sync despite our best efforts. It's a bot after all. If you rely on binaryen.js and spot an issue, please consider sending a PR our way by updating [index.d.ts](./index.d.ts) and [README.md](./README.md) to reflect the [current API](https://github.com/WebAssembly/binaryen/blob/master/src/js/binaryen.js-post.js). <!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ### Contents - [Types](#types) - [Module construction](#module-construction) - [Module manipulation](#module-manipulation) - [Module validation](#module-validation) - [Module optimization](#module-optimization) - [Module creation](#module-creation) - [Expression construction](#expression-construction) - [Control flow](#control-flow) - [Variable accesses](#variable-accesses) - [Integer operations](#integer-operations) - [Floating point operations](#floating-point-operations) - [Datatype conversions](#datatype-conversions) - [Function calls](#function-calls) - [Linear memory accesses](#linear-memory-accesses) - [Host operations](#host-operations) - [Vector operations 🦄](#vector-operations-) - [Atomic memory accesses 🦄](#atomic-memory-accesses-) - [Atomic read-modify-write operations 🦄](#atomic-read-modify-write-operations-) - [Atomic wait and notify operations 🦄](#atomic-wait-and-notify-operations-) - [Sign extension operations 🦄](#sign-extension-operations-) - [Multi-value operations 🦄](#multi-value-operations-) - [Exception handling operations 🦄](#exception-handling-operations-) - [Reference types operations 🦄](#reference-types-operations-) - [Expression manipulation](#expression-manipulation) - [Relooper](#relooper) - [Source maps](#source-maps) - [Debugging](#debugging) <!-- END doctoc generated TOC please keep comment here to allow auto update --> [Future features](http://webassembly.org/docs/future-features/) 🦄 might not be supported by all runtimes. ### Types * **none**: `Type`<br /> The none type, e.g., `void`. * **i32**: `Type`<br /> 32-bit integer type. * **i64**: `Type`<br /> 64-bit integer type. * **f32**: `Type`<br /> 32-bit float type. * **f64**: `Type`<br /> 64-bit float (double) type. * **v128**: `Type`<br /> 128-bit vector type. 🦄 * **funcref**: `Type`<br /> A function reference. 🦄 * **anyref**: `Type`<br /> Any host reference. 🦄 * **nullref**: `Type`<br /> A null reference. 🦄 * **exnref**: `Type`<br /> An exception reference. 🦄 * **unreachable**: `Type`<br /> Special type indicating unreachable code when obtaining information about an expression. * **auto**: `Type`<br /> Special type used in **Module#block** exclusively. Lets the API figure out a block's result type automatically. * **createType**(types: `Type[]`): `Type`<br /> Creates a multi-value type from an array of types. 
* **expandType**(type: `Type`): `Type[]`<br /> Expands a multi-value type to an array of types. ### Module construction * new **Module**()<br /> Constructs a new module. * **parseText**(text: `string`): `Module`<br /> Creates a module from Binaryen's s-expression text format (not official stack-style text format). * **readBinary**(data: `Uint8Array`): `Module`<br /> Creates a module from binary data. ### Module manipulation * Module#**addFunction**(name: `string`, params: `Type`, results: `Type`, vars: `Type[]`, body: `ExpressionRef`): `FunctionRef`<br /> Adds a function. `vars` indicate additional locals, in the given order. * Module#**getFunction**(name: `string`): `FunctionRef`<br /> Gets a function, by name, * Module#**removeFunction**(name: `string`): `void`<br /> Removes a function, by name. * Module#**getNumFunctions**(): `number`<br /> Gets the number of functions within the module. * Module#**getFunctionByIndex**(index: `number`): `FunctionRef`<br /> Gets the function at the specified index. * Module#**addFunctionImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, params: `Type`, results: `Type`): `void`<br /> Adds a function import. * Module#**addTableImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a table import. There's just one table for now, using name `"0"`. * Module#**addMemoryImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a memory import. There's just one memory for now, using name `"0"`. * Module#**addGlobalImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, globalType: `Type`): `void`<br /> Adds a global variable import. Imported globals must be immutable. * Module#**addFunctionExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a function export. * Module#**addTableExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a table export. There's just one table for now, using name `"0"`. * Module#**addMemoryExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a memory export. There's just one memory for now, using name `"0"`. * Module#**addGlobalExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a global variable export. Exported globals must be immutable. * Module#**getNumExports**(): `number`<br /> Gets the number of exports witin the module. * Module#**getExportByIndex**(index: `number`): `ExportRef`<br /> Gets the export at the specified index. * Module#**removeExport**(externalName: `string`): `void`<br /> Removes an export, by external name. * Module#**addGlobal**(name: `string`, type: `Type`, mutable: `number`, value: `ExpressionRef`): `GlobalRef`<br /> Adds a global instance variable. * Module#**getGlobal**(name: `string`): `GlobalRef`<br /> Gets a global, by name, * Module#**removeGlobal**(name: `string`): `void`<br /> Removes a global, by name. * Module#**setFunctionTable**(initial: `number`, maximum: `number`, funcs: `string[]`, offset?: `ExpressionRef`): `void`<br /> Sets the contents of the function table. There's just one table for now, using name `"0"`. * Module#**getFunctionTable**(): `{ imported: boolean, segments: TableElement[] }`<br /> Gets the contents of the function table. 
  * TableElement#**offset**: `ExpressionRef`
  * TableElement#**names**: `string[]`
* Module#**setMemory**(initial: `number`, maximum: `number`, exportName: `string | null`, segments: `MemorySegment[]`, flags?: `number[]`, shared?: `boolean`): `void`<br />
  Sets the memory. There's just one memory for now, using name `"0"`. Providing `exportName` also creates a memory export.
  * MemorySegment#**offset**: `ExpressionRef`
  * MemorySegment#**data**: `Uint8Array`
  * MemorySegment#**passive**: `boolean`
* Module#**getNumMemorySegments**(): `number`<br />
  Gets the number of memory segments within the module.
* Module#**getMemorySegmentInfoByIndex**(index: `number`): `MemorySegmentInfo`<br />
  Gets information about the memory segment at the specified index.
  * MemorySegmentInfo#**offset**: `number`
  * MemorySegmentInfo#**data**: `Uint8Array`
  * MemorySegmentInfo#**passive**: `boolean`
* Module#**setStart**(start: `FunctionRef`): `void`<br />
  Sets the start function.
* Module#**getFeatures**(): `Features`<br />
  Gets the WebAssembly features enabled for this module. Note that the return value may be a bitmask indicating multiple features. Possible feature flags are:
  * Features.**MVP**: `Features`
  * Features.**Atomics**: `Features`
  * Features.**BulkMemory**: `Features`
  * Features.**MutableGlobals**: `Features`
  * Features.**NontrappingFPToInt**: `Features`
  * Features.**SignExt**: `Features`
  * Features.**SIMD128**: `Features`
  * Features.**ExceptionHandling**: `Features`
  * Features.**TailCall**: `Features`
  * Features.**ReferenceTypes**: `Features`
  * Features.**Multivalue**: `Features`
  * Features.**All**: `Features`
* Module#**setFeatures**(features: `Features`): `void`<br />
  Sets the WebAssembly features enabled for this module.
* Module#**addCustomSection**(name: `string`, contents: `Uint8Array`): `void`<br />
  Adds a custom section to the binary.
* Module#**autoDrop**(): `void`<br />
  Enables automatic insertion of `drop` operations where needed. Lets you not worry about dropping when creating your code.
* **getFunctionInfo**(ftype: `FunctionRef`): `FunctionInfo`<br />
  Obtains information about a function.
  * FunctionInfo#**name**: `string`
  * FunctionInfo#**module**: `string | null` (if imported)
  * FunctionInfo#**base**: `string | null` (if imported)
  * FunctionInfo#**params**: `Type`
  * FunctionInfo#**results**: `Type`
  * FunctionInfo#**vars**: `Type`
  * FunctionInfo#**body**: `ExpressionRef`
* **getGlobalInfo**(global: `GlobalRef`): `GlobalInfo`<br />
  Obtains information about a global.
  * GlobalInfo#**name**: `string`
  * GlobalInfo#**module**: `string | null` (if imported)
  * GlobalInfo#**base**: `string | null` (if imported)
  * GlobalInfo#**type**: `Type`
  * GlobalInfo#**mutable**: `boolean`
  * GlobalInfo#**init**: `ExpressionRef`
* **getExportInfo**(export_: `ExportRef`): `ExportInfo`<br />
  Obtains information about an export.
  * ExportInfo#**kind**: `ExternalKind`
  * ExportInfo#**name**: `string`
  * ExportInfo#**value**: `string`

  Possible `ExternalKind` values are:
  * **ExternalFunction**: `ExternalKind`
  * **ExternalTable**: `ExternalKind`
  * **ExternalMemory**: `ExternalKind`
  * **ExternalGlobal**: `ExternalKind`
  * **ExternalEvent**: `ExternalKind`
* **getEventInfo**(event: `EventRef`): `EventInfo`<br />
  Obtains information about an event.
* EventInfo#**name**: `string` * EventInfo#**module**: `string | null` (if imported) * EventInfo#**base**: `string | null` (if imported) * EventInfo#**attribute**: `number` * EventInfo#**params**: `Type` * EventInfo#**results**: `Type` * **getSideEffects**(expr: `ExpressionRef`, features: `FeatureFlags`): `SideEffects`<br /> Gets the side effects of the specified expression. * SideEffects.**None**: `SideEffects` * SideEffects.**Branches**: `SideEffects` * SideEffects.**Calls**: `SideEffects` * SideEffects.**ReadsLocal**: `SideEffects` * SideEffects.**WritesLocal**: `SideEffects` * SideEffects.**ReadsGlobal**: `SideEffects` * SideEffects.**WritesGlobal**: `SideEffects` * SideEffects.**ReadsMemory**: `SideEffects` * SideEffects.**WritesMemory**: `SideEffects` * SideEffects.**ImplicitTrap**: `SideEffects` * SideEffects.**IsAtomic**: `SideEffects` * SideEffects.**Throws**: `SideEffects` * SideEffects.**Any**: `SideEffects` ### Module validation * Module#**validate**(): `boolean`<br /> Validates the module. Returns `true` if valid, otherwise prints validation errors and returns `false`. ### Module optimization * Module#**optimize**(): `void`<br /> Optimizes the module using the default optimization passes. * Module#**optimizeFunction**(func: `FunctionRef | string`): `void`<br /> Optimizes a single function using the default optimization passes. * Module#**runPasses**(passes: `string[]`): `void`<br /> Runs the specified passes on the module. * Module#**runPassesOnFunction**(func: `FunctionRef | string`, passes: `string[]`): `void`<br /> Runs the specified passes on a single function. * **getOptimizeLevel**(): `number`<br /> Gets the currently set optimize level. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **setOptimizeLevel**(level: `number`): `void`<br /> Sets the optimization level to use. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **getShrinkLevel**(): `number`<br /> Gets the currently set shrink level. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **setShrinkLevel**(level: `number`): `void`<br /> Sets the shrink level to use. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **getDebugInfo**(): `boolean`<br /> Gets whether generating debug information is currently enabled or not. * **setDebugInfo**(on: `boolean`): `void`<br /> Enables or disables debug information in emitted binaries. * **getLowMemoryUnused**(): `boolean`<br /> Gets whether the low 1K of memory can be considered unused when optimizing. * **setLowMemoryUnused**(on: `boolean`): `void`<br /> Enables or disables whether the low 1K of memory can be considered unused when optimizing. * **getPassArgument**(key: `string`): `string | null`<br /> Gets the value of the specified arbitrary pass argument. * **setPassArgument**(key: `string`, value: `string | null`): `void`<br /> Sets the value of the specified arbitrary pass argument. Removes the respective argument if `value` is `null`. * **clearPassArguments**(): `void`<br /> Clears all arbitrary pass arguments. * **getAlwaysInlineMaxSize**(): `number`<br /> Gets the function size at which we always inline. * **setAlwaysInlineMaxSize**(size: `number`): `void`<br /> Sets the function size at which we always inline. * **getFlexibleInlineMaxSize**(): `number`<br /> Gets the function size which we inline when functions are lightweight. * **setFlexibleInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when functions are lightweight. 
* **getOneCallerInlineMaxSize**(): `number`<br /> Gets the function size which we inline when there is only one caller. * **setOneCallerInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when there is only one caller. ### Module creation * Module#**emitBinary**(): `Uint8Array`<br /> Returns the module in binary format. * Module#**emitBinary**(sourceMapUrl: `string | null`): `BinaryWithSourceMap`<br /> Returns the module in binary format with its source map. If `sourceMapUrl` is `null`, source map generation is skipped. * BinaryWithSourceMap#**binary**: `Uint8Array` * BinaryWithSourceMap#**sourceMap**: `string | null` * Module#**emitText**(): `string`<br /> Returns the module in Binaryen's s-expression text format (not official stack-style text format). * Module#**emitAsmjs**(): `string`<br /> Returns the [asm.js](http://asmjs.org/) representation of the module. * Module#**dispose**(): `void`<br /> Releases the resources held by the module once it isn't needed anymore. ### Expression construction #### [Control flow](http://webassembly.org/docs/semantics/#control-constructs-and-instructions) * Module#**block**(label: `string | null`, children: `ExpressionRef[]`, resultType?: `Type`): `ExpressionRef`<br /> Creates a block. `resultType` defaults to `none`. * Module#**if**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse?: `ExpressionRef`): `ExpressionRef`<br /> Creates an if or if/else combination. * Module#**loop**(label: `string | null`, body: `ExpressionRef`): `ExpressionRef`<br /> Creates a loop. * Module#**br**(label: `string`, condition?: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a branch (br) to a label. * Module#**switch**(labels: `string[]`, defaultLabel: `string`, condition: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a switch (br_table). * Module#**nop**(): `ExpressionRef`<br /> Creates a no-operation (nop) instruction. * Module#**return**(value?: `ExpressionRef`): `ExpressionRef` Creates a return. * Module#**unreachable**(): `ExpressionRef`<br /> Creates an [unreachable](http://webassembly.org/docs/semantics/#unreachable) instruction that will always trap. * Module#**drop**(value: `ExpressionRef`): `ExpressionRef`<br /> Creates a [drop](http://webassembly.org/docs/semantics/#type-parametric-operators) of a value. * Module#**select**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse: `ExpressionRef`, type?: `Type`): `ExpressionRef`<br /> Creates a [select](http://webassembly.org/docs/semantics/#type-parametric-operators) of one of two values. #### [Variable accesses](http://webassembly.org/docs/semantics/#local-variables) * Module#**local.get**(index: `number`, type: `Type`): `ExpressionRef`<br /> Creates a local.get for the local at the specified index. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**local.set**(index: `number`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a local.set for the local at the specified index. * Module#**local.tee**(index: `number`, value: `ExpressionRef`, type: `Type`): `ExpressionRef`<br /> Creates a local.tee for the local at the specified index. A tee differs from a set in that the value remains on the stack. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**global.get**(name: `string`, type: `Type`): `ExpressionRef`<br /> Creates a global.get for the global with the specified name. 
Note that we must specify the type here as we may not have created the global being accessed yet. * Module#**global.set**(name: `string`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a global.set for the global with the specified name. #### [Integer operations](http://webassembly.org/docs/semantics/#32-bit-integer-operators) * Module#i32.**const**(value: `number`): `ExpressionRef` * Module#i32.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i64.**const**(value: `number`): `ExpressionRef` * Module#i64.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * 
Module#i64.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Floating point operations](http://webassembly.org/docs/semantics/#floating-point-operators) * Module#f32.**const**(value: `number`): `ExpressionRef` * Module#f32.**const_bits**(value: `number`): `ExpressionRef` * Module#f32.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#f64.**const**(value: `number`): `ExpressionRef` * 
Module#f64.**const_bits**(value: `number`): `ExpressionRef` * Module#f64.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Datatype conversions](http://webassembly.org/docs/semantics/#datatype-conversions-truncations-reinterpretations-promotions-and-demotions) * Module#i32.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**wrap**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**demote**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**promote**(value: `ExpressionRef`): `ExpressionRef` #### [Function 
calls](http://webassembly.org/docs/semantics/#calls) * Module#**call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef` Creates a call to a function. Note that we must specify the return type here as we may not have created the function being called yet. * Module#**return_call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef`<br /> Like **call**, but creates a tail-call. 🦄 * Module#**call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Similar to **call**, but calls indirectly, i.e., via a function pointer, so an expression replaces the name as the called value. * Module#**return_call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Like **call_indirect**, but creates a tail-call. 🦄 #### [Linear memory accesses](http://webassembly.org/docs/semantics/#linear-memory-accesses) * Module#i32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> > * Module#i64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store32**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Host 
operations](http://webassembly.org/docs/semantics/#resizing) * Module#**memory.size**(): `ExpressionRef` * Module#**memory.grow**(value: `number`): `ExpressionRef` #### [Vector operations](https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md) 🦄 * Module#v128.**const**(bytes: `Uint8Array`): `ExpressionRef` * Module#v128.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#v128.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#v128.**not**(value: `ExpressionRef`): `ExpressionRef` * Module#v128.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**andnot**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**bitselect**(left: `ExpressionRef`, right: `ExpressionRef`, cond: `ExpressionRef`): `ExpressionRef` > * Module#i8x16.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_u**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i16x8.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * 
Module#i16x8.**widen_low_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**dot_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_u**(offset: `number`, 
align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#f32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * 
Module#f64x2.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#v8x16.**shuffle**(left: `ExpressionRef`, right: `ExpressionRef`, mask: `Uint8Array`): `ExpressionRef` * Module#v8x16.**swizzle**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v8x16.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v16x8.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v32x4.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v64x2.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` #### [Atomic memory accesses](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#atomic-memory-accesses) 🦄 * Module#i32.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load32_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * 
Module#i64.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store32**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Atomic read-modify-write operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#read-modify-write) 🦄 * Module#i32.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: 
`ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` #### [Atomic wait and notify operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#wait-and-notify-operators) 🦄 * Module#i32.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#**atomic.notify**(ptr: `ExpressionRef`, notifyCount: `ExpressionRef`): `ExpressionRef` * Module#**atomic.fence**(): `ExpressionRef` #### [Sign extension operations](https://github.com/WebAssembly/sign-extension-ops/blob/master/proposals/sign-extension-ops/Overview.md) 🦄 * Module#i32.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` 
> * Module#i64.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend32_s**(value: `ExpressionRef`): `ExpressionRef` #### [Multi-value operations](https://github.com/WebAssembly/multi-value/blob/master/proposals/multi-value/Overview.md) 🦄 Note that these are pseudo instructions enabling Binaryen to reason about multiple values on the stack. * Module#**push**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**pop**(): `ExpressionRef` * Module#i64.**pop**(): `ExpressionRef` * Module#f32.**pop**(): `ExpressionRef` * Module#f64.**pop**(): `ExpressionRef` * Module#v128.**pop**(): `ExpressionRef` * Module#funcref.**pop**(): `ExpressionRef` * Module#anyref.**pop**(): `ExpressionRef` * Module#nullref.**pop**(): `ExpressionRef` * Module#exnref.**pop**(): `ExpressionRef` * Module#tuple.**make**(elements: `ExpressionRef[]`): `ExpressionRef` * Module#tuple.**extract**(tuple: `ExpressionRef`, index: `number`): `ExpressionRef` #### [Exception handling operations](https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md) 🦄 * Module#**try**(body: `ExpressionRef`, catchBody: `ExpressionRef`): `ExpressionRef` * Module#**throw**(event: `string`, operands: `ExpressionRef[]`): `ExpressionRef` * Module#**rethrow**(exnref: `ExpressionRef`): `ExpressionRef` * Module#**br_on_exn**(label: `string`, event: `string`, exnref: `ExpressionRef`): `ExpressionRef` > * Module#**addEvent**(name: `string`, attribute: `number`, params: `Type`, results: `Type`): `Event` * Module#**getEvent**(name: `string`): `Event` * Module#**removeEvent**(name: `string`): `void` * Module#**addEventImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, attribute: `number`, params: `Type`, results: `Type`): `void` * Module#**addEventExport**(internalName: `string`, externalName: `string`): `ExportRef` #### [Reference types operations](https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md) 🦄 * Module#ref.**null**(): `ExpressionRef` * Module#ref.**is_null**(value: `ExpressionRef`): `ExpressionRef` * Module#ref.**func**(name: `string`): `ExpressionRef` ### Expression manipulation * **getExpressionId**(expr: `ExpressionRef`): `ExpressionId`<br /> Gets the id (kind) of the specified expression. 
Possible values are: * **InvalidId**: `ExpressionId` * **BlockId**: `ExpressionId` * **IfId**: `ExpressionId` * **LoopId**: `ExpressionId` * **BreakId**: `ExpressionId` * **SwitchId**: `ExpressionId` * **CallId**: `ExpressionId` * **CallIndirectId**: `ExpressionId` * **LocalGetId**: `ExpressionId` * **LocalSetId**: `ExpressionId` * **GlobalGetId**: `ExpressionId` * **GlobalSetId**: `ExpressionId` * **LoadId**: `ExpressionId` * **StoreId**: `ExpressionId` * **ConstId**: `ExpressionId` * **UnaryId**: `ExpressionId` * **BinaryId**: `ExpressionId` * **SelectId**: `ExpressionId` * **DropId**: `ExpressionId` * **ReturnId**: `ExpressionId` * **HostId**: `ExpressionId` * **NopId**: `ExpressionId` * **UnreachableId**: `ExpressionId` * **AtomicCmpxchgId**: `ExpressionId` * **AtomicRMWId**: `ExpressionId` * **AtomicWaitId**: `ExpressionId` * **AtomicNotifyId**: `ExpressionId` * **AtomicFenceId**: `ExpressionId` * **SIMDExtractId**: `ExpressionId` * **SIMDReplaceId**: `ExpressionId` * **SIMDShuffleId**: `ExpressionId` * **SIMDTernaryId**: `ExpressionId` * **SIMDShiftId**: `ExpressionId` * **SIMDLoadId**: `ExpressionId` * **MemoryInitId**: `ExpressionId` * **DataDropId**: `ExpressionId` * **MemoryCopyId**: `ExpressionId` * **MemoryFillId**: `ExpressionId` * **RefNullId**: `ExpressionId` * **RefIsNullId**: `ExpressionId` * **RefFuncId**: `ExpressionId` * **TryId**: `ExpressionId` * **ThrowId**: `ExpressionId` * **RethrowId**: `ExpressionId` * **BrOnExnId**: `ExpressionId` * **PushId**: `ExpressionId` * **PopId**: `ExpressionId` * **getExpressionType**(expr: `ExpressionRef`): `Type`<br /> Gets the type of the specified expression. * **getExpressionInfo**(expr: `ExpressionRef`): `ExpressionInfo`<br /> Obtains information about an expression, always including: * Info#**id**: `ExpressionId` * Info#**type**: `Type` Additional properties depend on the expression's `id` and are usually equivalent to the respective parameters when creating such an expression: * BlockInfo#**name**: `string` * BlockInfo#**children**: `ExpressionRef[]` > * IfInfo#**condition**: `ExpressionRef` * IfInfo#**ifTrue**: `ExpressionRef` * IfInfo#**ifFalse**: `ExpressionRef | null` > * LoopInfo#**name**: `string` * LoopInfo#**body**: `ExpressionRef` > * BreakInfo#**name**: `string` * BreakInfo#**condition**: `ExpressionRef | null` * BreakInfo#**value**: `ExpressionRef | null` > * SwitchInfo#**names**: `string[]` * SwitchInfo#**defaultName**: `string | null` * SwitchInfo#**condition**: `ExpressionRef` * SwitchInfo#**value**: `ExpressionRef | null` > * CallInfo#**target**: `string` * CallInfo#**operands**: `ExpressionRef[]` > * CallImportInfo#**target**: `string` * CallImportInfo#**operands**: `ExpressionRef[]` > * CallIndirectInfo#**target**: `ExpressionRef` * CallIndirectInfo#**operands**: `ExpressionRef[]` > * LocalGetInfo#**index**: `number` > * LocalSetInfo#**isTee**: `boolean` * LocalSetInfo#**index**: `number` * LocalSetInfo#**value**: `ExpressionRef` > * GlobalGetInfo#**name**: `string` > * GlobalSetInfo#**name**: `string` * GlobalSetInfo#**value**: `ExpressionRef` > * LoadInfo#**isAtomic**: `boolean` * LoadInfo#**isSigned**: `boolean` * LoadInfo#**offset**: `number` * LoadInfo#**bytes**: `number` * LoadInfo#**align**: `number` * LoadInfo#**ptr**: `ExpressionRef` > * StoreInfo#**isAtomic**: `boolean` * StoreInfo#**offset**: `number` * StoreInfo#**bytes**: `number` * StoreInfo#**align**: `number` * StoreInfo#**ptr**: `ExpressionRef` * StoreInfo#**value**: `ExpressionRef` > * ConstInfo#**value**: `number | { low: number, high: number 
}` > * UnaryInfo#**op**: `number` * UnaryInfo#**value**: `ExpressionRef` > * BinaryInfo#**op**: `number` * BinaryInfo#**left**: `ExpressionRef` * BinaryInfo#**right**: `ExpressionRef` > * SelectInfo#**ifTrue**: `ExpressionRef` * SelectInfo#**ifFalse**: `ExpressionRef` * SelectInfo#**condition**: `ExpressionRef` > * DropInfo#**value**: `ExpressionRef` > * ReturnInfo#**value**: `ExpressionRef | null` > * NopInfo > * UnreachableInfo > * HostInfo#**op**: `number` * HostInfo#**nameOperand**: `string | null` * HostInfo#**operands**: `ExpressionRef[]` > * AtomicRMWInfo#**op**: `number` * AtomicRMWInfo#**bytes**: `number` * AtomicRMWInfo#**offset**: `number` * AtomicRMWInfo#**ptr**: `ExpressionRef` * AtomicRMWInfo#**value**: `ExpressionRef` > * AtomicCmpxchgInfo#**bytes**: `number` * AtomicCmpxchgInfo#**offset**: `number` * AtomicCmpxchgInfo#**ptr**: `ExpressionRef` * AtomicCmpxchgInfo#**expected**: `ExpressionRef` * AtomicCmpxchgInfo#**replacement**: `ExpressionRef` > * AtomicWaitInfo#**ptr**: `ExpressionRef` * AtomicWaitInfo#**expected**: `ExpressionRef` * AtomicWaitInfo#**timeout**: `ExpressionRef` * AtomicWaitInfo#**expectedType**: `Type` > * AtomicNotifyInfo#**ptr**: `ExpressionRef` * AtomicNotifyInfo#**notifyCount**: `ExpressionRef` > * AtomicFenceInfo > * SIMDExtractInfo#**op**: `Op` * SIMDExtractInfo#**vec**: `ExpressionRef` * SIMDExtractInfo#**index**: `ExpressionRef` > * SIMDReplaceInfo#**op**: `Op` * SIMDReplaceInfo#**vec**: `ExpressionRef` * SIMDReplaceInfo#**index**: `ExpressionRef` * SIMDReplaceInfo#**value**: `ExpressionRef` > * SIMDShuffleInfo#**left**: `ExpressionRef` * SIMDShuffleInfo#**right**: `ExpressionRef` * SIMDShuffleInfo#**mask**: `Uint8Array` > * SIMDTernaryInfo#**op**: `Op` * SIMDTernaryInfo#**a**: `ExpressionRef` * SIMDTernaryInfo#**b**: `ExpressionRef` * SIMDTernaryInfo#**c**: `ExpressionRef` > * SIMDShiftInfo#**op**: `Op` * SIMDShiftInfo#**vec**: `ExpressionRef` * SIMDShiftInfo#**shift**: `ExpressionRef` > * SIMDLoadInfo#**op**: `Op` * SIMDLoadInfo#**offset**: `number` * SIMDLoadInfo#**align**: `number` * SIMDLoadInfo#**ptr**: `ExpressionRef` > * MemoryInitInfo#**segment**: `number` * MemoryInitInfo#**dest**: `ExpressionRef` * MemoryInitInfo#**offset**: `ExpressionRef` * MemoryInitInfo#**size**: `ExpressionRef` > * MemoryDropInfo#**segment**: `number` > * MemoryCopyInfo#**dest**: `ExpressionRef` * MemoryCopyInfo#**source**: `ExpressionRef` * MemoryCopyInfo#**size**: `ExpressionRef` > * MemoryFillInfo#**dest**: `ExpressionRef` * MemoryFillInfo#**value**: `ExpressionRef` * MemoryFillInfo#**size**: `ExpressionRef` > * TryInfo#**body**: `ExpressionRef` * TryInfo#**catchBody**: `ExpressionRef` > * RefNullInfo > * RefIsNullInfo#**value**: `ExpressionRef` > * RefFuncInfo#**func**: `string` > * ThrowInfo#**event**: `string` * ThrowInfo#**operands**: `ExpressionRef[]` > * RethrowInfo#**exnref**: `ExpressionRef` > * BrOnExnInfo#**name**: `string` * BrOnExnInfo#**event**: `string` * BrOnExnInfo#**exnref**: `ExpressionRef` > * PopInfo > * PushInfo#**value**: `ExpressionRef` * **emitText**(expression: `ExpressionRef`): `string`<br /> Emits the expression in Binaryen's s-expression text format (not official stack-style text format). * **copyExpression**(expression: `ExpressionRef`): `ExpressionRef`<br /> Creates a deep copy of an expression. ### Relooper * new **Relooper**()<br /> Constructs a relooper instance. This lets you provide an arbitrary CFG, and the relooper will structure it for WebAssembly. 
* Relooper#**addBlock**(code: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block to the CFG, containing the provided code as its body. * Relooper#**addBranch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, condition: `ExpressionRef`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block to another block, with a condition (or nothing, if this is the default branch to take from the origin - each block must have one such branch), and optional code to execute on the branch (useful for phis). * Relooper#**addBlockWithSwitch**(code: `ExpressionRef`, condition: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block, which ends with a switch/br_table, with provided code and condition (that determines where we go in the switch). * Relooper#**addBranchForSwitch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, indexes: `number[]`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block ending in a switch, to another block, using an array of indexes that determine where to go, and optional code to execute on the branch. * Relooper#**renderAndDispose**(entry: `RelooperBlockRef`, labelHelper: `number`, module: `Module`): `ExpressionRef`<br /> Renders and cleans up the Relooper instance. Call this after you have created all the blocks and branches, giving it the entry block (where control flow begins), a label helper variable (an index of a local we can use, necessary for irreducible control flow), and the module. This returns an expression - normal WebAssembly code - that you can use normally anywhere. ### Source maps * Module#**addDebugInfoFileName**(filename: `string`): `number`<br /> Adds a debug info file name to the module and returns its index. * Module#**getDebugInfoFileName**(index: `number`): `string | null` <br /> Gets the name of the debug info file at the specified index. * Module#**setDebugLocation**(func: `FunctionRef`, expr: `ExpressionRef`, fileIndex: `number`, lineNumber: `number`, columnNumber: `number`): `void`<br /> Sets the debug location of the specified `ExpressionRef` within the specified `FunctionRef`. ### Debugging * Module#**interpret**(): `void`<br /> Runs the module in the interpreter, calling the start function. The AssemblyScript Runtime ========================== The runtime provides the functionality necessary to dynamically allocate and deallocate memory of objects, arrays and buffers, as well as collect garbage that is no longer used. The current implementation is either a Two-Color Mark & Sweep (TCMS) garbage collector that must be called manually when the execution stack is unwound or an Incremental Tri-Color Mark & Sweep (ITCMS) garbage collector that is fully automated with a shadow stack, implemented on top of a Two-Level Segregate Fit (TLSF) memory manager. It's not designed to be the fastest of its kind, but intentionally focuses on simplicity and ease of integration until we can replace it with the real deal, i.e. Wasm GC. Interface --------- ### Garbage collector / `--exportRuntime` * **__new**(size: `usize`, id: `u32` = 0): `usize`<br /> Dynamically allocates a GC object of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. GC-allocated objects cannot be used with `__realloc` and `__free`. * **__pin**(ptr: `usize`): `usize`<br /> Pins the object pointed to by `ptr` externally so it and its directly reachable members and indirectly reachable objects do not become garbage collected. 
* **__unpin**(ptr: `usize`): `void`<br /> Unpins the object pointed to by `ptr` externally so it can become garbage collected. * **__collect**(): `void`<br /> Performs a full garbage collection. ### Internals * **__alloc**(size: `usize`): `usize`<br /> Dynamically allocates a chunk of memory of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. * **__realloc**(ptr: `usize`, size: `usize`): `usize`<br /> Dynamically changes the size of a chunk of memory, possibly moving it to a new address. * **__free**(ptr: `usize`): `void`<br /> Frees a dynamically allocated chunk of memory by its address. * **__renew**(ptr: `usize`, size: `usize`): `usize`<br /> Like `__realloc`, but for `__new`ed GC objects. * **__link**(parentPtr: `usize`, childPtr: `usize`, expectMultiple: `bool`): `void`<br /> Introduces a link from a parent object to a child object, i.e. upon `parent.field = child`. * **__visit**(ptr: `usize`, cookie: `u32`): `void`<br /> Concrete visitor implementation called during traversal. The cookie can be used to indicate one of multiple operations. * **__visit_globals**(cookie: `u32`): `void`<br /> Calls `__visit` on each global that is of a managed type. * **__visit_members**(ptr: `usize`, cookie: `u32`): `void`<br /> Calls `__visit` on each member of the object pointed to by `ptr`. * **__typeinfo**(id: `u32`): `RTTIFlags`<br /> Obtains the runtime type information for objects with the specified runtime id. Runtime type information is a set of flags indicating whether a type is managed, an array or similar, and what the relevant alignments are when creating an instance externally, etc. * **__instanceof**(ptr: `usize`, classId: `u32`): `bool`<br /> Tests if the object pointed to by `ptr` is an instance of the specified class id. ITCMS / `--runtime incremental` ----- The Incremental Tri-Color Mark & Sweep garbage collector maintains a separate shadow stack of managed values in the background to achieve full automation. Maintaining another stack introduces some overhead compared to the simpler Two-Color Mark & Sweep garbage collector, but makes it independent of whether the execution stack is unwound or not when it is invoked, so the garbage collector can run interleaved with the program. There are several constants one can experiment with to tweak ITCMS's automation: * `--use ASC_GC_GRANULARITY=1024`<br /> How often to interrupt. The default of 1024 means "interrupt every 1024 bytes allocated". * `--use ASC_GC_STEPFACTOR=200`<br /> How long to interrupt. The default of 200% means "run at double the speed of allocations". * `--use ASC_GC_IDLEFACTOR=200`<br /> How long to idle. The default of 200% means "wait for memory to double before kicking in again". * `--use ASC_GC_MARKCOST=1`<br /> How costly it is to mark one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. * `--use ASC_GC_SWEEPCOST=10`<br /> How costly it is to sweep one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. TCMS / `--runtime minimal` ---- If automation and low pause times aren't strictly necessary, it may be more performant to use the Two-Color Mark & Sweep garbage collector instead and invoke collection manually at appropriate times when the execution stack is unwound, as it is simpler and has less overhead. The execution stack is typically unwound when invoking the collector externally, at a place that is not indirectly called from Wasm. 
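To make the TCMS workflow concrete, here is a minimal sketch (not part of the official documentation) of a JS host driving the collector manually, assuming the module was compiled with `--runtime minimal --exportRuntime`; the file path `build/module.wasm` and the `doWork` export are hypothetical:

```js
// Minimal sketch: drive the TCMS collector from the host.
// Assumes compilation with `--runtime minimal --exportRuntime`, which exports
// __new/__pin/__unpin/__collect alongside the module's own exports.
const fs = require("fs");

async function main() {
  const wasm = fs.readFileSync("build/module.wasm"); // hypothetical path
  const { instance } = await WebAssembly.instantiate(wasm, {
    env: { abort: () => { throw new Error("abort called"); } }
  });
  const { doWork, __collect } = instance.exports; // `doWork` is a hypothetical export

  doWork();    // call into Wasm; the execution stack is unwound once this returns
  __collect(); // safe point: collect garbage while nothing is live on the Wasm stack
}

main();
```

With `--runtime incremental` (ITCMS) the same exports are available, but the explicit `__collect()` call becomes optional, since collection runs interleaved with the program.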
STUB / `--runtime stub` ---- The stub is a maximally minimal runtime substitute, consisting of a simple and fast bump allocator with no means of freeing up memory again, except when freeing the respective most recently allocated object on top of the bump. Useful where memory is not a concern, and/or where it is sufficient to destroy the whole module including any potential garbage after execution. See also: [Garbage collection](https://www.assemblyscript.org/garbage-collection.html) # axios [![npm version](https://img.shields.io/npm/v/axios.svg?style=flat-square)](https://www.npmjs.org/package/axios) [![build status](https://img.shields.io/travis/axios/axios/master.svg?style=flat-square)](https://travis-ci.org/axios/axios) [![code coverage](https://img.shields.io/coveralls/mzabriskie/axios.svg?style=flat-square)](https://coveralls.io/r/mzabriskie/axios) [![install size](https://packagephobia.now.sh/badge?p=axios)](https://packagephobia.now.sh/result?p=axios) [![npm downloads](https://img.shields.io/npm/dm/axios.svg?style=flat-square)](http://npm-stat.com/charts.html?package=axios) [![gitter chat](https://img.shields.io/gitter/room/mzabriskie/axios.svg?style=flat-square)](https://gitter.im/mzabriskie/axios) [![code helpers](https://www.codetriage.com/axios/axios/badges/users.svg)](https://www.codetriage.com/axios/axios) Promise based HTTP client for the browser and node.js ## Features - Make [XMLHttpRequests](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) from the browser - Make [http](http://nodejs.org/api/http.html) requests from node.js - Supports the [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) API - Intercept request and response - Transform request and response data - Cancel requests - Automatic transforms for JSON data - Client side support for protecting against [XSRF](http://en.wikipedia.org/wiki/Cross-site_request_forgery) ## Browser Support ![Chrome](https://raw.github.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.github.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![Safari](https://raw.github.com/alrra/browser-logos/master/src/safari/safari_48x48.png) | ![Opera](https://raw.github.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Edge](https://raw.github.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![IE](https://raw.github.com/alrra/browser-logos/master/src/archive/internet-explorer_9-11/internet-explorer_9-11_48x48.png) | --- | --- | --- | --- | --- | --- | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | 11 ✔ | [![Browser Matrix](https://saucelabs.com/open_sauce/build_matrix/axios.svg)](https://saucelabs.com/u/axios) ## Installing Using npm: ```bash $ npm install axios ``` Using bower: ```bash $ bower install axios ``` Using yarn: ```bash $ yarn add axios ``` Using cdn: ```html <script src="https://unpkg.com/axios/dist/axios.min.js"></script> ``` ## Example ### note: CommonJS usage In order to gain the TypeScript typings (for intellisense / autocomplete) while using CommonJS imports with `require()` use the following approach: ```js const axios = require('axios').default; // axios.<method> will now provide autocomplete and parameter typings ``` Performing a `GET` request ```js const axios = require('axios'); // Make a request for a user with a given ID axios.get('/user?ID=12345') .then(function (response) { // handle success console.log(response); }) .catch(function (error) { // handle error console.log(error); }) 
.finally(function () { // always executed }); // Optionally the request above could also be done as axios.get('/user', { params: { ID: 12345 } }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }) .finally(function () { // always executed }); // Want to use async/await? Add the `async` keyword to your outer function/method. async function getUser() { try { const response = await axios.get('/user?ID=12345'); console.log(response); } catch (error) { console.error(error); } } ``` > **NOTE:** `async/await` is part of ECMAScript 2017 and is not supported in Internet > Explorer and older browsers, so use with caution. Performing a `POST` request ```js axios.post('/user', { firstName: 'Fred', lastName: 'Flintstone' }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }); ``` Performing multiple concurrent requests ```js function getUserAccount() { return axios.get('/user/12345'); } function getUserPermissions() { return axios.get('/user/12345/permissions'); } axios.all([getUserAccount(), getUserPermissions()]) .then(axios.spread(function (acct, perms) { // Both requests are now complete })); ``` ## axios API Requests can be made by passing the relevant config to `axios`. ##### axios(config) ```js // Send a POST request axios({ method: 'post', url: '/user/12345', data: { firstName: 'Fred', lastName: 'Flintstone' } }); ``` ```js // GET request for remote image axios({ method: 'get', url: 'http://bit.ly/2mTM3nY', responseType: 'stream' }) .then(function (response) { response.data.pipe(fs.createWriteStream('ada_lovelace.jpg')) }); ``` ##### axios(url[, config]) ```js // Send a GET request (default method) axios('/user/12345'); ``` ### Request method aliases For convenience aliases have been provided for all supported request methods. ##### axios.request(config) ##### axios.get(url[, config]) ##### axios.delete(url[, config]) ##### axios.head(url[, config]) ##### axios.options(url[, config]) ##### axios.post(url[, data[, config]]) ##### axios.put(url[, data[, config]]) ##### axios.patch(url[, data[, config]]) ###### NOTE When using the alias methods `url`, `method`, and `data` properties don't need to be specified in config. ### Concurrency Helper functions for dealing with concurrent requests. ##### axios.all(iterable) ##### axios.spread(callback) ### Creating an instance You can create a new instance of axios with a custom config. ##### axios.create([config]) ```js const instance = axios.create({ baseURL: 'https://some-domain.com/api/', timeout: 1000, headers: {'X-Custom-Header': 'foobar'} }); ``` ### Instance methods The available instance methods are listed below. The specified config will be merged with the instance config. ##### axios#request(config) ##### axios#get(url[, config]) ##### axios#delete(url[, config]) ##### axios#head(url[, config]) ##### axios#options(url[, config]) ##### axios#post(url[, data[, config]]) ##### axios#put(url[, data[, config]]) ##### axios#patch(url[, data[, config]]) ##### axios#getUri([config]) ## Request Config These are the available config options for making requests. Only the `url` is required. Requests will default to `GET` if `method` is not specified. ```js { // `url` is the server URL that will be used for the request url: '/user', // `method` is the request method to be used when making the request method: 'get', // default // `baseURL` will be prepended to `url` unless `url` is absolute. 
// It can be convenient to set `baseURL` for an instance of axios to pass relative URLs // to methods of that instance. baseURL: 'https://some-domain.com/api/', // `transformRequest` allows changes to the request data before it is sent to the server // This is only applicable for request methods 'PUT', 'POST', 'PATCH' and 'DELETE' // The last function in the array must return a string or an instance of Buffer, ArrayBuffer, // FormData or Stream // You may modify the headers object. transformRequest: [function (data, headers) { // Do whatever you want to transform the data return data; }], // `transformResponse` allows changes to the response data to be made before // it is passed to then/catch transformResponse: [function (data) { // Do whatever you want to transform the data return data; }], // `headers` are custom headers to be sent headers: {'X-Requested-With': 'XMLHttpRequest'}, // `params` are the URL parameters to be sent with the request // Must be a plain object or a URLSearchParams object params: { ID: 12345 }, // `paramsSerializer` is an optional function in charge of serializing `params` // (e.g. https://www.npmjs.com/package/qs, http://api.jquery.com/jquery.param/) paramsSerializer: function (params) { return Qs.stringify(params, {arrayFormat: 'brackets'}) }, // `data` is the data to be sent as the request body // Only applicable for request methods 'PUT', 'POST', and 'PATCH' // When no `transformRequest` is set, must be of one of the following types: // - string, plain object, ArrayBuffer, ArrayBufferView, URLSearchParams // - Browser only: FormData, File, Blob // - Node only: Stream, Buffer data: { firstName: 'Fred' }, // syntax alternative to send data into the body // method post // only the value is sent, not the key data: 'Country=Brasil&City=Belo Horizonte', // `timeout` specifies the number of milliseconds before the request times out. // If the request takes longer than `timeout`, the request will be aborted. timeout: 1000, // default is `0` (no timeout) // `withCredentials` indicates whether or not cross-site Access-Control requests // should be made using credentials withCredentials: false, // default // `adapter` allows custom handling of requests which makes testing easier. // Return a promise and supply a valid response (see lib/adapters/README.md). adapter: function (config) { /* ... */ }, // `auth` indicates that HTTP Basic auth should be used, and supplies credentials. // This will set an `Authorization` header, overwriting any existing // `Authorization` custom headers you have set using `headers`. // Please note that only HTTP Basic auth is configurable through this parameter. // For Bearer tokens and such, use `Authorization` custom headers instead. 
auth: { username: 'janedoe', password: 's00pers3cret' }, // `responseType` indicates the type of data that the server will respond with // options are: 'arraybuffer', 'document', 'json', 'text', 'stream' // browser only: 'blob' responseType: 'json', // default // `responseEncoding` indicates encoding to use for decoding responses // Note: Ignored for `responseType` of 'stream' or client-side requests responseEncoding: 'utf8', // default // `xsrfCookieName` is the name of the cookie to use as a value for xsrf token xsrfCookieName: 'XSRF-TOKEN', // default // `xsrfHeaderName` is the name of the http header that carries the xsrf token value xsrfHeaderName: 'X-XSRF-TOKEN', // default // `onUploadProgress` allows handling of progress events for uploads onUploadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `onDownloadProgress` allows handling of progress events for downloads onDownloadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `maxContentLength` defines the max size of the http response content in bytes allowed maxContentLength: 2000, // `validateStatus` defines whether to resolve or reject the promise for a given // HTTP response status code. If `validateStatus` returns `true` (or is set to `null` // or `undefined`), the promise will be resolved; otherwise, the promise will be // rejected. validateStatus: function (status) { return status >= 200 && status < 300; // default }, // `maxRedirects` defines the maximum number of redirects to follow in node.js. // If set to 0, no redirects will be followed. maxRedirects: 5, // default // `socketPath` defines a UNIX Socket to be used in node.js. // e.g. '/var/run/docker.sock' to send requests to the docker daemon. // Only either `socketPath` or `proxy` can be specified. // If both are specified, `socketPath` is used. socketPath: null, // default // `httpAgent` and `httpsAgent` define a custom agent to be used when performing http // and https requests, respectively, in node.js. This allows options to be added like // `keepAlive` that are not enabled by default. httpAgent: new http.Agent({ keepAlive: true }), httpsAgent: new https.Agent({ keepAlive: true }), // 'proxy' defines the hostname and port of the proxy server. // You can also define your proxy using the conventional `http_proxy` and // `https_proxy` environment variables. If you are using environment variables // for your proxy configuration, you can also define a `no_proxy` environment // variable as a comma-separated list of domains that should not be proxied. // Use `false` to disable proxies, ignoring environment variables. // `auth` indicates that HTTP Basic auth should be used to connect to the proxy, and // supplies credentials. // This will set an `Proxy-Authorization` header, overwriting any existing // `Proxy-Authorization` custom headers you have set using `headers`. proxy: { host: '127.0.0.1', port: 9000, auth: { username: 'mikeymike', password: 'rapunz3l' } }, // `cancelToken` specifies a cancel token that can be used to cancel the request // (see Cancellation section below for details) cancelToken: new CancelToken(function (cancel) { }) } ``` ## Response Schema The response for a request contains the following information. 
```js { // `data` is the response that was provided by the server data: {}, // `status` is the HTTP status code from the server response status: 200, // `statusText` is the HTTP status message from the server response statusText: 'OK', // `headers` the headers that the server responded with // All header names are lower cased headers: {}, // `config` is the config that was provided to `axios` for the request config: {}, // `request` is the request that generated this response // It is the last ClientRequest instance in node.js (in redirects) // and an XMLHttpRequest instance in the browser request: {} } ``` When using `then`, you will receive the response as follows: ```js axios.get('/user/12345') .then(function (response) { console.log(response.data); console.log(response.status); console.log(response.statusText); console.log(response.headers); console.log(response.config); }); ``` When using `catch`, or passing a [rejection callback](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/then) as second parameter of `then`, the response will be available through the `error` object as explained in the [Handling Errors](#handling-errors) section. ## Config Defaults You can specify config defaults that will be applied to every request. ### Global axios defaults ```js axios.defaults.baseURL = 'https://api.example.com'; axios.defaults.headers.common['Authorization'] = AUTH_TOKEN; axios.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded'; ``` ### Custom instance defaults ```js // Set config defaults when creating the instance const instance = axios.create({ baseURL: 'https://api.example.com' }); // Alter defaults after instance has been created instance.defaults.headers.common['Authorization'] = AUTH_TOKEN; ``` ### Config order of precedence Config will be merged with an order of precedence. The order is library defaults found in [lib/defaults.js](https://github.com/axios/axios/blob/master/lib/defaults.js#L28), then `defaults` property of the instance, and finally `config` argument for the request. The latter will take precedence over the former. Here's an example. ```js // Create an instance using the config defaults provided by the library // At this point the timeout config value is `0` as is the default for the library const instance = axios.create(); // Override timeout default for the library // Now all requests using this instance will wait 2.5 seconds before timing out instance.defaults.timeout = 2500; // Override timeout for this request as it's known to take a long time instance.get('/longRequest', { timeout: 5000 }); ``` ## Interceptors You can intercept requests or responses before they are handled by `then` or `catch`. ```js // Add a request interceptor axios.interceptors.request.use(function (config) { // Do something before request is sent return config; }, function (error) { // Do something with request error return Promise.reject(error); }); // Add a response interceptor axios.interceptors.response.use(function (response) { // Any status code that lie within the range of 2xx cause this function to trigger // Do something with response data return response; }, function (error) { // Any status codes that falls outside the range of 2xx cause this function to trigger // Do something with response error return Promise.reject(error); }); ``` If you need to remove an interceptor later you can. 
```js const myInterceptor = axios.interceptors.request.use(function () {/*...*/}); axios.interceptors.request.eject(myInterceptor); ``` You can add interceptors to a custom instance of axios. ```js const instance = axios.create(); instance.interceptors.request.use(function () {/*...*/}); ``` ## Handling Errors ```js axios.get('/user/12345') .catch(function (error) { if (error.response) { // The request was made and the server responded with a status code // that falls out of the range of 2xx console.log(error.response.data); console.log(error.response.status); console.log(error.response.headers); } else if (error.request) { // The request was made but no response was received // `error.request` is an instance of XMLHttpRequest in the browser and an instance of // http.ClientRequest in node.js console.log(error.request); } else { // Something happened in setting up the request that triggered an Error console.log('Error', error.message); } console.log(error.config); }); ``` Using the `validateStatus` config option, you can define HTTP code(s) that should throw an error. ```js axios.get('/user/12345', { validateStatus: function (status) { return status < 500; // Reject only if the status code is greater than or equal to 500 } }) ``` Using `toJSON` you get an object with more information about the HTTP error. ```js axios.get('/user/12345') .catch(function (error) { console.log(error.toJSON()); }); ``` ## Cancellation You can cancel a request using a *cancel token*. > The axios cancel token API is based on the withdrawn [cancelable promises proposal](https://github.com/tc39/proposal-cancelable-promises). You can create a cancel token using the `CancelToken.source` factory as shown below: ```js const CancelToken = axios.CancelToken; const source = CancelToken.source(); axios.get('/user/12345', { cancelToken: source.token }).catch(function (thrown) { if (axios.isCancel(thrown)) { console.log('Request canceled', thrown.message); } else { // handle error } }); axios.post('/user/12345', { name: 'new name' }, { cancelToken: source.token }) // cancel the request (the message parameter is optional) source.cancel('Operation canceled by the user.'); ``` You can also create a cancel token by passing an executor function to the `CancelToken` constructor: ```js const CancelToken = axios.CancelToken; let cancel; axios.get('/user/12345', { cancelToken: new CancelToken(function executor(c) { // An executor function receives a cancel function as a parameter cancel = c; }) }); // cancel the request cancel(); ``` > Note: you can cancel several requests with the same cancel token. ## Using application/x-www-form-urlencoded format By default, axios serializes JavaScript objects to `JSON`. To send data in the `application/x-www-form-urlencoded` format instead, you can use one of the following options. ### Browser In a browser, you can use the [`URLSearchParams`](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams) API as follows: ```js const params = new URLSearchParams(); params.append('param1', 'value1'); params.append('param2', 'value2'); axios.post('/foo', params); ``` > Note that `URLSearchParams` is not supported by all browsers (see [caniuse.com](http://www.caniuse.com/#feat=urlsearchparams)), but there is a [polyfill](https://github.com/WebReflection/url-search-params) available (make sure to polyfill the global environment). 
Alternatively, you can encode data using the [`qs`](https://github.com/ljharb/qs) library: ```js const qs = require('qs'); axios.post('/foo', qs.stringify({ 'bar': 123 })); ``` Or in another way (ES6), ```js import qs from 'qs'; const data = { 'bar': 123 }; const options = { method: 'POST', headers: { 'content-type': 'application/x-www-form-urlencoded' }, data: qs.stringify(data), url, }; axios(options); ``` ### Node.js In node.js, you can use the [`querystring`](https://nodejs.org/api/querystring.html) module as follows: ```js const querystring = require('querystring'); axios.post('http://something.com/', querystring.stringify({ foo: 'bar' })); ``` You can also use the [`qs`](https://github.com/ljharb/qs) library. ###### NOTE The `qs` library is preferable if you need to stringify nested objects, as the `querystring` method has known issues with that use case (https://github.com/nodejs/node-v0.x-archive/issues/1665). ## Semver Until axios reaches a `1.0` release, breaking changes will be released with a new minor version. For example `0.5.1`, and `0.5.4` will have the same API, but `0.6.0` will have breaking changes. ## Promises axios depends on a native ES6 Promise implementation to be [supported](http://caniuse.com/promises). If your environment doesn't support ES6 Promises, you can [polyfill](https://github.com/jakearchibald/es6-promise). ## TypeScript axios includes [TypeScript](http://typescriptlang.org) definitions. ```typescript import axios from 'axios'; axios.get('/user?ID=12345'); ``` ## Resources * [Changelog](https://github.com/axios/axios/blob/master/CHANGELOG.md) * [Upgrade Guide](https://github.com/axios/axios/blob/master/UPGRADE_GUIDE.md) * [Ecosystem](https://github.com/axios/axios/blob/master/ECOSYSTEM.md) * [Contributing Guide](https://github.com/axios/axios/blob/master/CONTRIBUTING.md) * [Code of Conduct](https://github.com/axios/axios/blob/master/CODE_OF_CONDUCT.md) ## Credits axios is heavily inspired by the [$http service](https://docs.angularjs.org/api/ng/service/$http) provided in [Angular](https://angularjs.org/). Ultimately axios is an effort to provide a standalone `$http`-like service for use outside of Angular. ## License [MIT](LICENSE)
JIC1816_Near-rent-a-car
near-rent-a-car-app README.md package.json public index.html manifest.json robots.txt src App.css App.js App.test.js components Wallet.js marketplace AddCar.js Car.js Cars.js utils Cover.js Loader.js Notifications.js index.css index.js logo.svg reportWebVitals.js setupTests.js utils config.js marketplace.js near.js near-rent-a-car-contract asconfig.json assembly as_types.d.ts index.ts model.ts tsconfig.json node_modules @as-covers assembly CONTRIBUTING.md README.md index.ts package.json tsconfig.json core CONTRIBUTING.md README.md package.json glue README.md lib index.d.ts index.js package.json transform README.md lib index.d.ts index.js util.d.ts util.js node_modules visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js toString.d.ts toString.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformRange.d.ts transformRange.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json package.json @as-pect assembly README.md assembly index.ts internal Actual.ts Expectation.ts Expected.ts Reflect.ts ReflectedValueType.ts Test.ts assert.ts call.ts comparison toIncludeComparison.ts toIncludeEqualComparison.ts log.ts noOp.ts package.json types as-pect.d.ts as-pect.portable.d.ts env.d.ts cli README.md init as-pect.config.js env.d.ts example.spec.ts init-types.d.ts portable-types.d.ts lib as-pect.cli.amd.d.ts as-pect.cli.amd.js help.d.ts help.js index.d.ts index.js init.d.ts init.js portable.d.ts portable.js run.d.ts run.js test.d.ts test.js types.d.ts types.js util CommandLineArg.d.ts CommandLineArg.js IConfiguration.d.ts IConfiguration.js asciiArt.d.ts asciiArt.js collectReporter.d.ts collectReporter.js getTestEntryFiles.d.ts getTestEntryFiles.js removeFile.d.ts removeFile.js strings.d.ts strings.js writeFile.d.ts writeFile.js worklets ICommand.d.ts ICommand.js compiler.d.ts compiler.js package.json core README.md lib as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js reporter CombinationReporter.d.ts CombinationReporter.js EmptyReporter.d.ts EmptyReporter.js IReporter.d.ts IReporter.js SummaryReporter.d.ts SummaryReporter.js VerboseReporter.d.ts VerboseReporter.js test IWarning.d.ts IWarning.js TestContext.d.ts TestContext.js TestNode.d.ts TestNode.js transform assemblyscript.d.ts assemblyscript.js createAddReflectedValueKeyValuePairsMember.d.ts createAddReflectedValueKeyValuePairsMember.js createGenericTypeParameter.d.ts createGenericTypeParameter.js createStrictEqualsMember.d.ts createStrictEqualsMember.js emptyTransformer.d.ts emptyTransformer.js hash.d.ts hash.js index.d.ts index.js util IAspectExports.d.ts IAspectExports.js IWriteable.d.ts IWriteable.js ReflectedValue.d.ts ReflectedValue.js TestNodeType.d.ts TestNodeType.js rTrace.d.ts rTrace.js stringifyReflectedValue.d.ts stringifyReflectedValue.js timeDifference.d.ts timeDifference.js wasmTools.d.ts wasmTools.js package.json csv-reporter index.ts lib as-pect.csv-reporter.amd.d.ts as-pect.csv-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json json-reporter index.ts lib as-pect.json-reporter.amd.d.ts as-pect.json-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json snapshots __tests__ 
snapshot.spec.ts jest.config.js lib Snapshot.d.ts Snapshot.js SnapshotDiff.d.ts SnapshotDiff.js SnapshotDiffResult.d.ts SnapshotDiffResult.js as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js parser grammar.d.ts grammar.js package.json src Snapshot.ts SnapshotDiff.ts SnapshotDiffResult.ts index.ts parser grammar.ts tsconfig.json @assemblyscript loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json @babel code-frame README.md lib index.js package.json helper-validator-identifier README.md lib identifier.js index.js keyword.js package.json scripts generate-identifier-regex.js highlight README.md lib index.js node_modules ansi-styles index.js package.json readme.md chalk index.js package.json readme.md templates.js types index.d.ts color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name .eslintrc.json README.md index.js package.json test.js escape-string-regexp index.js package.json readme.md has-flag index.js package.json readme.md supports-color browser.js index.js package.json readme.md package.json @eslint eslintrc CHANGELOG.md README.md conf config-schema.js environments.js eslint-all.js eslint-recommended.js lib cascading-config-array-factory.js config-array-factory.js config-array config-array.js config-dependency.js extracted-config.js ignore-pattern.js index.js override-tester.js flat-compat.js index.js shared ajv.js config-ops.js config-validator.js deprecation-warnings.js naming.js relative-module-resolver.js types.js package.json @humanwhocodes config-array README.md api.js package.json object-schema .eslintrc.js .github workflows nodejs-test.yml release-please.yml CHANGELOG.md README.md package.json src index.js merge-strategy.js object-schema.js validation-strategy.js tests merge-strategy.js object-schema.js validation-strategy.js acorn-jsx README.md index.d.ts index.js package.json xhtml.js acorn CHANGELOG.md README.md dist acorn.d.ts acorn.js acorn.mjs.d.ts bin.js package.json ajv .tonic_example.js README.md dist ajv.bundle.js ajv.min.js lib ajv.d.ts ajv.js cache.js compile async.js equal.js error_classes.js formats.js index.js resolve.js rules.js schema_obj.js ucs2length.js util.js data.js definition_schema.js dotjs README.md _limit.js _limitItems.js _limitLength.js _limitProperties.js allOf.js anyOf.js comment.js const.js contains.js custom.js dependencies.js enum.js format.js if.js index.js items.js multipleOf.js not.js oneOf.js pattern.js properties.js propertyNames.js ref.js required.js uniqueItems.js validate.js keyword.js refs data.json json-schema-draft-04.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json package.json scripts .eslintrc.yml bundle.js compile-dots.js ansi-colors README.md index.js package.json symbols.js types index.d.ts ansi-regex index.d.ts index.js package.json readme.md ansi-styles index.d.ts index.js package.json readme.md argparse CHANGELOG.md README.md index.js lib action.js action append.js append constant.js count.js help.js store.js store constant.js false.js true.js subparsers.js version.js action_container.js argparse.js argument error.js exclusive.js group.js argument_parser.js const.js help added_formatters.js formatter.js namespace.js utils.js package.json as-bignum README.md assembly __tests__ as-pect.d.ts i128.spec.as.ts safe_u128.spec.as.ts u128.spec.as.ts u256.spec.as.ts utils.ts fixed fp128.ts fp256.ts index.ts safe fp128.ts fp256.ts types.ts globals.ts index.ts integer i128.ts i256.ts index.ts safe i128.ts i256.ts i64.ts 
index.ts u128.ts u256.ts u64.ts u128.ts u256.ts tsconfig.json utils.ts package.json asbuild README.md dist cli.d.ts cli.js commands build.d.ts build.js fmt.d.ts fmt.js index.d.ts index.js init cmd.d.ts cmd.js files asconfigJson.d.ts asconfigJson.js aspecConfig.d.ts aspecConfig.js assembly_files.d.ts assembly_files.js eslintConfig.d.ts eslintConfig.js gitignores.d.ts gitignores.js index.d.ts index.js indexJs.d.ts indexJs.js packageJson.d.ts packageJson.js test_files.d.ts test_files.js index.d.ts index.js interfaces.d.ts interfaces.js run.d.ts run.js test.d.ts test.js index.d.ts index.js main.d.ts main.js utils.d.ts utils.js index.js node_modules cliui CHANGELOG.md LICENSE.txt README.md index.js package.json wrap-ansi index.js package.json readme.md y18n CHANGELOG.md README.md index.js package.json yargs-parser CHANGELOG.md LICENSE.txt README.md index.js lib tokenize-arg-string.js package.json yargs CHANGELOG.md README.md build lib apply-extends.d.ts apply-extends.js argsert.d.ts argsert.js command.d.ts command.js common-types.d.ts common-types.js completion-templates.d.ts completion-templates.js completion.d.ts completion.js is-promise.d.ts is-promise.js levenshtein.d.ts levenshtein.js middleware.d.ts middleware.js obj-filter.d.ts obj-filter.js parse-command.d.ts parse-command.js process-argv.d.ts process-argv.js usage.d.ts usage.js validation.d.ts validation.js yargs.d.ts yargs.js yerror.d.ts yerror.js index.js locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json yargs.js package.json assemblyscript-json .eslintrc.js .travis.yml README.md assembly JSON.ts decoder.ts encoder.ts index.ts tsconfig.json util index.ts index.js package.json temp-docs README.md classes decoderstate.md json.arr.md json.bool.md json.float.md json.integer.md json.null.md json.num.md json.obj.md json.str.md json.value.md jsondecoder.md jsonencoder.md jsonhandler.md throwingjsonhandler.md modules json.md assemblyscript-regex .eslintrc.js .github workflows benchmark.yml release.yml test.yml README.md as-pect.config.js asconfig.empty.json asconfig.json assembly __spec_tests__ generated.spec.ts __tests__ alterations.spec.ts as-pect.d.ts boundary-assertions.spec.ts capture-group.spec.ts character-classes.spec.ts character-sets.spec.ts characters.ts empty.ts quantifiers.spec.ts range-quantifiers.spec.ts regex.spec.ts utils.ts char.ts env.ts index.ts nfa matcher.ts nfa.ts types.ts walker.ts parser node.ts parser.ts string-iterator.ts walker.ts regexp.ts tsconfig.json util.ts benchmark benchmark.js package.json spec test-generator.js ts index.ts tsconfig.json assemblyscript-temporal .github workflows node.js.yml release.yml .vscode launch.json README.md as-pect.config.js asconfig.empty.json asconfig.json assembly __tests__ README.md as-pect.d.ts date.spec.ts duration.spec.ts empty.ts plaindate.spec.ts plaindatetime.spec.ts plainmonthday.spec.ts plaintime.spec.ts plainyearmonth.spec.ts timezone.spec.ts zoneddatetime.spec.ts constants.ts date.ts duration.ts enums.ts env.ts index.ts instant.ts now.ts plaindate.ts plaindatetime.ts plainmonthday.ts plaintime.ts plainyearmonth.ts timezone.ts tsconfig.json tz __tests__ index.spec.ts rule.spec.ts zone.spec.ts iana.ts index.ts rule.ts zone.ts utils.ts zoneddatetime.ts development.md package.json tzdb README.md iana theory.html zoneinfo2tdf.pl assemblyscript README.md cli README.md asc.d.ts asc.js asc.json shim 
README.md fs.js path.js process.js transform.d.ts transform.js util colors.d.ts colors.js find.d.ts find.js mkdirp.d.ts mkdirp.js options.d.ts options.js utf8.d.ts utf8.js dist asc.js assemblyscript.d.ts assemblyscript.js sdk.js index.d.ts index.js lib loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json rtrace README.md bin rtplot.js index.d.ts index.js package.json umd index.d.ts index.js package.json package-lock.json package.json std README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings Date.ts Math.ts Reflect.ts asyncify.ts console.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts polyfills.ts process.ts reference.ts regexp.ts rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts set.ts shared feature.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts uri.ts util casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts uri.ts vector.ts wasi index.ts portable.json portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json astral-regex index.d.ts index.js package.json readme.md axios CHANGELOG.md README.md UPGRADE_GUIDE.md dist axios.js axios.min.js index.d.ts index.js lib adapters README.md http.js xhr.js axios.js cancel Cancel.js CancelToken.js isCancel.js core Axios.js InterceptorManager.js README.md buildFullPath.js createError.js dispatchRequest.js enhanceError.js mergeConfig.js settle.js transformData.js defaults.js helpers README.md bind.js buildURL.js combineURLs.js cookies.js deprecatedMethod.js isAbsoluteURL.js isURLSameOrigin.js normalizeHeaderName.js parseHeaders.js spread.js utils.js package.json balanced-match .github FUNDING.yml LICENSE.md README.md index.js package.json base-x LICENSE.md README.md package.json src index.d.ts index.js binary-install README.md example binary.js package.json run.js index.js package.json src binary.js binaryen README.md index.d.ts package-lock.json package.json wasm.d.ts bn.js README.md lib bn.js package.json brace-expansion README.md index.js package.json bs58 CHANGELOG.md README.md index.js package.json callsites index.d.ts index.js package.json readme.md camelcase index.d.ts index.js package.json readme.md chalk index.d.ts package.json readme.md source index.js templates.js util.js chownr README.md chownr.js package.json cliui CHANGELOG.md LICENSE.txt README.md build lib index.js string-utils.js package.json color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name README.md index.js package.json commander CHANGELOG.md Readme.md index.js package.json typings index.d.ts concat-map .travis.yml example map.js index.js package.json test map.js cross-spawn CHANGELOG.md README.md index.js lib enoent.js parse.js util escape.js readShebang.js resolveCommand.js package.json csv-stringify README.md lib browser index.js sync.js es5 index.d.ts index.js sync.d.ts sync.js index.d.ts index.js sync.d.ts sync.js package.json debug README.md package.json src browser.js common.js index.js node.js decamelize index.js package.json readme.md deep-is .travis.yml example cmp.js index.js package.json test NaN.js cmp.js neg-vs-pos-0.js diff CONTRIBUTING.md README.md dist diff.js diff.min.js 
lib convert dmp.js xml.js diff array.js base.js character.js css.js json.js line.js sentence.js word.js index.es6.js index.js patch apply.js create.js merge.js parse.js util array.js distance-iterator.js params.js package.json release-notes.md runtime.js discontinuous-range .travis.yml README.md index.js package.json test main-test.js doctrine CHANGELOG.md README.md lib doctrine.js typed.js utility.js package.json emoji-regex LICENSE-MIT.txt README.md es2015 index.js text.js index.d.ts index.js package.json text.js enquirer CHANGELOG.md README.md index.d.ts index.js lib ansi.js combos.js completer.js interpolate.js keypress.js placeholder.js prompt.js prompts autocomplete.js basicauth.js confirm.js editable.js form.js index.js input.js invisible.js list.js multiselect.js numeral.js password.js quiz.js scale.js select.js snippet.js sort.js survey.js text.js toggle.js render.js roles.js state.js styles.js symbols.js theme.js timer.js types array.js auth.js boolean.js index.js number.js string.js utils.js package.json env-paths index.d.ts index.js package.json readme.md escalade dist index.js index.d.ts package.json readme.md sync index.d.ts index.js escape-string-regexp index.d.ts index.js package.json readme.md eslint-scope CHANGELOG.md README.md lib definition.js index.js pattern-visitor.js reference.js referencer.js scope-manager.js scope.js variable.js node_modules estraverse README.md estraverse.js gulpfile.js package.json package.json eslint-utils README.md index.js package.json eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json eslint CHANGELOG.md README.md bin eslint.js conf category-list.json config-schema.js default-cli-options.js eslint-all.js eslint-recommended.js replacements.json lib api.js cli-engine cli-engine.js file-enumerator.js formatters checkstyle.js codeframe.js compact.js html.js jslint-xml.js json-with-metadata.js json.js junit.js stylish.js table.js tap.js unix.js visualstudio.js hash.js index.js lint-result-cache.js load-rules.js xml-escape.js cli.js config default-config.js flat-config-array.js flat-config-schema.js rule-validator.js eslint eslint.js index.js init autoconfig.js config-file.js config-initializer.js config-rule.js npm-utils.js source-code-utils.js linter apply-disable-directives.js code-path-analysis code-path-analyzer.js code-path-segment.js code-path-state.js code-path.js debug-helpers.js fork-context.js id-generator.js config-comment-parser.js index.js interpolate.js linter.js node-event-generator.js report-translator.js rule-fixer.js rules.js safe-emitter.js source-code-fixer.js timing.js options.js rule-tester index.js rule-tester.js rules accessor-pairs.js array-bracket-newline.js array-bracket-spacing.js array-callback-return.js array-element-newline.js arrow-body-style.js arrow-parens.js arrow-spacing.js block-scoped-var.js block-spacing.js brace-style.js callback-return.js camelcase.js capitalized-comments.js class-methods-use-this.js comma-dangle.js comma-spacing.js comma-style.js complexity.js computed-property-spacing.js consistent-return.js consistent-this.js constructor-super.js curly.js default-case-last.js default-case.js default-param-last.js dot-location.js dot-notation.js eol-last.js eqeqeq.js for-direction.js func-call-spacing.js func-name-matching.js func-names.js func-style.js function-call-argument-newline.js function-paren-newline.js generator-star-spacing.js getter-return.js global-require.js grouped-accessor-pairs.js guard-for-in.js handle-callback-err.js id-blacklist.js id-denylist.js 
id-length.js id-match.js implicit-arrow-linebreak.js indent-legacy.js indent.js index.js init-declarations.js jsx-quotes.js key-spacing.js keyword-spacing.js line-comment-position.js linebreak-style.js lines-around-comment.js lines-around-directive.js lines-between-class-members.js max-classes-per-file.js max-depth.js max-len.js max-lines-per-function.js max-lines.js max-nested-callbacks.js max-params.js max-statements-per-line.js max-statements.js multiline-comment-style.js multiline-ternary.js new-cap.js new-parens.js newline-after-var.js newline-before-return.js newline-per-chained-call.js no-alert.js no-array-constructor.js no-async-promise-executor.js no-await-in-loop.js no-bitwise.js no-buffer-constructor.js no-caller.js no-case-declarations.js no-catch-shadow.js no-class-assign.js no-compare-neg-zero.js no-cond-assign.js no-confusing-arrow.js no-console.js no-const-assign.js no-constant-condition.js no-constructor-return.js no-continue.js no-control-regex.js no-debugger.js no-delete-var.js no-div-regex.js no-dupe-args.js no-dupe-class-members.js no-dupe-else-if.js no-dupe-keys.js no-duplicate-case.js no-duplicate-imports.js no-else-return.js no-empty-character-class.js no-empty-function.js no-empty-pattern.js no-empty.js no-eq-null.js no-eval.js no-ex-assign.js no-extend-native.js no-extra-bind.js no-extra-boolean-cast.js no-extra-label.js no-extra-parens.js no-extra-semi.js no-fallthrough.js no-floating-decimal.js no-func-assign.js no-global-assign.js no-implicit-coercion.js no-implicit-globals.js no-implied-eval.js no-import-assign.js no-inline-comments.js no-inner-declarations.js no-invalid-regexp.js no-invalid-this.js no-irregular-whitespace.js no-iterator.js no-label-var.js no-labels.js no-lone-blocks.js no-lonely-if.js no-loop-func.js no-loss-of-precision.js no-magic-numbers.js no-misleading-character-class.js no-mixed-operators.js no-mixed-requires.js no-mixed-spaces-and-tabs.js no-multi-assign.js no-multi-spaces.js no-multi-str.js no-multiple-empty-lines.js no-native-reassign.js no-negated-condition.js no-negated-in-lhs.js no-nested-ternary.js no-new-func.js no-new-object.js no-new-require.js no-new-symbol.js no-new-wrappers.js no-new.js no-nonoctal-decimal-escape.js no-obj-calls.js no-octal-escape.js no-octal.js no-param-reassign.js no-path-concat.js no-plusplus.js no-process-env.js no-process-exit.js no-promise-executor-return.js no-proto.js no-prototype-builtins.js no-redeclare.js no-regex-spaces.js no-restricted-exports.js no-restricted-globals.js no-restricted-imports.js no-restricted-modules.js no-restricted-properties.js no-restricted-syntax.js no-return-assign.js no-return-await.js no-script-url.js no-self-assign.js no-self-compare.js no-sequences.js no-setter-return.js no-shadow-restricted-names.js no-shadow.js no-spaced-func.js no-sparse-arrays.js no-sync.js no-tabs.js no-template-curly-in-string.js no-ternary.js no-this-before-super.js no-throw-literal.js no-trailing-spaces.js no-undef-init.js no-undef.js no-undefined.js no-underscore-dangle.js no-unexpected-multiline.js no-unmodified-loop-condition.js no-unneeded-ternary.js no-unreachable-loop.js no-unreachable.js no-unsafe-finally.js no-unsafe-negation.js no-unsafe-optional-chaining.js no-unused-expressions.js no-unused-labels.js no-unused-vars.js no-use-before-define.js no-useless-backreference.js no-useless-call.js no-useless-catch.js no-useless-computed-key.js no-useless-concat.js no-useless-constructor.js no-useless-escape.js no-useless-rename.js no-useless-return.js no-var.js no-void.js 
no-warning-comments.js no-whitespace-before-property.js no-with.js nonblock-statement-body-position.js object-curly-newline.js object-curly-spacing.js object-property-newline.js object-shorthand.js one-var-declaration-per-line.js one-var.js operator-assignment.js operator-linebreak.js padded-blocks.js padding-line-between-statements.js prefer-arrow-callback.js prefer-const.js prefer-destructuring.js prefer-exponentiation-operator.js prefer-named-capture-group.js prefer-numeric-literals.js prefer-object-spread.js prefer-promise-reject-errors.js prefer-reflect.js prefer-regex-literals.js prefer-rest-params.js prefer-spread.js prefer-template.js quote-props.js quotes.js radix.js require-atomic-updates.js require-await.js require-jsdoc.js require-unicode-regexp.js require-yield.js rest-spread-spacing.js semi-spacing.js semi-style.js semi.js sort-imports.js sort-keys.js sort-vars.js space-before-blocks.js space-before-function-paren.js space-in-parens.js space-infix-ops.js space-unary-ops.js spaced-comment.js strict.js switch-colon-spacing.js symbol-description.js template-curly-spacing.js template-tag-spacing.js unicode-bom.js use-isnan.js utils ast-utils.js fix-tracker.js keywords.js lazy-loading-rule-map.js patterns letters.js unicode index.js is-combining-character.js is-emoji-modifier.js is-regional-indicator-symbol.js is-surrogate-pair.js valid-jsdoc.js valid-typeof.js vars-on-top.js wrap-iife.js wrap-regex.js yield-star-spacing.js yoda.js shared ajv.js ast-utils.js config-validator.js deprecation-warnings.js logging.js relative-module-resolver.js runtime-info.js string-utils.js traverser.js types.js source-code index.js source-code.js token-store backward-token-comment-cursor.js backward-token-cursor.js cursor.js cursors.js decorative-cursor.js filter-cursor.js forward-token-comment-cursor.js forward-token-cursor.js index.js limit-cursor.js padded-token-cursor.js skip-cursor.js utils.js messages all-files-ignored.js extend-config-missing.js failed-to-read-json.js file-not-found.js no-config-found.js plugin-conflict.js plugin-invalid.js plugin-missing.js print-config-with-directory-path.js whitespace-found.js node_modules eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json package.json espree CHANGELOG.md README.md espree.js lib ast-node-types.js espree.js features.js options.js token-translator.js visitor-keys.js package.json esprima README.md bin esparse.js esvalidate.js dist esprima.js package.json esquery README.md dist esquery.esm.js esquery.esm.min.js esquery.js esquery.lite.js esquery.lite.min.js esquery.min.js license.txt package.json parser.js esrecurse README.md esrecurse.js gulpfile.babel.js package.json estraverse README.md estraverse.js gulpfile.js package.json esutils README.md lib ast.js code.js keyword.js utils.js package.json fast-deep-equal README.md es6 index.d.ts index.js react.d.ts react.js index.d.ts index.js package.json react.d.ts react.js fast-json-stable-stringify .eslintrc.yml .github FUNDING.yml .travis.yml README.md benchmark index.js test.json example key_cmp.js nested.js str.js value_cmp.js index.d.ts index.js package.json test cmp.js nested.js str.js to-json.js fast-levenshtein LICENSE.md README.md levenshtein.js package.json file-entry-cache README.md cache.js changelog.md package.json find-up index.d.ts index.js package.json readme.md flat-cache README.md changelog.md package.json src cache.js del.js utils.js flatted .github FUNDING.yml workflows node.js.yml README.md SPECS.md cjs index.js package.json es.js esm.js esm 
index.js index.js min.js package.json php flatted.php types.d.ts follow-redirects README.md http.js https.js index.js node_modules debug .coveralls.yml .travis.yml CHANGELOG.md README.md karma.conf.js node.js package.json src browser.js debug.js index.js node.js ms index.js license.md package.json readme.md package.json fs-minipass README.md index.js package.json fs.realpath README.md index.js old.js package.json functional-red-black-tree README.md bench test.js package.json rbtree.js test test.js get-caller-file LICENSE.md README.md index.d.ts index.js package.json glob-parent CHANGELOG.md README.md index.js package.json glob README.md common.js glob.js package.json sync.js globals globals.json index.d.ts index.js package.json readme.md has-flag index.d.ts index.js package.json readme.md hasurl README.md index.js package.json ignore CHANGELOG.md README.md index.d.ts index.js legacy.js package.json import-fresh index.d.ts index.js package.json readme.md imurmurhash README.md imurmurhash.js imurmurhash.min.js package.json inflight README.md inflight.js package.json inherits README.md inherits.js inherits_browser.js package.json is-extglob README.md index.js package.json is-fullwidth-code-point index.d.ts index.js package.json readme.md is-glob README.md index.js package.json isarray .travis.yml README.md component.json index.js package.json test.js isexe README.md index.js mode.js package.json test basic.js windows.js isobject README.md index.js package.json js-base64 LICENSE.md README.md base64.d.ts base64.js package.json js-tokens CHANGELOG.md README.md index.js package.json js-yaml CHANGELOG.md README.md bin js-yaml.js dist js-yaml.js js-yaml.min.js index.js lib js-yaml.js js-yaml common.js dumper.js exception.js loader.js mark.js schema.js schema core.js default_full.js default_safe.js failsafe.js json.js type.js type binary.js bool.js float.js int.js js function.js regexp.js undefined.js map.js merge.js null.js omap.js pairs.js seq.js set.js str.js timestamp.js package.json json-schema-traverse .eslintrc.yml .travis.yml README.md index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js json-stable-stringify-without-jsonify .travis.yml example key_cmp.js nested.js str.js value_cmp.js index.js package.json test cmp.js nested.js replacer.js space.js str.js to-json.js levn README.md lib cast.js index.js parse-string.js package.json line-column README.md lib line-column.js package.json locate-path index.d.ts index.js package.json readme.md lodash.clonedeep README.md index.js package.json lodash.merge README.md index.js package.json lodash.sortby README.md index.js package.json lodash.truncate README.md index.js package.json long README.md dist long.js index.js package.json src long.js lru-cache README.md index.js package.json minimatch README.md minimatch.js package.json minimist .travis.yml example parse.js index.js package.json test all_bool.js bool.js dash.js default_bool.js dotted.js kv_short.js long.js num.js parse.js parse_modified.js proto.js short.js stop_early.js unknown.js whitespace.js minipass README.md index.d.ts index.js package.json minizlib README.md constants.js index.js package.json mkdirp bin cmd.js usage.txt index.js package.json moo README.md moo.js package.json ms index.js license.md package.json readme.md natural-compare README.md index.js package.json near-mock-vm assembly __tests__ main.ts context.ts index.ts outcome.ts vm.ts bin bin.js package.json pkg near_mock_vm.d.ts near_mock_vm.js package.json vm dist cli.d.ts cli.js context.d.ts context.js 
index.d.ts index.js memory.d.ts memory.js runner.d.ts runner.js utils.d.ts utils.js index.js near-sdk-as README.md as-pect.config.js as_types.d.ts asconfig.json asp.asconfig.json assembly __tests__ as-pect.d.ts assert.spec.ts avl-tree.spec.ts bignum.spec.ts contract.spec.ts contract.ts data.txt datetime.spec.ts empty.ts generic.ts includeBytes.spec.ts main.ts max-heap.spec.ts model.ts near.spec.ts persistent-set.spec.ts promise.spec.ts rollback.spec.ts roundtrip.spec.ts runtime.spec.ts unordered-map.spec.ts util.ts utils.spec.ts as_types.d.ts bindgen.ts index.ts json.lib.ts tsconfig.json vm __tests__ vm.include.ts index.ts compiler.js imports.js out assembly __tests__ ason.ts model.ts ~lib as-bignum integer safe u128.ts package.json near-sdk-bindgen README.md assembly index.ts compiler.js dist JSONBuilder.d.ts JSONBuilder.js classExporter.d.ts classExporter.js index.d.ts index.js transformer.d.ts transformer.js typeChecker.d.ts typeChecker.js utils.d.ts utils.js index.js package.json near-sdk-core README.md asconfig.json assembly as_types.d.ts base58.ts base64.ts bignum.ts collections avlTree.ts index.ts maxHeap.ts persistentDeque.ts persistentMap.ts persistentSet.ts persistentUnorderedMap.ts persistentVector.ts util.ts contract.ts datetime.ts env env.ts index.ts runtime_api.ts index.ts logging.ts math.ts promise.ts storage.ts tsconfig.json util.ts docs assets css main.css js main.js search.json classes _sdk_core_assembly_collections_avltree_.avltree.html _sdk_core_assembly_collections_avltree_.avltreenode.html _sdk_core_assembly_collections_avltree_.childparentpair.html _sdk_core_assembly_collections_avltree_.nullable.html _sdk_core_assembly_collections_persistentdeque_.persistentdeque.html _sdk_core_assembly_collections_persistentmap_.persistentmap.html _sdk_core_assembly_collections_persistentset_.persistentset.html _sdk_core_assembly_collections_persistentunorderedmap_.persistentunorderedmap.html _sdk_core_assembly_collections_persistentvector_.persistentvector.html _sdk_core_assembly_contract_.context-1.html _sdk_core_assembly_contract_.contractpromise.html _sdk_core_assembly_contract_.contractpromiseresult.html _sdk_core_assembly_math_.rng.html _sdk_core_assembly_promise_.contractpromisebatch.html _sdk_core_assembly_storage_.storage-1.html globals.html index.html modules _sdk_core_assembly_base58_.base58.html _sdk_core_assembly_base58_.html _sdk_core_assembly_base64_.base64.html _sdk_core_assembly_base64_.html _sdk_core_assembly_collections_avltree_.html _sdk_core_assembly_collections_index_.collections.html _sdk_core_assembly_collections_index_.html _sdk_core_assembly_collections_persistentdeque_.html _sdk_core_assembly_collections_persistentmap_.html _sdk_core_assembly_collections_persistentset_.html _sdk_core_assembly_collections_persistentunorderedmap_.html _sdk_core_assembly_collections_persistentvector_.html _sdk_core_assembly_collections_util_.html _sdk_core_assembly_contract_.html _sdk_core_assembly_env_env_.env.html _sdk_core_assembly_env_env_.html _sdk_core_assembly_env_index_.html _sdk_core_assembly_env_runtime_api_.html _sdk_core_assembly_index_.html _sdk_core_assembly_logging_.html _sdk_core_assembly_logging_.logging.html _sdk_core_assembly_math_.html _sdk_core_assembly_math_.math.html _sdk_core_assembly_promise_.html _sdk_core_assembly_storage_.html _sdk_core_assembly_util_.html _sdk_core_assembly_util_.util.html package.json near-sdk-simulator __tests__ avl-tree-contract.spec.ts cross.spec.ts empty.spec.ts exportAs.spec.ts singleton-no-constructor.spec.ts 
singleton.spec.ts asconfig.js asconfig.json assembly __tests__ avlTreeContract.ts empty.ts exportAs.ts model.ts sentences.ts singleton-fail.ts singleton-no-constructor.ts singleton.ts words.ts as_types.d.ts tsconfig.json dist bin.d.ts bin.js context.d.ts context.js index.d.ts index.js runtime.d.ts runtime.js types.d.ts types.js utils.d.ts utils.js jest.config.js out assembly __tests__ empty.ts exportAs.ts model.ts sentences.ts singleton copy.ts singleton-no-constructor.ts singleton.ts package.json src context.ts index.ts runtime.ts types.ts utils.ts tsconfig.json near-vm getBinary.js install.js package.json run.js uninstall.js nearley LICENSE.txt README.md bin nearley-railroad.js nearley-test.js nearley-unparse.js nearleyc.js lib compile.js generate.js lint.js nearley-language-bootstrapped.js nearley.js stream.js unparse.js package.json once README.md once.js package.json optionator CHANGELOG.md README.md lib help.js index.js util.js package.json p-limit index.d.ts index.js package.json readme.md p-locate index.d.ts index.js package.json readme.md p-try index.d.ts index.js package.json readme.md parent-module index.js package.json readme.md path-exists index.d.ts index.js package.json readme.md path-is-absolute index.js package.json readme.md path-key index.d.ts index.js package.json readme.md prelude-ls CHANGELOG.md README.md lib Func.js List.js Num.js Obj.js Str.js index.js package.json progress CHANGELOG.md Readme.md index.js lib node-progress.js package.json punycode LICENSE-MIT.txt README.md package.json punycode.es6.js punycode.js railroad-diagrams README.md example.html generator.html package.json railroad-diagrams.css railroad-diagrams.js railroad_diagrams.py randexp README.md lib randexp.js package.json regexpp README.md index.d.ts index.js package.json require-directory .travis.yml index.js package.json require-from-string index.js package.json readme.md require-main-filename CHANGELOG.md LICENSE.txt README.md index.js package.json resolve-from index.js package.json readme.md ret README.md lib index.js positions.js sets.js types.js util.js package.json rimraf CHANGELOG.md README.md bin.js package.json rimraf.js safe-buffer README.md index.d.ts index.js package.json semver README.md bin semver.js classes comparator.js index.js range.js semver.js functions clean.js cmp.js coerce.js compare-build.js compare-loose.js compare.js diff.js eq.js gt.js gte.js inc.js lt.js lte.js major.js minor.js neq.js parse.js patch.js prerelease.js rcompare.js rsort.js satisfies.js sort.js valid.js index.js internal constants.js debug.js identifiers.js parse-options.js re.js package.json preload.js ranges gtr.js intersects.js ltr.js max-satisfying.js min-satisfying.js min-version.js outside.js simplify.js subset.js to-comparators.js valid.js set-blocking CHANGELOG.md LICENSE.txt README.md index.js package.json shebang-command index.js package.json readme.md shebang-regex index.d.ts index.js package.json readme.md slice-ansi index.js package.json readme.md sprintf-js README.md bower.json demo angular.html dist angular-sprintf.min.js sprintf.min.js gruntfile.js package.json src angular-sprintf.js sprintf.js test test.js string-width index.d.ts index.js package.json readme.md strip-ansi index.d.ts index.js package.json readme.md strip-json-comments index.d.ts index.js package.json readme.md supports-color browser.js index.js package.json readme.md table README.md dist src alignSpanningCell.d.ts alignSpanningCell.js alignString.d.ts alignString.js alignTableData.d.ts alignTableData.js 
calculateCellHeight.d.ts calculateCellHeight.js calculateMaximumColumnWidths.d.ts calculateMaximumColumnWidths.js calculateOutputColumnWidths.d.ts calculateOutputColumnWidths.js calculateRowHeights.d.ts calculateRowHeights.js calculateSpanningCellWidth.d.ts calculateSpanningCellWidth.js createStream.d.ts createStream.js drawBorder.d.ts drawBorder.js drawContent.d.ts drawContent.js drawRow.d.ts drawRow.js drawTable.d.ts drawTable.js generated validators.d.ts validators.js getBorderCharacters.d.ts getBorderCharacters.js index.d.ts index.js injectHeaderConfig.d.ts injectHeaderConfig.js makeRangeConfig.d.ts makeRangeConfig.js makeStreamConfig.d.ts makeStreamConfig.js makeTableConfig.d.ts makeTableConfig.js mapDataUsingRowHeights.d.ts mapDataUsingRowHeights.js padTableData.d.ts padTableData.js schemas config.json shared.json streamConfig.json spanningCellManager.d.ts spanningCellManager.js stringifyTableData.d.ts stringifyTableData.js table.d.ts table.js truncateTableData.d.ts truncateTableData.js types api.d.ts api.js internal.d.ts internal.js utils.d.ts utils.js validateConfig.d.ts validateConfig.js validateSpanningCellConfig.d.ts validateSpanningCellConfig.js validateTableData.d.ts validateTableData.js wrapCell.d.ts wrapCell.js wrapString.d.ts wrapString.js wrapWord.d.ts wrapWord.js node_modules ajv .runkit_example.js README.md dist 2019.d.ts 2019.js 2020.d.ts 2020.js ajv.d.ts ajv.js compile codegen code.d.ts code.js index.d.ts index.js scope.d.ts scope.js errors.d.ts errors.js index.d.ts index.js jtd parse.d.ts parse.js serialize.d.ts serialize.js types.d.ts types.js names.d.ts names.js ref_error.d.ts ref_error.js resolve.d.ts resolve.js rules.d.ts rules.js util.d.ts util.js validate applicability.d.ts applicability.js boolSchema.d.ts boolSchema.js dataType.d.ts dataType.js defaults.d.ts defaults.js index.d.ts index.js keyword.d.ts keyword.js subschema.d.ts subschema.js core.d.ts core.js jtd.d.ts jtd.js refs data.json json-schema-2019-09 index.d.ts index.js meta applicator.json content.json core.json format.json meta-data.json validation.json schema.json json-schema-2020-12 index.d.ts index.js meta applicator.json content.json core.json format-annotation.json meta-data.json unevaluated.json validation.json schema.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json jtd-schema.d.ts jtd-schema.js runtime equal.d.ts equal.js parseJson.d.ts parseJson.js quote.d.ts quote.js re2.d.ts re2.js timestamp.d.ts timestamp.js ucs2length.d.ts ucs2length.js uri.d.ts uri.js validation_error.d.ts validation_error.js standalone index.d.ts index.js instance.d.ts instance.js types index.d.ts index.js json-schema.d.ts json-schema.js jtd-schema.d.ts jtd-schema.js vocabularies applicator additionalItems.d.ts additionalItems.js additionalProperties.d.ts additionalProperties.js allOf.d.ts allOf.js anyOf.d.ts anyOf.js contains.d.ts contains.js dependencies.d.ts dependencies.js dependentSchemas.d.ts dependentSchemas.js if.d.ts if.js index.d.ts index.js items.d.ts items.js items2020.d.ts items2020.js not.d.ts not.js oneOf.d.ts oneOf.js patternProperties.d.ts patternProperties.js prefixItems.d.ts prefixItems.js properties.d.ts properties.js propertyNames.d.ts propertyNames.js thenElse.d.ts thenElse.js code.d.ts code.js core id.d.ts id.js index.d.ts index.js ref.d.ts ref.js discriminator index.d.ts index.js types.d.ts types.js draft2020.d.ts draft2020.js draft7.d.ts draft7.js dynamic dynamicAnchor.d.ts dynamicAnchor.js dynamicRef.d.ts dynamicRef.js index.d.ts index.js recursiveAnchor.d.ts 
recursiveAnchor.js recursiveRef.d.ts recursiveRef.js errors.d.ts errors.js format format.d.ts format.js index.d.ts index.js jtd discriminator.d.ts discriminator.js elements.d.ts elements.js enum.d.ts enum.js error.d.ts error.js index.d.ts index.js metadata.d.ts metadata.js nullable.d.ts nullable.js optionalProperties.d.ts optionalProperties.js properties.d.ts properties.js ref.d.ts ref.js type.d.ts type.js union.d.ts union.js values.d.ts values.js metadata.d.ts metadata.js next.d.ts next.js unevaluated index.d.ts index.js unevaluatedItems.d.ts unevaluatedItems.js unevaluatedProperties.d.ts unevaluatedProperties.js validation const.d.ts const.js dependentRequired.d.ts dependentRequired.js enum.d.ts enum.js index.d.ts index.js limitContains.d.ts limitContains.js limitItems.d.ts limitItems.js limitLength.d.ts limitLength.js limitNumber.d.ts limitNumber.js limitProperties.d.ts limitProperties.js multipleOf.d.ts multipleOf.js pattern.d.ts pattern.js required.d.ts required.js uniqueItems.d.ts uniqueItems.js lib 2019.ts 2020.ts ajv.ts compile codegen code.ts index.ts scope.ts errors.ts index.ts jtd parse.ts serialize.ts types.ts names.ts ref_error.ts resolve.ts rules.ts util.ts validate applicability.ts boolSchema.ts dataType.ts defaults.ts index.ts keyword.ts subschema.ts core.ts jtd.ts refs data.json json-schema-2019-09 index.ts meta applicator.json content.json core.json format.json meta-data.json validation.json schema.json json-schema-2020-12 index.ts meta applicator.json content.json core.json format-annotation.json meta-data.json unevaluated.json validation.json schema.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json jtd-schema.ts runtime equal.ts parseJson.ts quote.ts re2.ts timestamp.ts ucs2length.ts uri.ts validation_error.ts standalone index.ts instance.ts types index.ts json-schema.ts jtd-schema.ts vocabularies applicator additionalItems.ts additionalProperties.ts allOf.ts anyOf.ts contains.ts dependencies.ts dependentSchemas.ts if.ts index.ts items.ts items2020.ts not.ts oneOf.ts patternProperties.ts prefixItems.ts properties.ts propertyNames.ts thenElse.ts code.ts core id.ts index.ts ref.ts discriminator index.ts types.ts draft2020.ts draft7.ts dynamic dynamicAnchor.ts dynamicRef.ts index.ts recursiveAnchor.ts recursiveRef.ts errors.ts format format.ts index.ts jtd discriminator.ts elements.ts enum.ts error.ts index.ts metadata.ts nullable.ts optionalProperties.ts properties.ts ref.ts type.ts union.ts values.ts metadata.ts next.ts unevaluated index.ts unevaluatedItems.ts unevaluatedProperties.ts validation const.ts dependentRequired.ts enum.ts index.ts limitContains.ts limitItems.ts limitLength.ts limitNumber.ts limitProperties.ts multipleOf.ts pattern.ts required.ts uniqueItems.ts package.json json-schema-traverse .eslintrc.yml .github FUNDING.yml workflows build.yml publish.yml README.md index.d.ts index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js package.json tar README.md index.js lib create.js extract.js get-write-flag.js header.js high-level-opt.js large-numbers.js list.js mkdir.js mode-fix.js normalize-windows-path.js pack.js parse.js path-reservations.js pax.js read-entry.js replace.js strip-absolute-path.js strip-trailing-slashes.js types.js unpack.js update.js warn-mixin.js winchars.js write-entry.js package.json text-table .travis.yml example align.js center.js dotalign.js doubledot.js table.js index.js package.json test align.js ansi-colors.js center.js dotalign.js doubledot.js table.js tr46 LICENSE.md README.md 
index.js lib mappingTable.json regexes.js package.json ts-mixer CHANGELOG.md README.md dist cjs decorator.js index.js mixin-tracking.js mixins.js proxy.js settings.js types.js util.js esm index.js index.min.js types decorator.d.ts index.d.ts mixin-tracking.d.ts mixins.d.ts proxy.d.ts settings.d.ts types.d.ts util.d.ts package.json type-check README.md lib check.js index.js parse-type.js package.json type-fest base.d.ts index.d.ts package.json readme.md source async-return-type.d.ts asyncify.d.ts basic.d.ts conditional-except.d.ts conditional-keys.d.ts conditional-pick.d.ts entries.d.ts entry.d.ts except.d.ts fixed-length-array.d.ts iterable-element.d.ts literal-union.d.ts merge-exclusive.d.ts merge.d.ts mutable.d.ts opaque.d.ts package-json.d.ts partial-deep.d.ts promisable.d.ts promise-value.d.ts readonly-deep.d.ts require-at-least-one.d.ts require-exactly-one.d.ts set-optional.d.ts set-required.d.ts set-return-type.d.ts stringified.d.ts tsconfig-json.d.ts union-to-intersection.d.ts utilities.d.ts value-of.d.ts ts41 camel-case.d.ts delimiter-case.d.ts index.d.ts kebab-case.d.ts pascal-case.d.ts snake-case.d.ts universal-url README.md browser.js index.js package.json uri-js README.md dist es5 uri.all.d.ts uri.all.js uri.all.min.d.ts uri.all.min.js esnext index.d.ts index.js regexps-iri.d.ts regexps-iri.js regexps-uri.d.ts regexps-uri.js schemes http.d.ts http.js https.d.ts https.js mailto.d.ts mailto.js urn-uuid.d.ts urn-uuid.js urn.d.ts urn.js ws.d.ts ws.js wss.d.ts wss.js uri.d.ts uri.js util.d.ts util.js package.json v8-compile-cache CHANGELOG.md README.md package.json v8-compile-cache.js visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js URLSearchParams-impl.js URLSearchParams.js infra.js public-api.js url-state-machine.js urlencoded.js utils.js package.json which-module CHANGELOG.md README.md index.js package.json which CHANGELOG.md README.md package.json which.js word-wrap README.md index.d.ts index.js package.json wrap-ansi index.js package.json readme.md wrappy README.md package.json wrappy.js y18n CHANGELOG.md README.md build lib cjs.js index.js platform-shims node.js package.json yallist README.md iterator.js package.json yallist.js yargs-parser CHANGELOG.md LICENSE.txt README.md browser.js build lib index.js string-utils.js tokenize-arg-string.js yargs-parser-types.js yargs-parser.js package.json yargs CHANGELOG.md README.md build lib argsert.js command.js completion-templates.js completion.js middleware.js parse-command.js typings common-types.js yargs-parser-types.js usage.js utils apply-extends.js is-promise.js levenshtein.js obj-filter.js process-argv.js set-blocking.js which-module.js validation.js yargs-factory.js yerror.js helpers index.js package.json locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json 
zh_CN.json zh_TW.json package.json package.json | features not yet implemented issues with the tests differences between PCRE and JS regex | | |
# lodash.sortby v4.7.0

The [lodash](https://lodash.com/) method `_.sortBy` exported as a [Node.js](https://nodejs.org/) module.

## Installation

Using npm:

```bash
$ {sudo -H} npm i -g npm
$ npm i --save lodash.sortby
```

In Node.js:

```js
var sortBy = require('lodash.sortby');
```

See the [documentation](https://lodash.com/docs#sortBy) or [package source](https://github.com/lodash/lodash/blob/4.7.0-npm-packages/lodash.sortby) for more details.

# Getting Started with Create React App

This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app).

## Available Scripts

In the project directory, you can run:

### `npm start`

Runs the app in development mode.\
Open [http://localhost:3000](http://localhost:3000) to view it in your browser.

The page will reload when you make changes.\
You may also see any lint errors in the console.

### `npm test`

Launches the test runner in interactive watch mode.\
See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information.

### `npm run build`

Builds the app for production to the `build` folder.\
It correctly bundles React in production mode and optimizes the build for the best performance.

The build is minified and the filenames include the hashes.\
Your app is ready to be deployed!

See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information.

### `npm run eject`

**Note: this is a one-way operation. Once you `eject`, you can't go back!**

If you aren't satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project.

Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc.) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you're on your own.

You don't have to ever use `eject`. The curated feature set is suitable for small and mid-sized deployments, and you shouldn't feel obligated to use this feature. However, we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it.

## Learn More

You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started).

To learn React, check out the [React documentation](https://reactjs.org/).
### Code Splitting

This section has moved here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting)

### Analyzing the Bundle Size

This section has moved here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size)

### Making a Progressive Web App

This section has moved here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app)

### Advanced Configuration

This section has moved here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration)

### Deployment

This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment)

### `npm run build` fails to minify

This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify)

# AssemblyScript Rtrace

A tiny utility to sanitize the AssemblyScript runtime. Records allocations and frees performed by the runtime and emits an error if something is off. Also checks for leaks.

Instructions
------------

Compile your module that uses the full or half runtime with `--use ASC_RTRACE=1 --explicitStart` and include an instance of this module as the import named `rtrace`.

```js
const rtrace = new Rtrace({
  onerror(err, info) {
    // handle error
  },
  oninfo(msg) {
    // print message, optional
  },
  getMemory() {
    // obtain the module's memory,
    // e.g. with --explicitStart:
    return instance.exports.memory;
  }
});

const { module, instance } = await WebAssembly.instantiate(...,
  rtrace.install({ ...imports... })
);
instance.exports._start();
...
if (rtrace.active) {
  let leakCount = rtrace.check();
  if (leakCount) {
    // handle error
  }
}
```

Note that references in globals which are not cleared before collection is performed appear as leaks, including their inner members. A TypedArray would leak itself and its backing ArrayBuffer in this case, for example. This is perfectly normal, and clearing all globals avoids this.

# ts-mixer

[version-badge]: https://badgen.net/npm/v/ts-mixer
[version-link]: https://npmjs.com/package/ts-mixer
[build-badge]: https://img.shields.io/github/workflow/status/tannerntannern/ts-mixer/ts-mixer%20CI
[build-link]: https://github.com/tannerntannern/ts-mixer/actions
[ts-versions]: https://badgen.net/badge/icon/3.8,3.9,4.0,4.1,4.2?icon=typescript&label&list=|
[node-versions]: https://badgen.net/badge/node/10%2C12%2C14/blue/?list=|

[![npm version][version-badge]][version-link]
[![github actions][build-badge]][build-link]
[![TS Versions][ts-versions]][build-link]
[![Node.js Versions][node-versions]][build-link]
[![Minified Size](https://badgen.net/bundlephobia/min/ts-mixer)](https://bundlephobia.com/result?p=ts-mixer)
[![Conventional Commits](https://badgen.net/badge/conventional%20commits/1.0.0/yellow)](https://conventionalcommits.org)

## Overview

`ts-mixer` brings mixins to TypeScript. "Mixins" to `ts-mixer` are just classes, so you already know how to write them, and you can probably mix classes from your favorite library without trouble.

The mixin problem is more nuanced than it appears.
I've seen countless code snippets that work for certain situations, but fail in others. `ts-mixer` tries to take the best from all these solutions while accounting for the situations you might not have considered.

[Quick start guide](#quick-start)

### Features

* mixes plain classes
* mixes classes that extend other classes
* mixes classes that were mixed with `ts-mixer`
* supports static properties
* supports protected/private properties (the popular function-that-returns-a-class solution does not)
* mixes abstract classes (with caveats [[1](#caveats)])
* mixes generic classes (with caveats [[2](#caveats)])
* supports class, method, and property decorators (with caveats [[3, 6](#caveats)])
* mostly supports the complexity presented by constructor functions (with caveats [[4](#caveats)])
* comes with an `instanceof`-like replacement (with caveats [[5, 6](#caveats)])
* [multiple mixing strategies](#settings) (ES6 proxies vs hard copy)

### Caveats

1. Mixing abstract classes requires a bit of a hack that may break in future versions of TypeScript. See [mixing abstract classes](#mixing-abstract-classes) below.
2. Mixing generic classes requires a more cumbersome notation, but it's still possible. See [mixing generic classes](#mixing-generic-classes) below.
3. Using decorators in mixed classes also requires a more cumbersome notation. See [mixing with decorators](#mixing-with-decorators) below.
4. ES6 made it impossible to use `.apply(...)` on class constructors (or any means of calling them without `new`), which makes it impossible for `ts-mixer` to pass the proper `this` to your constructors. This may or may not be an issue for your code, but there are options to work around it. See [dealing with constructors](#dealing-with-constructors) below.
5. `ts-mixer` does not support `instanceof` for mixins, but it does offer a replacement. See the [hasMixin function](#hasmixin) for more details.
6. Certain features (specifically, `@decorator` and `hasMixin`) make use of ES6 `Map`s, which means you must either use ES6+ or polyfill `Map` to use them. If you don't need these features, you should be fine without.

## Quick Start

### Installation

```
$ npm install ts-mixer
```

or if you prefer [Yarn](https://yarnpkg.com):

```
$ yarn add ts-mixer
```

### Basic Example

```typescript
import { Mixin } from 'ts-mixer';

class Foo {
    protected makeFoo() {
        return 'foo';
    }
}

class Bar {
    protected makeBar() {
        return 'bar';
    }
}

class FooBar extends Mixin(Foo, Bar) {
    public makeFooBar() {
        return this.makeFoo() + this.makeBar();
    }
}

const fooBar = new FooBar();

console.log(fooBar.makeFooBar()); // "foobar"
```

## Special Cases

### Mixing Abstract Classes

Abstract classes, by definition, cannot be constructed, which means they cannot take on the type, `new(...args) => any`, and by extension, are incompatible with `ts-mixer`. BUT, you can "trick" TypeScript into giving you all the benefits of an abstract class without making it technically abstract.
The trick is just some strategic `// @ts-ignore`'s:

```typescript
import { Mixin } from 'ts-mixer';

// note that Foo is not marked as an abstract class
class Foo {
    // @ts-ignore: "Abstract methods can only appear within an abstract class"
    public abstract makeFoo(): string;
}

class Bar {
    public makeBar() {
        return 'bar';
    }
}

class FooBar extends Mixin(Foo, Bar) {
    // we still get all the benefits of abstract classes here, because TypeScript
    // will still complain if this method isn't implemented
    public makeFoo() {
        return 'foo';
    }
}
```

Do note that while this does work quite well, it is a bit of a hack and I can't promise that it will continue to work in future TypeScript versions.

### Mixing Generic Classes

Frustratingly, it is _impossible_ for generic parameters to be referenced in base class expressions. No matter what, you will eventually run into `Base class expressions cannot reference class type parameters.`

The way to get around this is to leverage [declaration merging](https://www.typescriptlang.org/docs/handbook/declaration-merging.html), and a slightly different mixing function from ts-mixer: `mix`. It works exactly like `Mixin`, except it's a decorator, which means it doesn't affect the type information of the class being decorated. See it in action below:

```typescript
import { mix } from 'ts-mixer';

class Foo<T> {
    public fooMethod(input: T): T {
        return input;
    }
}

class Bar<T> {
    public barMethod(input: T): T {
        return input;
    }
}

interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { }

@mix(Foo, Bar)
class FooBar<T1, T2> {
    public fooBarMethod(input1: T1, input2: T2) {
        return [this.fooMethod(input1), this.barMethod(input2)];
    }
}
```

Key takeaways from this example:

* `interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { }` makes sure `FooBar` has the typing we want, thanks to declaration merging
* `@mix(Foo, Bar)` wires things up "on the JavaScript side", since the interface declaration has nothing to do with runtime behavior.
* The reason we have to use the `mix` decorator is that the typing produced by `Mixin(Foo, Bar)` would conflict with the typing of the interface. `mix` has no effect "on the TypeScript side," thus avoiding type conflicts.

### Mixing with Decorators

Popular libraries such as [class-validator](https://github.com/typestack/class-validator) and [TypeORM](https://github.com/typeorm/typeorm) use decorators to add functionality. Unfortunately, `ts-mixer` has no way of knowing what these libraries do with the decorators behind the scenes. So if you want these decorators to be "inherited" with classes you plan to mix, you first have to wrap them with a special `decorate` function exported by `ts-mixer`. Here's an example using `class-validator`:

```typescript
import { IsBoolean, IsIn, validate } from 'class-validator';
import { Mixin, decorate } from 'ts-mixer';

class Disposable {
    @decorate(IsBoolean()) // instead of @IsBoolean()
    isDisposed: boolean = false;
}

class Statusable {
    @decorate(IsIn(['red', 'green'])) // instead of @IsIn(['red', 'green'])
    status: string = 'green';
}

class ExtendedObject extends Mixin(Disposable, Statusable) {}

const extendedObject = new ExtendedObject();
extendedObject.status = 'blue';

validate(extendedObject).then(errors => {
    console.log(errors);
});
```

### Dealing with Constructors

As mentioned in the [caveats section](#caveats), ES6 disallowed calling constructor functions without `new`.
This means that the only way for `ts-mixer` to mix instance properties is to instantiate each base class separately, then copy the instance properties into a common object. The consequence of this is that constructors mixed by `ts-mixer` will _not_ receive the proper `this`.

**This very well may not be an issue for you!** It only means that your constructors need to be "mostly pure" in terms of how they handle `this`. Specifically, your constructors cannot produce [side effects](https://en.wikipedia.org/wiki/Side_effect_%28computer_science%29) involving `this`, _other than adding properties to `this`_ (the most common side effect in JavaScript constructors).

If you simply cannot eliminate `this` side effects from your constructor, there is a workaround available: `ts-mixer` will automatically forward constructor parameters to a predesignated init function (`settings.initFunction`) if it's present on the class. Unlike constructors, functions can be called with an arbitrary `this`, so this predesignated init function _will_ have the proper `this`. Here's a basic example:

```typescript
import { Mixin, settings } from 'ts-mixer';

settings.initFunction = 'init';

class Person {
    public static allPeople: Set<Person> = new Set();

    protected init() {
        Person.allPeople.add(this);
    }
}

type PartyAffiliation = 'democrat' | 'republican';

class PoliticalParticipant {
    public static democrats: Set<PoliticalParticipant> = new Set();
    public static republicans: Set<PoliticalParticipant> = new Set();

    public party: PartyAffiliation;

    // note that these same args will also be passed to init function
    public constructor(party: PartyAffiliation) {
        this.party = party;
    }

    protected init(party: PartyAffiliation) {
        if (party === 'democrat')
            PoliticalParticipant.democrats.add(this);
        else
            PoliticalParticipant.republicans.add(this);
    }
}

class Voter extends Mixin(Person, PoliticalParticipant) {}

const v1 = new Voter('democrat');
const v2 = new Voter('democrat');
const v3 = new Voter('republican');
const v4 = new Voter('republican');
```

Note the above `.add(this)` statements. These would not work as expected if they were placed in the constructor instead, since `this` is not the same between the constructor and `init`, as explained above.

## Other Features

### hasMixin

As mentioned above, `ts-mixer` does not support `instanceof` for mixins. While it is possible to implement [custom `instanceof` behavior](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/hasInstance), this library does not do so because it would require modifying the source classes, which is deliberately avoided.

You can fill this missing functionality with `hasMixin(instance, mixinClass)` instead. See the below example:

```typescript
import { Mixin, hasMixin } from 'ts-mixer';

class Foo {}
class Bar {}
class FooBar extends Mixin(Foo, Bar) {}

const instance = new FooBar();

// doesn't work with instanceof...
console.log(instance instanceof FooBar) // true
console.log(instance instanceof Foo)    // false
console.log(instance instanceof Bar)    // false

// but everything works nicely with hasMixin!
console.log(hasMixin(instance, FooBar)) // true
console.log(hasMixin(instance, Foo))    // true
console.log(hasMixin(instance, Bar))    // true
```

`hasMixin(instance, mixinClass)` will work anywhere that `instance instanceof mixinClass` works.
Additionally, like `instanceof`, you get the same [type narrowing benefits](https://www.typescriptlang.org/docs/handbook/advanced-types.html#instanceof-type-guards): ```typescript if (hasMixin(instance, Foo)) { // inferred type of instance is "Foo" } if (hasMixin(instance, Bar)) { // inferred type of instance is "Bar" } ``` ## Settings ts-mixer has multiple strategies for mixing classes, which can be configured by modifying the `settings` object it exports. For example: ```typescript import { settings, Mixin } from 'ts-mixer'; settings.prototypeStrategy = 'proxy'; // then use `Mixin` as normal... ``` ### `settings.prototypeStrategy` * Determines how ts-mixer will mix class prototypes together * Possible values: - `'copy'` (default) - Copies all methods from the classes being mixed into a new prototype object. (This will include all methods up the prototype chains as well.) This is the default for ES5 compatibility, but it has the downside of stale references. For example, if you mix `Foo` and `Bar` to make `FooBar`, then redefine a method on `Foo`, `FooBar` will not have the latest methods from `Foo`. If this is not a concern for you, `'copy'` is the best value for this setting. - `'proxy'` - Uses an ES6 Proxy to "soft mix" prototypes. Unlike `'copy'`, updates to the base classes _will_ be reflected in the mixed class, which may be desirable. The downside is that method access is not as performant, nor is it ES5 compatible. ### `settings.staticsStrategy` * Determines how static properties are inherited * Possible values: - `'copy'` (default) - Simply copies all properties (minus `prototype`) from the base classes/constructor functions onto the mixed class. Like `settings.prototypeStrategy = 'copy'`, this strategy also suffers from stale references, but shouldn't be a concern if you don't redefine static methods after mixing. - `'proxy'` - Similar to `settings.prototypeStrategy`, proxies static method access to the base classes. Has the same benefits/downsides. ### `settings.initFunction` * If set, `ts-mixer` will automatically call the function with this name upon construction * Possible values: - `null` (default) - disables the behavior - a string - function name to call upon construction * Read more about why you would want this in [dealing with constructors](#dealing-with-constructors) ### `settings.decoratorInheritance` * Determines how decorators are inherited from classes passed to `Mixin(...)` * Possible values: - `'deep'` (default) - Deeply inherits decorators from all given classes and their ancestors - `'direct'` - Only inherits decorators defined directly on the given classes - `'none'` - Skips decorator inheritance # Author Tanner Nielsen <tannerntannern@gmail.com> * Website - [tannernielsen.com](http://tannernielsen.com) * Github - [tannerntannern](https://github.com/tannerntannern) # tr46.js > An implementation of the [Unicode TR46 specification](http://unicode.org/reports/tr46/). ## Installation [Node.js](http://nodejs.org) `>= 6` is required. To install, type this at the command line: ```shell npm install tr46 ``` ## API ### `toASCII(domainName[, options])` Converts a string of Unicode symbols to a case-folded Punycode string of ASCII symbols.
Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`processingOption`](#processingOption) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) * [`verifyDNSLength`](#verifyDNSLength) ### `toUnicode(domainName[, options])` Converts a case-folded Punycode string of ASCII symbols to a string of Unicode symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) ## Options ### `checkBidi` Type: `Boolean` Default value: `false` When set to `true`, any bi-directional text within the input will be checked for validation. ### `checkHyphens` Type: `Boolean` Default value: `false` When set to `true`, the positions of any hyphen characters within the input will be checked for validation. ### `checkJoiners` Type: `Boolean` Default value: `false` When set to `true`, any word joiner characters within the input will be checked for validation. ### `processingOption` Type: `String` Default value: `"nontransitional"` When set to `"transitional"`, symbols within the input will be validated according to the older IDNA2003 protocol. When set to `"nontransitional"`, the current IDNA2008 protocol will be used. ### `useSTD3ASCIIRules` Type: `Boolean` Default value: `false` When set to `true`, input will be validated according to [STD3 Rules](http://unicode.org/reports/tr46/#STD3_Rules). ### `verifyDNSLength` Type: `Boolean` Default value: `false` When set to `true`, the length of each DNS label within the input will be checked for validation. [![NPM version](https://img.shields.io/npm/v/esprima.svg)](https://www.npmjs.com/package/esprima) [![npm download](https://img.shields.io/npm/dm/esprima.svg)](https://www.npmjs.com/package/esprima) [![Build Status](https://img.shields.io/travis/jquery/esprima/master.svg)](https://travis-ci.org/jquery/esprima) [![Coverage Status](https://img.shields.io/codecov/c/github/jquery/esprima/master.svg)](https://codecov.io/github/jquery/esprima) **Esprima** ([esprima.org](http://esprima.org), BSD license) is a high performance, standard-compliant [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) parser written in ECMAScript (also popularly known as [JavaScript](https://en.wikipedia.org/wiki/JavaScript)). Esprima is created and maintained by [Ariya Hidayat](https://twitter.com/ariyahidayat), with the help of [many contributors](https://github.com/jquery/esprima/contributors). ### Features - Full support for ECMAScript 2017 ([ECMA-262 8th Edition](http://www.ecma-international.org/publications/standards/Ecma-262.htm)) - Sensible [syntax tree format](https://github.com/estree/estree/blob/master/es5.md) as standardized by [ESTree project](https://github.com/estree/estree) - Experimental support for [JSX](https://facebook.github.io/jsx/), a syntax extension for [React](https://facebook.github.io/react/) - Optional tracking of syntax node location (index-based and line-column) - [Heavily tested](http://esprima.org/test/ci.html) (~1500 [unit tests](https://github.com/jquery/esprima/tree/master/test/fixtures) with [full code coverage](https://codecov.io/github/jquery/esprima)) ### API Esprima can be used to perform [lexical analysis](https://en.wikipedia.org/wiki/Lexical_analysis) (tokenization) or [syntactic analysis](https://en.wikipedia.org/wiki/Parsing) (parsing) of a JavaScript program. 
A simple example on Node.js REPL: ```javascript > var esprima = require('esprima'); > var program = 'const answer = 42'; > esprima.tokenize(program); [ { type: 'Keyword', value: 'const' }, { type: 'Identifier', value: 'answer' }, { type: 'Punctuator', value: '=' }, { type: 'Numeric', value: '42' } ] > esprima.parseScript(program); { type: 'Program', body: [ { type: 'VariableDeclaration', declarations: [Object], kind: 'const' } ], sourceType: 'script' } ``` For more information, please read the [complete documentation](http://esprima.org/doc). # axios [![npm version](https://img.shields.io/npm/v/axios.svg?style=flat-square)](https://www.npmjs.org/package/axios) [![build status](https://img.shields.io/travis/axios/axios/master.svg?style=flat-square)](https://travis-ci.org/axios/axios) [![code coverage](https://img.shields.io/coveralls/mzabriskie/axios.svg?style=flat-square)](https://coveralls.io/r/mzabriskie/axios) [![install size](https://packagephobia.now.sh/badge?p=axios)](https://packagephobia.now.sh/result?p=axios) [![npm downloads](https://img.shields.io/npm/dm/axios.svg?style=flat-square)](http://npm-stat.com/charts.html?package=axios) [![gitter chat](https://img.shields.io/gitter/room/mzabriskie/axios.svg?style=flat-square)](https://gitter.im/mzabriskie/axios) [![code helpers](https://www.codetriage.com/axios/axios/badges/users.svg)](https://www.codetriage.com/axios/axios) Promise based HTTP client for the browser and node.js ## Features - Make [XMLHttpRequests](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) from the browser - Make [http](http://nodejs.org/api/http.html) requests from node.js - Supports the [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) API - Intercept request and response - Transform request and response data - Cancel requests - Automatic transforms for JSON data - Client side support for protecting against [XSRF](http://en.wikipedia.org/wiki/Cross-site_request_forgery) ## Browser Support ![Chrome](https://raw.github.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.github.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![Safari](https://raw.github.com/alrra/browser-logos/master/src/safari/safari_48x48.png) | ![Opera](https://raw.github.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Edge](https://raw.github.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![IE](https://raw.github.com/alrra/browser-logos/master/src/archive/internet-explorer_9-11/internet-explorer_9-11_48x48.png) | --- | --- | --- | --- | --- | --- | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | 11 ✔ | [![Browser Matrix](https://saucelabs.com/open_sauce/build_matrix/axios.svg)](https://saucelabs.com/u/axios) ## Installing Using npm: ```bash $ npm install axios ``` Using bower: ```bash $ bower install axios ``` Using yarn: ```bash $ yarn add axios ``` Using cdn: ```html <script src="https://unpkg.com/axios/dist/axios.min.js"></script> ``` ## Example ### note: CommonJS usage In order to gain the TypeScript typings (for intellisense / autocomplete) while using CommonJS imports with `require()` use the following approach: ```js const axios = require('axios').default; // axios.<method> will now provide autocomplete and parameter typings ``` Performing a `GET` request ```js const axios = require('axios'); // Make a request for a user with a given ID axios.get('/user?ID=12345') .then(function (response) { // handle success console.log(response); }) 
.catch(function (error) { // handle error console.log(error); }) .finally(function () { // always executed }); // Optionally the request above could also be done as axios.get('/user', { params: { ID: 12345 } }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }) .finally(function () { // always executed }); // Want to use async/await? Add the `async` keyword to your outer function/method. async function getUser() { try { const response = await axios.get('/user?ID=12345'); console.log(response); } catch (error) { console.error(error); } } ``` > **NOTE:** `async/await` is part of ECMAScript 2017 and is not supported in Internet > Explorer and older browsers, so use with caution. Performing a `POST` request ```js axios.post('/user', { firstName: 'Fred', lastName: 'Flintstone' }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }); ``` Performing multiple concurrent requests ```js function getUserAccount() { return axios.get('/user/12345'); } function getUserPermissions() { return axios.get('/user/12345/permissions'); } axios.all([getUserAccount(), getUserPermissions()]) .then(axios.spread(function (acct, perms) { // Both requests are now complete })); ``` ## axios API Requests can be made by passing the relevant config to `axios`. ##### axios(config) ```js // Send a POST request axios({ method: 'post', url: '/user/12345', data: { firstName: 'Fred', lastName: 'Flintstone' } }); ``` ```js // GET request for remote image axios({ method: 'get', url: 'http://bit.ly/2mTM3nY', responseType: 'stream' }) .then(function (response) { response.data.pipe(fs.createWriteStream('ada_lovelace.jpg')) }); ``` ##### axios(url[, config]) ```js // Send a GET request (default method) axios('/user/12345'); ``` ### Request method aliases For convenience aliases have been provided for all supported request methods. ##### axios.request(config) ##### axios.get(url[, config]) ##### axios.delete(url[, config]) ##### axios.head(url[, config]) ##### axios.options(url[, config]) ##### axios.post(url[, data[, config]]) ##### axios.put(url[, data[, config]]) ##### axios.patch(url[, data[, config]]) ###### NOTE When using the alias methods `url`, `method`, and `data` properties don't need to be specified in config. ### Concurrency Helper functions for dealing with concurrent requests. ##### axios.all(iterable) ##### axios.spread(callback) ### Creating an instance You can create a new instance of axios with a custom config. ##### axios.create([config]) ```js const instance = axios.create({ baseURL: 'https://some-domain.com/api/', timeout: 1000, headers: {'X-Custom-Header': 'foobar'} }); ``` ### Instance methods The available instance methods are listed below. The specified config will be merged with the instance config. ##### axios#request(config) ##### axios#get(url[, config]) ##### axios#delete(url[, config]) ##### axios#head(url[, config]) ##### axios#options(url[, config]) ##### axios#post(url[, data[, config]]) ##### axios#put(url[, data[, config]]) ##### axios#patch(url[, data[, config]]) ##### axios#getUri([config]) ## Request Config These are the available config options for making requests. Only the `url` is required. Requests will default to `GET` if `method` is not specified. ```js { // `url` is the server URL that will be used for the request url: '/user', // `method` is the request method to be used when making the request method: 'get', // default // `baseURL` will be prepended to `url` unless `url` is absolute. 
// It can be convenient to set `baseURL` for an instance of axios to pass relative URLs // to methods of that instance. baseURL: 'https://some-domain.com/api/', // `transformRequest` allows changes to the request data before it is sent to the server // This is only applicable for request methods 'PUT', 'POST', 'PATCH' and 'DELETE' // The last function in the array must return a string or an instance of Buffer, ArrayBuffer, // FormData or Stream // You may modify the headers object. transformRequest: [function (data, headers) { // Do whatever you want to transform the data return data; }], // `transformResponse` allows changes to the response data to be made before // it is passed to then/catch transformResponse: [function (data) { // Do whatever you want to transform the data return data; }], // `headers` are custom headers to be sent headers: {'X-Requested-With': 'XMLHttpRequest'}, // `params` are the URL parameters to be sent with the request // Must be a plain object or a URLSearchParams object params: { ID: 12345 }, // `paramsSerializer` is an optional function in charge of serializing `params` // (e.g. https://www.npmjs.com/package/qs, http://api.jquery.com/jquery.param/) paramsSerializer: function (params) { return Qs.stringify(params, {arrayFormat: 'brackets'}) }, // `data` is the data to be sent as the request body // Only applicable for request methods 'PUT', 'POST', and 'PATCH' // When no `transformRequest` is set, must be of one of the following types: // - string, plain object, ArrayBuffer, ArrayBufferView, URLSearchParams // - Browser only: FormData, File, Blob // - Node only: Stream, Buffer data: { firstName: 'Fred' }, // syntax alternative to send data into the body // method post // only the value is sent, not the key data: 'Country=Brasil&City=Belo Horizonte', // `timeout` specifies the number of milliseconds before the request times out. // If the request takes longer than `timeout`, the request will be aborted. timeout: 1000, // default is `0` (no timeout) // `withCredentials` indicates whether or not cross-site Access-Control requests // should be made using credentials withCredentials: false, // default // `adapter` allows custom handling of requests which makes testing easier. // Return a promise and supply a valid response (see lib/adapters/README.md). adapter: function (config) { /* ... */ }, // `auth` indicates that HTTP Basic auth should be used, and supplies credentials. // This will set an `Authorization` header, overwriting any existing // `Authorization` custom headers you have set using `headers`. // Please note that only HTTP Basic auth is configurable through this parameter. // For Bearer tokens and such, use `Authorization` custom headers instead. 
auth: { username: 'janedoe', password: 's00pers3cret' }, // `responseType` indicates the type of data that the server will respond with // options are: 'arraybuffer', 'document', 'json', 'text', 'stream' // browser only: 'blob' responseType: 'json', // default // `responseEncoding` indicates encoding to use for decoding responses // Note: Ignored for `responseType` of 'stream' or client-side requests responseEncoding: 'utf8', // default // `xsrfCookieName` is the name of the cookie to use as a value for xsrf token xsrfCookieName: 'XSRF-TOKEN', // default // `xsrfHeaderName` is the name of the http header that carries the xsrf token value xsrfHeaderName: 'X-XSRF-TOKEN', // default // `onUploadProgress` allows handling of progress events for uploads onUploadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `onDownloadProgress` allows handling of progress events for downloads onDownloadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `maxContentLength` defines the max size of the http response content in bytes allowed maxContentLength: 2000, // `validateStatus` defines whether to resolve or reject the promise for a given // HTTP response status code. If `validateStatus` returns `true` (or is set to `null` // or `undefined`), the promise will be resolved; otherwise, the promise will be // rejected. validateStatus: function (status) { return status >= 200 && status < 300; // default }, // `maxRedirects` defines the maximum number of redirects to follow in node.js. // If set to 0, no redirects will be followed. maxRedirects: 5, // default // `socketPath` defines a UNIX Socket to be used in node.js. // e.g. '/var/run/docker.sock' to send requests to the docker daemon. // Only either `socketPath` or `proxy` can be specified. // If both are specified, `socketPath` is used. socketPath: null, // default // `httpAgent` and `httpsAgent` define a custom agent to be used when performing http // and https requests, respectively, in node.js. This allows options to be added like // `keepAlive` that are not enabled by default. httpAgent: new http.Agent({ keepAlive: true }), httpsAgent: new https.Agent({ keepAlive: true }), // 'proxy' defines the hostname and port of the proxy server. // You can also define your proxy using the conventional `http_proxy` and // `https_proxy` environment variables. If you are using environment variables // for your proxy configuration, you can also define a `no_proxy` environment // variable as a comma-separated list of domains that should not be proxied. // Use `false` to disable proxies, ignoring environment variables. // `auth` indicates that HTTP Basic auth should be used to connect to the proxy, and // supplies credentials. // This will set an `Proxy-Authorization` header, overwriting any existing // `Proxy-Authorization` custom headers you have set using `headers`. proxy: { host: '127.0.0.1', port: 9000, auth: { username: 'mikeymike', password: 'rapunz3l' } }, // `cancelToken` specifies a cancel token that can be used to cancel the request // (see Cancellation section below for details) cancelToken: new CancelToken(function (cancel) { }) } ``` ## Response Schema The response for a request contains the following information. 
```js { // `data` is the response that was provided by the server data: {}, // `status` is the HTTP status code from the server response status: 200, // `statusText` is the HTTP status message from the server response statusText: 'OK', // `headers` the headers that the server responded with // All header names are lower cased headers: {}, // `config` is the config that was provided to `axios` for the request config: {}, // `request` is the request that generated this response // It is the last ClientRequest instance in node.js (in redirects) // and an XMLHttpRequest instance in the browser request: {} } ``` When using `then`, you will receive the response as follows: ```js axios.get('/user/12345') .then(function (response) { console.log(response.data); console.log(response.status); console.log(response.statusText); console.log(response.headers); console.log(response.config); }); ``` When using `catch`, or passing a [rejection callback](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/then) as second parameter of `then`, the response will be available through the `error` object as explained in the [Handling Errors](#handling-errors) section. ## Config Defaults You can specify config defaults that will be applied to every request. ### Global axios defaults ```js axios.defaults.baseURL = 'https://api.example.com'; axios.defaults.headers.common['Authorization'] = AUTH_TOKEN; axios.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded'; ``` ### Custom instance defaults ```js // Set config defaults when creating the instance const instance = axios.create({ baseURL: 'https://api.example.com' }); // Alter defaults after instance has been created instance.defaults.headers.common['Authorization'] = AUTH_TOKEN; ``` ### Config order of precedence Config will be merged with an order of precedence. The order is library defaults found in [lib/defaults.js](https://github.com/axios/axios/blob/master/lib/defaults.js#L28), then `defaults` property of the instance, and finally `config` argument for the request. The latter will take precedence over the former. Here's an example. ```js // Create an instance using the config defaults provided by the library // At this point the timeout config value is `0` as is the default for the library const instance = axios.create(); // Override timeout default for the library // Now all requests using this instance will wait 2.5 seconds before timing out instance.defaults.timeout = 2500; // Override timeout for this request as it's known to take a long time instance.get('/longRequest', { timeout: 5000 }); ``` ## Interceptors You can intercept requests or responses before they are handled by `then` or `catch`. ```js // Add a request interceptor axios.interceptors.request.use(function (config) { // Do something before request is sent return config; }, function (error) { // Do something with request error return Promise.reject(error); }); // Add a response interceptor axios.interceptors.response.use(function (response) { // Any status code that lie within the range of 2xx cause this function to trigger // Do something with response data return response; }, function (error) { // Any status codes that falls outside the range of 2xx cause this function to trigger // Do something with response error return Promise.reject(error); }); ``` If you need to remove an interceptor later you can. 
```js const myInterceptor = axios.interceptors.request.use(function () {/*...*/}); axios.interceptors.request.eject(myInterceptor); ``` You can add interceptors to a custom instance of axios. ```js const instance = axios.create(); instance.interceptors.request.use(function () {/*...*/}); ``` ## Handling Errors ```js axios.get('/user/12345') .catch(function (error) { if (error.response) { // The request was made and the server responded with a status code // that falls out of the range of 2xx console.log(error.response.data); console.log(error.response.status); console.log(error.response.headers); } else if (error.request) { // The request was made but no response was received // `error.request` is an instance of XMLHttpRequest in the browser and an instance of // http.ClientRequest in node.js console.log(error.request); } else { // Something happened in setting up the request that triggered an Error console.log('Error', error.message); } console.log(error.config); }); ``` Using the `validateStatus` config option, you can define HTTP code(s) that should throw an error. ```js axios.get('/user/12345', { validateStatus: function (status) { return status < 500; // Reject only if the status code is greater than or equal to 500 } }) ``` Using `toJSON` you get an object with more information about the HTTP error. ```js axios.get('/user/12345') .catch(function (error) { console.log(error.toJSON()); }); ``` ## Cancellation You can cancel a request using a *cancel token*. > The axios cancel token API is based on the withdrawn [cancelable promises proposal](https://github.com/tc39/proposal-cancelable-promises). You can create a cancel token using the `CancelToken.source` factory as shown below: ```js const CancelToken = axios.CancelToken; const source = CancelToken.source(); axios.get('/user/12345', { cancelToken: source.token }).catch(function (thrown) { if (axios.isCancel(thrown)) { console.log('Request canceled', thrown.message); } else { // handle error } }); axios.post('/user/12345', { name: 'new name' }, { cancelToken: source.token }) // cancel the request (the message parameter is optional) source.cancel('Operation canceled by the user.'); ``` You can also create a cancel token by passing an executor function to the `CancelToken` constructor: ```js const CancelToken = axios.CancelToken; let cancel; axios.get('/user/12345', { cancelToken: new CancelToken(function executor(c) { // An executor function receives a cancel function as a parameter cancel = c; }) }); // cancel the request cancel(); ``` > Note: you can cancel several requests with the same cancel token. ## Using application/x-www-form-urlencoded format By default, axios serializes JavaScript objects to `JSON`. To send data in the `application/x-www-form-urlencoded` format instead, you can use one of the following options. ### Browser In a browser, you can use the [`URLSearchParams`](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams) API as follows: ```js const params = new URLSearchParams(); params.append('param1', 'value1'); params.append('param2', 'value2'); axios.post('/foo', params); ``` > Note that `URLSearchParams` is not supported by all browsers (see [caniuse.com](http://www.caniuse.com/#feat=urlsearchparams)), but there is a [polyfill](https://github.com/WebReflection/url-search-params) available (make sure to polyfill the global environment). 
Alternatively, you can encode data using the [`qs`](https://github.com/ljharb/qs) library: ```js const qs = require('qs'); axios.post('/foo', qs.stringify({ 'bar': 123 })); ``` Or in another way (ES6), ```js import qs from 'qs'; const data = { 'bar': 123 }; const options = { method: 'POST', headers: { 'content-type': 'application/x-www-form-urlencoded' }, data: qs.stringify(data), url, }; axios(options); ``` ### Node.js In node.js, you can use the [`querystring`](https://nodejs.org/api/querystring.html) module as follows: ```js const querystring = require('querystring'); axios.post('http://something.com/', querystring.stringify({ foo: 'bar' })); ``` You can also use the [`qs`](https://github.com/ljharb/qs) library. ###### NOTE The `qs` library is preferable if you need to stringify nested objects, as the `querystring` method has known issues with that use case (https://github.com/nodejs/node-v0.x-archive/issues/1665). ## Semver Until axios reaches a `1.0` release, breaking changes will be released with a new minor version. For example `0.5.1`, and `0.5.4` will have the same API, but `0.6.0` will have breaking changes. ## Promises axios depends on a native ES6 Promise implementation to be [supported](http://caniuse.com/promises). If your environment doesn't support ES6 Promises, you can [polyfill](https://github.com/jakearchibald/es6-promise). ## TypeScript axios includes [TypeScript](http://typescriptlang.org) definitions. ```typescript import axios from 'axios'; axios.get('/user?ID=12345'); ``` ## Resources * [Changelog](https://github.com/axios/axios/blob/master/CHANGELOG.md) * [Upgrade Guide](https://github.com/axios/axios/blob/master/UPGRADE_GUIDE.md) * [Ecosystem](https://github.com/axios/axios/blob/master/ECOSYSTEM.md) * [Contributing Guide](https://github.com/axios/axios/blob/master/CONTRIBUTING.md) * [Code of Conduct](https://github.com/axios/axios/blob/master/CODE_OF_CONDUCT.md) ## Credits axios is heavily inspired by the [$http service](https://docs.angularjs.org/api/ng/service/$http) provided in [Angular](https://angularjs.org/). Ultimately axios is an effort to provide a standalone `$http`-like service for use outside of Angular. ## License [MIT](LICENSE) # axios // adapters The modules under `adapters/` are modules that handle dispatching a request and settling a returned `Promise` once a response is received. 
## Example ```js var settle = require('./../core/settle'); module.exports = function myAdapter(config) { // At this point: // - config has been merged with defaults // - request transformers have already run // - request interceptors have already run // Make the request using config provided // Upon response settle the Promise return new Promise(function(resolve, reject) { var response = { data: responseData, status: request.status, statusText: request.statusText, headers: responseHeaders, config: config, request: request }; settle(resolve, reject, response); // From here: // - response transformers will run // - response interceptors will run }); } ``` assemblyscript-json # assemblyscript-json ## Table of contents ### Namespaces - [JSON](modules/json.md) ### Classes - [DecoderState](classes/decoderstate.md) - [JSONDecoder](classes/jsondecoder.md) - [JSONEncoder](classes/jsonencoder.md) - [JSONHandler](classes/jsonhandler.md) - [ThrowingJSONHandler](classes/throwingjsonhandler.md) # cross-spawn [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Build status][appveyor-image]][appveyor-url] [![Coverage Status][codecov-image]][codecov-url] [![Dependency status][david-dm-image]][david-dm-url] [![Dev Dependency status][david-dm-dev-image]][david-dm-dev-url] [npm-url]:https://npmjs.org/package/cross-spawn [downloads-image]:https://img.shields.io/npm/dm/cross-spawn.svg [npm-image]:https://img.shields.io/npm/v/cross-spawn.svg [travis-url]:https://travis-ci.org/moxystudio/node-cross-spawn [travis-image]:https://img.shields.io/travis/moxystudio/node-cross-spawn/master.svg [appveyor-url]:https://ci.appveyor.com/project/satazor/node-cross-spawn [appveyor-image]:https://img.shields.io/appveyor/ci/satazor/node-cross-spawn/master.svg [codecov-url]:https://codecov.io/gh/moxystudio/node-cross-spawn [codecov-image]:https://img.shields.io/codecov/c/github/moxystudio/node-cross-spawn/master.svg [david-dm-url]:https://david-dm.org/moxystudio/node-cross-spawn [david-dm-image]:https://img.shields.io/david/moxystudio/node-cross-spawn.svg [david-dm-dev-url]:https://david-dm.org/moxystudio/node-cross-spawn?type=dev [david-dm-dev-image]:https://img.shields.io/david/dev/moxystudio/node-cross-spawn.svg A cross platform solution to node's spawn and spawnSync. ## Installation Node.js version 8 and up: `$ npm install cross-spawn` Node.js version 7 and under: `$ npm install cross-spawn@6` ## Why Node has issues when using spawn on Windows: - It ignores [PATHEXT](https://github.com/joyent/node/issues/2318) - It does not support [shebangs](https://en.wikipedia.org/wiki/Shebang_(Unix)) - Has problems running commands with [spaces](https://github.com/nodejs/node/issues/7367) - Has problems running commands with posix relative paths (e.g.: `./my-folder/my-executable`) - Has an [issue](https://github.com/moxystudio/node-cross-spawn/issues/82) with command shims (files in `node_modules/.bin/`), where arguments with quotes and parenthesis would result in [invalid syntax error](https://github.com/moxystudio/node-cross-spawn/blob/e77b8f22a416db46b6196767bcd35601d7e11d54/test/index.test.js#L149) - No `options.shell` support on node `<v4.8` All these issues are handled correctly by `cross-spawn`. There are some known modules, such as [win-spawn](https://github.com/ForbesLindesay/win-spawn), that try to solve this but they are either broken or provide faulty escaping of shell arguments. 
## Usage Exactly the same way as node's [`spawn`](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options) or [`spawnSync`](https://nodejs.org/api/child_process.html#child_process_child_process_spawnsync_command_args_options), so it's a drop-in replacement. ```js const spawn = require('cross-spawn'); // Spawn NPM asynchronously const child = spawn('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' }); // Spawn NPM synchronously const result = spawn.sync('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' }); ``` ## Caveats ### Using `options.shell` as an alternative to `cross-spawn` Starting from node `v4.8`, `spawn` has a `shell` option that allows you to run commands from within a shell. This new option solves the [PATHEXT](https://github.com/joyent/node/issues/2318) issue but: - It's not supported in node `<v4.8` - You must manually escape the command and arguments, which is very error prone, especially when passing user input - There are a lot of other unresolved issues from the [Why](#why) section that you must take into account. If you are using the `shell` option to spawn a command in a cross-platform way, consider using `cross-spawn` instead. You have been warned. ### `options.shell` support While `cross-spawn` adds support for `options.shell` in node `<v4.8`, all of its enhancements are disabled. This mimics the Node.js behavior. More specifically, the command and its arguments will not be automatically escaped, nor will shebang support be offered. This is by design because if you are using `options.shell` you are probably targeting a specific platform anyway and you don't want things to get in your way. ### Shebangs support While `cross-spawn` handles shebangs on Windows, its support is limited. More specifically, it just supports `#!/usr/bin/env <program>` where `<program>` must not contain any arguments. If you would like to have the shebang support improved, feel free to contribute via a pull-request. Remember to always test your code on Windows! ## Tests `$ npm test` `$ npm test -- --watch` during development ## License Released under the [MIT License](https://www.opensource.org/licenses/mit-license.php). Like `chown -R`. Takes the same arguments as `fs.chown()`. These files are compiled dot templates from the dot folder. Do NOT edit them directly; edit the templates and run `npm run build` from the main ajv folder. # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Specification conformance whatwg-url is currently up to date with the URL spec up to commit [7ae1c69](https://github.com/whatwg/url/commit/7ae1c691c96f0d82fafa24c33aa1e8df9ffbf2bc). For `file:` URLs, whose [origin is left unspecified](https://url.spec.whatwg.org/#concept-url-origin), whatwg-url chooses to use a new opaque origin (which serializes to `"null"`). ## API ### The `URL` and `URLSearchParams` classes The main API is provided by the [`URL`](https://url.spec.whatwg.org/#url-class) and [`URLSearchParams`](https://url.spec.whatwg.org/#interface-urlsearchparams) exports, which follow the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use these.
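As a quick illustration, the exported classes behave like their browser counterparts; below is a minimal sketch (the URL is just an example):

```js
const { URL, URLSearchParams } = require("whatwg-url");

// Parse an example URL and read its components
const url = new URL("https://example.com:8080/path/page?q=term#frag");
console.log(url.host);     // "example.com:8080"
console.log(url.pathname); // "/path/page"

// URLSearchParams works the same way as in browsers
const params = new URLSearchParams(url.search);
params.append("page", "2");
console.log(params.toString()); // "q=term&page=2"
```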
### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They mostly operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. - [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/origin.html#ascii-serialisation-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` - [Percent decode](https://url.spec.whatwg.org/#percent-decode): `percentDecode(buffer)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - [`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - 
[`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by `null`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ `null`. ## Development instructions First, install [Node.js](https://nodejs.org/). Then, fetch the dependencies of whatwg-url, by running from this directory: npm install To run tests: npm test To generate a coverage report: npm run coverage To build and run the live viewer: npm run build npm run build-live-viewer Serve the contents of the `live-viewer` directory using any web server. ## Supporting whatwg-url The jsdom project (including whatwg-url) is a community-driven project maintained by a team of [volunteers](https://github.com/orgs/jsdom/people). You could support us by: - [Getting professional support for whatwg-url](https://tidelift.com/subscription/pkg/npm-whatwg-url?utm_source=npm-whatwg-url&utm_medium=referral&utm_campaign=readme) as part of a Tidelift subscription. Tidelift helps making open source sustainable for us while giving teams assurances for maintenance, licensing, and security. - Contributing directly to the project. ## Follow Redirects Drop-in replacement for Nodes `http` and `https` that automatically follows redirects. [![npm version](https://img.shields.io/npm/v/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) [![Build Status](https://travis-ci.org/follow-redirects/follow-redirects.svg?branch=master)](https://travis-ci.org/follow-redirects/follow-redirects) [![Coverage Status](https://coveralls.io/repos/follow-redirects/follow-redirects/badge.svg?branch=master)](https://coveralls.io/r/follow-redirects/follow-redirects?branch=master) [![Dependency Status](https://david-dm.org/follow-redirects/follow-redirects.svg)](https://david-dm.org/follow-redirects/follow-redirects) [![npm downloads](https://img.shields.io/npm/dm/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) `follow-redirects` provides [request](https://nodejs.org/api/http.html#http_http_request_options_callback) and [get](https://nodejs.org/api/http.html#http_http_get_options_callback) methods that behave identically to those found on the native [http](https://nodejs.org/api/http.html#http_http_request_options_callback) and [https](https://nodejs.org/api/https.html#https_https_request_options_callback) modules, with the exception that they will seamlessly follow redirects. ```javascript var http = require('follow-redirects').http; var https = require('follow-redirects').https; http.get('http://bit.ly/900913', function (response) { response.on('data', function (chunk) { console.log(chunk); }); }).on('error', function (err) { console.error(err); }); ``` You can inspect the final redirected URL through the `responseUrl` property on the `response`. 
If no redirection happened, `responseUrl` is the original request URL. ```javascript https.request({ host: 'bitly.com', path: '/UHfDGO', }, function (response) { console.log(response.responseUrl); // 'http://duckduckgo.com/robots.txt' }); ``` ## Options ### Global options Global options are set directly on the `follow-redirects` module: ```javascript var followRedirects = require('follow-redirects'); followRedirects.maxRedirects = 10; followRedirects.maxBodyLength = 20 * 1024 * 1024; // 20 MB ``` The following global options are supported: - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. ### Per-request options Per-request options are set by passing an `options` object: ```javascript var url = require('url'); var followRedirects = require('follow-redirects'); var options = url.parse('http://bit.ly/900913'); options.maxRedirects = 10; http.request(options); ``` In addition to the [standard HTTP](https://nodejs.org/api/http.html#http_http_request_options_callback) and [HTTPS options](https://nodejs.org/api/https.html#https_https_request_options_callback), the following per-request options are supported: - `followRedirects` (default: `true`) – whether redirects should be followed. - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. - `agents` (default: `undefined`) – sets the `agent` option per protocol, since HTTP and HTTPS use different agents. Example value: `{ http: new http.Agent(), https: new https.Agent() }` - `trackRedirects` (default: `false`) – whether to store the redirected response details into the `redirects` array on the response object. ### Advanced usage By default, `follow-redirects` will use the Node.js default implementations of [`http`](https://nodejs.org/api/http.html) and [`https`](https://nodejs.org/api/https.html). To enable features such as caching and/or intermediate request tracking, you might instead want to wrap `follow-redirects` around custom protocol implementations: ```javascript var followRedirects = require('follow-redirects').wrap({ http: require('your-custom-http'), https: require('your-custom-https'), }); ``` Such custom protocols only need an implementation of the `request` method. ## Browserify Usage Due to the way `XMLHttpRequest` works, the `browserify` versions of `http` and `https` already follow redirects. If you are *only* targeting the browser, then this library has little value for you. If you want to write cross platform code for node and the browser, `follow-redirects` provides a great solution for making the native node modules behave the same as they do in browserified builds in the browser. To avoid bundling unnecessary code you should tell browserify to swap out `follow-redirects` with the standard modules when bundling. 
To make this easier, you need to change how you require the modules: ```javascript var http = require('follow-redirects/http'); var https = require('follow-redirects/https'); ``` You can then replace `follow-redirects` in your browserify configuration like so: ```javascript "browser": { "follow-redirects/http" : "http", "follow-redirects/https" : "https" } ``` The `browserify-http` module has not kept pace with node development, and no longer behaves identically to the native module when running in the browser. If you are experiencing problems, you may want to check out [browserify-http-2](https://www.npmjs.com/package/http-browserify-2). It is more actively maintained and attempts to address a few of the shortcomings of `browserify-http`. In that case, your browserify config should look something like this: ```javascript "browser": { "follow-redirects/http" : "browserify-http-2/http", "follow-redirects/https" : "browserify-http-2/https" } ``` ## Contributing Pull Requests are always welcome. Please [file an issue](https://github.com/follow-redirects/follow-redirects/issues) detailing your proposal before you invest your valuable time. Additional features and bug fixes should be accompanied by tests. You can run the test suite locally with a simple `npm test` command. ## Debug Logging `follow-redirects` uses the excellent [debug](https://www.npmjs.com/package/debug) for logging. To turn on logging, set the environment variable `DEBUG=follow-redirects` for debug output from just this module. When running the test suite it is sometimes advantageous to set `DEBUG=*` to see output from the express server as well. ## Authors - Olivier Lalonde (olalonde@gmail.com) - James Talmage (james@talmage.io) - [Ruben Verborgh](https://ruben.verborgh.org/) ## License [MIT License](https://github.com/follow-redirects/follow-redirects/blob/master/LICENSE) # eslint-visitor-keys [![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys) Constants and utilities for visitor keys to traverse an AST. ## 💿 Installation Use [npm] to install. ```bash $ npm install eslint-visitor-keys ``` ### Requirements - [Node.js] 10.0.0 or later. ## 📖 Usage ```js const evk = require("eslint-visitor-keys") ``` ### evk.KEYS > type: `{ [type: string]: string[] | undefined }` Visitor keys. These keys are frozen. This is an object: keys are the types of [ESTree] nodes, and their values are arrays of property names which have child nodes. For example: ``` console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"] ``` ### evk.getKeys(node) > type: `(node: object) => string[]` Get the visitor keys of a given AST node. This is similar to `Object.keys(node)` of ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`. This will be used to traverse unknown nodes.
For example: ``` const node = { type: "AssignmentExpression", left: { type: "Identifier", name: "foo" }, right: { type: "Literal", value: 0 } } console.log(evk.getKeys(node)) // → ["type", "left", "right"] ``` ### evk.unionWith(additionalKeys) > type: `(additionalKeys: object) => { [type: string]: string[] | undefined }` Make the union set with `evk.KEYS` and the given keys. - The order of keys is: `additionalKeys` comes first, then `evk.KEYS` is concatenated after that. - It removes duplicated keys, keeping the first one. For example: ``` console.log(evk.unionWith({ MethodDefinition: ["decorators"] })) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... } ``` ## 📰 Change log See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases). ## 🍻 Contributing Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/). ### Development commands - `npm test` runs tests and measures code coverage. - `npm run lint` checks source code with ESLint. - `npm run coverage` opens the code coverage report of the previous test with your default browser. - `npm run release` publishes this package to the [npm] registry. [npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree # emoji-regex [![Build status](https://travis-ci.org/mathiasbynens/emoji-regex.svg?branch=master)](https://travis-ci.org/mathiasbynens/emoji-regex) _emoji-regex_ offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard. This repository contains a script that generates this regular expression based on [the data from Unicode v12](https://github.com/mathiasbynens/unicode-12.0.0). Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard. ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install emoji-regex ``` In [Node.js](https://nodejs.org/): ```js const emojiRegex = require('emoji-regex'); // Note: because the regular expression has the global flag set, this module // exports a function that returns the regex rather than exporting the regular // expression itself, to make it impossible to (accidentally) mutate the // original regular expression. const text = ` \u{231A}: ⌚ default emoji presentation character (Emoji_Presentation) \u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji \u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base) \u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier `; const regex = emojiRegex(); let match; while (match = regex.exec(text)) { const emoji = match[0]; console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`); } ``` Console output: ``` Matched sequence ⌚ — code points: 1 Matched sequence ⌚ — code points: 1 Matched sequence ↔️ — code points: 2 Matched sequence ↔️ — code points: 2 Matched sequence 👩 — code points: 1 Matched sequence 👩 — code points: 1 Matched sequence 👩🏿 — code points: 2 Matched sequence 👩🏿 — code points: 2 ``` To match emoji in their textual representation as well (i.e.
emoji that are not `Emoji_Presentation` symbols and that aren’t forced to render as emoji by a variation selector), `require` the other regex: ```js const emojiRegex = require('emoji-regex/text.js'); ``` Additionally, in environments which support ES2015 Unicode escapes, you may `require` ES2015-style versions of the regexes: ```js const emojiRegex = require('emoji-regex/es2015/index.js'); const emojiRegexText = require('emoji-regex/es2015/text.js'); ``` ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License _emoji-regex_ is available under the [MIT](https://mths.be/mit) license. # eslint-utils [![npm version](https://img.shields.io/npm/v/eslint-utils.svg)](https://www.npmjs.com/package/eslint-utils) [![Downloads/month](https://img.shields.io/npm/dm/eslint-utils.svg)](http://www.npmtrends.com/eslint-utils) [![Build Status](https://github.com/mysticatea/eslint-utils/workflows/CI/badge.svg)](https://github.com/mysticatea/eslint-utils/actions) [![Coverage Status](https://codecov.io/gh/mysticatea/eslint-utils/branch/master/graph/badge.svg)](https://codecov.io/gh/mysticatea/eslint-utils) [![Dependency Status](https://david-dm.org/mysticatea/eslint-utils.svg)](https://david-dm.org/mysticatea/eslint-utils) ## 🏁 Goal This package provides utility functions and classes for making custom ESLint rules. For example: - [getStaticValue](https://eslint-utils.mysticatea.dev/api/ast-utils.html#getstaticvalue) evaluates static values on the AST. - [ReferenceTracker](https://eslint-utils.mysticatea.dev/api/scope-utils.html#referencetracker-class) checks the members of modules/globals, handling assignments and destructuring. ## 📖 Usage See [documentation](https://eslint-utils.mysticatea.dev/). ## 📰 Changelog See [releases](https://github.com/mysticatea/eslint-utils/releases). ## ❤️ Contributing Contributions are welcome! Please use GitHub's Issues/PRs. ### Development Tools - `npm test` runs tests and measures coverage. - `npm run clean` removes the coverage result of the `npm test` command. - `npm run coverage` shows the coverage result of the last `npm test` command. - `npm run lint` runs ESLint. - `npm run watch` runs tests on each file change. [Build]: http://img.shields.io/travis/litejs/natural-compare-lite.png [Coverage]: http://img.shields.io/coveralls/litejs/natural-compare-lite.png [1]: https://travis-ci.org/litejs/natural-compare-lite [2]: https://coveralls.io/r/litejs/natural-compare-lite [npm package]: https://npmjs.org/package/natural-compare-lite [GitHub repo]: https://github.com/litejs/natural-compare-lite @version 1.4.0 @date 2015-10-26 @stability 3 - Stable Natural Compare &ndash; [![Build][]][1] [![Coverage][]][2] =============== Compare strings containing a mix of letters and numbers in the way a human being would in sort order. This is described as a "natural ordering". ```text Standard sorting: Natural order sorting: img1.png img1.png img10.png img2.png img12.png img10.png img2.png img12.png ``` String.naturalCompare returns a number indicating whether a reference string comes before or after or is the same as the given string in sort order. Use it with the built-in sort() function.
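A minimal sketch of the difference shown above (the file names are just examples; the Usage section below has more complete examples):

```js
require("natural-compare-lite"); // attaches String.naturalCompare to String

var files = ["img12.png", "img10.png", "img2.png", "img1.png"];

// Standard lexicographic sort
console.log(files.slice().sort());
// → ["img1.png", "img10.png", "img12.png", "img2.png"]

// Natural order sort
console.log(files.slice().sort(String.naturalCompare));
// → ["img1.png", "img2.png", "img10.png", "img12.png"]
```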
### Installation - In browser ```html <script src=min.natural-compare.js></script> ``` - In node.js: `npm install natural-compare-lite` ```javascript require("natural-compare-lite") ``` ### Usage ```javascript // Simple case-sensitive example var a = ["z1.doc", "z10.doc", "z17.doc", "z2.doc", "z23.doc", "z3.doc"]; a.sort(String.naturalCompare); // ["z1.doc", "z2.doc", "z3.doc", "z10.doc", "z17.doc", "z23.doc"] // Use wrapper function for case insensitivity a.sort(function(a, b){ return String.naturalCompare(a.toLowerCase(), b.toLowerCase()); }) // In most cases we want to sort an array of objects var a = [ {"street":"350 5th Ave", "room":"A-1021"} , {"street":"350 5th Ave", "room":"A-21046-b"} ]; // sort by street, then by room a.sort(function(a, b){ return String.naturalCompare(a.street, b.street) || String.naturalCompare(a.room, b.room); }) // When text transformation is needed (e.g. toLowerCase()), // it is best for performance to keep the // transformed key in that object. // There is no need to do the text transformation // on each comparison when sorting. var a = [ {"make":"Audi", "model":"A6"} , {"make":"Kia", "model":"Rio"} ]; // sort by make, then by model a.map(function(car){ car.sort_key = (car.make + " " + car.model).toLowerCase(); }) a.sort(function(a, b){ return String.naturalCompare(a.sort_key, b.sort_key); }) ``` - Works well with dates in ISO format, e.g. "Rev 2012-07-26.doc". ### Custom alphabet It is possible to configure a custom alphabet to achieve a desired order. ```javascript // Estonian alphabet String.alphabet = "ABDEFGHIJKLMNOPRSŠZŽTUVÕÄÖÜXYabdefghijklmnoprsšzžtuvõäöüxy" ["t", "z", "x", "õ"].sort(String.naturalCompare) // ["z", "t", "õ", "x"] // Russian alphabet String.alphabet = "АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдеёжзийклмнопрстуфхцчшщъыьэюя" ["Ё", "А", "Б"].sort(String.naturalCompare) // ["А", "Б", "Ё"] ``` External links -------------- - [GitHub repo](https://github.com/litejs/natural-compare-lite) - [jsperf test](http://jsperf.com/natural-sort-2/12) Licence ------- Copyright (c) 2012-2015 Lauri Rooden &lt;lauri@rooden.ee&gt; [The MIT License](http://lauri.rooden.ee/mit-license.txt) # flatted [![Downloads](https://img.shields.io/npm/dm/flatted.svg)](https://www.npmjs.com/package/flatted) [![Coverage Status](https://coveralls.io/repos/github/WebReflection/flatted/badge.svg?branch=main)](https://coveralls.io/github/WebReflection/flatted?branch=main) [![Build Status](https://travis-ci.com/WebReflection/flatted.svg?branch=main)](https://travis-ci.com/WebReflection/flatted) [![License: ISC](https://img.shields.io/badge/License-ISC-yellow.svg)](https://opensource.org/licenses/ISC) ![WebReflection status](https://offline.report/status/webreflection.svg) ![snow flake](./flatted.jpg) <sup>**Social Media Photo by [Matt Seymour](https://unsplash.com/@mattseymour) on [Unsplash](https://unsplash.com/)**</sup> ## Announcement 📣 There is a standard approach to recursion and more data types than what JSON allows, and it's part of the [Structured Clone polyfill](https://github.com/ungap/structured-clone/#readme). Besides acting as a polyfill, its `@ungap/structured-clone/json` export provides both `stringify` and `parse`, and it has been tested to be faster than *flatted*, while its produced output is also generally smaller than *flatted*'s. The *@ungap/structured-clone* module is, in short, a drop-in replacement for *flatted*, but it's not compatible with *flatted*'s specialized syntax.
However, if recursion, as well as more data-types, are what you are after, or of interest for your projects/use cases, consider switching to this new module whenever you can 👍 - - - A super light (0.5K) and fast circular JSON parser, directly from the creator of [CircularJSON](https://github.com/WebReflection/circular-json/#circularjson). Now available also for **[PHP](./php/flatted.php)**. ```sh npm i flatted ``` Usable via [CDN](https://unpkg.com/flatted) or as a regular module. ```js // ESM import {parse, stringify, toJSON, fromJSON} from 'flatted'; // CJS const {parse, stringify, toJSON, fromJSON} = require('flatted'); const a = [{}]; a[0].a = a; a.push(a); stringify(a); // [["1","0"],{"a":"0"}] ``` ## toJSON and fromJSON If you'd like to implicitly survive JSON serialization, these two helpers help: ```js import {toJSON, fromJSON} from 'flatted'; class RecursiveMap extends Map { static fromJSON(any) { return new this(fromJSON(any)); } toJSON() { return toJSON([...this.entries()]); } } const recursive = new RecursiveMap; const same = {}; same.same = same; recursive.set('same', same); const asString = JSON.stringify(recursive); const asMap = RecursiveMap.fromJSON(JSON.parse(asString)); asMap.get('same') === asMap.get('same').same; // true ``` ## Flatted VS JSON As with every other specialized format capable of serializing and deserializing circular data, you should never `JSON.parse(Flatted.stringify(data))`, and you should never `Flatted.parse(JSON.stringify(data))`. The only combination that works is `Flatted.parse(Flatted.stringify(data))`, just as it is for _CircularJSON_ or any other such format; otherwise there is no guarantee of data integrity. Also please note this project serializes and deserializes only data compatible with JSON, so that sockets, or anything else with internal classes different from those allowed by the JSON standard, won't be serialized and deserialized as expected. ### New in V1: Exact same JSON API * Added a [reviver](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#Syntax) parameter to `.parse(string, reviver)` to revive your own objects. * Added a [replacer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#Syntax) and a `space` parameter to `.stringify(object, replacer, space)` for feature parity with the JSON signature. ### Compatibility All ECMAScript engines compatible with `Map`, `Set`, `Object.keys`, and `Array.prototype.reduce` will work, even if polyfilled. ### How does it work? While stringifying, all Objects, including Arrays, and strings, are flattened out and replaced with a unique index. `*` Once parsed, all indexes will be replaced through the flattened collection. <sup><sub>`*` represented as a string to avoid conflicts with numbers</sub></sup> ```js // logic example var a = [{one: 1}, {two: '2'}]; a[0].a = a; // a is the main object, will be at index '0' // {one: 1} is the second object, index '1' // {two: '2'} the third, in '2', and it has a string // which will be found at index '3' Flatted.stringify(a); // [["1","2"],{"one":1,"a":"0"},{"two":"3"},"2"] // a[one,two] {one: 1, a} {two: '2'} '2' ``` ### Estraverse [![Build Status](https://secure.travis-ci.org/estools/estraverse.svg)](http://travis-ci.org/estools/estraverse) Estraverse ([estraverse](http://github.com/estools/estraverse)) provides [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) traversal functions from the [esmangle project](http://github.com/estools/esmangle).
### Documentation You can find usage docs at [wiki page](https://github.com/estools/estraverse/wiki/Usage). ### Example Usage The following code will output all variables declared at the root of a file. ```javascript estraverse.traverse(ast, { enter: function (node, parent) { if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration') return estraverse.VisitorOption.Skip; }, leave: function (node, parent) { if (node.type == 'VariableDeclarator') console.log(node.id.name); } }); ``` We can use `this.skip`, `this.remove` and `this.break` functions instead of using Skip, Remove and Break. ```javascript estraverse.traverse(ast, { enter: function (node) { this.break(); } }); ``` And estraverse provides `estraverse.replace` function. When returning node from `enter`/`leave`, current node is replaced with it. ```javascript result = estraverse.replace(tree, { enter: function (node) { // Replace it with replaced. if (node.type === 'Literal') return replaced; } }); ``` By passing `visitor.keys` mapping, we can extend estraverse traversing functionality. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Extending the existing traversing rules. keys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } }); ``` By passing `visitor.fallback` option, we can control the behavior when encountering unknown nodes. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Iterating the child **nodes** of unknown nodes. fallback: 'iteration' }); ``` When `visitor.fallback` is a function, we can determine which keys to visit on each node. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Skip the `argument` property of each node fallback: function(node) { return Object.keys(node).filter(function(key) { return key !== 'argument'; }); } }); ``` ### License Copyright (C) 2012-2016 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. 
```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. 
A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). 
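As an illustrative sketch (not taken from the original docs), this shows that beyond the allocation methods above, a `Buffer` obtained from `safe-buffer` behaves exactly like the regular Node.js `Buffer`:

```js
var Buffer = require('safe-buffer').Buffer

var a = Buffer.from('hi')           // safe conversion from a string
var b = Buffer.alloc(2, 0x21)       // two bytes, each filled with 0x21 ('!')
var joined = Buffer.concat([a, b])  // the standard Buffer API works unchanged

console.log(joined.toString())      // 'hi!!'
console.log(joined.toString('hex')) // '68692121'
```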
## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). 
The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. 
If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for `bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org)
# cliui ![ci](https://github.com/yargs/cliui/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/cliui) easily create complex multi-column command-line-interfaces. ## Example ```js const ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` ## Deno/ESM Support As of `v7` `cliui` supports [Deno](https://github.com/denoland/deno) and [ESM](https://nodejs.org/api/esm.html#esm_ecmascript_modules): ```typescript import cliui from "https://deno.land/x/cliui/deno.ts"; const ui = cliui({}) ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div({ text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. # randexp.js randexp will generate a random string that matches a given RegExp Javascript object.
[![Build Status](https://secure.travis-ci.org/fent/randexp.js.svg)](http://travis-ci.org/fent/randexp.js) [![Dependency Status](https://david-dm.org/fent/randexp.js.svg)](https://david-dm.org/fent/randexp.js) [![codecov](https://codecov.io/gh/fent/randexp.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/randexp.js) # Usage ```js var RandExp = require('randexp'); // supports grouping and piping new RandExp(/hello+ (world|to you)/).gen(); // => hellooooooooooooooooooo world // sets and ranges and references new RandExp(/<([a-z]\w{0,20})>foo<\1>/).gen(); // => <m5xhdg>foo<m5xhdg> // wildcard new RandExp(/random stuff: .+/).gen(); // => random stuff: l3m;Hf9XYbI [YPaxV>U*4-_F!WXQh9>;rH3i l!8.zoh?[utt1OWFQrE ^~8zEQm]~tK // ignore case new RandExp(/xxx xtreme dragon warrior xxx/i).gen(); // => xxx xtReME dRAGON warRiOR xXX // dynamic regexp shortcut new RandExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i'); // is the same as new RandExp(new RegExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i')); ``` If you're only going to use `gen()` once with a regexp and want slightly shorter syntax for it ```js var randexp = require('randexp').randexp; randexp(/[1-6]/); // 4 randexp('great|good( job)?|excellent'); // great ``` If you miss the old syntax ```js require('randexp').sugar(); /yes|no|maybe|i don't know/.gen(); // maybe ``` # Motivation Regular expressions are used in every language, every programmer is familiar with them. Regex can be used to easily express complex strings. What better way to generate a random string than with a language you can use to express the string you want? Thanks to [String-Random](http://search.cpan.org/~steve/String-Random-0.22/lib/String/Random.pm) for giving me the idea to make this in the first place and [randexp](https://github.com/benburkert/randexp) for the sweet `.gen()` syntax. # Default Range The default generated character range includes printable ASCII. In order to add or remove characters, a `defaultRange` attribute is exposed. you can `subtract(from, to)` and `add(from, to)` ```js var randexp = new RandExp(/random stuff: .+/); randexp.defaultRange.subtract(32, 126); randexp.defaultRange.add(0, 65535); randexp.gen(); // => random stuff: 湐箻ໜ䫴␩⶛㳸長���邓蕲뤀쑡篷皇硬剈궦佔칗븛뀃匫鴔事좍ﯣ⭼ꝏ䭍詳蒂䥂뽭 ``` # Custom PRNG The default randomness is provided by `Math.random()`. If you need to use a seedable or cryptographic PRNG, you can override `RandExp.prototype.randInt` or `randexp.randInt` (where `randexp` is an instance of `RandExp`). `randInt(from, to)` accepts an inclusive range and returns a randomly selected number within that range. # Infinite Repetitionals Repetitional tokens such as `*`, `+`, and `{3,}` have an infinite max range. In this case, randexp looks at its min and adds 100 to it to get a useable max value. If you want to use another int other than 100 you can change the `max` property in `RandExp.prototype` or the RandExp instance. ```js var randexp = new RandExp(/no{1,}/); randexp.max = 1000000; ``` With `RandExp.sugar()` ```js var regexp = /(hi)*/; regexp.max = 1000000; ``` # Bad Regular Expressions There are some regular expressions which can never match any string. * Ones with badly placed positionals such as `/a^/` and `/$c/m`. Randexp will ignore positional tokens. * Back references to non-existing groups like `/(a)\1\2/`. Randexp will ignore those references, returning an empty string for them. If the group exists only after the reference is used such as in `/\1 (hey)/`, it will too be ignored. 
* Custom negated character sets with two sets inside that cancel each other out. Example: `/[^\w\W]/`. If you give this to randexp, it will return an empty string for this set since it can't match anything. # Projects based on randexp.js ## JSON-Schema Faker Use generators to populate JSON Schema samples. See: [jsf on github](https://github.com/json-schema-faker/json-schema-faker/) and [jsf demo page](http://json-schema-faker.js.org/). # Install ### Node.js npm install randexp ### Browser Download the [minified version](https://github.com/fent/randexp.js/releases) from the latest release. # Tests Tests are written with [mocha](https://mochajs.org) ```bash npm test ``` # License MIT # assemblyscript-regex A regex engine for AssemblyScript. [AssemblyScript](https://www.assemblyscript.org/) is a new language, based on TypeScript, that runs on WebAssembly. AssemblyScript has a lightweight standard library, but lacks support for Regular Expression. The project fills that gap! This project exposes an API that mirrors the JavaScript [RegExp](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp) class: ```javascript const regex = new RegExp("fo*", "g"); const str = "table football, foul"; let match: Match | null = regex.exec(str); while (match != null) { // first iteration // match.index = 6 // match.matches[0] = "foo" // second iteration // match.index = 16 // match.matches[0] = "fo" match = regex.exec(str); } ``` ## Project status The initial focus of this implementation has been feature support and functionality over performance. It currently supports a sufficient number of regex features to be considered useful, including most character classes, common assertions, groups, alternations, capturing groups and quantifiers. The next phase of development will focussed on more extensive testing and performance. The project currently has reasonable unit test coverage, focussed on positive and negative test cases on a per-feature basis. It also includes a more exhaustive test suite with test cases borrowed from another regex library. ### Feature support Based on the classfication within the [MDN cheatsheet](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions/Cheatsheet) **Character sets** - [x] . - [x] \d - [x] \D - [x] \w - [x] \W - [x] \s - [x] \S - [x] \t - [x] \r - [x] \n - [x] \v - [x] \f - [ ] [\b] - [ ] \0 - [ ] \cX - [x] \xhh - [x] \uhhhh - [ ] \u{hhhh} or \u{hhhhh} - [x] \ **Assertions** - [x] ^ - [x] $ - [ ] \b - [ ] \B **Other assertions** - [ ] x(?=y) Lookahead assertion - [ ] x(?!y) Negative lookahead assertion - [ ] (?<=y)x Lookbehind assertion - [ ] (?<!y)x Negative lookbehind assertion **Groups and ranges** - [x] x|y - [x] [xyz][a-c] - [x] [^xyz][^a-c] - [x] (x) capturing group - [ ] \n back reference - [ ] (?<Name>x) named capturing group - [x] (?:x) Non-capturing group **Quantifiers** - [x] x\* - [x] x+ - [x] x? - [x] x{n} - [x] x{n,} - [x] x{n,m} - [ ] x\*? / x+? / ... **RegExp** - [x] global - [ ] sticky - [x] case insensitive - [x] multiline - [x] dotAll - [ ] unicode ### Development This project is open source, MIT licenced and your contributions are very much welcomed. To get started, check out the repository and install dependencies: ``` $ npm install ``` A few general points about the tools and processes this project uses: - This project uses prettier for code formatting and eslint to provide additional syntactic checks. These are both run on `npm test` and as part of the CI build. 
- The unit tests are executed using [as-pect](https://github.com/jtenner/as-pect) - a native AssemblyScript test runner - The specification tests are within the `spec` folder. The `npm run test:generate` target transforms these tests into as-pect tests which execute as part of the standard build / test cycle - In order to support improved debugging you can execute this library as TypeScript (rather than WebAssembly), via the `npm run tsrun` target. # y18n [![Build Status][travis-image]][travis-url] [![Coverage Status][coveralls-image]][coveralls-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js var __ = require('y18n').__ console.log(__('my awesome string %s', 'foo')) ``` output: `my awesome string foo` _using tagged template literals_ ```js var __ = require('y18n').__ var str = 'foo' console.log(__`my awesome string ${str}`) ``` output: `my awesome string foo` _pluralization support:_ ```js var __n = require('y18n').__n console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')) ``` output: `2 fishes foo` ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. 
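As an illustrative sketch (the `./locales` directory and the translation strings below are assumptions, not part of the original docs), the locale helpers above can be combined like this:

```js
var y18n = require('y18n')({
  directory: './locales', // assumed locale directory
  locale: 'en'
})

y18n.setLocale('pirate')            // switch the active locale
console.log(y18n.getLocale())       // 'pirate'
y18n.updateLocale({ Hi: 'Ahoy!' })  // add/override keys for the current locale
console.log(y18n.__('Hi'))          // 'Ahoy!'
```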
## License ISC [travis-url]: https://travis-ci.org/yargs/y18n [travis-image]: https://img.shields.io/travis/yargs/y18n.svg [coveralls-url]: https://coveralls.io/github/yargs/y18n [coveralls-image]: https://img.shields.io/coveralls/yargs/y18n.svg [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard # set-blocking [![Build Status](https://travis-ci.org/yargs/set-blocking.svg)](https://travis-ci.org/yargs/set-blocking) [![NPM version](https://img.shields.io/npm/v/set-blocking.svg)](https://www.npmjs.com/package/set-blocking) [![Coverage Status](https://coveralls.io/repos/yargs/set-blocking/badge.svg?branch=)](https://coveralls.io/r/yargs/set-blocking?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) set blocking `stdio` and `stderr` ensuring that terminal output does not truncate. ```js const setBlocking = require('set-blocking') setBlocking(true) console.log(someLargeStringToOutput) ``` ## Historical Context/Word of Warning This was created as a shim to address the bug discussed in [node #6456](https://github.com/nodejs/node/issues/6456). This bug crops up on newer versions of Node.js (`0.12+`), truncating terminal output. You should be mindful of the side-effects caused by using `set-blocking`: * if your module sets blocking to `true`, it will effect other modules consuming your library. In [yargs](https://github.com/yargs/yargs/blob/master/yargs.js#L653) we only call `setBlocking(true)` once we already know we are about to call `process.exit(code)`. * this patch will not apply to subprocesses spawned with `isTTY = true`, this is the [default `spawn()` behavior](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options). ## License ISC # lru cache A cache object that deletes the least-recently-used items. [![Build Status](https://travis-ci.org/isaacs/node-lru-cache.svg?branch=master)](https://travis-ci.org/isaacs/node-lru-cache) [![Coverage Status](https://coveralls.io/repos/isaacs/node-lru-cache/badge.svg?service=github)](https://coveralls.io/github/isaacs/node-lru-cache) ## Installation: ```javascript npm install lru-cache --save ``` ## Usage: ```javascript var LRU = require("lru-cache") , options = { max: 500 , length: function (n, key) { return n * 2 + key.length } , dispose: function (key, n) { n.close() } , maxAge: 1000 * 60 * 60 } , cache = new LRU(options) , otherCache = new LRU(50) // sets just the max size cache.set("key", "value") cache.get("key") // "value" // non-string keys ARE fully supported // but note that it must be THE SAME object, not // just a JSON-equivalent object. var someObject = { a: 1 } cache.set(someObject, 'a value') // Object keys are not toString()-ed cache.set('[object Object]', 'a different value') assert.equal(cache.get(someObject), 'a value') // A similar object with same keys/values won't work, // because it's a different object identity assert.equal(cache.get({ a: 1 }), undefined) cache.reset() // empty the cache ``` If you put more stuff in it, then items will fall out. If you try to put an oversized thing in it, then it'll fall out right away. ## Options * `max` The maximum size of the cache, checked by applying the length function to all values in the cache. 
Not setting this is kind of silly, since that's the whole purpose of this lib, but it defaults to `Infinity`. Setting it to a non-number or negative number will throw a `TypeError`. Setting it to 0 makes it be `Infinity`. * `maxAge` Maximum age in ms. Items are not pro-actively pruned out as they age, but if you try to get an item that is too old, it'll drop it and return undefined instead of giving it to you. Setting this to a negative value will make everything seem old! Setting it to a non-number will throw a `TypeError`. * `length` Function that is used to calculate the length of stored items. If you're storing strings or buffers, then you probably want to do something like `function(n, key){return n.length}`. The default is `function(){return 1}`, which is fine if you want to store `max` like-sized things. The item is passed as the first argument, and the key is passed as the second argumnet. * `dispose` Function that is called on items when they are dropped from the cache. This can be handy if you want to close file descriptors or do other cleanup tasks when items are no longer accessible. Called with `key, value`. It's called *before* actually removing the item from the internal cache, so if you want to immediately put it back in, you'll have to do that in a `nextTick` or `setTimeout` callback or it won't do anything. * `stale` By default, if you set a `maxAge`, it'll only actually pull stale items out of the cache when you `get(key)`. (That is, it's not pre-emptively doing a `setTimeout` or anything.) If you set `stale:true`, it'll return the stale value before deleting it. If you don't set this, then it'll return `undefined` when you try to get a stale entry, as if it had already been deleted. * `noDisposeOnSet` By default, if you set a `dispose()` method, then it'll be called whenever a `set()` operation overwrites an existing key. If you set this option, `dispose()` will only be called when a key falls out of the cache, not when it is overwritten. * `updateAgeOnGet` When using time-expiring entries with `maxAge`, setting this to `true` will make each item's effective time update to the current time whenever it is retrieved from cache, causing it to not expire. (It can still fall out of cache based on recency of use, of course.) ## API * `set(key, value, maxAge)` * `get(key) => value` Both of these will update the "recently used"-ness of the key. They do what you think. `maxAge` is optional and overrides the cache `maxAge` option if provided. If the key is not found, `get()` will return `undefined`. The key and val can be any value. * `peek(key)` Returns the key value (or `undefined` if not found) without updating the "recently used"-ness of the key. (If you find yourself using this a lot, you *might* be using the wrong sort of data structure, but there are some use cases where it's handy.) * `del(key)` Deletes a key out of the cache. * `reset()` Clear the cache entirely, throwing away all values. * `has(key)` Check if a key is in the cache, without updating the recent-ness or deleting it for being stale. * `forEach(function(value,key,cache), [thisp])` Just like `Array.prototype.forEach`. Iterates over all the keys in the cache, in order of recent-ness. (Ie, more recently used items are iterated over first.) * `rforEach(function(value,key,cache), [thisp])` The same as `cache.forEach(...)` but items are iterated over in reverse order. (ie, less recently used items are iterated over first.) * `keys()` Return an array of the keys in the cache. 
* `values()` Return an array of the values in the cache. * `length` Return total length of objects in cache taking into account the `length` options function. * `itemCount` Return total quantity of objects currently in cache. Note that `stale` (see options) items are returned as part of this item count. * `dump()` Return an array of the cache entries ready for serialization and usage with `destinationCache.load(arr)`. * `load(cacheEntriesArray)` Loads another cache entries array, obtained with `sourceCache.dump()`, into the cache. The destination cache is reset before loading new entries. * `prune()` Manually iterates over the entire cache, proactively pruning old entries. # binary-install Install .tar.gz binary applications via npm ## Usage This library provides a single class `Binary` that takes a download URL and some optional arguments. You **must** provide either `name` or `installDirectory` when creating your `Binary`. | option | description | | ---------------- | --------------------------------------------- | | name | The name of your binary | | installDirectory | A path to the directory to install the binary | If an `installDirectory` is not provided, the binary will be installed at your OS-specific config directory. On MacOS it defaults to `~/Library/Preferences/${name}-nodejs` After your `Binary` has been created, you can run `.install()` to install the binary, and `.run()` to run it. ### Example This is meant to be used as a library - create your `Binary` with your desired options, then call `.install()` in the `postinstall` of your `package.json`, `.run()` in the `bin` section of your `package.json`, and `.uninstall()` in the `preuninstall` section of your `package.json`. See [this example project](/example) to see how to create an npm package that installs and runs a binary using the GitHub releases API. [![NPM version][npm-image]][npm-url] [![build status][travis-image]][travis-url] [![Test coverage][coveralls-image]][coveralls-url] [![Downloads][downloads-image]][downloads-url] [![Join the chat at https://gitter.im/eslint/doctrine](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/eslint/doctrine?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) # Doctrine Doctrine is a [JSDoc](http://usejsdoc.org) parser that parses documentation comments from JavaScript (you need to pass in the comment, not a whole JavaScript file). ## Installation You can install Doctrine using [npm](https://npmjs.com): ``` $ npm install doctrine --save-dev ``` Doctrine can also be used in web browsers using [Browserify](http://browserify.org). ## Usage Require doctrine inside of your JavaScript: ```js var doctrine = require("doctrine"); ``` ### parse() The primary method is `parse()`, which accepts two arguments: the JSDoc comment to parse and an optional options object. The available options are: * `unwrap` - set to `true` to delete the leading `/**`, any `*` that begins a line, and the trailing `*/` from the source text. Default: `false`. * `tags` - an array of tags to return. When specified, Doctrine returns only tags in this array. For example, if `tags` is `["param"]`, then only `@param` tags will be returned. Default: `null`. * `recoverable` - set to `true` to keep parsing even when syntax errors occur. Default: `false`. * `sloppy` - set to `true` to allow optional parameters to be specified in brackets (`@param {string} [foo]`). Default: `false`.
* `lineNumbers` - set to `true` to add `lineNumber` to each node, specifying the line on which the node is found in the source. Default: `false`. * `range` - set to `true` to add `range` to each node, specifying the start and end index of the node in the original comment. Default: `false`. Here's a simple example: ```js var ast = doctrine.parse( [ "/**", " * This function comment is parsed by doctrine", " * @param {{ok:String}} userName", "*/" ].join('\n'), { unwrap: true }); ``` This example returns the following AST: { "description": "This function comment is parsed by doctrine", "tags": [ { "title": "param", "description": null, "type": { "type": "RecordType", "fields": [ { "type": "FieldType", "key": "ok", "value": { "type": "NameExpression", "name": "String" } } ] }, "name": "userName" } ] } See the [demo page](http://eslint.org/doctrine/demo/) more detail. ## Team These folks keep the project moving and are resources for help: * Nicholas C. Zakas ([@nzakas](https://github.com/nzakas)) - project lead * Yusuke Suzuki ([@constellation](https://github.com/constellation)) - reviewer ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/doctrine/issues). ## Frequently Asked Questions ### Can I pass a whole JavaScript file to Doctrine? No. Doctrine can only parse JSDoc comments, so you'll need to pass just the JSDoc comment to Doctrine in order to work. ### License #### doctrine Copyright JS Foundation and other contributors, https://js.foundation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. #### esprima some of functions is derived from esprima Copyright (C) 2012, 2011 [Ariya Hidayat](http://ariya.ofilabs.com/about) (twitter: [@ariyahidayat](http://twitter.com/ariyahidayat)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #### closure-compiler some of the extensions are derived from closure-compiler Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ ### Where to ask for help? Join our [Chatroom](https://gitter.im/eslint/doctrine) [npm-image]: https://img.shields.io/npm/v/doctrine.svg?style=flat-square [npm-url]: https://www.npmjs.com/package/doctrine [travis-image]: https://img.shields.io/travis/eslint/doctrine/master.svg?style=flat-square [travis-url]: https://travis-ci.org/eslint/doctrine [coveralls-image]: https://img.shields.io/coveralls/eslint/doctrine/master.svg?style=flat-square [coveralls-url]: https://coveralls.io/r/eslint/doctrine?branch=master [downloads-image]: http://img.shields.io/npm/dm/doctrine.svg?style=flat-square [downloads-url]: https://www.npmjs.com/package/doctrine # Regular Expression Tokenizer Tokenizes strings that represent regular expressions. [![Build Status](https://secure.travis-ci.org/fent/ret.js.svg)](http://travis-ci.org/fent/ret.js) [![Dependency Status](https://david-dm.org/fent/ret.js.svg)](https://david-dm.org/fent/ret.js) [![codecov](https://codecov.io/gh/fent/ret.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/ret.js) # Usage ```js var ret = require('ret'); var tokens = ret(/foo|bar/.source); ``` `tokens` will contain the following object: ```js { "type": ret.types.ROOT, "options": [ [ { "type": ret.types.CHAR, "value": 102 }, { "type": ret.types.CHAR, "value": 111 }, { "type": ret.types.CHAR, "value": 111 } ], [ { "type": ret.types.CHAR, "value": 98 }, { "type": ret.types.CHAR, "value": 97 }, { "type": ret.types.CHAR, "value": 114 } ] ] } ``` # Token Types `ret.types` is a collection of the various token types exported by ret. ### ROOT Only used in the root of the regexp. This is needed due to the possibility of the root containing a pipe `|` character. In that case, the token will have an `options` key that will be an array of arrays of tokens. If not, it will contain a `stack` key that is an array of tokens. ```js { "type": ret.types.ROOT, "stack": [token1, token2...], } ``` ```js { "type": ret.types.ROOT, "options": [ [token1, token2...], [othertoken1, othertoken2...] ... ], } ``` ### GROUP Groups contain tokens that are inside of parentheses. If the group begins with `?` followed by another character, it's a special type of group. A ':' tells the group not to be remembered when `exec` is used. '=' means the previous token matches only if followed by this group, and '!' means the previous token matches only if NOT followed. Like root, it can contain an `options` key instead of `stack` if there is a pipe. ```js { "type": ret.types.GROUP, "remember": true, "followedBy": false, "notFollowedBy": false, "stack": [token1, token2...], } ``` ```js { "type": ret.types.GROUP, "remember": true, "followedBy": false, "notFollowedBy": false, "options": [ [token1, token2...], [othertoken1, othertoken2...] ... ], } ``` ### POSITION `\b`, `\B`, `^`, and `$` specify positions in the regexp.
```js { "type": ret.types.POSITION, "value": "^", } ``` ### SET Contains a key `set` specifying what tokens are allowed and a key `not` specifying if the set should be negated. A set can contain other sets, ranges, and characters. ```js { "type": ret.types.SET, "set": [token1, token2...], "not": false, } ``` ### RANGE Used in set tokens to specify a character range. `from` and `to` are character codes. ```js { "type": ret.types.RANGE, "from": 97, "to": 122, } ``` ### REPETITION ```js { "type": ret.types.REPETITION, "min": 0, "max": Infinity, "value": token, } ``` ### REFERENCE References a group token. `value` is 1-9. ```js { "type": ret.types.REFERENCE, "value": 1, } ``` ### CHAR Represents a single character token. `value` is the character code. This might seem a bit cluttered compared to concatenating characters together, but since repetition tokens only repeat the last token and not the last clause (as the pipe does), it's simpler to do it this way. ```js { "type": ret.types.CHAR, "value": 123, } ``` ## Errors ret.js will throw errors if given a string with an invalid regular expression. All possible errors are: * Invalid group. When a group with an immediate `?` character is followed by an invalid character. It can only be followed by `!`, `=`, or `:`. Example: `/(?_abc)/` * Nothing to repeat. Thrown when a repetition token is used as the first token in the current clause, i.e. right at the beginning of the regexp or group, or right after a pipe. Example: `/foo|?bar/`, `/{1,3}foo|bar/`, `/foo(+bar)/` * Unmatched ). A group was not opened, but was closed. Example: `/hello)2u/` * Unterminated group. A group was not closed. Example: `/(1(23)4/` * Unterminated character class. A custom character set was not closed. Example: `/[abc/` # Install npm install ret # Tests Tests are written with [vows](http://vowsjs.org/): ```bash npm test ``` # License MIT # inflight Add callbacks to requests in flight to avoid async duplication ## USAGE ```javascript var inflight = require('inflight') // some request that does some stuff function req(key, callback) { // key is any random string. like a url or filename or whatever. // // will return either a falsey value, indicating that the // request for this key is already in flight, or a new callback // which when called will call all callbacks passed to inflight // with the same key callback = inflight(key, callback) // If we got a falsey value back, then there's already a req going if (!callback) return // this is where you'd fetch the url or whatever // callback is also once()-ified, so it can safely be assigned // to multiple events etc. First call wins. setTimeout(function() { callback(null, key) }, 100) } // only assigns a single setTimeout // when it dings, all cbs get called req('foo', cb1) req('foo', cb2) req('foo', cb3) req('foo', cb4) ``` # regexpp [![npm version](https://img.shields.io/npm/v/regexpp.svg)](https://www.npmjs.com/package/regexpp) [![Downloads/month](https://img.shields.io/npm/dm/regexpp.svg)](http://www.npmtrends.com/regexpp) [![Build Status](https://github.com/mysticatea/regexpp/workflows/CI/badge.svg)](https://github.com/mysticatea/regexpp/actions) [![codecov](https://codecov.io/gh/mysticatea/regexpp/branch/master/graph/badge.svg)](https://codecov.io/gh/mysticatea/regexpp) [![Dependency Status](https://david-dm.org/mysticatea/regexpp.svg)](https://david-dm.org/mysticatea/regexpp) A regular expression parser for ECMAScript. ## 💿 Installation ```bash $ npm install regexpp ``` - Requires Node.js 8 or newer.
## 📖 Usage ```ts import { AST, RegExpParser, RegExpValidator, RegExpVisitor, parseRegExpLiteral, validateRegExpLiteral, visitRegExpAST } from "regexpp" ``` ### parseRegExpLiteral(source, options?) Parse a given regular expression literal then make AST object. This is equivalent to `new RegExpParser(options).parseLiteral(source)`. - **Parameters:** - `source` (`string | RegExp`) The source code to parse. - `options?` ([`RegExpParser.Options`]) The options to parse. - **Return:** - The AST of the regular expression. ### validateRegExpLiteral(source, options?) Validate a given regular expression literal. This is equivalent to `new RegExpValidator(options).validateLiteral(source)`. - **Parameters:** - `source` (`string`) The source code to validate. - `options?` ([`RegExpValidator.Options`]) The options to validate. ### visitRegExpAST(ast, handlers) Visit each node of a given AST. This is equivalent to `new RegExpVisitor(handlers).visit(ast)`. - **Parameters:** - `ast` ([`AST.Node`]) The AST to visit. - `handlers` ([`RegExpVisitor.Handlers`]) The callbacks. ### RegExpParser #### new RegExpParser(options?) - **Parameters:** - `options?` ([`RegExpParser.Options`]) The options to parse. #### parser.parseLiteral(source, start?, end?) Parse a regular expression literal. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"/abc/g"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - **Return:** - The AST of the regular expression. #### parser.parsePattern(source, start?, end?, uFlag?) Parse a regular expression pattern. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"abc"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - `uFlag?` (`boolean`) The flag to enable Unicode mode. - **Return:** - The AST of the regular expression pattern. #### parser.parseFlags(source, start?, end?) Parse a regular expression flags. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"gim"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - **Return:** - The AST of the regular expression flags. ### RegExpValidator #### new RegExpValidator(options) - **Parameters:** - `options` ([`RegExpValidator.Options`]) The options to validate. #### validator.validateLiteral(source, start, end) Validate a regular expression literal. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. #### validator.validatePattern(source, start, end, uFlag) Validate a regular expression pattern. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - `uFlag?` (`boolean`) The flag to enable Unicode mode. #### validator.validateFlags(source, start, end) Validate a regular expression flags. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. 
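To tie the parser and validator entries above together, here is a small sketch (not part of the original README; the expected outputs in the comments are abbreviated and assume a recent `regexpp` release):

```js
// Minimal sketch of the documented API, assuming `regexpp` is installed.
const { parseRegExpLiteral, validateRegExpLiteral } = require("regexpp");

// Parse a literal into an AST; node shapes follow AST.Node in src/ast.ts.
const ast = parseRegExpLiteral("/foo|bar/iu");
console.log(ast.flags.ignoreCase);            // true
console.log(ast.pattern.alternatives.length); // 2 ("foo" and "bar")

// Validation throws on a syntactically invalid literal.
try {
    validateRegExpLiteral("/(unclosed/u");
} catch (error) {
    console.error(error.message);             // reports the syntax error
}
```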
### RegExpVisitor #### new RegExpVisitor(handlers) - **Parameters:** - `handlers` ([`RegExpVisitor.Handlers`]) The callbacks. #### visitor.visit(ast) Visit each node of a given AST. - **Parameters:** - `ast` ([`AST.Node`]) The AST to visit. ## 📰 Changelog - [GitHub Releases](https://github.com/mysticatea/regexpp/releases) ## 🍻 Contributing Contributions are welcome! Please use GitHub's Issues/PRs. ### Development Tools - `npm test` runs tests and measures coverage. - `npm run build` compiles TypeScript source code to `index.js`, `index.js.map`, and `index.d.ts`. - `npm run clean` removes the temporary files which are created by `npm test` and `npm run build`. - `npm run lint` runs ESLint. - `npm run update:test` updates test fixtures. - `npm run update:ids` updates `src/unicode/ids.ts`. - `npm run watch` runs tests with `--watch` option. [`AST.Node`]: src/ast.ts#L4 [`RegExpParser.Options`]: src/parser.ts#L539 [`RegExpValidator.Options`]: src/validator.ts#L127 [`RegExpVisitor.Handlers`]: src/visitor.ts#L204 # flat-cache > A stupidly simple key/value storage using files to persist the data [![NPM Version](http://img.shields.io/npm/v/flat-cache.svg?style=flat)](https://npmjs.org/package/flat-cache) [![Build Status](https://api.travis-ci.org/royriojas/flat-cache.svg?branch=master)](https://travis-ci.org/royriojas/flat-cache) ## install ```bash npm i --save flat-cache ``` ## Usage ```js var flatCache = require('flat-cache') // loads the cache, if one does not exist for the given // Id a new one will be prepared to be created var cache = flatCache.load('cacheId'); // sets a key on the cache cache.setKey('key', { foo: 'var' }); // get a key from the cache cache.getKey('key') // { foo: 'var' } // fetch the entire persisted object cache.all() // { 'key': { foo: 'var' } } // remove a key cache.removeKey('key'); // removes a key from the cache // save it to disk cache.save(); // very important: if you don't save, no changes will be persisted // cache.save( true /* noPrune */) // can be used to prevent the removal of non visited keys // loads the cache from a given directory, if one does // not exist for the given Id a new one will be prepared to be created var cache = flatCache.load('cacheId', path.resolve('./path/to/folder')); // The following methods are useful to clear the cache // delete a given cache flatCache.clearCacheById('cacheId') // removes the cacheId document if one exists. // delete all cache flatCache.clearAll(); // remove the cache directory ``` ## Motivation for this module I needed a super simple and dumb **in-memory cache** with optional disk persistence in order to make a script that will beautify files with `esformatter` only execute on the files that were changed since the last run. To make that possible we need to store the `fileSize` and `modificationTime` of the files. So a simple `key/value` storage was needed and Bam! this module was born. ## Important notes - If no directory is specified when the `load` method is called, a folder named `.cache` will be created inside the module directory when `cache.save` is called. If you're committing your `node_modules` to any vcs, you might want to ignore the default `.cache` folder, or specify a custom directory.
- The values set on the keys of the cache should be `stringify-able` ones, meaning no circular references - All the changes to the cache state are done to memory - I could have used a timer or `Object.observe` to deliver the changes to disk, but I wanted to keep this module intentionally dumb and simple - Non visited keys are removed when `cache.save()` is called. If this is not desired, you can pass `true` to the save call like: `cache.save( true /* noPrune */ )`. ## License MIT ## Changelog [changelog](./changelog.md) long.js ======= A Long class for representing a 64 bit two's-complement integer value derived from the [Closure Library](https://github.com/google/closure-library) for stand-alone use and extended with unsigned support. [![Build Status](https://travis-ci.org/dcodeIO/long.js.svg)](https://travis-ci.org/dcodeIO/long.js) Background ---------- As of [ECMA-262 5th Edition](http://ecma262-5.com/ELS5_HTML.htm#Section_8.5), "all the positive and negative integers whose magnitude is no greater than 2<sup>53</sup> are representable in the Number type", which is "representing the doubleprecision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic". The [maximum safe integer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) in JavaScript is 2<sup>53</sup>-1. Example: 2<sup>64</sup>-1 is 1844674407370955**1615** but in JavaScript it evaluates to 1844674407370955**2000**. Furthermore, bitwise operators in JavaScript "deal only with integers in the range −2<sup>31</sup> through 2<sup>31</sup>−1, inclusive, or in the range 0 through 2<sup>32</sup>−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2<sup>32</sup> integer values." In some use cases, however, it is required to be able to reliably work with and perform bitwise operations on the full 64 bits. This is where long.js comes into play. Usage ----- The class is compatible with CommonJS and AMD loaders and is exposed globally as `Long` if neither is available. ```javascript var Long = require("long"); var longVal = new Long(0xFFFFFFFF, 0x7FFFFFFF); console.log(longVal.toString()); ... ``` API --- ### Constructor * new **Long**(low: `number`, high: `number`, unsigned?: `boolean`)<br /> Constructs a 64 bit two's-complement integer, given its low and high 32 bit values as *signed* integers. See the from* functions below for more convenient ways of constructing Longs. ### Fields * Long#**low**: `number`<br /> The low 32 bits as a signed value. * Long#**high**: `number`<br /> The high 32 bits as a signed value. * Long#**unsigned**: `boolean`<br /> Whether unsigned or not. ### Constants * Long.**ZERO**: `Long`<br /> Signed zero. * Long.**ONE**: `Long`<br /> Signed one. * Long.**NEG_ONE**: `Long`<br /> Signed negative one. * Long.**UZERO**: `Long`<br /> Unsigned zero. * Long.**UONE**: `Long`<br /> Unsigned one. * Long.**MAX_VALUE**: `Long`<br /> Maximum signed value. * Long.**MIN_VALUE**: `Long`<br /> Minimum signed value. * Long.**MAX_UNSIGNED_VALUE**: `Long`<br /> Maximum unsigned value. ### Utility * Long.**isLong**(obj: `*`): `boolean`<br /> Tests if the specified object is a Long. * Long.**fromBits**(lowBits: `number`, highBits: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the 64 bit integer that comes by concatenating the given low and high bits. Each is assumed to use 32 bits. 
* Long.**fromBytes**(bytes: `number[]`, unsigned?: `boolean`, le?: `boolean`): `Long`<br /> Creates a Long from its byte representation. * Long.**fromBytesLE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its little endian byte representation. * Long.**fromBytesBE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its big endian byte representation. * Long.**fromInt**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given 32 bit integer value. * Long.**fromNumber**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given value, provided that it is a finite number. Otherwise, zero is returned. * Long.**fromString**(str: `string`, unsigned?: `boolean`, radix?: `number`)<br /> Long.**fromString**(str: `string`, radix: `number`)<br /> Returns a Long representation of the given string, written using the specified radix. * Long.**fromValue**(val: `*`, unsigned?: `boolean`): `Long`<br /> Converts the specified value to a Long using the appropriate from* function for its type. ### Methods * Long#**add**(addend: `Long | number | string`): `Long`<br /> Returns the sum of this and the specified Long. * Long#**and**(other: `Long | number | string`): `Long`<br /> Returns the bitwise AND of this Long and the specified. * Long#**compare**/**comp**(other: `Long | number | string`): `number`<br /> Compares this Long's value with the specified's. Returns `0` if they are the same, `1` if the this is greater and `-1` if the given one is greater. * Long#**divide**/**div**(divisor: `Long | number | string`): `Long`<br /> Returns this Long divided by the specified. * Long#**equals**/**eq**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value equals the specified's. * Long#**getHighBits**(): `number`<br /> Gets the high 32 bits as a signed integer. * Long#**getHighBitsUnsigned**(): `number`<br /> Gets the high 32 bits as an unsigned integer. * Long#**getLowBits**(): `number`<br /> Gets the low 32 bits as a signed integer. * Long#**getLowBitsUnsigned**(): `number`<br /> Gets the low 32 bits as an unsigned integer. * Long#**getNumBitsAbs**(): `number`<br /> Gets the number of bits needed to represent the absolute value of this Long. * Long#**greaterThan**/**gt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than the specified's. * Long#**greaterThanOrEqual**/**gte**/**ge**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than or equal the specified's. * Long#**isEven**(): `boolean`<br /> Tests if this Long's value is even. * Long#**isNegative**(): `boolean`<br /> Tests if this Long's value is negative. * Long#**isOdd**(): `boolean`<br /> Tests if this Long's value is odd. * Long#**isPositive**(): `boolean`<br /> Tests if this Long's value is positive. * Long#**isZero**/**eqz**(): `boolean`<br /> Tests if this Long's value equals zero. * Long#**lessThan**/**lt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than the specified's. * Long#**lessThanOrEqual**/**lte**/**le**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than or equal the specified's. * Long#**modulo**/**mod**/**rem**(divisor: `Long | number | string`): `Long`<br /> Returns this Long modulo the specified. * Long#**multiply**/**mul**(multiplier: `Long | number | string`): `Long`<br /> Returns the product of this and the specified Long. 
* Long#**negate**/**neg**(): `Long`<br /> Negates this Long's value. * Long#**not**(): `Long`<br /> Returns the bitwise NOT of this Long. * Long#**notEquals**/**neq**/**ne**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value differs from the specified's. * Long#**or**(other: `Long | number | string`): `Long`<br /> Returns the bitwise OR of this Long and the specified. * Long#**shiftLeft**/**shl**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits shifted to the left by the given amount. * Long#**shiftRight**/**shr**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits arithmetically shifted to the right by the given amount. * Long#**shiftRightUnsigned**/**shru**/**shr_u**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits logically shifted to the right by the given amount. * Long#**subtract**/**sub**(subtrahend: `Long | number | string`): `Long`<br /> Returns the difference of this and the specified Long. * Long#**toBytes**(le?: `boolean`): `number[]`<br /> Converts this Long to its byte representation. * Long#**toBytesLE**(): `number[]`<br /> Converts this Long to its little endian byte representation. * Long#**toBytesBE**(): `number[]`<br /> Converts this Long to its big endian byte representation. * Long#**toInt**(): `number`<br /> Converts the Long to a 32 bit integer, assuming it is a 32 bit integer. * Long#**toNumber**(): `number`<br /> Converts the Long to a the nearest floating-point representation of this value (double, 53 bit mantissa). * Long#**toSigned**(): `Long`<br /> Converts this Long to signed. * Long#**toString**(radix?: `number`): `string`<br /> Converts the Long to a string written in the specified radix. * Long#**toUnsigned**(): `Long`<br /> Converts this Long to unsigned. * Long#**xor**(other: `Long | number | string`): `Long`<br /> Returns the bitwise XOR of this Long and the given one. Building -------- To build an UMD bundle to `dist/long.js`, run: ``` $> npm install $> npm run build ``` Running the [tests](./tests): ``` $> npm test ``` # minimatch A minimal matching utility. [![Build Status](https://travis-ci.org/isaacs/minimatch.svg?branch=master)](http://travis-ci.org/isaacs/minimatch) This is the matching library used internally by npm. It works by converting glob expressions into JavaScript `RegExp` objects. ## Usage ```javascript var minimatch = require("minimatch") minimatch("bar.foo", "*.foo") // true! minimatch("bar.foo", "*.bar") // false! minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy! ``` ## Features Supports these glob features: * Brace Expansion * Extended glob matching * "Globstar" `**` matching See: * `man sh` * `man bash` * `man 3 fnmatch` * `man 5 gitignore` ## Minimatch Class Create a minimatch object by instantiating the `minimatch.Minimatch` class. ```javascript var Minimatch = require("minimatch").Minimatch var mm = new Minimatch(pattern, options) ``` ### Properties * `pattern` The original pattern the minimatch object represents. * `options` The options supplied to the constructor. * `set` A 2-dimensional array of regexp or string expressions. Each row in the array corresponds to a brace-expanded pattern. Each item in the row corresponds to a single path-part. 
For example, the pattern `{a,b/c}/d` would expand to a set of patterns like: [ [ a, d ] , [ b, c, d ] ] If a portion of the pattern doesn't have any "magic" in it (that is, it's something like `"foo"` rather than `fo*o?`), then it will be left as a string rather than converted to a regular expression. * `regexp` Created by the `makeRe` method. A single regular expression expressing the entire pattern. This is useful in cases where you wish to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled. * `negate` True if the pattern is negated. * `comment` True if the pattern is a comment. * `empty` True if the pattern is `""`. ### Methods * `makeRe` Generate the `regexp` member if necessary, and return it. Will return `false` if the pattern is invalid. * `match(fname)` Return true if the filename matches the pattern, or false otherwise. * `matchOne(fileArray, patternArray, partial)` Take a `/`-split filename, and match it against a single row in the `regExpSet`. This method is mainly for internal use, but is exposed so that it can be used by a glob-walker that needs to avoid excessive filesystem calls. All other methods are internal, and will be called as necessary. ### minimatch(path, pattern, options) Main export. Tests a path against the pattern using the options. ```javascript var isJS = minimatch(file, "*.js", { matchBase: true }) ``` ### minimatch.filter(pattern, options) Returns a function that tests its supplied argument, suitable for use with `Array.filter`. Example: ```javascript var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true})) ``` ### minimatch.match(list, pattern, options) Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself. ```javascript var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})) ``` ### minimatch.makeRe(pattern, options) Make a regular expression object from the pattern. ## Options All options are `false` by default. ### debug Dump a ton of stuff to stderr. ### nobrace Do not expand `{a,b}` and `{1..3}` brace sets. ### noglobstar Disable `**` matching against multiple folder names. ### dot Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot. Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot` is set. ### noext Disable "extglob" style patterns like `+(a|b)`. ### nocase Perform a case-insensitive match. ### nonull When a match is not found by `minimatch.match`, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches. ### matchBase If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. ### nocomment Suppress the behavior of treating `#` at the start of a pattern as a comment. ### nonegate Suppress the behavior of treating a leading `!` character as negation. ### flipNegate Returns from negate expressions the same as if they were not negated. (Ie, true on a hit, false on a miss.) ### partial Compare a partial path to a pattern. As long as the parts of the path that are present are not contradicted by the pattern, it will be treated as a match. 
This is useful in applications where you're walking through a folder structure, and don't yet have the full path, but want to ensure that you do not walk down paths that can never be a match. For example, ```js minimatch('/a/b', '/a/*/c/d', { partial: true }) // true, might be /a/b/c/d minimatch('/a/b', '/**/d', { partial: true }) // true, might be /a/b/.../d minimatch('/x/y/z', '/a/**/z', { partial: true }) // false, because x !== a ``` ### allowWindowsEscape Windows path separator `\` is by default converted to `/`, which prohibits the usage of `\` as a escape character. This flag skips that behavior and allows using the escape character. ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional. If the pattern starts with a `!` character, then it is negated. Set the `nonegate` flag to suppress this behavior, and treat leading `!` characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like `!(a|B)`. Multiple `!` characters at the start of a pattern will negate the pattern multiple times. If a pattern starts with `#`, then it is treated as a comment, and will not match anything. Use `\#` to match a literal `#` at the start of a line, or set the `nocomment` flag to suppress this behavior. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.1, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. If an escaped pattern has no matches, and the `nonull` flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, `minimatch.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. # line-column [![Build Status](https://travis-ci.org/io-monad/line-column.svg?branch=master)](https://travis-ci.org/io-monad/line-column) [![Coverage Status](https://coveralls.io/repos/github/io-monad/line-column/badge.svg?branch=master)](https://coveralls.io/github/io-monad/line-column?branch=master) [![npm version](https://badge.fury.io/js/line-column.svg)](https://badge.fury.io/js/line-column) Node module to convert efficiently index to/from line-column in a string. ## Install npm install line-column ## Usage ### lineColumn(str, options = {}) Returns a `LineColumnFinder` instance for given string `str`. #### Options | Key | Description | Default | | ------- | ----------- | ------- | | `origin` | The origin value of line number and column number | `1` | ### lineColumn(str, index) This is just a shorthand for `lineColumn(str).fromIndex(index)`. ### LineColumnFinder#fromIndex(index) Find line and column from index in the string. Parameters: - `index` - `number` Index in the string. (0-origin) Returns: - `{ line: x, col: y }` Found line number and column number. - `null` if the given index is out of range. 
### LineColumnFinder#toIndex(line, column) Find index from line and column in the string. Parameters: - `line` - `number` Line number in the string. - `column` - `number` Column number in the string. or - `{ line: x, col: y }` - `Object` line and column numbers in the string.<br>A key name `column` can be used instead of `col`. or - `[ line, col ]` - `Array` line and column numbers in the string. Returns: - `number` Found index in the string. - `-1` if the given line or column is out of range. ## Example ```js var lineColumn = require("line-column"); var testString = [ "ABCDEFG\n", // line:0, index:0 "HIJKLMNOPQRSTU\n", // line:1, index:8 "VWXYZ\n", // line:2, index:23 "日本語の文字\n", // line:3, index:29 "English words" // line:4, index:36 ].join(""); // length:49 lineColumn(testString).fromIndex(3) // { line: 1, col: 4 } lineColumn(testString).fromIndex(33) // { line: 4, col: 5 } lineColumn(testString).toIndex(1, 4) // 3 lineColumn(testString).toIndex(4, 5) // 33 // Shorthand of .fromIndex (compatible with find-line-column) lineColumn(testString, 33) // { line:4, col: 5 } // Object or Array is also acceptable lineColumn(testString).toIndex({ line: 4, col: 5 }) // 33 lineColumn(testString).toIndex({ line: 4, column: 5 }) // 33 lineColumn(testString).toIndex([4, 5]) // 33 // You can cache it for the same string. It is so efficient. (See benchmark) var finder = lineColumn(testString); finder.fromIndex(33) // { line: 4, column: 5 } finder.toIndex(4, 5) // 33 // For 0-origin line and column numbers var oneOrigin = lineColumn(testString, { origin: 0 }); oneOrigin.fromIndex(33) // { line: 3, column: 4 } oneOrigin.toIndex(3, 4) // 33 ``` ## Testing npm test ## Benchmark The popular package [find-line-column](https://www.npmjs.com/package/find-line-column) provides the same "index to line-column" feature. Here is some benchmarking on `line-column` vs `find-line-column`. You can run this benchmark by `npm run benchmark`. See [benchmark/](benchmark/) for the source code. ``` long text + line-column (not cached) x 72,989 ops/sec ±0.83% (89 runs sampled) long text + line-column (cached) x 13,074,242 ops/sec ±0.32% (89 runs sampled) long text + find-line-column x 33,887 ops/sec ±0.54% (84 runs sampled) short text + line-column (not cached) x 1,636,766 ops/sec ±0.77% (82 runs sampled) short text + line-column (cached) x 21,699,686 ops/sec ±1.04% (82 runs sampled) short text + find-line-column x 382,145 ops/sec ±1.04% (85 runs sampled) ``` As you might have noticed, even not cached version of `line-column` is 2x - 4x faster than `find-line-column`, and cached version of `line-column` is remarkable 50x - 380x faster. ## Contributing 1. Fork it! 2. Create your feature branch: `git checkout -b my-new-feature` 3. Commit your changes: `git commit -am 'Add some feature'` 4. Push to the branch: `git push origin my-new-feature` 5. Submit a pull request :D ## License MIT (See LICENSE) # Punycode.js [![Build status](https://travis-ci.org/bestiejs/punycode.js.svg?branch=master)](https://travis-ci.org/bestiejs/punycode.js) [![Code coverage status](http://img.shields.io/codecov/c/github/bestiejs/punycode.js.svg)](https://codecov.io/gh/bestiejs/punycode.js) [![Dependency status](https://gemnasium.com/bestiejs/punycode.js.svg)](https://gemnasium.com/bestiejs/punycode.js) Punycode.js is a robust Punycode converter that fully complies to [RFC 3492](https://tools.ietf.org/html/rfc3492) and [RFC 5891](https://tools.ietf.org/html/rfc5891). 
This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm: * [The C example code from RFC 3492](https://tools.ietf.org/html/rfc3492#appendix-C) * [`punycode.c` by _Markus W. Scherer_ (IBM)](http://opensource.apple.com/source/ICU/ICU-400.42/icuSources/common/punycode.c) * [`punycode.c` by _Ben Noordhuis_](https://github.com/bnoordhuis/punycode/blob/master/punycode.c) * [JavaScript implementation by _some_](http://stackoverflow.com/questions/183485/can-anyone-recommend-a-good-free-javascript-for-punycode-to-unicode-conversion/301287#301287) * [`punycode.js` by _Ben Noordhuis_](https://github.com/joyent/node/blob/426298c8c1c0d5b5224ac3658c41e7c2a3fe9377/lib/punycode.js) (note: [not fully compliant](https://github.com/joyent/node/issues/2072)) This project was [bundled](https://github.com/joyent/node/blob/master/lib/punycode.js) with Node.js from [v0.6.2+](https://github.com/joyent/node/compare/975f1930b1...61e796decc) until [v7](https://github.com/nodejs/node/pull/7941) (soft-deprecated). The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see [v1.4.1](https://github.com/bestiejs/punycode.js/releases/tag/v1.4.1). ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install punycode --save ``` In [Node.js](https://nodejs.org/): ```js const punycode = require('punycode'); ``` ## API ### `punycode.decode(string)` Converts a Punycode string of ASCII symbols to a string of Unicode symbols. ```js // decode domain name parts punycode.decode('maana-pta'); // 'mañana' punycode.decode('--dqo34k'); // '☃-⌘' ``` ### `punycode.encode(string)` Converts a string of Unicode symbols to a Punycode string of ASCII symbols. ```js // encode domain name parts punycode.encode('mañana'); // 'maana-pta' punycode.encode('☃-⌘'); // '--dqo34k' ``` ### `punycode.toUnicode(input)` Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode. ```js // decode domain names punycode.toUnicode('xn--maana-pta.com'); // → 'mañana.com' punycode.toUnicode('xn----dqo34k.com'); // → '☃-⌘.com' // decode email addresses punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'); // → 'джумла@джpумлатест.bрфa' ``` ### `punycode.toASCII(input)` Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII. ```js // encode domain names punycode.toASCII('mañana.com'); // → 'xn--maana-pta.com' punycode.toASCII('☃-⌘.com'); // → 'xn----dqo34k.com' // encode email addresses punycode.toASCII('джумла@джpумлатест.bрфa'); // → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq' ``` ### `punycode.ucs2` #### `punycode.ucs2.decode(string)` Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16. 
```js punycode.ucs2.decode('abc'); // → [0x61, 0x62, 0x63] // surrogate pair for U+1D306 TETRAGRAM FOR CENTRE: punycode.ucs2.decode('\uD834\uDF06'); // → [0x1D306] ``` #### `punycode.ucs2.encode(codePoints)` Creates a string based on an array of numeric code point values. ```js punycode.ucs2.encode([0x61, 0x62, 0x63]); // → 'abc' punycode.ucs2.encode([0x1D306]); // → '\uD834\uDF06' ``` ### `punycode.version` A string representing the current Punycode.js version number. ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License Punycode.js is available under the [MIT](https://mths.be/mit) license. # levn [![Build Status](https://travis-ci.org/gkz/levn.png)](https://travis-ci.org/gkz/levn) <a name="levn" /> __Light ECMAScript (JavaScript) Value Notation__ Levn is a library which allows you to parse a string into a JavaScript value based on an expected type. It is meant for short amounts of human entered data (eg. config files, command line arguments). Levn aims to concisely describe JavaScript values in text, and allow for the extraction and validation of those values. Levn uses [type-check](https://github.com/gkz/type-check) for its type format, and to validate the results. MIT license. Version 0.4.1. __How is this different than JSON?__ levn is meant to be written by humans only, is (due to the previous point) much more concise, can be validated against supplied types, has regex and date literals, and can easily be extended with custom types. On the other hand, it is probably slower and thus less efficient at transporting large amounts of data, which is fine since this is not its purpose. npm install levn For updates on levn, [follow me on twitter](https://twitter.com/gkzahariev). ## Quick Examples ```js var parse = require('levn').parse; parse('Number', '2'); // 2 parse('String', '2'); // '2' parse('String', 'levn'); // 'levn' parse('String', 'a b'); // 'a b' parse('Boolean', 'true'); // true parse('Date', '#2011-11-11#'); // (Date object) parse('Date', '2011-11-11'); // (Date object) parse('RegExp', '/[a-z]/gi'); // /[a-z]/gi parse('RegExp', 're'); // /re/ parse('Int', '2'); // 2 parse('Number | String', 'str'); // 'str' parse('Number | String', '2'); // 2 parse('[Number]', '[1,2,3]'); // [1,2,3] parse('(String, Boolean)', '(hi, false)'); // ['hi', false] parse('{a: String, b: Number}', '{a: str, b: 2}'); // {a: 'str', b: 2} // at the top level, you can ommit surrounding delimiters parse('[Number]', '1,2,3'); // [1,2,3] parse('(String, Boolean)', 'hi, false'); // ['hi', false] parse('{a: String, b: Number}', 'a: str, b: 2'); // {a: 'str', b: 2} // wildcard - auto choose type parse('*', '[hi,(null,[42]),{k: true}]'); // ['hi', [null, [42]], {k: true}] ``` ## Usage `require('levn');` returns an object that exposes three properties. `VERSION` is the current version of the library as a string. `parse` and `parsedTypeParse` are functions. 
```js // parse(type, input, options); parse('[Number]', '1,2,3'); // [1, 2, 3] // parsedTypeParse(parsedType, input, options); var parsedType = require('type-check').parseType('[Number]'); parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3] ``` ### parse(type, input, options) `parse` casts the string `input` into a JavaScript value according to the specified `type` in the [type format](https://github.com/gkz/type-check#type-format) (and taking account the optional `options`) and returns the resulting JavaScript value. ##### arguments * type - `String` - the type written in the [type format](https://github.com/gkz/type-check#type-format) which to check against * input - `String` - the value written in the [levn format](#levn-format) * options - `Maybe Object` - an optional parameter specifying additional [options](#options) ##### returns `*` - the resulting JavaScript value ##### example ```js parse('[Number]', '1,2,3'); // [1, 2, 3] ``` ### parsedTypeParse(parsedType, input, options) `parsedTypeParse` casts the string `input` into a JavaScript value according to the specified `type` which has already been parsed (and taking account the optional `options`) and returns the resulting JavaScript value. You can parse a type using the [type-check](https://github.com/gkz/type-check) library's `parseType` function. ##### arguments * type - `Object` - the type in the parsed type format which to check against * input - `String` - the value written in the [levn format](#levn-format) * options - `Maybe Object` - an optional parameter specifying additional [options](#options) ##### returns `*` - the resulting JavaScript value ##### example ```js var parsedType = require('type-check').parseType('[Number]'); parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3] ``` ## Levn Format Levn can use the type information you provide to choose the appropriate value to produce from the input. For the same input, it will choose a different output value depending on the type provided. For example, `parse('Number', '2')` will produce the number `2`, but `parse('String', '2')` will produce the string `"2"`. If you do not provide type information, and simply use `*`, levn will parse the input according the unambiguous "explicit" mode, which we will now detail - you can also set the `explicit` option to true manually in the [options](#options). * `"string"`, `'string'` are parsed as a String, eg. `"a msg"` is `"a msg"` * `#date#` is parsed as a Date, eg. `#2011-11-11#` is `new Date('2011-11-11')` * `/regexp/flags` is parsed as a RegExp, eg. `/re/gi` is `/re/gi` * `undefined`, `null`, `NaN`, `true`, and `false` are all their JavaScript equivalents * `[element1, element2, etc]` is an Array, and the casting procedure is recursively applied to each element. Eg. `[1,2,3]` is `[1,2,3]`. * `(element1, element2, etc)` is an tuple, and the casting procedure is recursively applied to each element. Eg. `(1, a)` is `(1, a)` (is `[1, 'a']`). * `{key1: val1, key2: val2, ...}` is an Object, and the casting procedure is recursively applied to each property. Eg. `{a: 1, b: 2}` is `{a: 1, b: 2}`. * Any test which does not fall under the above, and which does not contain special characters (`[``]``(``)``{``}``:``,`) is a string, eg. `$12- blah` is `"$12- blah"`. If you do provide type information, you can make your input more concise as the program already has some information about what it expects. 
Please see the [type format](https://github.com/gkz/type-check#type-format) section of [type-check](https://github.com/gkz/type-check) for more information about how to specify types. There are some rules about what levn can do with the information: * If a String is expected, and only a String, all characters of the input (including any special ones) will become part of the output. Eg. `[({})]` is `"[({})]"`, and `"hi"` is `'"hi"'`. * If a Date is expected, the surrounding `#` can be omitted from date literals. Eg. `2011-11-11` is `new Date('2011-11-11')`. * If a RegExp is expected, no flags need to be specified, and if the regex does not use any of the special characters, the opening and closing `/` can be omitted - this will have the effect of setting the source of the regex to the input. Eg. `regex` is `/regex/`. * If an Array is expected, and it is the root node (at the top level), the opening `[` and closing `]` can be omitted. Eg. `1,2,3` is `[1,2,3]`. * If a tuple is expected, and it is the root node (at the top level), the opening `(` and closing `)` can be omitted. Eg. `1, a` is `(1, a)` (is `[1, 'a']`). * If an Object is expected, and it is the root node (at the top level), the opening `{` and closing `}` can be omitted. Eg. `a: 1, b: 2` is `{a: 1, b: 2}`. If you list multiple types (eg. `Number | String`), it will first attempt to cast to the first type and then validate - if the validation fails it will move on to the next type and so forth, left to right. You must be careful as some types will succeed with any input, such as String. Thus put String at the end of your list. In non-explicit mode, Date and RegExp will succeed with a large variety of input - also be careful with these and list them near the end if not last in your list. Whitespace between special characters and elements is inconsequential. ## Options Options is an object. It is an optional parameter to the `parse` and `parsedTypeParse` functions. ### Explicit A `Boolean`. By default it is `false`. __Example:__ ```js parse('RegExp', 're', {explicit: false}); // /re/ parse('RegExp', 're', {explicit: true}); // Error: ... does not type check... parse('RegExp | String', 're', {explicit: true}); // 're' ``` `explicit` sets whether to be in explicit mode or not. Using `*` automatically activates explicit mode. For more information, read the [levn format](#levn-format) section. ### customTypes An `Object`. Empty `{}` by default. __Example:__ ```js var options = { customTypes: { Even: { typeOf: 'Number', validate: function (x) { return x % 2 === 0; }, cast: function (x) { return {type: 'Just', value: parseInt(x)}; } } } } parse('Even', '2', options); // 2 parse('Even', '3', options); // Error: Value: "3" does not type check... ``` __Another Example:__ ```js function Person(name, age){ this.name = name; this.age = age; } var options = { customTypes: { Person: { typeOf: 'Object', validate: function (x) { return x instanceof Person; }, cast: function (value, options, typesCast) { var name, age; if ({}.toString.call(value).slice(8, -1) !== 'Object') { return {type: 'Nothing'}; } name = typesCast(value.name, [{type: 'String'}], options); age = typesCast(value.age, [{type: 'Number'}], options); return {type: 'Just', value: new Person(name, age)}; } } } } parse('Person', '{name: Laura, age: 25}', options); // Person {name: 'Laura', age: 25} ``` `customTypes` is an object whose keys are the names of the types, and whose values are objects with three properties, `typeOf`, `validate`, and `cast`.
For more information about `typeOf` and `validate`, please see the [custom types](https://github.com/gkz/type-check#custom-types) section of type-check. `cast` is a function which receives three arguments, the value under question, options, and the typesCast function. In `cast`, attempt to cast the value into the specified type. If you are successful, return an object in the format `{type: 'Just', value: CAST-VALUE}`, if you know it won't work, return `{type: 'Nothing'}`. You can use the `typesCast` function to cast any child values. Remember to pass `options` to it. In your function you can also check for `options.explicit` and act accordingly. ## Technical About `levn` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It uses [type-check](https://github.com/gkz/type-check) to both parse types and validate values. It also uses the [prelude.ls](http://preludels.com/) library. # json-schema-traverse Traverse JSON Schema passing each schema object to callback [![build](https://github.com/epoberezkin/json-schema-traverse/workflows/build/badge.svg)](https://github.com/epoberezkin/json-schema-traverse/actions?query=workflow%3Abuild) [![npm](https://img.shields.io/npm/v/json-schema-traverse)](https://www.npmjs.com/package/json-schema-traverse) [![coverage](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master) ## Install ``` npm install json-schema-traverse ``` ## Usage ```javascript const traverse = require('json-schema-traverse'); const schema = { properties: { foo: {type: 'string'}, bar: {type: 'integer'} } }; traverse(schema, {cb}); // cb is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // Or: traverse(schema, {cb: {pre, post}}); // pre is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // // post is called 3 times with: // 1. {type: 'string'} // 2. {type: 'integer'} // 3. root schema ``` Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved, they are passed as is. Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed. Callback is passed these parameters: - _schema_: the current schema object - _JSON pointer_: from the root schema to the current schema object - _root schema_: the schema passed to `traverse` object - _parent JSON pointer_: from the root schema to the parent schema object (see below) - _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.) - _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema - _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'` ## Traverse objects in all unknown keywords ```javascript const traverse = require('json-schema-traverse'); const schema = { mySchema: { minimum: 1, maximum: 2 } }; traverse(schema, {allKeys: true, cb}); // cb is called 2 times with: // 1. root schema // 2. mySchema ``` Without option `allKeys: true` callback will be called only with root schema. 
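As an illustration of the callback parameters listed above, here is a small sketch (not part of the original README); it assumes the callback receives the parameters in the documented order:

```js
// Hedged sketch: log where each subschema sits relative to the root schema.
const traverse = require('json-schema-traverse');

const schema = {
  properties: {
    foo: {type: 'string'},
    bar: {type: 'integer'}
  }
};

traverse(schema, {
  cb(sch, jsonPtr, root, parentJsonPtr, parentKeyword, parentSchema, indexOrProp) {
    // For {type: 'string'} this prints: "/properties/foo" "properties" "foo"
    console.log(jsonPtr, parentKeyword, indexOrProp);
  }
});
```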
## Enterprise support json-schema-traverse package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-json-schema-traverse?utm_source=npm-json-schema-traverse&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. ## License [MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE) # isexe Minimal module to check if a file is executable, and a normal file. Uses `fs.stat` and tests against the `PATHEXT` environment variable on Windows. ## USAGE ```javascript var isexe = require('isexe') isexe('some-file-name', function (err, isExe) { if (err) { console.error('probably file does not exist or something', err) } else if (isExe) { console.error('this thing can be run') } else { console.error('cannot be run') } }) // same thing but synchronous, throws errors var isExe = isexe.sync('some-file-name') // treat errors as just "not executable" isexe('maybe-missing-file', { ignoreErrors: true }, callback) var isExe = isexe.sync('maybe-missing-file', { ignoreErrors: true }) ``` ## API ### `isexe(path, [options], [callback])` Check if the path is executable. If no callback provided, and a global `Promise` object is available, then a Promise will be returned. Will raise whatever errors may be raised by `fs.stat`, unless `options.ignoreErrors` is set to true. ### `isexe.sync(path, [options])` Same as `isexe` but returns the value and throws any errors raised. ### Options * `ignoreErrors` Treat all errors as "no, this is not executable", but don't raise them. * `uid` Number to use as the user id * `gid` Number to use as the group id * `pathExt` List of path extensions to use instead of `PATHEXT` environment variable on Windows. # Acorn-JSX [![Build Status](https://travis-ci.org/acornjs/acorn-jsx.svg?branch=master)](https://travis-ci.org/acornjs/acorn-jsx) [![NPM version](https://img.shields.io/npm/v/acorn-jsx.svg)](https://www.npmjs.org/package/acorn-jsx) This is plugin for [Acorn](http://marijnhaverbeke.nl/acorn/) - a tiny, fast JavaScript parser, written completely in JavaScript. It was created as an experimental alternative, faster [React.js JSX](http://facebook.github.io/react/docs/jsx-in-depth.html) parser. Later, it replaced the [official parser](https://github.com/facebookarchive/esprima) and these days is used by many prominent development tools. ## Transpiler Please note that this tool only parses source code to JSX AST, which is useful for various language tools and services. If you want to transpile your code to regular ES5-compliant JavaScript with source map, check out [Babel](https://babeljs.io/) and [Buble](https://buble.surge.sh/) transpilers which use `acorn-jsx` under the hood. ## Usage Requiring this module provides you with an Acorn plugin that you can use like this: ```javascript var acorn = require("acorn"); var jsx = require("acorn-jsx"); acorn.Parser.extend(jsx()).parse("my(<jsx/>, 'code');"); ``` Note that official spec doesn't support mix of XML namespaces and object-style access in tag names (#27) like in `<namespace:Object.Property />`, so it was deprecated in `acorn-jsx@3.0`. 
If you still want to opt-in to support of such constructions, you can pass the following option: ```javascript acorn.Parser.extend(jsx({ allowNamespacedObjects: true })) ``` Also, since most apps use pure React transformer, a new option was introduced that allows to prohibit namespaces completely: ```javascript acorn.Parser.extend(jsx({ allowNamespaces: false })) ``` Note that by default `allowNamespaces` is enabled for spec compliancy. ## License This plugin is issued under the [MIT license](./LICENSE). # get-caller-file [![Build Status](https://travis-ci.org/stefanpenner/get-caller-file.svg?branch=master)](https://travis-ci.org/stefanpenner/get-caller-file) [![Build status](https://ci.appveyor.com/api/projects/status/ol2q94g1932cy14a/branch/master?svg=true)](https://ci.appveyor.com/project/embercli/get-caller-file/branch/master) This is a utility, which allows a function to figure out from which file it was invoked. It does so by inspecting v8's stack trace at the time it is invoked. Inspired by http://stackoverflow.com/questions/13227489 *note: this relies on Node/V8 specific APIs, as such other runtimes may not work* ## Installation ```bash yarn add get-caller-file ``` ## Usage Given: ```js // ./foo.js const getCallerFile = require('get-caller-file'); module.exports = function() { return getCallerFile(); // figures out who called it }; ``` ```js // index.js const foo = require('./foo'); foo() // => /full/path/to/this/file/index.js ``` ## Options: * `getCallerFile(position = 2)`: where position is stack frame whos fileName we want. # Optionator <a name="optionator" /> Optionator is a JavaScript/Node.js option parsing and help generation library used by [eslint](http://eslint.org), [Grasp](http://graspjs.com), [LiveScript](http://livescript.net), [esmangle](https://github.com/estools/esmangle), [escodegen](https://github.com/estools/escodegen), and [many more](https://www.npmjs.com/browse/depended/optionator). For an online demo, check out the [Grasp online demo](http://www.graspjs.com/#demo). [About](#about) &middot; [Usage](#usage) &middot; [Settings Format](#settings-format) &middot; [Argument Format](#argument-format) ## Why? The problem with other option parsers, such as `yargs` or `minimist`, is they just accept all input, valid or not. With Optionator, if you mistype an option, it will give you an error (with a suggestion for what you meant). If you give the wrong type of argument for an option, it will give you an error rather than supplying the wrong input to your application. $ cmd --halp Invalid option '--halp' - perhaps you meant '--help'? $ cmd --count str Invalid value for option 'count' - expected type Int, received value: str. Other helpful features include reformatting the help text based on the size of the console, so that it fits even if the console is narrow, and accepting not just an array (eg. process.argv), but a string or object as well, making things like testing much easier. ## About Optionator uses [type-check](https://github.com/gkz/type-check) and [levn](https://github.com/gkz/levn) behind the scenes to cast and verify input according the specified types. MIT license. Version 0.9.1 npm install optionator For updates on Optionator, [follow me on twitter](https://twitter.com/gkzahariev). Optionator is a Node.js module, but can be used in the browser as well if packed with webpack/browserify. ## Usage `require('optionator');` returns a function. It has one property, `VERSION`, the current version of the library as a string. 
This function is called with an object specifying your options and other information, see the [settings format section](#settings-format). This in turn returns an object with four properties, `parse`, `parseArgv`, `generateHelp`, and `generateHelpForOption`, which are all functions.

```js
var optionator = require('optionator')({
    prepend: 'Usage: cmd [options]',
    append: 'Version 1.0.0',
    options: [{
        option: 'help',
        alias: 'h',
        type: 'Boolean',
        description: 'displays help'
    }, {
        option: 'count',
        alias: 'c',
        type: 'Int',
        description: 'number of things',
        example: 'cmd --count 2'
    }]
});

var options = optionator.parseArgv(process.argv);
if (options.help) {
    console.log(optionator.generateHelp());
}
...
```

### parse(input, parseOptions)
`parse` processes the `input` according to your settings, and returns an object with the results.

##### arguments
* input - `[String] | Object | String` - the input you wish to parse
* parseOptions - `{slice: Int}` - all options optional
  - `slice` specifies how much to slice away from the beginning if the input is an array or string - by default `0` for string, `2` for array (works with `process.argv`)

##### returns
`Object` - the parsed options, each key is a camelCase version of the option name (specified in dash-case), and each value is the processed value for that option. Positional values are in an array under the `_` key.

##### example
```js
parse(['node', 't.js', '--count', '2', 'positional']); // {count: 2, _: ['positional']}
parse('--count 2 positional'); // {count: 2, _: ['positional']}
parse({count: 2, _:['positional']}); // {count: 2, _: ['positional']}
```

### parseArgv(input)
`parseArgv` works exactly like `parse`, but only for array input and it slices off the first two elements.

##### arguments
* input - `[String]` - the input you wish to parse

##### returns
See "returns" section in "parse"

##### example
```js
parseArgv(process.argv);
```

### generateHelp(helpOptions)
`generateHelp` produces help text based on your settings.

##### arguments
* helpOptions - `{showHidden: Boolean, interpolate: Object}` - all options optional
  - `showHidden` specifies whether to show options with `hidden: true` specified, by default it is `false`
  - `interpolate` specify data to be interpolated in `prepend` and `append` text, `{{key}}` is the format - eg. `generateHelp({interpolate:{version: '0.4.2'}})`, will change this `append` text: `Version {{version}}` to `Version 0.4.2`

##### returns
`String` - the generated help text

##### example
```js
generateHelp();
/*
"Usage: cmd [options] positional

  -h, --help       displays help
  -c, --count Int  number of things

Version 1.0.0
"*/
```

### generateHelpForOption(optionName)
`generateHelpForOption` produces expanded help text for the option specified by `optionName`. If an `example` was specified for the option, it will be displayed, and if a `longDescription` was specified, it will display that instead of the `description`.

##### arguments
* optionName - `String` - the name of the option to display

##### returns
`String` - the generated help text for the option

##### example
```js
generateHelpForOption('count');
/*
"-c, --count Int
description: number of things
example: cmd --count 2
"*/
```

## Settings Format
When you `require('optionator')`, you get a function that takes in a settings object.
This object has the type:

    {
      prepend: String,
      append: String,
      options: [{heading: String} | {
        option: String,
        alias: [String] | String,
        type: String,
        enum: [String],
        default: String,
        restPositional: Boolean,
        required: Boolean,
        overrideRequired: Boolean,
        dependsOn: [String] | String,
        concatRepeatedArrays: Boolean | (Boolean, Object),
        mergeRepeatedObjects: Boolean,
        description: String,
        longDescription: String,
        example: [String] | String
      }],
      helpStyle: {
        aliasSeparator: String,
        typeSeparator: String,
        descriptionSeparator: String,
        initialIndent: Int,
        secondaryIndent: Int,
        maxPadFactor: Number
      },
      mutuallyExclusive: [[String | [String]]],
      concatRepeatedArrays: Boolean | (Boolean, Object), // deprecated, set in defaults object
      mergeRepeatedObjects: Boolean, // deprecated, set in defaults object
      positionalAnywhere: Boolean,
      typeAliases: Object,
      defaults: Object
    }

All of the properties are optional (the `Maybe` has been excluded for brevity's sake), except for having either `heading: String` or `option: String` in each object in the `options` array.

### Top Level Properties
* `prepend` is an optional string to be placed before the options in the help text
* `append` is an optional string to be placed after the options in the help text
* `options` is a required array specifying your options and headings, the options and headings will be displayed in the order specified
* `helpStyle` is an optional object which enables you to change the default appearance of some aspects of the help text
* `mutuallyExclusive` is an optional array of arrays of either strings or arrays of strings. The top level array is a list of rules, each rule is a list of elements - each element can be either a string (the name of an option), or a list of strings (a group of option names) - there will be an error if more than one element is present
* `concatRepeatedArrays` see description under the "Option Properties" heading - use at the top level is deprecated, if you want to set this for all options, use the `defaults` property
* `mergeRepeatedObjects` see description under the "Option Properties" heading - use at the top level is deprecated, if you want to set this for all options, use the `defaults` property
* `positionalAnywhere` is an optional boolean (defaults to `true`) - when `true` it allows positional arguments anywhere, when `false`, all arguments after the first positional one are taken to be positional as well, even if they look like a flag. For example, with `positionalAnywhere: false`, the arguments `--flag --boom 12 --crack` would have two positional arguments: `12` and `--crack`
* `typeAliases` is an optional object, it allows you to set aliases for types, eg. `{Path: 'String'}` would allow you to use the type `Path` as an alias for the type `String`
* `defaults` is an optional object following the option properties format, which specifies default values for all options. A default will be overridden if manually set.
For example, you can do `default: { type: "String" }` to set the default type of all options to `String`, and then override that default in an individual option by setting the `type` property.

#### Heading Properties
* `heading` a required string, the name of the heading

#### Option Properties
* `option` the required name of the option - use dash-case, without the leading dashes
* `alias` is an optional string or array of strings which specify any aliases for the option
* `type` is a required string in the [type check](https://github.com/gkz/type-check) [format](https://github.com/gkz/type-check#type-format), this will be used to cast the inputted value and validate it
* `enum` is an optional array of strings, each string will be parsed by [levn](https://github.com/gkz/levn) - the argument value must be one of the resulting values - each potential value must validate against the specified `type`
* `default` is an optional string, which will be parsed by [levn](https://github.com/gkz/levn) and used as the default value if none is set - the value must validate against the specified `type`
* `restPositional` is an optional boolean - if set to `true`, everything after the option will be taken to be a positional argument, even if it looks like a named argument
* `required` is an optional boolean - if set to `true`, the option parsing will fail if the option is not defined
* `overrideRequired` is an optional boolean - if set to `true` and the option is used, and there is another option which is required but not set, it will override the need for the required option and there will be no error - this is useful if you have required options and want to use `--help` or `--version` flags
* `concatRepeatedArrays` is an optional boolean or tuple with boolean and options object (defaults to `false`) - when set to `true` and an option contains an array value and is repeated, the subsequent values for the flag will be appended rather than overwriting the original value - eg. option `g` of type `[String]`: `-g a -g b -g c,d` will result in `['a','b','c','d']`

  You can supply an options object by giving the following value: `[true, options]`. The one currently supported option is `oneValuePerFlag`, this only allows one array value per flag. This is useful if your potential values contain a comma.
* `mergeRepeatedObjects` is an optional boolean (defaults to `false`) - when set to `true` and an option contains an object value and is repeated, the subsequent values for the flag will be merged rather than overwriting the original value - eg.
option `g` of type `Object`: `-g a:1 -g b:2 -g c:3,d:4` will result in `{a: 1, b: 2, c: 3, d: 4}`
* `dependsOn` is an optional string or array of strings - if simply a string (the name of another option), it will make sure that that other option is set, if an array of strings, depending on whether `'and'` or `'or'` is first, it will either check whether all (`['and', 'option-a', 'option-b']`), or at least one (`['or', 'option-a', 'option-b']`) of the other options are set
* `description` is an optional string, which will be displayed next to the option in the help text
* `longDescription` is an optional string, it will be displayed instead of the `description` when `generateHelpForOption` is used
* `example` is an optional string or array of strings with example(s) for the option - these will be displayed when `generateHelpForOption` is used

#### Help Style Properties
* `aliasSeparator` is an optional string, separates multiple names from each other - default: ', '
* `typeSeparator` is an optional string, separates the type from the names - default: ' '
* `descriptionSeparator` is an optional string, separates the description from the padded name and type - default: ' '
* `initialIndent` is an optional int - the amount of indent for options - default: 2
* `secondaryIndent` is an optional int - the amount of indent if wrapped fully (in addition to the initial indent) - default: 4
* `maxPadFactor` is an optional number - affects the default level of padding for the names/type, it is multiplied by the average of the length of the names/type - default: 1.5

## Argument Format
At the highest level there are two types of arguments: named, and positional.

Named arguments of any length are prefixed with `--` (eg. `--go`), and those of one character may be prefixed with either `--` or `-` (eg. `-g`).

There are two types of named arguments: boolean flags (eg. `--problemo`, `-p`) which take no value and result in a `true` if they are present, the falsey `undefined` if they are not present, or `false` if present and explicitly prefixed with `no` (eg. `--no-problemo`). Named arguments with values (eg. `--tseries 800`, `-t 800`) are the other type. If the option has a type `Boolean` it will automatically be made into a boolean flag. Any other type results in a named argument that takes a value.

For more information about how to properly set types to get the value you want, take a look at the [type check](https://github.com/gkz/type-check) and [levn](https://github.com/gkz/levn) pages.

You can group single character arguments that use a single `-`, however all except the last must be boolean flags (which take no value). The last may be a boolean flag, or an argument which takes a value - eg. `-ba 2` is equivalent to `-b -a 2`.

Positional arguments are all those values which do not fall under the above - they can be anywhere, not just at the end. For example, in `cmd -b one -a 2 two` where `b` is a boolean flag, and `a` has the type `Number`, there are two positional arguments, `one` and `two`.

Everything after an `--` is positional, even if it looks like a named argument.

You may optionally use `=` to separate option names from values, for example: `--count=2`.

If you specify the option `NUM`, then any argument using a single `-` followed by a number will be valid and will set the value of `NUM`. Eg. `-2` will be parsed into `NUM: 2`.

If duplicate named arguments are present, the last one will be taken.
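As a rough illustration of the formats above, the following sketch uses hypothetical option names; the results shown in the comments simply follow the rules described in this section:

```js
var optionator = require('optionator')({
    options: [
        { option: 'problemo', alias: 'p', type: 'Boolean', description: 'a boolean flag' },
        { option: 'all',      alias: 'a', type: 'Number',  description: 'takes a number' },
        { option: 'count',    alias: 'c', type: 'Int',     description: 'takes an integer' }
    ]
});

// Boolean flags, 'no-' negation, and grouped single-character flags
optionator.parse('--problemo');       // {problemo: true, _: []}
optionator.parse('--no-problemo');    // {problemo: false, _: []}
optionator.parse('-pa 2');            // {problemo: true, all: 2, _: []}

// '=' separated values, and everything after '--' treated as positional
optionator.parse('--count=2 -- --not-a-flag'); // {count: 2, _: ['--not-a-flag']}
```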
## Technical About `optionator` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It uses [levn](https://github.com/gkz/levn) to cast arguments to their specified type, and uses [type-check](https://github.com/gkz/type-check) to validate values. It also uses the [prelude.ls](http://preludels.com/) library. # axios // core The modules found in `core/` should be modules that are specific to the domain logic of axios. These modules would most likely not make sense to be consumed outside of the axios module, as their logic is too specific. Some examples of core modules are: - Dispatching requests - Managing interceptors - Handling config # isarray `Array#isArray` for older browsers. [![build status](https://secure.travis-ci.org/juliangruber/isarray.svg)](http://travis-ci.org/juliangruber/isarray) [![downloads](https://img.shields.io/npm/dm/isarray.svg)](https://www.npmjs.org/package/isarray) [![browser support](https://ci.testling.com/juliangruber/isarray.png) ](https://ci.testling.com/juliangruber/isarray) ## Usage ```js var isArray = require('isarray'); console.log(isArray([])); // => true console.log(isArray({})); // => false ``` ## Installation With [npm](http://npmjs.org) do ```bash $ npm install isarray ``` Then bundle for the browser with [browserify](https://github.com/substack/browserify). With [component](http://component.io) do ```bash $ component install juliangruber/isarray ``` ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. bs58 ==== [![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58) JavaScript component to compute base 58 encoding. This encoding is typically used for crypto currencies such as Bitcoin. **Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library. Install ------- npm i --save bs58 API --- ### encode(input) `input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`. **example**: ```js const bs58 = require('bs58') const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex') const address = bs58.encode(bytes) console.log(address) // => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS ``` ### decode(input) `input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html). 
**example**:

```js
const bs58 = require('bs58')

const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS'
const bytes = bs58.decode(address)
console.log(bytes.toString('hex'))
// => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187
```

Hack / Test
-----------

Uses JavaScript standard style. Read more:

[![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard)

Credits
-------
- [Mike Hearn](https://github.com/mikehearn) for original Java implementation
- [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript
- [Stephan Pair](https://github.com/gasteve) for buffer improvements
- [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib
- [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency

License
-------
MIT

# word-wrap [![NPM version](https://img.shields.io/npm/v/word-wrap.svg?style=flat)](https://www.npmjs.com/package/word-wrap) [![NPM monthly downloads](https://img.shields.io/npm/dm/word-wrap.svg?style=flat)](https://npmjs.org/package/word-wrap) [![NPM total downloads](https://img.shields.io/npm/dt/word-wrap.svg?style=flat)](https://npmjs.org/package/word-wrap) [![Linux Build Status](https://img.shields.io/travis/jonschlinkert/word-wrap.svg?style=flat&label=Travis)](https://travis-ci.org/jonschlinkert/word-wrap)

> Wrap words to a specified length.

## Install

Install with [npm](https://www.npmjs.com/):

```sh
$ npm install --save word-wrap
```

## Usage

```js
var wrap = require('word-wrap');

wrap('Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.');
```

Results in:

```
  Lorem ipsum dolor sit amet, consectetur adipiscing
  elit, sed do eiusmod tempor incididunt ut labore
  et dolore magna aliqua. Ut enim ad minim veniam,
  quis nostrud exercitation ullamco laboris nisi ut
  aliquip ex ea commodo consequat.
```

## Options

![image](https://cloud.githubusercontent.com/assets/383994/6543728/7a381c08-c4f6-11e4-8b7d-b6ba197569c9.png)

### options.width

Type: `Number`

Default: `50`

The width of the text before wrapping to a new line.

**Example:**

```js
wrap(str, {width: 60});
```

### options.indent

Type: `String`

Default: `  ` (two spaces)

The string to use at the beginning of each line.

**Example:**

```js
wrap(str, {indent: '  '});
```

### options.newline

Type: `String`

Default: `\n`

The string to use at the end of each line.

**Example:**

```js
wrap(str, {newline: '\n\n'});
```

### options.escape

Type: `function`

Default: `function(str){return str;}`

An escape function to run on each line after splitting them.

**Example:**

```js
var xmlescape = require('xml-escape');
wrap(str, {
  escape: function(string){
    return xmlescape(string);
  }
});
```

### options.trim

Type: `Boolean`

Default: `false`

Trim trailing whitespace from the returned string. This option is included since `.trim()` would also strip the leading indentation from the first line.

**Example:**

```js
wrap(str, {trim: true});
```

### options.cut

Type: `Boolean`

Default: `false`

Break a word between any two letters when the word is longer than the specified width.

**Example:**

```js
wrap(str, {cut: true});
```

## About

### Related projects

* [common-words](https://www.npmjs.com/package/common-words): Updated list (JSON) of the 100 most common words in the English language.
Useful for… [more](https://github.com/jonschlinkert/common-words) | [homepage](https://github.com/jonschlinkert/common-words "Updated list (JSON) of the 100 most common words in the English language. Useful for excluding these words from arrays.")
* [shuffle-words](https://www.npmjs.com/package/shuffle-words): Shuffle the words in a string and optionally the letters in each word using the… [more](https://github.com/jonschlinkert/shuffle-words) | [homepage](https://github.com/jonschlinkert/shuffle-words "Shuffle the words in a string and optionally the letters in each word using the Fisher-Yates algorithm. Useful for creating test fixtures, benchmarking samples, etc.")
* [unique-words](https://www.npmjs.com/package/unique-words): Return the unique words in a string or array. | [homepage](https://github.com/jonschlinkert/unique-words "Return the unique words in a string or array.")
* [wordcount](https://www.npmjs.com/package/wordcount): Count the words in a string. Support for english, CJK and Cyrillic. | [homepage](https://github.com/jonschlinkert/wordcount "Count the words in a string. Support for english, CJK and Cyrillic.")

### Contributing

Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new).

### Contributors

| **Commits** | **Contributor** |
| --- | --- |
| 43 | [jonschlinkert](https://github.com/jonschlinkert) |
| 2 | [lordvlad](https://github.com/lordvlad) |
| 2 | [hildjj](https://github.com/hildjj) |
| 1 | [danilosampaio](https://github.com/danilosampaio) |
| 1 | [2fd](https://github.com/2fd) |
| 1 | [toddself](https://github.com/toddself) |
| 1 | [wolfgang42](https://github.com/wolfgang42) |
| 1 | [zachhale](https://github.com/zachhale) |

### Building docs

_(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_

To generate the readme, run the following command:

```sh
$ npm install -g verbose/verb#dev verb-generate-readme && verb
```

### Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

```sh
$ npm install && npm test
```

### Author

**Jon Schlinkert**

* [github/jonschlinkert](https://github.com/jonschlinkert)
* [twitter/jonschlinkert](https://twitter.com/jonschlinkert)

### License

Copyright © 2017, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE).

***

_This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.6.0, on June 02, 2017._

# type-check [![Build Status](https://travis-ci.org/gkz/type-check.png?branch=master)](https://travis-ci.org/gkz/type-check)
<a name="type-check" />

`type-check` is a library which allows you to check the types of JavaScript values at runtime with a Haskell like type syntax. It is great for checking external input, for testing, or even for adding a bit of safety to your internal code. It is a major component of [levn](https://github.com/gkz/levn). MIT license. Version 0.4.0. Check out the [demo](http://gkz.github.io/type-check/).

For updates on `type-check`, [follow me on twitter](https://twitter.com/gkzahariev).
npm install type-check ## Quick Examples ```js // Basic types: var typeCheck = require('type-check').typeCheck; typeCheck('Number', 1); // true typeCheck('Number', 'str'); // false typeCheck('Error', new Error); // true typeCheck('Undefined', undefined); // true // Comment typeCheck('count::Number', 1); // true // One type OR another type: typeCheck('Number | String', 2); // true typeCheck('Number | String', 'str'); // true // Wildcard, matches all types: typeCheck('*', 2) // true // Array, all elements of a single type: typeCheck('[Number]', [1, 2, 3]); // true typeCheck('[Number]', [1, 'str', 3]); // false // Tuples, or fixed length arrays with elements of different types: typeCheck('(String, Number)', ['str', 2]); // true typeCheck('(String, Number)', ['str']); // false typeCheck('(String, Number)', ['str', 2, 5]); // false // Object properties: typeCheck('{x: Number, y: Boolean}', {x: 2, y: false}); // true typeCheck('{x: Number, y: Boolean}', {x: 2}); // false typeCheck('{x: Number, y: Maybe Boolean}', {x: 2}); // true typeCheck('{x: Number, y: Boolean}', {x: 2, y: false, z: 3}); // false typeCheck('{x: Number, y: Boolean, ...}', {x: 2, y: false, z: 3}); // true // A particular type AND object properties: typeCheck('RegExp{source: String, ...}', /re/i); // true typeCheck('RegExp{source: String, ...}', {source: 're'}); // false // Custom types: var opt = {customTypes: {Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; }}}}; typeCheck('Even', 2, opt); // true // Nested: var type = '{a: (String, [Number], {y: Array, ...}), b: Error{message: String, ...}}' typeCheck(type, {a: ['hi', [1, 2, 3], {y: [1, 'ms']}], b: new Error('oh no')}); // true ``` Check out the [type syntax format](#syntax) and [guide](#guide). ## Usage `require('type-check');` returns an object that exposes four properties. `VERSION` is the current version of the library as a string. `typeCheck`, `parseType`, and `parsedTypeCheck` are functions. ```js // typeCheck(type, input, options); typeCheck('Number', 2); // true // parseType(type); var parsedType = parseType('Number'); // object // parsedTypeCheck(parsedType, input, options); parsedTypeCheck(parsedType, 2); // true ``` ### typeCheck(type, input, options) `typeCheck` checks a JavaScript value `input` against `type` written in the [type format](#type-format) (and taking account the optional `options`) and returns whether the `input` matches the `type`. ##### arguments * type - `String` - the type written in the [type format](#type-format) which to check against * input - `*` - any JavaScript value, which is to be checked against the type * options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types) ##### returns `Boolean` - whether the input matches the type ##### example ```js typeCheck('Number', 2); // true ``` ### parseType(type) `parseType` parses string `type` written in the [type format](#type-format) into an object representing the parsed type. ##### arguments * type - `String` - the type written in the [type format](#type-format) which to parse ##### returns `Object` - an object in the parsed type format representing the parsed type ##### example ```js parseType('Number'); // [{type: 'Number'}] ``` ### parsedTypeCheck(parsedType, input, options) `parsedTypeCheck` checks a JavaScript value `input` against parsed `type` in the parsed type format (and taking account the optional `options`) and returns whether the `input` matches the `type`. 
Use this in conjunction with `parseType` if you are going to use a type more than once.

##### arguments
* type - `Object` - the type in the parsed type format which to check against
* input - `*` - any JavaScript value, which is to be checked against the type
* options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types)

##### returns
`Boolean` - whether the input matches the type

##### example
```js
parsedTypeCheck([{type: 'Number'}], 2); // true
var parsedType = parseType('String');
parsedTypeCheck(parsedType, 'str'); // true
```

<a name="type-format" />
## Type Format

### Syntax

White space is ignored. The root node is a __Types__.

* __Identifier__ = `[\$\w]+` - a group of any lower or upper case letters, numbers, underscores, or dollar signs - eg. `String`
* __Type__ = an `Identifier`, an `Identifier` followed by a `Structure`, just a `Structure`, or a wildcard `*` - eg. `String`, `Object{x: Number}`, `{x: Number}`, `Array{0: String, 1: Boolean, length: Number}`, `*`
* __Types__ = optionally a comment (an `Identifier` followed by a `::`), optionally the identifier `Maybe`, one or more `Type`, separated by `|` - eg. `Number`, `String | Date`, `Maybe Number`, `Maybe Boolean | String`
* __Structure__ = `Fields`, or a `Tuple`, or an `Array` - eg. `{x: Number}`, `(String, Number)`, `[Date]`
* __Fields__ = a `{`, followed by one or more `Field` separated by a comma `,` (trailing comma `,` is permitted), optionally an `...` (always preceded by a comma `,`), followed by a `}` - eg. `{x: Number, y: String}`, `{k: Function, ...}`
* __Field__ = an `Identifier`, followed by a colon `:`, followed by `Types` - eg. `x: Date | String`, `y: Boolean`
* __Tuple__ = a `(`, followed by one or more `Types` separated by a comma `,` (trailing comma `,` is permitted), followed by a `)` - eg. `(Date)`, `(Number, Date)`
* __Array__ = a `[` followed by exactly one `Types` followed by a `]` - eg. `[Boolean]`, `[Boolean | Null]`

### Guide

`type-check` uses `Object.toString` to find out the basic type of a value. Specifically,

```js
{}.toString.call(VALUE).slice(8, -1)
{}.toString.call(true).slice(8, -1) // 'Boolean'
```

A basic type, eg. `Number`, uses this check. This is much more versatile than using `typeof` - for example, with `document`, `typeof` produces `'object'` which isn't that useful, and our technique produces `'HTMLDocument'`.

You may check for multiple types by separating types with a `|`. The checker proceeds from left to right, and passes if the value is any of the types - eg. `String | Boolean` first checks if the value is a string, and then if it is a boolean. If it is none of those, then it returns false.

Adding a `Maybe` in front of a list of multiple types is the same as also checking for `Null` and `Undefined` - eg. `Maybe String` is equivalent to `Undefined | Null | String`.

You may add a comment to remind you of what the type is for by following an identifier with a `::` before a type (or multiple types). The comment is simply thrown out.

The wildcard `*` matches all types.

There are three types of structures for checking the contents of a value: 'fields', 'tuple', and 'array'.

If used by itself, a 'fields' structure will pass with any type of object as long as it is an instance of `Object` and the properties pass - this allows for duck typing - eg. `{x: Boolean}`.

To check if the properties pass, and the value is of a certain type, you can specify the type - eg. `Error{message: String}`.
If you want to make a field optional, you can simply use `Maybe` - eg. `{x: Boolean, y: Maybe String}` will still pass if `y` is undefined (or null). If you don't care if the value has properties beyond what you have specified, you can use the 'etc' operator `...` - eg. `{x: Boolean, ...}` will match an object with an `x` property that is a boolean, and with zero or more other properties. For an array, you must specify one or more types (separated by `|`) - it will pass for something of any length as long as each element passes the types provided - eg. `[Number]`, `[Number | String]`. A tuple checks for a fixed number of elements, each of a potentially different type. Each element is separated by a comma - eg. `(String, Number)`. An array and tuple structure check that the value is of type `Array` by default, but if another type is specified, they will check for that instead - eg. `Int32Array[Number]`. You can use the wildcard `*` to search for any type at all. Check out the [type precedence](https://github.com/zaboco/type-precedence) library for type-check. ## Options Options is an object. It is an optional parameter to the `typeCheck` and `parsedTypeCheck` functions. The only current option is `customTypes`. <a name="custom-types" /> ### Custom Types __Example:__ ```js var options = { customTypes: { Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; } } } }; typeCheck('Even', 2, options); // true typeCheck('Even', 3, options); // false ``` `customTypes` allows you to set up custom types for validation. The value of this is an object. The keys of the object are the types you will be matching. Each value of the object will be an object having a `typeOf` property - a string, and `validate` property - a function. The `typeOf` property is the type the value should be (optional - if not set only `validate` will be used), and `validate` is a function which should return true if the value is of that type. `validate` receives one parameter, which is the value that we are checking. ## Technical About `type-check` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It also uses the [prelude.ls](http://preludels.com/) library. # node-tar [![Build Status](https://travis-ci.org/npm/node-tar.svg?branch=master)](https://travis-ci.org/npm/node-tar) [Fast](./benchmarks) and full-featured Tar for Node.js The API is designed to mimic the behavior of `tar(1)` on unix systems. If you are familiar with how tar works, most of this will hopefully be straightforward for you. If not, then hopefully this module can teach you useful unix skills that may come in handy someday :) ## Background A "tar file" or "tarball" is an archive of file system entries (directories, files, links, etc.) The name comes from "tape archive". If you run `man tar` on almost any Unix command line, you'll learn quite a bit about what it can do, and its history. Tar has 5 main top-level commands: * `c` Create an archive * `r` Replace entries within an archive * `u` Update entries within an archive (ie, replace if they're newer) * `t` List out the contents of an archive * `x` Extract an archive to disk The other flags and options modify how this top level function works. ## High-Level API These 5 functions are the high-level API. All of them have a single-character name (for unix nerds familiar with `tar(1)`) as well as a long name (for everyone else). All the high-level functions take the following arguments, all three of which are optional and may be omitted. 1. 
`options` - An optional object specifying various options
2. `paths` - An array of paths to add or extract
3. `callback` - Called when the command is completed, if async. (If sync or no file specified, providing a callback throws a `TypeError`.)

If the command is sync (ie, if `options.sync=true`), then the callback is not allowed, since the action will be completed immediately.

If a `file` argument is specified, and the command is async, then a `Promise` is returned. In this case, if async, a callback may be provided which is called when the command is completed.

If a `file` option is not specified, then a stream is returned. For `create`, this is a readable stream of the generated archive. For `list` and `extract` this is a writable stream that an archive should be written into. If a file is not specified, then a callback is not allowed, because you're already getting a stream to work with.

`replace` and `update` only work on existing archives, and so require a `file` argument.

Sync commands without a file argument return a stream that acts on its input immediately in the same tick. For readable streams, this means that all of the data is immediately available by calling `stream.read()`. For writable streams, it will be acted upon as soon as it is provided, but this can be at any time.

### Warnings and Errors

Tar emits warnings and errors for recoverable and unrecoverable situations, respectively. In many cases, a warning only affects a single entry in an archive, or is simply informing you that it's modifying an entry to comply with the settings provided.

Unrecoverable warnings will always raise an error (ie, emit `'error'` on streaming actions, throw for non-streaming sync actions, reject the returned Promise for non-streaming async operations, or call a provided callback with an `Error` as the first argument). Recoverable errors will raise an error only if `strict: true` is set in the options.

Respond to (recoverable) warnings by listening to the `warn` event. Handlers receive 3 arguments:

- `code` String. One of the error codes below. This may not match `data.code`, which preserves the original error code from fs and zlib.
- `message` String. More details about the error.
- `data` Metadata about the error. An `Error` object for errors raised by fs and zlib. All fields are attached to errors raised by tar. Typically contains the following fields, as relevant:
  - `tarCode` The tar error code.
  - `code` Either the tar error code, or the error code set by the underlying system.
  - `file` The archive file being read or written.
  - `cwd` Working directory for creation and extraction operations.
  - `entry` The entry object (if it could be created) for `TAR_ENTRY_INFO`, `TAR_ENTRY_INVALID`, and `TAR_ENTRY_ERROR` warnings.
  - `header` The header object (if it could be created, and the entry could not be created) for `TAR_ENTRY_INFO` and `TAR_ENTRY_INVALID` warnings.
  - `recoverable` Boolean. If `false`, then the warning will emit an `error`, even in non-strict mode.

#### Error Codes

* `TAR_ENTRY_INFO` An informative error indicating that an entry is being modified, but otherwise processed normally. For example, removing `/` or `C:\` from absolute paths if `preservePaths` is not set.
* `TAR_ENTRY_INVALID` An indication that a given entry is not a valid tar archive entry, and will be skipped. This occurs when:
  - a checksum fails,
  - a `linkpath` is missing for a link type, or
  - a `linkpath` is provided for a non-link type.
  If every entry in a parsed archive raises a `TAR_ENTRY_INVALID` error, then the archive is presumed to be unrecoverably broken, and `TAR_BAD_ARCHIVE` will be raised.
* `TAR_ENTRY_ERROR` The entry appears to be a valid tar archive entry, but encountered an error which prevented it from being unpacked. This occurs when:
  - an unrecoverable fs error happens during unpacking,
  - an entry has `..` in the path and `preservePaths` is not set, or
  - an entry is extracting through a symbolic link, when `preservePaths` is not set.
* `TAR_ENTRY_UNSUPPORTED` An indication that a given entry is a valid archive entry, but of a type that is unsupported, and so will be skipped in archive creation or extracting.
* `TAR_ABORT` When parsing gzip-encoded archives, the parser will abort the parse process and raise a warning for any zlib errors encountered. Aborts are considered unrecoverable for both parsing and unpacking.
* `TAR_BAD_ARCHIVE` The archive file is totally hosed. This can happen for a number of reasons, and always occurs at the end of a parse or extract:
  - An entry body was truncated before seeing the full number of bytes.
  - The archive contained only invalid entries, indicating that it is likely not an archive, or at least, not an archive this library can parse.

  `TAR_BAD_ARCHIVE` is considered informative for parse operations, but unrecoverable for extraction. Note that, if encountered at the end of an extraction, tar WILL still have extracted as much as it could from the archive, so there may be some garbage files to clean up.

Errors that occur deeper in the system (ie, either the filesystem or zlib) will have their error codes left intact, and a `tarCode` matching one of the above will be added to the warning metadata or the raised error object.

Errors generated by tar will have one of the above codes set as the `error.code` field as well, but since errors originating in zlib or fs will have their original codes, it's better to read `error.tarCode` if you wish to see how tar is handling the issue.

### Examples

The API mimics the `tar(1)` command line functionality, with aliases for more human-readable option and function names. The goal is that if you know how to use `tar(1)` in Unix, then you know how to use `require('tar')` in JavaScript.

To replicate `tar czf my-tarball.tgz files and folders`, you'd do:

```js
tar.c(
  {
    gzip: <true|gzip options>,
    file: 'my-tarball.tgz'
  },
  ['some', 'files', 'and', 'folders']
).then(_ => { .. tarball has been created .. })
```

To replicate `tar cz files and folders > my-tarball.tgz`, you'd do:

```js
tar.c( // or tar.create
  {
    gzip: <true|gzip options>
  },
  ['some', 'files', 'and', 'folders']
).pipe(fs.createWriteStream('my-tarball.tgz'))
```

To replicate `tar xf my-tarball.tgz` you'd do:

```js
tar.x(  // or tar.extract(
  {
    file: 'my-tarball.tgz'
  }
).then(_=> { .. tarball has been dumped in cwd .. })
```

To replicate `cat my-tarball.tgz | tar x -C some-dir --strip=1`:

```js
fs.createReadStream('my-tarball.tgz').pipe(
  tar.x({
    strip: 1,
    C: 'some-dir' // alias for cwd:'some-dir', also ok
  })
)
```

To replicate `tar tf my-tarball.tgz`, do this:

```js
tar.t({
  file: 'my-tarball.tgz',
  onentry: entry => { .. do whatever with it .. }
})
```

To replicate `cat my-tarball.tgz | tar t` do:

```js
fs.createReadStream('my-tarball.tgz')
  .pipe(tar.t())
  .on('entry', entry => { .. do whatever with it .. })
```

To do anything synchronous, add `sync: true` to the options. Note that sync functions don't take a callback and don't return a promise. When the function returns, it's already done.
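For instance, a synchronous extraction and listing might look like this (a minimal sketch, assuming a `my-tarball.tgz` file exists in the current directory):

```js
const tar = require('tar')

// Extract synchronously; the files are on disk by the time this returns.
tar.x({ file: 'my-tarball.tgz', sync: true })

// List synchronously; onentry fires for each entry as it is parsed.
tar.t({
  file: 'my-tarball.tgz',
  sync: true,
  onentry: entry => console.log(entry.path)
})
```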
Sync methods without a file argument return a sync stream, which flushes immediately. But, of course, it still won't be done until you `.end()` it. To filter entries, add `filter: <function>` to the options. Tar-creating methods call the filter with `filter(path, stat)`. Tar-reading methods (including extraction) call the filter with `filter(path, entry)`. The filter is called in the `this`-context of the `Pack` or `Unpack` stream object. The arguments list to `tar t` and `tar x` specify a list of filenames to extract or list, so they're equivalent to a filter that tests if the file is in the list. For those who _aren't_ fans of tar's single-character command names: ``` tar.c === tar.create tar.r === tar.replace (appends to archive, file is required) tar.u === tar.update (appends if newer, file is required) tar.x === tar.extract tar.t === tar.list ``` Keep reading for all the command descriptions and options, as well as the low-level API that they are built on. ### tar.c(options, fileList, callback) [alias: tar.create] Create a tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Write the tarball archive to the specified filename. If this is specified, then the callback will be fired when the file has been written, and a promise will be returned that resolves when the file is written. If a filename is not specified, then a Readable Stream will be returned which will emit the file data. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. If this is set, and a file is not provided, then the resulting stream will already have the data ready to `read` or `emit('data')` as soon as you request it. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `mode` The mode to set on the created file archive - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. 
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`]
- `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`.

The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs.

- `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `readdirCache` A Map object that caches calls to `readdir`.
- `jobs` A number specifying how many concurrent jobs to run. Defaults to 4.
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB.

### tar.x(options, fileList, callback) [alias: tar.extract]

Extract a tarball archive. The `fileList` is an array of paths to extract from the tarball. If no paths are provided, then all the entries are extracted. If the archive is gzipped, then tar will detect this and unzip it.

Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode.

Most extraction errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then the extraction will fail completely.

The following options are supported:

- `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. [Alias: `C`]
- `file` The archive file to extract. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`]
- `sync` Create files and directories synchronously.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it.
- `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. [Alias: `keep-newer`, `keep-newer-files`]
- `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. [Alias: `k`, `keep-existing`]
- `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. [Alias: `P`]
- `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. [Alias: `U`]
- `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. [Alias: `strip-components`, `stripComponents`]
- `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors")
- `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive.
This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. [Alias: `p`] - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. [Alias: `m`, `no-mtime`] - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync extractions. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### tar.t(options, fileList, callback) [alias: tar.list] List the contents of a tarball archive. The `fileList` is an array of paths to list from the tarball. If no paths are provided, then all the entries are listed. If the archive is gzipped, then tar will detect this and unzip it. Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. However, they don't emit `'data'` or `'end'` events. (If you want to get actual readable entries, use the `tar.Parse` class instead.) The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. [Alias: `C`] - `file` The archive file to list. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Read the specified file synchronously. (This has no effect when a file option isn't specified, because entries are emitted as fast as they are parsed from the stream anyway.) - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. 
This is important for when both `file` and `sync` are set, because it will be called synchronously. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noResume` By default, `entry` streams are resumed immediately after the call to `onentry`. Set `noResume: true` to suppress this behavior. Note that by opting into this, the stream will never complete until the entry data is consumed. ### tar.u(options, fileList, callback) [alias: tar.update] Add files to an archive if they are newer than the entry already in the tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ### tar.r(options, fileList, callback) [alias: tar.replace] Add files to an existing archive. Because later entries override earlier entries, this effectively replaces any existing entries. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. 
Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ## Low-Level API ### class tar.Pack A readable tar stream. Has all the standard readable stream interface stuff. `'data'` and `'end'` events, `read()` method, `pause()` and `resume()`, etc. #### constructor(options) The following options are supported: - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. 
- `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. #### add(path) Adds an entry to the archive. Returns the Pack stream. #### write(path) Adds an entry to the archive. Returns true if flushed. #### end() Finishes the archive. ### class tar.Pack.Sync Synchronous version of `tar.Pack`. ### class tar.Unpack A writable stream that unpacks a tar archive onto the file system. All the normal writable stream stuff is supported. `write()` and `end()` methods, `'drain'` events, etc. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. `'close'` is emitted when it's done writing stuff to the file system. Most unpack errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then an error will be emitted. #### constructor(options) - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. 
- `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. - `win32` True if on a windows platform. Causes behavior where filenames containing `<|>?` chars are converted to windows-compatible values while being unpacked. - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `strict` Treat warnings as crash-worthy errors. Default false. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") ### class tar.Unpack.Sync Synchronous version of `tar.Unpack`. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync unpack streams. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### class tar.Parse A writable stream that parses a tar archive stream. All the standard writable stream stuff is supported. If the archive is gzipped, then tar will detect this and unzip it. Emits `'entry'` events with `tar.ReadEntry` objects, which are themselves readable streams that you can pipe wherever. Each `entry` will not emit until the one before it is flushed through, so make sure to either consume the data (with `on('data', ...)` or `.pipe(...)`) or throw it away with `.resume()` to keep the stream flowing. #### constructor(options) Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. The following options are supported: - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") #### abort(error) Stop all parsing activities. This is called when there are zlib errors. It also emits an unrecoverable warning with the error provided. 
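The following is a minimal usage sketch of the `Parse` class described above (the archive filename is hypothetical): it prints the path of every entry in a (possibly gzipped) tarball via the `onentry` option, and resumes each entry so the parser keeps flowing.

```javascript
// Minimal sketch: list entry paths with the low-level tar.Parse stream.
// 'archive.tgz' is a hypothetical filename.
const fs = require('fs');
const tar = require('tar');

const parser = new tar.Parse({
  onentry: entry => {
    console.log(entry.path);
    // Each entry is itself a readable stream; discard its body so the
    // next entry can be emitted.
    entry.resume();
  }
});

fs.createReadStream('archive.tgz').pipe(parser);
```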
### class tar.ReadEntry extends [MiniPass](http://npm.im/minipass)

A representation of an entry that is being read out of a tar archive.

It has the following fields:

- `extended` The extended metadata object provided to the constructor.
- `globalExtended` The global extended metadata object provided to the constructor.
- `remain` The number of bytes remaining to be written into the stream.
- `blockRemain` The number of 512-byte blocks remaining to be written into the stream.
- `ignore` Whether this entry should be ignored.
- `meta` True if this represents metadata about the next entry, false if it represents a filesystem object.
- All the fields from the header, extended header, and global extended header are added to the ReadEntry object. So it has `path`, `type`, `size`, `mode`, and so on.

#### constructor(header, extended, globalExtended)

Create a new ReadEntry object with the specified header, extended header, and global extended header values.

### class tar.WriteEntry extends [MiniPass](http://npm.im/minipass)

A representation of an entry that is being written from the file system into a tar archive.

Emits data for the Header, and for the Pax Extended Header if one is required, as well as any body data.

Creating a WriteEntry for a directory does not also create WriteEntry objects for all of the directory contents.

It has the following fields:

- `path` The path field that will be written to the archive. By default, this is also the path from the cwd to the file system object.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`.
- `myuid` If supported, the uid of the user running the current process.
- `myuser` The `env.USER` string if set, or `''`. Set as the entry `uname` field if the file's `uid` matches `this.myuid`.
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB.
- `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`.
- `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/` and filenames containing the windows-compatible forms of `<|>?:` characters are converted to actual `<|>?:` characters in the archive.
- `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.

#### constructor(path, options)

`path` is the path of the entry as it is written in the archive.
The following options are supported:

- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`.
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB.
- `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`.
- `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/`.
- `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors")
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.
- `umask` Set to restrict the modes on the entries in the archive, somewhat like how umask works on file creation. Defaults to `process.umask()` on unix systems, or `0o22` on Windows.

#### warn(message, data)

If strict, emit an error with the provided message. Otherwise, emit a `'warn'` event with the provided message and data.

### class tar.WriteEntry.Sync

Synchronous version of tar.WriteEntry.

### class tar.WriteEntry.Tar

A version of tar.WriteEntry that gets its data from a tar.ReadEntry instead of from the filesystem.

#### constructor(readEntry, options)

`readEntry` is the entry being read out of another archive.

The following options are supported:

- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors")
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.

### class tar.Header

A class for reading and writing header blocks.

It has the following fields:

- `nullBlock` True if decoding a block which is entirely composed of `0x00` null bytes. (Useful because tar files are terminated by at least 2 null blocks.)
- `cksumValid` True if the checksum in the header is valid, false otherwise.
- `needPax` True if the values, as encoded, will require a Pax extended header.
- `path` The path of the entry.
- `mode` The 4 lowest-order octal digits of the file mode.
  That is, read/write/execute permissions for world, group, and owner, and the setuid, setgid, and sticky bits.
- `uid` Numeric user id of the file owner
- `gid` Numeric group id of the file owner
- `size` Size of the file in bytes
- `mtime` Modified time of the file
- `cksum` The checksum of the header. This is generated by adding all the bytes of the header block, treating the checksum field itself as all ascii space characters (that is, `0x20`).
- `type` The human-readable name of the type of entry this represents, or the alphanumeric key if unknown.
- `typeKey` The alphanumeric key for the type of entry this header represents.
- `linkpath` The target of Link and SymbolicLink entries.
- `uname` Human-readable user name of the file owner
- `gname` Human-readable group name of the file owner
- `devmaj` The major portion of the device number. Always `0` for files, directories, and links.
- `devmin` The minor portion of the device number. Always `0` for files, directories, and links.
- `atime` File access time.
- `ctime` File change time.

#### constructor(data, [offset=0])

`data` is optional. It is either a Buffer that should be interpreted as a tar Header starting at the specified offset and continuing for 512 bytes, or a data object of keys and values to set on the header object, and eventually encode as a tar Header.

#### decode(block, offset)

Decode the provided buffer starting at the specified offset. Buffer length must be greater than 512 bytes.

#### set(data)

Set the fields in the data object.

#### encode(buffer, offset)

Encode the header fields into the buffer at the specified offset. Returns `this.needPax` to indicate whether a Pax Extended Header is required to properly encode the specified data.

### class tar.Pax

An object representing a set of key-value pairs in a Pax extended header entry.

It has the following fields. Where the same name is used, they have the same semantics as the tar.Header field of the same name.

- `global` True if this represents a global extended header, or false if it is for a single entry.
- `atime`
- `charset`
- `comment`
- `ctime`
- `gid`
- `gname`
- `linkpath`
- `mtime`
- `path`
- `size`
- `uid`
- `uname`
- `dev`
- `ino`
- `nlink`

#### constructor(object, global)

Set the fields set in the object. `global` is a boolean that defaults to false.

#### encode()

Return a Buffer containing the header and body for the Pax extended header entry, or `null` if there is nothing to encode.

#### encodeBody()

Return a string representing the body of the pax extended header entry.

#### encodeField(fieldName)

Return a string representing the key/value encoding for the specified fieldName, or `''` if the field is unset.

### tar.Pax.parse(string, extended, global)

Return a new Pax object created by parsing the contents of the string provided. If the `extended` object is set, then also add the fields from that object. (This is necessary because multiple metadata entries can occur in sequence.)

### tar.types

A translation table for the `type` field in tar headers.

#### tar.types.name.get(code)

Get the human-readable name for a given alphanumeric code.

#### tar.types.code.get(name)

Get the alphanumeric code for a given human-readable name.

Railroad-diagram Generator
==========================

This is a small js library for generating railroad diagrams (like what [JSON.org](http://json.org) uses) using SVG.

Railroad diagrams are a way of visually representing a grammar in a form that is more readable than using regular expressions or BNF.
I think (though I haven't given it a lot of thought yet) that if it's easy to write a context-free grammar for the language, the corresponding railroad diagram will be easy as well. There are several railroad-diagram generators out there, but none of them had the visual appeal I wanted. [Here's an example of how they look!](http://www.xanthir.com/etc/railroad-diagrams/example.html) And [here's an online generator for you to play with and get SVG code from!](http://www.xanthir.com/etc/railroad-diagrams/generator.html) The library now exists in a Python port as well! See the information further down. Details ------- To use the library, just include the js and css files, and then call the Diagram() function. Its arguments are the components of the diagram (Diagram is a special form of Sequence). An alternative to Diagram() is ComplexDiagram() which is used to describe a complex type diagram. Components are either leaves or containers. The leaves: * Terminal(text) or a bare string - represents literal text * NonTerminal(text) - represents an instruction or another production * Comment(text) - a comment * Skip() - an empty line The containers: * Sequence(children) - like simple concatenation in a regex * Choice(index, children) - like | in a regex. The index argument specifies which child is the "normal" choice and should go in the middle * Optional(child, skip) - like ? in a regex. A shorthand for `Choice(1, [Skip(), child])`. If the optional `skip` parameter has the value `"skip"`, it instead puts the Skip() in the straight-line path, for when the "normal" behavior is to omit the item. * OneOrMore(child, repeat) - like + in a regex. The 'repeat' argument is optional, and specifies something that must go between the repetitions. * ZeroOrMore(child, repeat, skip) - like * in a regex. A shorthand for `Optional(OneOrMore(child, repeat))`. The optional `skip` parameter is identical to Optional(). For convenience, each component can be called with or without `new`. If called without `new`, the container components become n-ary; that is, you can say either `new Sequence([A, B])` or just `Sequence(A,B)`. After constructing a Diagram, call `.format(...padding)` on it, specifying 0-4 padding values (just like CSS) for some additional "breathing space" around the diagram (the paddings default to 20px). The result can either be `.toString()`'d for the markup, or `.toSVG()`'d for an `<svg>` element, which can then be immediately inserted to the document. As a convenience, Diagram also has an `.addTo(element)` method, which immediately converts it to SVG and appends it to the referenced element with default paddings. `element` defaults to `document.body`. Options ------- There are a few options you can tweak, at the bottom of the file. Just tweak either until the diagram looks like what you want. You can also change the CSS file - feel free to tweak to your heart's content. Note, though, that if you change the text sizes in the CSS, you'll have to go adjust the metrics for the leaf nodes as well. * VERTICAL_SEPARATION - sets the minimum amount of vertical separation between two items. Note that the stroke width isn't counted when computing the separation; this shouldn't be relevant unless you have a very small separation or very large stroke width. * ARC_RADIUS - the radius of the arcs used in the branching containers like Choice. This has a relatively large effect on the size of non-trivial diagrams. Both tight and loose values look good, depending on what you're going for. 
* DIAGRAM_CLASS - the class set on the root `<svg>` element of each diagram, for use in the CSS stylesheet. * STROKE_ODD_PIXEL_LENGTH - the default stylesheet uses odd pixel lengths for 'stroke'. Due to rasterization artifacts, they look best when the item has been translated half a pixel in both directions. If you change the styling to use a stroke with even pixel lengths, you'll want to set this variable to `false`. * INTERNAL_ALIGNMENT - when some branches of a container are narrower than others, this determines how they're aligned in the extra space. Defaults to "center", but can be set to "left" or "right". Caveats ------- At this early stage, the generator is feature-complete and works as intended, but still has several TODOs: * The font-sizes are hard-coded right now, and the font handling in general is very dumb - I'm just guessing at some metrics that are probably "good enough" rather than measuring things properly. Python Port ----------- In addition to the canonical JS version, the library now exists as a Python library as well. Using it is basically identical. The config variables are globals in the file, and so may be adjusted either manually or via tweaking from inside your program. The main difference from the JS port is how you extract the string from the Diagram. You'll find a `writeSvg(writerFunc)` method on `Diagram`, which takes a callback of one argument and passes it the string form of the diagram. For example, it can be used like `Diagram(...).writeSvg(sys.stdout.write)` to write to stdout. **Note**: the callback will be called multiple times as it builds up the string, not just once with the whole thing. If you need it all at once, consider something like a `StringIO` as an easy way to collect it into a single string. License ------- This document and all associated files in the github project are licensed under [CC0](http://creativecommons.org/publicdomain/zero/1.0/) ![](http://i.creativecommons.org/p/zero/1.0/80x15.png). This means you can reuse, remix, or otherwise appropriate this project for your own use **without restriction**. (The actual legal meaning can be found at the above link.) Don't ask me for permission to use any part of this project, **just use it**. I would appreciate attribution, but that is not required by the license. # axios // helpers The modules found in `helpers/` should be generic modules that are _not_ specific to the domain logic of axios. These modules could theoretically be published to npm on their own and consumed by other modules or apps. Some examples of generic modules are things like: - Browser polyfills - Managing cookies - Parsing HTTP headers # `asbuild` [![Stars](https://img.shields.io/github/stars/AssemblyScript/asbuild.svg?style=social&maxAge=3600&label=Star)](https://github.com/AssemblyScript/asbuild/stargazers) *A simple build tool for [AssemblyScript](https://assemblyscript.org) projects, similar to `cargo`, etc.* ## 🚩 Table of Contents - [Installing](#-installing) - [Usage](#-usage) - [`asb init`](#asb-init---create-an-empty-project) - [`asb test`](#asb-test---run-as-pect-tests) - [`asb fmt`](#asb-fmt---format-as-files-using-eslint) - [`asb run`](#asb-run---run-a-wasi-binary) - [`asb build`](#asb-build---compile-the-project-using-asc) - [Background](#-background) ## 🔧 Installing Install it globally ``` npm install -g asbuild ``` Or, locally as dev dependencies ``` npm install --save-dev asbuild ``` ## 💡 Usage ``` Build tool for AssemblyScript projects. 
Usage: asb [command] [options] Commands: asb Alias of build command, to maintain back-ward compatibility [default] asb build Compile a local package and all of its dependencies [aliases: compile, make] asb init [baseDir] Create a new AS package in an given directory asb test Run as-pect tests asb fmt [paths..] This utility formats current module using eslint. [aliases: format, lint] Options: --version Show version number [boolean] --help Show help [boolean] ``` ### `asb init` - Create an empty project ``` asb init [baseDir] Create a new AS package in an given directory Positionals: baseDir Create a sample AS project in this directory [string] [default: "."] Options: --version Show version number [boolean] --help Show help [boolean] --yes Skip the interactive prompt [boolean] [default: false] ``` ### `asb test` - Run as-pect tests ``` asb test Run as-pect tests USAGE: asb test [options] -- [aspect_options] Options: --version Show version number [boolean] --help Show help [boolean] --verbose, --vv Print out arguments passed to as-pect [boolean] [default: false] ``` ### `asb fmt` - Format AS files using ESlint ``` asb fmt [paths..] This utility formats current module using eslint. Positionals: paths Paths to format [array] [default: ["."]] Initialisation: --init Generates recommended eslint config for AS Projects [boolean] Miscellaneous --lint, --dry-run Tries to fix problems without saving the changes to the file system [boolean] [default: false] Options: --version Show version number [boolean] --help Show help ``` ### `asb run` - Run a WASI binary ``` asb run Run a WASI binary USAGE: asb run [options] [binary path] -- [binary options] Positionals: binary path to Wasm binary [string] [required] Options: --version Show version number [boolean] --help Show help [boolean] --preopen, -p comma separated list of directories to open. [default: "."] ``` ### `asb build` - Compile the project using asc ``` asb build Compile a local package and all of its dependencies USAGE: asb build [entry_file] [options] -- [asc_options] Options: --version Show version number [boolean] --help Show help [boolean] --baseDir, -d Base directory of project. [string] [default: "."] --config, -c Path to asconfig file [string] [default: "./asconfig.json"] --wat Output wat file to outDir [boolean] [default: false] --outDir Directory to place built binaries. Default "./build/<target>/" [string] --target Target for compilation [string] [default: "release"] --verbose Print out arguments passed to asc [boolean] [default: false] Examples: asb build Build release of 'assembly/index.ts to build/release/packageName.wasm asb build --target release Build a release binary asb build -- --measure Pass argument to 'asc' ``` #### Defaults ##### Project structure ``` project/ package.json asconfig.json assembly/ index.ts build/ release/ project.wasm debug/ project.wasm ``` - If no entry file passed and no `entry` field is in `asconfig.json`, `project/assembly/index.ts` is assumed. - `asconfig.json` allows for options for different compile targets, e.g. release, debug, etc. `asc` defaults to the release target. - The default build directory is `./build`, and artifacts are placed at `./build/<target>/packageName.wasm`. ##### Workspaces If a `workspace` field is added to a top level `asconfig.json` file, then each path in the array is built and placed into the top level `outDir`. 
For example, `asconfig.json`:

```json
{
  "workspaces": ["a", "b"]
}
```

Running `asb` in the directory below will use the top level build directory to place all the binaries.

```
project/
  package.json
  asconfig.json
  a/
    asconfig.json
    assembly/
      index.ts
  b/
    asconfig.json
    assembly/
      index.ts
  build/
    release/
      a.wasm
      b.wasm
    debug/
      a.wasm
      b.wasm
```

To see an example in action, check out the [test workspace](./tests/build_test).

## 📖 Background

Asbuild started as a wrapper around `asc` to provide an easier CLI interface and has now been extended to support other commands like `init`, `test` and `fmt`, just like `cargo`, to become a one-stop build tool for AS projects.

## 📜 License

This library is provided under the open-source [MIT license](https://choosealicense.com/licenses/mit/).

[![build status](https://app.travis-ci.com/dankogai/js-base64.svg)](https://app.travis-ci.com/github/dankogai/js-base64)

# base64.js

Yet another [Base64] transcoder.

[Base64]: http://en.wikipedia.org/wiki/Base64

## Install

```shell
$ npm install --save js-base64
```

## Usage

### In Browser

Locally…

```html
<script src="base64.js"></script>
```

… or Directly from CDN. In which case you don't even need to install.

```html
<script src="https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.min.js"></script>
```

This good old way loads `Base64` in the global context (`window`). Though `Base64.noConflict()` is made available, you should consider using ES6 Module to avoid tainting `window`.

### As an ES6 Module

locally…

```javascript
import { Base64 } from 'js-base64';
```

```javascript
// or if you prefer no Base64 namespace
import { encode, decode } from 'js-base64';
```

or even remotely.

```html
<script type="module">
// note jsdelivr.net does not automatically minify .mjs
import { Base64 } from 'https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.mjs';
</script>
```

```html
<script type="module">
// or if you prefer no Base64 namespace
import { encode, decode } from 'https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.mjs';
</script>
```

### node.js (commonjs)

```javascript
const {Base64} = require('js-base64');
```

Unlike the case above, the global context is no longer modified. You can also use [esm] to `import` instead of `require`.
[esm]: https://github.com/standard-things/esm

```javascript
require=require('esm')(module);
import {Base64} from 'js-base64';
```

## SYNOPSIS

```javascript
let latin = 'dankogai';
let utf8 = '小飼弾';
let u8s = new Uint8Array([100,97,110,107,111,103,97,105]);
Base64.encode(latin);             // ZGFua29nYWk=
Base64.encode(latin, true);       // ZGFua29nYWk skips padding
Base64.encodeURI(latin);          // ZGFua29nYWk
Base64.btoa(latin);               // ZGFua29nYWk=
Base64.btoa(utf8);                // raises exception
Base64.fromUint8Array(u8s);       // ZGFua29nYWk=
Base64.fromUint8Array(u8s, true); // ZGFua29nYWk which is URI safe
Base64.encode(utf8);              // 5bCP6aO85by+
Base64.encode(utf8, true)         // 5bCP6aO85by-
Base64.encodeURI(utf8);           // 5bCP6aO85by-
```

```javascript
Base64.decode('ZGFua29nYWk=');      // dankogai
Base64.decode('ZGFua29nYWk');       // dankogai
Base64.atob('ZGFua29nYWk=');        // dankogai
Base64.atob('5bCP6aO85by+');        // raw UTF-8 bytes of '小飼弾', which is nonsense as text
Base64.toUint8Array('ZGFua29nYWk=');// u8s above
Base64.decode('5bCP6aO85by+');      // 小飼弾
// note .decodeURI() is unnecessary since it accepts both flavors
Base64.decode('5bCP6aO85by-');      // 小飼弾
```

```javascript
Base64.isValid(0);      // false: 0 is not string
Base64.isValid('');     // true: a valid Base64-encoded empty byte
Base64.isValid('ZA=='); // true: a valid Base64-encoded 'd'
Base64.isValid('Z A='); // true: whitespaces are okay
Base64.isValid('ZA');   // true: padding ='s can be omitted
Base64.isValid('++');   // true: can be non URL-safe
Base64.isValid('--');   // true: or URL-safe
Base64.isValid('+-');   // false: can't mix both
```

### Built-in Extensions

By default `Base64` leaves built-in prototypes untouched. But you can extend them as below.

```javascript
// you have to explicitly extend String.prototype
Base64.extendString();
// once extended, you can do the following
'dankogai'.toBase64();        // ZGFua29nYWk=
'小飼弾'.toBase64();           // 5bCP6aO85by+
'小飼弾'.toBase64(true);       // 5bCP6aO85by-
'小飼弾'.toBase64URI();        // 5bCP6aO85by- an alias of .toBase64(true)
'小飼弾'.toBase64URL();        // 5bCP6aO85by- an alias of .toBase64URI()
'ZGFua29nYWk='.fromBase64();  // dankogai
'5bCP6aO85by+'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.toUint8Array();// a Uint8Array of the UTF-8 bytes of '小飼弾'
```

```javascript
// you have to explicitly extend Uint8Array.prototype
Base64.extendUint8Array();
// once extended, you can do the following
u8s.toBase64();    // 'ZGFua29nYWk='
u8s.toBase64URI(); // 'ZGFua29nYWk'
u8s.toBase64URL(); // 'ZGFua29nYWk' an alias of .toBase64URI()
```

```javascript
// extend all at once
Base64.extendBuiltins()
```

## `.decode()` vs `.atob` (and `.encode()` vs `btoa()`)

Suppose you have:

```
var pngBase64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=";
```

Which is a Base64-encoded 1x1 transparent PNG. **DO NOT USE** `Base64.decode(pngBase64)`. Use `Base64.atob(pngBase64)` instead. `Base64.decode()` decodes to a UTF-8 string while `Base64.atob()` decodes to bytes, which is compatible with the browser built-in `atob()` (which is absent in node.js). The same rule applies to the opposite direction. Or even better, use `Base64.toUint8Array(pngBase64)`.

### If you really, really need an ES5 version

You can transpile it to ES5 that runs on IEs before 11. Do the following in your shell.

```shell
$ make base64.es5.js
```

## Brief History

* Since version 3.3 it is written in TypeScript. Now `base64.mjs` is compiled from `base64.ts` then `base64.js` is generated from `base64.mjs`.
* Since version 3.7 `base64.js` is ES5-compatible again (hence IE11-compatible).
* Since 3.0 `js-base64` switch to ES2015 module so it is no longer compatible with legacy browsers like IE (see above) <p align="center"> <a href="https://assemblyscript.org" target="_blank" rel="noopener"><img width="100" src="https://avatars1.githubusercontent.com/u/28916798?s=200&v=4" alt="AssemblyScript logo"></a> </p> <p align="center"> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3ATest"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Test/master?label=test&logo=github" alt="Test status" /></a> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3APublish"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Publish/master?label=publish&logo=github" alt="Publish status" /></a> <a href="https://www.npmjs.com/package/assemblyscript"><img src="https://img.shields.io/npm/v/assemblyscript.svg?label=compiler&color=007acc&logo=npm" alt="npm compiler version" /></a> <a href="https://www.npmjs.com/package/@assemblyscript/loader"><img src="https://img.shields.io/npm/v/@assemblyscript/loader.svg?label=loader&color=007acc&logo=npm" alt="npm loader version" /></a> <a href="https://discord.gg/assemblyscript"><img src="https://img.shields.io/discord/721472913886281818.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2" alt="Discord online" /></a> </p> <p align="justify"><strong>AssemblyScript</strong> compiles a strict variant of <a href="http://www.typescriptlang.org">TypeScript</a> (basically JavaScript with types) to <a href="http://webassembly.org">WebAssembly</a> using <a href="https://github.com/WebAssembly/binaryen">Binaryen</a>. It generates lean and mean WebAssembly modules while being just an <code>npm install</code> away.</p> <h3 align="center"> <a href="https://assemblyscript.org">About</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/introduction.html">Introduction</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/quick-start.html">Quick&nbsp;start</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/examples.html">Examples</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/development.html">Development&nbsp;instructions</a> </h3> <br> <h2 align="center">Contributors</h2> <p align="center"> <a href="https://assemblyscript.org/#contributors"><img src="https://assemblyscript.org/contributors.svg" alt="Contributor logos" width="720" /></a> </p> <h2 align="center">Thanks to our sponsors!</h2> <p align="justify">Most of the core team members and most contributors do this open source work in their free time. If you use AssemblyScript for a serious task or plan to do so, and you'd like us to invest more time on it, <a href="https://opencollective.com/assemblyscript/donate" target="_blank" rel="noopener">please donate</a> to our <a href="https://opencollective.com/assemblyscript" target="_blank" rel="noopener">OpenCollective</a>. By sponsoring this project, your logo will show up below. 
Thank you so much for your support!</p>

<p align="center">
  <a href="https://assemblyscript.org/#sponsors"><img src="https://assemblyscript.org/sponsors.svg" alt="Sponsor logos" width="720" /></a>
</p>

[![npm version](https://img.shields.io/npm/v/espree.svg)](https://www.npmjs.com/package/espree)
[![Build Status](https://travis-ci.org/eslint/espree.svg?branch=master)](https://travis-ci.org/eslint/espree)
[![npm downloads](https://img.shields.io/npm/dm/espree.svg)](https://www.npmjs.com/package/espree)
[![Bountysource](https://www.bountysource.com/badge/tracker?tracker_id=9348450)](https://www.bountysource.com/trackers/9348450-eslint?utm_source=9348450&utm_medium=shield&utm_campaign=TRACKER_BADGE)

# Espree

Espree started out as a fork of [Esprima](http://esprima.org) v1.2.2, the last stable published release of Esprima before work on ECMAScript 6 began. Espree is now built on top of [Acorn](https://github.com/ternjs/acorn), which has a modular architecture that allows extension of core functionality. The goal of Espree is to produce output that is similar to Esprima with a similar API so that it can be used in place of Esprima.

## Usage

Install:

```
npm i espree
```

And in your Node.js code:

```javascript
const espree = require("espree");

const ast = espree.parse(code);
```

## API

### `parse()`

`parse` parses the given code and returns an abstract syntax tree (AST). It takes two parameters.

- `code` [string]() - the code which needs to be parsed.
- `options (Optional)` [Object]() - read more about this [here](#options).

```javascript
const espree = require("espree");

const ast = espree.parse(code, options);
```

**Example :**

```js
const ast = espree.parse('let foo = "bar"', { ecmaVersion: 6 });
console.log(ast);
```

<details><summary>Output</summary>
<p>

```
Node {
  type: 'Program',
  start: 0,
  end: 15,
  body: [
    Node {
      type: 'VariableDeclaration',
      start: 0,
      end: 15,
      declarations: [Array],
      kind: 'let'
    }
  ],
  sourceType: 'script'
}
```

</p>
</details>

### `tokenize()`

`tokenize` returns the tokens of a given code. It takes two parameters.

- `code` [string]() - the code which needs to be parsed.
- `options (Optional)` [Object]() - read more about this [here](#options).

Even if `options` is empty or undefined, or `options.tokens` is `false`, it is set to `true` internally in order to produce the `tokens` array.

**Example :**

```js
const tokens = espree.tokenize('let foo = "bar"', { ecmaVersion: 6 });
console.log(tokens);
```

<details><summary>Output</summary>
<p>

```
Token { type: 'Keyword', value: 'let', start: 0, end: 3 },
Token { type: 'Identifier', value: 'foo', start: 4, end: 7 },
Token { type: 'Punctuator', value: '=', start: 8, end: 9 },
Token { type: 'String', value: '"bar"', start: 10, end: 15 }
```

</p>
</details>

### `version`

Returns the current `espree` version.

### `VisitorKeys`

Returns all visitor keys for traversing the AST from [eslint-visitor-keys](https://github.com/eslint/eslint-visitor-keys).

### `latestEcmaVersion`

Returns the latest ECMAScript version supported by `espree`.

### `supportedEcmaVersions`

Returns an array of all supported ECMAScript versions.

## Options

```js
const options = {
    // attach range information to each node
    range: false,

    // attach line/column location information to each node
    loc: false,

    // create a top-level comments array containing all comments
    comment: false,

    // create a top-level tokens array containing all tokens
    tokens: false,

    // Set to 3, 5 (default), 6, 7, 8, 9, 10, 11, or 12 to specify the version of ECMAScript syntax you want to use.
    // You can also set to 2015 (same as 6), 2016 (same as 7), 2017 (same as 8), 2018 (same as 9), 2019 (same as 10), 2020 (same as 11), or 2021 (same as 12) to use the year-based naming.
    ecmaVersion: 5,

    // specify which type of script you're parsing ("script" or "module")
    sourceType: "script",

    // specify additional language features
    ecmaFeatures: {

        // enable JSX parsing
        jsx: false,

        // enable return in global scope
        globalReturn: false,

        // enable implied strict mode (if ecmaVersion >= 5)
        impliedStrict: false
    }
}
```

A short usage sketch showing the `comment` and `tokens` options in action appears after the compatibility notes below.

## Esprima Compatibility Going Forward

The primary goal is to produce the exact same AST structure and tokens as Esprima, and that takes precedence over anything else. (The AST structure being the [ESTree](https://github.com/estree/estree) API with JSX extensions.)

Separate from that, Espree may deviate from what Esprima outputs in terms of where and how comments are attached, as well as what additional information is available on AST nodes. That is to say, Espree may add more things to the AST nodes than Esprima does but the overall AST structure produced will be the same.

Espree may also deviate from Esprima in the interface it exposes.

## Contributing

Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/espree/issues).

Espree is licensed under a permissive BSD 2-clause license.

## Security Policy

We work hard to ensure that Espree is safe for everyone and that security issues are addressed quickly and responsibly. Read the full [security policy](https://github.com/eslint/.github/blob/master/SECURITY.md).

## Build Commands

* `npm test` - run all linting and tests
* `npm run lint` - run all linting
* `npm run browserify` - creates a version of Espree that is usable in a browser

## Differences from Espree 2.x

* The `tokenize()` method does not use `ecmaFeatures`. Any string will be tokenized completely based on ECMAScript 6 semantics.
* Trailing whitespace no longer is counted as part of a node.
* `let` and `const` declarations are no longer parsed by default. You must opt-in by using an `ecmaVersion` newer than `5` or setting `sourceType` to `module`.
* The `esparse` and `esvalidate` binary scripts have been removed.
* There is no `tolerant` option. We will investigate adding this back in the future.

## Known Incompatibilities

In an effort to help those wanting to transition from other parsers to Espree, the following is a list of noteworthy incompatibilities with other parsers. These are known differences that we do not intend to change.

### Esprima 1.2.2

* Esprima counts trailing whitespace as part of each AST node while Espree does not. In Espree, the end of a node is where the last token occurs.
* Espree does not parse `let` and `const` declarations by default.
* Error messages returned for parsing errors are different.
* There are two additional properties on every node and token: `start` and `end`. These represent the same data as `range` and are used internally by Acorn.

### Esprima 2.x

* Esprima 2.x uses a different comment attachment algorithm that results in some comments being added in different places than Espree. The algorithm Espree uses is the same one used in Esprima 1.2.2.
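As a small illustration of the `comment` and `tokens` options from the Options section above, here is a minimal sketch (the source string is made up for the example) showing that the returned AST carries top-level `comments` and `tokens` arrays when those options are enabled:

```js
// Minimal sketch: parse a hypothetical snippet with comment/token collection enabled.
const espree = require("espree");

const ast = espree.parse('// greeting\nlet foo = "bar";', {
    ecmaVersion: 6,
    loc: true,      // attach line/column locations to each node
    comment: true,  // collect comments into ast.comments
    tokens: true    // collect tokens into ast.tokens
});

console.log(ast.comments.length); // 1 (the line comment)
console.log(ast.tokens.length);   // the tokens of `let foo = "bar";`
```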
## Frequently Asked Questions ### Why another parser [ESLint](http://eslint.org) had been relying on Esprima as its parser from the beginning. While that was fine when the JavaScript language was evolving slowly, the pace of development increased dramatically and Esprima had fallen behind. ESLint, like many other tools reliant on Esprima, has been stuck in using new JavaScript language features until Esprima updates, and that caused our users frustration. We decided the only way for us to move forward was to create our own parser, bringing us inline with JSHint and JSLint, and allowing us to keep implementing new features as we need them. We chose to fork Esprima instead of starting from scratch in order to move as quickly as possible with a compatible API. With Espree 2.0.0, we are no longer a fork of Esprima but rather a translation layer between Acorn and Esprima syntax. This allows us to put work back into a community-supported parser (Acorn) that is continuing to grow and evolve while maintaining an Esprima-compatible parser for those utilities still built on Esprima. ### Have you tried working with Esprima? Yes. Since the start of ESLint, we've regularly filed bugs and feature requests with Esprima and will continue to do so. However, there are some different philosophies around how the projects work that need to be worked through. The initial goal was to have Espree track Esprima and eventually merge the two back together, but we ultimately decided that building on top of Acorn was a better choice due to Acorn's plugin support. ### Why don't you just use Acorn? Acorn is a great JavaScript parser that produces an AST that is compatible with Esprima. Unfortunately, ESLint relies on more than just the AST to do its job. It relies on Esprima's tokens and comment attachment features to get a complete picture of the source code. We investigated switching to Acorn, but the inconsistencies between Esprima and Acorn created too much work for a project like ESLint. We are building on top of Acorn, however, so that we can contribute back and help make Acorn even better. ### What ECMAScript features do you support? Espree supports all ECMAScript 2020 features and partially supports ECMAScript 2021 features. Because ECMAScript 2021 is still under development, we are implementing features as they are finalized. Currently, Espree supports: * [Logical Assignment Operators](https://github.com/tc39/proposal-logical-assignment) * [Numeric Separators](https://github.com/tc39/proposal-numeric-separator) See [finished-proposals.md](https://github.com/tc39/proposals/blob/master/finished-proposals.md) to know what features are finalized. ### How do you determine which experimental features to support? In general, we do not support experimental JavaScript features. We may make exceptions from time to time depending on the maturity of the features. <img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/images/ajv_logo.png"> # Ajv: Another JSON Schema Validator The fastest JSON Schema validator for Node.js and browser. Supports draft-04/06/07. 
[![Build Status](https://travis-ci.org/ajv-validator/ajv.svg?branch=master)](https://travis-ci.org/ajv-validator/ajv) [![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv) [![npm (beta)](https://img.shields.io/npm/v/ajv/beta)](https://www.npmjs.com/package/ajv/v/7.0.0-beta.0) [![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv) [![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master) [![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv) [![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin) ## Ajv v7 beta is released [Ajv version 7.0.0-beta.0](https://github.com/ajv-validator/ajv/tree/v7-beta) is released with these changes: - to reduce the mistakes in JSON schemas and unexpected validation results, [strict mode](./docs/strict-mode.md) is added - it prohibits ignored or ambiguous JSON Schema elements. - to make code injection from untrusted schemas impossible, [code generation](./docs/codegen.md) is fully re-written to be safe. - to simplify Ajv extensions, the new keyword API that is used by pre-defined keywords is available to user-defined keywords - it is much easier to define any keywords now, especially with subschemas. - schemas are compiled to ES6 code (ES5 code generation is supported with an option). - to improve reliability and maintainability the code is migrated to TypeScript. **Please note**: - the support for JSON-Schema draft-04 is removed - if you have schemas using "id" attributes you have to replace them with "\$id" (or continue using version 6 that will be supported until 02/28/2021). - all formats are separated to ajv-formats package - they have to be explicitely added if you use them. See [release notes](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0) for the details. To install the new version: ```bash npm install ajv@beta ``` See [Getting started with v7](https://github.com/ajv-validator/ajv/tree/v7-beta#usage) for code example. ## Mozilla MOSS grant and OpenJS Foundation [<img src="https://www.poberezkin.com/images/mozilla.png" width="240" height="68">](https://www.mozilla.org/en-US/moss/) &nbsp;&nbsp;&nbsp; [<img src="https://www.poberezkin.com/images/openjs.png" width="220" height="68">](https://openjsf.org/blog/2020/08/14/ajv-joins-openjs-foundation-as-an-incubation-project/) Ajv has been awarded a grant from Mozilla’s [Open Source Support (MOSS) program](https://www.mozilla.org/en-US/moss/) in the “Foundational Technology” track! It will sponsor the development of Ajv support of [JSON Schema version 2019-09](https://tools.ietf.org/html/draft-handrews-json-schema-02) and of [JSON Type Definition](https://tools.ietf.org/html/draft-ucarion-json-type-definition-04). Ajv also joined [OpenJS Foundation](https://openjsf.org/) – having this support will help ensure the longevity and stability of Ajv for all its users. This [blog post](https://www.poberezkin.com/posts/2020-08-14-ajv-json-validator-mozilla-open-source-grant-openjs-foundation.html) has more details. I am looking for the long term maintainers of Ajv – working with [ReadySet](https://www.thereadyset.co/), also sponsored by Mozilla, to establish clear guidelines for the role of a "maintainer" and the contribution standards, and to encourage a wider, more inclusive, contribution from the community. 
## Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin) Since I asked to support Ajv development 40 people and 6 organizations contributed via GitHub and OpenCollective - this support helped receiving the MOSS grant! Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released. Please sponsor Ajv via: - [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it) - [Ajv Open Collective️](https://opencollective.com/ajv) Thank you. #### Open Collective sponsors <a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a> <a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a> ## Using version 6 [JSON Schema draft-07](http://json-schema.org/latest/json-schema-validation.html) is published. [Ajv version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0) that supports draft-07 is released. It may require either migrating your schemas or updating your code (to continue using draft-04 and v5 schemas, draft-06 schemas will be supported without changes). 
__Please note__: To use Ajv with draft-06 schemas you need to explicitly add the meta-schema to the validator instance: ```javascript ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-06.json')); ``` To use Ajv with draft-04 schemas in addition to explicitly adding meta-schema you also need to use option schemaId: ```javascript var ajv = new Ajv({schemaId: 'id'}); // If you want to use both draft-04 and draft-06/07 schemas: // var ajv = new Ajv({schemaId: 'auto'}); ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-04.json')); ``` ## Contents - [Performance](#performance) - [Features](#features) - [Getting started](#getting-started) - [Frequently Asked Questions](https://github.com/ajv-validator/ajv/blob/master/FAQ.md) - [Using in browser](#using-in-browser) - [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp) - [Command line interface](#command-line-interface) - Validation - [Keywords](#validation-keywords) - [Annotation keywords](#annotation-keywords) - [Formats](#formats) - [Combining schemas with $ref](#ref) - [$data reference](#data-reference) - NEW: [$merge and $patch keywords](#merge-and-patch-keywords) - [Defining custom keywords](#defining-custom-keywords) - [Asynchronous schema compilation](#asynchronous-schema-compilation) - [Asynchronous validation](#asynchronous-validation) - [Security considerations](#security-considerations) - [Security contact](#security-contact) - [Untrusted schemas](#untrusted-schemas) - [Circular references in objects](#circular-references-in-javascript-objects) - [Trusted schemas](#security-risks-of-trusted-schemas) - [ReDoS attack](#redos-attack) - Modifying data during validation - [Filtering data](#filtering-data) - [Assigning defaults](#assigning-defaults) - [Coercing data types](#coercing-data-types) - API - [Methods](#api) - [Options](#options) - [Validation errors](#validation-errors) - [Plugins](#plugins) - [Related packages](#related-packages) - [Some packages using Ajv](#some-packages-using-ajv) - [Tests, Contributing, Changes history](#tests) - [Support, Code of conduct, License](#open-source-software-support) ## Performance Ajv generates code using [doT templates](https://github.com/olado/doT) to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization. 
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=32,4,1&chs=600x416&chxl=-1:|djv|ajv|json-schema-validator-generator|jsen|is-my-json-valid|themis|z-schema|jsck|skeemas|json-schema-library|tv4&chd=t:100,98,72.1,66.8,50.1,15.1,6.1,3.8,1.2,0.7,0.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements full JSON Schema [draft-06/07](http://json-schema.org/) and draft-04 standards: - all validation keywords (see [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md)) - full support of remote refs (remote schemas have to be added with `addSchema` or compiled to be available) - support of circular references between schemas - correct string lengths for strings with unicode pairs (can be turned off) - [formats](#formats) defined by JSON Schema draft-07 standard and custom formats (can be turned off) - [validates schemas against meta-schema](#api-validateschema) - supports [browsers](#using-in-browser) and Node.js 0.10-14.x - [asynchronous loading](#asynchronous-schema-compilation) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](#options) - [error messages with parameters](#validation-errors) describing error reasons to allow creating custom error messages - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [filtering data](#filtering-data) from additional properties - [assigning defaults](#assigning-defaults) to missing properties and items - [coercing data](#coercing-data-types) to the types specified in `type` keywords - [custom keywords](#defining-custom-keywords) - draft-06/07 keywords `const`, `contains`, `propertyNames` and `if/then/else` - draft-06 boolean schemas (`true`/`false` as a schema to always pass/fail). - keywords `switch`, `patternRequired`, `formatMaximum` / `formatMinimum` and `formatExclusiveMaximum` / `formatExclusiveMinimum` from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [$data reference](#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](#asynchronous-validation) of custom formats and keywords ## Install ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://tonicdev.com/npm/ajv The fastest validation call: ```javascript // Node.js require: var Ajv = require('ajv'); // or ESM/TypeScript import import Ajv from 'ajv'; var ajv = new Ajv(); // options can be passed, e.g. {allErrors: true} var validate = ajv.compile(schema); var valid = validate(data); if (!valid) console.log(validate.errors); ``` or with less code ```javascript // ... var valid = ajv.validate(schema, data); if (!valid) console.log(ajv.errors); // ... 
```
or
```javascript
// ...
var valid = ajv.addSchema(schema, 'mySchema')
              .validate('mySchema', data);
if (!valid) console.log(ajv.errorsText());
// ...
```

See [API](#api) and [Options](#options) for more details.

Ajv compiles schemas to functions and caches them in all cases (using the schema serialized with [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) or a custom function as a key), so that the next time the same schema is used (not necessarily the same object instance) it won't be compiled again.

The best performance is achieved when using compiled functions returned by the `compile` or `getSchema` methods (there is no additional function call).

__Please note__: every time a validation function or `ajv.validate` is called, the `errors` property is overwritten. You need to copy the `errors` array reference to another variable if you want to use it later (e.g., in a callback). See [Validation errors](#validation-errors).

__Note for TypeScript users__: `ajv` provides its own TypeScript declarations out of the box, so you don't need to install the deprecated `@types/ajv` module.

## Using in browser

You can require Ajv directly from the code you browserify - in this case Ajv will be a part of your bundle.

If you need to use Ajv in several bundles you can create a separate UMD bundle using the `npm run bundle` script (thanks to [siddo420](https://github.com/siddo420)).

Then you need to load Ajv in the browser:
```html
<script src="ajv.min.js"></script>
```

This bundle can be used with different module systems; it creates global `Ajv` if no module system is found.

The browser bundle is available on [cdnjs](https://cdnjs.com/libraries/ajv).

Ajv is tested with these browsers:

[![Sauce Test Status](https://saucelabs.com/browser-matrix/epoberezkin.svg)](https://saucelabs.com/u/epoberezkin)

__Please note__: some frameworks, e.g. Dojo, may redefine global require in such a way that it is not compatible with the CommonJS module format. In such a case the Ajv bundle has to be loaded before the framework and then you can use global `Ajv` (see issue [#234](https://github.com/ajv-validator/ajv/issues/234)).

### Ajv and Content Security Policies (CSP)

If you're using Ajv to compile a schema (the typical use) in a browser document that is loaded with a Content Security Policy (CSP), that policy will require a `script-src` directive that includes the value `'unsafe-eval'`.

:warning: NOTE, however, that `unsafe-eval` is NOT recommended in a secure CSP[[1]](https://developer.chrome.com/extensions/contentSecurityPolicy#relaxing-eval), as it has the potential to open the document to cross-site scripting (XSS) attacks.

In order to make use of Ajv without relaxing your CSP, you can [pre-compile a schema using the CLI](https://github.com/ajv-validator/ajv-cli#compile-schemas). This will transpile the schema JSON into a JavaScript file that exports a `validate` function that works similarly to a schema compiled at runtime.

Note that pre-compilation of schemas is performed using [ajv-pack](https://github.com/ajv-validator/ajv-pack) and there are [some limitations to the schema features it can compile](https://github.com/ajv-validator/ajv-pack#limitations). A successfully pre-compiled schema is equivalent to the same schema compiled at runtime.

## Command line interface

The CLI is available as a separate npm package [ajv-cli](https://github.com/ajv-validator/ajv-cli).
It supports: - compiling JSON Schemas to test their validity - BETA: generating standalone module exporting a validation function to be used without Ajv (using [ajv-pack](https://github.com/ajv-validator/ajv-pack)) - migrate schemas to draft-07 (using [json-schema-migrate](https://github.com/epoberezkin/json-schema-migrate)) - validating data file(s) against JSON Schema - testing expected validity of data against JSON Schema - referenced schemas - custom meta-schemas - files in JSON, JSON5, YAML, and JavaScript format - all Ajv options - reporting changes in data after validation in [JSON-patch](https://tools.ietf.org/html/rfc6902) format ## Validation keywords Ajv supports all validation keywords from draft-07 of JSON Schema standard: - [type](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#type) - [for numbers](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-numbers) - maximum, minimum, exclusiveMaximum, exclusiveMinimum, multipleOf - [for strings](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-strings) - maxLength, minLength, pattern, format - [for arrays](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-arrays) - maxItems, minItems, uniqueItems, items, additionalItems, [contains](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#contains) - [for objects](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-objects) - maxProperties, minProperties, required, properties, patternProperties, additionalProperties, dependencies, [propertyNames](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#propertynames) - [for all types](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-all-types) - enum, [const](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#const) - [compound keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#compound-keywords) - not, oneOf, anyOf, allOf, [if/then/else](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#ifthenelse) With [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package Ajv also supports validation keywords from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) for JSON Schema standard: - [patternRequired](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#patternrequired-proposed) - like `required` but with patterns that some property should match. - [formatMaximum, formatMinimum, formatExclusiveMaximum, formatExclusiveMinimum](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#formatmaximum--formatminimum-and-exclusiveformatmaximum--exclusiveformatminimum-proposed) - setting limits for date, time, etc. See [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md) for more details. ## Annotation keywords JSON Schema specification defines several annotation keywords that describe schema itself but do not perform any validation. - `title` and `description`: information about the data represented by that schema - `$comment` (NEW in draft-07): information for developers. With option `$comment` Ajv logs or passes the comment string to the user-supplied function. See [Options](#options). - `default`: a default value of the data instance, see [Assigning defaults](#assigning-defaults). - `examples` (NEW in draft-06): an array of data instances. Ajv does not check the validity of these instances against the schema. 
- `readOnly` and `writeOnly` (NEW in draft-07): marks data-instance as read-only or write-only in relation to the source of the data (database, api, etc.). - `contentEncoding`: [RFC 2045](https://tools.ietf.org/html/rfc2045#section-6.1 ), e.g., "base64". - `contentMediaType`: [RFC 2046](https://tools.ietf.org/html/rfc2046), e.g., "image/png". __Please note__: Ajv does not implement validation of the keywords `examples`, `contentEncoding` and `contentMediaType` but it reserves them. If you want to create a plugin that implements some of them, it should remove these keywords from the instance. ## Formats Ajv implements formats defined by JSON Schema specification and several other formats. It is recommended NOT to use "format" keyword implementations with untrusted data, as they use potentially unsafe regular expressions - see [ReDoS attack](#redos-attack). __Please note__: if you need to use "format" keyword to validate untrusted data, you MUST assess their suitability and safety for your validation scenarios. The following formats are implemented for string validation with "format" keyword: - _date_: full-date according to [RFC3339](http://tools.ietf.org/html/rfc3339#section-5.6). - _time_: time with optional time-zone. - _date-time_: date-time from the same source (time-zone is mandatory). `date`, `time` and `date-time` validate ranges in `full` mode and only regexp in `fast` mode (see [options](#options)). - _uri_: full URI. - _uri-reference_: URI reference, including full and relative URIs. - _uri-template_: URI template according to [RFC6570](https://tools.ietf.org/html/rfc6570) - _url_ (deprecated): [URL record](https://url.spec.whatwg.org/#concept-url). - _email_: email address. - _hostname_: host name according to [RFC1034](http://tools.ietf.org/html/rfc1034#section-3.5). - _ipv4_: IP address v4. - _ipv6_: IP address v6. - _regex_: tests whether a string is a valid regular expression by passing it to RegExp constructor. - _uuid_: Universally Unique IDentifier according to [RFC4122](http://tools.ietf.org/html/rfc4122). - _json-pointer_: JSON-pointer according to [RFC6901](https://tools.ietf.org/html/rfc6901). - _relative-json-pointer_: relative JSON-pointer according to [this draft](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00). __Please note__: JSON Schema draft-07 also defines formats `iri`, `iri-reference`, `idn-hostname` and `idn-email` for URLs, hostnames and emails with international characters. Ajv does not implement these formats. If you create Ajv plugin that implements them please make a PR to mention this plugin here. There are two modes of format validation: `fast` and `full`. This mode affects formats `date`, `time`, `date-time`, `uri`, `uri-reference`, and `email`. See [Options](#options) for details. You can add additional formats and replace any of the formats above using [addFormat](#api-addformat) method. The option `unknownFormats` allows changing the default behaviour when an unknown format is encountered. In this case Ajv can either fail schema compilation (default) or ignore it (default in versions before 5.0.0). You also can allow specific format(s) that will be ignored. See [Options](#options) for details. You can find regular expressions used for format validation and the sources that were used in [formats.js](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js). 
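For example, a custom format can be registered with `addFormat` using either a RegExp or a function. This is an illustrative sketch only - the format names and patterns below are made up for this example and are not built into Ajv:

```javascript
var Ajv = require('ajv');
var ajv = new Ajv();

// a custom format defined with a regular expression
ajv.addFormat('identifier', /^[a-z_$][a-z0-9_$]*$/i);

// a custom format defined with a function returning true/false
ajv.addFormat('lowercase', function (str) {
  return str === str.toLowerCase();
});

var validate = ajv.compile({ "type": "string", "format": "identifier" });
console.log(validate('myVar'));  // true
console.log(validate('1stVar')); // false
```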
## <a name="ref"></a>Combining schemas with $ref You can structure your validation logic across multiple schema files and have schemas reference each other using `$ref` keyword. Example: ```javascript var schema = { "$id": "http://example.com/schemas/schema.json", "type": "object", "properties": { "foo": { "$ref": "defs.json#/definitions/int" }, "bar": { "$ref": "defs.json#/definitions/str" } } }; var defsSchema = { "$id": "http://example.com/schemas/defs.json", "definitions": { "int": { "type": "integer" }, "str": { "type": "string" } } }; ``` Now to compile your schema you can either pass all schemas to Ajv instance: ```javascript var ajv = new Ajv({schemas: [schema, defsSchema]}); var validate = ajv.getSchema('http://example.com/schemas/schema.json'); ``` or use `addSchema` method: ```javascript var ajv = new Ajv; var validate = ajv.addSchema(defsSchema) .compile(schema); ``` See [Options](#options) and [addSchema](#api) method. __Please note__: - `$ref` is resolved as the uri-reference using schema $id as the base URI (see the example). - References can be recursive (and mutually recursive) to implement the schemas for different data structures (such as linked lists, trees, graphs, etc.). - You don't have to host your schema files at the URIs that you use as schema $id. These URIs are only used to identify the schemas, and according to JSON Schema specification validators should not expect to be able to download the schemas from these URIs. - The actual location of the schema file in the file system is not used. - You can pass the identifier of the schema as the second parameter of `addSchema` method or as a property name in `schemas` option. This identifier can be used instead of (or in addition to) schema $id. - You cannot have the same $id (or the schema identifier) used for more than one schema - the exception will be thrown. - You can implement dynamic resolution of the referenced schemas using `compileAsync` method. In this way you can store schemas in any system (files, web, database, etc.) and reference them without explicitly adding to Ajv instance. See [Asynchronous schema compilation](#asynchronous-schema-compilation). ## $data reference With `$data` option you can use values from the validated data as the values for the schema keywords. See [proposal](https://github.com/json-schema-org/json-schema-spec/issues/51) for more information about how it works. `$data` reference is supported in the keywords: const, enum, format, maximum/minimum, exclusiveMaximum / exclusiveMinimum, maxLength / minLength, maxItems / minItems, maxProperties / minProperties, formatMaximum / formatMinimum, formatExclusiveMaximum / formatExclusiveMinimum, multipleOf, pattern, required, uniqueItems. The value of "$data" should be a [JSON-pointer](https://tools.ietf.org/html/rfc6901) to the data (the root is always the top level data object, even if the $data reference is inside a referenced subschema) or a [relative JSON-pointer](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00) (it is relative to the current point in data; if the $data reference is inside a referenced subschema it cannot point to the data outside of the root level for this subschema). Examples. 
This schema requires that the value in the property `smaller` is less than or equal to the value in the property `larger`:

```javascript
var ajv = new Ajv({$data: true});

var schema = {
  "properties": {
    "smaller": {
      "type": "number",
      "maximum": { "$data": "1/larger" }
    },
    "larger": { "type": "number" }
  }
};

var validData = {
  smaller: 5,
  larger: 7
};

ajv.validate(schema, validData); // true
```

This schema requires that the properties have the same format as their field names:

```javascript
var schema = {
  "additionalProperties": {
    "type": "string",
    "format": { "$data": "0#" }
  }
};

var validData = {
  'date-time': '1963-06-19T08:30:06.283185Z',
  email: 'joe.bloggs@example.com'
};
```

`$data` reference is resolved safely - it won't throw even if some property is undefined. If `$data` resolves to `undefined` the validation succeeds (with the exception of the `const` keyword). If `$data` resolves to an incorrect type (e.g. not "number" for the `maximum` keyword) the validation fails.

## $merge and $patch keywords

With the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) you can use the keywords `$merge` and `$patch` that allow extending JSON Schemas with patches using formats [JSON Merge Patch (RFC 7396)](https://tools.ietf.org/html/rfc7396) and [JSON Patch (RFC 6902)](https://tools.ietf.org/html/rfc6902).

To add keywords `$merge` and `$patch` to the Ajv instance use this code:

```javascript
require('ajv-merge-patch')(ajv);
```

Examples.

Using `$merge`:

```json
{
  "$merge": {
    "source": {
      "type": "object",
      "properties": { "p": { "type": "string" } },
      "additionalProperties": false
    },
    "with": {
      "properties": { "q": { "type": "number" } }
    }
  }
}
```

Using `$patch`:

```json
{
  "$patch": {
    "source": {
      "type": "object",
      "properties": { "p": { "type": "string" } },
      "additionalProperties": false
    },
    "with": [
      { "op": "add", "path": "/properties/q", "value": { "type": "number" } }
    ]
  }
}
```

The schemas above are equivalent to this schema:

```json
{
  "type": "object",
  "properties": {
    "p": { "type": "string" },
    "q": { "type": "number" }
  },
  "additionalProperties": false
}
```

The properties `source` and `with` in the keywords `$merge` and `$patch` can use absolute or relative `$ref` to point to other schemas previously added to the Ajv instance or to fragments of the current schema.

See the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) for more information.

## Defining custom keywords

The advantages of using custom keywords are:

- allow creating validation scenarios that cannot be expressed using JSON Schema
- simplify your schemas
- help bringing a bigger part of the validation logic to your schemas
- make your schemas more expressive, less verbose and closer to your application domain
- implement custom data processors that modify your data (the `modifying` option MUST be used in the keyword definition) and/or create side effects while the data is being validated

If a keyword is used only for side-effects and its validation result is pre-defined, use the option `valid: true/false` in the keyword definition to simplify both the generated code (no error handling in case of `valid: true`) and your keyword functions (no need to return any validation result).

The concerns you have to be aware of when extending the JSON Schema standard with custom keywords are the portability and understanding of your schemas. You will have to support these custom keywords on other platforms and to properly document them so that everybody can understand them in your schemas.
You can define custom keywords with [addKeyword](#api-addkeyword) method. Keywords are defined on the `ajv` instance level - new instances will not have previously defined keywords. Ajv allows defining keywords with: - validation function - compilation function - macro function - inline compilation function that should return code (as string) that will be inlined in the currently compiled schema. Example. `range` and `exclusiveRange` keywords using compiled schema: ```javascript ajv.addKeyword('range', { type: 'number', compile: function (sch, parentSchema) { var min = sch[0]; var max = sch[1]; return parentSchema.exclusiveRange === true ? function (data) { return data > min && data < max; } : function (data) { return data >= min && data <= max; } } }); var schema = { "range": [2, 4], "exclusiveRange": true }; var validate = ajv.compile(schema); console.log(validate(2.01)); // true console.log(validate(3.99)); // true console.log(validate(2)); // false console.log(validate(4)); // false ``` Several custom keywords (typeof, instanceof, range and propertyNames) are defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - they can be used for your schemas and as a starting point for your own custom keywords. See [Defining custom keywords](https://github.com/ajv-validator/ajv/blob/master/CUSTOM.md) for more details. ## Asynchronous schema compilation During asynchronous compilation remote references are loaded using supplied function. See `compileAsync` [method](#api-compileAsync) and `loadSchema` [option](#options). Example: ```javascript var ajv = new Ajv({ loadSchema: loadSchema }); ajv.compileAsync(schema).then(function (validate) { var valid = validate(data); // ... }); function loadSchema(uri) { return request.json(uri).then(function (res) { if (res.statusCode >= 400) throw new Error('Loading error: ' + res.statusCode); return res.body; }); } ``` __Please note__: [Option](#options) `missingRefs` should NOT be set to `"ignore"` or `"fail"` for asynchronous compilation to work. ## Asynchronous validation Example in Node.js REPL: https://tonicdev.com/esp/ajv-asynchronous-validation You can define custom formats and keywords that perform validation asynchronously by accessing database or some other service. You should add `async: true` in the keyword or format definition (see [addFormat](#api-addformat), [addKeyword](#api-addkeyword) and [Defining custom keywords](#defining-custom-keywords)). If your schema uses asynchronous formats/keywords or refers to some schema that contains them it should have `"$async": true` keyword so that Ajv can compile it correctly. If asynchronous format/keyword or reference to asynchronous schema is used in the schema without `$async` keyword Ajv will throw an exception during schema compilation. __Please note__: all asynchronous subschemas that are referenced from the current or other schemas should have `"$async": true` keyword as well, otherwise the schema compilation will fail. Validation function for an asynchronous custom format/keyword should return a promise that resolves with `true` or `false` (or rejects with `new Ajv.ValidationError(errors)` if you want to return custom errors from the keyword function). Ajv compiles asynchronous schemas to [es7 async functions](http://tc39.github.io/ecmascript-asyncawait/) that can optionally be transpiled with [nodent](https://github.com/MatAtBread/nodent). Async functions are supported in Node.js 7+ and all modern browsers. 
You can also supply any other transpiler as a function via `processCode` option. See [Options](#options). The compiled validation function has `$async: true` property (if the schema is asynchronous), so you can differentiate these functions if you are using both synchronous and asynchronous schemas. Validation result will be a promise that resolves with validated data or rejects with an exception `Ajv.ValidationError` that contains the array of validation errors in `errors` property. Example: ```javascript var ajv = new Ajv; // require('ajv-async')(ajv); ajv.addKeyword('idExists', { async: true, type: 'number', validate: checkIdExists }); function checkIdExists(schema, data) { return knex(schema.table) .select('id') .where('id', data) .then(function (rows) { return !!rows.length; // true if record is found }); } var schema = { "$async": true, "properties": { "userId": { "type": "integer", "idExists": { "table": "users" } }, "postId": { "type": "integer", "idExists": { "table": "posts" } } } }; var validate = ajv.compile(schema); validate({ userId: 1, postId: 19 }) .then(function (data) { console.log('Data is valid', data); // { userId: 1, postId: 19 } }) .catch(function (err) { if (!(err instanceof Ajv.ValidationError)) throw err; // data is invalid console.log('Validation errors:', err.errors); }); ``` ### Using transpilers with asynchronous validation functions. [ajv-async](https://github.com/ajv-validator/ajv-async) uses [nodent](https://github.com/MatAtBread/nodent) to transpile async functions. To use another transpiler you should separately install it (or load its bundle in the browser). #### Using nodent ```javascript var ajv = new Ajv; require('ajv-async')(ajv); // in the browser if you want to load ajv-async bundle separately you can: // window.ajvAsync(ajv); var validate = ajv.compile(schema); // transpiled es7 async function validate(data).then(successFunc).catch(errorFunc); ``` #### Using other transpilers ```javascript var ajv = new Ajv({ processCode: transpileFunc }); var validate = ajv.compile(schema); // transpiled es7 async function validate(data).then(successFunc).catch(errorFunc); ``` See [Options](#options). ## Security considerations JSON Schema, if properly used, can replace data sanitisation. It doesn't replace other API security considerations. It also introduces additional security aspects to consider. ##### Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ##### Untrusted schemas Ajv treats JSON schemas as trusted as your application code. This security model is based on the most common use case, when the schemas are static and bundled together with the application. If your schemas are received from untrusted sources (or generated from untrusted data) there are several scenarios you need to prevent: - compiling schemas can cause stack overflow (if they are too deep) - compiling schemas can be slow (e.g. [#557](https://github.com/ajv-validator/ajv/issues/557)) - validating certain data can be slow It is difficult to predict all the scenarios, but at the very least it may help to limit the size of untrusted schemas (e.g. limit JSON string length) and also the maximum schema object depth (that can be high for relatively small JSON strings). You also may want to mitigate slow regular expressions in `pattern` and `patternProperties` keywords. 
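As an illustration only (this guard is not part of Ajv; the limits and helper names below are made up for this sketch), such pre-checks before compiling an untrusted schema could look like:

```javascript
// Arbitrary example limits - choose values that fit your use case.
var MAX_SCHEMA_LENGTH = 10000; // limit on the serialized schema size
var MAX_SCHEMA_DEPTH = 20;     // limit on schema object nesting depth

function schemaDepth(node) {
  if (node === null || typeof node !== 'object') return 0;
  var max = 0;
  for (var key in node) {
    var d = schemaDepth(node[key]);
    if (d > max) max = d;
  }
  return max + 1;
}

function compileUntrusted(ajv, schemaJson) {
  if (schemaJson.length > MAX_SCHEMA_LENGTH) throw new Error('schema too large');
  var schema = JSON.parse(schemaJson);
  if (schemaDepth(schema) > MAX_SCHEMA_DEPTH) throw new Error('schema too deep');
  return ajv.compile(schema); // compilation can still be slow - see the notes above
}
```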
Regardless of the measures you take, using untrusted schemas increases security risks.

##### Circular references in JavaScript objects

Ajv does not support schemas and validated data that have circular references in objects. See [issue #802](https://github.com/ajv-validator/ajv/issues/802).

An attempt to compile such schemas or validate such data would cause stack overflow (or will not complete in case of asynchronous validation). Depending on the parser you use, untrusted data can lead to circular references.

##### Security risks of trusted schemas

Some keywords in JSON Schemas can lead to very slow validation for certain data. These keywords include (but may not be limited to):

- `pattern` and `format` for large strings - in some cases using `maxLength` can help mitigate it, but certain regular expressions can lead to exponential validation time even with relatively short strings (see [ReDoS attack](#redos-attack)).
- `patternProperties` for large property names - use `propertyNames` to mitigate, but some regular expressions can have exponential evaluation time as well.
- `uniqueItems` for large non-scalar arrays - use `maxItems` to mitigate

__Please note__: The suggestions above to prevent slow validation would only work if you do NOT use `allErrors: true` in production code (using it would continue validation after validation errors).

You can validate your JSON schemas against [this meta-schema](https://github.com/ajv-validator/ajv/blob/master/lib/refs/json-schema-secure.json) to check that these recommendations are followed:

```javascript
const isSchemaSecure = ajv.compile(require('ajv/lib/refs/json-schema-secure.json'));

const schema1 = {format: 'email'};
isSchemaSecure(schema1); // false

const schema2 = {format: 'email', maxLength: MAX_LENGTH};
isSchemaSecure(schema2); // true
```

__Please note__: following all these recommendations is not a guarantee that validation of untrusted data is safe - it can still lead to some undesirable results.

##### Content Security Policies (CSP)

See [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp)

## ReDoS attack

Certain regular expressions can lead to exponential evaluation time even with relatively short strings.

Please assess the regular expressions you use in the schemas for their vulnerability to this attack - see [safe-regex](https://github.com/substack/safe-regex), for example.

__Please note__: some formats that Ajv implements use [regular expressions](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js) that can be vulnerable to ReDoS attack, so if you use Ajv to validate data from untrusted sources __it is strongly recommended__ to consider the following:

- making an assessment of "format" implementations in Ajv.
- using the `format: 'fast'` option that simplifies some of the regular expressions (although it does not guarantee that they are safe).
- replacing format implementations provided by Ajv with your own implementations of the "format" keyword that either use different regular expressions or another approach to format validation. Please see the [addFormat](#api-addformat) method.
- disabling format validation by ignoring the "format" keyword with the option `format: false`

Whatever mitigation you choose, please treat all formats provided by Ajv as potentially unsafe and make your own assessment of their suitability for your validation scenarios.

## Filtering data

With [option `removeAdditional`](#options) (added by [andyscott](https://github.com/andyscott)) you can filter data during the validation.
This option modifies original data.

Example:

```javascript
var ajv = new Ajv({ removeAdditional: true });
var schema = {
  "additionalProperties": false,
  "properties": {
    "foo": { "type": "number" },
    "bar": {
      "additionalProperties": { "type": "number" },
      "properties": {
        "baz": { "type": "string" }
      }
    }
  }
}

var data = {
  "foo": 0,
  "additional1": 1, // will be removed; `additionalProperties` == false
  "bar": {
    "baz": "abc",
    "additional2": 2 // will NOT be removed; `additionalProperties` != false
  },
}

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // { "foo": 0, "bar": { "baz": "abc", "additional2": 2 } }
```

If the `removeAdditional` option in the example above were `"all"` then both `additional1` and `additional2` properties would have been removed.

If the option were `"failing"` then the property `additional1` would have been removed regardless of its value and the property `additional2` would have been removed only if its value were failing the schema in the inner `additionalProperties` (so in the example above it would have stayed because it passes the schema, but any non-number would have been removed).

__Please note__: If you use the `removeAdditional` option with the `additionalProperties` keyword inside `anyOf`/`oneOf` keywords your validation can fail, for example with this schema:

```json
{
  "type": "object",
  "oneOf": [
    {
      "properties": { "foo": { "type": "string" } },
      "required": [ "foo" ],
      "additionalProperties": false
    },
    {
      "properties": { "bar": { "type": "integer" } },
      "required": [ "bar" ],
      "additionalProperties": false
    }
  ]
}
```

The intention of the schema above is to allow objects with either the string property "foo" or the integer property "bar", but not with both and not with any other properties.

With the option `removeAdditional: true` the validation will pass for the object `{ "foo": "abc"}` but will fail for the object `{"bar": 1}`. It happens because while the first subschema in `oneOf` is validated, the property `bar` is removed because it is an additional property according to the standard (because it is not included in the `properties` keyword in the same schema).

While this behaviour is unexpected (issues [#129](https://github.com/ajv-validator/ajv/issues/129), [#134](https://github.com/ajv-validator/ajv/issues/134)), it is correct. To have the expected behaviour (both objects are allowed and additional properties are removed) the schema has to be refactored in this way:

```json
{
  "type": "object",
  "properties": {
    "foo": { "type": "string" },
    "bar": { "type": "integer" }
  },
  "additionalProperties": false,
  "oneOf": [
    { "required": [ "foo" ] },
    { "required": [ "bar" ] }
  ]
}
```

The schema above is also more efficient - it will compile into a faster function.
Example 1 (`default` in `properties`):

```javascript
var ajv = new Ajv({ useDefaults: true });
var schema = {
  "type": "object",
  "properties": {
    "foo": { "type": "number" },
    "bar": { "type": "string", "default": "baz" }
  },
  "required": [ "foo", "bar" ]
};

var data = { "foo": 1 };

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // { "foo": 1, "bar": "baz" }
```

Example 2 (`default` in `items`):

```javascript
var schema = {
  "type": "array",
  "items": [
    { "type": "number" },
    { "type": "string", "default": "foo" }
  ]
};

var data = [ 1 ];

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // [ 1, "foo" ]
```

`default` keywords in other cases are ignored:

- not in `properties` or `items` subschemas
- in schemas inside `anyOf`, `oneOf` and `not` (see [#42](https://github.com/ajv-validator/ajv/issues/42))
- in the `if` subschema of the `switch` keyword
- in schemas generated by custom macro keywords

The [`strictDefaults` option](#options) customizes Ajv's behavior for the defaults that Ajv ignores (`true` raises an error, and `"log"` outputs a warning).

## Coercing data types

When you are validating user input, all your data properties are usually strings. The option `coerceTypes` allows you to have your data types coerced to the types specified in your schema `type` keywords, both to pass the validation and to use the correctly typed data afterwards.

This option modifies original data.

__Please note__: if you pass a scalar value to the validating function its type will be coerced and it will pass the validation, but the value of the variable you pass won't be updated because scalars are passed by value.

Example 1:

```javascript
var ajv = new Ajv({ coerceTypes: true });
var schema = {
  "type": "object",
  "properties": {
    "foo": { "type": "number" },
    "bar": { "type": "boolean" }
  },
  "required": [ "foo", "bar" ]
};

var data = { "foo": "1", "bar": "false" };

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // { "foo": 1, "bar": false }
```

Example 2 (array coercions):

```javascript
var ajv = new Ajv({ coerceTypes: 'array' });
var schema = {
  "properties": {
    "foo": { "type": "array", "items": { "type": "number" } },
    "bar": { "type": "boolean" }
  }
};

var data = { "foo": "1", "bar": ["false"] };

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // { "foo": [1], "bar": false }
```

The coercion rules, as you can see from the example, are different from JavaScript both to validate user input as expected and to have the coercion reversible (to correctly validate cases where different types are defined in subschemas of "anyOf" and other compound keywords).

See [Coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md) for details.

## API

##### new Ajv(Object options) -&gt; Object

Create Ajv instance.

##### .compile(Object schema) -&gt; Function&lt;Object data&gt;

Generate validating function and cache the compiled schema for future use.

Validating function returns a boolean value. This function has properties `errors` and `schema`. Errors encountered during the last validation are assigned to the `errors` property (it is assigned `null` if there were no errors). The `schema` property contains the reference to the original schema.

The schema passed to this method will be validated against the meta-schema unless the `validateSchema` option is false. If the schema is invalid, an error will be thrown. See [options](#options).
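For example (illustrative only), because the `errors` property is overwritten on each call, keep the reference to the array if you need the errors later:

```javascript
var validate = ajv.compile(schema);

if (!validate(data)) {
  // copy the reference - the next call to validate() assigns a new array
  var errors = validate.errors;
  // ... use `errors` later, e.g. pass them to a callback
}
```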
##### <a name="api-compileAsync"></a>.compileAsync(Object schema [, Boolean meta] [, Function callback]) -&gt; Promise Asynchronous version of `compile` method that loads missing remote schemas using asynchronous function in `options.loadSchema`. This function returns a Promise that resolves to a validation function. An optional callback passed to `compileAsync` will be called with 2 parameters: error (or null) and validating function. The returned promise will reject (and the callback will be called with an error) when: - missing schema can't be loaded (`loadSchema` returns a Promise that rejects). - a schema containing a missing reference is loaded, but the reference cannot be resolved. - schema (or some loaded/referenced schema) is invalid. The function compiles schema and loads the first missing schema (or meta-schema) until all missing schemas are loaded. You can asynchronously compile meta-schema by passing `true` as the second parameter. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### .validate(Object schema|String key|String ref, data) -&gt; Boolean Validate data using passed schema (it will be compiled and cached). Instead of the schema you can use the key that was previously passed to `addSchema`, the schema id if it was present in the schema or any previously resolved reference. Validation errors will be available in the `errors` property of Ajv instance (`null` if there were no errors). __Please note__: every time this method is called the errors are overwritten so you need to copy them to another variable if you want to use them later. If the schema is asynchronous (has `$async` keyword on the top level) this method returns a Promise. See [Asynchronous validation](#asynchronous-validation). ##### .addSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Add schema(s) to validator instance. This method does not compile schemas (but it still validates them). Because of that dependencies can be added in any order and circular dependencies are supported. It also prevents unnecessary compilation of schemas that are containers for other schemas but not used as a whole. Array of schemas can be passed (schemas should have ids), the second parameter will be ignored. Key can be passed that can be used to reference the schema and will be used as the schema id if there is no id inside the schema. If the key is not passed, the schema id will be used as the key. Once the schema is added, it (and all the references inside it) can be referenced in other schemas and used to validate data. Although `addSchema` does not compile schemas, explicit compilation is not required - the schema will be compiled when it is used first time. By default the schema is validated against meta-schema before it is added, and if the schema does not pass validation the exception is thrown. This behaviour is controlled by `validateSchema` option. __Please note__: Ajv uses the [method chaining syntax](https://en.wikipedia.org/wiki/Method_chaining) for all methods with the prefix `add*` and `remove*`. This allows you to do nice things like the following. ```javascript var validate = new Ajv().addSchema(schema).addFormat(name, regex).getSchema(uri); ``` ##### .addMetaSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Adds meta schema(s) that can be used to validate other schemas. 
This function should be used instead of `addSchema` because there may be instance options that would compile a meta schema incorrectly (at the moment it is the `removeAdditional` option).

There is no need to explicitly add draft-07 meta schema (http://json-schema.org/draft-07/schema) - it is added by default, unless option `meta` is set to `false`. You only need to use it if you have a changed meta-schema that you want to use to validate your schemas. See `validateSchema`.

##### <a name="api-validateschema"></a>.validateSchema(Object schema) -&gt; Boolean

Validates schema. This method should be used to validate schemas rather than `validate` due to the inconsistency of `uri` format in JSON Schema standard.

By default this method is called automatically when the schema is added, so you rarely need to use it directly.

If schema doesn't have `$schema` property, it is validated against draft 6 meta-schema (option `meta` should not be false).

If schema has `$schema` property, then the schema with this id (that should be previously added) is used to validate passed schema.

Errors will be available at `ajv.errors`.

##### .getSchema(String key) -&gt; Function&lt;Object data&gt;

Retrieve compiled schema previously added with `addSchema` by the key passed to `addSchema` or by its full reference (id). The returned validating function has a `schema` property with the reference to the original schema.

##### .removeSchema([Object schema|String key|String ref|RegExp pattern]) -&gt; Ajv

Remove added/cached schema. Even if schema is referenced by other schemas it can be safely removed as dependent schemas have local references.

Schema can be removed using:
- key passed to `addSchema`
- its full reference (id)
- RegExp that should match schema id or key (meta-schemas won't be removed)
- actual schema object that will be stable-stringified to remove schema from cache

If no parameter is passed all schemas but meta-schemas will be removed and the cache will be cleared.

##### <a name="api-addformat"></a>.addFormat(String name, String|RegExp|Function|Object format) -&gt; Ajv

Add custom format to validate strings or numbers. It can also be used to replace pre-defined formats for the Ajv instance.

Strings are converted to RegExp.

Function should return validation result as `true` or `false`.

If object is passed it should have properties `validate`, `compare` and `async`:

- _validate_: a string, RegExp or a function as described above.
- _compare_: an optional comparison function that accepts two strings and compares them according to the format meaning. This function is used with keywords `formatMaximum`/`formatMinimum` (defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package). It should return `1` if the first value is bigger than the second value, `-1` if it is smaller and `0` if it is equal.
- _async_: an optional `true` value if `validate` is an asynchronous function; in this case it should return a promise that resolves with a value `true` or `false`.
- _type_: an optional type of data that the format applies to. It can be `"string"` (default) or `"number"` (see https://github.com/ajv-validator/ajv/issues/291#issuecomment-259923858). If the type of data is different, the validation will pass.

Custom formats can also be added via the `formats` option.

##### <a name="api-addkeyword"></a>.addKeyword(String keyword, Object definition) -&gt; Ajv

Add custom validation keyword to Ajv instance.
Keyword should be different from all standard JSON Schema keywords and different from previously defined keywords. There is no way to redefine keywords or to remove keyword definition from the instance. Keyword must start with a letter, `_` or `$`, and may continue with letters, numbers, `_`, `$`, or `-`. It is recommended to use an application-specific prefix for keywords to avoid current and future name collisions. Example Keywords: - `"xyz-example"`: valid, and uses prefix for the xyz project to avoid name collisions. - `"example"`: valid, but not recommended as it could collide with future versions of JSON Schema etc. - `"3-example"`: invalid as numbers are not allowed to be the first character in a keyword Keyword definition is an object with the following properties: - _type_: optional string or array of strings with data type(s) that the keyword applies to. If not present, the keyword will apply to all types. - _validate_: validating function - _compile_: compiling function - _macro_: macro function - _inline_: compiling function that returns code (as string) - _schema_: an optional `false` value used with "validate" keyword to not pass schema - _metaSchema_: an optional meta-schema for keyword schema - _dependencies_: an optional list of properties that must be present in the parent schema - it will be checked during schema compilation - _modifying_: `true` MUST be passed if keyword modifies data - _statements_: `true` can be passed in case inline keyword generates statements (as opposed to expression) - _valid_: pass `true`/`false` to pre-define validation result, the result returned from validation function will be ignored. This option cannot be used with macro keywords. - _$data_: an optional `true` value to support [$data reference](#data-reference) as the value of custom keyword. The reference will be resolved at validation time. If the keyword has meta-schema it would be extended to allow $data and it will be used to validate the resolved value. Supporting $data reference requires that keyword has validating function (as the only option or in addition to compile, macro or inline function). - _async_: an optional `true` value if the validation function is asynchronous (whether it is compiled or passed in _validate_ property); in this case it should return a promise that resolves with a value `true` or `false`. This option is ignored in case of "macro" and "inline" keywords. - _errors_: an optional boolean or string `"full"` indicating whether keyword returns errors. If this property is not set Ajv will determine if the errors were set in case of failed validation. _compile_, _macro_ and _inline_ are mutually exclusive, only one should be used at a time. _validate_ can be used separately or in addition to them to support $data reference. __Please note__: If the keyword is validating data type that is different from the type(s) in its definition, the validation function will not be called (and expanded macro will not be used), so there is no need to check for data type inside validation function or inside schema returned by macro function (unless you want to enforce a specific type and for some reason do not want to use a separate `type` keyword for that). In the same way as standard keywords work, if the keyword does not apply to the data type being validated, the validation of this keyword will succeed. See [Defining custom keywords](#defining-custom-keywords) for more details. 
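For illustration, here is a minimal sketch of a macro keyword; the `even` keyword below is made up for this example and is not part of Ajv or ajv-keywords:

```javascript
// Hypothetical "even" keyword implemented as a macro - it expands into
// standard JSON Schema keywords before the schema is compiled.
ajv.addKeyword('even', {
  type: 'number',
  metaSchema: { type: 'boolean' },
  macro: function (schema) {
    return schema ? { multipleOf: 2 } : { not: { multipleOf: 2 } };
  }
});

var validate = ajv.compile({ "type": "number", "even": true });
console.log(validate(4)); // true
console.log(validate(5)); // false
```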
##### .getKeyword(String keyword) -&gt; Object|Boolean Returns custom keyword definition, `true` for pre-defined keywords and `false` if the keyword is unknown. ##### .removeKeyword(String keyword) -&gt; Ajv Removes custom or pre-defined keyword so you can redefine them. While this method can be used to extend pre-defined keywords, it can also be used to completely change their meaning - it may lead to unexpected results. __Please note__: schemas compiled before the keyword is removed will continue to work without changes. To recompile schemas use `removeSchema` method and compile them again. ##### .errorsText([Array&lt;Object&gt; errors [, Object options]]) -&gt; String Returns the text with all errors in a String. Options can have properties `separator` (string used to separate errors, ", " by default) and `dataVar` (the variable name that dataPaths are prefixed with, "data" by default). ## Options Defaults: ```javascript { // validation and reporting options: $data: false, allErrors: false, verbose: false, $comment: false, // NEW in Ajv version 6.0 jsonPointers: false, uniqueItems: true, unicode: true, nullable: false, format: 'fast', formats: {}, unknownFormats: true, schemas: {}, logger: undefined, // referenced schema options: schemaId: '$id', missingRefs: true, extendRefs: 'ignore', // recommended 'fail' loadSchema: undefined, // function(uri: string): Promise {} // options to modify validated data: removeAdditional: false, useDefaults: false, coerceTypes: false, // strict mode options strictDefaults: false, strictKeywords: false, strictNumbers: false, // asynchronous validation options: transpile: undefined, // requires ajv-async package // advanced options: meta: true, validateSchema: true, addUsedSchema: true, inlineRefs: true, passContext: false, loopRequired: Infinity, ownProperties: false, multipleOfPrecision: false, errorDataPath: 'object', // deprecated messages: true, sourceCode: false, processCode: undefined, // function (str: string, schema: object): string {} cache: new Cache, serialize: undefined } ``` ##### Validation and reporting options - _$data_: support [$data references](#data-reference). Draft 6 meta-schema that is added by default will be extended to allow them. If you want to use another meta-schema you need to use $dataMetaSchema method to add support for $data reference. See [API](#api). - _allErrors_: check all rules collecting all errors. Default is to return after the first error. - _verbose_: include the reference to the part of the schema (`schema` and `parentSchema`) and validated data in errors (false by default). - _$comment_ (NEW in Ajv version 6.0): log or pass the value of `$comment` keyword to a function. Option values: - `false` (default): ignore $comment keyword. - `true`: log the keyword value to console. - function: pass the keyword value, its schema path and root schema to the specified function - _jsonPointers_: set `dataPath` property of errors using [JSON Pointers](https://tools.ietf.org/html/rfc6901) instead of JavaScript property access notation. - _uniqueItems_: validate `uniqueItems` keyword (true by default). - _unicode_: calculate correct length of strings with unicode pairs (true by default). Pass `false` to use `.length` of strings that is faster, but gives "incorrect" lengths of strings with unicode pairs - each unicode pair is counted as two characters. - _nullable_: support keyword "nullable" from [Open API 3 specification](https://swagger.io/docs/specification/data-models/data-types/). - _format_: formats validation mode. 
  Option values:
  - `"fast"` (default) - simplified and fast validation (see [Formats](#formats) for details of which formats are available and affected by this option).
  - `"full"` - more restrictive and slow validation. E.g., 25:00:00 and 2015/14/33 will be invalid time and date in 'full' mode but they will be valid in 'fast' mode.
  - `false` - ignore all format keywords.
- _formats_: an object with custom formats. Keys and values will be passed to the `addFormat` method.
- _keywords_: an object with custom keywords. Keys and values will be passed to the `addKeyword` method.
- _unknownFormats_: handling of unknown formats. Option values:
  - `true` (default) - if an unknown format is encountered the exception is thrown during schema compilation. If the `format` keyword value is a [$data reference](#data-reference) and it is unknown the validation will fail.
  - `[String]` - an array of unknown format names that will be ignored. This option can be used to allow usage of third party schemas with format(s) for which you don't have definitions, but still fail if another unknown format is used. If the `format` keyword value is a [$data reference](#data-reference) and it is not in this array the validation will fail.
  - `"ignore"` - to log a warning during schema compilation and always pass validation (the default behaviour in versions before 5.0.0). This option is not recommended, as it allows you to mistype a format name which will then not be validated, without any error message. This behaviour is required by JSON Schema specification.
- _schemas_: an array or object of schemas that will be added to the instance. In case you pass the array the schemas must have IDs in them. When the object is passed the method `addSchema(value, key)` will be called for each schema in this object.
- _logger_: sets the logging method. Default is the global `console` object that should have methods `log`, `warn` and `error`. See [Error logging](#error-logging). Option values:
  - custom logger - it should have methods `log`, `warn` and `error`. If any of these methods is missing an exception will be thrown.
  - `false` - logging is disabled.

##### Referenced schema options

- _schemaId_: this option defines which keywords are used as schema URI. Option values:
  - `"$id"` (default) - only use the `$id` keyword as schema URI (as specified in JSON Schema draft-06/07), ignore the `id` keyword (if it is present a warning will be logged).
  - `"id"` - only use the `id` keyword as schema URI (as specified in JSON Schema draft-04), ignore the `$id` keyword (if it is present a warning will be logged).
  - `"auto"` - use both `$id` and `id` keywords as schema URI. If both are present (in the same schema object) and different, an exception will be thrown during schema compilation.
- _missingRefs_: handling of missing referenced schemas. Option values:
  - `true` (default) - if the reference cannot be resolved during compilation an exception is thrown. The thrown error has properties `missingRef` (with hash fragment) and `missingSchema` (without it). Both properties are resolved relative to the current base id (usually schema id, unless it was substituted).
  - `"ignore"` - to log an error during compilation and always pass validation.
  - `"fail"` - to log an error and successfully compile the schema but fail validation if this rule is checked.
- _extendRefs_: validation of other keywords when `$ref` is present in the schema.
Option values: - `"ignore"` (default) - when `$ref` is used other keywords are ignored (as per [JSON Reference](https://tools.ietf.org/html/draft-pbryan-zyp-json-ref-03#section-3) standard). A warning will be logged during the schema compilation. - `"fail"` (recommended) - if other validation keywords are used together with `$ref` the exception will be thrown when the schema is compiled. This option is recommended to make sure schema has no keywords that are ignored, which can be confusing. - `true` - validate all keywords in the schemas with `$ref` (the default behaviour in versions before 5.0.0). - _loadSchema_: asynchronous function that will be used to load remote schemas when `compileAsync` [method](#api-compileAsync) is used and some reference is missing (option `missingRefs` should NOT be 'fail' or 'ignore'). This function should accept remote schema uri as a parameter and return a Promise that resolves to a schema. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### Options to modify validated data - _removeAdditional_: remove additional properties - see example in [Filtering data](#filtering-data). This option is not used if schema is added with `addMetaSchema` method. Option values: - `false` (default) - not to remove additional properties - `"all"` - all additional properties are removed, regardless of `additionalProperties` keyword in schema (and no validation is made for them). - `true` - only additional properties with `additionalProperties` keyword equal to `false` are removed. - `"failing"` - additional properties that fail schema validation will be removed (where `additionalProperties` keyword is `false` or schema). - _useDefaults_: replace missing or undefined properties and items with the values from corresponding `default` keywords. Default behaviour is to ignore `default` keywords. This option is not used if schema is added with `addMetaSchema` method. See examples in [Assigning defaults](#assigning-defaults). Option values: - `false` (default) - do not use defaults - `true` - insert defaults by value (object literal is used). - `"empty"` - in addition to missing or undefined, use defaults for properties and items that are equal to `null` or `""` (an empty string). - `"shared"` (deprecated) - insert defaults by reference. If the default is an object, it will be shared by all instances of validated data. If you modify the inserted default in the validated data, it will be modified in the schema as well. - _coerceTypes_: change data type of data to match `type` keyword. See the example in [Coercing data types](#coercing-data-types) and [coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md). Option values: - `false` (default) - no type coercion. - `true` - coerce scalar data types. - `"array"` - in addition to coercions between scalar types, coerce scalar data to an array with one element and vice versa (as required by the schema). ##### Strict mode options - _strictDefaults_: report ignored `default` keywords in schemas. Option values: - `false` (default) - ignored defaults are not reported - `true` - if an ignored default is present, throw an error - `"log"` - if an ignored default is present, log warning - _strictKeywords_: report unknown keywords in schemas. 
Option values: - `false` (default) - unknown keywords are not reported - `true` - if an unknown keyword is present, throw an error - `"log"` - if an unknown keyword is present, log warning - _strictNumbers_: validate numbers strictly, failing validation for NaN and Infinity. Option values: - `false` (default) - NaN or Infinity will pass validation for numeric types - `true` - NaN or Infinity will not pass validation for numeric types ##### Asynchronous validation options - _transpile_: Requires [ajv-async](https://github.com/ajv-validator/ajv-async) package. It determines whether Ajv transpiles compiled asynchronous validation function. Option values: - `undefined` (default) - transpile with [nodent](https://github.com/MatAtBread/nodent) if async functions are not supported. - `true` - always transpile with nodent. - `false` - do not transpile; if async functions are not supported an exception will be thrown. ##### Advanced options - _meta_: add [meta-schema](http://json-schema.org/documentation.html) so it can be used by other schemas (true by default). If an object is passed, it will be used as the default meta-schema for schemas that have no `$schema` keyword. This default meta-schema MUST have `$schema` keyword. - _validateSchema_: validate added/compiled schemas against meta-schema (true by default). `$schema` property in the schema can be http://json-schema.org/draft-07/schema or absent (draft-07 meta-schema will be used) or can be a reference to the schema previously added with `addMetaSchema` method. Option values: - `true` (default) - if the validation fails, throw the exception. - `"log"` - if the validation fails, log error. - `false` - skip schema validation. - _addUsedSchema_: by default methods `compile` and `validate` add schemas to the instance if they have `$id` (or `id`) property that doesn't start with "#". If `$id` is present and it is not unique the exception will be thrown. Set this option to `false` to skip adding schemas to the instance and the `$id` uniqueness check when these methods are used. This option does not affect `addSchema` method. - _inlineRefs_: Affects compilation of referenced schemas. Option values: - `true` (default) - the referenced schemas that don't have refs in them are inlined, regardless of their size - that substantially improves performance at the cost of the bigger size of compiled schema functions. - `false` - to not inline referenced schemas (they will be compiled as separate functions). - integer number - to limit the maximum number of keywords of the schema that will be inlined. - _passContext_: pass validation context to custom keyword functions. If this option is `true` and you pass some context to the compiled validation function with `validate.call(context, data)`, the `context` will be available as `this` in your custom keywords. By default `this` is Ajv instance. - _loopRequired_: by default `required` keyword is compiled into a single expression (or a sequence of statements in `allErrors` mode). In case of a very large number of properties in this keyword it may result in a very big validation function. Pass integer to set the number of properties above which `required` keyword will be validated in a loop - smaller validation function size but also worse performance. - _ownProperties_: by default Ajv iterates over all enumerable object properties; when this option is `true` only own enumerable object properties (i.e. found directly on the object rather than on its prototype) are iterated. Contributed by @mbroadst. 
- _multipleOfPrecision_: by default `multipleOf` keyword is validated by comparing the result of division with parseInt() of that result. It works for dividers that are bigger than 1. For small dividers such as 0.01 the result of the division is usually not integer (even when it should be integer, see issue [#84](https://github.com/ajv-validator/ajv/issues/84)). If you need to use fractional dividers set this option to some positive integer N to have `multipleOf` validated using this formula: `Math.abs(Math.round(division) - division) < 1e-N` (it is slower but allows for float arithmetics deviations). - _errorDataPath_ (deprecated): set `dataPath` to point to 'object' (default) or to 'property' when validating keywords `required`, `additionalProperties` and `dependencies`. - _messages_: Include human-readable messages in errors. `true` by default. `false` can be passed when custom messages are used (e.g. with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n)). - _sourceCode_: add `sourceCode` property to validating function (for debugging; this code can be different from the result of toString call). - _processCode_: an optional function to process generated code before it is passed to Function constructor. It can be used to either beautify (the validating function is generated without line-breaks) or to transpile code. Starting from version 5.0.0 this option replaced options: - `beautify` that formatted the generated function using [js-beautify](https://github.com/beautify-web/js-beautify). If you want to beautify the generated code pass a function calling `require('js-beautify').js_beautify` as `processCode: code => js_beautify(code)`. - `transpile` that transpiled asynchronous validation function. You can still use `transpile` option with [ajv-async](https://github.com/ajv-validator/ajv-async) package. See [Asynchronous validation](#asynchronous-validation) for more information. - _cache_: an optional instance of cache to store compiled schemas using stable-stringified schema as a key. For example, set-associative cache [sacjs](https://github.com/epoberezkin/sacjs) can be used. If not passed then a simple hash is used which is good enough for the common use case (a limited number of statically defined schemas). Cache should have methods `put(key, value)`, `get(key)`, `del(key)` and `clear()`. - _serialize_: an optional function to serialize schema to cache key. Pass `false` to use schema itself as a key (e.g., if WeakMap used as a cache). By default [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) is used. ## Validation errors In case of validation failure, Ajv assigns the array of errors to `errors` property of validation function (or to `errors` property of Ajv instance when `validate` or `validateSchema` methods were called). In case of [asynchronous validation](#asynchronous-validation), the returned promise is rejected with exception `Ajv.ValidationError` that has `errors` property. ### Error objects Each error is an object with the following properties: - _keyword_: validation keyword. - _dataPath_: the path to the part of the data that was validated. By default `dataPath` uses JavaScript property access notation (e.g., `".prop[1].subProp"`). When the option `jsonPointers` is true (see [Options](#options)) `dataPath` will be set using JSON pointer standard (e.g., `"/prop/1/subProp"`). - _schemaPath_: the path (JSON-pointer as a URI fragment) to the schema of the keyword that failed validation. 
- _params_: the object with the additional information about the error that can be used to create custom error messages (e.g., using [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package). See below for parameters set by all keywords. - _message_: the standard error message (can be excluded with option `messages` set to false). - _schema_: the schema of the keyword (added with `verbose` option). - _parentSchema_: the schema containing the keyword (added with `verbose` option). - _data_: the data validated by the keyword (added with `verbose` option). __Please note__: `propertyNames` keyword schema validation errors have an additional property `propertyName`; `dataPath` points to the object. After schema validation for each property name, if it is invalid an additional error is added with the property `keyword` equal to `"propertyNames"`. ### Error parameters Properties of the `params` object in errors depend on the keyword that failed validation. - `maxItems`, `minItems`, `maxLength`, `minLength`, `maxProperties`, `minProperties` - property `limit` (number, the schema of the keyword). - `additionalItems` - property `limit` (the maximum number of allowed items in case when `items` keyword is an array of schemas and `additionalItems` is false). - `additionalProperties` - property `additionalProperty` (the property not used in `properties` and `patternProperties` keywords). - `dependencies` - properties: - `property` (dependent property), - `missingProperty` (required missing dependency - only the first one is reported currently), - `deps` (required dependencies, comma separated list as a string), - `depsCount` (the number of required dependencies). - `format` - property `format` (the schema of the keyword). - `maximum`, `minimum` - properties: - `limit` (number, the schema of the keyword), - `exclusive` (boolean, the schema of `exclusiveMaximum` or `exclusiveMinimum`), - `comparison` (string, comparison operation to compare the data to the limit, with the data on the left and the limit on the right; can be "<", "<=", ">", ">="). - `multipleOf` - property `multipleOf` (the schema of the keyword). - `pattern` - property `pattern` (the schema of the keyword). - `required` - property `missingProperty` (required property that is missing). - `propertyNames` - property `propertyName` (an invalid property name). - `patternRequired` (in ajv-keywords) - property `missingPattern` (required pattern that did not match any property). - `type` - property `type` (required type(s), a string, can be a comma-separated list). - `uniqueItems` - properties `i` and `j` (indices of duplicate items). - `const` - property `allowedValue` pointing to the value (the schema of the keyword). - `enum` - property `allowedValues` pointing to the array of values (the schema of the keyword). - `$ref` - property `ref` with the referenced schema URI. - `oneOf` - property `passingSchemas` (array of indices of passing schemas, null if no schema passes). - custom keywords (in case keyword definition doesn't create errors) - property `keyword` (the keyword name). ### Error logging Using the `logger` option when initializing Ajv will allow you to define custom logging. Here you can build upon the existing logging. The use of other logging packages is supported as long as the package or its associated wrapper exposes the required methods. If any of the required methods are missing, an exception will be thrown.
- **Required Methods**: `log`, `warn`, `error` ```javascript var otherLogger = new OtherLogger(); var ajv = new Ajv({ logger: { log: console.log.bind(console), warn: function warn() { otherLogger.logWarn.apply(otherLogger, arguments); }, error: function error() { otherLogger.logError.apply(otherLogger, arguments); console.error.apply(console, arguments); } } }); ``` ## Plugins Ajv can be extended with plugins that add custom keywords, formats or functions to process generated code. When such plugin is published as npm package it is recommended that it follows these conventions: - it exports a function - this function accepts ajv instance as the first parameter and returns the same instance to allow chaining - this function can accept an optional configuration as the second parameter If you have published a useful plugin please submit a PR to add it to the next section. ## Related packages - [ajv-async](https://github.com/ajv-validator/ajv-async) - plugin to configure async validation mode - [ajv-bsontype](https://github.com/BoLaMN/ajv-bsontype) - plugin to validate mongodb's bsonType formats - [ajv-cli](https://github.com/jessedc/ajv-cli) - command line interface - [ajv-errors](https://github.com/ajv-validator/ajv-errors) - plugin for custom error messages - [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) - internationalised error messages - [ajv-istanbul](https://github.com/ajv-validator/ajv-istanbul) - plugin to instrument generated validation code to measure test coverage of your schemas - [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) - plugin with custom validation keywords (select, typeof, etc.) - [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) - plugin with keywords $merge and $patch - [ajv-pack](https://github.com/ajv-validator/ajv-pack) - produces a compact module exporting validation functions - [ajv-formats-draft2019](https://github.com/luzlab/ajv-formats-draft2019) - format validators for draft2019 that aren't already included in ajv (ie. `idn-hostname`, `idn-email`, `iri`, `iri-reference` and `duration`). ## Some packages using Ajv - [webpack](https://github.com/webpack/webpack) - a module bundler. 
Its main purpose is to bundle JavaScript files for usage in a browser - [jsonscript-js](https://github.com/JSONScript/jsonscript-js) - the interpreter for [JSONScript](http://www.jsonscript.org) - scripted processing of existing endpoints and services - [osprey-method-handler](https://github.com/mulesoft-labs/osprey-method-handler) - Express middleware for validating requests and responses based on a RAML method object, used in [osprey](https://github.com/mulesoft/osprey) - validating API proxy generated from a RAML definition - [har-validator](https://github.com/ahmadnassri/har-validator) - HTTP Archive (HAR) validator - [jsoneditor](https://github.com/josdejong/jsoneditor) - a web-based tool to view, edit, format, and validate JSON http://jsoneditoronline.org - [JSON Schema Lint](https://github.com/nickcmaynard/jsonschemalint) - a web tool to validate JSON/YAML document against a single JSON Schema http://jsonschemalint.com - [objection](https://github.com/vincit/objection.js) - SQL-friendly ORM for Node.js - [table](https://github.com/gajus/table) - formats data into a string table - [ripple-lib](https://github.com/ripple/ripple-lib) - a JavaScript API for interacting with [Ripple](https://ripple.com) in Node.js and the browser - [restbase](https://github.com/wikimedia/restbase) - distributed storage with REST API & dispatcher for backend services built to provide a low-latency & high-throughput API for Wikipedia / Wikimedia content - [hippie-swagger](https://github.com/CacheControl/hippie-swagger) - [Hippie](https://github.com/vesln/hippie) wrapper that provides end to end API testing with swagger validation - [react-form-controlled](https://github.com/seeden/react-form-controlled) - React controlled form components with validation - [rabbitmq-schema](https://github.com/tjmehta/rabbitmq-schema) - a schema definition module for RabbitMQ graphs and messages - [@query/schema](https://www.npmjs.com/package/@query/schema) - stream filtering with a URI-safe query syntax parsing to JSON Schema - [chai-ajv-json-schema](https://github.com/peon374/chai-ajv-json-schema) - chai plugin to us JSON Schema with expect in mocha tests - [grunt-jsonschema-ajv](https://github.com/SignpostMarv/grunt-jsonschema-ajv) - Grunt plugin for validating files against JSON Schema - [extract-text-webpack-plugin](https://github.com/webpack-contrib/extract-text-webpack-plugin) - extract text from bundle into a file - [electron-builder](https://github.com/electron-userland/electron-builder) - a solution to package and build a ready for distribution Electron app - [addons-linter](https://github.com/mozilla/addons-linter) - Mozilla Add-ons Linter - [gh-pages-generator](https://github.com/epoberezkin/gh-pages-generator) - multi-page site generator converting markdown files to GitHub pages - [ESLint](https://github.com/eslint/eslint) - the pluggable linting utility for JavaScript and JSX ## Tests ``` npm install git submodule update --init npm test ``` ## Contributing All validation functions are generated using doT templates in [dot](https://github.com/ajv-validator/ajv/tree/master/lib/dot) folder. Templates are precompiled so doT is not a run-time dependency. `npm run build` - compiles templates to [dotjs](https://github.com/ajv-validator/ajv/tree/master/lib/dotjs) folder. 
`npm run watch` - automatically compiles templates when files in dot folder change Please see [Contributing guidelines](https://github.com/ajv-validator/ajv/blob/master/CONTRIBUTING.md) ## Changes history See https://github.com/ajv-validator/ajv/releases __Please note__: [Changes in version 7.0.0-beta](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0) [Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0). ## Code of conduct Please review and follow the [Code of conduct](https://github.com/ajv-validator/ajv/blob/master/CODE_OF_CONDUCT.md). Please report any unacceptable behaviour to ajv.validator@gmail.com - it will be reviewed by the project team. ## Open-source software support Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides a centralised support to open-source software users, in addition to the support provided by software maintainers. ## License [MIT](https://github.com/ajv-validator/ajv/blob/master/LICENSE) # which Like the unix `which` utility. Finds the first instance of a specified executable in the PATH environment variable. Does not cache the results, so `hash -r` is not needed when the PATH changes. ## USAGE ```javascript var which = require('which') // async usage which('node', function (er, resolvedPath) { // er is returned if no "node" is found on the PATH // if it is found, then the absolute path to the exec is returned }) // or promise which('node').then(resolvedPath => { ... }).catch(er => { ... not found ... }) // sync usage // throws if not found var resolved = which.sync('node') // if nothrow option is used, returns null if not found resolved = which.sync('node', {nothrow: true}) // Pass options to override the PATH and PATHEXT environment vars. which('node', { path: someOtherPath }, function (er, resolved) { if (er) throw er console.log('found at %j', resolved) }) ``` ## CLI USAGE Same as the BSD `which(1)` binary. ``` usage: which [-as] program ... ``` ## OPTIONS You may pass an options object as the second argument. - `path`: Use instead of the `PATH` environment variable. - `pathExt`: Use instead of the `PATHEXT` environment variable. - `all`: Return all matches, instead of just the first one. Note that this means the function returns an array of strings instead of a single string. <p align="center"> <img width="250" src="https://raw.githubusercontent.com/yargs/yargs/master/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> ![ci](https://github.com/yargs/yargs/workflows/ci/badge.svg) [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments: ``` mocha [spec..] Run tests with Mocha Commands mocha inspect [spec..] 
Run tests with Mocha [default] mocha init <path> create a client-side Mocha setup at <path> Rules & Behavior --allow-uncaught Allow uncaught errors to propagate [boolean] --async-only, -A Require all tests to use a callback (async) or return a Promise [boolean] ``` * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage ### Simple Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') const argv = yargs(hideBin(process.argv)).argv if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') yargs(hideBin(process.argv)) .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## Supported Platforms ### TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ### Deno As of `v16`, `yargs` supports [Deno](https://github.com/denoland/deno): ```typescript import yargs from 'https://deno.land/x/yargs/deno.ts' import { Arguments } from 'https://deno.land/x/yargs/deno-types.ts' yargs(Deno.args) .command('download <files...>', 'download a list of files', (yargs: any) => { return yargs.positional('files', { describe: 'a list of files to do something with' }) }, (argv: Arguments) => { console.info(argv) }) .strictCommands() .demandCommand(1) .argv ``` ### ESM As of `v16`,`yargs` supports ESM imports: ```js import yargs from 'yargs' import { hideBin } from 'yargs/helpers' yargs(hideBin(process.argv)) .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => { console.info(argv) }) .demandCommand(1) .argv ``` ### Usage in Browser See examples of using yargs in the browser in [docs](/docs/browser.md). ## Community Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). 
## Documentation ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Bundling yargs](/docs/bundling.md) * [Contributing](/contributing.md) ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # Web IDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [Web IDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). The goal is that you should be able to write code like ```js "use strict"; const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a Web IDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different Web IDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the Web IDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the Web IDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). Each method also accepts a second, optional, parameter for miscellaneous options. For conversion methods that throw errors, a string option `{ context }` may be provided to provide more information in the error message. (For example, `conversions["float"](NaN, { context: "Argument 1 of Interface's operation" })` will throw an error with message `"Argument 1 of Interface's operation is not a finite floating-point value."`) Specific conversions may also accept other options, the details of which can be found below. 
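To make the calling convention above a little more concrete, here is a small usage sketch. The converted values shown in the comments are illustrative and follow the general Web IDL conversion rules (ToBoolean for `boolean`, modulo-2^32 wrap-around for `unsigned long` when `enforceRange` is not used) rather than output captured from a specific version of this package:

```js
"use strict";
const conversions = require("webidl-conversions");

// Each method converts a JavaScript value per the Web IDL rules for that type:
conversions["boolean"](0);          // false (ES ToBoolean)
conversions["unsigned long"](-1);   // 4294967295 (wraps modulo 2^32 by default)
conversions["DOMString"](42);       // "42"

// The optional { context } option only affects the error message:
try {
  conversions["float"](NaN, { context: "Argument 1 of doStuff" });
} catch (err) {
  // "Argument 1 of doStuff is not a finite floating-point value."
  console.error(err.message);
}
```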
## Conversions implemented Conversions for all of the basic types from the Web IDL specification are implemented: - [`any`](https://heycam.github.io/webidl/#es-any) - [`void`](https://heycam.github.io/webidl/#es-void) - [`boolean`](https://heycam.github.io/webidl/#es-boolean) - [Integer types](https://heycam.github.io/webidl/#es-integer-types), which can additionally be provided the boolean options `{ clamp, enforceRange }` as a second parameter - [`float`](https://heycam.github.io/webidl/#es-float), [`unrestricted float`](https://heycam.github.io/webidl/#es-unrestricted-float) - [`double`](https://heycam.github.io/webidl/#es-double), [`unrestricted double`](https://heycam.github.io/webidl/#es-unrestricted-double) - [`DOMString`](https://heycam.github.io/webidl/#es-DOMString), which can additionally be provided the boolean option `{ treatNullAsEmptyString }` as a second parameter - [`ByteString`](https://heycam.github.io/webidl/#es-ByteString), [`USVString`](https://heycam.github.io/webidl/#es-USVString) - [`object`](https://heycam.github.io/webidl/#es-object) - [`Error`](https://heycam.github.io/webidl/#es-Error) - [Buffer source types](https://heycam.github.io/webidl/#es-buffer-source-types) Additionally, for convenience, the following derived type definitions are implemented: - [`ArrayBufferView`](https://heycam.github.io/webidl/#ArrayBufferView) - [`BufferSource`](https://heycam.github.io/webidl/#BufferSource) - [`DOMTimeStamp`](https://heycam.github.io/webidl/#DOMTimeStamp) - [`Function`](https://heycam.github.io/webidl/#Function) - [`VoidFunction`](https://heycam.github.io/webidl/#VoidFunction) (although it will not censor the return type) Derived types, such as nullable types, promise types, sequences, records, etc., are not handled by this library. You may wish to investigate the [webidl2js](https://github.com/jsdom/webidl2js) project. ### A note on the `long long` types The `long long` and `unsigned long long` Web IDL types can hold values that cannot be stored in JavaScript numbers, so the conversion is imperfect. For example, converting the JavaScript number `18446744073709552000` to a Web IDL `long long` is supposed to produce the Web IDL value `-18446744073709551232`. Since we are representing our Web IDL values in JavaScript, we can't represent `-18446744073709551232`, so instead the best we could do is output `-18446744073709552000`. This library actually doesn't even get that far. Producing those results would require doing accurate modular arithmetic on 64-bit intermediate values, but JavaScript does not make this easy. We could pull in a big-integer library as a dependency, but in lieu of that, for now we have decided to just produce inaccurate results if you pass in numbers that are not strictly between `Number.MIN_SAFE_INTEGER` and `Number.MAX_SAFE_INTEGER`. ## Background What's actually going on here, conceptually, is pretty weird. Let's try to explain. Web IDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on Web IDL values, i.e. instances of Web IDL types. For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a Web IDL value of [Web IDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules.
Separately from its type system, Web IDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given Web IDL operation, how does that get converted into a Web IDL value? For example, a JavaScript `true` passed in the position of a Web IDL `boolean` argument becomes a Web IDL `true`. But, a JavaScript `true` passed in the position of a [Web IDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a Web IDL `1`. And so on. Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the Web IDL algorithms, they don't actually use Web IDL values, since those aren't "real" outside of specs. Instead, implementations apply the Web IDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting Web IDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is payoff of Web IDL, in a nutshell. And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given Web IDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ Web IDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ Web IDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a Web IDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't use this Seriously, why would you ever use this? You really shouldn't. Web IDL is … strange, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from Web IDL. In general, your JavaScript should not be trying to become more like Web IDL; if anything, we should fix Web IDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in Web IDL. Its main consumer is the [jsdom](https://github.com/tmpvar/jsdom) project. # lodash.clonedeep v4.5.0 The [lodash](https://lodash.com/) method `_.cloneDeep` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.clonedeep ``` In Node.js: ```js var cloneDeep = require('lodash.clonedeep'); ``` See the [documentation](https://lodash.com/docs#cloneDeep) or [package source](https://github.com/lodash/lodash/blob/4.5.0-npm-packages/lodash.clonedeep) for more details. ## assemblyscript-temporal An implementation of temporal within AssemblyScript, with an initial focus on non-timezone-aware classes and functionality. ### Why? 
AssemblyScript has minimal `Date` support; however, the JS Date API itself is terrible and people tend not to use it that often. As a result, libraries like moment / luxon have become staple replacements. However, there is now a [relatively mature TC39 proposal](https://github.com/tc39/proposal-temporal) that adds greatly improved date support to JS. The goal of this project is to implement Temporal for AssemblyScript. ### Usage This library currently supports the following types: #### `PlainDateTime` A `PlainDateTime` represents a calendar date and wall-clock time that does not carry time zone information, e.g. December 7th, 1995 at 3:00 PM (in the Gregorian calendar). For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaindatetime.html); this implementation follows the specification as closely as possible. You can create a `PlainDateTime` from individual components, a string or an object literal:

```javascript
datetime = new PlainDateTime(1976, 11, 18, 15, 23, 30, 123, 456, 789);
datetime.year; // 1976
datetime.month; // 11
// ...
datetime.nanosecond; // 789

datetime = PlainDateTime.from("1976-11-18T12:34:56");
datetime.toString(); // "1976-11-18T12:34:56"

datetime = PlainDateTime.from({ year: 1966, month: 3, day: 3 });
datetime.toString(); // "1966-03-03T00:00:00"
```

There are various ways you can manipulate a date:

```javascript
// use 'with' to copy a date but with various property values overridden
datetime = new PlainDateTime(1976, 11, 18, 15, 23, 30, 123, 456, 789);
datetime.with({ year: 2019 }).toString(); // "2019-11-18T15:23:30.123456789"

// use 'add' or 'subtract' to add / subtract a duration
datetime = PlainDateTime.from("2020-01-12T15:00");
datetime.add({ months: 1 }).toString(); // "2020-02-12T15:00:00"

// add / subtract support Duration objects or object literals
datetime.add(new Duration(1)).toString(); // "2021-01-12T15:00:00"
```

You can compare dates and check for equality:

```javascript
dt1 = PlainDateTime.from("1976-11-18");
dt2 = PlainDateTime.from("2019-10-29");

PlainDateTime.compare(dt1, dt1); // 0
PlainDateTime.compare(dt1, dt2); // -1
dt1.equals(dt1); // true
```

Currently `PlainDateTime` only supports the ISO 8601 (Gregorian) calendar. #### `PlainDate` A `PlainDate` object represents a calendar date that is not associated with a particular time or time zone, e.g. August 24th, 2006. For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaindate.html); this implementation follows the specification as closely as possible. The `PlainDate` API is almost identical to `PlainDateTime`, so see above for API usage examples. #### `PlainTime` A `PlainTime` object represents a wall-clock time that is not associated with a particular date or time zone, e.g. 7:39 PM. For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaintime.html); this implementation follows the specification as closely as possible. The `PlainTime` API is almost identical to `PlainDateTime`, so see above for API usage examples. #### `PlainMonthDay` A date without a year component. This is useful to express things like "Bastille Day is on the 14th of July". For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plainmonthday.html); this implementation follows the specification as closely as possible.
```javascript const monthDay = PlainMonthDay.from({ month: 7, day: 14 }); // => 07-14 const date = monthDay.toPlainDate({ year: 2030 }); // => 2030-07-14 date.dayOfWeek; // => 7 ``` The `PlainMonthDay` API is almost identical to `PlainDateTime`, so see above for more API usage examples. #### `PlainYearMonth` A date without a day component. This is useful to express things like "the October 2020 meeting". For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plainyearmonth.html) , this implementation follows the specification as closely as possible. The `PlainYearMonth` API is almost identical to `PlainDateTime`, so see above for API usage examples. #### `now` The `now` object has several methods which give information about the current time and date. ```javascript dateTime = now.plainDateTimeISO(); dateTime.toString(); // 2021-04-01T12:05:47.357 ``` ## Contributing This project is open source, MIT licensed and your contributions are very much welcomed. There is a [brief document that outlines implementation progress and priorities](./development.md). # Acorn A tiny, fast JavaScript parser written in JavaScript. ## Community Acorn is open source software released under an [MIT license](https://github.com/acornjs/acorn/blob/master/acorn/LICENSE). You are welcome to [report bugs](https://github.com/acornjs/acorn/issues) or create pull requests on [github](https://github.com/acornjs/acorn). For questions and discussion, please use the [Tern discussion forum](https://discuss.ternjs.net). ## Installation The easiest way to install acorn is from [`npm`](https://www.npmjs.com/): ```sh npm install acorn ``` Alternately, you can download the source and build acorn yourself: ```sh git clone https://github.com/acornjs/acorn.git cd acorn npm install ``` ## Interface **parse**`(input, options)` is the main interface to the library. The `input` parameter is a string, `options` can be undefined or an object setting some of the options listed below. The return value will be an abstract syntax tree object as specified by the [ESTree spec](https://github.com/estree/estree). ```javascript let acorn = require("acorn"); console.log(acorn.parse("1 + 1")); ``` When encountering a syntax error, the parser will raise a `SyntaxError` object with a meaningful message. The error object will have a `pos` property that indicates the string offset at which the error occurred, and a `loc` object that contains a `{line, column}` object referring to that same position. Options can be provided by passing a second argument, which should be an object containing any of these fields: - **ecmaVersion**: Indicates the ECMAScript version to parse. Must be either 3, 5, 6 (2015), 7 (2016), 8 (2017), 9 (2018), 10 (2019) or 11 (2020, partial support). This influences support for strict mode, the set of reserved words, and support for new syntax features. Default is 10. **NOTE**: Only 'stage 4' (finalized) ECMAScript features are being implemented by Acorn. Other proposed new features can be implemented through plugins. - **sourceType**: Indicate the mode the code should be parsed in. Can be either `"script"` or `"module"`. This influences global strict mode and parsing of `import` and `export` declarations. **NOTE**: If set to `"module"`, then static `import` / `export` syntax will be valid, even if `ecmaVersion` is less than 6. - **onInsertedSemicolon**: If given a callback, that callback will be called whenever a missing semicolon is inserted by the parser. 
The callback will be given the character offset of the point where the semicolon is inserted as argument, and if `locations` is on, also a `{line, column}` object representing this position. - **onTrailingComma**: Like `onInsertedSemicolon`, but for trailing commas. - **allowReserved**: If `false`, using a reserved word will generate an error. Defaults to `true` for `ecmaVersion` 3, `false` for higher versions. When given the value `"never"`, reserved words and keywords can also not be used as property names (as in Internet Explorer's old parser). - **allowReturnOutsideFunction**: By default, a return statement at the top level raises an error. Set this to `true` to accept such code. - **allowImportExportEverywhere**: By default, `import` and `export` declarations can only appear at a program's top level. Setting this option to `true` allows them anywhere where a statement is allowed. - **allowAwaitOutsideFunction**: By default, `await` expressions can only appear inside `async` functions. Setting this option to `true` allows to have top-level `await` expressions. They are still not allowed in non-`async` functions, though. - **allowHashBang**: When this is enabled (off by default), if the code starts with the characters `#!` (as in a shellscript), the first line will be treated as a comment. - **locations**: When `true`, each node has a `loc` object attached with `start` and `end` subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form. Default is `false`. - **onToken**: If a function is passed for this option, each found token will be passed in same format as tokens returned from `tokenizer().getToken()`. If array is passed, each found token is pushed to it. Note that you are not allowed to call the parser from the callback—that will corrupt its internal state. - **onComment**: If a function is passed for this option, whenever a comment is encountered the function will be called with the following parameters: - `block`: `true` if the comment is a block comment, false if it is a line comment. - `text`: The content of the comment. - `start`: Character offset of the start of the comment. - `end`: Character offset of the end of the comment. When the `locations` options is on, the `{line, column}` locations of the comment’s start and end are passed as two additional parameters. If array is passed for this option, each found comment is pushed to it as object in Esprima format: ```javascript { "type": "Line" | "Block", "value": "comment text", "start": Number, "end": Number, // If `locations` option is on: "loc": { "start": {line: Number, column: Number} "end": {line: Number, column: Number} }, // If `ranges` option is on: "range": [Number, Number] } ``` Note that you are not allowed to call the parser from the callback—that will corrupt its internal state. - **ranges**: Nodes have their start and end characters offsets recorded in `start` and `end` properties (directly on the node, rather than the `loc` object, which holds line/column data. To also add a [semi-standardized](https://bugzilla.mozilla.org/show_bug.cgi?id=745678) `range` property holding a `[start, end]` array with the same numbers, set the `ranges` option to `true`. - **program**: It is possible to parse multiple files into a single AST by passing the tree produced by parsing the first file as the `program` option in subsequent parses. This will add the toplevel forms of the parsed file to the "Program" (top) node of an existing parse tree. 
- **sourceFile**: When the `locations` option is `true`, you can pass this option to add a `source` attribute in every node’s `loc` object. Note that the contents of this option are not examined or processed in any way; you are free to use whatever format you choose. - **directSourceFile**: Like `sourceFile`, but a `sourceFile` property will be added (regardless of the `location` option) directly to the nodes, rather than the `loc` object. - **preserveParens**: If this option is `true`, parenthesized expressions are represented by (non-standard) `ParenthesizedExpression` nodes that have a single `expression` property containing the expression inside parentheses. **parseExpressionAt**`(input, offset, options)` will parse a single expression in a string, and return its AST. It will not complain if there is more of the string left after the expression. **tokenizer**`(input, options)` returns an object with a `getToken` method that can be called repeatedly to get the next token, a `{start, end, type, value}` object (with added `loc` property when the `locations` option is enabled and `range` property when the `ranges` option is enabled). When the token's type is `tokTypes.eof`, you should stop calling the method, since it will keep returning that same token forever. In ES6 environment, returned result can be used as any other protocol-compliant iterable: ```javascript for (let token of acorn.tokenizer(str)) { // iterate over the tokens } // transform code to array of tokens: var tokens = [...acorn.tokenizer(str)]; ``` **tokTypes** holds an object mapping names to the token type objects that end up in the `type` properties of tokens. **getLineInfo**`(input, offset)` can be used to get a `{line, column}` object for a given program string and offset. ### The `Parser` class Instances of the **`Parser`** class contain all the state and logic that drives a parse. It has static methods `parse`, `parseExpressionAt`, and `tokenizer` that match the top-level functions by the same name. When extending the parser with plugins, you need to call these methods on the extended version of the class. To extend a parser with plugins, you can use its static `extend` method. ```javascript var acorn = require("acorn"); var jsx = require("acorn-jsx"); var JSXParser = acorn.Parser.extend(jsx()); JSXParser.parse("foo(<bar/>)"); ``` The `extend` method takes any number of plugin values, and returns a new `Parser` class that includes the extra parser logic provided by the plugins. ## Command line interface The `bin/acorn` utility can be used to parse a file from the command line. It accepts as arguments its input file and the following options: - `--ecma3|--ecma5|--ecma6|--ecma7|--ecma8|--ecma9|--ecma10`: Sets the ECMAScript version to parse. Default is version 9. - `--module`: Sets the parsing mode to `"module"`. Is set to `"script"` otherwise. - `--locations`: Attaches a "loc" object to each node with "start" and "end" subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form. - `--allow-hash-bang`: If the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment. - `--compact`: No whitespace is used in the AST output. - `--silent`: Do not output the AST, just return the exit status. - `--help`: Print the usage information and quit. The utility spits out the syntax tree as JSON data. 
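As a quick sketch of how several of the `parse` options documented above combine, the following uses only the documented API; the sample source text and the particular option values are arbitrary:

```javascript
const acorn = require("acorn");

const comments = [];
const ast = acorn.parse("// greet\nexport const hi = name => `hi ${name}`", {
  ecmaVersion: 11,        // 2020
  sourceType: "module",   // allow import/export declarations
  locations: true,        // attach {line, column} info to every node
  onComment: comments     // collect comments into this array (Esprima format)
});

console.log(ast.type);            // "Program"
console.log(ast.body[0].type);    // "ExportNamedDeclaration"
console.log(comments[0].value);   // " greet"
```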
## Existing plugins - [`acorn-jsx`](https://github.com/RReverser/acorn-jsx): Parse [Facebook JSX syntax extensions](https://github.com/facebook/jsx) Plugins for ECMAScript proposals: - [`acorn-stage3`](https://github.com/acornjs/acorn-stage3): Parse most stage 3 proposals, bundling: - [`acorn-class-fields`](https://github.com/acornjs/acorn-class-fields): Parse [class fields proposal](https://github.com/tc39/proposal-class-fields) - [`acorn-import-meta`](https://github.com/acornjs/acorn-import-meta): Parse [import.meta proposal](https://github.com/tc39/proposal-import-meta) - [`acorn-private-methods`](https://github.com/acornjs/acorn-private-methods): Parse [private methods, getters and setters proposal](https://github.com/tc39/proposal-private-methods) # near-sdk-core This package contains a convenient interface for interacting with NEAR's host runtime. To see the functions that are provided by the host node, see [`env.ts`](./assembly/env/env.ts). # near-sdk-as Collection of packages used in developing NEAR smart contracts in AssemblyScript, including: - [`runtime library`](https://github.com/near/near-sdk-as/tree/master/sdk-core) - AssemblyScript near runtime library - [`bindgen`](https://github.com/near/near-sdk-as/tree/master/bindgen) - AssemblyScript transformer that adds the bindings needed to (de)serialize inputs and outputs. - [`near-mock-vm`](https://github.com/near/near-sdk-as/tree/master/near-mock-vm) - Core of the NEAR VM compiled to WebAssembly used for running unit tests. - [`@as-pect/cli`](https://github.com/jtenner/as-pect) - AssemblyScript testing framework similar to jest. ## To Install ```sh yarn add -D near-sdk-as ``` ## Project Setup To set up an AS project to compile with the SDK, add the following `asconfig.json` file to the root: ```json { "extends": "near-sdk-as/asconfig.json" } ``` Then, if your main file is `assembly/index.ts`, the project can be built with [`asbuild`](https://github.com/willemneal/asbuild): ```sh yarn asb ``` will create a release build and place it in `./build/release/<name-in-package.json>.wasm`, and ```sh yarn asb --target debug ``` will create a debug build and place it in `./build/debug/..` ## Testing ### Unit Testing See the [sdk's as-pect tests for an example](./sdk/assembly/__tests__) of creating unit tests. Test files must end in `.spec.ts` and live in `assembly/__tests__`. ## License `near-sdk-as` is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See [LICENSE-MIT](LICENSE-MIT) and [LICENSE-APACHE](LICENSE-APACHE) for details. functional-red-black-tree ========================= A [fully persistent](http://en.wikipedia.org/wiki/Persistent_data_structure) [red-black tree](http://en.wikipedia.org/wiki/Red%E2%80%93black_tree) written 100% in JavaScript. Works both in node.js and in the browser via [browserify](http://browserify.org/). Functional (or fully persistent) data structures allow for non-destructive updates. So if you insert an element into the tree, it returns a new tree with the inserted element rather than destructively updating the existing tree in place. Doing this requires using extra memory, and if one were naive it could cost as much as reallocating the entire tree. Instead, this data structure saves some memory by recycling references to previously allocated subtrees. This requires using only O(log(n)) additional memory per update instead of a full O(n) copy.
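As a quick sketch of what these non-destructive updates look like in practice (using only the `createTree`, `insert`, `keys` and `length` API documented below; the keys and values are arbitrary):

```javascript
var createTree = require("functional-red-black-tree")

var t1 = createTree()
var t2 = t1.insert("a", 1)   // returns a new tree; t1 is untouched
var t3 = t2.insert("b", 2)

console.log(t1.length)  // 0  -- every earlier version remains usable
console.log(t2.keys)    // [ 'a' ]
console.log(t3.keys)    // [ 'a', 'b' ]
```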
Some advantages of this are that it is possible to apply insertions and removals to the tree while still iterating over previous versions of the tree. Functional and persistent data structures can also be useful in many geometric algorithms like point location within triangulations or ray queries, and can be used to analyze the history of executing various algorithms. This added power, though, comes at a cost, since it is generally a bit slower to use a functional data structure than an imperative version. However, if your application needs this behavior then you may consider using this module. # Install npm install functional-red-black-tree # Example Here is an example of some basic usage:

```javascript
//Load the library
var createTree = require("functional-red-black-tree")

//Create a tree
var t1 = createTree()

//Insert some items into the tree
var t2 = t1.insert(1, "foo")
var t3 = t2.insert(2, "bar")

//Remove something
var t4 = t3.remove(1)
```

# API ```javascript var createTree = require("functional-red-black-tree") ``` ## Overview - [Tree methods](#tree-methods) - [`var tree = createTree([compare])`](#var-tree-=-createtreecompare) - [`tree.keys`](#treekeys) - [`tree.values`](#treevalues) - [`tree.length`](#treelength) - [`tree.get(key)`](#treegetkey) - [`tree.insert(key, value)`](#treeinsertkey-value) - [`tree.remove(key)`](#treeremovekey) - [`tree.find(key)`](#treefindkey) - [`tree.ge(key)`](#treegekey) - [`tree.gt(key)`](#treegtkey) - [`tree.lt(key)`](#treeltkey) - [`tree.le(key)`](#treelekey) - [`tree.at(position)`](#treeatposition) - [`tree.begin`](#treebegin) - [`tree.end`](#treeend) - [`tree.forEach(visitor(key,value)[, lo[, hi]])`](#treeforEachvisitorkeyvalue-lo-hi) - [`tree.root`](#treeroot) - [Node properties](#node-properties) - [`node.key`](#nodekey) - [`node.value`](#nodevalue) - [`node.left`](#nodeleft) - [`node.right`](#noderight) - [Iterator methods](#iterator-methods) - [`iter.key`](#iterkey) - [`iter.value`](#itervalue) - [`iter.node`](#iternode) - [`iter.tree`](#itertree) - [`iter.index`](#iterindex) - [`iter.valid`](#itervalid) - [`iter.clone()`](#iterclone) - [`iter.remove()`](#iterremove) - [`iter.update(value)`](#iterupdatevalue) - [`iter.next()`](#iternext) - [`iter.prev()`](#iterprev) - [`iter.hasNext`](#iterhasnext) - [`iter.hasPrev`](#iterhasprev) ## Tree methods ### `var tree = createTree([compare])` Creates an empty functional tree * `compare` is an optional comparison function, same semantics as array.sort() **Returns** An empty tree ordered by `compare` ### `tree.keys` A sorted array of all the keys in the tree ### `tree.values` An array of all the values in the tree ### `tree.length` The number of items in the tree ### `tree.get(key)` Retrieves the value associated to the given key * `key` is the key of the item to look up **Returns** The value of the first node associated to `key` ### `tree.insert(key, value)` Creates a new tree with the new pair inserted. * `key` is the key of the item to insert * `value` is the value of the item to insert **Returns** A new tree with `key` and `value` inserted ### `tree.remove(key)` Removes the first item with `key` in the tree * `key` is the key of the item to remove **Returns** A new tree with the given item removed if it exists ### `tree.find(key)` Returns an iterator pointing to the first item in the tree with `key`, otherwise `null`. ### `tree.ge(key)` Finds the first item in the tree whose key is `>= key` * `key` is the key to search for **Returns** An iterator at the given element.
### `tree.gt(key)` Finds the first item in the tree whose key is `> key` * `key` is the key to search for **Returns** An iterator at the given element ### `tree.lt(key)` Finds the last item in the tree whose key is `< key` * `key` is the key to search for **Returns** An iterator at the given element ### `tree.le(key)` Finds the last item in the tree whose key is `<= key` * `key` is the key to search for **Returns** An iterator at the given element ### `tree.at(position)` Finds an iterator starting at the given element * `position` is the index at which the iterator gets created **Returns** An iterator starting at position ### `tree.begin` An iterator pointing to the first element in the tree ### `tree.end` An iterator pointing to the last element in the tree ### `tree.forEach(visitor(key,value)[, lo[, hi]])` Walks a visitor function over the nodes of the tree in order. * `visitor(key,value)` is a callback that gets executed on each node. If a truthy value is returned from the visitor, then iteration is stopped. * `lo` is an optional start of the range to visit (inclusive) * `hi` is an optional end of the range to visit (non-inclusive) **Returns** The last value returned by the callback ### `tree.root` Returns the root node of the tree ## Node properties Each node of the tree has the following properties: ### `node.key` The key associated to the node ### `node.value` The value associated to the node ### `node.left` The left subtree of the node ### `node.right` The right subtree of the node ## Iterator methods ### `iter.key` The key of the item referenced by the iterator ### `iter.value` The value of the item referenced by the iterator ### `iter.node` The value of the node at the iterator's current position. `null` if the iterator is not valid. ### `iter.tree` The tree associated to the iterator ### `iter.index` Returns the position of this iterator in the sequence. ### `iter.valid` Checks if the iterator is valid ### `iter.clone()` Makes a copy of the iterator ### `iter.remove()` Removes the item at the position of the iterator **Returns** A new binary search tree with `iter`'s item removed ### `iter.update(value)` Updates the value of the node in the tree at this iterator **Returns** A new binary search tree with the corresponding node updated ### `iter.next()` Advances the iterator to the next position ### `iter.prev()` Moves the iterator backward one element ### `iter.hasNext` If true, then the iterator is not at the end of the sequence ### `iter.hasPrev` If true, then the iterator is not at the beginning of the sequence # Credits (c) 2013 Mikola Lysenko.
MIT License [![npm version](https://img.shields.io/npm/v/eslint.svg)](https://www.npmjs.com/package/eslint) [![Downloads](https://img.shields.io/npm/dm/eslint.svg)](https://www.npmjs.com/package/eslint) [![Build Status](https://github.com/eslint/eslint/workflows/CI/badge.svg)](https://github.com/eslint/eslint/actions) [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint.svg?type=shield)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint?ref=badge_shield) <br /> [![Open Collective Backers](https://img.shields.io/opencollective/backers/eslint)](https://opencollective.com/eslint) [![Open Collective Sponsors](https://img.shields.io/opencollective/sponsors/eslint)](https://opencollective.com/eslint) [![Follow us on Twitter](https://img.shields.io/twitter/follow/geteslint?label=Follow&style=social)](https://twitter.com/intent/user?screen_name=geteslint) # ESLint [Website](https://eslint.org) | [Configuring](https://eslint.org/docs/user-guide/configuring) | [Rules](https://eslint.org/docs/rules/) | [Contributing](https://eslint.org/docs/developer-guide/contributing) | [Reporting Bugs](https://eslint.org/docs/developer-guide/contributing/reporting-bugs) | [Code of Conduct](https://eslint.org/conduct) | [Twitter](https://twitter.com/geteslint) | [Mailing List](https://groups.google.com/group/eslint) | [Chat Room](https://eslint.org/chat) ESLint is a tool for identifying and reporting on patterns found in ECMAScript/JavaScript code. In many ways, it is similar to JSLint and JSHint with a few exceptions: * ESLint uses [Espree](https://github.com/eslint/espree) for JavaScript parsing. * ESLint uses an AST to evaluate patterns in code. * ESLint is completely pluggable, every single rule is a plugin and you can add more at runtime. ## Table of Contents 1. [Installation and Usage](#installation-and-usage) 2. [Configuration](#configuration) 3. [Code of Conduct](#code-of-conduct) 4. [Filing Issues](#filing-issues) 5. [Frequently Asked Questions](#faq) 6. [Releases](#releases) 7. [Security Policy](#security-policy) 8. [Semantic Versioning Policy](#semantic-versioning-policy) 9. [Stylistic Rule Updates](#stylistic-rule-updates) 10. [License](#license) 11. [Team](#team) 12. [Sponsors](#sponsors) 13. [Technology Sponsors](#technology-sponsors) ## <a name="installation-and-usage"></a>Installation and Usage Prerequisites: [Node.js](https://nodejs.org/) (`^10.12.0`, or `>=12.0.0`) built with SSL support. (If you are using an official Node.js distribution, SSL is always built in.) You can install ESLint using npm: ``` $ npm install eslint --save-dev ``` You should then set up a configuration file: ``` $ ./node_modules/.bin/eslint --init ``` After that, you can run ESLint on any file or directory like this: ``` $ ./node_modules/.bin/eslint yourfile.js ``` ## <a name="configuration"></a>Configuration After running `eslint --init`, you'll have a `.eslintrc` file in your directory. In it, you'll see some rules configured like this: ```json { "rules": { "semi": ["error", "always"], "quotes": ["error", "double"] } } ``` The names `"semi"` and `"quotes"` are the names of [rules](https://eslint.org/docs/rules) in ESLint. 
The first value is the error level of the rule and can be one of these values: * `"off"` or `0` - turn the rule off * `"warn"` or `1` - turn the rule on as a warning (doesn't affect exit code) * `"error"` or `2` - turn the rule on as an error (exit code will be 1) The three error levels allow you fine-grained control over how ESLint applies rules (for more configuration options and details, see the [configuration docs](https://eslint.org/docs/user-guide/configuring)). ## <a name="code-of-conduct"></a>Code of Conduct ESLint adheres to the [JS Foundation Code of Conduct](https://eslint.org/conduct). ## <a name="filing-issues"></a>Filing Issues Before filing an issue, please be sure to read the guidelines for what you're reporting: * [Bug Report](https://eslint.org/docs/developer-guide/contributing/reporting-bugs) * [Propose a New Rule](https://eslint.org/docs/developer-guide/contributing/new-rules) * [Proposing a Rule Change](https://eslint.org/docs/developer-guide/contributing/rule-changes) * [Request a Change](https://eslint.org/docs/developer-guide/contributing/changes) ## <a name="faq"></a>Frequently Asked Questions ### I'm using JSCS, should I migrate to ESLint? Yes. [JSCS has reached end of life](https://eslint.org/blog/2016/07/jscs-end-of-life) and is no longer supported. We have prepared a [migration guide](https://eslint.org/docs/user-guide/migrating-from-jscs) to help you convert your JSCS settings to an ESLint configuration. We are now at or near 100% compatibility with JSCS. If you try ESLint and believe we are not yet compatible with a JSCS rule/configuration, please create an issue (mentioning that it is a JSCS compatibility issue) and we will evaluate it as per our normal process. ### Does Prettier replace ESLint? No, ESLint does both traditional linting (looking for problematic patterns) and style checking (enforcement of conventions). You can use ESLint for everything, or you can combine both using Prettier to format your code and ESLint to catch possible errors. ### Why can't ESLint find my plugins? * Make sure your plugins (and ESLint) are both in your project's `package.json` as devDependencies (or dependencies, if your project uses ESLint at runtime). * Make sure you have run `npm install` and all your dependencies are installed. * Make sure your plugins' peerDependencies have been installed as well. You can use `npm view eslint-plugin-myplugin peerDependencies` to see what peer dependencies `eslint-plugin-myplugin` has. ### Does ESLint support JSX? Yes, ESLint natively supports parsing JSX syntax (this must be enabled in [configuration](https://eslint.org/docs/user-guide/configuring)). Please note that supporting JSX syntax *is not* the same as supporting React. React applies specific semantics to JSX syntax that ESLint doesn't recognize. We recommend using [eslint-plugin-react](https://www.npmjs.com/package/eslint-plugin-react) if you are using React and want React semantics. ### What ECMAScript versions does ESLint support? ESLint has full support for ECMAScript 3, 5 (default), 2015, 2016, 2017, 2018, 2019, and 2020. You can set your desired ECMAScript syntax (and other settings, like global variables or your target environments) through [configuration](https://eslint.org/docs/user-guide/configuring). ### What about experimental features? ESLint's parser only officially supports the latest final ECMAScript standard. 
We will make changes to core rules in order to avoid crashes on stage 3 ECMAScript syntax proposals (as long as they are implemented using the correct experimental ESTree syntax). We may make changes to core rules to better work with language extensions (such as JSX, Flow, and TypeScript) on a case-by-case basis. In other cases (including if rules need to warn on more or fewer cases due to new syntax, rather than just not crashing), we recommend you use other parsers and/or rule plugins. If you are using Babel, you can use the [babel-eslint](https://github.com/babel/babel-eslint) parser and [eslint-plugin-babel](https://github.com/babel/eslint-plugin-babel) to use any option available in Babel. Once a language feature has been adopted into the ECMAScript standard (stage 4 according to the [TC39 process](https://tc39.github.io/process-document/)), we will accept issues and pull requests related to the new feature, subject to our [contributing guidelines](https://eslint.org/docs/developer-guide/contributing). Until then, please use the appropriate parser and plugin(s) for your experimental feature. ### Where to ask for help? Join our [Mailing List](https://groups.google.com/group/eslint) or [Chatroom](https://eslint.org/chat). ### Why doesn't ESLint lock dependency versions? Lock files like `package-lock.json` are helpful for deployed applications. They ensure that dependencies are consistent between environments and across deployments. Packages like `eslint` that get published to the npm registry do not include lock files. `npm install eslint` as a user will respect version constraints in ESLint's `package.json`. ESLint and its dependencies will be included in the user's lock file if one exists, but ESLint's own lock file would not be used. We intentionally don't lock dependency versions so that we have the latest compatible dependency versions in development and CI that our users get when installing ESLint in a project. The Twilio blog has a [deeper dive](https://www.twilio.com/blog/lockfiles-nodejs) to learn more. ## <a name="releases"></a>Releases We have scheduled releases every two weeks on Friday or Saturday. You can follow a [release issue](https://github.com/eslint/eslint/issues?q=is%3Aopen+is%3Aissue+label%3Arelease) for updates about the scheduling of any particular release. ## <a name="security-policy"></a>Security Policy ESLint takes security seriously. We work hard to ensure that ESLint is safe for everyone and that security issues are addressed quickly and responsibly. Read the full [security policy](https://github.com/eslint/.github/blob/master/SECURITY.md). ## <a name="semantic-versioning-policy"></a>Semantic Versioning Policy ESLint follows [semantic versioning](https://semver.org). However, due to the nature of ESLint as a code quality tool, it's not always clear when a minor or major version bump occurs. To help clarify this for everyone, we've defined the following semantic versioning policy for ESLint: * Patch release (intended to not break your lint build) * A bug fix in a rule that results in ESLint reporting fewer linting errors. * A bug fix to the CLI or core (including formatters). * Improvements to documentation. * Non-user-facing changes such as refactoring code, adding, deleting, or modifying tests, and increasing test coverage. * Re-releasing after a failed release (i.e., publishing a release that doesn't work for anyone). * Minor release (might break your lint build) * A bug fix in a rule that results in ESLint reporting more linting errors. 
* A new rule is created. * A new option to an existing rule that does not result in ESLint reporting more linting errors by default. * A new addition to an existing rule to support a newly-added language feature (within the last 12 months) that will result in ESLint reporting more linting errors by default. * An existing rule is deprecated. * A new CLI capability is created. * New capabilities to the public API are added (new classes, new methods, new arguments to existing methods, etc.). * A new formatter is created. * `eslint:recommended` is updated and will result in strictly fewer linting errors (e.g., rule removals). * Major release (likely to break your lint build) * `eslint:recommended` is updated and may result in new linting errors (e.g., rule additions, most rule option updates). * A new option to an existing rule that results in ESLint reporting more linting errors by default. * An existing formatter is removed. * Part of the public API is removed or changed in an incompatible way. The public API includes: * Rule schemas * Configuration schema * Command-line options * Node.js API * Rule, formatter, parser, plugin APIs According to our policy, any minor update may report more linting errors than the previous release (ex: from a bug fix). As such, we recommend using the tilde (`~`) in `package.json` e.g. `"eslint": "~3.1.0"` to guarantee the results of your builds. ## <a name="stylistic-rule-updates"></a>Stylistic Rule Updates Stylistic rules are frozen according to [our policy](https://eslint.org/blog/2020/05/changes-to-rules-policies) on how we evaluate new rules and rule changes. This means: * **Bug fixes**: We will still fix bugs in stylistic rules. * **New ECMAScript features**: We will also make sure stylistic rules are compatible with new ECMAScript features. * **New options**: We will **not** add any new options to stylistic rules unless an option is the only way to fix a bug or support a newly-added ECMAScript feature. ## <a name="license"></a>License [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint.svg?type=large)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint?ref=badge_large) ## <a name="team"></a>Team These folks keep the project moving and are resources for help. <!-- NOTE: This section is autogenerated. Do not manually edit.--> <!--teamstart--> ### Technical Steering Committee (TSC) The people who manage releases, review feature requests, and meet regularly to ensure ESLint is properly maintained. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/nzakas"> <img src="https://github.com/nzakas.png?s=75" width="75" height="75"><br /> Nicholas C. Zakas </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/btmills"> <img src="https://github.com/btmills.png?s=75" width="75" height="75"><br /> Brandon Mills </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/mdjermanovic"> <img src="https://github.com/mdjermanovic.png?s=75" width="75" height="75"><br /> Milos Djermanovic </a> </td></tr></tbody></table> ### Reviewers The people who review and implement new features. 
<table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/mysticatea"> <img src="https://github.com/mysticatea.png?s=75" width="75" height="75"><br /> Toru Nagashima </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/aladdin-add"> <img src="https://github.com/aladdin-add.png?s=75" width="75" height="75"><br /> 薛定谔的猫 </a> </td></tr></tbody></table> ### Committers The people who review and fix bugs and help triage issues. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/brettz9"> <img src="https://github.com/brettz9.png?s=75" width="75" height="75"><br /> Brett Zamir </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/bmish"> <img src="https://github.com/bmish.png?s=75" width="75" height="75"><br /> Bryan Mishkin </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/g-plane"> <img src="https://github.com/g-plane.png?s=75" width="75" height="75"><br /> Pig Fang </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/anikethsaha"> <img src="https://github.com/anikethsaha.png?s=75" width="75" height="75"><br /> Anix </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/yeonjuan"> <img src="https://github.com/yeonjuan.png?s=75" width="75" height="75"><br /> YeonJuan </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/snitin315"> <img src="https://github.com/snitin315.png?s=75" width="75" height="75"><br /> Nitin Kumar </a> </td></tr></tbody></table> <!--teamend--> ## <a name="sponsors"></a>Sponsors The following companies, organizations, and individuals support ESLint's ongoing maintenance and development. [Become a Sponsor](https://opencollective.com/eslint) to get your logo on our README and website. <!-- NOTE: This section is autogenerated. 
Do not manually edit.--> <!--sponsorsstart--> <h3>Platinum Sponsors</h3> <p><a href="https://automattic.com"><img src="https://images.opencollective.com/photomatt/d0ef3e1/logo.png" alt="Automattic" height="undefined"></a></p><h3>Gold Sponsors</h3> <p><a href="https://nx.dev"><img src="https://images.opencollective.com/nx/0efbe42/logo.png" alt="Nx (by Nrwl)" height="96"></a> <a href="https://google.com/chrome"><img src="https://images.opencollective.com/chrome/dc55bd4/logo.png" alt="Chrome's Web Framework & Tools Performance Fund" height="96"></a> <a href="https://www.salesforce.com"><img src="https://images.opencollective.com/salesforce/ca8f997/logo.png" alt="Salesforce" height="96"></a> <a href="https://www.airbnb.com/"><img src="https://images.opencollective.com/airbnb/d327d66/logo.png" alt="Airbnb" height="96"></a> <a href="https://coinbase.com"><img src="https://avatars.githubusercontent.com/u/1885080?v=4" alt="Coinbase" height="96"></a> <a href="https://substack.com/"><img src="https://avatars.githubusercontent.com/u/53023767?v=4" alt="Substack" height="96"></a></p><h3>Silver Sponsors</h3> <p><a href="https://retool.com/"><img src="https://images.opencollective.com/retool/98ea68e/logo.png" alt="Retool" height="64"></a> <a href="https://liftoff.io/"><img src="https://images.opencollective.com/liftoff/5c4fa84/logo.png" alt="Liftoff" height="64"></a></p><h3>Bronze Sponsors</h3> <p><a href="https://www.crosswordsolver.org/anagram-solver/"><img src="https://images.opencollective.com/anagram-solver/2666271/logo.png" alt="Anagram Solver" height="32"></a> <a href="null"><img src="https://images.opencollective.com/bugsnag-stability-monitoring/c2cef36/logo.png" alt="Bugsnag Stability Monitoring" height="32"></a> <a href="https://mixpanel.com"><img src="https://images.opencollective.com/mixpanel/cd682f7/logo.png" alt="Mixpanel" height="32"></a> <a href="https://www.vpsserver.com"><img src="https://images.opencollective.com/vpsservercom/logo.png" alt="VPS Server" height="32"></a> <a href="https://icons8.com"><img src="https://images.opencollective.com/icons8/7fa1641/logo.png" alt="Icons8: free icons, photos, illustrations, and music" height="32"></a> <a href="https://discord.com"><img src="https://images.opencollective.com/discordapp/f9645d9/logo.png" alt="Discord" height="32"></a> <a href="https://themeisle.com"><img src="https://images.opencollective.com/themeisle/d5592fe/logo.png" alt="ThemeIsle" height="32"></a> <a href="https://www.firesticktricks.com"><img src="https://images.opencollective.com/fire-stick-tricks/b8fbe2c/logo.png" alt="Fire Stick Tricks" height="32"></a> <a href="https://www.practiceignition.com"><img src="https://avatars.githubusercontent.com/u/5753491?v=4" alt="Practice Ignition" height="32"></a></p> <!--sponsorsend--> ## <a name="technology-sponsors"></a>Technology Sponsors * Site search ([eslint.org](https://eslint.org)) is sponsored by [Algolia](https://www.algolia.com) * Hosting for ([eslint.org](https://eslint.org)) is sponsored by [Netlify](https://www.netlify.com) * Password management is sponsored by [1Password](https://www.1password.com) # fast-json-stable-stringify Deterministic `JSON.stringify()` - a faster version of [@substack](https://github.com/substack)'s json-stable-strigify without [jsonify](https://github.com/substack/jsonify). You can also pass in a custom comparison function. 
[![Build Status](https://travis-ci.org/epoberezkin/fast-json-stable-stringify.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-json-stable-stringify) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-json-stable-stringify/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-json-stable-stringify?branch=master) # example ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; console.log(stringify(obj)); ``` output: ``` {"a":3,"b":[{"x":4,"y":5,"z":6},7],"c":8} ``` # methods ``` js var stringify = require('fast-json-stable-stringify') ``` ## var str = stringify(obj, opts) Return a deterministic stringified string `str` from the object `obj`. ## options ### cmp If `opts` is given, you can supply `opts.cmp` to provide a custom comparison function for object keys. Your function `opts.cmp` is called with these parameters: ``` js opts.cmp({ key: akey, value: avalue }, { key: bkey, value: bvalue }) ``` For example, to sort on the object key names in reverse order you could write: ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; var s = stringify(obj, function (a, b) { return a.key < b.key ? 1 : -1; }); console.log(s); ``` which results in the output string: ``` {"c":8,"b":[{"z":6,"y":5,"x":4},7],"a":3} ``` Or if you wanted to sort on the object values in reverse order, you could write: ``` js var stringify = require('fast-json-stable-stringify'); var obj = { d: 6, c: 5, b: [{z:3,y:2,x:1},9], a: 10 }; var s = stringify(obj, function (a, b) { return a.value < b.value ? 1 : -1; }); console.log(s); ``` which outputs: ``` {"d":6,"c":5,"b":[{"z":3,"y":2,"x":1},9],"a":10} ``` ### cycles Pass `true` as `opts.cycles` to stringify circular properties as `__cycle__`; the result will not be valid JSON in that case. Without this option, a `TypeError` is thrown when the object contains a cycle. # install With [npm](https://npmjs.org) do: ``` npm install fast-json-stable-stringify ``` # benchmark To run the benchmark (requires Node.js 6+): ``` node benchmark ``` Results: ``` fast-json-stable-stringify x 17,189 ops/sec ±1.43% (83 runs sampled) json-stable-stringify x 13,634 ops/sec ±1.39% (85 runs sampled) fast-stable-stringify x 20,212 ops/sec ±1.20% (84 runs sampled) faster-stable-stringify x 15,549 ops/sec ±1.12% (84 runs sampled) The fastest is fast-stable-stringify ``` ## Enterprise support The fast-json-stable-stringify package is part of the [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-json-stable-stringify?utm_source=npm-fast-json-stable-stringify&utm_medium=referral&utm_campaign=enterprise&utm_term=repo), which provides centralised commercial support to open-source software users in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. # license [MIT](https://github.com/epoberezkin/fast-json-stable-stringify/blob/master/LICENSE) binaryen.js =========== **binaryen.js** is a port of [Binaryen](https://github.com/WebAssembly/binaryen) to the Web, allowing you to generate [WebAssembly](https://webassembly.org) using a JavaScript API.
<a href="https://github.com/AssemblyScript/binaryen.js/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/binaryen.js/Build/master?label=build&logo=github" alt="Build status" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen.svg?label=latest&color=007acc&logo=npm" alt="npm version" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen/nightly.svg?label=nightly&color=007acc&logo=npm" alt="npm nightly version" /></a> Usage ----- ``` $> npm install binaryen ``` ```js var binaryen = require("binaryen"); // Create a module with a single function var myModule = new binaryen.Module(); myModule.addFunction("add", binaryen.createType([ binaryen.i32, binaryen.i32 ]), binaryen.i32, [ binaryen.i32 ], myModule.block(null, [ myModule.local.set(2, myModule.i32.add( myModule.local.get(0, binaryen.i32), myModule.local.get(1, binaryen.i32) ) ), myModule.return( myModule.local.get(2, binaryen.i32) ) ]) ); myModule.addFunctionExport("add", "add"); // Optimize the module using default passes and levels myModule.optimize(); // Validate the module if (!myModule.validate()) throw new Error("validation error"); // Generate text format and binary var textData = myModule.emitText(); var wasmData = myModule.emitBinary(); // Example usage with the WebAssembly API var compiled = new WebAssembly.Module(wasmData); var instance = new WebAssembly.Instance(compiled, {}); console.log(instance.exports.add(41, 1)); ``` The buildbot also publishes nightly versions once a day if there have been changes. The latest nightly can be installed through ``` $> npm install binaryen@nightly ``` or you can use one of the [previous versions](https://github.com/AssemblyScript/binaryen.js/tags) instead if necessary. ### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/AssemblyScript/binaryen.js@VERSION/index.js` * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/binaryen@VERSION/index.js` * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/binaryen@VERSION/index.js` Replace `VERSION` with a [specific version](https://github.com/AssemblyScript/binaryen.js/releases) or omit it (not recommended in production) to use master/latest. API --- **Please note** that the Binaryen API is evolving fast and that definitions and documentation provided by the package tend to get out of sync despite our best efforts. It's a bot after all. If you rely on binaryen.js and spot an issue, please consider sending a PR our way by updating [index.d.ts](./index.d.ts) and [README.md](./README.md) to reflect the [current API](https://github.com/WebAssembly/binaryen/blob/master/src/js/binaryen.js-post.js). 
<!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ### Contents - [Types](#types) - [Module construction](#module-construction) - [Module manipulation](#module-manipulation) - [Module validation](#module-validation) - [Module optimization](#module-optimization) - [Module creation](#module-creation) - [Expression construction](#expression-construction) - [Control flow](#control-flow) - [Variable accesses](#variable-accesses) - [Integer operations](#integer-operations) - [Floating point operations](#floating-point-operations) - [Datatype conversions](#datatype-conversions) - [Function calls](#function-calls) - [Linear memory accesses](#linear-memory-accesses) - [Host operations](#host-operations) - [Vector operations 🦄](#vector-operations-) - [Atomic memory accesses 🦄](#atomic-memory-accesses-) - [Atomic read-modify-write operations 🦄](#atomic-read-modify-write-operations-) - [Atomic wait and notify operations 🦄](#atomic-wait-and-notify-operations-) - [Sign extension operations 🦄](#sign-extension-operations-) - [Multi-value operations 🦄](#multi-value-operations-) - [Exception handling operations 🦄](#exception-handling-operations-) - [Reference types operations 🦄](#reference-types-operations-) - [Expression manipulation](#expression-manipulation) - [Relooper](#relooper) - [Source maps](#source-maps) - [Debugging](#debugging) <!-- END doctoc generated TOC please keep comment here to allow auto update --> [Future features](http://webassembly.org/docs/future-features/) 🦄 might not be supported by all runtimes. ### Types * **none**: `Type`<br /> The none type, e.g., `void`. * **i32**: `Type`<br /> 32-bit integer type. * **i64**: `Type`<br /> 64-bit integer type. * **f32**: `Type`<br /> 32-bit float type. * **f64**: `Type`<br /> 64-bit float (double) type. * **v128**: `Type`<br /> 128-bit vector type. 🦄 * **funcref**: `Type`<br /> A function reference. 🦄 * **anyref**: `Type`<br /> Any host reference. 🦄 * **nullref**: `Type`<br /> A null reference. 🦄 * **exnref**: `Type`<br /> An exception reference. 🦄 * **unreachable**: `Type`<br /> Special type indicating unreachable code when obtaining information about an expression. * **auto**: `Type`<br /> Special type used in **Module#block** exclusively. Lets the API figure out a block's result type automatically. * **createType**(types: `Type[]`): `Type`<br /> Creates a multi-value type from an array of types. * **expandType**(type: `Type`): `Type[]`<br /> Expands a multi-value type to an array of types. ### Module construction * new **Module**()<br /> Constructs a new module. * **parseText**(text: `string`): `Module`<br /> Creates a module from Binaryen's s-expression text format (not official stack-style text format). * **readBinary**(data: `Uint8Array`): `Module`<br /> Creates a module from binary data. ### Module manipulation * Module#**addFunction**(name: `string`, params: `Type`, results: `Type`, vars: `Type[]`, body: `ExpressionRef`): `FunctionRef`<br /> Adds a function. `vars` indicate additional locals, in the given order. * Module#**getFunction**(name: `string`): `FunctionRef`<br /> Gets a function, by name, * Module#**removeFunction**(name: `string`): `void`<br /> Removes a function, by name. * Module#**getNumFunctions**(): `number`<br /> Gets the number of functions within the module. * Module#**getFunctionByIndex**(index: `number`): `FunctionRef`<br /> Gets the function at the specified index. 
* Module#**addFunctionImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, params: `Type`, results: `Type`): `void`<br /> Adds a function import. * Module#**addTableImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a table import. There's just one table for now, using name `"0"`. * Module#**addMemoryImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a memory import. There's just one memory for now, using name `"0"`. * Module#**addGlobalImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, globalType: `Type`): `void`<br /> Adds a global variable import. Imported globals must be immutable. * Module#**addFunctionExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a function export. * Module#**addTableExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a table export. There's just one table for now, using name `"0"`. * Module#**addMemoryExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a memory export. There's just one memory for now, using name `"0"`. * Module#**addGlobalExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a global variable export. Exported globals must be immutable. * Module#**getNumExports**(): `number`<br /> Gets the number of exports within the module. * Module#**getExportByIndex**(index: `number`): `ExportRef`<br /> Gets the export at the specified index. * Module#**removeExport**(externalName: `string`): `void`<br /> Removes an export, by external name. * Module#**addGlobal**(name: `string`, type: `Type`, mutable: `number`, value: `ExpressionRef`): `GlobalRef`<br /> Adds a global instance variable. * Module#**getGlobal**(name: `string`): `GlobalRef`<br /> Gets a global, by name. * Module#**removeGlobal**(name: `string`): `void`<br /> Removes a global, by name. * Module#**setFunctionTable**(initial: `number`, maximum: `number`, funcs: `string[]`, offset?: `ExpressionRef`): `void`<br /> Sets the contents of the function table. There's just one table for now, using name `"0"`. * Module#**getFunctionTable**(): `{ imported: boolean, segments: TableElement[] }`<br /> Gets the contents of the function table. * TableElement#**offset**: `ExpressionRef` * TableElement#**names**: `string[]` * Module#**setMemory**(initial: `number`, maximum: `number`, exportName: `string | null`, segments: `MemorySegment[]`, flags?: `number[]`, shared?: `boolean`): `void`<br /> Sets the memory. There's just one memory for now, using name `"0"`. Providing `exportName` also creates a memory export. * MemorySegment#**offset**: `ExpressionRef` * MemorySegment#**data**: `Uint8Array` * MemorySegment#**passive**: `boolean` * Module#**getNumMemorySegments**(): `number`<br /> Gets the number of memory segments within the module. * Module#**getMemorySegmentInfoByIndex**(index: `number`): `MemorySegmentInfo`<br /> Gets information about the memory segment at the specified index. * MemorySegmentInfo#**offset**: `number` * MemorySegmentInfo#**data**: `Uint8Array` * MemorySegmentInfo#**passive**: `boolean` * Module#**setStart**(start: `FunctionRef`): `void`<br /> Sets the start function. * Module#**getFeatures**(): `Features`<br /> Gets the WebAssembly features enabled for this module. Note that the return value may be a bitmask indicating multiple features.
Possible feature flags are: * Features.**MVP**: `Features` * Features.**Atomics**: `Features` * Features.**BulkMemory**: `Features` * Features.**MutableGlobals**: `Features` * Features.**NontrappingFPToInt**: `Features` * Features.**SignExt**: `Features` * Features.**SIMD128**: `Features` * Features.**ExceptionHandling**: `Features` * Features.**TailCall**: `Features` * Features.**ReferenceTypes**: `Features` * Features.**Multivalue**: `Features` * Features.**All**: `Features` * Module#**setFeatures**(features: `Features`): `void`<br /> Sets the WebAssembly features enabled for this module. * Module#**addCustomSection**(name: `string`, contents: `Uint8Array`): `void`<br /> Adds a custom section to the binary. * Module#**autoDrop**(): `void`<br /> Enables automatic insertion of `drop` operations where needed. Lets you not worry about dropping when creating your code. * **getFunctionInfo**(ftype: `FunctionRef`): `FunctionInfo`<br /> Obtains information about a function. * FunctionInfo#**name**: `string` * FunctionInfo#**module**: `string | null` (if imported) * FunctionInfo#**base**: `string | null` (if imported) * FunctionInfo#**params**: `Type` * FunctionInfo#**results**: `Type` * FunctionInfo#**vars**: `Type` * FunctionInfo#**body**: `ExpressionRef` * **getGlobalInfo**(global: `GlobalRef`): `GlobalInfo`<br /> Obtains information about a global. * GlobalInfo#**name**: `string` * GlobalInfo#**module**: `string | null` (if imported) * GlobalInfo#**base**: `string | null` (if imported) * GlobalInfo#**type**: `Type` * GlobalInfo#**mutable**: `boolean` * GlobalInfo#**init**: `ExpressionRef` * **getExportInfo**(export_: `ExportRef`): `ExportInfo`<br /> Obtains information about an export. * ExportInfo#**kind**: `ExternalKind` * ExportInfo#**name**: `string` * ExportInfo#**value**: `string` Possible `ExternalKind` values are: * **ExternalFunction**: `ExternalKind` * **ExternalTable**: `ExternalKind` * **ExternalMemory**: `ExternalKind` * **ExternalGlobal**: `ExternalKind` * **ExternalEvent**: `ExternalKind` * **getEventInfo**(event: `EventRef`): `EventInfo`<br /> Obtains information about an event. * EventInfo#**name**: `string` * EventInfo#**module**: `string | null` (if imported) * EventInfo#**base**: `string | null` (if imported) * EventInfo#**attribute**: `number` * EventInfo#**params**: `Type` * EventInfo#**results**: `Type` * **getSideEffects**(expr: `ExpressionRef`, features: `FeatureFlags`): `SideEffects`<br /> Gets the side effects of the specified expression. * SideEffects.**None**: `SideEffects` * SideEffects.**Branches**: `SideEffects` * SideEffects.**Calls**: `SideEffects` * SideEffects.**ReadsLocal**: `SideEffects` * SideEffects.**WritesLocal**: `SideEffects` * SideEffects.**ReadsGlobal**: `SideEffects` * SideEffects.**WritesGlobal**: `SideEffects` * SideEffects.**ReadsMemory**: `SideEffects` * SideEffects.**WritesMemory**: `SideEffects` * SideEffects.**ImplicitTrap**: `SideEffects` * SideEffects.**IsAtomic**: `SideEffects` * SideEffects.**Throws**: `SideEffects` * SideEffects.**Any**: `SideEffects` ### Module validation * Module#**validate**(): `boolean`<br /> Validates the module. Returns `true` if valid, otherwise prints validation errors and returns `false`. ### Module optimization * Module#**optimize**(): `void`<br /> Optimizes the module using the default optimization passes. * Module#**optimizeFunction**(func: `FunctionRef | string`): `void`<br /> Optimizes a single function using the default optimization passes.
* Module#**runPasses**(passes: `string[]`): `void`<br /> Runs the specified passes on the module. * Module#**runPassesOnFunction**(func: `FunctionRef | string`, passes: `string[]`): `void`<br /> Runs the specified passes on a single function. * **getOptimizeLevel**(): `number`<br /> Gets the currently set optimize level. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **setOptimizeLevel**(level: `number`): `void`<br /> Sets the optimization level to use. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **getShrinkLevel**(): `number`<br /> Gets the currently set shrink level. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **setShrinkLevel**(level: `number`): `void`<br /> Sets the shrink level to use. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **getDebugInfo**(): `boolean`<br /> Gets whether generating debug information is currently enabled or not. * **setDebugInfo**(on: `boolean`): `void`<br /> Enables or disables debug information in emitted binaries. * **getLowMemoryUnused**(): `boolean`<br /> Gets whether the low 1K of memory can be considered unused when optimizing. * **setLowMemoryUnused**(on: `boolean`): `void`<br /> Enables or disables whether the low 1K of memory can be considered unused when optimizing. * **getPassArgument**(key: `string`): `string | null`<br /> Gets the value of the specified arbitrary pass argument. * **setPassArgument**(key: `string`, value: `string | null`): `void`<br /> Sets the value of the specified arbitrary pass argument. Removes the respective argument if `value` is `null`. * **clearPassArguments**(): `void`<br /> Clears all arbitrary pass arguments. * **getAlwaysInlineMaxSize**(): `number`<br /> Gets the function size at which we always inline. * **setAlwaysInlineMaxSize**(size: `number`): `void`<br /> Sets the function size at which we always inline. * **getFlexibleInlineMaxSize**(): `number`<br /> Gets the function size which we inline when functions are lightweight. * **setFlexibleInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when functions are lightweight. * **getOneCallerInlineMaxSize**(): `number`<br /> Gets the function size which we inline when there is only one caller. * **setOneCallerInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when there is only one caller. ### Module creation * Module#**emitBinary**(): `Uint8Array`<br /> Returns the module in binary format. * Module#**emitBinary**(sourceMapUrl: `string | null`): `BinaryWithSourceMap`<br /> Returns the module in binary format with its source map. If `sourceMapUrl` is `null`, source map generation is skipped. * BinaryWithSourceMap#**binary**: `Uint8Array` * BinaryWithSourceMap#**sourceMap**: `string | null` * Module#**emitText**(): `string`<br /> Returns the module in Binaryen's s-expression text format (not official stack-style text format). * Module#**emitAsmjs**(): `string`<br /> Returns the [asm.js](http://asmjs.org/) representation of the module. * Module#**dispose**(): `void`<br /> Releases the resources held by the module once it isn't needed anymore. ### Expression construction #### [Control flow](http://webassembly.org/docs/semantics/#control-constructs-and-instructions) * Module#**block**(label: `string | null`, children: `ExpressionRef[]`, resultType?: `Type`): `ExpressionRef`<br /> Creates a block. `resultType` defaults to `none`. 
* Module#**if**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse?: `ExpressionRef`): `ExpressionRef`<br /> Creates an if or if/else combination. * Module#**loop**(label: `string | null`, body: `ExpressionRef`): `ExpressionRef`<br /> Creates a loop. * Module#**br**(label: `string`, condition?: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a branch (br) to a label. * Module#**switch**(labels: `string[]`, defaultLabel: `string`, condition: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a switch (br_table). * Module#**nop**(): `ExpressionRef`<br /> Creates a no-operation (nop) instruction. * Module#**return**(value?: `ExpressionRef`): `ExpressionRef` Creates a return. * Module#**unreachable**(): `ExpressionRef`<br /> Creates an [unreachable](http://webassembly.org/docs/semantics/#unreachable) instruction that will always trap. * Module#**drop**(value: `ExpressionRef`): `ExpressionRef`<br /> Creates a [drop](http://webassembly.org/docs/semantics/#type-parametric-operators) of a value. * Module#**select**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse: `ExpressionRef`, type?: `Type`): `ExpressionRef`<br /> Creates a [select](http://webassembly.org/docs/semantics/#type-parametric-operators) of one of two values. #### [Variable accesses](http://webassembly.org/docs/semantics/#local-variables) * Module#**local.get**(index: `number`, type: `Type`): `ExpressionRef`<br /> Creates a local.get for the local at the specified index. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**local.set**(index: `number`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a local.set for the local at the specified index. * Module#**local.tee**(index: `number`, value: `ExpressionRef`, type: `Type`): `ExpressionRef`<br /> Creates a local.tee for the local at the specified index. A tee differs from a set in that the value remains on the stack. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**global.get**(name: `string`, type: `Type`): `ExpressionRef`<br /> Creates a global.get for the global with the specified name. Note that we must specify the type here as we may not have created the global being accessed yet. * Module#**global.set**(name: `string`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a global.set for the global with the specified name. 
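Tying the accessors above together, here is a minimal sketch of a function that bumps a mutable global through a local. The names `"counter"` and `"bump"` are hypothetical, and `binaryen` is assumed to be loaded as in the Usage example above:

```js
// Minimal sketch (hypothetical names; assumes `binaryen` as in the Usage example above).
var m = new binaryen.Module();

// A mutable i32 global, initialized to 0.
m.addGlobal("counter", binaryen.i32, true, m.i32.const(0));

// bump(): i32 -- increments the global and returns the new value through a local.
m.addFunction("bump", binaryen.none, binaryen.i32, [ binaryen.i32 ],
  m.block(null, [
    // counter = counter + 1
    m.global.set("counter",
      m.i32.add(
        m.global.get("counter", binaryen.i32),
        m.i32.const(1)
      )
    ),
    // tmp = counter; return tmp
    m.local.set(0, m.global.get("counter", binaryen.i32)),
    m.return(m.local.get(0, binaryen.i32))
  ])
);
m.addFunctionExport("bump", "bump");

if (!m.validate()) throw new Error("validation error");
```

Note that the `get` builders take the type explicitly because, as described above, the local or global being read may not have been created yet when the expression is built.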
#### [Integer operations](http://webassembly.org/docs/semantics/#32-bit-integer-operators) * Module#i32.**const**(value: `number`): `ExpressionRef` * Module#i32.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i64.**const**(low: `number`, high: `number`): `ExpressionRef` * Module#i64.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` 
* Module#i64.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Floating point operations](http://webassembly.org/docs/semantics/#floating-point-operators) * Module#f32.**const**(value: `number`): `ExpressionRef` * Module#f32.**const_bits**(value: `number`): `ExpressionRef` * Module#f32.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#f64.**const**(value: `number`): `ExpressionRef` * Module#f64.**const_bits**(value: `number`): `ExpressionRef` * Module#f64.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**ceil**(value: `ExpressionRef`): `ExpressionRef` * 
Module#f64.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Datatype conversions](http://webassembly.org/docs/semantics/#datatype-conversions-truncations-reinterpretations-promotions-and-demotions) * Module#i32.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**wrap**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**demote**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**promote**(value: `ExpressionRef`): `ExpressionRef` #### [Function calls](http://webassembly.org/docs/semantics/#calls) * Module#**call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef` Creates a call to a function. 
Note that we must specify the return type here as we may not have created the function being called yet. * Module#**return_call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef`<br /> Like **call**, but creates a tail-call. 🦄 * Module#**call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Similar to **call**, but calls indirectly, i.e., via a function pointer, so an expression replaces the name as the called value. * Module#**return_call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Like **call_indirect**, but creates a tail-call. 🦄 #### [Linear memory accesses](http://webassembly.org/docs/semantics/#linear-memory-accesses) * Module#i32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> > * Module#i64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store32**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Host operations](http://webassembly.org/docs/semantics/#resizing) * Module#**memory.size**(): `ExpressionRef` * Module#**memory.grow**(value: `number`): `ExpressionRef` #### [Vector 
operations](https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md) 🦄 * Module#v128.**const**(bytes: `Uint8Array`): `ExpressionRef` * Module#v128.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#v128.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#v128.**not**(value: `ExpressionRef`): `ExpressionRef` * Module#v128.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**andnot**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**bitselect**(left: `ExpressionRef`, right: `ExpressionRef`, cond: `ExpressionRef`): `ExpressionRef` > * Module#i8x16.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_u**(left:
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i16x8.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_u**(value:
`ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**dot_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**extract_lane_s**(vec: `ExpressionRef`, index: `number`):
`ExpressionRef` * Module#i64x2.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#f32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**lt**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#v8x16.**shuffle**(left: `ExpressionRef`, right: `ExpressionRef`, mask: `Uint8Array`): `ExpressionRef` * Module#v8x16.**swizzle**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v8x16.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v16x8.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v32x4.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v64x2.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` #### [Atomic memory accesses](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#atomic-memory-accesses) 🦄 * Module#i32.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load32_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store32**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` 
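For illustration, a minimal sketch (not part of the upstream documentation; variable names are placeholders, and validating or running the result would additionally require enabling the atomics feature and declaring a shared memory, which are omitted here) showing how these builders compose with the other expression factories:

```js
const binaryen = require("binaryen");

const mod = new binaryen.Module();

// (i32.atomic.store offset=0 (i32.const 16) (i32.const 42)) followed by
// (i32.atomic.load offset=0 (i32.const 16)), using the signatures listed above.
const store = mod.i32.atomic.store(0, mod.i32.const(16), mod.i32.const(42));
const load = mod.i32.atomic.load(0, mod.i32.const(16));

// Group both expressions into a block that yields the loaded i32.
const body = mod.block(null, [store, load], binaryen.i32);
```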
#### [Atomic read-modify-write operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#read-modify-write) 🦄 * Module#i32.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.add**(offset: 
`number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` #### [Atomic wait and notify operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#wait-and-notify-operators) 🦄 * Module#i32.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#**atomic.notify**(ptr: `ExpressionRef`, notifyCount: `ExpressionRef`): `ExpressionRef` * Module#**atomic.fence**(): `ExpressionRef` #### [Sign extension operations](https://github.com/WebAssembly/sign-extension-ops/blob/master/proposals/sign-extension-ops/Overview.md) 🦄 * Module#i32.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend32_s**(value: `ExpressionRef`): `ExpressionRef` #### [Multi-value 
operations](https://github.com/WebAssembly/multi-value/blob/master/proposals/multi-value/Overview.md) 🦄 Note that these are pseudo instructions enabling Binaryen to reason about multiple values on the stack. * Module#**push**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**pop**(): `ExpressionRef` * Module#i64.**pop**(): `ExpressionRef` * Module#f32.**pop**(): `ExpressionRef` * Module#f64.**pop**(): `ExpressionRef` * Module#v128.**pop**(): `ExpressionRef` * Module#funcref.**pop**(): `ExpressionRef` * Module#anyref.**pop**(): `ExpressionRef` * Module#nullref.**pop**(): `ExpressionRef` * Module#exnref.**pop**(): `ExpressionRef` * Module#tuple.**make**(elements: `ExpressionRef[]`): `ExpressionRef` * Module#tuple.**extract**(tuple: `ExpressionRef`, index: `number`): `ExpressionRef` #### [Exception handling operations](https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md) 🦄 * Module#**try**(body: `ExpressionRef`, catchBody: `ExpressionRef`): `ExpressionRef` * Module#**throw**(event: `string`, operands: `ExpressionRef[]`): `ExpressionRef` * Module#**rethrow**(exnref: `ExpressionRef`): `ExpressionRef` * Module#**br_on_exn**(label: `string`, event: `string`, exnref: `ExpressionRef`): `ExpressionRef` > * Module#**addEvent**(name: `string`, attribute: `number`, params: `Type`, results: `Type`): `Event` * Module#**getEvent**(name: `string`): `Event` * Module#**removeEvent**(name: `string`): `void` * Module#**addEventImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, attribute: `number`, params: `Type`, results: `Type`): `void` * Module#**addEventExport**(internalName: `string`, externalName: `string`): `ExportRef` #### [Reference types operations](https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md) 🦄 * Module#ref.**null**(): `ExpressionRef` * Module#ref.**is_null**(value: `ExpressionRef`): `ExpressionRef` * Module#ref.**func**(name: `string`): `ExpressionRef` ### Expression manipulation * **getExpressionId**(expr: `ExpressionRef`): `ExpressionId`<br /> Gets the id (kind) of the specified expression. 
Possible values are: * **InvalidId**: `ExpressionId` * **BlockId**: `ExpressionId` * **IfId**: `ExpressionId` * **LoopId**: `ExpressionId` * **BreakId**: `ExpressionId` * **SwitchId**: `ExpressionId` * **CallId**: `ExpressionId` * **CallIndirectId**: `ExpressionId` * **LocalGetId**: `ExpressionId` * **LocalSetId**: `ExpressionId` * **GlobalGetId**: `ExpressionId` * **GlobalSetId**: `ExpressionId` * **LoadId**: `ExpressionId` * **StoreId**: `ExpressionId` * **ConstId**: `ExpressionId` * **UnaryId**: `ExpressionId` * **BinaryId**: `ExpressionId` * **SelectId**: `ExpressionId` * **DropId**: `ExpressionId` * **ReturnId**: `ExpressionId` * **HostId**: `ExpressionId` * **NopId**: `ExpressionId` * **UnreachableId**: `ExpressionId` * **AtomicCmpxchgId**: `ExpressionId` * **AtomicRMWId**: `ExpressionId` * **AtomicWaitId**: `ExpressionId` * **AtomicNotifyId**: `ExpressionId` * **AtomicFenceId**: `ExpressionId` * **SIMDExtractId**: `ExpressionId` * **SIMDReplaceId**: `ExpressionId` * **SIMDShuffleId**: `ExpressionId` * **SIMDTernaryId**: `ExpressionId` * **SIMDShiftId**: `ExpressionId` * **SIMDLoadId**: `ExpressionId` * **MemoryInitId**: `ExpressionId` * **DataDropId**: `ExpressionId` * **MemoryCopyId**: `ExpressionId` * **MemoryFillId**: `ExpressionId` * **RefNullId**: `ExpressionId` * **RefIsNullId**: `ExpressionId` * **RefFuncId**: `ExpressionId` * **TryId**: `ExpressionId` * **ThrowId**: `ExpressionId` * **RethrowId**: `ExpressionId` * **BrOnExnId**: `ExpressionId` * **PushId**: `ExpressionId` * **PopId**: `ExpressionId` * **getExpressionType**(expr: `ExpressionRef`): `Type`<br /> Gets the type of the specified expression. * **getExpressionInfo**(expr: `ExpressionRef`): `ExpressionInfo`<br /> Obtains information about an expression, always including: * Info#**id**: `ExpressionId` * Info#**type**: `Type` Additional properties depend on the expression's `id` and are usually equivalent to the respective parameters when creating such an expression: * BlockInfo#**name**: `string` * BlockInfo#**children**: `ExpressionRef[]` > * IfInfo#**condition**: `ExpressionRef` * IfInfo#**ifTrue**: `ExpressionRef` * IfInfo#**ifFalse**: `ExpressionRef | null` > * LoopInfo#**name**: `string` * LoopInfo#**body**: `ExpressionRef` > * BreakInfo#**name**: `string` * BreakInfo#**condition**: `ExpressionRef | null` * BreakInfo#**value**: `ExpressionRef | null` > * SwitchInfo#**names**: `string[]` * SwitchInfo#**defaultName**: `string | null` * SwitchInfo#**condition**: `ExpressionRef` * SwitchInfo#**value**: `ExpressionRef | null` > * CallInfo#**target**: `string` * CallInfo#**operands**: `ExpressionRef[]` > * CallImportInfo#**target**: `string` * CallImportInfo#**operands**: `ExpressionRef[]` > * CallIndirectInfo#**target**: `ExpressionRef` * CallIndirectInfo#**operands**: `ExpressionRef[]` > * LocalGetInfo#**index**: `number` > * LocalSetInfo#**isTee**: `boolean` * LocalSetInfo#**index**: `number` * LocalSetInfo#**value**: `ExpressionRef` > * GlobalGetInfo#**name**: `string` > * GlobalSetInfo#**name**: `string` * GlobalSetInfo#**value**: `ExpressionRef` > * LoadInfo#**isAtomic**: `boolean` * LoadInfo#**isSigned**: `boolean` * LoadInfo#**offset**: `number` * LoadInfo#**bytes**: `number` * LoadInfo#**align**: `number` * LoadInfo#**ptr**: `ExpressionRef` > * StoreInfo#**isAtomic**: `boolean` * StoreInfo#**offset**: `number` * StoreInfo#**bytes**: `number` * StoreInfo#**align**: `number` * StoreInfo#**ptr**: `ExpressionRef` * StoreInfo#**value**: `ExpressionRef` > * ConstInfo#**value**: `number | { low: number, high: number 
}` > * UnaryInfo#**op**: `number` * UnaryInfo#**value**: `ExpressionRef` > * BinaryInfo#**op**: `number` * BinaryInfo#**left**: `ExpressionRef` * BinaryInfo#**right**: `ExpressionRef` > * SelectInfo#**ifTrue**: `ExpressionRef` * SelectInfo#**ifFalse**: `ExpressionRef` * SelectInfo#**condition**: `ExpressionRef` > * DropInfo#**value**: `ExpressionRef` > * ReturnInfo#**value**: `ExpressionRef | null` > * NopInfo > * UnreachableInfo > * HostInfo#**op**: `number` * HostInfo#**nameOperand**: `string | null` * HostInfo#**operands**: `ExpressionRef[]` > * AtomicRMWInfo#**op**: `number` * AtomicRMWInfo#**bytes**: `number` * AtomicRMWInfo#**offset**: `number` * AtomicRMWInfo#**ptr**: `ExpressionRef` * AtomicRMWInfo#**value**: `ExpressionRef` > * AtomicCmpxchgInfo#**bytes**: `number` * AtomicCmpxchgInfo#**offset**: `number` * AtomicCmpxchgInfo#**ptr**: `ExpressionRef` * AtomicCmpxchgInfo#**expected**: `ExpressionRef` * AtomicCmpxchgInfo#**replacement**: `ExpressionRef` > * AtomicWaitInfo#**ptr**: `ExpressionRef` * AtomicWaitInfo#**expected**: `ExpressionRef` * AtomicWaitInfo#**timeout**: `ExpressionRef` * AtomicWaitInfo#**expectedType**: `Type` > * AtomicNotifyInfo#**ptr**: `ExpressionRef` * AtomicNotifyInfo#**notifyCount**: `ExpressionRef` > * AtomicFenceInfo > * SIMDExtractInfo#**op**: `Op` * SIMDExtractInfo#**vec**: `ExpressionRef` * SIMDExtractInfo#**index**: `ExpressionRef` > * SIMDReplaceInfo#**op**: `Op` * SIMDReplaceInfo#**vec**: `ExpressionRef` * SIMDReplaceInfo#**index**: `ExpressionRef` * SIMDReplaceInfo#**value**: `ExpressionRef` > * SIMDShuffleInfo#**left**: `ExpressionRef` * SIMDShuffleInfo#**right**: `ExpressionRef` * SIMDShuffleInfo#**mask**: `Uint8Array` > * SIMDTernaryInfo#**op**: `Op` * SIMDTernaryInfo#**a**: `ExpressionRef` * SIMDTernaryInfo#**b**: `ExpressionRef` * SIMDTernaryInfo#**c**: `ExpressionRef` > * SIMDShiftInfo#**op**: `Op` * SIMDShiftInfo#**vec**: `ExpressionRef` * SIMDShiftInfo#**shift**: `ExpressionRef` > * SIMDLoadInfo#**op**: `Op` * SIMDLoadInfo#**offset**: `number` * SIMDLoadInfo#**align**: `number` * SIMDLoadInfo#**ptr**: `ExpressionRef` > * MemoryInitInfo#**segment**: `number` * MemoryInitInfo#**dest**: `ExpressionRef` * MemoryInitInfo#**offset**: `ExpressionRef` * MemoryInitInfo#**size**: `ExpressionRef` > * MemoryDropInfo#**segment**: `number` > * MemoryCopyInfo#**dest**: `ExpressionRef` * MemoryCopyInfo#**source**: `ExpressionRef` * MemoryCopyInfo#**size**: `ExpressionRef` > * MemoryFillInfo#**dest**: `ExpressionRef` * MemoryFillInfo#**value**: `ExpressionRef` * MemoryFillInfo#**size**: `ExpressionRef` > * TryInfo#**body**: `ExpressionRef` * TryInfo#**catchBody**: `ExpressionRef` > * RefNullInfo > * RefIsNullInfo#**value**: `ExpressionRef` > * RefFuncInfo#**func**: `string` > * ThrowInfo#**event**: `string` * ThrowInfo#**operands**: `ExpressionRef[]` > * RethrowInfo#**exnref**: `ExpressionRef` > * BrOnExnInfo#**name**: `string` * BrOnExnInfo#**event**: `string` * BrOnExnInfo#**exnref**: `ExpressionRef` > * PopInfo > * PushInfo#**value**: `ExpressionRef` * **emitText**(expression: `ExpressionRef`): `string`<br /> Emits the expression in Binaryen's s-expression text format (not official stack-style text format). * **copyExpression**(expression: `ExpressionRef`): `ExpressionRef`<br /> Creates a deep copy of an expression. ### Relooper * new **Relooper**()<br /> Constructs a relooper instance. This lets you provide an arbitrary CFG, and the relooper will structure it for WebAssembly. 
* Relooper#**addBlock**(code: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block to the CFG, containing the provided code as its body. * Relooper#**addBranch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, condition: `ExpressionRef`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block to another block, with a condition (or nothing, if this is the default branch to take from the origin - each block must have one such branch), and optional code to execute on the branch (useful for phis). * Relooper#**addBlockWithSwitch**(code: `ExpressionRef`, condition: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block, which ends with a switch/br_table, with provided code and condition (that determines where we go in the switch). * Relooper#**addBranchForSwitch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, indexes: `number[]`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block ending in a switch, to another block, using an array of indexes that determine where to go, and optional code to execute on the branch. * Relooper#**renderAndDispose**(entry: `RelooperBlockRef`, labelHelper: `number`, module: `Module`): `ExpressionRef`<br /> Renders and cleans up the Relooper instance. Call this after you have created all the blocks and branches, giving it the entry block (where control flow begins), a label helper variable (an index of a local we can use, necessary for irreducible control flow), and the module. This returns an expression - normal WebAssembly code - that you can use normally anywhere. ### Source maps * Module#**addDebugInfoFileName**(filename: `string`): `number`<br /> Adds a debug info file name to the module and returns its index. * Module#**getDebugInfoFileName**(index: `number`): `string | null` <br /> Gets the name of the debug info file at the specified index. * Module#**setDebugLocation**(func: `FunctionRef`, expr: `ExpressionRef`, fileIndex: `number`, lineNumber: `number`, columnNumber: `number`): `void`<br /> Sets the debug location of the specified `ExpressionRef` within the specified `FunctionRef`. ### Debugging * Module#**interpret**(): `void`<br /> Runs the module in the interpreter, calling the start function. # prelude.ls [![Build Status](https://travis-ci.org/gkz/prelude-ls.png?branch=master)](https://travis-ci.org/gkz/prelude-ls) is a functionally oriented utility library. It is powerful and flexible. Almost all of its functions are curried. It is written in, and is the recommended base library for, <a href="http://livescript.net">LiveScript</a>. See **[the prelude.ls site](http://preludels.com)** for examples, a reference, and more. You can install via npm `npm install prelude-ls` ### Development `make test` to test `make build` to build `lib` from `src` `make build-browser` to build browser versions ![](cow.png) Moo! ==== Moo is a highly-optimised tokenizer/lexer generator. Use it to tokenize your strings, before parsing 'em with a parser like [nearley](https://github.com/hardmath123/nearley) or whatever else you're into. * [Fast](#is-it-fast) * [Convenient](#usage) * uses [Regular Expressions](#on-regular-expressions) * tracks [Line Numbers](#line-numbers) * handles [Keywords](#keywords) * supports [States](#states) * custom [Errors](#errors) * is even [Iterable](#iteration) * has no dependencies * 4KB minified + gzipped * Moo! Is it fast? ----------- Yup! Flying-cows-and-singed-steak fast. Moo is the fastest JS tokenizer around. 
It's **~2–10x** faster than most other tokenizers; it's a **couple orders of magnitude** faster than some of the slower ones. Define your tokens **using regular expressions**. Moo will compile 'em down to a **single RegExp for performance**. It uses the new ES6 **sticky flag** where possible to make things faster; otherwise it falls back to an almost-as-efficient workaround. (For more than you ever wanted to know about this, read [adventures in the land of substrings and RegExps](http://mrale.ph/blog/2016/11/23/making-less-dart-faster.html).) You _might_ be able to go faster still by writing your lexer by hand rather than using RegExps, but that's icky. Oh, and it [avoids parsing RegExps by itself](https://hackernoon.com/the-madness-of-parsing-real-world-javascript-regexps-d9ee336df983#.2l8qu3l76). Because that would be horrible. Usage ----- First, you need to do the needful: `$ npm install moo`, or whatever will ship this code to your computer. Alternatively, grab the `moo.js` file by itself and slap it into your web page via a `<script>` tag; moo is completely standalone. Then you can start roasting your very own lexer/tokenizer: ```js const moo = require('moo') let lexer = moo.compile({ WS: /[ \t]+/, comment: /\/\/.*?$/, number: /0|[1-9][0-9]*/, string: /"(?:\\["\\]|[^\n"\\])*"/, lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], NL: { match: /\n/, lineBreaks: true }, }) ``` And now throw some text at it: ```js lexer.reset('while (10) cows\nmoo') lexer.next() // -> { type: 'keyword', value: 'while' } lexer.next() // -> { type: 'WS', value: ' ' } lexer.next() // -> { type: 'lparen', value: '(' } lexer.next() // -> { type: 'number', value: '10' } // ... ``` When you reach the end of Moo's internal buffer, next() will return `undefined`. You can always `reset()` it and feed it more data when that happens. On Regular Expressions ---------------------- RegExps are nifty for making tokenizers, but they can be a bit of a pain. Here are some things to be aware of: * You often want to use **non-greedy quantifiers**: e.g. `*?` instead of `*`. Otherwise your tokens will be longer than you expect: ```js let lexer = moo.compile({ string: /".*"/, // greedy quantifier * // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo" "bar' } ``` Better: ```js let lexer = moo.compile({ string: /".*?"/, // non-greedy quantifier *? // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo' } lexer.next() // -> { type: 'space', value: ' ' } lexer.next() // -> { type: 'string', value: 'bar' } ``` * The **order of your rules** matters. Earlier ones will take precedence. ```js moo.compile({ identifier: /[a-z0-9]+/, number: /[0-9]+/, }).reset('42').next() // -> { type: 'identifier', value: '42' } moo.compile({ number: /[0-9]+/, identifier: /[a-z0-9]+/, }).reset('42').next() // -> { type: 'number', value: '42' } ``` * Moo uses **multiline RegExps**. This has a few quirks: for example, the **dot `/./` doesn't include newlines**. Use `[^]` instead if you want to match newlines too. * Since an excluding character ranges like `/[^ ]/` (which matches anything but a space) _will_ include newlines, you have to be careful not to include them by accident! In particular, the whitespace metacharacter `\s` includes newlines. Line Numbers ------------ Moo tracks detailed information about the input for you. It will track line numbers, as long as you **apply the `lineBreaks: true` option to any rules which might contain newlines**. 
Moo will try to warn you if you forget to do this. Note that this is `false` by default, for performance reasons: counting the number of lines in a matched token has a small cost. For optimal performance, only match newlines inside a dedicated token: ```js newline: {match: '\n', lineBreaks: true}, ``` ### Token Info ### Token objects (returned from `next()`) have the following attributes: * **`type`**: the name of the group, as passed to compile. * **`text`**: the string that was matched. * **`value`**: the string that was matched, transformed by your `value` function (if any). * **`offset`**: the number of bytes from the start of the buffer where the match starts. * **`lineBreaks`**: the number of line breaks found in the match. (Always zero if this rule has `lineBreaks: false`.) * **`line`**: the line number of the beginning of the match, starting from 1. * **`col`**: the column where the match begins, starting from 1. ### Value vs. Text ### The `value` is the same as the `text`, unless you provide a [value transform](#transform). ```js const moo = require('moo') const lexer = moo.compile({ ws: /[ \t]+/, string: {match: /"(?:\\["\\]|[^\n"\\])*"/, value: s => s.slice(1, -1)}, }) lexer.reset('"test"') lexer.next() /* { value: 'test', text: '"test"', ... } */ ``` ### Reset ### Calling `reset()` on your lexer will empty its internal buffer, and set the line, column, and offset counts back to their initial value. If you don't want this, you can `save()` the state, and later pass it as the second argument to `reset()` to explicitly control the internal state of the lexer. ```js    lexer.reset('some line\n') let info = lexer.save() // -> { line: 10 } lexer.next() // -> { line: 10 } lexer.next() // -> { line: 11 } // ... lexer.reset('a different line\n', info) lexer.next() // -> { line: 10 } ``` Keywords -------- Moo makes it convenient to define literals. ```js moo.compile({ lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], }) ``` It'll automatically compile them into regular expressions, escaping them where necessary. **Keywords** should be written using the `keywords` transform. ```js moo.compile({ IDEN: {match: /[a-zA-Z]+/, type: moo.keywords({ KW: ['while', 'if', 'else', 'moo', 'cows'], })}, SPACE: {match: /\s+/, lineBreaks: true}, }) ``` ### Why? ### You need to do this to ensure the **longest match** principle applies, even in edge cases. Imagine trying to parse the input `className` with the following rules: ```js keyword: ['class'], identifier: /[a-zA-Z]+/, ``` You'll get _two_ tokens — `['class', 'Name']` -- which is _not_ what you want! If you swap the order of the rules, you'll fix this example; but now you'll lex `class` wrong (as an `identifier`). The keywords helper checks matches against the list of keywords; if any of them match, it uses the type `'keyword'` instead of `'identifier'` (for this example). ### Keyword Types ### Keywords can also have **individual types**. ```js let lexer = moo.compile({ name: {match: /[a-zA-Z]+/, type: moo.keywords({ 'kw-class': 'class', 'kw-def': 'def', 'kw-if': 'if', })}, // ... }) lexer.reset('def foo') lexer.next() // -> { type: 'kw-def', value: 'def' } lexer.next() // space lexer.next() // -> { type: 'name', value: 'foo' } ``` You can use [itt](https://github.com/nathan/itt)'s iterator adapters to make constructing keyword objects easier: ```js itt(['class', 'def', 'if']) .map(k => ['kw-' + k, k]) .toObject() ``` States ------ Moo allows you to define multiple lexer **states**. 
Each state defines its own separate set of token rules. Your lexer will start off in the first state given to `moo.states({})`. Rules can be annotated with `next`, `push`, and `pop`, to change the current state after that token is matched. A "stack" of past states is kept, which is used by `push` and `pop`. * **`next: 'bar'`** moves to the state named `bar`. (The stack is not changed.) * **`push: 'bar'`** moves to the state named `bar`, and pushes the old state onto the stack. * **`pop: 1`** removes one state from the top of the stack, and moves to that state. (Only `1` is supported.) Only rules from the current state can be matched. You need to copy your rule into all the states you want it to be matched in. For example, to tokenize JS-style string interpolation such as `a${{c: d}}e`, you might use: ```js let lexer = moo.states({ main: { strstart: {match: '`', push: 'lit'}, ident: /\w+/, lbrace: {match: '{', push: 'main'}, rbrace: {match: '}', pop: true}, colon: ':', space: {match: /\s+/, lineBreaks: true}, }, lit: { interp: {match: '${', push: 'main'}, escape: /\\./, strend: {match: '`', pop: true}, const: {match: /(?:[^$`]|\$(?!\{))+/, lineBreaks: true}, }, }) // <= `a${{c: d}}e` // => strstart const interp lbrace ident colon space ident rbrace rbrace const strend ``` The `rbrace` rule is annotated with `pop`, so it moves from the `main` state into either `lit` or `main`, depending on the stack. Errors ------ If none of your rules match, Moo will throw an Error; since it doesn't know what else to do. If you prefer, you can have moo return an error token instead of throwing an exception. The error token will contain the whole of the rest of the buffer. ```js moo.compile({ // ... myError: moo.error, }) moo.reset('invalid') moo.next() // -> { type: 'myError', value: 'invalid', text: 'invalid', offset: 0, lineBreaks: 0, line: 1, col: 1 } moo.next() // -> undefined ``` You can have a token type that both matches tokens _and_ contains error values. ```js moo.compile({ // ... myError: {match: /[\$?`]/, error: true}, }) ``` ### Formatting errors ### If you want to throw an error from your parser, you might find `formatError` helpful. Call it with the offending token: ```js throw new Error(lexer.formatError(token, "invalid syntax")) ``` It returns a string with a pretty error message. ``` Error: invalid syntax at line 2 col 15: totally valid `syntax` ^ ``` Iteration --------- Iterators: we got 'em. ```js for (let here of lexer) { // here = { type: 'number', value: '123', ... } } ``` Create an array of tokens. ```js let tokens = Array.from(lexer); ``` Use [itt](https://github.com/nathan/itt)'s iteration tools with Moo. ```js for (let [here, next] = itt(lexer).lookahead()) { // pass a number if you need more tokens // enjoy! } ``` Transform --------- Moo doesn't allow capturing groups, but you can supply a transform function, `value()`, which will be called on the value before storing it in the Token object. ```js moo.compile({ STRING: [ {match: /"""[^]*?"""/, lineBreaks: true, value: x => x.slice(3, -3)}, {match: /"(?:\\["\\rn]|[^"\\])*?"/, lineBreaks: true, value: x => x.slice(1, -1)}, {match: /'(?:\\['\\rn]|[^'\\])*?'/, lineBreaks: true, value: x => x.slice(1, -1)}, ], // ... }) ``` Contributing ------------ Do check the [FAQ](https://github.com/tjvr/moo/issues?q=label%3Aquestion). 
Before submitting an issue, [remember...](https://github.com/tjvr/moo/blob/master/.github/CONTRIBUTING.md) # lodash.truncate v4.4.2 The [lodash](https://lodash.com/) method `_.truncate` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.truncate ``` In Node.js: ```js var truncate = require('lodash.truncate'); ``` See the [documentation](https://lodash.com/docs#truncate) or [package source](https://github.com/lodash/lodash/blob/4.4.2-npm-packages/lodash.truncate) for more details. # Visitor utilities for AssemblyScript Compiler transformers ## Example ### List Fields The transformer: ```ts import { ClassDeclaration, FieldDeclaration, MethodDeclaration, } from "../../as"; import { ClassDecorator, registerDecorator } from "../decorator"; import { toString } from "../utils"; class ListMembers extends ClassDecorator { visitFieldDeclaration(node: FieldDeclaration): void { if (!node.name) console.log(toString(node) + "\n"); const name = toString(node.name); const _type = toString(node.type!); this.stdout.write(name + ": " + _type + "\n"); } visitMethodDeclaration(node: MethodDeclaration): void { const name = toString(node.name); if (name == "constructor") { return; } const sig = toString(node.signature); this.stdout.write(name + ": " + sig + "\n"); } visitClassDeclaration(node: ClassDeclaration): void { this.visit(node.members); } get name(): string { return "list"; } } export = registerDecorator(new ListMembers()); ``` assembly/foo.ts: ```ts @list class Foo { a: u8; b: bool; i: i32; } ``` And then compile with `--transform` flag: ``` asc assembly/foo.ts --transform ./dist/examples/list --noEmit ``` Which prints the following to the console: ``` a: u8 b: bool i: i32 ``` <p align="center"> <a href="https://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # glob-parent [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Azure Pipelines Build Status][azure-pipelines-image]][azure-pipelines-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] Extract the non-magic parent path from a glob string. ## Usage ```js var globParent = require('glob-parent'); globParent('path/to/*.js'); // 'path/to' globParent('/root/path/to/*.js'); // '/root/path/to' globParent('/*.js'); // '/' globParent('*.js'); // '.' globParent('**/*.js'); // '.' globParent('path/{to,from}'); // 'path' globParent('path/!(to|from)'); // 'path' globParent('path/?(to|from)'); // 'path' globParent('path/+(to|from)'); // 'path' globParent('path/*(to|from)'); // 'path' globParent('path/@(to|from)'); // 'path' globParent('path/**/*'); // 'path' // if provided a non-glob path, returns the nearest dir globParent('path/foo/bar.js'); // 'path/foo' globParent('path/foo/'); // 'path/foo' globParent('path/foo'); // 'path' (see issue #3 for details) ``` ## API ### `globParent(maybeGlobString, [options])` Takes a string and returns the part of the path before the glob begins. Be aware of Escaping rules and Limitations below. 
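For example (a small sketch; the optional second argument and its `flipBackslashes` option are described below):

```js
var globParent = require('glob-parent');

globParent('path/to/*.js');                              // 'path/to'

// passing the optional options object
globParent('path\\to\\*.js', { flipBackslashes: true }); // 'path/to' (simple Windows-style input; see Limitations)
```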
#### options ```js { // Disables the automatic conversion of slashes for Windows flipBackslashes: true } ``` ## Escaping The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters: - `?` (question mark) unless used as a path segment alone - `*` (asterisk) - `|` (pipe) - `(` (opening parenthesis) - `)` (closing parenthesis) - `{` (opening curly brace) - `}` (closing curly brace) - `[` (opening bracket) - `]` (closing bracket) **Example** ```js globParent('foo/[bar]/') // 'foo' globParent('foo/\\[bar]/') // 'foo/[bar]' ``` ## Limitations ### Braces & Brackets This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with [expand-braces] and/or [expand-brackets]. ### Windows Backslashes are not valid path separators for globs. If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators). This cannot be used in conjunction with escape characters. ```js // BAD globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)' // GOOD globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)' ``` If you are using escape characters for a pattern without path parts (i.e. relative to `cwd`), prefix with `./` to avoid confusing glob-parent. ```js // BAD globParent('foo \\[bar]') // 'foo ' globParent('foo \\[bar]*') // 'foo ' // GOOD globParent('./foo \\[bar]') // 'foo [bar]' globParent('./foo \\[bar]*') // '.' ``` ## License ISC [expand-braces]: https://github.com/jonschlinkert/expand-braces [expand-brackets]: https://github.com/jonschlinkert/expand-brackets [downloads-image]: https://img.shields.io/npm/dm/glob-parent.svg [npm-url]: https://www.npmjs.com/package/glob-parent [npm-image]: https://img.shields.io/npm/v/glob-parent.svg [azure-pipelines-url]: https://dev.azure.com/gulpjs/gulp/_build/latest?definitionId=2&branchName=master [azure-pipelines-image]: https://dev.azure.com/gulpjs/gulp/_apis/build/status/glob-parent?branchName=master [travis-url]: https://travis-ci.org/gulpjs/glob-parent [travis-image]: https://img.shields.io/travis/gulpjs/glob-parent.svg?label=travis-ci [appveyor-url]: https://ci.appveyor.com/project/gulpjs/glob-parent [appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/glob-parent.svg?label=appveyor [coveralls-url]: https://coveralls.io/r/gulpjs/glob-parent [coveralls-image]: https://img.shields.io/coveralls/gulpjs/glob-parent/master.svg [gitter-url]: https://gitter.im/gulpjs/gulp [gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg A JSON with color names and its values. Based on http://dev.w3.org/csswg/css-color/#named-colors. 
[![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/) ```js var colors = require('color-name'); colors.red //[255,0,0] ``` <a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a> # yargs-parser [![Build Status](https://travis-ci.org/yargs/yargs-parser.svg)](https://travis-ci.org/yargs/yargs-parser) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. <img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js var argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```sh node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js var argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```sh { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js var parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## API ### require('yargs-parser')(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). 
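For example, a small sketch combining a few of these hints (the output shown is illustrative; camel-case expansion and aliasing may add further keys depending on configuration):

```js
var parse = require('yargs-parser')

var argv = parse('serve --port 8080 -v -v', {
  alias: { port: ['p'] },        // `p` mirrors `port`
  count: ['v'],                  // `-v -v` is counted instead of parsed as values
  default: { host: 'localhost' } // applied when `--host` is not given
})
// argv -> { _: [ 'serve' ], port: 8080, p: 8080, v: 2, host: 'localhost' }
```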
**returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed; inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```sh node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```sh node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```sh node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```sh node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```sh node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```sh node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```sh node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```sh node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```sh node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```sh node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file? 
### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```sh node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```sh node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```sh node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```sh node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```sh node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```sh node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```sh node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```sh node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```sh node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```sh node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. 
_If disabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC [![NPM registry](https://img.shields.io/npm/v/as-bignum.svg?style=for-the-badge)](https://www.npmjs.com/package/as-bignum)[![Build Status](https://img.shields.io/travis/com/MaxGraey/as-bignum/master?style=for-the-badge)](https://travis-ci.com/MaxGraey/as-bignum)[![NPM license](https://img.shields.io/badge/license-Apache%202.0-ba68c8.svg?style=for-the-badge)](LICENSE.md) ## WebAssembly fixed-length big numbers written in [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) ### Status: Work in progress Provides wide numeric types such as `u128`, `u256`, `i128`, and `i256`, as well as fixed-point types, together with their arithmetic operations. The `safe` namespace contains equivalents with overflow/underflow traps. All of these types are useful for economic and cryptographic purposes and provide deterministic behavior. ### Install > yarn add as-bignum or > npm i as-bignum ### Usage via AssemblyScript ```ts import { u128 } from "as-bignum/assembly"; // Before 0.20.x // import { u128 } from "as-bignum"; declare function logF64(value: f64): void; declare function logU128(hi: u64, lo: u64): void; var a = u128.One; var b = u128.from(-32); // same as u128.from<i32>(-32) var c = new u128(0x1, -0xF); var d = u128.from(0x0123456789ABCDEF); // same as u128.from<i64>(0x0123456789ABCDEF) var e = u128.from('0x0123456789ABCDEF01234567'); var f = u128.fromString('11100010101100101', 2); // same as u128.from('0b11100010101100101') var r = d / c + (b << 5) + e; logF64(r.as<f64>()); logU128(r.hi, r.lo); ``` ### Usage via JavaScript/Typescript ```ts TODO ``` ### List of types - [x] [`u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u128.ts) unsigned type (tested) - [ ] [`u256`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u256.ts) unsigned type (very basic) - [ ] `i128` signed type - [ ] `i256` signed type --- - [x] [`safe.u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/safe/u128.ts) unsigned type (tested) - [ ] `safe.u256` unsigned type - [ ] `safe.i128` signed type - [ ] `safe.i256` signed type --- - [ ] [`fp128<Q>`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/fixed/fp128.ts) generic fixed point signed type٭ (very basic for now) - [ ] `fp256<Q>` generic fixed point signed type٭ --- - [ ] `safe.fp128<Q>` generic fixed point signed type٭ - [ ] `safe.fp256<Q>` generic fixed point signed type٭ ٭ _typename_ `Q` _is a type representing the count of fractional bits_ <a name="table"></a> # Table > Produces a string that represents array data in a text table. 
[![Github action status](https://github.com/gajus/table/actions/workflows/main.yml/badge.svg)](https://github.com/gajus/table/actions) [![Coveralls](https://img.shields.io/coveralls/gajus/table.svg?style=flat-square)](https://coveralls.io/github/gajus/table) [![NPM version](http://img.shields.io/npm/v/table.svg?style=flat-square)](https://www.npmjs.org/package/table) [![Canonical Code Style](https://img.shields.io/badge/code%20style-canonical-blue.svg?style=flat-square)](https://github.com/gajus/canonical) [![Twitter Follow](https://img.shields.io/twitter/follow/kuizinas.svg?style=social&label=Follow)](https://twitter.com/kuizinas) * [Table](#table) * [Features](#table-features) * [Install](#table-install) * [Usage](#table-usage) * [API](#table-api) * [table](#table-api-table-1) * [createStream](#table-api-createstream) * [getBorderCharacters](#table-api-getbordercharacters) ![Demo of table displaying a list of missions to the Moon.](./.README/demo.png) <a name="table-features"></a> ## Features * Works with strings containing [fullwidth](https://en.wikipedia.org/wiki/Halfwidth_and_fullwidth_forms) characters. * Works with strings containing [ANSI escape codes](https://en.wikipedia.org/wiki/ANSI_escape_code). * Configurable border characters. * Configurable content alignment per column. * Configurable content padding per column. * Configurable column width. * Text wrapping. <a name="table-install"></a> ## Install ```bash npm install table ``` [![Buy Me A Coffee](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/gajus) [![Become a Patron](https://c5.patreon.com/external/logo/become_a_patron_button.png)](https://www.patreon.com/gajus) <a name="table-usage"></a> ## Usage ```js import { table } from 'table'; // Using commonjs? // const { table } = require('table'); const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; console.log(table(data)); ``` ``` ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════╧════╝ ``` <a name="table-api"></a> ## API <a name="table-api-table-1"></a> ### table Returns the string in the table format **Parameters:** - **_data_:** The data to display - Type: `any[][]` - Required: `true` - **_config_:** Table configuration - Type: `object` - Required: `false` <a name="table-api-table-1-config-border"></a> ##### config.border Type: `{ [type: string]: string }`\ Default: `honeywell` [template](#getbordercharacters) Custom borders. The keys are any of: - `topLeft`, `topRight`, `topBody`,`topJoin` - `bottomLeft`, `bottomRight`, `bottomBody`, `bottomJoin` - `joinLeft`, `joinRight`, `joinBody`, `joinJoin` - `bodyLeft`, `bodyRight`, `bodyJoin` - `headerJoin` ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { border: { topBody: `─`, topJoin: `┬`, topLeft: `┌`, topRight: `┐`, bottomBody: `─`, bottomJoin: `┴`, bottomLeft: `└`, bottomRight: `┘`, bodyLeft: `│`, bodyRight: `│`, bodyJoin: `│`, joinBody: `─`, joinLeft: `├`, joinRight: `┤`, joinJoin: `┼` } }; console.log(table(data, config)); ``` ``` ┌────┬────┬────┐ │ 0A │ 0B │ 0C │ ├────┼────┼────┤ │ 1A │ 1B │ 1C │ ├────┼────┼────┤ │ 2A │ 2B │ 2C │ └────┴────┴────┘ ``` <a name="table-api-table-1-config-drawverticalline"></a> ##### config.drawVerticalLine Type: `(lineIndex: number, columnCount: number) => boolean`\ Default: `() => true` It is used to tell whether to draw a vertical line. This callback is called for each vertical border of the table. 
If the table has `n` columns, then the `index` parameter is alternatively received all numbers in range `[0, n]` inclusively. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ['3A', '3B', '3C'], ['4A', '4B', '4C'] ]; const config = { drawVerticalLine: (lineIndex, columnCount) => { return lineIndex === 0 || lineIndex === columnCount; } }; console.log(table(data, config)); ``` ``` ╔════════════╗ ║ 0A 0B 0C ║ ╟────────────╢ ║ 1A 1B 1C ║ ╟────────────╢ ║ 2A 2B 2C ║ ╟────────────╢ ║ 3A 3B 3C ║ ╟────────────╢ ║ 4A 4B 4C ║ ╚════════════╝ ``` <a name="table-api-table-1-config-drawhorizontalline"></a> ##### config.drawHorizontalLine Type: `(lineIndex: number, rowCount: number) => boolean`\ Default: `() => true` It is used to tell whether to draw a horizontal line. This callback is called for each horizontal border of the table. If the table has `n` rows, then the `index` parameter is alternatively received all numbers in range `[0, n]` inclusively. If the table has `n` rows and contains the header, then the range will be `[0, n+1]` inclusively. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ['3A', '3B', '3C'], ['4A', '4B', '4C'] ]; const config = { drawHorizontalLine: (lineIndex, rowCount) => { return lineIndex === 0 || lineIndex === 1 || lineIndex === rowCount - 1 || lineIndex === rowCount; } }; console.log(table(data, config)); ``` ``` ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ║ 2A │ 2B │ 2C ║ ║ 3A │ 3B │ 3C ║ ╟────┼────┼────╢ ║ 4A │ 4B │ 4C ║ ╚════╧════╧════╝ ``` <a name="table-api-table-1-config-singleline"></a> ##### config.singleLine Type: `boolean`\ Default: `false` If `true`, horizontal lines inside the table are not drawn. This option also overrides the `config.drawHorizontalLine` if specified. ```js const data = [ ['-rw-r--r--', '1', 'pandorym', 'staff', '1529', 'May 23 11:25', 'LICENSE'], ['-rw-r--r--', '1', 'pandorym', 'staff', '16327', 'May 23 11:58', 'README.md'], ['drwxr-xr-x', '76', 'pandorym', 'staff', '2432', 'May 23 12:02', 'dist'], ['drwxr-xr-x', '634', 'pandorym', 'staff', '20288', 'May 23 11:54', 'node_modules'], ['-rw-r--r--', '1,', 'pandorym', 'staff', '525688', 'May 23 11:52', 'package-lock.json'], ['-rw-r--r--@', '1', 'pandorym', 'staff', '2440', 'May 23 11:25', 'package.json'], ['drwxr-xr-x', '27', 'pandorym', 'staff', '864', 'May 23 11:25', 'src'], ['drwxr-xr-x', '20', 'pandorym', 'staff', '640', 'May 23 11:25', 'test'], ]; const config = { singleLine: true }; console.log(table(data, config)); ``` ``` ╔═════════════╤═════╤══════════╤═══════╤════════╤══════════════╤═══════════════════╗ ║ -rw-r--r-- │ 1 │ pandorym │ staff │ 1529 │ May 23 11:25 │ LICENSE ║ ║ -rw-r--r-- │ 1 │ pandorym │ staff │ 16327 │ May 23 11:58 │ README.md ║ ║ drwxr-xr-x │ 76 │ pandorym │ staff │ 2432 │ May 23 12:02 │ dist ║ ║ drwxr-xr-x │ 634 │ pandorym │ staff │ 20288 │ May 23 11:54 │ node_modules ║ ║ -rw-r--r-- │ 1, │ pandorym │ staff │ 525688 │ May 23 11:52 │ package-lock.json ║ ║ -rw-r--r--@ │ 1 │ pandorym │ staff │ 2440 │ May 23 11:25 │ package.json ║ ║ drwxr-xr-x │ 27 │ pandorym │ staff │ 864 │ May 23 11:25 │ src ║ ║ drwxr-xr-x │ 20 │ pandorym │ staff │ 640 │ May 23 11:25 │ test ║ ╚═════════════╧═════╧══════════╧═══════╧════════╧══════════════╧═══════════════════╝ ``` <a name="table-api-table-1-config-columns"></a> ##### config.columns Type: `Column[] | { [columnIndex: number]: Column }` Column specific configurations. 
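Either form works; as a quick sketch (not from the upstream docs, and assuming `table` has been imported as in the Usage example above), the array form configures every column in order, while the object form configures only the columns you name by index, using the `width` and `alignment` options described below:

```js
const data = [
  ['0A', '0B', '0C'],
  ['1A', '1B', '1C'],
  ['2A', '2B', '2C']
];

// Array form: one entry per column, in order.
const arrayForm = {
  columns: [
    { alignment: 'left' },
    { alignment: 'center', width: 10 },
    { alignment: 'right' }
  ]
};

// Object form: keys are column indexes; columns 0 and 2 keep their defaults.
const objectForm = {
  columns: {
    1: { alignment: 'center', width: 10 }
  }
};

console.log(table(data, arrayForm));
console.log(table(data, objectForm));
```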
<a name="table-api-table-1-config-columns-config-columns-width"></a> ###### config.columns[*].width Type: `number`\ Default: the maximum cell widths of the column Column width (excluding the paddings). ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { columns: { 1: { width: 10 } } }; console.log(table(data, config)); ``` ``` ╔════╤════════════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────────────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────────────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════════════╧════╝ ``` <a name="table-api-table-1-config-columns-config-columns-alignment"></a> ###### config.columns[*].alignment Type: `'center' | 'justify' | 'left' | 'right'`\ Default: `'left'` Cell content horizontal alignment ```js const data = [ ['0A', '0B', '0C', '0D 0E 0F'], ['1A', '1B', '1C', '1D 1E 1F'], ['2A', '2B', '2C', '2D 2E 2F'], ]; const config = { columnDefault: { width: 10, }, columns: [ { alignment: 'left' }, { alignment: 'center' }, { alignment: 'right' }, { alignment: 'justify' } ], }; console.log(table(data, config)); ``` ``` ╔════════════╤════════════╤════════════╤════════════╗ ║ 0A │ 0B │ 0C │ 0D 0E 0F ║ ╟────────────┼────────────┼────────────┼────────────╢ ║ 1A │ 1B │ 1C │ 1D 1E 1F ║ ╟────────────┼────────────┼────────────┼────────────╢ ║ 2A │ 2B │ 2C │ 2D 2E 2F ║ ╚════════════╧════════════╧════════════╧════════════╝ ``` <a name="table-api-table-1-config-columns-config-columns-verticalalignment"></a> ###### config.columns[*].verticalAlignment Type: `'top' | 'middle' | 'bottom'`\ Default: `'top'` Cell content vertical alignment ```js const data = [ ['A', 'B', 'C', 'DEF'], ]; const config = { columnDefault: { width: 1, }, columns: [ { verticalAlignment: 'top' }, { verticalAlignment: 'middle' }, { verticalAlignment: 'bottom' }, ], }; console.log(table(data, config)); ``` ``` ╔═══╤═══╤═══╤═══╗ ║ A │ │ │ D ║ ║ │ B │ │ E ║ ║ │ │ C │ F ║ ╚═══╧═══╧═══╧═══╝ ``` <a name="table-api-table-1-config-columns-config-columns-paddingleft"></a> ###### config.columns[*].paddingLeft Type: `number`\ Default: `1` The number of whitespaces used to pad the content on the left. <a name="table-api-table-1-config-columns-config-columns-paddingright"></a> ###### config.columns[*].paddingRight Type: `number`\ Default: `1` The number of whitespaces used to pad the content on the right. The `paddingLeft` and `paddingRight` options do not count on the column width. So the column has `width = 5`, `paddingLeft = 2` and `paddingRight = 2` will have the total width is `9`. ```js const data = [ ['0A', 'AABBCC', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { columns: [ { paddingLeft: 3 }, { width: 2, paddingRight: 3 } ] }; console.log(table(data, config)); ``` ``` ╔══════╤══════╤════╗ ║ 0A │ AA │ 0C ║ ║ │ BB │ ║ ║ │ CC │ ║ ╟──────┼──────┼────╢ ║ 1A │ 1B │ 1C ║ ╟──────┼──────┼────╢ ║ 2A │ 2B │ 2C ║ ╚══════╧══════╧════╝ ``` <a name="table-api-table-1-config-columns-config-columns-truncate"></a> ###### config.columns[*].truncate Type: `number`\ Default: `Infinity` The number of characters is which the content will be truncated. To handle a content that overflows the container width, `table` package implements [text wrapping](#config.columns[*].wrapWord). However, sometimes you may want to truncate content that is too long to be displayed in the table. ```js const data = [ ['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. 
Nunc venenatis tempus nulla sit amet viverra.'] ]; const config = { columns: [ { width: 20, truncate: 100 } ] }; console.log(table(data, config)); ``` ``` ╔══════════════════════╗ ║ Lorem ipsum dolor si ║ ║ t amet, consectetur ║ ║ adipiscing elit. Pha ║ ║ sellus pulvinar nibh ║ ║ sed mauris convall… ║ ╚══════════════════════╝ ``` <a name="table-api-table-1-config-columns-config-columns-wrapword"></a> ###### config.columns[*].wrapWord Type: `boolean`\ Default: `false` The `table` package implements auto text wrapping, i.e., text that has the width greater than the container width will be separated into multiple lines at the nearest space or one of the special characters: `\|/_.,;-`. When `wrapWord` is `false`: ```js const data = [ ['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.'] ]; const config = { columns: [ { width: 20 } ] }; console.log(table(data, config)); ``` ``` ╔══════════════════════╗ ║ Lorem ipsum dolor si ║ ║ t amet, consectetur ║ ║ adipiscing elit. Pha ║ ║ sellus pulvinar nibh ║ ║ sed mauris convallis ║ ║ dapibus. Nunc venena ║ ║ tis tempus nulla sit ║ ║ amet viverra. ║ ╚══════════════════════╝ ``` When `wrapWord` is `true`: ``` ╔══════════════════════╗ ║ Lorem ipsum dolor ║ ║ sit amet, ║ ║ consectetur ║ ║ adipiscing elit. ║ ║ Phasellus pulvinar ║ ║ nibh sed mauris ║ ║ convallis dapibus. ║ ║ Nunc venenatis ║ ║ tempus nulla sit ║ ║ amet viverra. ║ ╚══════════════════════╝ ``` <a name="table-api-table-1-config-columndefault"></a> ##### config.columnDefault Type: `Column`\ Default: `{}` The default configuration for all columns. Column-specific settings will overwrite the default values. <a name="table-api-table-1-config-header"></a> ##### config.header Type: `object` Header configuration. *Deprecated in favor of the new spanning cells API.* The header configuration inherits the most of the column's, except: - `content` **{string}**: the header content. - `width:` calculate based on the content width automatically. - `alignment:` `center` be default. - `verticalAlignment:` is not supported. - `config.border.topJoin` will be `config.border.topBody` for prettier. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ]; const config = { columnDefault: { width: 10, }, header: { alignment: 'center', content: 'THE HEADER\nThis is the table about something', }, } console.log(table(data, config)); ``` ``` ╔══════════════════════════════════════╗ ║ THE HEADER ║ ║ This is the table about something ║ ╟────────────┬────────────┬────────────╢ ║ 0A │ 0B │ 0C ║ ╟────────────┼────────────┼────────────╢ ║ 1A │ 1B │ 1C ║ ╟────────────┼────────────┼────────────╢ ║ 2A │ 2B │ 2C ║ ╚════════════╧════════════╧════════════╝ ``` <a name="table-api-table-1-config-spanningcells"></a> ##### config.spanningCells Type: `SpanningCellConfig[]` Spanning cells configuration. The configuration should be straightforward: just specify an array of minimal cell configurations including the position of top-left cell and the number of columns and/or rows will be expanded from it. The content of overlap cells will be ignored to make the `data` shape be consistent. By default, the configuration of column that the top-left cell belongs to will be applied to the whole spanning cell, except: * The `width` will be summed up of all spanning columns. * The `paddingRight` will be received from the right-most column intentionally. 
Advances customized column-like styles can be configurable to each spanning cell to overwrite the default behavior. ```js const data = [ ['Test Coverage Report', '', '', '', '', ''], ['Module', 'Component', 'Test Cases', 'Failures', 'Durations', 'Success Rate'], ['Services', 'User', '50', '30', '3m 7s', '60.0%'], ['', 'Payment', '100', '80', '7m 15s', '80.0%'], ['Subtotal', '', '150', '110', '10m 22s', '73.3%'], ['Controllers', 'User', '24', '18', '1m 30s', '75.0%'], ['', 'Payment', '30', '24', '50s', '80.0%'], ['Subtotal', '', '54', '42', '2m 20s', '77.8%'], ['Total', '', '204', '152', '12m 42s', '74.5%'], ]; const config = { columns: [ { alignment: 'center', width: 12 }, { alignment: 'center', width: 10 }, { alignment: 'right' }, { alignment: 'right' }, { alignment: 'right' }, { alignment: 'right' } ], spanningCells: [ { col: 0, row: 0, colSpan: 6 }, { col: 0, row: 2, rowSpan: 2, verticalAlignment: 'middle'}, { col: 0, row: 4, colSpan: 2, alignment: 'right'}, { col: 0, row: 5, rowSpan: 2, verticalAlignment: 'middle'}, { col: 0, row: 7, colSpan: 2, alignment: 'right' }, { col: 0, row: 8, colSpan: 2, alignment: 'right' } ], }; console.log(table(data, config)); ``` ``` ╔══════════════════════════════════════════════════════════════════════════════╗ ║ Test Coverage Report ║ ╟──────────────┬────────────┬────────────┬──────────┬───────────┬──────────────╢ ║ Module │ Component │ Test Cases │ Failures │ Durations │ Success Rate ║ ╟──────────────┼────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ │ User │ 50 │ 30 │ 3m 7s │ 60.0% ║ ║ Services ├────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ │ Payment │ 100 │ 80 │ 7m 15s │ 80.0% ║ ╟──────────────┴────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ Subtotal │ 150 │ 110 │ 10m 22s │ 73.3% ║ ╟──────────────┬────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ │ User │ 24 │ 18 │ 1m 30s │ 75.0% ║ ║ Controllers ├────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ │ Payment │ 30 │ 24 │ 50s │ 80.0% ║ ╟──────────────┴────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ Subtotal │ 54 │ 42 │ 2m 20s │ 77.8% ║ ╟───────────────────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ Total │ 204 │ 152 │ 12m 42s │ 74.5% ║ ╚═══════════════════════════╧════════════╧══════════╧═══════════╧══════════════╝ ``` <a name="table-api-createstream"></a> ### createStream `table` package exports `createStream` function used to draw a table and append rows. **Parameter:** - _**config:**_ the same as `table`'s, except `config.columnDefault.width` and `config.columnCount` must be provided. ```js import { createStream } from 'table'; const config = { columnDefault: { width: 50 }, columnCount: 1 }; const stream = createStream(config); setInterval(() => { stream.write([new Date()]); }, 500); ``` ![Streaming current date.](./.README/api/stream/streaming.gif) `table` package uses ANSI escape codes to overwrite the output of the last line when a new row is printed. The underlying implementation is explained in this [Stack Overflow answer](http://stackoverflow.com/a/32938658/368691). Streaming supports all of the configuration properties and functionality of a static table (such as auto text wrapping, alignment and padding), e.g. 
```js import { createStream } from 'table'; import _ from 'lodash'; const config = { columnDefault: { width: 50 }, columnCount: 3, columns: [ { width: 10, alignment: 'right' }, { alignment: 'center' }, { width: 10 } ] }; const stream = createStream(config); let i = 0; setInterval(() => { let random; random = _.sample('abcdefghijklmnopqrstuvwxyz', _.random(1, 30)).join(''); stream.write([i++, new Date(), random]); }, 500); ``` ![Streaming random data.](./.README/api/stream/streaming-random.gif) <a name="table-api-getbordercharacters"></a> ### getBorderCharacters **Parameter:** - **_template_** - Type: `'honeywell' | 'norc' | 'ramac' | 'void'` - Required: `true` You can load one of the predefined border templates using `getBorderCharacters` function. ```js import { table, getBorderCharacters } from 'table'; const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { border: getBorderCharacters(`name of the template`) }; console.log(table(data, config)); ``` ``` # honeywell ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════╧════╝ # norc ┌────┬────┬────┐ │ 0A │ 0B │ 0C │ ├────┼────┼────┤ │ 1A │ 1B │ 1C │ ├────┼────┼────┤ │ 2A │ 2B │ 2C │ └────┴────┴────┘ # ramac (ASCII; for use in terminals that do not support Unicode characters) +----+----+----+ | 0A | 0B | 0C | |----|----|----| | 1A | 1B | 1C | |----|----|----| | 2A | 2B | 2C | +----+----+----+ # void (no borders; see "borderless table" section of the documentation) 0A 0B 0C 1A 1B 1C 2A 2B 2C ``` Raise [an issue](https://github.com/gajus/table/issues) if you'd like to contribute a new border template. <a name="table-api-getbordercharacters-borderless-table"></a> #### Borderless Table Simply using `void` border character template creates a table with a lot of unnecessary spacing. To create a more pleasant to the eye table, reset the padding and remove the joining rows, e.g. ```js const output = table(data, { border: getBorderCharacters('void'), columnDefault: { paddingLeft: 0, paddingRight: 1 }, drawHorizontalLine: () => false } ); console.log(output); ``` ``` 0A 0B 0C 1A 1B 1C 2A 2B 2C ``` # yargs-parser ![ci](https://github.com/yargs/yargs-parser/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/yargs-parser) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. 
<img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/main/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js const argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```console $ node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js const argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```console { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js const parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## Deno Example As of `v19` `yargs-parser` supports [Deno](https://github.com/denoland/deno): ```typescript import parser from "https://deno.land/x/yargs_parser/deno.ts"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` ## ESM Example As of `v19` `yargs-parser` supports ESM (_both in Node.js and in the browser_): **Node.js:** ```js import parser from 'yargs-parser' const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` **Browsers:** ```html <!doctype html> <body> <script type="module"> import parser from "https://unpkg.com/yargs-parser@19.0.0/browser.js"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) </script> </body> ``` ## API ### parser(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. 
* `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```console $ node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```console $ node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```console $ node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```console $ node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```console $ node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```console $ node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```console $ node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```console $ node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### parse positional numbers * default: `true` * key: `parse-positional-numbers` Should positional keys that look like numbers be treated as such. ```console $ node example.js 99.3 { _: [99.3] } ``` _if disabled:_ ```console $ node example.js 99.3 { _: ['99.3'] } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```console $ node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```console $ node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. 
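The upstream docs give no example for this flag; as a hedged sketch, using the documented `configObjects` option in place of an on-disk configuration file and an illustrative `arr` key:

```js
const parse = require('yargs-parser')

const argv = parse('--arr 3 --arr 4', {
  array: ['arr'],
  configObjects: [{ arr: [1, 2] }],
  configuration: { 'combine-arrays': true }
})

// With the flag enabled, argv.arr should combine the values from the
// config object and the command line instead of one source replacing the other.
console.log(argv.arr)
```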
### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```console $ node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```console $ node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```console $ node example --arr 1 2 { _: [], arr: [1, 2] } ``` _if disabled:_ ```console $ node example --arr 1 2 { _: [2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```console $ node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```console $ node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```console $ node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```console $ node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```console $ node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```console $ node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. 
_If disabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC # fast-levenshtein - Levenshtein algorithm in Javascript [![Build Status](https://secure.travis-ci.org/hiddentao/fast-levenshtein.png)](http://travis-ci.org/hiddentao/fast-levenshtein) [![NPM module](https://badge.fury.io/js/fast-levenshtein.png)](https://badge.fury.io/js/fast-levenshtein) [![NPM downloads](https://img.shields.io/npm/dm/fast-levenshtein.svg?maxAge=2592000)](https://www.npmjs.com/package/fast-levenshtein) [![Follow on Twitter](https://img.shields.io/twitter/url/http/shields.io.svg?style=social&label=Follow&maxAge=2592000)](https://twitter.com/hiddentao) An efficient Javascript implementation of the [Levenshtein algorithm](http://en.wikipedia.org/wiki/Levenshtein_distance) with locale-specific collator support. ## Features * Works in node.js and in the browser. * Better performance than other implementations by not needing to store the whole matrix ([more info](http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm)). * Locale-sensitive string comparisions if needed. * Comprehensive test suite and performance benchmark. * Small: <1 KB minified and gzipped ## Installation ### node.js Install using [npm](http://npmjs.org/): ```bash $ npm install fast-levenshtein ``` ### Browser Using bower: ```bash $ bower install fast-levenshtein ``` If you are not using any module loader system then the API will then be accessible via the `window.Levenshtein` object. ## Examples **Default usage** ```javascript var levenshtein = require('fast-levenshtein'); var distance = levenshtein.get('back', 'book'); // 2 var distance = levenshtein.get('我愛你', '我叫你'); // 1 ``` **Locale-sensitive string comparisons** It supports using [Intl.Collator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Collator) for locale-sensitive string comparisons: ```javascript var levenshtein = require('fast-levenshtein'); levenshtein.get('mikailovitch', 'Mikhaïlovitch', { useCollator: true}); // 1 ``` ## Building and Testing To build the code and run the tests: ```bash $ npm install -g grunt-cli $ npm install $ npm run build ``` ## Performance _Thanks to [Titus Wormer](https://github.com/wooorm) for [encouraging me](https://github.com/hiddentao/fast-levenshtein/issues/1) to do this._ Benchmarked against other node.js levenshtein distance modules (on Macbook Air 2012, Core i7, 8GB RAM): ```bash Running suite Implementation comparison [benchmark/speed.js]... 
>> levenshtein-edit-distance x 234 ops/sec ±3.02% (73 runs sampled) >> levenshtein-component x 422 ops/sec ±4.38% (83 runs sampled) >> levenshtein-deltas x 283 ops/sec ±3.83% (78 runs sampled) >> natural x 255 ops/sec ±0.76% (88 runs sampled) >> levenshtein x 180 ops/sec ±3.55% (86 runs sampled) >> fast-levenshtein x 1,792 ops/sec ±2.72% (95 runs sampled) Benchmark done. Fastest test is fast-levenshtein at 4.2x faster than levenshtein-component ``` You can run this benchmark yourself by doing: ```bash $ npm install $ npm run build $ npm run benchmark ``` ## Contributing If you wish to submit a pull request please update and/or create new tests for any changes you make and ensure the grunt build passes. See [CONTRIBUTING.md](https://github.com/hiddentao/fast-levenshtein/blob/master/CONTRIBUTING.md) for details. ## License MIT - see [LICENSE.md](https://github.com/hiddentao/fast-levenshtein/blob/master/LICENSE.md) # hasurl [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] > Determine whether Node.js' native [WHATWG `URL`](https://nodejs.org/api/url.html#url_the_whatwg_url_api) implementation is available. ## Installation [Node.js](http://nodejs.org/) `>= 4` is required. To install, type this at the command line: ```shell npm install hasurl ``` ## Usage ```js const hasURL = require('hasurl'); if (hasURL()) { // supported } else { // fallback } ``` [npm-image]: https://img.shields.io/npm/v/hasurl.svg [npm-url]: https://npmjs.org/package/hasurl [travis-image]: https://img.shields.io/travis/stevenvachon/hasurl.svg [travis-url]: https://travis-ci.org/stevenvachon/hasurl # y18n [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js const __ = require('y18n')().__; console.log(__('my awesome string %s', 'foo')); ``` output: `my awesome string foo` _using tagged template literals_ ```js const __ = require('y18n')().__; const str = 'foo'; console.log(__`my awesome string ${str}`); ``` output: `my awesome string foo` _pluralization support:_ ```js const __n = require('y18n')().__n; console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')); ``` output: `2 fishes foo` ## Deno Example As of `v5` `y18n` supports [Deno](https://github.com/denoland/deno): ```typescript import y18n from "https://deno.land/x/y18n/deno.ts"; const __ = y18n({ locale: 'pirate', directory: './test/locales' }).__ console.info(__`Hi, ${'Ben'} ${'Coe'}!`) ``` You will need to run with `--allow-read` to load alternative locales. ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. 
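For example, a minimal sketch of creating a configured instance (the directory path and locale are illustrative values):

```js
const y18n = require('y18n')({
  directory: './locales',   // where en.json, pirate.json, etc. live
  locale: 'pirate',         // locale used for lookups
  updateFiles: false        // don't write newly observed strings back to disk
});

console.log(y18n.__('my awesome string %s', 'foo'));
```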
### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## License ISC [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard ### Esrecurse [![Build Status](https://travis-ci.org/estools/esrecurse.svg?branch=master)](https://travis-ci.org/estools/esrecurse) Esrecurse ([esrecurse](https://github.com/estools/esrecurse)) is [ECMAScript](https://www.ecma-international.org/publications/standards/Ecma-262.htm) recursive traversing functionality. ### Example Usage The following code will output all variables declared at the root of a file. ```javascript esrecurse.visit(ast, { XXXStatement: function (node) { this.visit(node.left); // do something... this.visit(node.right); } }); ``` We can use `Visitor` instance. ```javascript var visitor = new esrecurse.Visitor({ XXXStatement: function (node) { this.visit(node.left); // do something... this.visit(node.right); } }); visitor.visit(ast); ``` We can inherit `Visitor` instance easily. ```javascript class Derived extends esrecurse.Visitor { constructor() { super(null); } XXXStatement(node) { } } ``` ```javascript function DerivedVisitor() { esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */); } util.inherits(DerivedVisitor, esrecurse.Visitor); DerivedVisitor.prototype.XXXStatement = function (node) { this.visit(node.left); // do something... this.visit(node.right); }; ``` And you can invoke default visiting operation inside custom visit operation. ```javascript function DerivedVisitor() { esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */); } util.inherits(DerivedVisitor, esrecurse.Visitor); DerivedVisitor.prototype.XXXStatement = function (node) { // do something... this.visitChildren(node); }; ``` The `childVisitorKeys` option does customize the behaviour of `this.visitChildren(node)`. We can use user-defined node types. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { // Extending the existing traversing rules. 
childVisitorKeys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } } ); ``` We can use the `fallback` option as well. If the `fallback` option is `"iteration"`, `esrecurse` would visit all enumerable properties of unknown nodes. Please note circular references cause the stack overflow. AST might have circular references in additional properties for some purpose (e.g. `node.parent`). ```javascript esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { fallback: 'iteration' } ); ``` If the `fallback` option is a function, `esrecurse` calls this function to determine the enumerable properties of unknown nodes. Please note circular references cause the stack overflow. AST might have circular references in additional properties for some purpose (e.g. `node.parent`). ```javascript esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { fallback: function (node) { return Object.keys(node).filter(function(key) { return key !== 'argument' }); } } ); ``` ### License Copyright (C) 2014 [Yusuke Suzuki](https://github.com/Constellation) (twitter: [@Constellation](https://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # brace-expansion [Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html), as known from sh/bash, in JavaScript. 
[![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion) [![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion) [![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/) [![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion) ## Example ```js var expand = require('brace-expansion'); expand('file-{a,b,c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('-v{,,}') // => ['-v', '-v', '-v'] expand('file{0..2}.jpg') // => ['file0.jpg', 'file1.jpg', 'file2.jpg'] expand('file-{a..c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('file{2..0}.jpg') // => ['file2.jpg', 'file1.jpg', 'file0.jpg'] expand('file{0..4..2}.jpg') // => ['file0.jpg', 'file2.jpg', 'file4.jpg'] expand('file-{a..e..2}.jpg') // => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg'] expand('file{00..10..5}.jpg') // => ['file00.jpg', 'file05.jpg', 'file10.jpg'] expand('{{A..C},{a..c}}') // => ['A', 'B', 'C', 'a', 'b', 'c'] expand('ppp{,config,oe{,conf}}') // => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf'] ``` ## API ```js var expand = require('brace-expansion'); ``` ### var expanded = expand(str) Return an array of all possible and valid expansions of `str`. If none are found, `[str]` is returned. Valid expansions are: ```js /^(.*,)+(.+)?$/ // {a,b,...} ``` A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` A numeric sequence from `x` to `y` inclusive, with optional increment. If `x` or `y` start with a leading `0`, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` An alphabetic sequence from `x` to `y` inclusive, with optional increment. `x` and `y` must be exactly one character, and if given, `incr` must be a number. For compatibility reasons, the string `${` is not eligible for brace expansion. ## Installation With [npm](https://npmjs.org) do: ```bash npm install brace-expansion ``` ## Contributors - [Julian Gruber](https://github.com/juliangruber) - [Isaac Z. Schlueter](https://github.com/isaacs) ## Sponsors This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)! Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)! ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # URI.js URI.js is an [RFC 3986](http://www.ietf.org/rfc/rfc3986.txt) compliant, scheme extendable URI parsing/validating/resolving library for all JavaScript environments (browsers, Node.js, etc). It is also compliant with the IRI ([RFC 3987](http://www.ietf.org/rfc/rfc3987.txt)), IDNA ([RFC 5890](http://www.ietf.org/rfc/rfc5890.txt)), IPv6 Address ([RFC 5952](http://www.ietf.org/rfc/rfc5952.txt)), IPv6 Zone Identifier ([RFC 6874](http://www.ietf.org/rfc/rfc6874.txt)) specifications. URI.js has an extensive test suite, and works in all (Node.js, web) environments. It weighs in at 6.4kb (gzipped, 17kb deflated). ## API ### Parsing URI.parse("uri://user:pass@example.com:123/one/two.three?q1=a1&q2=a2#body"); //returns: //{ // scheme : "uri", // userinfo : "user:pass", // host : "example.com", // port : 123, // path : "/one/two.three", // query : "q1=a1&q2=a2", // fragment : "body" //} ### Serializing URI.serialize({scheme : "http", host : "example.com", fragment : "footer"}) === "http://example.com/#footer" ### Resolving URI.resolve("uri://a/b/c/d?q", "../../g") === "uri://a/g" ### Normalizing URI.normalize("HTTP://ABC.com:80/%7Esmith/home.html") === "http://abc.com/~smith/home.html" ### Comparison URI.equal("example://a/b/c/%7Bfoo%7D", "eXAMPLE://a/./b/../b/%63/%7bfoo%7d") === true ### IP Support //IPv4 normalization URI.normalize("//192.068.001.000") === "//192.68.1.0" //IPv6 normalization URI.normalize("//[2001:0:0DB8::0:0001]") === "//[2001:0:db8::1]" //IPv6 zone identifier support URI.parse("//[2001:db8::7%25en1]"); //returns: //{ // host : "2001:db8::7%en1" //} ### IRI Support //convert IRI to URI URI.serialize(URI.parse("http://examplé.org/rosé")) === "http://xn--exampl-gva.org/ros%C3%A9" //convert URI to IRI URI.serialize(URI.parse("http://xn--exampl-gva.org/ros%C3%A9"), {iri:true}) === "http://examplé.org/rosé" ### Options All of the above functions can accept an additional options argument that is an object that can contain one or more of the following properties: * `scheme` (string) Indicates the scheme that the URI should be treated as, overriding the URI's normal scheme parsing behavior. * `reference` (string) If set to `"suffix"`, it indicates that the URI is in the suffix format, and the validator will use the option's `scheme` property to determine the URI's scheme. * `tolerant` (boolean, false) If set to `true`, the parser will relax URI resolving rules. * `absolutePath` (boolean, false) If set to `true`, the serializer will not resolve a relative `path` component. * `iri` (boolean, false) If set to `true`, the serializer will unescape non-ASCII characters as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `unicodeSupport` (boolean, false) If set to `true`, the parser will unescape non-ASCII characters in the parsed output as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `domainHost` (boolean, false) If set to `true`, the library will treat the `host` component as a domain name, and convert IDNs (International Domain Names) as per [RFC 5891](http://www.ietf.org/rfc/rfc5891.txt). 
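As a sketch of how these options are passed (the suffix-reference call is illustrative; the IRI round-trip repeats the example above):

```js
// Parse a suffix reference (no scheme present) using a scheme hint.
URI.parse("www.example.com/path", { reference: "suffix", scheme: "http" });

// Serialize to an IRI, leaving non-ASCII characters unescaped.
URI.serialize(URI.parse("http://xn--exampl-gva.org/ros%C3%A9"), { iri: true });
// === "http://examplé.org/rosé"
```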
## Scheme Extendable URI.js supports inserting custom [scheme](http://en.wikipedia.org/wiki/URI_scheme) dependent processing rules. Currently, URI.js has built in support for the following schemes: * http \[[RFC 2616](http://www.ietf.org/rfc/rfc2616.txt)\] * https \[[RFC 2818](http://www.ietf.org/rfc/rfc2818.txt)\] * ws \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\] * wss \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\] * mailto \[[RFC 6068](http://www.ietf.org/rfc/rfc6068.txt)\] * urn \[[RFC 2141](http://www.ietf.org/rfc/rfc2141.txt)\] * urn:uuid \[[RFC 4122](http://www.ietf.org/rfc/rfc4122.txt)\] ### HTTP/HTTPS Support URI.equal("HTTP://ABC.COM:80", "http://abc.com/") === true URI.equal("https://abc.com", "HTTPS://ABC.COM:443/") === true ### WS/WSS Support URI.parse("wss://example.com/foo?bar=baz"); //returns: //{ // scheme : "wss", // host: "example.com", // resourceName: "/foo?bar=baz", // secure: true, //} URI.equal("WS://ABC.COM:80/chat#one", "ws://abc.com/chat") === true ### Mailto Support URI.parse("mailto:alpha@example.com,bravo@example.com?subject=SUBSCRIBE&body=Sign%20me%20up!"); //returns: //{ // scheme : "mailto", // to : ["alpha@example.com", "bravo@example.com"], // subject : "SUBSCRIBE", // body : "Sign me up!" //} URI.serialize({ scheme : "mailto", to : ["alpha@example.com"], subject : "REMOVE", body : "Please remove me", headers : { cc : "charlie@example.com" } }) === "mailto:alpha@example.com?cc=charlie@example.com&subject=REMOVE&body=Please%20remove%20me" ### URN Support URI.parse("urn:example:foo"); //returns: //{ // scheme : "urn", // nid : "example", // nss : "foo", //} #### URN UUID Support URI.parse("urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6"); //returns: //{ // scheme : "urn", // nid : "uuid", // uuid : "f81d4fae-7dec-11d0-a765-00a0c91e6bf6", //} ## Usage To load in a browser, use the following tag: <script type="text/javascript" src="uri-js/dist/es5/uri.all.min.js"></script> To load in a CommonJS/Module environment, first install with npm/yarn by running on the command line: npm install uri-js # OR yarn add uri-js Then, in your code, load it using: const URI = require("uri-js"); If you are writing your code in ES6+ (ESNEXT) or TypeScript, you would load it using: import * as URI from "uri-js"; Or you can load just what you need using named exports: import { parse, serialize, resolve, resolveComponents, normalize, equal, removeDotSegments, pctEncChar, pctDecChars, escapeComponent, unescapeComponent } from "uri-js"; ## Breaking changes ### Breaking changes from 3.x URN parsing has been completely changed to better align with the specification. Scheme is now always `urn`, but has two new properties: `nid` which contains the Namspace Identifier, and `nss` which contains the Namespace Specific String. The `nss` property will be removed by higher order scheme handlers, such as the UUID URN scheme handler. The UUID of a URN can now be found in the `uuid` property. ### Breaking changes from 2.x URI validation has been removed as it was slow, exposed a vulnerabilty, and was generally not useful. ### Breaking changes from 1.x The `errors` array on parsed components is now an `error` string. # AssemblyScript Loader A convenient loader for [AssemblyScript](https://assemblyscript.org) modules. Demangles module exports to a friendly object structure compatible with TypeScript definitions and provides useful utility to read/write data from/to memory. 
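A rough sketch of the loader in Node.js, assuming the `@assemblyscript/loader` package; the `build/module.wasm` path and the `getGreeting` export are hypothetical, and the exact return shape of `instantiateSync` differs between loader versions:

```js
const fs = require("fs");
const loader = require("@assemblyscript/loader");

// instantiateSync compiles and instantiates the module, demangling its exports.
const { exports } = loader.instantiateSync(
  fs.readFileSync("./build/module.wasm"),
  { /* imports */ }
);

// __getString is one of the memory helpers the loader mixes into the exports;
// getGreeting is a hypothetical export returning a pointer to an AssemblyScript string.
console.log(exports.__getString(exports.getGreeting()));
```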
[Documentation](https://assemblyscript.org/loader.html) # ESLint Scope ESLint Scope is the [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) scope analyzer used in ESLint. It is a fork of [escope](http://github.com/estools/escope). ## Usage Install: ``` npm i eslint-scope --save ``` Example: ```js var eslintScope = require('eslint-scope'); var espree = require('espree'); var estraverse = require('estraverse'); var ast = espree.parse(code); var scopeManager = eslintScope.analyze(ast); var currentScope = scopeManager.acquire(ast); // global scope estraverse.traverse(ast, { enter: function(node, parent) { // do stuff if (/Function/.test(node.type)) { currentScope = scopeManager.acquire(node); // get current function scope } }, leave: function(node, parent) { if (/Function/.test(node.type)) { currentScope = currentScope.upper; // set to parent scope } // do stuff } }); ``` ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/eslint-scope/issues). ## Build Commands * `npm test` - run all linting and tests * `npm run lint` - run all linting ## License ESLint Scope is licensed under a permissive BSD 2-clause license. discontinuous-range =================== ``` DiscontinuousRange(1, 10).subtract(4, 6); // [ 1-3, 7-10 ] ``` [![Build Status](https://travis-ci.org/dtudury/discontinuous-range.png)](https://travis-ci.org/dtudury/discontinuous-range) this is a pretty simple module, but it exists to service another project so this'll be pretty lacking documentation. reading the test to see how this works may help. otherwise, here's an example that I think pretty much sums it up ###Example ``` var all_numbers = new DiscontinuousRange(1, 100); var bad_numbers = DiscontinuousRange(13).add(8).add(60,80); var good_numbers = all_numbers.clone().subtract(bad_numbers); console.log(good_numbers.toString()); //[ 1-7, 9-12, 14-59, 81-100 ] var random_good_number = good_numbers.index(Math.floor(Math.random() * good_numbers.length)); ``` # base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. **WARNING:** This module is **NOT RFC3548** compliant, it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. ## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base. 
Base | Alphabet ------------- | ------------- 2 | `01` 8 | `01234567` 11 | `0123456789a` 16 | `0123456789abcdef` 32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ` 32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32) 36 | `0123456789abcdefghijklmnopqrstuvwxyz` 58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz` 62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ` 64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/` 67 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~` ## How it works It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. Then for every leading zero in the input (not significant as a number) it will encode as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base. For example, a base58 alphabet packs roughly 5.858 bits per character. This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character. While unusual, this does mean that no padding is required and it works for bases like 43. ## LICENSE [MIT](LICENSE) A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets. ESQuery is a library for querying the AST output by Esprima for patterns of syntax using a CSS style selector system. Check out the demo: [demo](https://estools.github.io/esquery/) The following selectors are supported: * AST node type: `ForStatement` * [wildcard](http://dev.w3.org/csswg/selectors4/#universal-selector): `*` * [attribute existence](http://dev.w3.org/csswg/selectors4/#attribute-selectors): `[attr]` * [attribute value](http://dev.w3.org/csswg/selectors4/#attribute-selectors): `[attr="foo"]` or `[attr=123]` * attribute regex: `[attr=/foo.*/]` or (with flags) `[attr=/foo.*/is]` * attribute conditions: `[attr!="foo"]`, `[attr>2]`, `[attr<3]`, `[attr>=2]`, or `[attr<=3]` * nested attribute: `[attr.level2="foo"]` * field: `FunctionDeclaration > Identifier.id` * [First](http://dev.w3.org/csswg/selectors4/#the-first-child-pseudo) or [last](http://dev.w3.org/csswg/selectors4/#the-last-child-pseudo) child: `:first-child` or `:last-child` * [nth-child](http://dev.w3.org/csswg/selectors4/#the-nth-child-pseudo) (no ax+b support): `:nth-child(2)` * [nth-last-child](http://dev.w3.org/csswg/selectors4/#the-nth-last-child-pseudo) (no ax+b support): `:nth-last-child(1)` * [descendant](http://dev.w3.org/csswg/selectors4/#descendant-combinators): `ancestor descendant` * [child](http://dev.w3.org/csswg/selectors4/#child-combinators): `parent > child` * [following sibling](http://dev.w3.org/csswg/selectors4/#general-sibling-combinators): `node ~ sibling` * [adjacent sibling](http://dev.w3.org/csswg/selectors4/#adjacent-sibling-combinators): `node + adjacent` * [negation](http://dev.w3.org/csswg/selectors4/#negation-pseudo): `:not(ForStatement)` * [has](https://drafts.csswg.org/selectors-4/#has-pseudo): `:has(ForStatement)` * [matches-any](http://dev.w3.org/csswg/selectors4/#matches): `:matches([attr] > :first-child, :last-child)` * [subject indicator](http://dev.w3.org/csswg/selectors4/#subject): `!IfStatement > [name="foo"]` * class of AST node: `:statement`, `:expression`, `:declaration`, `:function`, or `:pattern` [![Build 
Status](https://travis-ci.org/estools/esquery.png?branch=master)](https://travis-ci.org/estools/esquery) # lodash.merge v4.6.2 The [Lodash](https://lodash.com/) method `_.merge` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.merge ``` In Node.js: ```js var merge = require('lodash.merge'); ``` See the [documentation](https://lodash.com/docs#merge) or [package source](https://github.com/lodash/lodash/blob/4.6.2-npm-packages/lodash.merge) for more details. # require-main-filename [![Build Status](https://travis-ci.org/yargs/require-main-filename.png)](https://travis-ci.org/yargs/require-main-filename) [![Coverage Status](https://coveralls.io/repos/yargs/require-main-filename/badge.svg?branch=master)](https://coveralls.io/r/yargs/require-main-filename?branch=master) [![NPM version](https://img.shields.io/npm/v/require-main-filename.svg)](https://www.npmjs.com/package/require-main-filename) `require.main.filename` is great for figuring out the entry point for the current application. This can be combined with a module like [pkg-conf](https://www.npmjs.com/package/pkg-conf) to, _as if by magic_, load top-level configuration. Unfortunately, `require.main.filename` sometimes fails when an application is executed with an alternative process manager, e.g., [iisnode](https://github.com/tjanczuk/iisnode). `require-main-filename` is a shim that addresses this problem. ## Usage ```js var main = require('require-main-filename')() // use main as an alternative to require.main.filename. ``` ## License ISC # balanced-match Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well! [![build status](https://secure.travis-ci.org/juliangruber/balanced-match.svg)](http://travis-ci.org/juliangruber/balanced-match) [![downloads](https://img.shields.io/npm/dm/balanced-match.svg)](https://www.npmjs.org/package/balanced-match) [![testling badge](https://ci.testling.com/juliangruber/balanced-match.png)](https://ci.testling.com/juliangruber/balanced-match) ## Example Get the first matching pair of braces: ```js var balanced = require('balanced-match'); console.log(balanced('{', '}', 'pre{in{nested}}post')); console.log(balanced('{', '}', 'pre{first}between{second}post')); console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post')); ``` The matches are: ```bash $ node example.js { start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' } { start: 3, end: 9, pre: 'pre', body: 'first', post: 'between{second}post' } { start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' } ``` ## API ### var m = balanced(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an object with those keys: * **start** the index of the first match of `a` * **end** the index of the matching `b` * **pre** the preamble, `a` and `b` not included * **body** the match, `a` and `b` not included * **post** the postscript, `a` and `b` not included If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`. ### var r = balanced.range(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an array with indexes: `[ <a index>, <b index> ]`. If there's no match, `undefined` will be returned. 
If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`.

## Installation

With [npm](https://npmjs.org) do:

```bash
npm install balanced-match
```

## Security contact information

To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure.

## License (MIT)

Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt;

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Standard library
================

Standard library components for use with `tsc` (portable) and `asc` (assembly). Base configurations (.json) and definition files (.d.ts) are relevant to `tsc` only and not used by `asc`.

# sprintf.js

**sprintf.js** is a complete open source JavaScript sprintf implementation for the *browser* and *node.js*. Its prototype is simple:

    string sprintf(string format , [mixed arg1 [, mixed arg2 [ ,...]]])

The placeholders in the format string are marked by `%` and are followed by one or more of these elements, in this order:

* An optional number followed by a `$` sign that selects which argument index to use for the value. If not specified, arguments will be placed in the same order as the placeholders in the input string.
* An optional `+` sign that forces the result to be preceded by a plus or minus sign on numeric values. By default, only the `-` sign is used on negative numbers.
* An optional padding specifier that says what character to use for padding (if specified). Possible values are `0` or any other character preceded by a `'` (single quote). The default is to pad with *spaces*.
* An optional `-` sign that causes sprintf to left-align the result of this placeholder. The default is to right-align the result.
* An optional number that says how many characters the result should have. If the value to be returned is shorter than this number, the result will be padded. When used with the `j` (JSON) type specifier, the padding length specifies the tab size used for indentation.
* An optional precision modifier, consisting of a `.` (dot) followed by a number, that says how many digits should be displayed for floating point numbers. When used with the `g` type specifier, it specifies the number of significant digits. When used on a string, it causes the result to be truncated.
* A type specifier that can be any of: * `%` — yields a literal `%` character * `b` — yields an integer as a binary number * `c` — yields an integer as the character with that ASCII value * `d` or `i` — yields an integer as a signed decimal number * `e` — yields a float using scientific notation * `u` — yields an integer as an unsigned decimal number * `f` — yields a float as is; see notes on precision above * `g` — yields a float as is; see notes on precision above * `o` — yields an integer as an octal number * `s` — yields a string as is * `x` — yields an integer as a hexadecimal number (lower-case) * `X` — yields an integer as a hexadecimal number (upper-case) * `j` — yields a JavaScript object or array as a JSON encoded string ## JavaScript `vsprintf` `vsprintf` is the same as `sprintf` except that it accepts an array of arguments, rather than a variable number of arguments: vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"]) ## Argument swapping You can also swap the arguments. That is, the order of the placeholders doesn't have to match the order of the arguments. You can do that by simply indicating in the format string which arguments the placeholders refer to: sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants") And, of course, you can repeat the placeholders without having to increase the number of arguments. ## Named arguments Format strings may contain replacement fields rather than positional placeholders. Instead of referring to a certain argument, you can now refer to a certain key within an object. Replacement fields are surrounded by rounded parentheses - `(` and `)` - and begin with a keyword that refers to a key: var user = { name: "Dolly" } sprintf("Hello %(name)s", user) // Hello Dolly Keywords in replacement fields can be optionally followed by any number of keywords or indexes: var users = [ {name: "Dolly"}, {name: "Molly"}, {name: "Polly"} ] sprintf("Hello %(users[0].name)s, %(users[1].name)s and %(users[2].name)s", {users: users}) // Hello Dolly, Molly and Polly Note: mixing positional and named placeholders is not (yet) supported ## Computed values You can pass in a function as a dynamic value and it will be invoked (with no arguments) in order to compute the value on-the-fly. sprintf("Current timestamp: %d", Date.now) // Current timestamp: 1398005382890 sprintf("Current date and time: %s", function() { return new Date().toString() }) # AngularJS You can now use `sprintf` and `vsprintf` (also aliased as `fmt` and `vfmt` respectively) in your AngularJS projects. See `demo/`. # Installation ## Via Bower bower install sprintf ## Or as a node.js module npm install sprintf-js ### Usage var sprintf = require("sprintf-js").sprintf, vsprintf = require("sprintf-js").vsprintf sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants") vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"]) # License **sprintf.js** is licensed under the terms of the 3-clause BSD license. 
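The padding, precision, argument-swapping and named-argument rules described above compose freely. Here is a small, hedged sketch (not taken from the sprintf.js docs); the outputs in the comments are what those rules imply:

```js
// Illustrative sketch combining the sprintf format rules described above.
var sprintf  = require("sprintf-js").sprintf;
var vsprintf = require("sprintf-js").vsprintf;

// Width 8, zero padding, 3 digits of precision.
console.log(sprintf("%08.3f", 3.14159));                    // "0003.142"

// Argument swapping: the first placeholder takes the 2nd argument.
console.log(sprintf("%2$s is %1$d years old", 7, "Polly")); // "Polly is 7 years old"

// Named arguments read from an object key.
console.log(sprintf("Hello %(name)s", { name: "Dolly" }));  // "Hello Dolly"

// vsprintf takes the same format string but an array of arguments.
console.log(vsprintf("%s, %s and %s", ["a", "b", "c"]));    // "a, b and c"
```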
# which-module > Find the module object for something that was require()d [![Build Status](https://travis-ci.org/nexdrew/which-module.svg?branch=master)](https://travis-ci.org/nexdrew/which-module) [![Coverage Status](https://coveralls.io/repos/github/nexdrew/which-module/badge.svg?branch=master)](https://coveralls.io/github/nexdrew/which-module?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) Find the `module` object in `require.cache` for something that was `require()`d or `import`ed - essentially a reverse `require()` lookup. Useful for libs that want to e.g. lookup a filename for a module or submodule that it did not `require()` itself. ## Install and Usage ``` npm install --save which-module ``` ```js const whichModule = require('which-module') console.log(whichModule(require('something'))) // Module { // id: '/path/to/project/node_modules/something/index.js', // exports: [Function], // parent: ..., // filename: '/path/to/project/node_modules/something/index.js', // loaded: true, // children: [], // paths: [ '/path/to/project/node_modules/something/node_modules', // '/path/to/project/node_modules', // '/path/to/node_modules', // '/path/node_modules', // '/node_modules' ] } ``` ## API ### `whichModule(exported)` Return the [`module` object](https://nodejs.org/api/modules.html#modules_the_module_object), if any, that represents the given argument in the `require.cache`. `exported` can be anything that was previously `require()`d or `import`ed as a module, submodule, or dependency - which means `exported` is identical to the `module.exports` returned by this method. If `exported` did not come from the `exports` of a `module` in `require.cache`, then this method returns `null`. ## License ISC © Contributors # universal-url [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Dependency Monitor][greenkeeper-image]][greenkeeper-url] > WHATWG [`URL`](https://developer.mozilla.org/en/docs/Web/API/URL) for Node & Browser. * For Node.js versions `>= 8`, the native implementation will be used. * For Node.js versions `< 8`, a [shim](https://npmjs.com/whatwg-url) will be used. * For web browsers without a native implementation, the same shim will be used. ## Installation [Node.js](http://nodejs.org/) `>= 6` is required. To install, type this at the command line: ```shell npm install universal-url ``` ## Usage ```js const {URL, URLSearchParams} = require('universal-url'); const url = new URL('http://domain/'); const params = new URLSearchParams('?param=value'); ``` Global shim: ```js require('universal-url').shim(); const url = new URL('http://domain/'); const params = new URLSearchParams('?param=value'); ``` ## Browserify/etc The bundled file size of this library can be large for a web browser. If this is a problem, try using [universal-url-lite](https://npmjs.com/universal-url-lite) in your build as an alias for this module. 
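As a small addition to the usage example above (not from the package's own docs): the objects returned by `universal-url` follow the WHATWG URL standard, so the usual `searchParams` methods apply. The host name below is only a placeholder.

```js
// Hedged sketch: standard WHATWG URL/URLSearchParams behaviour on the objects
// returned by universal-url. The URL itself is a placeholder.
const { URL } = require('universal-url');

const url = new URL('http://domain/path?page=2');
url.searchParams.append('sort', 'desc');

console.log(url.searchParams.get('page')); // '2'
console.log(url.href);                     // 'http://domain/path?page=2&sort=desc'
```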
[npm-image]: https://img.shields.io/npm/v/universal-url.svg [npm-url]: https://npmjs.org/package/universal-url [travis-image]: https://img.shields.io/travis/stevenvachon/universal-url.svg [travis-url]: https://travis-ci.org/stevenvachon/universal-url [greenkeeper-image]: https://badges.greenkeeper.io/stevenvachon/universal-url.svg [greenkeeper-url]: https://greenkeeper.io/ <p align="center"> <img width="250" src="/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> [![Build Status][travis-image]][travis-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description : Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments. > <img width="400" src="/screen.png"> * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage : ### Simple Example ```javascript #!/usr/bin/env node const {argv} = require('yargs') if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node require('yargs') // eslint-disable-line .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ## Webpack See usage examples of yargs with webpack in [docs](/docs/webpack.md). ## Community : Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). 
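As a complement to the simple and complex examples above, here is one more hedged sketch showing a few commonly combined builder methods (`option`, `alias`, `demandOption`, `help`); the `--input`/`--verbose` option names are invented for illustration.

```javascript
#!/usr/bin/env node
// Hedged sketch of yargs builder methods; the option names are illustrative.
const argv = require('yargs')
  .option('input', {
    alias: 'i',
    type: 'string',
    describe: 'path to the input file',
    demandOption: true // yargs prints usage and exits if this option is missing
  })
  .option('verbose', {
    alias: 'v',
    type: 'boolean',
    default: false
  })
  .help()
  .argv

if (argv.verbose) console.info(`reading ${argv.input}`)
```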
## Documentation : ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Contributing](/contributing.md) [travis-url]: https://travis-ci.org/yargs/yargs [travis-image]: https://img.shields.io/travis/yargs/yargs/master.svg [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # fs-minipass Filesystem streams based on [minipass](http://npm.im/minipass). 4 classes are exported: - ReadStream - ReadStreamSync - WriteStream - WriteStreamSync When using `ReadStreamSync`, all of the data is made available immediately upon consuming the stream. Nothing is buffered in memory when the stream is constructed. If the stream is piped to a writer, then it will synchronously `read()` and emit data into the writer as fast as the writer can consume it. (That is, it will respect backpressure.) If you call `stream.read()` then it will read the entire file and return the contents. When using `WriteStreamSync`, every write is flushed to the file synchronously. If your writes all come in a single tick, then it'll write it all out in a single tick. It's as synchronous as you are. The async versions work much like their node builtin counterparts, with the exception of introducing significantly less Stream machinery overhead. ## USAGE It's just streams, you pipe them or read() them or write() to them. ```js const fsm = require('fs-minipass') const readStream = new fsm.ReadStream('file.txt') const writeStream = new fsm.WriteStream('output.txt') writeStream.write('some file header or whatever\n') readStream.pipe(writeStream) ``` ## ReadStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `readSize` The size of reads to do, defaults to 16MB - `size` The size of the file, if known. Prevents zero-byte read() call at the end. - `autoClose` Set to `false` to prevent the file descriptor from being closed when the file is done being read. ## WriteStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `mode` The mode to create the file with. Defaults to `0o666`. - `start` The position in the file to start reading. 
If not specified, then the file will start writing at position zero, and be truncated by default. - `autoClose` Set to `false` to prevent the file descriptor from being closed when the stream is ended. - `flags` Flags to use when opening the file. Irrelevant if `fd` is passed in, since file won't be opened in that case. Defaults to `'a'` if a `pos` is specified, or `'w'` otherwise. # <img src="./logo.png" alt="bn.js" width="160" height="160" /> > BigNum in pure javascript [![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js) ## Install `npm install --save bn.js` ## Usage ```js const BN = require('bn.js'); var a = new BN('dead', 16); var b = new BN('101010', 2); var res = a.add(b); console.log(res.toString(10)); // 57047 ``` **Note**: decimals are not supported in this library. ## Sponsors [![Scout APM](./sponsors/scout-apm.png)](https://scoutapm.com/) My Open Source work is supported by [Scout APM](https://scoutapm.com/) and [other sponsors](https://github.com/sponsors/indutny). ## Notation ### Prefixes There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name: * `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs * `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like `mod()`. In such cases if the result will be negative - modulo will be added to the result to make it positive ### Postfixes * `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported. * `rn` - both argument and return value of the function are plain JavaScript Numbers. Decimals are not supported. ### Examples * `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a` * `a.umod(b)` - reduce `a` modulo `b`, returning positive value * `a.iushln(13)` - shift bits of `a` left by 13 ## Instructions Prefixes/postfixes are put in parens at the end of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian). ### Utilities * `a.clone()` - clone number * `a.toString(base, length)` - convert to base-string and pad with zeroes * `a.toNumber()` - convert to Javascript Number (limited to 53 bits) * `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`) * `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding * `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array` * `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available). 
For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)` * `a.bitLength()` - get number of bits occupied * `a.zeroBits()` - return number of less-significant consequent zero bits (example: `1010000` has 4 zero bits) * `a.byteLength()` - return number of bytes occupied * `a.isNeg()` - true if the number is negative * `a.isEven()` - no comments * `a.isOdd()` - no comments * `a.isZero()` - no comments * `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`) * `a.lt(b)` - `a` less than `b` (`n`) * `a.lte(b)` - `a` less than or equals `b` (`n`) * `a.gt(b)` - `a` greater than `b` (`n`) * `a.gte(b)` - `a` greater than or equals `b` (`n`) * `a.eq(b)` - `a` equals `b` (`n`) * `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width * `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width * `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance * `BN.max(a, b)` - return `a` if `a` bigger than `b` * `BN.min(a, b)` - return `a` if `a` less than `b` ### Arithmetics * `a.neg()` - negate sign (`i`) * `a.abs()` - absolute value (`i`) * `a.add(b)` - addition (`i`, `n`, `in`) * `a.sub(b)` - subtraction (`i`, `n`, `in`) * `a.mul(b)` - multiply (`i`, `n`, `in`) * `a.sqr()` - square (`i`) * `a.pow(b)` - raise `a` to the power of `b` * `a.div(b)` - divide (`divn`, `idivn`) * `a.mod(b)` - reduct (`u`, `n`) (but no `umodn`) * `a.divmod(b)` - quotient and modulus obtained by dividing * `a.divRound(b)` - rounded division ### Bit operations * `a.or(b)` - or (`i`, `u`, `iu`) * `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future) * `a.xor(b)` - xor (`i`, `u`, `iu`) * `a.setn(b, value)` - set specified bit to `value` * `a.shln(b)` - shift left (`i`, `u`, `iu`) * `a.shrn(b)` - shift right (`i`, `u`, `iu`) * `a.testn(b)` - test if specified bit is set * `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`) * `a.bincn(b)` - add `1 << b` to the number * `a.notn(w)` - not (for the width specified by `w`) (`i`) ### Reduction * `a.gcd(b)` - GCD * `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., gcd: ... }`) * `a.invm(b)` - inverse `a` modulo `b` ## Fast reduction When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using special algorithm for [Mersenne Prime][1]. ### Reduction context To enable this trick one should create a reduction context: ```js var red = BN.red(num); ``` where `num` is just a BN instance. Or: ```js var red = BN.red(primeName); ``` Where `primeName` is either of these [Mersenne Primes][1]: * `'k256'` * `'p224'` * `'p192'` * `'p25519'` Or: ```js var red = BN.mont(num); ``` To reduce numbers with [Montgomery trick][0]. `.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`. ### Converting numbers Before performing anything in reduction context - numbers should be converted to it. 
Usually, this means that one should: * Convert inputs to reducted ones * Operate on them in reduction context * Convert outputs back from the reduction context Here is how one may convert numbers to `red`: ```js var redA = a.toRed(red); ``` Where `red` is a reduction context created using instructions above Here is how to convert them back: ```js var a = redA.fromRed(); ``` ### Red instructions Most of the instructions from the very start of this readme have their counterparts in red context: * `a.redAdd(b)`, `a.redIAdd(b)` * `a.redSub(b)`, `a.redISub(b)` * `a.redShl(num)` * `a.redMul(b)`, `a.redIMul(b)` * `a.redSqr()`, `a.redISqr()` * `a.redSqrt()` - square root modulo reduction context's prime * `a.redInvm()` - modular inverse of the number * `a.redNeg()` * `a.redPow(b)` - modular exponentiation ### Number Size Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers. ## LICENSE This software is licensed under the MIT License. [0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication [1]: https://en.wikipedia.org/wiki/Mersenne_prime argparse ======== [![Build Status](https://secure.travis-ci.org/nodeca/argparse.svg?branch=master)](http://travis-ci.org/nodeca/argparse) [![NPM version](https://img.shields.io/npm/v/argparse.svg)](https://www.npmjs.org/package/argparse) CLI arguments parser for node.js. Javascript port of python's [argparse](http://docs.python.org/dev/library/argparse.html) module (original version 3.2). That's a full port, except some very rare options, recorded in issue tracker. **NB. Difference with original.** - Method names changed to camelCase. See [generated docs](http://nodeca.github.com/argparse/). - Use `defaultValue` instead of `default`. - Use `argparse.Const.REMAINDER` instead of `argparse.REMAINDER`, and similarly for constant values `OPTIONAL`, `ZERO_OR_MORE`, and `ONE_OR_MORE` (aliases for `nargs` values `'?'`, `'*'`, `'+'`, respectively), and `SUPPRESS`. Example ======= test.js file: ```javascript #!/usr/bin/env node 'use strict'; var ArgumentParser = require('../lib/argparse').ArgumentParser; var parser = new ArgumentParser({ version: '0.0.1', addHelp:true, description: 'Argparse example' }); parser.addArgument( [ '-f', '--foo' ], { help: 'foo bar' } ); parser.addArgument( [ '-b', '--bar' ], { help: 'bar foo' } ); parser.addArgument( '--baz', { help: 'baz bar' } ); var args = parser.parseArgs(); console.dir(args); ``` Display help: ``` $ ./test.js -h usage: example.js [-h] [-v] [-f FOO] [-b BAR] [--baz BAZ] Argparse example Optional arguments: -h, --help Show this help message and exit. -v, --version Show program's version number and exit. -f FOO, --foo FOO foo bar -b BAR, --bar BAR bar foo --baz BAZ baz bar ``` Parse arguments: ``` $ ./test.js -f=3 --bar=4 --baz 5 { foo: '3', bar: '4', baz: '5' } ``` More [examples](https://github.com/nodeca/argparse/tree/master/examples). ArgumentParser objects ====================== ``` new ArgumentParser({parameters hash}); ``` Creates a new ArgumentParser object. **Supported params:** - ```description``` - Text to display before the argument help. - ```epilog``` - Text to display after the argument help. - ```addHelp``` - Add a -h/–help option to the parser. (default: true) - ```argumentDefault``` - Set the global default value for arguments. (default: null) - ```parents``` - A list of ArgumentParser objects whose arguments should also be included. - ```prefixChars``` - The set of characters that prefix optional arguments. 
(default: ‘-‘) - ```formatterClass``` - A class for customizing the help output. - ```prog``` - The name of the program (default: `path.basename(process.argv[1])`) - ```usage``` - The string describing the program usage (default: generated) - ```conflictHandler``` - Usually unnecessary, defines strategy for resolving conflicting optionals. **Not supported yet** - ```fromfilePrefixChars``` - The set of characters that prefix files from which additional arguments should be read. Details in [original ArgumentParser guide](http://docs.python.org/dev/library/argparse.html#argumentparser-objects) addArgument() method ==================== ``` ArgumentParser.addArgument(name or flag or [name] or [flags...], {options}) ``` Defines how a single command-line argument should be parsed. - ```name or flag or [name] or [flags...]``` - Either a positional name (e.g., `'foo'`), a single option (e.g., `'-f'` or `'--foo'`), an array of a single positional name (e.g., `['foo']`), or an array of options (e.g., `['-f', '--foo']`). Options: - ```action``` - The basic type of action to be taken when this argument is encountered at the command line. - ```nargs```- The number of command-line arguments that should be consumed. - ```constant``` - A constant value required by some action and nargs selections. - ```defaultValue``` - The value produced if the argument is absent from the command line. - ```type``` - The type to which the command-line argument should be converted. - ```choices``` - A container of the allowable values for the argument. - ```required``` - Whether or not the command-line option may be omitted (optionals only). - ```help``` - A brief description of what the argument does. - ```metavar``` - A name for the argument in usage messages. - ```dest``` - The name of the attribute to be added to the object returned by parseArgs(). Details in [original add_argument guide](http://docs.python.org/dev/library/argparse.html#the-add-argument-method) Action (some details) ================ ArgumentParser objects associate command-line arguments with actions. These actions can do just about anything with the command-line arguments associated with them, though most actions simply add an attribute to the object returned by parseArgs(). The action keyword argument specifies how the command-line arguments should be handled. The supported actions are: - ```store``` - Just stores the argument’s value. This is the default action. - ```storeConst``` - Stores value, specified by the const keyword argument. (Note that the const keyword argument defaults to the rather unhelpful None.) The 'storeConst' action is most commonly used with optional arguments, that specify some sort of flag. - ```storeTrue``` and ```storeFalse``` - Stores values True and False respectively. These are special cases of 'storeConst'. - ```append``` - Stores a list, and appends each argument value to the list. This is useful to allow an option to be specified multiple times. - ```appendConst``` - Stores a list, and appends value, specified by the const keyword argument to the list. (Note, that the const keyword argument defaults is None.) The 'appendConst' action is typically used when multiple arguments need to store constants to the same list. - ```count``` - Counts the number of times a keyword argument occurs. For example, used for increasing verbosity levels. - ```help``` - Prints a complete help message for all the options in the current parser and then exits. By default a help action is automatically added to the parser. 
See ArgumentParser for details of how the output is created.
- ```version``` - Prints version information and exits. Expects a `version=` keyword argument in the addArgument() call.

Details in [original action guide](http://docs.python.org/dev/library/argparse.html#action)

Sub-commands
============

    ArgumentParser.addSubparsers()

Many programs split their functionality into a number of sub-commands; for example, the svn program can invoke sub-commands like `svn checkout`, `svn update`, and `svn commit`. Splitting up functionality this way can be a particularly good idea when a program performs several different functions which require different kinds of command-line arguments. `ArgumentParser` supports creation of such sub-commands with the `addSubparsers()` method. The `addSubparsers()` method is normally called with no arguments and returns a special action object. This object has a single method `addParser()`, which takes a command name and any `ArgumentParser` constructor arguments, and returns an `ArgumentParser` object that can be modified as usual.

Example: sub_commands.js

```javascript
#!/usr/bin/env node
'use strict';

var ArgumentParser = require('../lib/argparse').ArgumentParser;
var parser = new ArgumentParser({
  version: '0.0.1',
  addHelp: true,
  description: 'Argparse examples: sub-commands',
});

var subparsers = parser.addSubparsers({
  title: 'subcommands',
  dest: "subcommand_name"
});

var bar = subparsers.addParser('c1', { addHelp: true });
bar.addArgument(
  [ '-f', '--foo' ],
  { action: 'store', help: 'foo3 bar3' }
);

var bar = subparsers.addParser('c2', { aliases: ['co'], addHelp: true });
bar.addArgument(
  [ '-b', '--bar' ],
  { action: 'store', type: 'int', help: 'foo3 bar3' }
);

var args = parser.parseArgs();
console.dir(args);
```

Details in [original sub-commands guide](http://docs.python.org/dev/library/argparse.html#sub-commands)

Contributors
============

- [Eugene Shkuropat](https://github.com/shkuropat)
- [Paul Jacobson](https://github.com/hpaulj)

[others](https://github.com/nodeca/argparse/graphs/contributors)

License
=======

Copyright (c) 2012 [Vitaly Puzrin](https://github.com/puzrin). Released under the MIT license. See [LICENSE](https://github.com/nodeca/argparse/blob/master/LICENSE) for details.

Shims used when bundling asc for browser usage.
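Before moving on to Enquirer, here is a short hedged sketch that ties together several of the `addArgument()` options documented in the argparse section above (`choices`, `defaultValue`, `action: 'storeTrue'`, `dest`); the option names are invented, and the package is assumed to be installed as `argparse`.

```javascript
#!/usr/bin/env node
'use strict';

// Hedged sketch combining several addArgument() options described above.
// The --level/--quiet options are invented for illustration.
var ArgumentParser = require('argparse').ArgumentParser;

var parser = new ArgumentParser({
  addHelp: true,
  description: 'addArgument() options demo'
});

parser.addArgument(['-l', '--level'], {
  choices: ['debug', 'info', 'warn'], // restrict allowable values
  defaultValue: 'info',               // used when the option is absent
  help: 'log level'
});

parser.addArgument(['-q', '--quiet'], {
  action: 'storeTrue',                // boolean flag, takes no value
  dest: 'quiet',
  help: 'suppress output'
});

var args = parser.parseArgs();
console.dir(args); // e.g. { level: 'debug', quiet: true } for: -l debug -q
```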
<h1 align="center">Enquirer</h1> <p align="center"> <a href="https://npmjs.org/package/enquirer"> <img src="https://img.shields.io/npm/v/enquirer.svg" alt="version"> </a> <a href="https://travis-ci.org/enquirer/enquirer"> <img src="https://img.shields.io/travis/enquirer/enquirer.svg" alt="travis"> </a> <a href="https://npmjs.org/package/enquirer"> <img src="https://img.shields.io/npm/dm/enquirer.svg" alt="downloads"> </a> </p> <br> <br> <p align="center"> <b>Stylish CLI prompts that are user-friendly, intuitive and easy to create.</b><br> <sub>>_ Prompts should be more like conversations than inquisitions▌</sub> </p> <br> <p align="center"> <sub>(Example shows Enquirer's <a href="#survey-prompt">Survey Prompt</a>)</a></sub> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/survey-prompt.gif" alt="Enquirer Survey Prompt" width="750"><br> <sub>The terminal in all examples is <a href="https://hyper.is/">Hyper</a>, theme is <a href="https://github.com/jonschlinkert/hyper-monokai-extended">hyper-monokai-extended</a>.</sub><br><br> <a href="#built-in-prompts"><strong>See more prompt examples</strong></a> </p> <br> <br> Created by [jonschlinkert](https://github.com/jonschlinkert) and [doowb](https://github.com/doowb), Enquirer is fast, easy to use, and lightweight enough for small projects, while also being powerful and customizable enough for the most advanced use cases. * **Fast** - [Loads in ~4ms](#-performance) (that's about _3-4 times faster than a [single frame of a HD movie](http://www.endmemo.com/sconvert/framespersecondframespermillisecond.php) at 60fps_) * **Lightweight** - Only one dependency, the excellent [ansi-colors](https://github.com/doowb/ansi-colors) by [Brian Woodward](https://github.com/doowb). * **Easy to implement** - Uses promises and async/await and sensible defaults to make prompts easy to create and implement. * **Easy to use** - Thrill your users with a better experience! Navigating around input and choices is a breeze. You can even create [quizzes](examples/fun/countdown.js), or [record](examples/fun/record.js) and [playback](examples/fun/play.js) key bindings to aid with tutorials and videos. * **Intuitive** - Keypress combos are available to simplify usage. * **Flexible** - All prompts can be used standalone or chained together. * **Stylish** - Easily override semantic styles and symbols for any part of the prompt. * **Extensible** - Easily create and use custom prompts by extending Enquirer's built-in [prompts](#-prompts). * **Pluggable** - Add advanced features to Enquirer using plugins. * **Validation** - Optionally validate user input with any prompt. * **Well tested** - All prompts are well-tested, and tests are easy to create without having to use brittle, hacky solutions to spy on prompts or "inject" values. * **Examples** - There are numerous [examples](examples) available to help you get started. If you like Enquirer, please consider starring or tweeting about this project to show your support. Thanks! <br> <p align="center"> <b>>_ Ready to start making prompts your users will love? ▌</b><br> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/heartbeat.gif" alt="Enquirer Select Prompt with heartbeat example" width="750"> </p> <br> <br> ## ❯ Getting started Get started with Enquirer, the most powerful and easy-to-use Node.js library for creating interactive CLI prompts. 
* [Install](#-install) * [Usage](#-usage) * [Enquirer](#-enquirer) * [Prompts](#-prompts) - [Built-in Prompts](#-prompts) - [Custom Prompts](#-custom-prompts) * [Key Bindings](#-key-bindings) * [Options](#-options) * [Release History](#-release-history) * [Performance](#-performance) * [About](#-about) <br> ## ❯ Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install enquirer --save ``` Install with [yarn](https://yarnpkg.com/en/): ```sh $ yarn add enquirer ``` <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/npm-install.gif" alt="Install Enquirer with NPM" width="750"> </p> _(Requires Node.js 8.6 or higher. Please let us know if you need support for an earlier version by creating an [issue](../../issues/new).)_ <br> ## ❯ Usage ### Single prompt The easiest way to get started with enquirer is to pass a [question object](#prompt-options) to the `prompt` method. ```js const { prompt } = require('enquirer'); const response = await prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); // { username: 'jonschlinkert' } ``` _(Examples with `await` need to be run inside an `async` function)_ ### Multiple prompts Pass an array of ["question" objects](#prompt-options) to run a series of prompts. ```js const response = await prompt([ { type: 'input', name: 'name', message: 'What is your name?' }, { type: 'input', name: 'username', message: 'What is your username?' } ]); console.log(response); // { name: 'Edward Chan', username: 'edwardmchan' } ``` ### Different ways to run enquirer #### 1. By importing the specific `built-in prompt` ```js const { Confirm } = require('enquirer'); const prompt = new Confirm({ name: 'question', message: 'Did you like enquirer?' }); prompt.run() .then(answer => console.log('Answer:', answer)); ``` #### 2. By passing the options to `prompt` ```js const { prompt } = require('enquirer'); prompt({ type: 'confirm', name: 'question', message: 'Did you like enquirer?' }) .then(answer => console.log('Answer:', answer)); ``` **Jump to**: [Getting Started](#-getting-started) · [Prompts](#-prompts) · [Options](#-options) · [Key Bindings](#-key-bindings) <br> ## ❯ Enquirer **Enquirer is a prompt runner** Add Enquirer to your JavaScript project with following line of code. ```js const Enquirer = require('enquirer'); ``` The main export of this library is the `Enquirer` class, which has methods and features designed to simplify running prompts. ```js const { prompt } = require('enquirer'); const question = [ { type: 'input', name: 'username', message: 'What is your username?' }, { type: 'password', name: 'password', message: 'What is your password?' } ]; let answers = await prompt(question); console.log(answers); ``` **Prompts control how values are rendered and returned** Each individual prompt is a class with special features and functionality for rendering the types of values you want to show users in the terminal, and subsequently returning the types of values you need to use in your application. **How can I customize prompts?** Below in this guide you will find information about creating [custom prompts](#-custom-prompts). For now, we'll focus on how to customize an existing prompt. All of the individual [prompt classes](#built-in-prompts) in this library are exposed as static properties on Enquirer. This allows them to be used directly without using `enquirer.prompt()`. Use this approach if you need to modify a prompt instance, or listen for events on the prompt. 
**Example** ```js const { Input } = require('enquirer'); const prompt = new Input({ name: 'username', message: 'What is your username?' }); prompt.run() .then(answer => console.log('Username:', answer)) .catch(console.error); ``` ### [Enquirer](index.js#L20) Create an instance of `Enquirer`. **Params** * `options` **{Object}**: (optional) Options to use with all prompts. * `answers` **{Object}**: (optional) Answers object to initialize with. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); ``` ### [register()](index.js#L42) Register a custom prompt type. **Params** * `type` **{String}** * `fn` **{Function|Prompt}**: `Prompt` class, or a function that returns a `Prompt` class. * `returns` **{Object}**: Returns the Enquirer instance **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); enquirer.register('customType', require('./custom-prompt')); ``` ### [prompt()](index.js#L78) Prompt function that takes a "question" object or array of question objects, and returns an object with responses from the user. **Params** * `questions` **{Array|Object}**: Options objects for one or more prompts to run. * `returns` **{Promise}**: Promise that returns an "answers" object with the user's responses. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); const response = await enquirer.prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); ``` ### [use()](index.js#L160) Use an enquirer plugin. **Params** * `plugin` **{Function}**: Plugin function that takes an instance of Enquirer. * `returns` **{Object}**: Returns the Enquirer instance. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); const plugin = enquirer => { // do stuff to enquire instance }; enquirer.use(plugin); ``` ### [Enquirer#prompt](index.js#L210) Prompt function that takes a "question" object or array of question objects, and returns an object with responses from the user. **Params** * `questions` **{Array|Object}**: Options objects for one or more prompts to run. * `returns` **{Promise}**: Promise that returns an "answers" object with the user's responses. **Example** ```js const { prompt } = require('enquirer'); const response = await prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); ``` <br> ## ❯ Prompts This section is about Enquirer's prompts: what they look like, how they work, how to run them, available options, and how to customize the prompts or create your own prompt concept. **Getting started with Enquirer's prompts** * [Prompt](#prompt) - The base `Prompt` class used by other prompts - [Prompt Options](#prompt-options) * [Built-in prompts](#built-in-prompts) * [Prompt Types](#prompt-types) - The base `Prompt` class used by other prompts * [Custom prompts](#%E2%9D%AF-custom-prompts) - Enquirer 2.0 introduced the concept of prompt "types", with the goal of making custom prompts easier than ever to create and use. ### Prompt The base `Prompt` class is used to create all other prompts. ```js const { Prompt } = require('enquirer'); class MyCustomPrompt extends Prompt {} ``` See the documentation for [creating custom prompts](#-custom-prompts) to learn more about how this works. 
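Before the prompt options are covered below, here is a small hedged sketch combining the `register()` and `prompt()` methods shown above; it registers one of the built-in prompt classes under a made-up type name (`'yesno'`) purely for illustration.

```js
// Hedged sketch: register a built-in prompt class under a custom type name,
// then run it through enquirer.prompt(). The 'yesno' type name is invented.
const Enquirer = require('enquirer');
const { Confirm } = require('enquirer');

const enquirer = new Enquirer();
enquirer.register('yesno', Confirm);

enquirer.prompt({
  type: 'yesno',
  name: 'ready',
  message: 'Ready to continue?'
})
  .then(answers => console.log(answers)) // e.g. { ready: true }
  .catch(console.error);
```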
#### Prompt Options Each prompt takes an options object (aka "question" object), that implements the following interface: ```js { // required type: string | function, name: string | function, message: string | function | async function, // optional skip: boolean | function | async function, initial: string | function | async function, format: function | async function, result: function | async function, validate: function | async function, } ``` Each property of the options object is described below: | **Property** | **Required?** | **Type** | **Description** | | ------------ | ------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `type` | yes | `string\|function` | Enquirer uses this value to determine the type of prompt to run, but it's optional when prompts are run directly. | | `name` | yes | `string\|function` | Used as the key for the answer on the returned values (answers) object. | | `message` | yes | `string\|function` | The message to display when the prompt is rendered in the terminal. | | `skip` | no | `boolean\|function` | If `true` it will not ask that prompt. | | `initial` | no | `string\|function` | The default value to return if the user does not supply a value. | | `format` | no | `function` | Function to format user input in the terminal. | | `result` | no | `function` | Function to format the final submitted value before it's returned. | | `validate` | no | `function` | Function to validate the submitted value before it's returned. This function may return a boolean or a string. If a string is returned it will be used as the validation error message. | **Example usage** ```js const { prompt } = require('enquirer'); const question = { type: 'input', name: 'username', message: 'What is your username?' }; prompt(question) .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` <br> ### Built-in prompts * [AutoComplete Prompt](#autocomplete-prompt) * [BasicAuth Prompt](#basicauth-prompt) * [Confirm Prompt](#confirm-prompt) * [Form Prompt](#form-prompt) * [Input Prompt](#input-prompt) * [Invisible Prompt](#invisible-prompt) * [List Prompt](#list-prompt) * [MultiSelect Prompt](#multiselect-prompt) * [Numeral Prompt](#numeral-prompt) * [Password Prompt](#password-prompt) * [Quiz Prompt](#quiz-prompt) * [Survey Prompt](#survey-prompt) * [Scale Prompt](#scale-prompt) * [Select Prompt](#select-prompt) * [Sort Prompt](#sort-prompt) * [Snippet Prompt](#snippet-prompt) * [Toggle Prompt](#toggle-prompt) ### AutoComplete Prompt Prompt that auto-completes as the user types, and returns the selected value as a string. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/autocomplete-prompt.gif" alt="Enquirer AutoComplete Prompt" width="750"> </p> **Example Usage** ```js const { AutoComplete } = require('enquirer'); const prompt = new AutoComplete({ name: 'flavor', message: 'Pick your favorite flavor', limit: 10, initial: 2, choices: [ 'Almond', 'Apple', 'Banana', 'Blackberry', 'Blueberry', 'Cherry', 'Chocolate', 'Cinnamon', 'Coconut', 'Cranberry', 'Grape', 'Nougat', 'Orange', 'Pear', 'Pineapple', 'Raspberry', 'Strawberry', 'Vanilla', 'Watermelon', 'Wintergreen' ] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **AutoComplete Options** | Option | Type | Default | Description | | ----------- | ---------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | | `highlight` | `function` | `dim` version of primary style | The color to use when "highlighting" characters in the list that match user input. | | `multiple` | `boolean` | `false` | Allow multiple choices to be selected. | | `suggest` | `function` | Greedy match, returns true if choice message contains input string. | Function that filters choices. Takes user input and a choices array, and returns a list of matching choices. | | `initial` | `number` | 0 | Preselected item in the list of choices. | | `footer` | `function` | None | Function that displays [footer text](https://github.com/enquirer/enquirer/blob/6c2819518a1e2ed284242a99a685655fbaabfa28/examples/autocomplete/option-footer.js#L10) | **Related prompts** * [Select](#select-prompt) * [MultiSelect](#multiselect-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### BasicAuth Prompt Prompt that asks for username and password to authenticate the user. The default implementation of `authenticate` function in `BasicAuth` prompt is to compare the username and password with the values supplied while running the prompt. The implementer is expected to override the `authenticate` function with a custom logic such as making an API request to a server to authenticate the username and password entered and expect a token back. <p align="center"> <img src="https://user-images.githubusercontent.com/13731210/61570485-7ffd9c00-aaaa-11e9-857a-d47dc7008284.gif" alt="Enquirer BasicAuth Prompt" width="750"> </p> **Example Usage** ```js const { BasicAuth } = require('enquirer'); const prompt = new BasicAuth({ name: 'password', message: 'Please enter your password', username: 'rajat-sr', password: '123', showPassword: true }); prompt .run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Confirm Prompt Prompt that returns `true` or `false`. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/confirm-prompt.gif" alt="Enquirer Confirm Prompt" width="750"> </p> **Example Usage** ```js const { Confirm } = require('enquirer'); const prompt = new Confirm({ name: 'question', message: 'Want to answer?' 
}); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Numeral](#numeral-prompt) * [Password](#password-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Form Prompt Prompt that allows the user to enter and submit multiple values on a single terminal screen. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/form-prompt.gif" alt="Enquirer Form Prompt" width="750"> </p> **Example Usage** ```js const { Form } = require('enquirer'); const prompt = new Form({ name: 'user', message: 'Please provide the following information:', choices: [ { name: 'firstname', message: 'First Name', initial: 'Jon' }, { name: 'lastname', message: 'Last Name', initial: 'Schlinkert' }, { name: 'username', message: 'GitHub username', initial: 'jonschlinkert' } ] }); prompt.run() .then(value => console.log('Answer:', value)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Input Prompt Prompt that takes user input and returns a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/input-prompt.gif" alt="Enquirer Input Prompt" width="750"> </p> **Example Usage** ```js const { Input } = require('enquirer'); const prompt = new Input({ message: 'What is your username?', initial: 'jonschlinkert' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.log); ``` You can use [data-store](https://github.com/jonschlinkert/data-store) to store [input history](https://github.com/enquirer/enquirer/blob/master/examples/input/option-history.js) that the user can cycle through (see [source](https://github.com/enquirer/enquirer/blob/8407dc3579123df5e6e20215078e33bb605b0c37/lib/prompts/input.js)). **Related prompts** * [Confirm](#confirm-prompt) * [Numeral](#numeral-prompt) * [Password](#password-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Invisible Prompt Prompt that takes user input, hides it from the terminal, and returns a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/invisible-prompt.gif" alt="Enquirer Invisible Prompt" width="750"> </p> **Example Usage** ```js const { Invisible } = require('enquirer'); const prompt = new Invisible({ name: 'secret', message: 'What is your secret?' }); prompt.run() .then(answer => console.log('Answer:', { secret: answer })) .catch(console.error); ``` **Related prompts** * [Password](#password-prompt) * [Input](#input-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### List Prompt Prompt that returns a list of values, created by splitting the user input. The default split character is `,` with optional trailing whitespace. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/list-prompt.gif" alt="Enquirer List Prompt" width="750"> </p> **Example Usage** ```js const { List } = require('enquirer'); const prompt = new List({ name: 'keywords', message: 'Type comma-separated keywords' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Sort](#sort-prompt) * [Select](#select-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### MultiSelect Prompt Prompt that allows the user to select multiple items from a list of options. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/multiselect-prompt.gif" alt="Enquirer MultiSelect Prompt" width="750"> </p> **Example Usage** ```js const { MultiSelect } = require('enquirer'); const prompt = new MultiSelect({ name: 'value', message: 'Pick your favorite colors', limit: 7, choices: [ { name: 'aqua', value: '#00ffff' }, { name: 'black', value: '#000000' }, { name: 'blue', value: '#0000ff' }, { name: 'fuchsia', value: '#ff00ff' }, { name: 'gray', value: '#808080' }, { name: 'green', value: '#008000' }, { name: 'lime', value: '#00ff00' }, { name: 'maroon', value: '#800000' }, { name: 'navy', value: '#000080' }, { name: 'olive', value: '#808000' }, { name: 'purple', value: '#800080' }, { name: 'red', value: '#ff0000' }, { name: 'silver', value: '#c0c0c0' }, { name: 'teal', value: '#008080' }, { name: 'white', value: '#ffffff' }, { name: 'yellow', value: '#ffff00' } ] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); // Answer: ['aqua', 'blue', 'fuchsia'] ``` **Example key-value pairs** Optionally, pass a `result` function and use the `.map` method to return an object of key-value pairs of the selected names and values: [example](./examples/multiselect/option-result.js) ```js const { MultiSelect } = require('enquirer'); const prompt = new MultiSelect({ name: 'value', message: 'Pick your favorite colors', limit: 7, choices: [ { name: 'aqua', value: '#00ffff' }, { name: 'black', value: '#000000' }, { name: 'blue', value: '#0000ff' }, { name: 'fuchsia', value: '#ff00ff' }, { name: 'gray', value: '#808080' }, { name: 'green', value: '#008000' }, { name: 'lime', value: '#00ff00' }, { name: 'maroon', value: '#800000' }, { name: 'navy', value: '#000080' }, { name: 'olive', value: '#808000' }, { name: 'purple', value: '#800080' }, { name: 'red', value: '#ff0000' }, { name: 'silver', value: '#c0c0c0' }, { name: 'teal', value: '#008080' }, { name: 'white', value: '#ffffff' }, { name: 'yellow', value: '#ffff00' } ], result(names) { return this.map(names); } }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); // Answer: { aqua: '#00ffff', blue: '#0000ff', fuchsia: '#ff00ff' } ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Numeral Prompt Prompt that takes a number as input. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/numeral-prompt.gif" alt="Enquirer Numeral Prompt" width="750"> </p> **Example Usage** ```js const { NumberPrompt } = require('enquirer'); const prompt = new NumberPrompt({ name: 'number', message: 'Please enter a number' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Confirm](#confirm-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Password Prompt Prompt that takes user input and masks it in the terminal. Also see the [invisible prompt](#invisible-prompt) <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/password-prompt.gif" alt="Enquirer Password Prompt" width="750"> </p> **Example Usage** ```js const { Password } = require('enquirer'); const prompt = new Password({ name: 'password', message: 'What is your password?' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Invisible](#invisible-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Quiz Prompt Prompt that allows the user to play multiple-choice quiz questions. <p align="center"> <img src="https://user-images.githubusercontent.com/13731210/61567561-891d4780-aa6f-11e9-9b09-3d504abd24ed.gif" alt="Enquirer Quiz Prompt" width="750"> </p> **Example Usage** ```js const { Quiz } = require('enquirer'); const prompt = new Quiz({ name: 'countries', message: 'How many countries are there in the world?', choices: ['165', '175', '185', '195', '205'], correctChoice: 3 }); prompt .run() .then(answer => { if (answer.correct) { console.log('Correct!'); } else { console.log(`Wrong! Correct answer is ${answer.correctAnswer}`); } }) .catch(console.error); ``` **Quiz Options** | Option | Type | Required | Description | | ----------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------------------ | | `choices` | `array` | Yes | The list of possible answers to the quiz question. | | `correctChoice`| `number` | Yes | Index of the correct choice from the `choices` array. | **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Survey Prompt Prompt that allows the user to provide feedback for a list of questions. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/survey-prompt.gif" alt="Enquirer Survey Prompt" width="750"> </p> **Example Usage** ```js const { Survey } = require('enquirer'); const prompt = new Survey({ name: 'experience', message: 'Please rate your experience', scale: [ { name: '1', message: 'Strongly Disagree' }, { name: '2', message: 'Disagree' }, { name: '3', message: 'Neutral' }, { name: '4', message: 'Agree' }, { name: '5', message: 'Strongly Agree' } ], margin: [0, 0, 2, 1], choices: [ { name: 'interface', message: 'The website has a friendly interface.' }, { name: 'navigation', message: 'The website is easy to navigate.' }, { name: 'images', message: 'The website usually has good images.' }, { name: 'upload', message: 'The website makes it easy to upload images.' }, { name: 'colors', message: 'The website has a pleasing color palette.' 
} ] }); prompt.run() .then(value => console.log('ANSWERS:', value)) .catch(console.error); ``` **Related prompts** * [Scale](#scale-prompt) * [Snippet](#snippet-prompt) * [Select](#select-prompt) *** ### Scale Prompt A more compact version of the [Survey prompt](#survey-prompt), the Scale prompt allows the user to quickly provide feedback using a [Likert Scale](https://en.wikipedia.org/wiki/Likert_scale). <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/scale-prompt.gif" alt="Enquirer Scale Prompt" width="750"> </p> **Example Usage** ```js const { Scale } = require('enquirer'); const prompt = new Scale({ name: 'experience', message: 'Please rate your experience', scale: [ { name: '1', message: 'Strongly Disagree' }, { name: '2', message: 'Disagree' }, { name: '3', message: 'Neutral' }, { name: '4', message: 'Agree' }, { name: '5', message: 'Strongly Agree' } ], margin: [0, 0, 2, 1], choices: [ { name: 'interface', message: 'The website has a friendly interface.', initial: 2 }, { name: 'navigation', message: 'The website is easy to navigate.', initial: 2 }, { name: 'images', message: 'The website usually has good images.', initial: 2 }, { name: 'upload', message: 'The website makes it easy to upload images.', initial: 2 }, { name: 'colors', message: 'The website has a pleasing color palette.', initial: 2 } ] }); prompt.run() .then(value => console.log('ANSWERS:', value)) .catch(console.error); ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Select Prompt Prompt that allows the user to select from a list of options. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/select-prompt.gif" alt="Enquirer Select Prompt" width="750"> </p> **Example Usage** ```js const { Select } = require('enquirer'); const prompt = new Select({ name: 'color', message: 'Pick a flavor', choices: ['apple', 'grape', 'watermelon', 'cherry', 'orange'] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [MultiSelect](#multiselect-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Sort Prompt Prompt that allows the user to sort items in a list. **Example** In this [example](https://github.com/enquirer/enquirer/raw/master/examples/sort/prompt.js), custom styling is applied to the returned values to make it easier to see what's happening. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/sort-prompt.gif" alt="Enquirer Sort Prompt" width="750"> </p> **Example Usage** ```js const colors = require('ansi-colors'); const { Sort } = require('enquirer'); const prompt = new Sort({ name: 'colors', message: 'Sort the colors in order of preference', hint: 'Top is best, bottom is worst', numbered: true, choices: ['red', 'white', 'green', 'cyan', 'yellow'].map(n => ({ name: n, message: colors[n](n) })) }); prompt.run() .then(function(answer = []) { console.log(answer); console.log('Your preferred order of colors is:'); console.log(answer.map(key => colors[key](key)).join('\n')); }) .catch(console.error); ``` **Related prompts** * [List](#list-prompt) * [Select](#select-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Snippet Prompt Prompt that allows the user to replace placeholders in a snippet of code or text. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/snippet-prompt.gif" alt="Prompts" width="750"> </p> **Example Usage** ```js const semver = require('semver'); const { Snippet } = require('enquirer'); const prompt = new Snippet({ name: 'username', message: 'Fill out the fields in package.json', required: true, fields: [ { name: 'author_name', message: 'Author Name' }, { name: 'version', validate(value, state, item, index) { if (item && item.name === 'version' && !semver.valid(value)) { return prompt.styles.danger('version should be a valid semver value'); } return true; } } ], template: `{ "name": "\${name}", "description": "\${description}", "version": "\${version}", "homepage": "https://github.com/\${username}/\${name}", "author": "\${author_name} (https://github.com/\${username})", "repository": "\${username}/\${name}", "license": "\${license:ISC}" } ` }); prompt.run() .then(answer => console.log('Answer:', answer.result)) .catch(console.error); ``` **Related prompts** * [Survey](#survey-prompt) * [AutoComplete](#autocomplete-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Toggle Prompt Prompt that allows the user to toggle between two values then returns `true` or `false`. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/toggle-prompt.gif" alt="Enquirer Toggle Prompt" width="750"> </p> **Example Usage** ```js const { Toggle } = require('enquirer'); const prompt = new Toggle({ message: 'Want to answer?', enabled: 'Yep', disabled: 'Nope' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Confirm](#confirm-prompt) * [Input](#input-prompt) * [Sort](#sort-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Prompt Types There are 5 (soon to be 6!) type classes: * [ArrayPrompt](#arrayprompt) - [Options](#options) - [Properties](#properties) - [Methods](#methods) - [Choices](#choices) - [Defining choices](#defining-choices) - [Choice properties](#choice-properties) - [Related prompts](#related-prompts) * [AuthPrompt](#authprompt) * [BooleanPrompt](#booleanprompt) * DatePrompt (Coming Soon!) * [NumberPrompt](#numberprompt) * [StringPrompt](#stringprompt) Each type is a low-level class that may be used as a starting point for creating higher level prompts. Continue reading to learn how. ### ArrayPrompt The `ArrayPrompt` class is used for creating prompts that display a list of choices in the terminal. 
For example, Enquirer uses this class as the basis for the [Select](#select-prompt) and [Survey](#survey-prompt) prompts.

#### Options

In addition to the [options](#options) available to all prompts, Array prompts also support the following options.

| **Option**  | **Required?** | **Type**         | **Description**                                                                                                          |
| ----------- | ------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------ |
| `autofocus` | `no`          | `string\|number` | The index or name of the choice that should have focus when the prompt loads. Only one choice may have focus at a time. |
| `stdin`     | `no`          | `stream`         | The input stream to use for emitting keypress events. Defaults to `process.stdin`.                                       |
| `stdout`    | `no`          | `stream`         | The output stream to use for writing the prompt to the terminal. Defaults to `process.stdout`.                           |

#### Properties

Array prompts have the following instance properties and getters.

| **Property name** | **Type** | **Description** |
| ----------------- | -------- | --------------- |
| `choices`  | `array`  | Array of choices that have been normalized from the choices passed on the prompt options. |
| `cursor`   | `number` | Position of the cursor relative to the _user input (string)_. |
| `enabled`  | `array`  | Returns an array of enabled choices. |
| `focused`  | `object` | The currently selected choice in the visible list of choices, equivalent to `prompt.choices[prompt.index]`. This is similar to the concept of focus in HTML and CSS. Focused choices are always visible (on-screen). When a list of choices is longer than the list of visible choices, and an off-screen choice is _focused_, the list will scroll to the focused choice and re-render. |
| `index`    | `number` | Position of the pointer in the _visible list (array) of choices_. |
| `limit`    | `number` | The number of choices to display on-screen. |
| `selected` | `array`  | Either a list of enabled choices (when `options.multiple` is true) or the currently focused choice. |
| `visible`  | `array`  | The choices that are currently visible on-screen. |

#### Methods

| **Method**    | **Description** |
| ------------- | --------------- |
| `pointer()`   | Returns the visual symbol used to identify the choice that currently has focus. The `❯` symbol is often used for this. The pointer is not always visible, as with the `autocomplete` prompt. |
| `indicator()` | Returns the visual symbol that indicates whether or not a choice is checked/enabled. |
| `focus()`     | Sets focus on a choice, if it can be focused. |

#### Choices

Array prompts support the `choices` option, which is the array of choices users will be able to select from when rendered in the terminal.
**Type**: `string|object` **Example** ```js const { prompt } = require('enquirer'); const questions = [{ type: 'select', name: 'color', message: 'Favorite color?', initial: 1, choices: [ { name: 'red', message: 'Red', value: '#ff0000' }, //<= choice object { name: 'green', message: 'Green', value: '#00ff00' }, //<= choice object { name: 'blue', message: 'Blue', value: '#0000ff' } //<= choice object ] }]; let answers = await prompt(questions); console.log('Answer:', answers.color); ``` #### Defining choices Whether defined as a string or object, choices are normalized to the following interface: ```js { name: string; message: string | undefined; value: string | undefined; hint: string | undefined; disabled: boolean | string | undefined; } ``` **Example** ```js const question = { name: 'fruit', message: 'Favorite fruit?', choices: ['Apple', 'Orange', 'Raspberry'] }; ``` Normalizes to the following when the prompt is run: ```js const question = { name: 'fruit', message: 'Favorite fruit?', choices: [ { name: 'Apple', message: 'Apple', value: 'Apple' }, { name: 'Orange', message: 'Orange', value: 'Orange' }, { name: 'Raspberry', message: 'Raspberry', value: 'Raspberry' } ] }; ``` #### Choice properties The following properties are supported on `choice` objects. | **Option** | **Type** | **Description** | | ----------- | ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `name` | `string` | The unique key to identify a choice | | `message` | `string` | The message to display in the terminal. `name` is used when this is undefined. | | `value` | `string` | Value to associate with the choice. Useful for creating key-value pairs from user choices. `name` is used when this is undefined. | | `choices` | `array` | Array of "child" choices. | | `hint` | `string` | Help message to display next to a choice. | | `role` | `string` | Determines how the choice will be displayed. Currently the only role supported is `separator`. Additional roles may be added in the future (like `heading`, etc). Please create a [feature request] | | `enabled` | `boolean` | Enabled a choice by default. This is only supported when `options.multiple` is true or on prompts that support multiple choices, like [MultiSelect](#-multiselect). | | `disabled` | `boolean\|string` | Disable a choice so that it cannot be selected. This value may either be `true`, `false`, or a message to display. | | `indicator` | `string\|function` | Custom indicator to render for a choice (like a check or radio button). | #### Related prompts * [AutoComplete](#autocomplete-prompt) * [Form](#form-prompt) * [MultiSelect](#multiselect-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) *** ### AuthPrompt The `AuthPrompt` is used to create prompts to log in user using any authentication method. For example, Enquirer uses this class as the basis for the [BasicAuth Prompt](#basicauth-prompt). You can also find prompt examples in `examples/auth/` folder that utilizes `AuthPrompt` to create OAuth based authentication prompt or a prompt that authenticates using time-based OTP, among others. `AuthPrompt` has a factory function that creates an instance of `AuthPrompt` class and it expects an `authenticate` function, as an argument, which overrides the `authenticate` function of the `AuthPrompt` class. 
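For instance, the factory can be used like this (a minimal sketch; the password comparison is only a made-up stand-in for real authentication logic, and the full runnable example appears under Choices below):

```js
const { AuthPrompt } = require('enquirer');

// AuthPrompt.create(fn) returns a new prompt class whose authenticate() method is `fn`.
const CustomAuthPrompt = AuthPrompt.create(function authenticate(value, state) {
  return value.password === this.options.password; // placeholder check
});
```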
#### Methods | **Method** | **Description** | | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `authenticate()` | Contain all the authentication logic. This function should be overridden to implement custom authentication logic. The default `authenticate` function throws an error if no other function is provided. | #### Choices Auth prompt supports the `choices` option, which is the similar to the choices used in [Form Prompt](#form-prompt). **Example** ```js const { AuthPrompt } = require('enquirer'); function authenticate(value, state) { if (value.username === this.options.username && value.password === this.options.password) { return true; } return false; } const CustomAuthPrompt = AuthPrompt.create(authenticate); const prompt = new CustomAuthPrompt({ name: 'password', message: 'Please enter your password', username: 'rajat-sr', password: '1234567', choices: [ { name: 'username', message: 'username' }, { name: 'password', message: 'password' } ] }); prompt .run() .then(answer => console.log('Authenticated?', answer)) .catch(console.error); ``` #### Related prompts * [BasicAuth Prompt](#basicauth-prompt) *** ### BooleanPrompt The `BooleanPrompt` class is used for creating prompts that display and return a boolean value. ```js const { BooleanPrompt } = require('enquirer'); const prompt = new BooleanPrompt({ header: '========================', message: 'Do you love enquirer?', footer: '========================', }); prompt.run() .then(answer => console.log('Selected:', answer)) .catch(console.error); ``` **Returns**: `boolean` *** ### NumberPrompt The `NumberPrompt` class is used for creating prompts that display and return a numerical value. ```js const { NumberPrompt } = require('enquirer'); const prompt = new NumberPrompt({ header: '************************', message: 'Input the Numbers:', footer: '************************', }); prompt.run() .then(answer => console.log('Numbers are:', answer)) .catch(console.error); ``` **Returns**: `string|number` (number, or number formatted as a string) *** ### StringPrompt The `StringPrompt` class is used for creating prompts that display and return a string value. ```js const { StringPrompt } = require('enquirer'); const prompt = new StringPrompt({ header: '************************', message: 'Input the String:', footer: '************************' }); prompt.run() .then(answer => console.log('String is:', answer)) .catch(console.error); ``` **Returns**: `string` <br> ## ❯ Custom prompts With Enquirer 2.0, custom prompts are easier than ever to create and use. **How do I create a custom prompt?** Custom prompts are created by extending either: * Enquirer's `Prompt` class * one of the built-in [prompts](#-prompts), or * low-level [types](#-types). <!-- Example: HaiKarate Custom Prompt --> ```js const { Prompt } = require('enquirer'); class HaiKarate extends Prompt { constructor(options = {}) { super(options); this.value = options.initial || 0; this.cursorHide(); } up() { this.value++; this.render(); } down() { this.value--; this.render(); } render() { this.clear(); // clear previously rendered prompt from the terminal this.write(`${this.state.message}: ${this.value}`); } } // Use the prompt by creating an instance of your custom prompt class. 
const prompt = new HaiKarate({ message: 'How many sprays do you want?', initial: 10 }); prompt.run() .then(answer => console.log('Sprays:', answer)) .catch(console.error); ``` If you want to be able to specify your prompt by `type` so that it may be used alongside other prompts, you will need to first create an instance of `Enquirer`. ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); ``` Then use the `.register()` method to add your custom prompt. ```js enquirer.register('haikarate', HaiKarate); ``` Now you can do the following when defining "questions". ```js let spritzer = require('cologne-drone'); let answers = await enquirer.prompt([ { type: 'haikarate', name: 'cologne', message: 'How many sprays do you need?', initial: 10, async onSubmit(name, value) { await spritzer.activate(value); //<= activate drone return value; } } ]); ``` <br> ## ❯ Key Bindings ### All prompts These key combinations may be used with all prompts. | **command** | **description** | | -------------------------------- | -------------------------------------- | | <kbd>ctrl</kbd> + <kbd>c</kbd> | Cancel the prompt. | | <kbd>ctrl</kbd> + <kbd>g</kbd> | Reset the prompt to its initial state. | <br> ### Move cursor These combinations may be used on prompts that support user input (eg. [input prompt](#input-prompt), [password prompt](#password-prompt), and [invisible prompt](#invisible-prompt)). | **command** | **description** | | ------------------------------ | ---------------------------------------- | | <kbd>left</kbd> | Move the cursor back one character. | | <kbd>right</kbd> | Move the cursor forward one character. | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move cursor to the start of the line | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move cursor to the end of the line | | <kbd>ctrl</kbd> + <kbd>b</kbd> | Move cursor back one character | | <kbd>ctrl</kbd> + <kbd>f</kbd> | Move cursor forward one character | | <kbd>ctrl</kbd> + <kbd>x</kbd> | Toggle between first and cursor position | <br> ### Edit Input These key combinations may be used on prompts that support user input (eg. [input prompt](#input-prompt), [password prompt](#password-prompt), and [invisible prompt](#invisible-prompt)). | **command** | **description** | | ------------------------------ | ---------------------------------------- | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move cursor to the start of the line | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move cursor to the end of the line | | <kbd>ctrl</kbd> + <kbd>b</kbd> | Move cursor back one character | | <kbd>ctrl</kbd> + <kbd>f</kbd> | Move cursor forward one character | | <kbd>ctrl</kbd> + <kbd>x</kbd> | Toggle between first and cursor position | <br> | **command (Mac)** | **command (Windows)** | **description** | | ----------------------------------- | -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | | <kbd>delete</kbd> | <kbd>backspace</kbd> | Delete one character to the left. | | <kbd>fn</kbd> + <kbd>delete</kbd> | <kbd>delete</kbd> | Delete one character to the right. | | <kbd>option</kbd> + <kbd>up</kbd> | <kbd>alt</kbd> + <kbd>up</kbd> | Scroll to the previous item in history ([Input prompt](#input-prompt) only, when [history is enabled](examples/input/option-history.js)). 
| | <kbd>option</kbd> + <kbd>down</kbd> | <kbd>alt</kbd> + <kbd>down</kbd> | Scroll to the next item in history ([Input prompt](#input-prompt) only, when [history is enabled](examples/input/option-history.js)). | ### Select choices These key combinations may be used on prompts that support _multiple_ choices, such as the [multiselect prompt](#multiselect-prompt), or the [select prompt](#select-prompt) when the `multiple` options is true. | **command** | **description** | | ----------------- | -------------------------------------------------------------------------------------------------------------------- | | <kbd>space</kbd> | Toggle the currently selected choice when `options.multiple` is true. | | <kbd>number</kbd> | Move the pointer to the choice at the given index. Also toggles the selected choice when `options.multiple` is true. | | <kbd>a</kbd> | Toggle all choices to be enabled or disabled. | | <kbd>i</kbd> | Invert the current selection of choices. | | <kbd>g</kbd> | Toggle the current choice group. | <br> ### Hide/show choices | **command** | **description** | | ------------------------------- | ---------------------------------------------- | | <kbd>fn</kbd> + <kbd>up</kbd> | Decrease the number of visible choices by one. | | <kbd>fn</kbd> + <kbd>down</kbd> | Increase the number of visible choices by one. | <br> ### Move/lock Pointer | **command** | **description** | | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------- | | <kbd>number</kbd> | Move the pointer to the choice at the given index. Also toggles the selected choice when `options.multiple` is true. | | <kbd>up</kbd> | Move the pointer up. | | <kbd>down</kbd> | Move the pointer down. | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move the pointer to the first _visible_ choice. | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move the pointer to the last _visible_ choice. | | <kbd>shift</kbd> + <kbd>up</kbd> | Scroll up one choice without changing pointer position (locks the pointer while scrolling). | | <kbd>shift</kbd> + <kbd>down</kbd> | Scroll down one choice without changing pointer position (locks the pointer while scrolling). | <br> | **command (Mac)** | **command (Windows)** | **description** | | -------------------------------- | --------------------- | ---------------------------------------------------------- | | <kbd>fn</kbd> + <kbd>left</kbd> | <kbd>home</kbd> | Move the pointer to the first choice in the choices array. | | <kbd>fn</kbd> + <kbd>right</kbd> | <kbd>end</kbd> | Move the pointer to the last choice in the choices array. | <br> ## ❯ Release History Please see [CHANGELOG.md](CHANGELOG.md). ## ❯ Performance ### System specs MacBook Pro, Intel Core i7, 2.5 GHz, 16 GB. ### Load time Time it takes for the module to load the first time (average of 3 runs): ``` enquirer: 4.013ms inquirer: 286.717ms ``` <br> ## ❯ About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Todo We're currently working on documentation for the following items. Please star and watch the repository for updates! 
* [ ] Customizing symbols * [ ] Customizing styles (palette) * [ ] Customizing rendered input * [ ] Customizing returned values * [ ] Customizing key bindings * [ ] Question validation * [ ] Choice validation * [ ] Skipping questions * [ ] Async choices * [ ] Async timers: loaders, spinners and other animations * [ ] Links to examples </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` ```sh $ yarn && yarn test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> #### Contributors | **Commits** | **Contributor** | | --- | --- | | 283 | [jonschlinkert](https://github.com/jonschlinkert) | | 82 | [doowb](https://github.com/doowb) | | 32 | [rajat-sr](https://github.com/rajat-sr) | | 20 | [318097](https://github.com/318097) | | 15 | [g-plane](https://github.com/g-plane) | | 12 | [pixelass](https://github.com/pixelass) | | 5 | [adityavyas611](https://github.com/adityavyas611) | | 5 | [satotake](https://github.com/satotake) | | 3 | [tunnckoCore](https://github.com/tunnckoCore) | | 3 | [Ovyerus](https://github.com/Ovyerus) | | 3 | [sw-yx](https://github.com/sw-yx) | | 2 | [DanielRuf](https://github.com/DanielRuf) | | 2 | [GabeL7r](https://github.com/GabeL7r) | | 1 | [AlCalzone](https://github.com/AlCalzone) | | 1 | [hipstersmoothie](https://github.com/hipstersmoothie) | | 1 | [danieldelcore](https://github.com/danieldelcore) | | 1 | [ImgBotApp](https://github.com/ImgBotApp) | | 1 | [jsonkao](https://github.com/jsonkao) | | 1 | [knpwrs](https://github.com/knpwrs) | | 1 | [yeskunall](https://github.com/yeskunall) | | 1 | [mischah](https://github.com/mischah) | | 1 | [renarsvilnis](https://github.com/renarsvilnis) | | 1 | [sbugert](https://github.com/sbugert) | | 1 | [stephencweiss](https://github.com/stephencweiss) | | 1 | [skellock](https://github.com/skellock) | | 1 | [whxaxes](https://github.com/whxaxes) | #### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) #### Credit Thanks to [derhuerst](https://github.com/derhuerst), creator of prompt libraries such as [prompt-skeleton](https://github.com/derhuerst/prompt-skeleton), which influenced some of the concepts we used in our prompts. #### License Copyright © 2018-present, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). # minizlib A fast zlib stream built on [minipass](http://npm.im/minipass) and Node.js's zlib binding. This module was created to serve the needs of [node-tar](http://npm.im/tar) and [minipass-fetch](http://npm.im/minipass-fetch). Brotli is supported in versions of node with a Brotli binding. ## How does this differ from the streams in `require('zlib')`? First, there are no convenience methods to compress or decompress a buffer. If you want those, use the built-in `zlib` module. This is only streams. 
That being said, Minipass streams make it fairly easy to use as one-liners: `new zlib.Deflate().end(data).read()` will return the deflate-compressed result.

This module compresses and decompresses the data as fast as you feed it in. It is synchronous, and runs on the main process thread. Zlib and Brotli operations can be CPU-intensive, but they're very fast, and doing it this way means much less bookkeeping and artificial deferral.

Node's built-in zlib streams are built on top of `stream.Transform`. They do the maximally safe thing with respect to consistent asynchrony, buffering, and backpressure.

See [Minipass](http://npm.im/minipass) for more on the differences between Node.js core streams and Minipass streams, and the convenience methods provided by that class.

## Classes

- Deflate
- Inflate
- Gzip
- Gunzip
- DeflateRaw
- InflateRaw
- Unzip
- BrotliCompress (Node v10 and higher)
- BrotliDecompress (Node v10 and higher)

## USAGE

```js
const zlib = require('minizlib')
const input = sourceOfCompressedData()
const decode = new zlib.BrotliDecompress()
const output = whereToWriteTheDecodedData()
input.pipe(decode).pipe(output)
```

## REPRODUCIBLE BUILDS

To create reproducible gzip compressed files across different operating systems, set `portable: true` in the options. This causes minizlib to set the `OS` indicator in byte 9 of the extended gzip header to `0xFF` for 'unknown'.

# inherits

Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor).

This package exports the standard `inherits` from the node.js `util` module in node environments, but also provides an alternative, browser-friendly implementation through the [browser field](https://gist.github.com/shtylman/4339901). The alternative implementation is a literal copy of the standard one, kept in a standalone module to avoid requiring `util`. It also has a shim for old browsers with no `Object.create` support.

Because you still get the standard `inherits` implementation in a node.js environment, this package lets bundlers such as [browserify](https://github.com/substack/node-browserify) avoid including the full `util` package in your client code when `inherits` is all you need. That matters, because the browser shim for the `util` package is large and `inherits` is often the only function you need from it.

It's recommended to use this package instead of `require('util').inherits` for any code that may run in the browser as well as in node.js.

## usage

```js
var inherits = require('inherits');
// then use exactly as the standard one
```

## note on version ~1.0

Version ~1.0 had a completely different motivation and is compatible with neither 2.0 nor the standard node.js `inherits`. If you are using version ~1.0 and planning to switch to ~2.0, be careful:

* the new version uses `super_` instead of `super` for referencing the superclass
* the new version overwrites the current prototype, while the old one preserves any existing fields on it

# fast-deep-equal

The fastest deep equal with ES6 Map, Set and Typed arrays support.
[![Build Status](https://travis-ci.org/epoberezkin/fast-deep-equal.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-deep-equal) [![npm](https://img.shields.io/npm/v/fast-deep-equal.svg)](https://www.npmjs.com/package/fast-deep-equal) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-deep-equal/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-deep-equal?branch=master) ## Install ```bash npm install fast-deep-equal ``` ## Features - ES5 compatible - works in node.js (8+) and browsers (IE9+) - checks equality of Date and RegExp objects by value. ES6 equal (`require('fast-deep-equal/es6')`) also supports: - Maps - Sets - Typed arrays ## Usage ```javascript var equal = require('fast-deep-equal'); console.log(equal({foo: 'bar'}, {foo: 'bar'})); // true ``` To support ES6 Maps, Sets and Typed arrays equality use: ```javascript var equal = require('fast-deep-equal/es6'); console.log(equal(Int16Array([1, 2]), Int16Array([1, 2]))); // true ``` To use with React (avoiding the traversal of React elements' _owner property that contains circular references and is not needed when comparing the elements - borrowed from [react-fast-compare](https://github.com/FormidableLabs/react-fast-compare)): ```javascript var equal = require('fast-deep-equal/react'); var equal = require('fast-deep-equal/es6/react'); ``` ## Performance benchmark Node.js v12.6.0: ``` fast-deep-equal x 261,950 ops/sec ±0.52% (89 runs sampled) fast-deep-equal/es6 x 212,991 ops/sec ±0.34% (92 runs sampled) fast-equals x 230,957 ops/sec ±0.83% (85 runs sampled) nano-equal x 187,995 ops/sec ±0.53% (88 runs sampled) shallow-equal-fuzzy x 138,302 ops/sec ±0.49% (90 runs sampled) underscore.isEqual x 74,423 ops/sec ±0.38% (89 runs sampled) lodash.isEqual x 36,637 ops/sec ±0.72% (90 runs sampled) deep-equal x 2,310 ops/sec ±0.37% (90 runs sampled) deep-eql x 35,312 ops/sec ±0.67% (91 runs sampled) ramda.equals x 12,054 ops/sec ±0.40% (91 runs sampled) util.isDeepStrictEqual x 46,440 ops/sec ±0.43% (90 runs sampled) assert.deepStrictEqual x 456 ops/sec ±0.71% (88 runs sampled) The fastest is fast-deep-equal ``` To run benchmark (requires node.js 6+): ```bash npm run benchmark ``` __Please note__: this benchmark runs against the available test cases. To choose the most performant library for your application, it is recommended to benchmark against your data and to NOT expect this benchmark to reflect the performance difference in your application. ## Enterprise support fast-deep-equal package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-deep-equal?utm_source=npm-fast-deep-equal&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. 
## License [MIT](https://github.com/epoberezkin/fast-deep-equal/blob/master/LICENSE) [![Build Status](https://travis-ci.org/isaacs/rimraf.svg?branch=master)](https://travis-ci.org/isaacs/rimraf) [![Dependency Status](https://david-dm.org/isaacs/rimraf.svg)](https://david-dm.org/isaacs/rimraf) [![devDependency Status](https://david-dm.org/isaacs/rimraf/dev-status.svg)](https://david-dm.org/isaacs/rimraf#info=devDependencies) The [UNIX command](http://en.wikipedia.org/wiki/Rm_(Unix)) `rm -rf` for node. Install with `npm install rimraf`, or just drop rimraf.js somewhere. ## API `rimraf(f, [opts], callback)` The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with `opts.disableGlob` (defaults to `false`). This might be handy, for instance, if you have filenames that contain globbing wildcard characters. The callback will be called with an error if there is one. Certain errors are handled for you: * Windows: `EBUSY` and `ENOTEMPTY` - rimraf will back off a maximum of `opts.maxBusyTries` times before giving up, adding 100ms of wait between each attempt. The default `maxBusyTries` is 3. * `ENOENT` - If the file doesn't exist, rimraf will return successfully, since your desired outcome is already the case. * `EMFILE` - Since `readdir` requires opening a file descriptor, it's possible to hit `EMFILE` if too many file descriptors are in use. In the sync case, there's nothing to be done for this. But in the async case, rimraf will gradually back off with timeouts up to `opts.emfileWait` ms, which defaults to 1000. ## options * unlink, chmod, stat, lstat, rmdir, readdir, unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync In order to use a custom file system library, you can override specific fs functions on the options object. If any of these functions are present on the options object, then the supplied function will be used instead of the default fs method. Sync methods are only relevant for `rimraf.sync()`, of course. For example: ```javascript var myCustomFS = require('some-custom-fs') rimraf('some-thing', myCustomFS, callback) ``` * maxBusyTries If an `EBUSY`, `ENOTEMPTY`, or `EPERM` error code is encountered on Windows systems, then rimraf will retry with a linear backoff wait of 100ms longer on each try. The default maxBusyTries is 3. Only relevant for async usage. * emfileWait If an `EMFILE` error is encountered, then rimraf will retry repeatedly with a linear backoff of 1ms longer on each try, until the timeout counter hits this max. The default limit is 1000. If you repeatedly encounter `EMFILE` errors, then consider using [graceful-fs](http://npm.im/graceful-fs) in your program. Only relevant for async usage. * glob Set to `false` to disable [glob](http://npm.im/glob) pattern matching. Set to an object to pass options to the glob module. The default glob options are `{ nosort: true, silent: true }`. Glob version 6 is used in this module. Relevant for both sync and async usage. * disableGlob Set to any non-falsey value to disable globbing entirely. (Equivalent to setting `glob: false`.) ## rimraf.sync It can remove stuff synchronously, too. But that's not so good. Use the async API. It's better. ## CLI If installed with `npm install rimraf -g` it can be used as a global command `rimraf <path> [<path> ...]` which is useful for cross platform support. ## mkdirp If you need to create a directory recursively, check out [mkdirp](https://github.com/substack/node-mkdirp). 
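To tie the rimraf API documentation above together, here is a minimal usage sketch; the paths are hypothetical, and the options shown are only those discussed above:

```js
var rimraf = require('rimraf');

// Asynchronous: the first argument is treated as a glob pattern by default.
rimraf('build/**/*.tmp', function (err) {
  if (err) throw err;
  console.log('temp files removed');
});

// Disable globbing when a filename may contain wildcard characters.
rimraf('weird[name].txt', { disableGlob: true }, function (err) {
  if (err) throw err;
});

// Synchronous variant (prefer the async API where possible).
rimraf.sync('build');
```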
# color-convert [![Build Status](https://travis-ci.org/Qix-/color-convert.svg?branch=master)](https://travis-ci.org/Qix-/color-convert) Color-convert is a color conversion library for JavaScript and node. It converts all ways between `rgb`, `hsl`, `hsv`, `hwb`, `cmyk`, `ansi`, `ansi16`, `hex` strings, and CSS `keyword`s (will round to closest): ```js var convert = require('color-convert'); convert.rgb.hsl(140, 200, 100); // [96, 48, 59] convert.keyword.rgb('blue'); // [0, 0, 255] var rgbChannels = convert.rgb.channels; // 3 var cmykChannels = convert.cmyk.channels; // 4 var ansiChannels = convert.ansi16.channels; // 1 ``` # Install ```console $ npm install color-convert ``` # API Simply get the property of the _from_ and _to_ conversion that you're looking for. All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on `.raw` to the function. All 'from' functions have a hidden property called `.channels` that indicates the number of channels the function expects (not including alpha). ```js var convert = require('color-convert'); // Hex to LAB convert.hex.lab('DEADBF'); // [ 76, 21, -2 ] convert.hex.lab.raw('DEADBF'); // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ] // RGB to CMYK convert.rgb.cmyk(167, 255, 4); // [ 35, 0, 98, 0 ] convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ] ``` ### Arrays All functions that accept multiple arguments also support passing an array. Note that this does **not** apply to functions that convert from a color that only requires one value (e.g. `keyword`, `ansi256`, `hex`, etc.) ```js var convert = require('color-convert'); convert.rgb.hex(123, 45, 67); // '7B2D43' convert.rgb.hex([123, 45, 67]); // '7B2D43' ``` ## Routing Conversions that don't have an _explicitly_ defined conversion (in [conversions.js](conversions.js)), but can be converted by means of sub-conversions (e.g. XYZ -> **RGB** -> CMYK), are automatically routed together. This allows just about any color model supported by `color-convert` to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> **LAB** -> **XYZ** -> **RGB** -> Hex). Keep in mind that extensive conversions _may_ result in a loss of precision, and exist only to be complete. For a list of "direct" (single-step) conversions, see [conversions.js](conversions.js). # Contribute If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request. # License Copyright &copy; 2011-2016, Heather Arthur and Josh Junon. Licensed under the [MIT License](LICENSE). ### esutils [![Build Status](https://secure.travis-ci.org/estools/esutils.svg)](http://travis-ci.org/estools/esutils) esutils ([esutils](http://github.com/estools/esutils)) is utility box for ECMAScript language tools. ### API ### ast #### ast.isExpression(node) Returns true if `node` is an Expression as defined in ECMA262 edition 5.1 section [11](https://es5.github.io/#x11). #### ast.isStatement(node) Returns true if `node` is a Statement as defined in ECMA262 edition 5.1 section [12](https://es5.github.io/#x12). #### ast.isIterationStatement(node) Returns true if `node` is an IterationStatement as defined in ECMA262 edition 5.1 section [12.6](https://es5.github.io/#x12.6). 
#### ast.isSourceElement(node) Returns true if `node` is a SourceElement as defined in ECMA262 edition 5.1 section [14](https://es5.github.io/#x14). #### ast.trailingStatement(node) Returns `Statement?` if `node` has trailing `Statement`. ```js if (cond) consequent; ``` When taking this `IfStatement`, returns `consequent;` statement. #### ast.isProblematicIfStatement(node) Returns true if `node` is a problematic IfStatement. If `node` is a problematic `IfStatement`, `node` cannot be represented as an one on one JavaScript code. ```js { type: 'IfStatement', consequent: { type: 'WithStatement', body: { type: 'IfStatement', consequent: {type: 'EmptyStatement'} } }, alternate: {type: 'EmptyStatement'} } ``` The above node cannot be represented as a JavaScript code, since the top level `else` alternate belongs to an inner `IfStatement`. ### code #### code.isDecimalDigit(code) Return true if provided code is decimal digit. #### code.isHexDigit(code) Return true if provided code is hexadecimal digit. #### code.isOctalDigit(code) Return true if provided code is octal digit. #### code.isWhiteSpace(code) Return true if provided code is white space. White space characters are formally defined in ECMA262. #### code.isLineTerminator(code) Return true if provided code is line terminator. Line terminator characters are formally defined in ECMA262. #### code.isIdentifierStart(code) Return true if provided code can be the first character of ECMA262 Identifier. They are formally defined in ECMA262. #### code.isIdentifierPart(code) Return true if provided code can be the trailing character of ECMA262 Identifier. They are formally defined in ECMA262. ### keyword #### keyword.isKeywordES5(id, strict) Returns `true` if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 sections [7.6.1.1](http://es5.github.io/#x7.6.1.1) and [7.6.1.2](http://es5.github.io/#x7.6.1.2), respectively. If the `strict` flag is truthy, this function additionally checks whether `id` is a Keyword or Future Reserved Word under strict mode. #### keyword.isKeywordES6(id, strict) Returns `true` if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 sections [11.6.2.1](http://ecma-international.org/ecma-262/6.0/#sec-keywords) and [11.6.2.2](http://ecma-international.org/ecma-262/6.0/#sec-future-reserved-words), respectively. If the `strict` flag is truthy, this function additionally checks whether `id` is a Keyword or Future Reserved Word under strict mode. #### keyword.isReservedWordES5(id, strict) Returns `true` if provided identifier string is a Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 section [7.6.1](http://es5.github.io/#x7.6.1). If the `strict` flag is truthy, this function additionally checks whether `id` is a Reserved Word under strict mode. #### keyword.isReservedWordES6(id, strict) Returns `true` if provided identifier string is a Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 section [11.6.2](http://ecma-international.org/ecma-262/6.0/#sec-reserved-words). If the `strict` flag is truthy, this function additionally checks whether `id` is a Reserved Word under strict mode. #### keyword.isRestrictedWord(id) Returns `true` if provided identifier string is one of `eval` or `arguments`. 
They are restricted in strict mode code throughout ECMA262 edition 5.1 and in ECMA262 edition 6 section [12.1.1](http://ecma-international.org/ecma-262/6.0/#sec-identifiers-static-semantics-early-errors). #### keyword.isIdentifierNameES5(id) Return true if provided identifier string is an IdentifierName as specified in ECMA262 edition 5.1 section [7.6](https://es5.github.io/#x7.6). #### keyword.isIdentifierNameES6(id) Return true if provided identifier string is an IdentifierName as specified in ECMA262 edition 6 section [11.6](http://ecma-international.org/ecma-262/6.0/#sec-names-and-keywords). #### keyword.isIdentifierES5(id, strict) Return true if provided identifier string is an Identifier as specified in ECMA262 edition 5.1 section [7.6](https://es5.github.io/#x7.6). If the `strict` flag is truthy, this function additionally checks whether `id` is an Identifier under strict mode. #### keyword.isIdentifierES6(id, strict) Return true if provided identifier string is an Identifier as specified in ECMA262 edition 6 section [12.1](http://ecma-international.org/ecma-262/6.0/#sec-identifiers). If the `strict` flag is truthy, this function additionally checks whether `id` is an Identifier under strict mode. ### License Copyright (C) 2013 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # eslint-visitor-keys [![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys) Constants and utilities about visitor keys to traverse AST. ## 💿 Installation Use [npm] to install. ```bash $ npm install eslint-visitor-keys ``` ### Requirements - [Node.js] 4.0.0 or later. ## 📖 Usage ```js const evk = require("eslint-visitor-keys") ``` ### evk.KEYS > type: `{ [type: string]: string[] | undefined }` Visitor keys. This keys are frozen. This is an object. Keys are the type of [ESTree] nodes. 
Their values are an array of property names which have child nodes. For example: ``` console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"] ``` ### evk.getKeys(node) > type: `(node: object) => string[]` Get the visitor keys of a given AST node. This is similar to `Object.keys(node)` of ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`. This will be used to traverse unknown nodes. For example: ``` const node = { type: "AssignmentExpression", left: { type: "Identifier", name: "foo" }, right: { type: "Literal", value: 0 } } console.log(evk.getKeys(node)) // → ["type", "left", "right"] ``` ### evk.unionWith(additionalKeys) > type: `(additionalKeys: object) => { [type: string]: string[] | undefined }` Make the union set with `evk.KEYS` and the given keys. - The order of keys is, `additionalKeys` is at first, then `evk.KEYS` is concatenated after that. - It removes duplicated keys as keeping the first one. For example: ``` console.log(evk.unionWith({ MethodDefinition: ["decorators"] })) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... } ``` ## 📰 Change log See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases). ## 🍻 Contributing Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/). ### Development commands - `npm test` runs tests and measures code coverage. - `npm run lint` checks source codes with ESLint. - `npm run coverage` opens the code coverage report of the previous test with your default browser. - `npm run release` publishes this package to [npm] registory. [npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree # is-extglob [![NPM version](https://img.shields.io/npm/v/is-extglob.svg?style=flat)](https://www.npmjs.com/package/is-extglob) [![NPM downloads](https://img.shields.io/npm/dm/is-extglob.svg?style=flat)](https://npmjs.org/package/is-extglob) [![Build Status](https://img.shields.io/travis/jonschlinkert/is-extglob.svg?style=flat)](https://travis-ci.org/jonschlinkert/is-extglob) > Returns true if a string has an extglob. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-extglob ``` ## Usage ```js var isExtglob = require('is-extglob'); ``` **True** ```js isExtglob('?(abc)'); isExtglob('@(abc)'); isExtglob('!(abc)'); isExtglob('*(abc)'); isExtglob('+(abc)'); ``` **False** Escaped extglobs: ```js isExtglob('\\?(abc)'); isExtglob('\\@(abc)'); isExtglob('\\!(abc)'); isExtglob('\\*(abc)'); isExtglob('\\+(abc)'); ``` Everything else... ```js isExtglob('foo.js'); isExtglob('!foo.js'); isExtglob('*.js'); isExtglob('**/abc.js'); isExtglob('abc/*.js'); isExtglob('abc/(aaa|bbb).js'); isExtglob('abc/[a-z].js'); isExtglob('abc/{a,b}.js'); isExtglob('abc/?.js'); isExtglob('abc.js'); isExtglob('abc/def/ghi.js'); ``` ## History **v2.0** Adds support for escaping. Escaped exglobs no longer return true. ## About ### Related projects * [has-glob](https://www.npmjs.com/package/has-glob): Returns `true` if an array has a glob pattern. 
| [homepage](https://github.com/jonschlinkert/has-glob "Returns `true` if an array has a glob pattern.") * [is-glob](https://www.npmjs.com/package/is-glob): Returns `true` if the given string looks like a glob pattern or an extglob pattern… [more](https://github.com/jonschlinkert/is-glob) | [homepage](https://github.com/jonschlinkert/is-glob "Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a bet") * [micromatch](https://www.npmjs.com/package/micromatch): Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | [homepage](https://github.com/jonschlinkert/micromatch "Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch.") ### Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Building docs _(This document was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme) (a [verb](https://github.com/verbose/verb) generator), please don't edit the readme directly. Any changes to the readme must be made in [.verb.md](.verb.md).)_ To generate the readme and API documentation with [verb](https://github.com/verbose/verb): ```sh $ npm install -g verb verb-generate-readme && verb ``` ### Running tests Install dev dependencies: ```sh $ npm install -d && npm test ``` ### Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](http://twitter.com/jonschlinkert) ### License Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT license](https://github.com/jonschlinkert/is-extglob/blob/master/LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.1.31, on October 12, 2016._ The AssemblyScript Runtime ========================== The runtime provides the functionality necessary to dynamically allocate and deallocate memory of objects, arrays and buffers, as well as collect garbage that is no longer used. The current implementation is either a Two-Color Mark & Sweep (TCMS) garbage collector that must be called manually when the execution stack is unwound or an Incremental Tri-Color Mark & Sweep (ITCMS) garbage collector that is fully automated with a shadow stack, implemented on top of a Two-Level Segregate Fit (TLSF) memory manager. It's not designed to be the fastest of its kind, but intentionally focuses on simplicity and ease of integration until we can replace it with the real deal, i.e. Wasm GC. Interface --------- ### Garbage collector / `--exportRuntime` * **__new**(size: `usize`, id: `u32` = 0): `usize`<br /> Dynamically allocates a GC object of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. GC-allocated objects cannot be used with `__realloc` and `__free`. * **__pin**(ptr: `usize`): `usize`<br /> Pins the object pointed to by `ptr` externally so it and its directly reachable members and indirectly reachable objects do not become garbage collected. * **__unpin**(ptr: `usize`): `void`<br /> Unpins the object pointed to by `ptr` externally so it can become garbage collected. * **__collect**(): `void`<br /> Performs a full garbage collection. 
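As a rough illustration of how a host might drive these exports, here is a minimal Node.js sketch. It assumes a module compiled with `--exportRuntime` exists at `build/module.wasm`, and the size and class id passed to `__new` are placeholders rather than values prescribed by the runtime:

```js
const fs = require('fs');

// Hypothetical module path; the module must have been compiled with --exportRuntime.
const wasmModule = new WebAssembly.Module(fs.readFileSync('build/module.wasm'));
const { exports } = new WebAssembly.Instance(wasmModule, {
  env: { abort: () => { throw new Error('abort called'); } }
});

// Allocate a managed object (size and id are placeholders) and pin it so the
// GC cannot collect it while the host still holds the raw pointer.
const ptr = exports.__pin(exports.__new(16, 0));

// ... read or write the object through exports.memory here ...

exports.__unpin(ptr);  // the object may become collectable again
exports.__collect();   // force a full garbage collection
```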
### Internals * **__alloc**(size: `usize`): `usize`<br /> Dynamically allocates a chunk of memory of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. * **__realloc**(ptr: `usize`, size: `usize`): `usize`<br /> Dynamically changes the size of a chunk of memory, possibly moving it to a new address. * **__free**(ptr: `usize`): `void`<br /> Frees a dynamically allocated chunk of memory by its address. * **__renew**(ptr: `usize`, size: `usize`): `usize`<br /> Like `__realloc`, but for `__new`ed GC objects. * **__link**(parentPtr: `usize`, childPtr: `usize`, expectMultiple: `bool`): `void`<br /> Introduces a link from a parent object to a child object, i.e. upon `parent.field = child`. * **__visit**(ptr: `usize`, cookie: `u32`): `void`<br /> Concrete visitor implementation called during traversal. Cookie can be used to indicate one of multiple operations. * **__visit_globals**(cookie: `u32`): `void`<br /> Calls `__visit` on each global that is of a managed type. * **__visit_members**(ptr: `usize`, cookie: `u32`): `void`<br /> Calls `__visit` on each member of the object pointed to by `ptr`. * **__typeinfo**(id: `u32`): `RTTIFlags`<br /> Obtains the runtime type information for objects with the specified runtime id. Runtime type information is a set of flags indicating whether a type is managed, an array or similar, and what the relevant alignments when creating an instance externally are etc. * **__instanceof**(ptr: `usize`, classId: `u32`): `bool`<br /> Tests if the object pointed to by `ptr` is an instance of the specified class id. ITCMS / `--runtime incremental` ----- The Incremental Tri-Color Mark & Sweep garbage collector maintains a separate shadow stack of managed values in the background to achieve full automation. Maintaining another stack introduces some overhead compared to the simpler Two-Color Mark & Sweep garbage collector, but makes it independent of whether the execution stack is unwound or not when it is invoked, so the garbage collector can run interleaved with the program. There are several constants one can experiment with to tweak ITCMS's automation: * `--use ASC_GC_GRANULARITY=1024`<br /> How often to interrupt. The default of 1024 means "interrupt each 1024 bytes allocated". * `--use ASC_GC_STEPFACTOR=200`<br /> How long to interrupt. The default of 200% means "run at double the speed of allocations". * `--use ASC_GC_IDLEFACTOR=200`<br /> How long to idle. The default of 200% means "wait for memory to double before kicking in again". * `--use ASC_GC_MARKCOST=1`<br /> How costly it is to mark one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. * `--use ASC_GC_SWEEPCOST=10`<br /> How costly it is to sweep one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. TCMS / `--runtime minimal` ---- If automation and low pause times aren't strictly necessary, using the Two-Color Mark & Sweep garbage collector instead by invoking collection manually at appropriate times when the execution stack is unwound may be more performant as it simpler and has less overhead. The execution stack is typically unwound when invoking the collector externally, at a place that is not indirectly called from Wasm. STUB / `--runtime stub` ---- The stub is a maximally minimal runtime substitute, consisting of a simple and fast bump allocator with no means of freeing up memory again, except when freeing the respective most recently allocated object on top of the bump. 
Useful where memory is not a concern, and/or where it is sufficient to destroy the whole module including any potential garbage after execution. See also: [Garbage collection](https://www.assemblyscript.org/garbage-collection.html) # jsdiff [![Build Status](https://secure.travis-ci.org/kpdecker/jsdiff.svg)](http://travis-ci.org/kpdecker/jsdiff) [![Sauce Test Status](https://saucelabs.com/buildstatus/jsdiff)](https://saucelabs.com/u/jsdiff) A javascript text differencing implementation. Based on the algorithm proposed in ["An O(ND) Difference Algorithm and its Variations" (Myers, 1986)](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6927). ## Installation ```bash npm install diff --save ``` ## API * `Diff.diffChars(oldStr, newStr[, options])` - diffs two blocks of text, comparing character by character. Returns a list of change objects (See below). Options * `ignoreCase`: `true` to ignore casing difference. Defaults to `false`. * `Diff.diffWords(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, ignoring whitespace. Returns a list of change objects (See below). Options * `ignoreCase`: Same as in `diffChars`. * `Diff.diffWordsWithSpace(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, treating whitespace as significant. Returns a list of change objects (See below). * `Diff.diffLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line. Options * `ignoreWhitespace`: `true` to ignore leading and trailing whitespace. This is the same as `diffTrimmedLines` * `newlineIsToken`: `true` to treat newline characters as separate tokens. This allows for changes to the newline structure to occur independently of the line content and to be treated as such. In general this is the more human friendly form of `diffLines` and `diffLines` is better suited for patches and other computer friendly output. Returns a list of change objects (See below). * `Diff.diffTrimmedLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line, ignoring leading and trailing whitespace. Returns a list of change objects (See below). * `Diff.diffSentences(oldStr, newStr[, options])` - diffs two blocks of text, comparing sentence by sentence. Returns a list of change objects (See below). * `Diff.diffCss(oldStr, newStr[, options])` - diffs two blocks of text, comparing CSS tokens. Returns a list of change objects (See below). * `Diff.diffJson(oldObj, newObj[, options])` - diffs two JSON objects, comparing the fields defined on each. The order of fields, etc does not matter in this comparison. Returns a list of change objects (See below). * `Diff.diffArrays(oldArr, newArr[, options])` - diffs two arrays, comparing each item for strict equality (===). Options * `comparator`: `function(left, right)` for custom equality checks Returns a list of change objects (See below). * `Diff.createTwoFilesPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Parameters: * `oldFileName` : String to be output in the filename section of the patch for the removals * `newFileName` : String to be output in the filename section of the patch for the additions * `oldStr` : Original string value * `newStr` : New string value * `oldHeader` : Additional information to include in the old file header * `newHeader` : Additional information to include in the new file header * `options` : An object with options. - `context` describes how many lines of context should be included. 
- `ignoreWhitespace`: `true` to ignore leading and trailing whitespace. - `newlineIsToken`: `true` to treat newline characters as separate tokens. This allows for changes to the newline structure to occur independently of the line content and to be treated as such. In general this is the more human friendly form of `diffLines` and `diffLines` is better suited for patches and other computer friendly output. * `Diff.createPatch(fileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Just like Diff.createTwoFilesPatch, but with oldFileName being equal to newFileName. * `Diff.structuredPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options)` - returns an object with an array of hunk objects. This method is similar to createTwoFilesPatch, but returns a data structure suitable for further processing. Parameters are the same as createTwoFilesPatch. The data structure returned may look like this: ```js { oldFileName: 'oldfile', newFileName: 'newfile', oldHeader: 'header1', newHeader: 'header2', hunks: [{ oldStart: 1, oldLines: 3, newStart: 1, newLines: 3, lines: [' line2', ' line3', '-line4', '+line5', '\\ No newline at end of file'], }] } ``` * `Diff.applyPatch(source, patch[, options])` - applies a unified diff patch. Returns a string containing the new version of the provided data. `patch` may be a string diff or the output from the `parsePatch` or `structuredPatch` methods. The optional `options` object may have the following keys: - `fuzzFactor`: Number of lines that are allowed to differ before rejecting a patch. Defaults to 0. - `compareLine(lineNumber, line, operation, patchContent)`: Callback used to compare the given lines to determine if they should be considered equal when patching. Defaults to strict equality but may be overridden to provide fuzzier comparison. Should return false if the lines should be rejected. * `Diff.applyPatches(patch, options)` - applies one or more patches. This method will iterate over the contents of the patch and apply them to data provided through callbacks. The general flow for each patch index is: - `options.loadFile(index, callback)` is called. The caller should then load the contents of the file and then pass that to the `callback(err, data)` callback. Passing an `err` will terminate further patch execution. - `options.patched(index, content, callback)` is called once the patch has been applied. `content` will be the return value from `applyPatch`. When it's ready, the caller should call the `callback(err)` callback. Passing an `err` will terminate further patch execution. Once all patches have been applied or an error occurs, the `options.complete(err)` callback is made. * `Diff.parsePatch(diffStr)` - Parses a patch into structured data. Returns a JSON object representation of a patch, suitable for use with the `applyPatch` method. This parses to the same structure returned by `Diff.structuredPatch`. * `convertChangesToXML(changes)` - converts a list of changes to a serialized XML format. All methods above which accept the optional `callback` method will run in sync mode when that parameter is omitted and in async mode when supplied. This allows for larger diffs without blocking the event loop. This may be passed either directly as the final parameter or as the `callback` field in the `options` object. ### Change Objects Many of the methods above return change objects.
These objects consist of the following fields: * `value`: Text content * `added`: True if the value was inserted into the new string * `removed`: True if the value was removed from the old string Note that some cases may omit a particular flag field. Comparison on the flag fields should always be done in a truthy or falsy manner. ## Examples Basic example in Node ```js require('colors'); const Diff = require('diff'); const one = 'beep boop'; const other = 'beep boob blah'; const diff = Diff.diffChars(one, other); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; process.stderr.write(part.value[color]); }); console.log(); ``` Running the above program should yield <img src="images/node_example.png" alt="Node Example"> Basic example in a web page ```html <pre id="display"></pre> <script src="diff.js"></script> <script> const one = 'beep boop', other = 'beep boob blah', color = ''; let span = null; const diff = Diff.diffChars(one, other), display = document.getElementById('display'), fragment = document.createDocumentFragment(); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; span = document.createElement('span'); span.style.color = color; span.appendChild(document .createTextNode(part.value)); fragment.appendChild(span); }); display.appendChild(fragment); </script> ``` Open the above .html file in a browser and you should see <img src="images/web_example.png" alt="Node Example"> **[Full online demo](https://kpdecker.github.io/jsdiff)** ## Compatibility [![Sauce Test Status](https://saucelabs.com/browser-matrix/jsdiff.svg)](https://saucelabs.com/u/jsdiff) jsdiff supports all ES3 environments with some known issues on IE8 and below. Under these browsers some diff algorithms such as word diff and others may fail due to lack of support for capturing groups in the `split` operation. ## License See [LICENSE](https://github.com/kpdecker/jsdiff/blob/master/LICENSE). # assemblyscript-json ![npm version](https://img.shields.io/npm/v/assemblyscript-json) ![npm downloads per month](https://img.shields.io/npm/dm/assemblyscript-json) JSON encoder / decoder for AssemblyScript. Special thanks to https://github.com/MaxGraey/bignum.wasm for basic unit testing infra for AssemblyScript. ## Installation `assemblyscript-json` is available as a [npm package](https://www.npmjs.com/package/assemblyscript-json). 
You can install `assemblyscript-json` in your AssemblyScript project by running: `npm install --save assemblyscript-json` ## Usage ### Parsing JSON ```typescript import { JSON } from "assemblyscript-json"; // Parse an object using the JSON object let jsonObj: JSON.Obj = <JSON.Obj>(JSON.parse('{"hello": "world", "value": 24}')); // We can then use the .getX functions to read from the object if you know its type // This will return the appropriate JSON.X value if the key exists, or null if the key does not exist let worldOrNull: JSON.Str | null = jsonObj.getString("hello"); // This will return a JSON.Str or null if (worldOrNull != null) { // use .valueOf() to turn the high level JSON.Str type into a string let world: string = worldOrNull.valueOf(); } let numOrNull: JSON.Num | null = jsonObj.getNum("value"); if (numOrNull != null) { // use .valueOf() to turn the high level JSON.Num type into a f64 let value: f64 = numOrNull.valueOf(); } // If you don't know the value type, get the parent JSON.Value let valueOrNull: JSON.Value | null = jsonObj.getValue("hello"); if (valueOrNull != null) { let value = <JSON.Value>valueOrNull; // Next we could figure out what type we are if(value.isString) { // value.isString would be true, so we can cast to a string let innerString = (<JSON.Str>value).valueOf(); let jsonString = (<JSON.Str>value).stringify(); // Do something with string value } } ``` ### Encoding JSON ```typescript import { JSONEncoder } from "assemblyscript-json"; // Create encoder let encoder = new JSONEncoder(); // Construct necessary object encoder.pushObject("obj"); encoder.setInteger("int", 10); encoder.setString("str", ""); encoder.popObject(); // Get serialized data let json: Uint8Array = encoder.serialize(); // Or get serialized data as string let jsonString: string = encoder.stringify(); assert(jsonString, '"obj": {"int": 10, "str": ""}'); // True! ``` ### Custom JSON Deserializers ```typescript import { JSONDecoder, JSONHandler } from "assemblyscript-json"; // Events need to be received by a custom object extending JSONHandler. // NOTE: All methods are optional to implement. class MyJSONEventsHandler extends JSONHandler { setString(name: string, value: string): void { // Handle field } setBoolean(name: string, value: bool): void { // Handle field } setNull(name: string): void { // Handle field } setInteger(name: string, value: i64): void { // Handle field } setFloat(name: string, value: f64): void { // Handle field } pushArray(name: string): bool { // Handle array start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex needs to be updated by the handler return true; } popArray(): void { // Handle array end } pushObject(name: string): bool { // Handle object start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex needs to be updated by the handler return true; } popObject(): void { // Handle object end } } // Create decoder let decoder = new JSONDecoder<MyJSONEventsHandler>(new MyJSONEventsHandler()); // Create a byte buffer of our JSON. NOTE: Deserializers work on UTF8 string buffers. let jsonString = '{"hello": "world"}'; let jsonBuffer = Uint8Array.wrap(String.UTF8.encode(jsonString)); // Parse JSON decoder.deserialize(jsonBuffer); // This will send events to MyJSONEventsHandler ``` Feel free to look through the [tests](https://github.com/nearprotocol/assemblyscript-json/tree/master/assembly/__tests__) for more usage examples.
## Reference Documentation Reference API Documentation can be found in the [docs directory](./docs). ## License [MIT](./LICENSE) # wrappy Callback wrapping utility ## USAGE ```javascript var wrappy = require("wrappy") // var wrapper = wrappy(wrapperFunction) // make sure a cb is called only once // See also: http://npm.im/once for this specific use case var once = wrappy(function (cb) { var called = false return function () { if (called) return called = true return cb.apply(this, arguments) } }) function printBoo () { console.log('boo') } // has some rando property printBoo.iAmBooPrinter = true var onlyPrintOnce = once(printBoo) onlyPrintOnce() // prints 'boo' onlyPrintOnce() // does nothing // random property is retained! assert.equal(onlyPrintOnce.iAmBooPrinter, true) ``` [![Build Status](https://api.travis-ci.org/adaltas/node-csv-stringify.svg)](https://travis-ci.org/#!/adaltas/node-csv-stringify) [![NPM](https://img.shields.io/npm/dm/csv-stringify)](https://www.npmjs.com/package/csv-stringify) [![NPM](https://img.shields.io/npm/v/csv-stringify)](https://www.npmjs.com/package/csv-stringify) This package is a stringifier converting records into CSV text and implementing the Node.js [`stream.Transform` API](https://nodejs.org/api/stream.html). It also provides simpler synchronous and callback-based APIs for convenience. It is both extremely easy to use and powerful. It was first released in 2010 and is tested against big data sets by a large community. ## Documentation * [Project homepage](http://csv.js.org/stringify/) * [API](http://csv.js.org/stringify/api/) * [Options](http://csv.js.org/stringify/options/) * [Examples](http://csv.js.org/stringify/examples/) ## Main features * Follow the Node.js streaming API * Simplicity with the optional callback API * Support for custom formatters, delimiters, quotes, escape characters and header * Support big datasets * Complete test coverage and samples for inspiration * Only 1 external dependency * to be used conjointly with `csv-generate`, `csv-parse` and `stream-transform` * MIT License ## Usage The module is built on the Node.js Stream API. For the sake of simplicity, a simple callback API is also provided. To give you a quick look, here's an example of the callback API (a streaming sketch of the same records follows the Development section below): ```javascript const stringify = require('csv-stringify') const assert = require('assert') // import stringify from 'csv-stringify' // import assert from 'assert/strict' const input = [ [ '1', '2', '3', '4' ], [ 'a', 'b', 'c', 'd' ] ] stringify(input, function(err, output) { const expected = '1,2,3,4\na,b,c,d\n' assert.strictEqual(output, expected, `output.should.eql ${expected}`) console.log("Passed.", output) }) ``` ## Development Tests are executed with mocha. To install it, run `npm install` followed by `npm test`. It will install mocha and its dependencies in your project "node_modules" directory and run the test suite. The tests run against the CoffeeScript source files. To generate the JavaScript files, run `npm run build`. The test suite is run online with [Travis](https://travis-ci.org/#!/adaltas/node-csv-stringify). See the [Travis definition file](https://github.com/adaltas/node-csv-stringify/blob/master/.travis.yml) to view the tested Node.js version.
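As referenced above, here is a minimal streaming sketch of the same two records. It relies only on the `stream.Transform` behavior described at the top of this README; the delimiter and the event wiring mirror the package's documented stream usage, but treat it as an illustrative sketch rather than canonical documentation.

```javascript
const stringify = require('csv-stringify')

// Called with only an options object, stringify() returns a Transform stream.
const stringifier = stringify({ delimiter: ',' })
let output = ''

stringifier.on('readable', () => {
  let row
  while ((row = stringifier.read()) !== null) {
    output += row // each chunk is one stringified row
  }
})
stringifier.on('error', (err) => console.error(err.message))
stringifier.on('finish', () => console.log(output)) // "1,2,3,4\na,b,c,d\n"

stringifier.write(['1', '2', '3', '4'])
stringifier.write(['a', 'b', 'c', 'd'])
stringifier.end()
```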
## Contributors * David Worms: <https://github.com/wdavidw> [csv_home]: https://github.com/adaltas/node-csv [stream_transform]: http://nodejs.org/api/stream.html#stream_class_stream_transform [examples]: http://csv.js.org/stringify/examples/ [csv]: https://github.com/adaltas/node-csv ## Timezone support In order to provide support for timezones, without relying on the JavaScript host or any other time-zone aware environment, this library makes use of the IANA Timezone Database directly: https://www.iana.org/time-zones The database files are parsed by the scripts in this folder, which emit AssemblyScript code which is used to process the various rules at runtime. semver(1) -- The semantic versioner for npm =========================================== ## Install ```bash npm install semver ``` ## Usage As a node module: ```js const semver = require('semver') semver.valid('1.2.3') // '1.2.3' semver.valid('a.b.c') // null semver.clean(' =v1.2.3 ') // '1.2.3' semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true semver.gt('1.2.3', '9.8.7') // false semver.lt('1.2.3', '9.8.7') // true semver.minVersion('>=1.0.0') // '1.0.0' semver.valid(semver.coerce('v2')) // '2.0.0' semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7' ``` You can also just load the module for the function that you care about, if you'd like to minimize your footprint. ```js // load the whole API at once in a single object const semver = require('semver') // or just load the bits you need // all of them listed here, just pick and choose what you want // classes const SemVer = require('semver/classes/semver') const Comparator = require('semver/classes/comparator') const Range = require('semver/classes/range') // functions for working with versions const semverParse = require('semver/functions/parse') const semverValid = require('semver/functions/valid') const semverClean = require('semver/functions/clean') const semverInc = require('semver/functions/inc') const semverDiff = require('semver/functions/diff') const semverMajor = require('semver/functions/major') const semverMinor = require('semver/functions/minor') const semverPatch = require('semver/functions/patch') const semverPrerelease = require('semver/functions/prerelease') const semverCompare = require('semver/functions/compare') const semverRcompare = require('semver/functions/rcompare') const semverCompareLoose = require('semver/functions/compare-loose') const semverCompareBuild = require('semver/functions/compare-build') const semverSort = require('semver/functions/sort') const semverRsort = require('semver/functions/rsort') // low-level comparators between versions const semverGt = require('semver/functions/gt') const semverLt = require('semver/functions/lt') const semverEq = require('semver/functions/eq') const semverNeq = require('semver/functions/neq') const semverGte = require('semver/functions/gte') const semverLte = require('semver/functions/lte') const semverCmp = require('semver/functions/cmp') const semverCoerce = require('semver/functions/coerce') // working with ranges const semverSatisfies = require('semver/functions/satisfies') const semverMaxSatisfying = require('semver/ranges/max-satisfying') const semverMinSatisfying = require('semver/ranges/min-satisfying') const semverToComparators = require('semver/ranges/to-comparators') const semverMinVersion = require('semver/ranges/min-version') const semverValidRange = require('semver/ranges/valid') const semverOutside = require('semver/ranges/outside') const semverGtr =
require('semver/ranges/gtr') const semverLtr = require('semver/ranges/ltr') const semverIntersects = require('semver/ranges/intersects') const simplifyRange = require('semver/ranges/simplify') const rangeSubset = require('semver/ranges/subset') ``` As a command-line utility: ``` $ semver -h A JavaScript implementation of the https://semver.org/ specification Copyright Isaac Z. Schlueter Usage: semver [options] <version> [<version> [...]] Prints valid versions sorted by SemVer precedence Options: -r --range <range> Print versions that match the specified range. -i --increment [<level>] Increment a version by the specified level. Level can be one of: major, minor, patch, premajor, preminor, prepatch, or prerelease. Default level is 'patch'. Only one version may be specified. --preid <identifier> Identifier to be used to prefix premajor, preminor, prepatch or prerelease version increments. -l --loose Interpret versions and ranges loosely -p --include-prerelease Always include prerelease versions in range matching -c --coerce Coerce a string into SemVer if possible (does not imply --loose) --rtl Coerce version strings right to left --ltr Coerce version strings left to right (default) Program exits successfully if any valid version satisfies all supplied ranges, and prints all satisfying versions. If no satisfying versions are found, then exits failure. Versions are printed in ascending order, so supplying multiple versions to the utility will just sort them. ``` ## Versions A "version" is described by the `v2.0.0` specification found at <https://semver.org/>. A leading `"="` or `"v"` character is stripped off and ignored. ## Ranges A `version range` is a set of `comparators` which specify versions that satisfy the range. A `comparator` is composed of an `operator` and a `version`. The set of primitive `operators` is: * `<` Less than * `<=` Less than or equal to * `>` Greater than * `>=` Greater than or equal to * `=` Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included. For example, the comparator `>=1.2.7` would match the versions `1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` or `1.1.0`. Comparators can be joined by whitespace to form a `comparator set`, which is satisfied by the **intersection** of all of the comparators it includes. A range is composed of one or more comparator sets, joined by `||`. A version matches a range if and only if every comparator in at least one of the `||`-separated comparator sets is satisfied by the version. For example, the range `>=1.2.7 <1.3.0` would match the versions `1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, or `1.1.0`. The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, `1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. ### Prerelease Tags If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then it will only be allowed to satisfy comparator sets if at least one comparator with the same `[major, minor, patch]` tuple also has a prerelease tag. For example, the range `>1.2.3-alpha.3` would be allowed to match the version `1.2.3-alpha.7`, but it would *not* be satisfied by `3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater than" `1.2.3-alpha.3` according to the SemVer sort rules. The version range only accepts prerelease tags on the `1.2.3` version. The version `3.4.5` *would* satisfy the range, because it does not have a prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. 
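For example, checking the versions above with `satisfies` (a quick sketch using the API shown at the top of this README) makes the rule concrete:

```js
const semver = require('semver')

// A prerelease version only satisfies a range whose comparator on the same
// [major, minor, patch] tuple also carries a prerelease tag.
semver.satisfies('1.2.3-alpha.7', '>1.2.3-alpha.3') // true  (same tuple, both prerelease)
semver.satisfies('3.4.5-alpha.9', '>1.2.3-alpha.3') // false (prerelease on a different tuple)
semver.satisfies('3.4.5', '>1.2.3-alpha.3')         // true  (not a prerelease at all)

// The `includePrerelease` option described below lifts this restriction:
semver.satisfies('3.4.5-alpha.9', '>1.2.3-alpha.3', { includePrerelease: true }) // true
```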
The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author's design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics. Second, a user who has opted into using a prerelease version has clearly indicated the intent to use *that specific* set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the *next* set of prerelease versions. Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the `includePrerelease` flag on the options object to any [functions](https://github.com/npm/node-semver#functions) that do range matching. #### Prerelease Identifiers The method `.inc` takes an additional `identifier` string argument that will append the value of the string as a prerelease identifier: ```javascript semver.inc('1.2.3', 'prerelease', 'beta') // '1.2.4-beta.0' ``` command-line example: ```bash $ semver 1.2.3 -i prerelease --preid beta 1.2.4-beta.0 ``` Which then can be used to increment further: ```bash $ semver 1.2.4-beta.0 -i prerelease 1.2.4-beta.1 ``` ### Advanced Range Syntax Advanced range syntax desugars to primitive comparators in deterministic ways. Advanced ranges may be combined in the same way as primitive comparators using white space or `||`. #### Hyphen Ranges `X.Y.Z - A.B.C` Specifies an inclusive set. * `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes. * `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts. * `1.2.3 - 2.3` := `>=1.2.3 <2.4.0-0` * `1.2.3 - 2` := `>=1.2.3 <3.0.0-0` #### X-Ranges `1.2.x` `1.X` `1.2.*` `*` Any of `X`, `x`, or `*` may be used to "stand in" for one of the numeric values in the `[major, minor, patch]` tuple. * `*` := `>=0.0.0` (Any non-prerelease version satisfies, unless `includePrerelease` is specified, in which case any version at all satisfies) * `1.x` := `>=1.0.0 <2.0.0-0` (Matching major version) * `1.2.x` := `>=1.2.0 <1.3.0-0` (Matching major and minor versions) A partial version range is treated as an X-Range, so the special character is in fact optional. * `""` (empty string) := `*` := `>=0.0.0` * `1` := `1.x.x` := `>=1.0.0 <2.0.0-0` * `1.2` := `1.2.x` := `>=1.2.0 <1.3.0-0` #### Tilde Ranges `~1.2.3` `~1.2` `~1` Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not. * `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0-0` * `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0-0` (Same as `1.2.x`) * `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0-0` (Same as `1.x`) * `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0-0` * `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0-0` (Same as `0.2.x`) * `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0-0` (Same as `0.x`) * `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0-0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. 
So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. #### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` Allows changes that do not modify the left-most non-zero element in the `[major, minor, patch]` tuple. In other words, this allows patch and minor updates for versions `1.0.0` and above, patch updates for versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. Many authors treat a `0.x` version as if the `x` were the major "breaking-change" indicator. Caret ranges are ideal when an author may make breaking changes between `0.2.4` and `0.3.0` releases, which is a common practice. However, it presumes that there will *not* be breaking changes between `0.2.4` and `0.2.5`. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices. * `^1.2.3` := `>=1.2.3 <2.0.0-0` * `^0.2.3` := `>=0.2.3 <0.3.0-0` * `^0.0.3` := `>=0.0.3 <0.0.4-0` * `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0-0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. * `^0.0.3-beta` := `>=0.0.3-beta <0.0.4-0` Note that prereleases in the `0.0.3` version *only* will be allowed, if they are greater than or equal to `beta`. So, `0.0.3-pr.2` would be allowed. When parsing caret ranges, a missing `patch` value desugars to the number `0`, but will allow flexibility within that value, even if the major and minor versions are both `0`. * `^1.2.x` := `>=1.2.0 <2.0.0-0` * `^0.0.x` := `>=0.0.0 <0.1.0-0` * `^0.0` := `>=0.0.0 <0.1.0-0` Missing `minor` and `patch` values will desugar to zero, but will also allow flexibility within those values, even if the major version is zero. * `^1.x` := `>=1.0.0 <2.0.0-0` * `^0.x` := `>=0.0.0 <1.0.0-0` ### Range Grammar Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors: ```bnf range-set ::= range ( logical-or range ) * logical-or ::= ( ' ' ) * '||' ( ' ' ) * range ::= hyphen | simple ( ' ' simple ) * | '' hyphen ::= partial ' - ' partial simple ::= primitive | partial | tilde | caret primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )? xr ::= 'x' | 'X' | '*' | nr nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) * tilde ::= '~' partial caret ::= '^' partial qualifier ::= ( '-' pre )? ( '+' build )? pre ::= parts build ::= parts parts ::= part ( '.' part ) * part ::= nr | [-0-9A-Za-z]+ ``` ## Functions All methods and classes take a final `options` object argument. All options in this object are `false` by default. The options supported are: - `loose` Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strictly compliant, of course.) For backwards compatibility reasons, if the `options` argument is a boolean value instead of an object, it is interpreted to be the `loose` param. - `includePrerelease` Set to suppress the [default behavior](https://github.com/npm/node-semver#prerelease-tags) of excluding prerelease tagged versions from ranges unless they are explicitly opted into. Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse. * `valid(v)`: Return the parsed version, or null if it's not valid.
* `inc(v, release)`: Return the version incremented by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if it's not valid * `premajor` in one call will bump the version up to the next major version and down to a prerelease of that major version. `preminor` and `prepatch` work the same way. * If called from a non-prerelease version, `prerelease` will work the same as `prepatch`. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it. * `prerelease(v)`: Returns an array of prerelease components, or null if none exist. Example: `prerelease('1.2.3-alpha.1') -> ['alpha', 1]` * `major(v)`: Return the major version number. * `minor(v)`: Return the minor version number. * `patch(v)`: Return the patch version number. * `intersects(r1, r2, loose)`: Return true if the two supplied ranges or comparators intersect. * `parse(v)`: Attempt to parse a string as a semantic version, returning either a `SemVer` object or `null`. ### Comparison * `gt(v1, v2)`: `v1 > v2` * `gte(v1, v2)`: `v1 >= v2` * `lt(v1, v2)`: `v1 < v2` * `lte(v1, v2)`: `v1 <= v2` * `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent, even if they're not the exact same string. You already know how to compare strings. * `neq(v1, v2)`: `v1 != v2` The opposite of `eq`. * `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call the corresponding function above. `"==="` and `"!=="` do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided. * `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if `v2` is greater. Sorts in ascending order if passed to `Array.sort()`. * `rcompare(v1, v2)`: The reverse of compare. Sorts an array of versions in descending order when passed to `Array.sort()`. * `compareBuild(v1, v2)`: The same as `compare` but considers `build` when two versions are equal. Sorts in ascending order if passed to `Array.sort()`. * `diff(v1, v2)`: Returns the difference between two versions by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if the versions are the same. ### Comparators * `intersects(comparator)`: Return true if the comparators intersect ### Ranges * `validRange(range)`: Return the valid range or null if it's not valid * `satisfies(version, range)`: Return true if the version satisfies the range. * `maxSatisfying(versions, range)`: Return the highest version in the list that satisfies the range, or `null` if none of them do. * `minSatisfying(versions, range)`: Return the lowest version in the list that satisfies the range, or `null` if none of them do. * `minVersion(range)`: Return the lowest version that can possibly match the given range. * `gtr(version, range)`: Return `true` if version is greater than all the versions possible in the range. * `ltr(version, range)`: Return `true` if version is less than all the versions possible in the range. * `outside(version, range, hilo)`: Return true if the version is outside the bounds of the range in either the high or low direction. The `hilo` argument must be either the string `'>'` or `'<'`. (This is the function called by `gtr` and `ltr`.)
* `intersects(range)`: Return true if any of the ranges comparators intersect * `simplifyRange(versions, range)`: Return a "simplified" range that matches the same items in `versions` list as the range specified. Note that it does *not* guarantee that it would match the same versions in all cases, only for the set of versions provided. This is useful when generating ranges by joining together multiple versions with `||` programmatically, to provide the user with something a bit more ergonomic. If the provided range is shorter in string-length than the generated range, then that is returned. * `subset(subRange, superRange)`: Return `true` if the `subRange` range is entirely contained by the `superRange` range. Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, *or* satisfy a range! For example, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9` until `2.0.0`, so the version `1.2.10` would not be greater than the range (because `2.0.1` satisfies, which is higher), nor less than the range (since `1.2.8` satisfies, which is lower), and it also does not satisfy the range. If you want to know if a version satisfies or does not satisfy a range, use the `satisfies(version, range)` function. ### Coercion * `coerce(version, options)`: Coerces a string to semver if possible This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., `1`, `1.2`, `1.2.3`) up to the max permitted length (256 characters). Longer versions are simply truncated (`4.6.3.9.2-alpha2` becomes `4.6.3`). All surrounding text is simply ignored (`v3.4 replaces v3.3.1` becomes `3.4.0`). Only text which lacks digits will fail coercion (`version one` is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (`10000000000000000.4.7.4` becomes `4.7.4`). The maximum value for any semver component is `Number.MAX_SAFE_INTEGER || (2**53 - 1)`; higher value components are invalid (`9999999999999999.4.7.4` is likely invalid). If the `options.rtl` flag is set, then `coerce` will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, `1.2.3.4` will return `2.3.4` in rtl mode, not `4.0.0`. `1.2.3/4` will return `4.0.0`, because the `4` is not a part of any other overlapping SemVer tuple. ### Clean * `clean(version)`: Clean a string to be a valid semver if possible This will return a cleaned and trimmed semver version. If the provided version is not valid a null will be returned. This does not work for ranges. ex. * `s.clean(' = v 2.1.5foo')`: `null` * `s.clean(' = v 2.1.5foo', { loose: true })`: `'2.1.5-foo'` * `s.clean(' = v 2.1.5-foo')`: `null` * `s.clean(' = v 2.1.5-foo', { loose: true })`: `'2.1.5-foo'` * `s.clean('=v2.1.5')`: `'2.1.5'` * `s.clean(' =v2.1.5')`: `2.1.5` * `s.clean(' 2.1.5 ')`: `'2.1.5'` * `s.clean('~1.0.0')`: `null` ## Exported Modules <!-- TODO: Make sure that all of these items are documented (classes aren't, eg), and then pull the module name into the documentation for that specific thing. --> You may pull in just the part of this semver utility that you need, if you are sensitive to packing and tree-shaking concerns. The main `require('semver')` export uses getter functions to lazily load the parts of the API that are used. 
The following modules are available: * `require('semver')` * `require('semver/classes')` * `require('semver/classes/comparator')` * `require('semver/classes/range')` * `require('semver/classes/semver')` * `require('semver/functions/clean')` * `require('semver/functions/cmp')` * `require('semver/functions/coerce')` * `require('semver/functions/compare')` * `require('semver/functions/compare-build')` * `require('semver/functions/compare-loose')` * `require('semver/functions/diff')` * `require('semver/functions/eq')` * `require('semver/functions/gt')` * `require('semver/functions/gte')` * `require('semver/functions/inc')` * `require('semver/functions/lt')` * `require('semver/functions/lte')` * `require('semver/functions/major')` * `require('semver/functions/minor')` * `require('semver/functions/neq')` * `require('semver/functions/parse')` * `require('semver/functions/patch')` * `require('semver/functions/prerelease')` * `require('semver/functions/rcompare')` * `require('semver/functions/rsort')` * `require('semver/functions/satisfies')` * `require('semver/functions/sort')` * `require('semver/functions/valid')` * `require('semver/ranges/gtr')` * `require('semver/ranges/intersects')` * `require('semver/ranges/ltr')` * `require('semver/ranges/max-satisfying')` * `require('semver/ranges/min-satisfying')` * `require('semver/ranges/min-version')` * `require('semver/ranges/outside')` * `require('semver/ranges/to-comparators')` * `require('semver/ranges/valid')` # [nearley](http://nearley.js.org) ↗️ [![JS.ORG](https://img.shields.io/badge/js.org-nearley-ffb400.svg?style=flat-square)](http://js.org) [![npm version](https://badge.fury.io/js/nearley.svg)](https://badge.fury.io/js/nearley) nearley is a simple, fast and powerful parsing toolkit. It consists of: 1. [A powerful, modular DSL for describing languages](https://nearley.js.org/docs/grammar) 2. [An efficient, lightweight Earley parser](https://nearley.js.org/docs/parser) 3. [Loads of tools, editor plug-ins, and other goodies!](https://nearley.js.org/docs/tooling) nearley is a **streaming** parser with support for catching **errors** gracefully and providing _all_ parsings for **ambiguous** grammars. It is compatible with a variety of **lexers** (we recommend [moo](http://github.com/tjvr/moo)). It comes with tools for creating **tests**, **railroad diagrams** and **fuzzers** from your grammars, and has support for a variety of editors and platforms. It works in both node and the browser. Unlike most other parser generators, nearley can handle *any* grammar you can define in BNF (and more!). In particular, while most existing JS parsers such as PEGjs and Jison choke on certain grammars (e.g. [left recursive ones](http://en.wikipedia.org/wiki/Left_recursion)), nearley handles them easily and efficiently by using the [Earley parsing algorithm](https://en.wikipedia.org/wiki/Earley_parser). nearley is used by a wide variety of projects: - [artificial intelligence](https://github.com/ChalmersGU-AI-course/shrdlite-course-project) and - [computational linguistics](https://wiki.eecs.yorku.ca/course_archive/2014-15/W/6339/useful_handouts) classes at universities; - [file format parsers](https://github.com/raymond-h/node-dmi); - [data-driven markup languages](https://github.com/idyll-lang/idyll-compiler); - [compilers for real-world programming languages](https://github.com/sizigi/lp5562); - and nearley itself! The nearley compiler is bootstrapped. nearley is an npm [staff pick](https://www.npmjs.com/package/npm-collection-staff-picks). 
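Before diving into the docs below, here is a minimal sketch of the typical workflow. The grammar module name and the input string are hypothetical placeholders; the sketch assumes a grammar already compiled with the `nearleyc` tool described in the documentation.

```js
const nearley = require('nearley')
const grammar = require('./grammar.js') // hypothetical output of: nearleyc grammar.ne -o grammar.js

const parser = new nearley.Parser(nearley.Grammar.fromCompiled(grammar))
parser.feed('1+2*3')        // input can be fed incrementally (streaming)
console.log(parser.results) // an array of parsings; more than one entry means the grammar was ambiguous
```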
## Documentation Please visit our website https://nearley.js.org to get started! You will find a tutorial, detailed reference documents, and links to several real-world examples to get inspired. ## Contributing Please read [this document](.github/CONTRIBUTING.md) *before* working on nearley. If you are interested in contributing but unsure where to start, take a look at the issues labeled "up for grabs" on the issue tracker, or message a maintainer (@kach or @tjvr on Github). nearley is MIT licensed. A big thanks to Nathan Dinsmore for teaching me how to Earley, Aria Stewart for helping structure nearley into a mature module, and Robin Windels for bootstrapping the grammar. Additionally, Jacob Edelman wrote an experimental JavaScript parser with nearley and contributed ideas for EBNF support. Joshua T. Corbin refactored the compiler to be much, much prettier. Bojidar Marinov implemented postprocessors-in-other-languages. Shachar Itzhaky fixed a subtle bug with nullables. ## Citing nearley If you are citing nearley in academic work, please use the following BibTeX entry. ```bibtex @misc{nearley, author = "Kartik Chandra and Tim Radvan", title = "{nearley}: a parsing toolkit for {JavaScript}", year = {2014}, doi = {10.5281/zenodo.3897993}, url = {https://github.com/kach/nearley} } ``` # fs.realpath A backwards-compatible fs.realpath for Node v6 and above In Node v6, the JavaScript implementation of fs.realpath was replaced with a faster (but less resilient) native implementation. That raises new and platform-specific errors and cannot handle long or excessively symlink-looping paths. This module handles those cases by detecting the new errors and falling back to the JavaScript implementation. On versions of Node prior to v6, it has no effect. ## USAGE ```js var rp = require('fs.realpath') // async version rp.realpath(someLongAndLoopingPath, function (er, real) { // the ELOOP was handled, but it was a bit slower }) // sync version var real = rp.realpathSync(someLongAndLoopingPath) // monkeypatch at your own risk! // This replaces the fs.realpath/fs.realpathSync builtins rp.monkeypatch() // un-do the monkeypatching rp.unmonkeypatch() ``` # minipass A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough) [It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers. Supports `pipe()`ing (including multi-`pipe()` and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea. There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory. If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called. `objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set. This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. 
Once you're done transforming the data however you want, call `super.write()` with the transform output. For some examples of streams that extend Minipass in various ways, check out: - [minizlib](http://npm.im/minizlib) - [fs-minipass](http://npm.im/fs-minipass) - [tar](http://npm.im/tar) - [minipass-collect](http://npm.im/minipass-collect) - [minipass-flush](http://npm.im/minipass-flush) - [minipass-pipeline](http://npm.im/minipass-pipeline) - [tap](http://npm.im/tap) - [tap-parser](http://npm.im/tap-parser) - [treport](http://npm.im/treport) - [minipass-fetch](http://npm.im/minipass-fetch) - [pacote](http://npm.im/pacote) - [make-fetch-happen](http://npm.im/make-fetch-happen) - [cacache](http://npm.im/cacache) - [ssri](http://npm.im/ssri) - [npm-registry-fetch](http://npm.im/npm-registry-fetch) - [minipass-json-stream](http://npm.im/minipass-json-stream) - [minipass-sized](http://npm.im/minipass-sized) ## Differences from Node.js Streams There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams. Please read these caveats if you are familiar with node-core streams and intend to use Minipass streams in your programs. You can avoid most of these differences entirely (for a very small performance penalty) by setting `{async: true}` in the constructor options. ### Timing Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer. Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them. This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately. However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time. This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms. Example: ```js const Minipass = require('minipass') const stream = new Minipass() stream.on('data', () => console.log('data event')) console.log('before write') stream.write('hello') console.log('after write') // output: // before write // data event // after write ``` ### Exception: Async Opt-In If you wish to have a Minipass stream with behavior that more closely mimics Node.js core streams, you can set the stream in async mode either by setting `async: true` in the constructor options, or by setting `stream.async = true` later on. ```js const Minipass = require('minipass') const asyncStream = new Minipass({ async: true }) asyncStream.on('data', () => console.log('data event')) console.log('before write') asyncStream.write('hello') console.log('after write') // output: // before write // after write // data event <-- this is deferred until the next tick ``` Switching _out_ of async mode is unsafe, as it could cause data corruption, and so is not enabled. Example: ```js const Minipass = require('minipass') const stream = new Minipass({ encoding: 'utf8' }) stream.on('data', chunk => console.log(chunk)) stream.async = true console.log('before writes') stream.write('hello') setStreamSyncAgainSomehow(stream) // <-- this doesn't actually exist!
stream.write('world') console.log('after writes') // hypothetical output would be: // before writes // world // after writes // hello // NOT GOOD! ``` To avoid this problem, once set into async mode, any attempt to make the stream sync again will be ignored. ```js const Minipass = require('minipass') const stream = new Minipass({ encoding: 'utf8' }) stream.on('data', chunk => console.log(chunk)) stream.async = true console.log('before writes') stream.write('hello') stream.async = false // <-- no-op, stream already async stream.write('world') console.log('after writes') // actual output: // before writes // after writes // hello // world ``` ### No High/Low Water Marks Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value. Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns). If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it. Since nothing is ever buffered unnecessarily, there is much less copying data, and less bookkeeping about buffer capacity levels. ### Hazards of Buffering (or: Why Minipass Is So Fast) Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller. This minimizes buffering. Consider this case: ```js const {PassThrough} = require('stream') const p1 = new PassThrough({ highWaterMark: 1024 }) const p2 = new PassThrough({ highWaterMark: 1024 }) const p3 = new PassThrough({ highWaterMark: 1024 }) const p4 = new PassThrough({ highWaterMark: 1024 }) p1.pipe(p2).pipe(p3).pipe(p4) p4.on('data', () => console.log('made it through')) // this returns false and buffers, then writes to p2 on next tick (1) // p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2) // p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3) // p4 returns false and buffers, pausing p3, then emits 'data' and 'drain' // on next tick (4) // p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and // 'drain' on next tick (5) // p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6) // p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next // tick (7) p1.write(Buffer.alloc(2048)) // returns false ``` Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through! Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket. 
Consider the Minipass case: ```js const m1 = new Minipass() const m2 = new Minipass() const m3 = new Minipass() const m4 = new Minipass() m1.pipe(m2).pipe(m3).pipe(m4) m4.on('data', () => console.log('made it through')) // m1 is flowing, so it writes the data to m2 immediately // m2 is flowing, so it writes the data to m3 immediately // m3 is flowing, so it writes the data to m4 immediately // m4 is flowing, so it fires the 'data' event immediately, returns true // m4's write returned true, so m3 is still flowing, returns true // m3's write returned true, so m2 is still flowing, returns true // m2's write returned true, so m1 is still flowing, returns true // No event deferrals or buffering along the way! m1.write(Buffer.alloc(2048)) // returns true ``` It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing. As always, the faster implementation is the one that does less stuff and waits less time to do it. ### Immediately emit `end` for empty streams (when not paused) If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately. If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event. However, this is _usually_ not a problem because: ### Emit `end` When Asked One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted. Ie, if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. (You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.) To prevent calling handlers multiple times that would not expect multiple ends to occur, all listeners are removed from the `'end'` event whenever it is emitted. ### Emit `error` When Asked The most recent error object passed to the `'error'` event is stored on the stream. If a new `'error'` event handler is added, and an error was previously emitted, then the event handler will be called immediately (or on `process.nextTick` in the case of async streams). This makes it much more difficult to end up trying to interact with a broken stream, if the error handler is added after an error was previously emitted. ### Impact of "immediate flow" on Tee-streams A "tee stream" is a stream piping to multiple destinations: ```js const tee = new Minipass() tee.pipe(dest1) tee.pipe(dest2) tee.write('foo') // goes to both destinations ``` Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer. ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone src.pipe(dest2) // gets nothing!
``` One solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead. ```js // Safe example: tee to both places const src = new Minipass() src.write('foo') const tee = new Minipass() tee.pipe(dest1) tee.pipe(dest2) src.pipe(tee) // tee gets 'foo', pipes to both locations ``` The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second: ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.on('data', handler1) // receives 'foo' right away src.on('data', handler2) // nothing to see here! ``` A dedicated tee-stream can be used in this case as well: ```js // Safe example: tee to both data handlers const src = new Minipass() src.write('foo') const tee = new Minipass() tee.on('data', handler1) tee.on('data', handler2) src.pipe(tee) ``` All of the hazards in this section are avoided by setting `{ async: true }` in the Minipass constructor, or by setting `stream.async = true` afterwards. Note that this does add some overhead, so should only be done in cases where you are willing to lose a bit of performance in order to avoid having to refactor program logic. ## USAGE It's a stream! Use it like a stream and it'll most likely do what you want. ```js const Minipass = require('minipass') const mp = new Minipass(options) // optional: { encoding, objectMode } mp.write('foo') mp.pipe(someOtherStream) mp.end('bar') ``` ### OPTIONS * `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`. * `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value. * `async` Defaults to `false`. Set to `true` to defer data emission until next tick. This reduces performance slightly, but makes Minipass streams use timing behavior closer to Node core streams. See [Timing](#timing) for more details. ### API Implements the user-facing portions of Node.js's `Readable` and `Writable` streams. ### Methods * `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or true if it's still in "flowing" mode. * `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed. * `setEncoding(encoding)` - Set the encoding for data coming out of the stream. This can only be done once. * `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed. * `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted. * `pipe(dest)` - Send all output to the stream provided. When data is emitted, it is immediately written to any and all pipe destinations. (Or written on next tick in `async` mode.) * `unpipe(dest)` - Stop piping to the destination stream. This is immediate, meaning that any asynchronously queued data will _not_ make it to the destination when running in `async` mode. * `options.end` - Boolean, end the destination stream when the source stream ends. Default `true`.
* `options.proxyErrors` - Boolean, proxy `error` events from the source stream to the destination stream. Note that errors are _not_ proxied after the pipeline terminates, either due to the source emitting `'end'` or manually unpiping with `src.unpipe(dest)`. Default `false`. * `on(ev, fn)`, `emit(ev, fn)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".) * `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`. * `collect()` - Return a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data. * `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data. * `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying. * `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered. ### Properties * `bufferLength` Read-only. Total number of bytes buffered, or in the case of objectMode, the total number of objects. * `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.) * `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted. * `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediately re-emit it if it has already been emitted. * `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` is called. * `readable` Whether the stream is readable. Default `true`. * `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.) * `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.) * `destroyed` A getter that indicates whether the stream was destroyed. * `paused` True if the stream has been explicitly paused, otherwise false. * `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`. ### Events * `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream. * `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted.
* `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`.
* `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`.
* `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise.
* `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream.
* `readable` Emitted when data is buffered and ready to be read by a consumer.
* `resume` Emitted when stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.)

### Static Methods

* `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and false otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.)

## EXAMPLES

Here are some examples of things you can do with Minipass streams.

### simple "are you done yet" promise

```js
mp.promise().then(() => {
  // stream is finished
}, er => {
  // stream emitted an error
})
```

### collecting

```js
mp.collect().then(all => {
  // all is an array of all the data emitted
  // encoding is supported in this case, so
  // the result will be a collection of strings if
  // an encoding is specified, or buffers/objects if not.
  //
  // In an async function, you may do
  // const data = await stream.collect()
})
```

### collecting into a single blob

This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way:

```js
mp.concat().then(onebigchunk => {
  // onebigchunk is a string if the stream
  // had an encoding set, or a buffer otherwise.
})
```

### iteration

You can iterate over streams synchronously or asynchronously in platforms that support it.

Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration.

To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option.

```js
const mp = new Minipass({ objectMode: true })
mp.write('a')
mp.write('b')
for (let letter of mp) {
  console.log(letter) // a, b
}
mp.write('c')
mp.write('d')
for (let letter of mp) {
  console.log(letter) // c, d
}
mp.write('e')
mp.end()
for (let letter of mp) {
  console.log(letter) // e
}
for (let letter of mp) {
  console.log(letter) // nothing
}
```

Asynchronous iteration will continue until the end event is reached, consuming all of the data.
```js
const mp = new Minipass({ encoding: 'utf8' })

// some source of some data
let i = 5
const inter = setInterval(() => {
  if (i-- > 0)
    mp.write(Buffer.from('foo\n', 'utf8'))
  else {
    mp.end()
    clearInterval(inter)
  }
}, 100)

// consume the data with asynchronous iteration
async function consume () {
  for await (let chunk of mp) {
    console.log(chunk)
  }
  return 'ok'
}

consume().then(res => console.log(res))
// logs `foo\n` 5 times, and then `ok`
```

### subclass that `console.log()`s everything written into it

```js
class Logger extends Minipass {
  write (chunk, encoding, callback) {
    console.log('WRITE', chunk, encoding)
    return super.write(chunk, encoding, callback)
  }
  end (chunk, encoding, callback) {
    console.log('END', chunk, encoding)
    return super.end(chunk, encoding, callback)
  }
}

someSource.pipe(new Logger()).pipe(someDest)
```

### same thing, but using an inline anonymous class

```js
// js classes are fun
someSource
  .pipe(new (class extends Minipass {
    emit (ev, ...data) {
      // let's also log events, because debugging some weird thing
      console.log('EMIT', ev)
      return super.emit(ev, ...data)
    }
    write (chunk, encoding, callback) {
      console.log('WRITE', chunk, encoding)
      return super.write(chunk, encoding, callback)
    }
    end (chunk, encoding, callback) {
      console.log('END', chunk, encoding)
      return super.end(chunk, encoding, callback)
    }
  }))
  .pipe(someDest)
```

### subclass that defers 'end' for some reason

```js
class SlowEnd extends Minipass {
  emit (ev, ...args) {
    if (ev === 'end') {
      console.log('going to end, hold on a sec')
      setTimeout(() => {
        console.log('ok, ready to end now')
        super.emit('end', ...args)
      }, 100)
    } else {
      return super.emit(ev, ...args)
    }
  }
}
```

### transform that creates newline-delimited JSON

```js
class NDJSONEncode extends Minipass {
  write (obj, cb) {
    try {
      // JSON.stringify can throw, emit an error on that
      return super.write(JSON.stringify(obj) + '\n', 'utf8', cb)
    } catch (er) {
      this.emit('error', er)
    }
  }
  end (obj, cb) {
    if (typeof obj === 'function') {
      cb = obj
      obj = undefined
    }
    if (obj !== undefined) {
      this.write(obj)
    }
    return super.end(cb)
  }
}
```

### transform that parses newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor (options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write (chunk, encoding, cb) {
    if (typeof chunk === 'string' &&
        typeof encoding === 'string' &&
        encoding !== 'utf8') {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      try {
        // JSON.parse can throw, emit an error on that
        super.write(JSON.parse(jsonData[i]))
      } catch (er) {
        this.emit('error', er)
        continue
      }
    }
    if (cb)
      cb()
  }
}
```

# file-entry-cache

> Super simple cache for file metadata, useful for processes that work on a given series of files
> and that only need to repeat the job on the changed ones since the previous run of the process

[![NPM Version](http://img.shields.io/npm/v/file-entry-cache.svg?style=flat)](https://npmjs.org/package/file-entry-cache)
[![Build Status](http://img.shields.io/travis/royriojas/file-entry-cache.svg?style=flat)](https://travis-ci.org/royriojas/file-entry-cache)

## install

```bash
npm i --save file-entry-cache
```

## Usage

The module exposes two functions `create` and `createFromFile`.
## `create(cacheName, [directory, useCheckSum])`

- **cacheName**: the name of the cache to be created
- **directory**: Optional. The directory to load the cache from
- **useCheckSum**: Whether to use an md5 checksum to verify if the file changed. If false, the default is to use the mtime and size of the file.

## `createFromFile(pathToCache, [useCheckSum])`

- **pathToCache**: the path to the cache file (this combines the cache name and directory)
- **useCheckSum**: Whether to use an md5 checksum to verify if the file changed. If false, the default is to use the mtime and size of the file.

```js
// loads the cache, if one does not exist for the given
// Id a new one will be prepared to be created
var fileEntryCache = require('file-entry-cache');

var cache = fileEntryCache.create('testCache');

var files = expand('../fixtures/*.txt'); // e.g. an array of file paths from a glob helper

// the first time this method is called, it will return all the files
var oFiles = cache.getUpdatedFiles(files);

// this will persist this to disk checking each file stats and
// updating the meta attributes `size` and `mtime`.
// custom fields could also be added to the meta object and will be persisted
// in order to retrieve them later
cache.reconcile();

// use this if you want the non visited file entries to be kept in the cache
// for more than one execution
//
// cache.reconcile( true /* noPrune */)

// on a second run
var cache2 = fileEntryCache.create('testCache');

// will now return only the files that were modified, or none
// if no files were modified previous to the execution of this function
var oFiles = cache2.getUpdatedFiles(files);

// if you want to prevent a file from being considered non modified
// (something useful if a file failed some sort of validation)
// you can remove the entry from the cache by doing
cache.removeEntry('path/to/file'); // path to file should be the same path of the file received on `getUpdatedFiles`
// that will effectively make the file appear again as modified until the validation is passed. In that
// case you should not remove it from the cache

// if you need all the files, so you can determine what to do with the changed ones
// you can call
var oFiles = cache.normalizeEntries(files);

// oFiles will be an array of objects like the following
entry = {
  key: 'some/name/file', // the path to the file
  changed: true,         // if the file was changed since previous run
  meta: {
    size: 3242,          // the size of the file
    mtime: 231231231,    // the modification time of the file
    data: {}             // some extra field stored for this file (useful to save the result of a transformation on the file)
  }
}
```

## Motivation for this module

I needed a super simple and dumb **in-memory cache** with optional disk persistence (write-back cache) so that a script that beautifies files with `esformatter` would only run on the files changed since the last run. In doing so, the process of beautifying files was reduced from several seconds to a small fraction of a second.

This module uses [flat-cache](https://www.npmjs.com/package/flat-cache), a super simple `key/value` cache storage with optional file persistence.

The main idea is to read the files when the task begins, apply the transforms required, and if the process succeeds, store the new state of the files. The next time `getChangedFiles` is called, it will return only the files that were modified, making the process finish faster.
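For completeness, here is a minimal sketch of the `createFromFile` variant with checksum-based change detection documented above (the cache path and file list are hypothetical):

```js
var fileEntryCache = require('file-entry-cache');

// load (or prepare) a cache stored at an explicit path, comparing md5
// checksums instead of mtime/size to decide whether a file changed
var cache = fileEntryCache.createFromFile('/tmp/.my-tool-cache', true);

var files = ['src/a.js', 'src/b.js']; // hypothetical list of files to check
var changedFiles = cache.getUpdatedFiles(files);

// ...do the expensive work only on `changedFiles`...

cache.reconcile(); // persist the new state back to /tmp/.my-tool-cache
```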
This module can also be used by processes that modify the files by applying a transform. In that case, the result of the transform can be stored in the `meta` field of the entries. Anything added to the meta field will be persisted. Those processes won't need to call `getChangedFiles`; they will instead call `normalizeEntries`, which returns the entries with a `changed` field that can be used to determine whether the file was changed or not. If it was not changed, the stored transformed data can be used instead of actually applying the transformation, saving time when only a few files have changed.

In the worst case scenario all the files will be processed. In the best case scenario only a few of them will be processed.

## Important notes

- The values set on the meta attribute of the entries should be `stringify-able` ones if possible; flat-cache uses `circular-json` to try to persist circular structures, but this should be considered experimental. The best results are always obtained with non-circular values.
- All the changes to the cache state are done in memory first and only persisted after reconcile.

## License

MIT

## Test Strategy

- tests are copied from the [polyfill implementation](https://github.com/tc39/proposal-temporal/tree/main/polyfill/test)
- tests should be removed if they relate to features that do not make sense for TS/AS, i.e. tests that validate the shape of an object do not make sense in a language with compile-time type checking
- tests that fail because a feature has not been implemented yet should be left as failures.

<img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/img/ajv.svg">

&nbsp;

# Ajv JSON schema validator

The fastest JSON validator for Node.js and browser.

Supports JSON Schema draft-04/06/07/2019-09/2020-12 ([draft-04 support](https://ajv.js.org/json-schema.html#draft-04) requires ajv-draft-04 package) and JSON Type Definition [RFC8927](https://datatracker.ietf.org/doc/rfc8927/).
[![build](https://github.com/ajv-validator/ajv/workflows/build/badge.svg)](https://github.com/ajv-validator/ajv/actions?query=workflow%3Abuild) [![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv) [![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv) [![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master) [![SimpleX](https://img.shields.io/badge/chat-on%20SimpleX-%2307b4b9)](https://simplex.chat/contact#/?v=1&smp=smp%3A%2F%2Fu2dS9sG8nMNURyZwqASV4yROM28Er0luVTx5X1CsMrU%3D%40smp4.simplex.im%2Fap4lMFzfXF8Hzmh-Vz0WNxp_1jKiOa-h%23MCowBQYDK2VuAyEAcdefddRvDfI8iAuBpztm_J3qFucj8MDZoVs_2EcMTzU%3D) [![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv) [![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin) ## Ajv sponsors [<img src="https://ajv.js.org/img/mozilla.svg" width="45%" alt="Mozilla">](https://www.mozilla.org)<img src="https://ajv.js.org/img/gap.svg" width="9%">[<img src="https://ajv.js.org/img/reserved.svg" width="45%">](https://opencollective.com/ajv) [<img src="https://ajv.js.org/img/microsoft.png" width="31%" alt="Microsoft">](https://opensource.microsoft.com)<img src="https://ajv.js.org/img/gap.svg" width="3%">[<img src="https://ajv.js.org/img/reserved.svg" width="31%">](https://opencollective.com/ajv)<img src="https://ajv.js.org/img/gap.svg" width="3%">[<img src="https://ajv.js.org/img/reserved.svg" width="31%">](https://opencollective.com/ajv) [<img src="https://ajv.js.org/img/retool.svg" width="22.5%" alt="Retool">](https://retool.com/?utm_source=sponsor&utm_campaign=ajv)<img src="https://ajv.js.org/img/gap.svg" width="3%">[<img src="https://ajv.js.org/img/tidelift.svg" width="22.5%" alt="Tidelift">](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=enterprise)<img src="https://ajv.js.org/img/gap.svg" width="3%">[<img src="https://ajv.js.org/img/simplex.svg" width="22.5%" alt="SimpleX">](https://github.com/simplex-chat/simplex-chat)<img src="https://ajv.js.org/img/gap.svg" width="3%">[<img src="https://ajv.js.org/img/reserved.svg" width="22.5%">](https://opencollective.com/ajv) ## Contributing More than 100 people contributed to Ajv, and we would love to have you join the development. We welcome implementing new features that will benefit many users and ideas to improve our documentation. Please review [Contributing guidelines](./CONTRIBUTING.md) and [Code components](https://ajv.js.org/components.html). ## Documentation All documentation is available on the [Ajv website](https://ajv.js.org). 
Some useful site links: - [Getting started](https://ajv.js.org/guide/getting-started.html) - [JSON Schema vs JSON Type Definition](https://ajv.js.org/guide/schema-language.html) - [API reference](https://ajv.js.org/api.html) - [Strict mode](https://ajv.js.org/strict-mode.html) - [Standalone validation code](https://ajv.js.org/standalone.html) - [Security considerations](https://ajv.js.org/security.html) - [Command line interface](https://ajv.js.org/packages/ajv-cli.html) - [Frequently Asked Questions](https://ajv.js.org/faq.html) ## <a name="sponsors"></a>Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin) Since I asked to support Ajv development 40 people and 6 organizations contributed via GitHub and OpenCollective - this support helped receiving the MOSS grant! Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released. Please sponsor Ajv via: - [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it) - [Ajv Open Collective️](https://opencollective.com/ajv) Thank you. #### Open Collective sponsors <a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a> <a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/10/website"><img src="https://opencollective.com/ajv/organization/10/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/11/website"><img src="https://opencollective.com/ajv/organization/11/avatar.svg"></a> ## Performance Ajv generates code to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization. 
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=62,4,1&chs=600x416&chxl=-1:|ajv|@exodus&#x2F;schemasafe|is-my-json-valid|djv|@cfworker&#x2F;json-schema|jsonschema&chd=t:100,69.2,51.5,13.1,5.1,1.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements JSON Schema [draft-06/07/2019-09/2020-12](http://json-schema.org/) standards (draft-04 is supported in v6): - all validation keywords (see [JSON Schema validation keywords](https://ajv.js.org/json-schema.html)) - [OpenAPI](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.3.md) extensions: - NEW: keyword [discriminator](https://ajv.js.org/json-schema.html#discriminator). - keyword [nullable](https://ajv.js.org/json-schema.html#nullable). - full support of remote references (remote schemas have to be added with `addSchema` or compiled to be available) - support of recursive references between schemas - correct string lengths for strings with unicode pairs - JSON Schema [formats](https://ajv.js.org/guide/formats.html) (with [ajv-formats](https://github.com/ajv-validator/ajv-formats) plugin). - [validates schemas against meta-schema](https://ajv.js.org/api.html#api-validateschema) - NEW: supports [JSON Type Definition](https://datatracker.ietf.org/doc/rfc8927/): - all keywords (see [JSON Type Definition schema forms](https://ajv.js.org/json-type-definition.html)) - meta-schema for JTD schemas - "union" keyword and user-defined keywords (can be used inside "metadata" member of the schema) - supports [browsers](https://ajv.js.org/guide/environments.html#browsers) and Node.js 10.x - current - [asynchronous loading](https://ajv.js.org/guide/managing-schemas.html#asynchronous-schema-loading) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](https://ajv.js.org/options.html#allerrors) - [error messages with parameters](https://ajv.js.org/api.html#validation-errors) describing error reasons to allow error message generation - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [removing-additional-properties](https://ajv.js.org/guide/modifying-data.html#removing-additional-properties) - [assigning defaults](https://ajv.js.org/guide/modifying-data.html#assigning-defaults) to missing properties and items - [coercing data](https://ajv.js.org/guide/modifying-data.html#coercing-data-types) to the types specified in `type` keywords - [user-defined keywords](https://ajv.js.org/guide/user-keywords.html) - additional extension keywords with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [\$data reference](https://ajv.js.org/guide/combining-schemas.html#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](https://ajv.js.org/guide/async-validation.html) of user-defined formats and 
keywords ## Install To install version 8: ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://runkit.com/npm/ajv In JavaScript: ```javascript // or ESM/TypeScript import import Ajv from "ajv" // Node.js require: const Ajv = require("ajv") const ajv = new Ajv() // options can be passed, e.g. {allErrors: true} const schema = { type: "object", properties: { foo: {type: "integer"}, bar: {type: "string"} }, required: ["foo"], additionalProperties: false, } const data = { foo: 1, bar: "abc" } const validate = ajv.compile(schema) const valid = validate(data) if (!valid) console.log(validate.errors) ``` Learn how to use Ajv and see more examples in the [Guide: getting started](https://ajv.js.org/guide/getting-started.html) ## Changes history See [https://github.com/ajv-validator/ajv/releases](https://github.com/ajv-validator/ajv/releases) **Please note**: [Changes in version 8.0.0](https://github.com/ajv-validator/ajv/releases/tag/v8.0.0) [Version 7.0.0](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0) [Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0). ## Code of conduct Please review and follow the [Code of conduct](./CODE_OF_CONDUCT.md). Please report any unacceptable behaviour to ajv.validator@gmail.com - it will be reviewed by the project team. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ## Open-source software support Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides a centralised support to open-source software users, in addition to the support provided by software maintainers. ## License [MIT](./LICENSE) # debug [![Build Status](https://travis-ci.org/debug-js/debug.svg?branch=master)](https://travis-ci.org/debug-js/debug) [![Coverage Status](https://coveralls.io/repos/github/debug-js/debug/badge.svg?branch=master)](https://coveralls.io/github/debug-js/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. 
Example [_app.js_](./examples/node/app.js):

```js
var debug = require('debug')('http')
  , http = require('http')
  , name = 'My App';

// fake app

debug('booting %o', name);

http.createServer(function(req, res){
  debug(req.method + ' ' + req.url);
  res.end('hello\n');
}).listen(3000, function(){
  debug('listening');
});

// fake worker of some kind

require('./worker');
```

Example [_worker.js_](./examples/node/worker.js):

```js
var a = require('debug')('worker:a')
  , b = require('debug')('worker:b');

function work() {
  a('doing lots of uninteresting work');
  setTimeout(work, Math.random() * 1000);
}

work();

function workb() {
  b('doing some work');
  setTimeout(workb, Math.random() * 2000);
}

workb();
```

The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names.

Here are some examples:

<img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png">
<img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png">
<img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png">

#### Windows command prompt notes

##### CMD

On Windows the environment variable is set using the `set` command.

```cmd
set DEBUG=*,-not_this
```

Example:

```cmd
set DEBUG=* & node app.js
```

##### PowerShell (VS Code default)

PowerShell uses different syntax to set environment variables.

```cmd
$env:DEBUG = "*,-not_this"
```

Example:

```cmd
$env:DEBUG='app';node app.js
```

Then, run the program to be debugged as usual.

npm script example:

```js
"windowsDebug": "@powershell -Command $env:DEBUG='*';node app.js",
```

## Namespace Colors

Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to.

#### Node.js

In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors.

<img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png">

#### Web Browser

Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version).

<img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png">

## Millisecond diff

When actively developing an application it can be useful to see the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well; the "+NNNms" will show you how much time was spent between calls.
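As a rough sketch of how this is typically used (the URL and request code below are only illustrative), wrapping a slow operation in two `debug()` calls is enough to see the diff:

```js
const debug = require('debug')('http');
const https = require('https');

debug('requesting resource'); // e.g. "http requesting resource +0ms"
https.get('https://example.com', (res) => {
  // the "+NNNms" suffix on this line shows the time spent waiting
  debug('got response %d', res.statusCode);
  res.resume();
});
```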
<img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png">

When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below:

<img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png">

## Conventions

If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debugger you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output.

## Wildcards

The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session"; instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`.

You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:".

## Environment Variables

When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging:

| Name | Purpose |
|-----------|-------------------------------------------------|
| `DEBUG` | Enables/disables specific debugging namespaces. |
| `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). |
| `DEBUG_COLORS`| Whether or not to use colors in the debug output. |
| `DEBUG_DEPTH` | Object inspection depth. |
| `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. |

__Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list.

## Formatters

Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters:

| Formatter | Representation |
|-----------|----------------|
| `%O` | Pretty-print an Object on multiple lines. |
| `%o` | Pretty-print an Object all on a single line. |
| `%s` | String. |
| `%d` | Number (both integer and float). |
| `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. |
| `%%` | Single percent sign ('%'). This does not consume an argument. |

### Custom formatters

You can add custom formatters by extending the `debug.formatters` object.
For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like:

```js
const createDebug = require('debug')
createDebug.formatters.h = (v) => {
  return v.toString('hex')
}

// …elsewhere
const debug = createDebug('foo')
debug('this is hex: %h', Buffer.from('hello world'))
//   foo this is hex: 68656c6c6f20776f726c64 +0ms
```

## Browser Support

You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself.

Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`:

```js
localStorage.debug = 'worker:*'
```

And then refresh the page.

```js
a = debug('worker:a');
b = debug('worker:b');

setInterval(function(){
  a('doing some work');
}, 1000);

setInterval(function(){
  b('doing some work');
}, 1200);
```

In Chromium-based web browsers (e.g. Brave, Chrome, and Electron), the JavaScript console will, by default, only show messages logged by `debug` if the "Verbose" log level is _enabled_.

<img width="647" src="https://user-images.githubusercontent.com/7143133/152083257-29034707-c42c-4959-8add-3cee850e6fcf.png">

## Output streams

By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method:

Example [_stdout.js_](./examples/node/stdout.js):

```js
var debug = require('debug');
var error = debug('app:error');

// by default stderr is used
error('goes to stderr!');

var log = debug('app:log');
// set this namespace to log via console.log
log.log = console.log.bind(console); // don't forget to bind to console!
log('goes to stdout');
error('still goes to stderr!');

// set all output to go via console.info
// overrides all per-namespace log settings
debug.log = console.info.bind(console);
error('now goes to stdout via console.info');
log('still goes to stdout, but via console.info now');
```

## Extend

You can simply extend a debugger:

```js
const log = require('debug')('auth');

// creates new debug instance with extended namespace
const logSign = log.extend('sign');
const logLogin = log.extend('login');

log('hello'); // auth hello
logSign('hello'); // auth:sign hello
logLogin('hello'); // auth:login hello
```

## Set dynamically

You can also enable debug dynamically by calling the `enable()` method:

```js
let debug = require('debug');

console.log(1, debug.enabled('test'));

debug.enable('test');
console.log(2, debug.enabled('test'));

debug.disable();
console.log(3, debug.enabled('test'));
```

This prints:

```
1 false
2 true
3 false
```

Usage: `enable(namespaces)`

`namespaces` can include modes separated by a colon and wildcards.

Note that calling `enable()` completely overrides the previously set DEBUG variable:

```
$ DEBUG=foo node -e 'var dbg = require("debug"); dbg.enable("bar"); console.log(dbg.enabled("foo"))'
=> false
```

`disable()` will disable all namespaces. The function returns the namespaces currently enabled (and skipped). This can be useful if you want to disable debugging temporarily without knowing what was enabled to begin with.
For example: ```js let debug = require('debug'); debug.enable('foo:*,-foo:bar'); let namespaces = debug.disable(); debug.enable(namespaces); ``` Note: There is no guarantee that the string will be identical to the initial enable string, but semantically they will be identical. ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Usage in child processes Due to the way `debug` detects if the output is a TTY or not, colors are not shown in child processes when `stderr` is piped. A solution is to pass the `DEBUG_COLORS=1` environment variable to the child process. For example: ```javascript worker = fork(WORKER_WRAP_PATH, [workerPath], { stdio: [ /* stdin: */ 0, /* stdout: */ 'pipe', /* stderr: */ 'pipe', 'ipc', ], env: Object.assign({}, process.env, { DEBUG_COLORS: 1 // without this settings, colors won't be shown }), }); worker.stderr.pipe(process.stderr, { end: false }); ``` ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne - Josh Junon ## Backers Support us with a monthly donation and help us continue our activities. [[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img 
src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. 
[[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Copyright (c) 2018-2021 Josh Junon Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. iMurmurHash.js ============== An incremental implementation of the MurmurHash3 (32-bit) hashing algorithm for JavaScript based on [Gary Court's implementation](https://github.com/garycourt/murmurhash-js) with [kazuyukitanimura's modifications](https://github.com/kazuyukitanimura/murmurhash-js). This version works significantly faster than the non-incremental version if you need to hash many small strings into a single hash, since string concatenation (to build the single string to pass the non-incremental version) is fairly costly. In one case tested, using the incremental version was about 50% faster than concatenating 5-10 strings and then hashing. Installation ------------ To use iMurmurHash in the browser, [download the latest version](https://raw.github.com/jensyt/imurmurhash-js/master/imurmurhash.min.js) and include it as a script on your site. 
```html <script type="text/javascript" src="/scripts/imurmurhash.min.js"></script> <script> // Your code here, access iMurmurHash using the global object MurmurHash3 </script> ``` --- To use iMurmurHash in Node.js, install the module using NPM: ```bash npm install imurmurhash ``` Then simply include it in your scripts: ```javascript MurmurHash3 = require('imurmurhash'); ``` Quick Example ------------- ```javascript // Create the initial hash var hashState = MurmurHash3('string'); // Incrementally add text hashState.hash('more strings'); hashState.hash('even more strings'); // All calls can be chained if desired hashState.hash('and').hash('some').hash('more'); // Get a result hashState.result(); // returns 0xe4ccfe6b ``` Functions --------- ### MurmurHash3 ([string], [seed]) Get a hash state object, optionally initialized with the given _string_ and _seed_. _Seed_ must be a positive integer if provided. Calling this function without the `new` keyword will return a cached state object that has been reset. This is safe to use as long as the object is only used from a single thread and no other hashes are created while operating on this one. If this constraint cannot be met, you can use `new` to create a new state object. For example: ```javascript // Use the cached object, calling the function again will return the same // object (but reset, so the current state would be lost) hashState = MurmurHash3(); ... // Create a new object that can be safely used however you wish. Calling the // function again will simply return a new state object, and no state loss // will occur, at the cost of creating more objects. hashState = new MurmurHash3(); ``` Both methods can be mixed however you like if you have different use cases. --- ### MurmurHash3.prototype.hash (string) Incrementally add _string_ to the hash. This can be called as many times as you want for the hash state object, including after a call to `result()`. Returns `this` so calls can be chained. --- ### MurmurHash3.prototype.result () Get the result of the hash as a 32-bit positive integer. This performs the tail and finalizer portions of the algorithm, but does not store the result in the state object. This means that it is perfectly safe to get results and then continue adding strings via `hash`. ```javascript // Do the whole string at once MurmurHash3('this is a test string').result(); // 0x70529328 // Do part of the string, get a result, then the other part var m = MurmurHash3('this is a'); m.result(); // 0xbfc4f834 m.hash(' test string').result(); // 0x70529328 (same as above) ``` --- ### MurmurHash3.prototype.reset ([seed]) Reset the state object for reuse, optionally using the given _seed_ (defaults to 0 like the constructor). Returns `this` so calls can be chained. --- License (MIT) ------------- Copyright (c) 2013 Gary Court, Jens Taylor Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # v8-compile-cache [![Build Status](https://travis-ci.org/zertosh/v8-compile-cache.svg?branch=master)](https://travis-ci.org/zertosh/v8-compile-cache) `v8-compile-cache` attaches a `require` hook to use [V8's code cache](https://v8project.blogspot.com/2015/07/code-caching.html) to speed up instantiation time. The "code cache" is the work of parsing and compiling done by V8. The ability to tap into V8 to produce/consume this cache was introduced in [Node v5.7.0](https://nodejs.org/en/blog/release/v5.7.0/). ## Usage 1. Add the dependency: ```sh $ npm install --save v8-compile-cache ``` 2. Then, in your entry module add: ```js require('v8-compile-cache'); ``` **Requiring `v8-compile-cache` in Node <5.7.0 is a noop – but you need at least Node 4.0.0 to support the ES2015 syntax used by `v8-compile-cache`.** ## Options Set the environment variable `DISABLE_V8_COMPILE_CACHE=1` to disable the cache. Cache directory is defined by environment variable `V8_COMPILE_CACHE_CACHE_DIR` or defaults to `<os.tmpdir()>/v8-compile-cache-<V8_VERSION>`. ## Internals Cache files are suffixed `.BLOB` and `.MAP` corresponding to the entry module that required `v8-compile-cache`. The cache is _entry module specific_ because it is faster to load the entire code cache into memory at once, than it is to read it from disk on a file-by-file basis. ## Benchmarks See https://github.com/zertosh/v8-compile-cache/tree/master/bench. **Load Times:** | Module | Without Cache | With Cache | | ---------------- | -------------:| ----------:| | `babel-core` | `218ms` | `185ms` | | `yarn` | `153ms` | `113ms` | | `yarn` (bundled) | `228ms` | `105ms` | _^ Includes the overhead of loading the cache itself._ ## Acknowledgements * `FileSystemBlobStore` and `NativeCompileCache` are based on Atom's implementation of their v8 compile cache: - https://github.com/atom/atom/blob/b0d7a8a/src/file-system-blob-store.js - https://github.com/atom/atom/blob/b0d7a8a/src/native-compile-cache.js * `mkdirpSync` is based on: - https://github.com/substack/node-mkdirp/blob/f2003bb/index.js#L55-L98 # ansi-colors [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/ansi-colors.svg?style=flat)](https://www.npmjs.com/package/ansi-colors) [![NPM monthly downloads](https://img.shields.io/npm/dm/ansi-colors.svg?style=flat)](https://npmjs.org/package/ansi-colors) [![NPM total downloads](https://img.shields.io/npm/dt/ansi-colors.svg?style=flat)](https://npmjs.org/package/ansi-colors) [![Linux Build Status](https://img.shields.io/travis/doowb/ansi-colors.svg?style=flat&label=Travis)](https://travis-ci.org/doowb/ansi-colors) > Easily add ANSI colors to your text and symbols in the terminal. A faster drop-in replacement for chalk, kleur and turbocolor (without the dependencies and rendering bugs). Please consider following this project's author, [Brian Woodward](https://github.com/doowb), and consider starring the project to show your :heart: and support. 
## Install

Install with [npm](https://www.npmjs.com/):

```sh
$ npm install --save ansi-colors
```

![image](https://user-images.githubusercontent.com/383994/39635445-8a98a3a6-4f8b-11e8-89c1-068c45d4fff8.png)

## Why use this?

ansi-colors is _the fastest Node.js library for terminal styling_. A more performant drop-in replacement for chalk, with no dependencies.

* _Blazing fast_ - Fastest terminal styling library in node.js, 10-20x faster than chalk!
* _Drop-in replacement_ for [chalk](https://github.com/chalk/chalk).
* _No dependencies_ (Chalk has 7 dependencies in its tree!)
* _Safe_ - Does not modify the `String.prototype` like [colors](https://github.com/Marak/colors.js).
* Supports [nested colors](#nested-colors), **and does not have the [nested styling bug](#nested-styling-bug) that is present in [colorette](https://github.com/jorgebucaran/colorette), [chalk](https://github.com/chalk/chalk), and [kleur](https://github.com/lukeed/kleur)**.
* Supports [chained colors](#chained-colors).
* [Toggle color support](#toggle-color-support) on or off.

## Usage

```js
const c = require('ansi-colors');

console.log(c.red('This is a red string!'));
console.log(c.green('This is a green string!'));
console.log(c.cyan('This is a cyan string!'));
console.log(c.yellow('This is a yellow string!'));
```

![image](https://user-images.githubusercontent.com/383994/39653848-a38e67da-4fc0-11e8-89ae-98c65ebe9dcf.png)

## Chained colors

```js
console.log(c.bold.red('this is a bold red message'));
console.log(c.bold.yellow.italic('this is a bold yellow italicized message'));
console.log(c.green.bold.underline('this is a bold green underlined message'));
```

![image](https://user-images.githubusercontent.com/383994/39635780-7617246a-4f8c-11e8-89e9-05216cc54e38.png)

## Nested colors

```js
console.log(c.yellow(`foo ${c.red.bold('red')} bar ${c.cyan('cyan')} baz`));
```

![image](https://user-images.githubusercontent.com/383994/39635817-8ed93d44-4f8c-11e8-8afd-8c3ea35f5fbe.png)

### Nested styling bug

`ansi-colors` does not have the nested styling bug found in [colorette](https://github.com/jorgebucaran/colorette), [chalk](https://github.com/chalk/chalk), and [kleur](https://github.com/lukeed/kleur).

```js
const { bold, red } = require('ansi-colors');
console.log(bold(`foo ${red.dim('bar')} baz`));

const colorette = require('colorette');
console.log(colorette.bold(`foo ${colorette.red(colorette.dim('bar'))} baz`));

const kleur = require('kleur');
console.log(kleur.bold(`foo ${kleur.red.dim('bar')} baz`));

const chalk = require('chalk');
console.log(chalk.bold(`foo ${chalk.red.dim('bar')} baz`));
```

**Results in the following** (sans icons and labels)

![image](https://user-images.githubusercontent.com/383994/47280326-d2ee0580-d5a3-11e8-9611-ea6010f0a253.png)

## Toggle color support

Easily enable/disable colors.

```js
const c = require('ansi-colors');

// disable colors manually
c.enabled = false;

// or use a library to automatically detect support
c.enabled = require('color-support').hasBasic;

console.log(c.red('I will only be colored red if the terminal supports colors'));
```

## Strip ANSI codes

Use the `.unstyle` method to strip ANSI codes from a string.

```js
console.log(c.unstyle(c.blue.bold('foo bar baz')));
//=> 'foo bar baz'
```

## Available styles

**Note** that bright and bright-background colors are not always supported.
| Colors  | Background Colors | Bright Colors | Bright Background Colors |
| ------- | ----------------- | ------------- | ------------------------ |
| black   | bgBlack           | blackBright   | bgBlackBright            |
| red     | bgRed             | redBright     | bgRedBright              |
| green   | bgGreen           | greenBright   | bgGreenBright            |
| yellow  | bgYellow          | yellowBright  | bgYellowBright           |
| blue    | bgBlue            | blueBright    | bgBlueBright             |
| magenta | bgMagenta         | magentaBright | bgMagentaBright          |
| cyan    | bgCyan            | cyanBright    | bgCyanBright             |
| white   | bgWhite           | whiteBright   | bgWhiteBright            |
| gray    |                   |               |                          |
| grey    |                   |               |                          |

_(`gray` is the U.S. spelling, `grey` is more commonly used in Canada and the U.K.)_

### Style modifiers

* dim
* **bold**
* hidden
* _italic_
* underline
* inverse
* ~~strikethrough~~
* reset

## Aliases

Create custom aliases for styles.

```js
const colors = require('ansi-colors');

colors.alias('primary', colors.yellow);
colors.alias('secondary', colors.bold);

console.log(colors.primary.secondary('Foo'));
```

## Themes

A theme is an object of custom aliases.

```js
const colors = require('ansi-colors');

colors.theme({
  danger: colors.red,
  dark: colors.dim.gray,
  disabled: colors.gray,
  em: colors.italic,
  heading: colors.bold.underline,
  info: colors.cyan,
  muted: colors.dim,
  primary: colors.blue,
  strong: colors.bold,
  success: colors.green,
  underline: colors.underline,
  warning: colors.yellow
});

// Now, we can use our custom styles alongside the built-in styles!
console.log(colors.danger.strong.em('Error!'));
console.log(colors.warning('Heads up!'));
console.log(colors.info('Did you know...'));
console.log(colors.success.bold('It worked!'));
```

## Performance

**Libraries tested**

* ansi-colors v3.0.4
* chalk v2.4.1

### Mac

> MacBook Pro, Intel Core i7, 2.3 GHz, 16 GB.

**Load time**

Time it takes to load the first time `require()` is called:

* ansi-colors - `1.915ms`
* chalk - `12.437ms`

**Benchmarks**

```
# All Colors
ansi-colors x 173,851 ops/sec ±0.42% (91 runs sampled)
chalk x 9,944 ops/sec ±2.53% (81 runs sampled)

# Chained colors
ansi-colors x 20,791 ops/sec ±0.60% (88 runs sampled)
chalk x 2,111 ops/sec ±2.34% (83 runs sampled)

# Nested colors
ansi-colors x 59,304 ops/sec ±0.98% (92 runs sampled)
chalk x 4,590 ops/sec ±2.08% (82 runs sampled)
```

### Windows

> Windows 10, Intel Core i7-7700k CPU @ 4.2 GHz, 32 GB

**Load time**

Time it takes to load the first time `require()` is called:

* ansi-colors - `1.494ms`
* chalk - `11.523ms`

**Benchmarks**

```
# All Colors
ansi-colors x 193,088 ops/sec ±0.51% (95 runs sampled)
chalk x 9,612 ops/sec ±3.31% (77 runs sampled)

# Chained colors
ansi-colors x 26,093 ops/sec ±1.13% (94 runs sampled)
chalk x 2,267 ops/sec ±2.88% (80 runs sampled)

# Nested colors
ansi-colors x 67,747 ops/sec ±0.49% (93 runs sampled)
chalk x 4,446 ops/sec ±3.01% (82 runs sampled)
```

## About

<details>
<summary><strong>Contributing</strong></summary>

Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new).

</details>

<details>
<summary><strong>Running Tests</strong></summary>

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

```sh
$ npm install && npm test
```

</details>

<details>
<summary><strong>Building docs</strong></summary>

_(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly.
Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [ansi-wrap](https://www.npmjs.com/package/ansi-wrap): Create ansi colors by passing the open and close codes. | [homepage](https://github.com/jonschlinkert/ansi-wrap "Create ansi colors by passing the open and close codes.") * [strip-color](https://www.npmjs.com/package/strip-color): Strip ANSI color codes from a string. No dependencies. | [homepage](https://github.com/jonschlinkert/strip-color "Strip ANSI color codes from a string. No dependencies.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 48 | [jonschlinkert](https://github.com/jonschlinkert) | | 42 | [doowb](https://github.com/doowb) | | 6 | [lukeed](https://github.com/lukeed) | | 2 | [Silic0nS0ldier](https://github.com/Silic0nS0ldier) | | 1 | [dwieeb](https://github.com/dwieeb) | | 1 | [jorgebucaran](https://github.com/jorgebucaran) | | 1 | [madhavarshney](https://github.com/madhavarshney) | | 1 | [chapterjason](https://github.com/chapterjason) | ### Author **Brian Woodward** * [GitHub Profile](https://github.com/doowb) * [Twitter Profile](https://twitter.com/doowb) * [LinkedIn Profile](https://linkedin.com/in/woodwardbrian) ### License Copyright © 2019, [Brian Woodward](https://github.com/doowb). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on July 01, 2019._ # cliui [![Build Status](https://travis-ci.org/yargs/cliui.svg)](https://travis-ci.org/yargs/cliui) [![Coverage Status](https://coveralls.io/repos/yargs/cliui/badge.svg?branch=)](https://coveralls.io/r/yargs/cliui?branch=) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) easily create complex multi-column command-line-interfaces. ## Example ```js var ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 2, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. 
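As a quick illustration of the `width` option (a minimal sketch; the column text, widths, and padding below are purely for demonstration):

```js
var ui = require('cliui')({ width: 60 })

ui.div(
  { text: '--help', width: 20, padding: [0, 4, 0, 4] },
  { text: 'Show this help output.', width: 36 }
)

console.log(ui.toString())
```

Because the instance was created with `width: 60`, the rendered row wraps its columns within 60 characters regardless of the current terminal size.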
### cliui({wrap: boolean})

Enable or disable the wrapping of text in a column.

### cliui.div(column, column, column)

Create a row with any number of columns. A column can either be a string, or an object with the following options:

* **text:** some text to place in the column.
* **width:** the width of a column.
* **align:** alignment, `right` or `center`.
* **padding:** `[top, right, bottom, left]`.
* **border:** should a border be placed around the div?

### cliui.span(column, column, column)

Similar to `div`, except the next row will be appended without a new line being created.

### cliui.resetOutput()

Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`.

# isobject

[![NPM version](https://img.shields.io/npm/v/isobject.svg?style=flat)](https://www.npmjs.com/package/isobject) [![NPM downloads](https://img.shields.io/npm/dm/isobject.svg?style=flat)](https://npmjs.org/package/isobject) [![Build Status](https://img.shields.io/travis/jonschlinkert/isobject.svg?style=flat)](https://travis-ci.org/jonschlinkert/isobject)

Returns true if the value is an object and not an array or null.

## Install

Install with [npm](https://www.npmjs.com/):

```sh
$ npm install isobject --save
```

Install with [bower](http://bower.io/):

```sh
$ bower install isobject
```

Use [is-plain-object](https://github.com/jonschlinkert/is-plain-object) if you want only objects that are created by the `Object` constructor.

## Usage

```js
var isObject = require('isobject');
```

**True**

All of the following return `true`:

```js
isObject({});
isObject(Object.create({}));
isObject(Object.create(Object.prototype));
isObject(Object.create(null));
isObject(new Foo);
isObject(/foo/);
```

**False**

All of the following return `false`:

```js
isObject();
isObject(function () {});
isObject(1);
isObject([]);
isObject(undefined);
isObject(null);
```

## Related projects

You might also be interested in these projects:

* [merge-deep](https://www.npmjs.com/package/merge-deep): Recursively merge values in a javascript object. | [homepage](https://github.com/jonschlinkert/merge-deep)
* [extend-shallow](https://www.npmjs.com/package/extend-shallow): Extend an object with the properties of additional objects. node.js/javascript util. | [homepage](https://github.com/jonschlinkert/extend-shallow)
* [is-plain-object](https://www.npmjs.com/package/is-plain-object): Returns true if an object was created by the `Object` constructor. | [homepage](https://github.com/jonschlinkert/is-plain-object)
* [kind-of](https://www.npmjs.com/package/kind-of): Get the native type of a value. | [homepage](https://github.com/jonschlinkert/kind-of)

## Contributing

Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](https://github.com/jonschlinkert/isobject/issues/new).

## Building docs

Generate readme and API documentation with [verb](https://github.com/verbose/verb):

```sh
$ npm install verb && npm run docs
```

Or, if [verb](https://github.com/verbose/verb) is installed globally:

```sh
$ verb
```

## Running tests

Install dev dependencies:

```sh
$ npm install -d && npm test
```

## Author

**Jon Schlinkert**

* [github/jonschlinkert](https://github.com/jonschlinkert)
* [twitter/jonschlinkert](http://twitter.com/jonschlinkert)

## License

Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert).
Released under the [MIT license](https://github.com/jonschlinkert/isobject/blob/master/LICENSE).

***

_This file was generated by [verb](https://github.com/verbose/verb), v0.9.0, on April 25, 2016._

# Near Bindings Generator

Transforms the AssemblyScript AST to serialize exported functions and add `encode` and `decode` functions for generating and parsing JSON strings.

## Using via CLI

After installing (`npm install nearprotocol/near-bindgen-as`), it can be added to the CLI arguments of the AssemblyScript compiler as follows:

```bash
asc <file> --transform near-bindgen-as ...
```

This module also adds a binary `near-asc` which adds the default arguments required to build near contracts as well as the transformer.

```bash
near-asc <input file> <output file>
```

## Using a script to compile

Another way is to add a file such as `asconfig.js`:

```js
const compile = require("near-bindgen-as/compiler").compile;

compile("assembly/index.ts", // input file
        "out/index.wasm",    // output file
        [
          // "-O1",          // Optional arguments
          "--debug",
          "--measure"
        ],
        // Prints out the final cli arguments passed to compiler.
        {verbose: true}
);
```

It can then be built with `node asconfig.js`. There is an example of this in the test directory.

# is-glob

[![NPM version](https://img.shields.io/npm/v/is-glob.svg?style=flat)](https://www.npmjs.com/package/is-glob) [![NPM monthly downloads](https://img.shields.io/npm/dm/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![NPM total downloads](https://img.shields.io/npm/dt/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![Build Status](https://img.shields.io/github/workflow/status/micromatch/is-glob/dev)](https://github.com/micromatch/is-glob/actions)

> Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a better user experience.

Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support.

## Install

Install with [npm](https://www.npmjs.com/):

```sh
$ npm install --save is-glob
```

You might also be interested in [is-valid-glob](https://github.com/jonschlinkert/is-valid-glob) and [has-glob](https://github.com/jonschlinkert/has-glob).
## Usage ```js var isGlob = require('is-glob'); ``` ### Default behavior **True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js'); isGlob('*.js'); isGlob('**/abc.js'); isGlob('abc/*.js'); isGlob('abc/(aaa|bbb).js'); isGlob('abc/[a-z].js'); isGlob('abc/{a,b}.js'); //=> true ``` Extglobs ```js isGlob('abc/@(a).js'); isGlob('abc/!(a).js'); isGlob('abc/+(a).js'); isGlob('abc/*(a).js'); isGlob('abc/?(a).js'); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('abc/\\@(a).js'); isGlob('abc/\\!(a).js'); isGlob('abc/\\+(a).js'); isGlob('abc/\\*(a).js'); isGlob('abc/\\?(a).js'); isGlob('\\!foo.js'); isGlob('\\*.js'); isGlob('\\*\\*/abc.js'); isGlob('abc/\\*.js'); isGlob('abc/\\(aaa|bbb).js'); isGlob('abc/\\[a-z].js'); isGlob('abc/\\{a,b}.js'); //=> false ``` Patterns that do not have glob patterns return `false`: ```js isGlob('abc.js'); isGlob('abc/def/ghi.js'); isGlob('foo.js'); isGlob('abc/@.js'); isGlob('abc/+.js'); isGlob('abc/?.js'); isGlob(); isGlob(null); //=> false ``` Arrays are also `false` (If you want to check if an array has a glob pattern, use [has-glob](https://github.com/jonschlinkert/has-glob)): ```js isGlob(['**/*.js']); isGlob(['foo.js']); //=> false ``` ### Option strict When `options.strict === false` the behavior is less strict in determining if a pattern is a glob. Meaning that some patterns that would return `false` may return `true`. This is done so that matching libraries like [micromatch](https://github.com/micromatch/micromatch) have a chance at determining if the pattern is a glob or not. **True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js', {strict: false}); isGlob('*.js', {strict: false}); isGlob('**/abc.js', {strict: false}); isGlob('abc/*.js', {strict: false}); isGlob('abc/(aaa|bbb).js', {strict: false}); isGlob('abc/[a-z].js', {strict: false}); isGlob('abc/{a,b}.js', {strict: false}); //=> true ``` Extglobs ```js isGlob('abc/@(a).js', {strict: false}); isGlob('abc/!(a).js', {strict: false}); isGlob('abc/+(a).js', {strict: false}); isGlob('abc/*(a).js', {strict: false}); isGlob('abc/?(a).js', {strict: false}); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('\\!foo.js', {strict: false}); isGlob('\\*.js', {strict: false}); isGlob('\\*\\*/abc.js', {strict: false}); isGlob('abc/\\*.js', {strict: false}); isGlob('abc/\\(aaa|bbb).js', {strict: false}); isGlob('abc/\\[a-z].js', {strict: false}); isGlob('abc/\\{a,b}.js', {strict: false}); //=> false ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. 
Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [assemble](https://www.npmjs.com/package/assemble): Get the rocks out of your socks! Assemble makes you fast at creating web projects… [more](https://github.com/assemble/assemble) | [homepage](https://github.com/assemble/assemble "Get the rocks out of your socks! Assemble makes you fast at creating web projects. Assemble is used by thousands of projects for rapid prototyping, creating themes, scaffolds, boilerplates, e-books, UI components, API documentation, blogs, building websit") * [base](https://www.npmjs.com/package/base): Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks | [homepage](https://github.com/node-base/base "Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks") * [update](https://www.npmjs.com/package/update): Be scalable! Update is a new, open source developer framework and CLI for automating updates… [more](https://github.com/update/update) | [homepage](https://github.com/update/update "Be scalable! Update is a new, open source developer framework and CLI for automating updates of any kind in code projects.") * [verb](https://www.npmjs.com/package/verb): Documentation generator for GitHub projects. Verb is extremely powerful, easy to use, and is used… [more](https://github.com/verbose/verb) | [homepage](https://github.com/verbose/verb "Documentation generator for GitHub projects. Verb is extremely powerful, easy to use, and is used on hundreds of projects of all sizes to generate everything from API docs to readmes.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 47 | [jonschlinkert](https://github.com/jonschlinkert) | | 5 | [doowb](https://github.com/doowb) | | 1 | [phated](https://github.com/phated) | | 1 | [danhper](https://github.com/danhper) | | 1 | [paulmillr](https://github.com/paulmillr) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on March 27, 2019._ # json-schema-traverse Traverse JSON Schema passing each schema object to callback [![Build Status](https://travis-ci.org/epoberezkin/json-schema-traverse.svg?branch=master)](https://travis-ci.org/epoberezkin/json-schema-traverse) [![npm version](https://badge.fury.io/js/json-schema-traverse.svg)](https://www.npmjs.com/package/json-schema-traverse) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master) ## Install ``` npm install json-schema-traverse ``` ## Usage ```javascript const traverse = require('json-schema-traverse'); const schema = { properties: { foo: {type: 'string'}, bar: {type: 'integer'} } }; traverse(schema, {cb}); // cb is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. 
{type: 'integer'} // Or: traverse(schema, {cb: {pre, post}}); // pre is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // // post is called 3 times with: // 1. {type: 'string'} // 2. {type: 'integer'} // 3. root schema ``` Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved, they are passed as is. Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed. Callback is passed these parameters: - _schema_: the current schema object - _JSON pointer_: from the root schema to the current schema object - _root schema_: the schema passed to `traverse` object - _parent JSON pointer_: from the root schema to the parent schema object (see below) - _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.) - _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema - _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'` ## Traverse objects in all unknown keywords ```javascript const traverse = require('json-schema-traverse'); const schema = { mySchema: { minimum: 1, maximum: 2 } }; traverse(schema, {allKeys: true, cb}); // cb is called 2 times with: // 1. root schema // 2. mySchema ``` Without option `allKeys: true` callback will be called only with root schema. ## License [MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE) Compiler frontend for node.js ============================= Usage ----- For an up to date list of available command line options, see: ``` $> asc --help ``` API --- The API accepts the same options as the CLI but also lets you override stdout and stderr and/or provide a callback. Example: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { asc.main([ "myModule.ts", "--binaryFile", "myModule.wasm", "--optimize", "--sourceMap", "--measure" ], { stdout: process.stdout, stderr: process.stderr }, function(err) { if (err) throw err; ... }); }); ``` Available command line options can also be obtained programmatically: ```js const options = require("assemblyscript/cli/asc.json"); ... ``` You can also compile a source string directly, for example in a browser environment: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { const { binary, text, stdout, stderr } = asc.compileString(`...`, { optimize: 2 }); }); ... ``` # Glob Match files using the patterns the shell uses, like stars and stuff. [![Build Status](https://travis-ci.org/isaacs/node-glob.svg?branch=master)](https://travis-ci.org/isaacs/node-glob/) [![Build Status](https://ci.appveyor.com/api/projects/status/kd7f3yftf7unxlsx?svg=true)](https://ci.appveyor.com/project/isaacs/node-glob) [![Coverage Status](https://coveralls.io/repos/isaacs/node-glob/badge.svg?branch=master&service=github)](https://coveralls.io/github/isaacs/node-glob?branch=master) This is a glob implementation in JavaScript. It uses the `minimatch` library to do its matching. 
![a fun cartoon logo made of glob characters](logo/glob.png) ## Usage Install with npm ``` npm i glob ``` ```javascript var glob = require("glob") // options is optional glob("**/*.js", options, function (er, files) { // files is an array of filenames. // If the `nonull` option is set, and nothing // was found, then files is ["**/*.js"] // er is an error object or null. }) ``` ## Glob Primer "Globs" are the patterns you type when you do stuff like `ls *.js` on the command line, or put `build/*` in a `.gitignore` file. Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with `{` and end with `}`, with any number of comma-delimited sections within. Braced sections may contain slash characters, so `a{/b/c,bcd}` would expand into `a/b/c` and `abcd`. The following characters have special magic meaning when used in a path portion: * `*` Matches 0 or more characters in a single path portion * `?` Matches 1 character * `[...]` Matches a range of characters, similar to a RegExp range. If the first character of the range is `!` or `^` then it matches any character not in the range. * `!(pattern|pattern|pattern)` Matches anything that does not match any of the patterns provided. * `?(pattern|pattern|pattern)` Matches zero or one occurrence of the patterns provided. * `+(pattern|pattern|pattern)` Matches one or more occurrences of the patterns provided. * `*(a|b|c)` Matches zero or more occurrences of the patterns provided * `@(pattern|pat*|pat?erN)` Matches exactly one of the patterns provided * `**` If a "globstar" is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories. ### Dots If a file or directory path portion has a `.` as the first character, then it will not match any glob pattern unless that pattern's corresponding path part also has a `.` as its first character. For example, the pattern `a/.*/c` would match the file at `a/.b/c`. However the pattern `a/*/c` would not, because `*` does not start with a dot character. You can make glob treat dots as normal characters by setting `dot:true` in the options. ### Basename Matching If you set `matchBase:true` in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, `*.js` would match `test/simple/basic.js`. ### Empty Sets If no matching files are found, then an empty array is returned. This differs from the shell, where the pattern itself is returned. For example: $ echo a*s*d*f a*s*d*f To get the bash-style behavior, set the `nonull:true` in the options. ### See Also: * `man sh` * `man bash` (Search for "Pattern Matching") * `man 3 fnmatch` * `man 5 gitignore` * [minimatch documentation](https://github.com/isaacs/minimatch) ## glob.hasMagic(pattern, [options]) Returns `true` if there are any special characters in the pattern, and `false` otherwise. Note that the options affect the results. If `noext:true` is set in the options object, then `+(a|b)` will not be considered a magic pattern. If the pattern has a brace expansion, like `a/{b/c,x/y}` then that is considered magical, unless `nobrace:true` is set in the options. ## glob(pattern, [options], cb) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * `cb` `{Function}` * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Perform an asynchronous glob search. 
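For example, here is a minimal sketch combining a pattern with a couple of the options documented below (the `cwd` and `ignore` values are purely illustrative):

```javascript
var glob = require("glob")

// search under ./src, skipping test files
glob("**/*.js", { cwd: "src", ignore: "**/*.test.js" }, function (er, files) {
  if (er) throw er
  // `files` contains paths relative to `cwd`, e.g. ["index.js", "lib/util.js"]
  console.log(files)
})
```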
## glob.sync(pattern, [options]) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * return: `{Array<String>}` filenames found matching the pattern Perform a synchronous glob search. ## Class: glob.Glob Create a Glob object by instantiating the `glob.Glob` class. ```javascript var Glob = require("glob").Glob var mg = new Glob(pattern, options, cb) ``` It's an EventEmitter, and starts walking the filesystem to find matches immediately. ### new glob.Glob(pattern, [options], [cb]) * `pattern` `{String}` pattern to search for * `options` `{Object}` * `cb` `{Function}` Called when an error occurs, or matches are found * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Note that if the `sync` flag is set in the options, then matches will be immediately available on the `g.found` member. ### Properties * `minimatch` The minimatch object that the glob uses. * `options` The options object passed in. * `aborted` Boolean which is set to true when calling `abort()`. There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls. * `cache` Convenience object. Each field has the following possible values: * `false` - Path does not exist * `true` - Path exists * `'FILE'` - Path exists, and is not a directory * `'DIR'` - Path exists, and is a directory * `[file, entries, ...]` - Path exists, is a directory, and the array value is the results of `fs.readdir` * `statCache` Cache of `fs.stat` results, to prevent statting the same path multiple times. * `symlinks` A record of which paths are symbolic links, which is relevant in resolving `**` patterns. * `realpathCache` An optional object which is passed to `fs.realpath` to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used. ### Events * `end` When the matching is finished, this is emitted with all the matches found. If the `nonull` option is set, and no match was found, then the `matches` list contains the original pattern. The matches are sorted, unless the `nosort` flag is set. * `match` Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath. * `error` Emitted when an unexpected error is encountered, or whenever any fs error occurs if `options.strict` is set. * `abort` When `abort()` is called, this event is raised. ### Methods * `pause` Temporarily stop the search * `resume` Resume the search * `abort` Stop the search forever ### Options All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications. All options are false by default, unless otherwise noted. All options are added to the Glob object, as well. If you are running many `glob` operations, you can pass a Glob object as the `options` argument to a subsequent operation to shortcut some `stat` and `readdir` calls. At the very least, you may pass in shared `symlinks`, `statCache`, `realpathCache`, and `cache` options, so that parallel glob operations will be sped up by sharing information about the filesystem. * `cwd` The current working directory in which to search. Defaults to `process.cwd()`. * `root` The place where patterns starting with `/` will be mounted onto. Defaults to `path.resolve(options.cwd, "/")` (`/` on Unix systems, and `C:\` or some such on Windows.) 
* `dot` Include `.dot` files in normal matches and `globstar` matches. Note that an explicit dot in a portion of the pattern will always match dot files. * `nomount` By default, a pattern starting with a forward-slash will be "mounted" onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior. * `mark` Add a `/` character to directory matches. Note that this requires additional stat calls. * `nosort` Don't sort the results. * `stat` Set to true to stat *all* results. This reduces performance somewhat, and is completely unnecessary, unless `readdir` is presumed to be an untrustworthy indicator of file existence. * `silent` When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the `silent` option to true to suppress these warnings. * `strict` When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the `strict` option to raise an error in these cases. * `cache` See `cache` property above. Pass in a previously generated cache object to save some fs calls. * `statCache` A cache of results of filesystem information, to prevent unnecessary stat calls. While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See "Race Conditions" below.) * `symlinks` A cache of known symbolic links. You may pass in a previously generated `symlinks` object to save `lstat` calls when resolving `**` matches. * `sync` DEPRECATED: use `glob.sync(pattern, opts)` instead. * `nounique` In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior. * `nonull` Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3). * `debug` Set to enable debug logging in minimatch and glob. * `nobrace` Do not expand `{a,b}` and `{1..3}` brace sets. * `noglobstar` Do not match `**` against multiple filenames. (Ie, treat it as a normal `*` instead.) * `noext` Do not match `+(a|b)` "extglob" patterns. * `nocase` Perform a case-insensitive match. Note: on case-insensitive filesystems, non-magic patterns will match by default, since `stat` and `readdir` will not raise errors. * `matchBase` Perform a basename-only match if the pattern does not contain any slash characters. That is, `*.js` would be treated as equivalent to `**/*.js`, matching all js files in all directories. * `nodir` Do not match directories, only files. (Note: to match *only* directories, simply put a `/` at the end of the pattern.) * `ignore` Add a pattern or an array of glob patterns to exclude matches. Note: `ignore` patterns are *always* in `dot:true` mode, regardless of any other settings. * `follow` Follow symlinked directories when expanding `**` patterns. Note that this can result in a lot of duplicate references in the presence of cyclic links. * `realpath` Set to true to call `fs.realpath` on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink) * `absolute` Set to true to always receive absolute paths for matched files. Unlike `realpath`, this also affects the values returned in the `match` event. 
* `fs` File-system object with Node's `fs` API. By default, the built-in `fs` module will be used. Set to a volume provided by a library like `memfs` to avoid using the "real" file-system. ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.3, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. Note that symlinked directories are not crawled as part of a `**`, though their contents may match against subsequent portions of the pattern. This prevents infinite loops and duplicates and the like. If an escaped pattern has no matches, and the `nonull` flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, `glob.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. ### Comments and Negation Previously, this module let you mark a pattern as a "comment" if it started with a `#` character, or a "negated" pattern if it started with a `!` character. These options were deprecated in version 5, and removed in version 6. To specify things that should not match, use the `ignore` option. ## Windows **Please only use forward-slashes in glob expressions.** Though windows uses either `/` or `\` as its path separator, only `/` characters are used by this glob implementation. You must use forward-slashes **only** in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators. Results from absolute patterns such as `/foo/*` are mounted onto the root setting using `path.join`. On windows, this will by default result in `/foo/*` matching `C:\foo\bar.txt`. ## Race Conditions Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such. As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result. As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls. Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem. ## Glob Logo Glob's logo was created by [Tanya Brassie](http://tanyabrassie.com/). Logo files can be found [here](https://github.com/isaacs/node-glob/tree/master/logo). The logo is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). 
## Contributing Any change to behavior (including bugfixes) must come with a test. Patches that fail tests or reduce performance will be rejected. ``` # to run tests npm test # to re-generate test fixtures npm run test-regen # to benchmark against bash/zsh npm run bench # to profile javascript npm run prof ``` ![](oh-my-glob.gif) Overview [![Build Status](https://travis-ci.org/lydell/js-tokens.svg?branch=master)](https://travis-ci.org/lydell/js-tokens) ======== A regex that tokenizes JavaScript. ```js var jsTokens = require("js-tokens").default var jsString = "var foo=opts.foo;\n..." jsString.match(jsTokens) // ["var", " ", "foo", "=", "opts", ".", "foo", ";", "\n", ...] ``` Installation ============ `npm install js-tokens` ```js import jsTokens from "js-tokens" // or: var jsTokens = require("js-tokens").default ``` Usage ===== ### `jsTokens` ### A regex with the `g` flag that matches JavaScript tokens. The regex _always_ matches, even invalid JavaScript and the empty string. The next match is always directly after the previous. ### `var token = matchToToken(match)` ### ```js import {matchToToken} from "js-tokens" // or: var matchToToken = require("js-tokens").matchToToken ``` Takes a `match` returned by `jsTokens.exec(string)`, and returns a `{type: String, value: String}` object. The following types are available: - string - comment - regex - number - name - punctuator - whitespace - invalid Multi-line comments and strings also have a `closed` property indicating if the token was closed or not (see below). Comments and strings both come in several flavors. To distinguish them, check if the token starts with `//`, `/*`, `'`, `"` or `` ` ``. Names are ECMAScript IdentifierNames, that is, including both identifiers and keywords. You may use [is-keyword-js] to tell them apart. Whitespace includes both line terminators and other whitespace. [is-keyword-js]: https://github.com/crissdev/is-keyword-js ECMAScript support ================== The intention is to always support the latest ECMAScript version whose feature set has been finalized. If adding support for a newer version requires changes, a new version with a major verion bump will be released. Currently, ECMAScript 2018 is supported. Invalid code handling ===================== Unterminated strings are still matched as strings. JavaScript strings cannot contain (unescaped) newlines, so unterminated strings simply end at the end of the line. Unterminated template strings can contain unescaped newlines, though, so they go on to the end of input. Unterminated multi-line comments are also still matched as comments. They simply go on to the end of the input. Unterminated regex literals are likely matched as division and whatever is inside the regex. Invalid ASCII characters have their own capturing group. Invalid non-ASCII characters are treated as names, to simplify the matching of names (except unicode spaces which are treated as whitespace). Note: See also the [ES2018](#es2018) section. Regex literals may contain invalid regex syntax. They are still matched as regex literals. They may also contain repeated regex flags, to keep the regex simple. Strings may contain invalid escape sequences. Limitations =========== Tokenizing JavaScript using regexes—in fact, _one single regex_—won’t be perfect. But that’s not the point either. You may compare jsTokens with [esprima] by using `esprima-compare.js`. See `npm run esprima-compare`! 
[esprima]: http://esprima.org/

### Template string interpolation ###

Template strings are matched as single tokens, from the starting `` ` `` to the ending `` ` ``, including interpolations (whose tokens are not matched individually). Matching template string interpolations requires recursive balancing of `{` and `}`—something that JavaScript regexes cannot do. Only one level of nesting is supported.

### Division and regex literals collision ###

Consider this example:

```js
var g = 9.82
var number = bar / 2/g

var regex = / 2/g
```

A human can easily understand that in the `number` line we’re dealing with division, and in the `regex` line we’re dealing with a regex literal. How come? Because humans can look at the whole code to put the `/` characters in context. A JavaScript regex cannot. It only sees forwards. (Well, ES2018 regexes can also look backwards. See the [ES2018](#es2018) section).

When the `jsTokens` regex scans through the above, it will see the following at the end of both the `number` and `regex` rows:

```js
/ 2/g
```

It is then impossible to know if that is a regex literal, or part of an expression dealing with division.

Here is a similar case:

```js
foo /= 2/g
foo(/= 2/g)
```

The first line divides the `foo` variable with `2/g`. The second line calls the `foo` function with the regex literal `/= 2/g`. Again, since `jsTokens` only sees forwards, it cannot tell the two cases apart.

There are some cases where we _can_ tell division and regex literals apart, though.

First off, we have the simple cases where there’s only one slash in the line:

```js
var foo = 2/g
foo /= 2
```

Regex literals cannot contain newlines, so the above cases are correctly identified as division. Things are only problematic when there is more than one non-comment slash in a single line.

Secondly, not every character is a valid regex flag.

```js
var number = bar / 2/e
```

The above example is also correctly identified as division, because `e` is not a valid regex flag. I initially wanted to future-proof by allowing `[a-zA-Z]*` (any letter) as flags, but it is not worth it since it increases the number of ambiguous cases. So only the standard `g`, `m`, `i`, `y` and `u` flags are allowed. This means that the above example will be identified as division as long as you don’t rename the `e` variable to some permutation of `gmiyus` 1 to 6 characters long.

Lastly, we can look _forward_ for information.

- If the token following what looks like a regex literal is not valid after a regex literal, but is valid in a division expression, then the regex literal is treated as division instead. For example, a flagless regex cannot be followed by a string, number or name, but all of those three can be the denominator of a division.
- Generally, if what looks like a regex literal is followed by an operator, the regex literal is treated as division instead. This is because regexes are seldom used with operators (such as `+`, `*`, `&&` and `==`), but division could likely be part of such an expression.

Please consult the regex source and the test cases for precise information on when regex or division is matched (should you need to know). In short, you could sum it up as: If the end of a statement looks like a regex literal (even if it isn’t), it will be treated as one. Otherwise it should work as expected (if you write sane code).

### ES2018 ###

ES2018 added some nice regex improvements to the language.
- [Unicode property escapes] should allow telling names and invalid non-ASCII characters apart without blowing up the regex size.
- [Lookbehind assertions] should allow telling division and regex literals apart in more cases.
- [Named capture groups] might simplify some things.

These things would be nice to do, but are not critical. They probably have to wait until the oldest maintained Node.js LTS release supports those features.

[Unicode property escapes]: http://2ality.com/2017/07/regexp-unicode-property-escapes.html
[Lookbehind assertions]: http://2ality.com/2017/05/regexp-lookbehind-assertions.html
[Named capture groups]: http://2ality.com/2017/05/regexp-named-capture-groups.html

License
=======

[MIT](LICENSE).

# debug

[![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors)

<img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png">

A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers.

## Installation

```bash
$ npm install debug
```

## Usage

`debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole.

Example [_app.js_](./examples/node/app.js):

```js
var debug = require('debug')('http')
  , http = require('http')
  , name = 'My App';

// fake app

debug('booting %o', name);

http.createServer(function(req, res){
  debug(req.method + ' ' + req.url);
  res.end('hello\n');
}).listen(3000, function(){
  debug('listening');
});

// fake worker of some kind

require('./worker');
```

Example [_worker.js_](./examples/node/worker.js):

```js
var a = require('debug')('worker:a')
  , b = require('debug')('worker:b');

function work() {
  a('doing lots of uninteresting work');
  setTimeout(work, Math.random() * 1000);
}

work();

function workb() {
  b('doing some work');
  setTimeout(workb, Math.random() * 2000);
}

workb();
```

The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples:

<img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png">
<img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png">
<img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png">

#### Windows note

On Windows the environment variable is set using the `set` command.

```cmd
set DEBUG=*,-not_this
```

Note that PowerShell uses different syntax to set environment variables.

```cmd
$env:DEBUG = "*,-not_this"
```

Then, run the program to be debugged as usual.
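If setting environment variables is inconvenient (for example inside a test runner or a REPL), namespaces can also be toggled from code. A minimal sketch, assuming the `enable()` helper exported by the module:

```js
var debug = require('debug');

// roughly equivalent to running with DEBUG=http,worker:*
// (assumes the module's enable() helper)
debug.enable('http,worker:*');

var log = debug('http');
log('this namespace was enabled programmatically');
```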
## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see when the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well, the "+NNNms" will show you how much time was spent between calls. <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debuggers you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. | | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. 
See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. 
[[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img 
src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # yallist Yet Another Linked List There are many doubly-linked list implementations like it, but this one is mine. For when an array would be too big, and a Map can't be iterated in reverse order. 
[![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist) ## basic usage ```javascript var yallist = require('yallist') var myList = yallist.create([1, 2, 3]) myList.push('foo') myList.unshift('bar') // of course pop() and shift() are there, too console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo'] myList.forEach(function (k) { // walk the list head to tail }) myList.forEachReverse(function (k, index, list) { // walk the list tail to head }) var myDoubledList = myList.map(function (k) { return k + k }) // now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo'] // mapReverse is also a thing var myDoubledListReverse = myList.mapReverse(function (k) { return k + k }) // ['foofoo', 6, 4, 2, 'barbar'] var reduced = myList.reduce(function (set, entry) { set += entry return set }, 'start') console.log(reduced) // 'startfoo123bar' ``` ## api The whole API is considered "public". Functions with the same name as an Array method work more or less the same way. There's reverse versions of most things because that's the point. ### Yallist Default export, the class that holds and manages a list. Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list. The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots. ### Yallist.create(..) Alias for Yallist function. Some people like factories. #### yallist.head The first node in the list #### yallist.tail The last node in the list #### yallist.length The number of nodes in the list. (Change this at your peril. It is not magic like Array length.) #### yallist.toArray() Convert the list to an array. #### yallist.forEach(fn, [thisp]) Call a function on each item in the list. #### yallist.forEachReverse(fn, [thisp]) Call a function on each item in the list, in reverse order. #### yallist.get(n) Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array. #### yallist.getReverse(n) Get the data at position `n`, counting from the tail. #### yallist.map(fn, thisp) Create a new Yallist with the result of calling the function on each item. #### yallist.mapReverse(fn, thisp) Same as `map`, but in reverse. #### yallist.pop() Get the data from the list tail, and remove the tail from the list. #### yallist.push(item, ...) Insert one or more items to the tail of the list. #### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. #### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. 
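The node-level operations described here and in the next entries (`pushNode`, `removeNode`) have no example above, so here is a minimal sketch of positional access and node surgery, assuming only the documented yallist API; the comments show the expected results:

```js
var Yallist = require('yallist')

var list = Yallist.create(['a', 'b', 'c', 'd'])

list.get(1)                      // 'b'  (an O(n) walk, as noted above)
list.slice(1, 3).toArray()       // ['b', 'c']  (slice returns a new Yallist)

// Move the current tail node to the front of the list
list.unshiftNode(list.tail)
list.toArray()                   // ['d', 'a', 'b', 'c']

// Remove a node while keeping head/tail/prev/next consistent
list.removeNode(list.head.next)  // drops 'a'
list.toArray()                   // ['d', 'b', 'c']
```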
#### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) # once Only call a function once. ## usage ```javascript var once = require('once') function load (file, cb) { cb = once(cb) loader.load('file') loader.once('load', cb) loader.once('error', cb) } ``` Or add to the Function.prototype in a responsible way: ```javascript // only has to be done once require('once').proto() function load (file, cb) { cb = cb.once() loader.load('file') loader.once('load', cb) loader.once('error', cb) } ``` Ironically, the prototype feature makes this module twice as complicated as necessary. To check whether your function has been called, use `fn.called`. Once the function is called for the first time, the return value of the original function is saved in `fn.value` and subsequent calls will continue to return this value. ```javascript var once = require('once') function load (cb) { cb = once(cb) var stream = createStream() stream.once('data', cb) stream.once('end', function () { if (!cb.called) cb(new Error('not found')) }) } ``` ## `once.strict(func)` Throw an error if the function is called twice. Some functions are expected to be called only once. Using `once` for them would potentially hide logical errors. In the example below, the `greet` function has to call the callback only once: ```javascript function greet (name, cb) { // return is missing from the if statement // when no name is passed, the callback is called twice if (!name) cb('Hello anonymous') cb('Hello ' + name) } function log (msg) { console.log(msg) } // this will print 'Hello anonymous' but the logical error will be missed greet(null, once(log)) // once.strict will print 'Hello anonymous' and throw an error when the callback is called a second time greet(null, once.strict(log)) ``` JS-YAML - YAML 1.2 parser / writer for JavaScript ================================================= [![Build Status](https://travis-ci.org/nodeca/js-yaml.svg?branch=master)](https://travis-ci.org/nodeca/js-yaml) [![NPM version](https://img.shields.io/npm/v/js-yaml.svg)](https://www.npmjs.org/package/js-yaml) __[Online Demo](http://nodeca.github.com/js-yaml/)__ This is an implementation of [YAML](http://yaml.org/), a human-friendly data serialization language. Started as a [PyYAML](http://pyyaml.org/) port, it was completely rewritten from scratch. Now it's very fast, and supports the 1.2 spec.
Installation ------------ ### YAML module for node.js ``` npm install js-yaml ``` ### CLI executable If you want to inspect your YAML files from CLI, install js-yaml globally: ``` npm install -g js-yaml ``` #### Usage ``` usage: js-yaml [-h] [-v] [-c] [-t] file Positional arguments: file File with YAML document(s) Optional arguments: -h, --help Show this help message and exit. -v, --version Show program's version number and exit. -c, --compact Display errors in compact mode -t, --trace Show stack trace on error ``` ### Bundled YAML library for browsers ``` html <!-- esprima required only for !!js/function --> <script src="esprima.js"></script> <script src="js-yaml.min.js"></script> <script type="text/javascript"> var doc = jsyaml.load('greeting: hello\nname: world'); </script> ``` Browser support was done mostly for the online demo. If you find any errors - feel free to send pull requests with fixes. Also note, that IE and other old browsers needs [es5-shims](https://github.com/kriskowal/es5-shim) to operate. Notes: 1. We have no resources to support browserified version. Don't expect it to be well tested. Don't expect fast fixes if something goes wrong there. 2. `!!js/function` in browser bundle will not work by default. If you really need it - load `esprima` parser first (via amd or directly). 3. `!!bin` in browser will return `Array`, because browsers do not support node.js `Buffer` and adding Buffer shims is completely useless on practice. API --- Here we cover the most 'useful' methods. If you need advanced details (creating your own tags), see [wiki](https://github.com/nodeca/js-yaml/wiki) and [examples](https://github.com/nodeca/js-yaml/tree/master/examples) for more info. ``` javascript const yaml = require('js-yaml'); const fs = require('fs'); // Get document, or throw exception on error try { const doc = yaml.safeLoad(fs.readFileSync('/home/ixti/example.yml', 'utf8')); console.log(doc); } catch (e) { console.log(e); } ``` ### safeLoad (string [ , options ]) **Recommended loading way.** Parses `string` as single YAML document. Returns either a plain object, a string or `undefined`, or throws `YAMLException` on error. By default, does not support regexps, functions and undefined. This method is safe for untrusted data. options: - `filename` _(default: null)_ - string to be used as a file path in error/warning messages. - `onWarning` _(default: null)_ - function to call on warning messages. Loader will call this function with an instance of `YAMLException` for each warning. - `schema` _(default: `DEFAULT_SAFE_SCHEMA`)_ - specifies a schema to use. - `FAILSAFE_SCHEMA` - only strings, arrays and plain objects: http://www.yaml.org/spec/1.2/spec.html#id2802346 - `JSON_SCHEMA` - all JSON-supported types: http://www.yaml.org/spec/1.2/spec.html#id2803231 - `CORE_SCHEMA` - same as `JSON_SCHEMA`: http://www.yaml.org/spec/1.2/spec.html#id2804923 - `DEFAULT_SAFE_SCHEMA` - all supported YAML types, without unsafe ones (`!!js/undefined`, `!!js/regexp` and `!!js/function`): http://yaml.org/type/ - `DEFAULT_FULL_SCHEMA` - all supported YAML types. - `json` _(default: false)_ - compatibility with JSON.parse behaviour. If true, then duplicate keys in a mapping will override values rather than throwing an error. NOTE: This function **does not** understand multi-document sources, it throws exception on those. NOTE: JS-YAML **does not** support schema-specific tag resolution restrictions. So, the JSON schema is not as strictly defined in the YAML specification. 
It allows numbers in any notation, accepts `Null` and `NULL` as `null`, etc. The core schema also has no such restrictions. It allows binary notation for integers. ### load (string [ , options ]) **Use with care with untrusted sources**. The same as `safeLoad()` but uses `DEFAULT_FULL_SCHEMA` by default - adds some JavaScript-specific types: `!!js/function`, `!!js/regexp` and `!!js/undefined`. For untrusted sources, you must additionally validate object structure to avoid injections: ``` javascript const untrusted_code = '"toString": !<tag:yaml.org,2002:js/function> "function (){very_evil_thing();}"'; // I'm just converting that string, what could possibly go wrong? require('js-yaml').load(untrusted_code) + '' ``` ### safeLoadAll (string [, iterator] [, options ]) Same as `safeLoad()`, but understands multi-document sources. Applies `iterator` to each document if specified, or returns an array of documents. ``` javascript const yaml = require('js-yaml'); yaml.safeLoadAll(data, function (doc) { console.log(doc); }); ``` ### loadAll (string [, iterator] [ , options ]) Same as `safeLoadAll()` but uses `DEFAULT_FULL_SCHEMA` by default. ### safeDump (object [ , options ]) Serializes `object` as a YAML document. Uses `DEFAULT_SAFE_SCHEMA`, so it will throw an exception if you try to dump regexps or functions. However, you can disable exceptions by setting the `skipInvalid` option to `true`. options: - `indent` _(default: 2)_ - indentation width to use (in spaces). - `noArrayIndent` _(default: false)_ - when true, will not add an indentation level to array elements - `skipInvalid` _(default: false)_ - do not throw on invalid types (like function in the safe schema) and skip pairs and single values with such types. - `flowLevel` (default: -1) - specifies level of nesting, when to switch from block to flow style for collections. -1 means block style everywhere - `styles` - "tag" => "style" map. Each tag may have its own set of styles. - `schema` _(default: `DEFAULT_SAFE_SCHEMA`)_ specifies a schema to use. - `sortKeys` _(default: `false`)_ - if `true`, sort keys when dumping YAML. If a function, use the function to sort the keys. - `lineWidth` _(default: `80`)_ - set max line width. - `noRefs` _(default: `false`)_ - if `true`, don't convert duplicate objects into references - `noCompatMode` _(default: `false`)_ - if `true`, don't try to be compatible with older yaml versions. Currently: don't quote "yes", "no" and so on, as required for YAML 1.1 - `condenseFlow` _(default: `false`)_ - if `true`, flow sequences will be condensed, omitting the space between `a, b`. E.g. `'[a,b]'`, and omitting the space between `key: value` and quoting the key. E.g. `'{"a":b}'` Can be useful when using yaml for pretty URL query params as spaces are %-encoded. The following table shows the available styles (e.g. "canonical", "binary", ...) for each tag (e.g. !!null, !!int, ...).
YAML output is shown on the right side after `=>` (default setting) or `->`: ``` none !!null "canonical" -> "~" "lowercase" => "null" "uppercase" -> "NULL" "camelcase" -> "Null" !!int "binary" -> "0b1", "0b101010", "0b1110001111010" "octal" -> "01", "052", "016172" "decimal" => "1", "42", "7290" "hexadecimal" -> "0x1", "0x2A", "0x1C7A" !!bool "lowercase" => "true", "false" "uppercase" -> "TRUE", "FALSE" "camelcase" -> "True", "False" !!float "lowercase" => ".nan", '.inf' "uppercase" -> ".NAN", '.INF' "camelcase" -> ".NaN", '.Inf' ``` Example: ``` javascript safeDump (object, { 'styles': { '!!null': 'canonical' // dump null as ~ }, 'sortKeys': true // sort object keys }); ``` ### dump (object [ , options ]) Same as `safeDump()` but without limits (uses `DEFAULT_FULL_SCHEMA` by default). Supported YAML types -------------------- The list of standard YAML tags and corresponding JavaScript types. See also [YAML tag discussion](http://pyyaml.org/wiki/YAMLTagDiscussion) and [YAML types repository](http://yaml.org/type/). ``` !!null '' # null !!bool 'yes' # bool !!int '3...' # number !!float '3.14...' # number !!binary '...base64...' # buffer !!timestamp 'YYYY-...' # date !!omap [ ... ] # array of key-value pairs !!pairs [ ... ] # array of array pairs !!set { ... } # array of objects with given keys and null values !!str '...' # string !!seq [ ... ] # array !!map { ... } # object ``` **JavaScript-specific tags** ``` !!js/regexp /pattern/gim # RegExp !!js/undefined '' # Undefined !!js/function 'function () {...}' # Function ``` Caveats ------- Note that if you use arrays or objects as keys in JS-YAML: JS does not allow objects or arrays as keys, so they are stringified (by calling their `toString()` method) at the moment they are added. ``` yaml --- ? [ foo, bar ] : - baz ? { foo: bar } : - baz - baz ``` ``` javascript { "foo,bar": ["baz"], "[object Object]": ["baz", "baz"] } ``` Also, reading of properties on implicit block mapping keys is not supported yet. So, the following YAML document cannot be loaded. ``` yaml &anchor foo: foo: bar *anchor: duplicate key baz: bat *anchor: duplicate key ``` js-yaml for enterprise ---------------------- Available as part of the Tidelift Subscription The maintainers of js-yaml and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use.
[Learn more.](https://tidelift.com/subscription/pkg/npm-js-yaml?utm_source=npm-js-yaml&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) <table><thead> <tr> <th>Linux</th> <th>OS X</th> <th>Windows</th> <th>Coverage</th> <th>Downloads</th> </tr> </thead><tbody><tr> <td colspan="2" align="center"> <a href="https://travis-ci.org/kaelzhang/node-ignore"> <img src="https://travis-ci.org/kaelzhang/node-ignore.svg?branch=master" alt="Build Status" /></a> </td> <td align="center"> <a href="https://ci.appveyor.com/project/kaelzhang/node-ignore"> <img src="https://ci.appveyor.com/api/projects/status/github/kaelzhang/node-ignore?branch=master&svg=true" alt="Windows Build Status" /></a> </td> <td align="center"> <a href="https://codecov.io/gh/kaelzhang/node-ignore"> <img src="https://codecov.io/gh/kaelzhang/node-ignore/branch/master/graph/badge.svg" alt="Coverage Status" /></a> </td> <td align="center"> <a href="https://www.npmjs.org/package/ignore"> <img src="http://img.shields.io/npm/dm/ignore.svg" alt="npm module downloads per month" /></a> </td> </tr></tbody></table> # ignore `ignore` is a manager, filter and parser implemented in pure JavaScript according to the .gitignore [spec](http://git-scm.com/docs/gitignore). Note that [`minimatch`](https://www.npmjs.org/package/minimatch) does not follow gitignore semantics. To filter filenames according to a .gitignore file, I recommend this module. ##### Tested on - Linux + Node: `0.8` - `7.x` - Windows + Node: `0.10` - `7.x`; node < `0.10` is not tested due to the lack of AppVeyor support. Actually, `ignore` does not depend on any particular version of node. Since `4.0.0`, ignore no longer supports `node < 6` by default; to use it on node < 6, `require('ignore/legacy')`. For details, see [CHANGELOG](https://github.com/kaelzhang/node-ignore/blob/master/CHANGELOG.md). ## Table Of Main Contents - [Usage](#usage) - [`Pathname` Conventions](#pathname-conventions) - [Guide for 2.x -> 3.x](#upgrade-2x---3x) - [Guide for 3.x -> 4.x](#upgrade-3x---4x) - See Also: - [`glob-gitignore`](https://www.npmjs.com/package/glob-gitignore) matches files using patterns and filters them according to gitignore rules. ## Usage ```js import ignore from 'ignore' const ig = ignore().add(['.abc/*', '!.abc/d/']) ``` ### Filter the given paths ```js const paths = [ '.abc/a.js', // filtered out '.abc/d/e.js' // included ] ig.filter(paths) // ['.abc/d/e.js'] ig.ignores('.abc/a.js') // true ``` ### As the filter function ```js paths.filter(ig.createFilter()); // ['.abc/d/e.js'] ``` ### Win32 paths will be handled ```js ig.filter(['.abc\\a.js', '.abc\\d\\e.js']) // if the code above runs on windows, the result will be // ['.abc\\d\\e.js'] ``` ## Why another ignore? - `ignore` is a standalone module, and is much simpler so that it can easily work with other programs, unlike [isaacs](https://npmjs.org/~isaacs)'s [fstream-ignore](https://npmjs.org/package/fstream-ignore), which must work with the modules of the fstream family. - `ignore` only contains utility methods to filter paths according to the specified ignore rules, so - `ignore` never tries to find out ignore rules by traversing directories or fetching from git configurations. - `ignore` doesn't care about sub-modules of git projects. - Exactly according to [gitignore man page](http://git-scm.com/docs/gitignore), fixes some known matching issues of fstream-ignore, such as: - '`/*.js`' should only match '`a.js`', but not '`abc/a.js`'. - '`**/foo`' should match '`foo`' anywhere.
- Prevent re-including a file if a parent directory of that file is excluded. - Handle trailing whitespaces: - `'a '`(one space) should not match `'a '`(two spaces). - `'a \ '` matches `'a '` - All test cases are verified with the result of `git check-ignore`. # Methods ## .add(pattern: string | Ignore): this ## .add(patterns: Array<string | Ignore>): this - **pattern** `String | Ignore` An ignore pattern string, or the `Ignore` instance - **patterns** `Array<String | Ignore>` Array of ignore patterns. Adds a rule or several rules to the current manager. Returns `this` Notice that a line starting with `'#'`(hash) is treated as a comment. Put a backslash (`'\'`) in front of the first hash for patterns that begin with a hash, if you want to ignore a file with a hash at the beginning of the filename. ```js ignore().add('#abc').ignores('#abc') // false ignore().add('\#abc').ignores('#abc') // true ``` `pattern` could either be a line of ignore pattern or a string of multiple ignore patterns, which means we could just `ignore().add()` the content of a ignore file: ```js ignore() .add(fs.readFileSync(filenameOfGitignore).toString()) .filter(filenames) ``` `pattern` could also be an `ignore` instance, so that we could easily inherit the rules of another `Ignore` instance. ## <strike>.addIgnoreFile(path)</strike> REMOVED in `3.x` for now. To upgrade `ignore@2.x` up to `3.x`, use ```js import fs from 'fs' if (fs.existsSync(filename)) { ignore().add(fs.readFileSync(filename).toString()) } ``` instead. ## .filter(paths: Array<Pathname>): Array<Pathname> ```ts type Pathname = string ``` Filters the given array of pathnames, and returns the filtered array. - **paths** `Array.<Pathname>` The array of `pathname`s to be filtered. ### `Pathname` Conventions: #### 1. `Pathname` should be a `path.relative()`d pathname `Pathname` should be a string that have been `path.join()`ed, or the return value of `path.relative()` to the current directory. ```js // WRONG ig.ignores('./abc') // WRONG, for it will never happen. // If the gitignore rule locates at the root directory, // `'/abc'` should be changed to `'abc'`. // ``` // path.relative('/', '/abc') -> 'abc' // ``` ig.ignores('/abc') // Right ig.ignores('abc') // Right ig.ignores(path.join('./abc')) // path.join('./abc') -> 'abc' ``` In other words, each `Pathname` here should be a relative path to the directory of the gitignore rules. Suppose the dir structure is: ``` /path/to/your/repo |-- a | |-- a.js | |-- .b | |-- .c |-- .DS_store ``` Then the `paths` might be like this: ```js [ 'a/a.js' '.b', '.c/.DS_store' ] ``` Usually, you could use [`glob`](http://npmjs.org/package/glob) with `option.mark = true` to fetch the structure of the current directory: ```js import glob from 'glob' glob('**', { // Adds a / character to directory matches. mark: true }, (err, files) => { if (err) { return console.error(err) } let filtered = ignore().add(patterns).filter(files) console.log(filtered) }) ``` #### 2. filenames and dirnames `node-ignore` does NO `fs.stat` during path matching, so for the example below: ```js ig.add('config/') // `ig` does NOT know if 'config' is a normal file, directory or something ig.ignores('config') // And it returns `false` ig.ignores('config/') // returns `true` ``` Specially for people who develop some library based on `node-ignore`, it is important to understand that. ## .ignores(pathname: Pathname): boolean > new in 3.2.0 Returns `Boolean` whether `pathname` should be ignored. 
```js ig.ignores('.abc/a.js') // true ``` ## .createFilter() Creates a filter function which could filter an array of paths with `Array.prototype.filter`. Returns `function(path)` the filter function. ## `options.ignorecase` since 4.0.0 Similar as the `core.ignorecase` option of [git-config](https://git-scm.com/docs/git-config), `node-ignore` will be case insensitive if `options.ignorecase` is set to `true` (default value), otherwise case sensitive. ```js const ig = ignore({ ignorecase: false }) ig.add('*.png') ig.ignores('*.PNG') // false ``` **** # Upgrade Guide ## Upgrade 2.x -> 3.x - All `options` of 2.x are unnecessary and removed, so just remove them. - `ignore()` instance is no longer an [`EventEmitter`](nodejs.org/api/events.html), and all events are unnecessary and removed. - `.addIgnoreFile()` is removed, see the [.addIgnoreFile](#addignorefilepath) section for details. ## Upgrade 3.x -> 4.x Since `4.0.0`, `ignore` will no longer support node < 6, to use `ignore` in node < 6: ```js var ignore = require('ignore/legacy') ``` **** # Collaborators - [@whitecolor](https://github.com/whitecolor) *Alex* - [@SamyPesse](https://github.com/SamyPesse) *Samy Pessé* - [@azproduction](https://github.com/azproduction) *Mikhail Davydov* - [@TrySound](https://github.com/TrySound) *Bogdan Chadkin* - [@JanMattner](https://github.com/JanMattner) *Jan Mattner* - [@ntwb](https://github.com/ntwb) *Stephen Edgar* - [@kasperisager](https://github.com/kasperisager) *Kasper Isager* - [@sandersn](https://github.com/sandersn) *Nathan Shively-Sanders*
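Putting the pieces above together — a short sketch that combines `add`, `ignores`, `filter`, `createFilter` and `options.ignorecase`; the `.gitignore` path is an assumption and the filter results naturally depend on the rules you actually load:

```js
const fs = require('fs')
const ignore = require('ignore')

const ig = ignore({ ignorecase: false })          // case-sensitive; the library default is ignorecase: true
  .add(fs.readFileSync('.gitignore').toString())  // a whole ignore file at once
  .add(['*.log', '!important.log'])               // plus extra rules in gitignore syntax

// pathnames must be relative, path.join()-style (see the Pathname conventions above)
const paths = ['src/index.js', 'debug.log', 'important.log']

ig.ignores('debug.log')          // true, because of '*.log'
ig.filter(paths)                 // ['src/index.js', 'important.log'] (unless .gitignore says otherwise)
paths.filter(ig.createFilter())  // same result via Array.prototype.filter
```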
nearwatch_nearnftbot
.env README.md bot.js db .txt near.js package.json
## Introduction NEAR NFT viewer bot <a href="https://t.me/nearnftbot">@nearnftbot</a> helps you quickly access the NFTs in your own and other people's wallets. Inline mode allows you to share any NFT via Telegram. You can also transfer your wallet's NFTs to another account. ### Supports - <a href="https://pluminite.com/#/">PLUMINITE</a> - <a href="https://www.mintbase.io">MINTBASE</a> - <a href="https://paras.id">PARAS.ID</a> - <a href="https://near.watch">Near.Watch</a> - Other contracts implementing <a href="https://nomicon.io/Standards/NonFungibleToken/Enumeration.html">NEP-181</a> enumeration ### Installation ``` $ npm install ``` ### Usage ``` $ node bot.js ``` Send a NEAR wallet address to the bot to view its NFTs. Type "@nearnftbot yourwallet.near" in any chat to view and send NFTs in inline mode. Send an NFT image to the bot to search for its owners. Reply to an NFT message with the recipient's address to transfer the NFT (see the videos below). ### Video https://youtu.be/f3FscQ-HVfE https://youtu.be/5FV-3Y2wDd4 ### Support <a href="https://t.me/nearwatch">Near.Watch technical support group (telegram)</a>
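For readers curious how such a lookup works under the hood: the bot relies on the NEP-181 enumeration interface listed above. The sketch below is illustrative only and is not the bot's actual code (that lives in `near.js` / `bot.js`); the network settings, the example contract ID, and the positional `viewFunction(contractId, method, args)` signature of older near-api-js releases are assumptions:

```js
const { connect, keyStores } = require('near-api-js')

async function nftsForOwner(ownerId, nftContractId) {
  const near = await connect({
    networkId: 'mainnet',
    nodeUrl: 'https://rpc.mainnet.near.org',
    keyStore: new keyStores.InMemoryKeyStore(), // view calls need no signing key
  })
  const viewer = await near.account(ownerId)
  // NEP-181 enumeration: list tokens owned by `ownerId` on `nftContractId`
  return viewer.viewFunction(nftContractId, 'nft_tokens_for_owner', {
    account_id: ownerId,
    from_index: '0',
    limit: 20,
  })
}

nftsForOwner('yourwallet.near', 'x.paras.near').then(console.log)
```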
gagdiez_loyalty-program-with-ft
.eslintrc.yml .github ISSUE_TEMPLATE 01_BUG_REPORT.md 02_FEATURE_REQUEST.md 03_CODEBASE_IMPROVEMENT.md 04_SUPPORT_QUESTION.md config.yml PULL_REQUEST_TEMPLATE.md labels.yml workflows codeql.yml labels.yml lock.yml pr-labels.yml stale.yml .gitpod.yml README.md contracts Cargo.toml build.sh deploy.sh fungible-token Cargo.toml src lib.rs manager-contract Cargo.toml src lib.rs reward-factory Cargo.toml src deploy.rs lib.rs docs CODE_OF_CONDUCT.md CONTRIBUTING.md SECURITY.md frontend App.js assets global.css logo-black.svg logo-white.svg index.html index.js near-ft-factory.js near-ft-token.js near-interface.js near-wallet.js package-lock.json package.json start.sh ui-components.js integration-tests package-lock.json package.json src main.ava.ts package-lock.json package.json
<h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. 
When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. 
- Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good practices of security, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._
jdnichollsc_my-first-near
README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts counter.spec.ts example.spec.ts as_types.d.ts counter.ts index.ts tsconfig.json package.json
# NEAR CLI - [Setup](https://github.com/near/near-cli#setup) - [Examples from NEAR, Inc.](https://github.com/near-examples) - [NEAR Examples](https://examples.near.org/) - [NEAR Protocol Specification](https://nomicon.io/) - [NEAR Explorer](https://explorer.testnet.near.org/) - Explore the NEAR Blockchain ## Commands - `npm i near-cli -g` - `near login` - Account ID: jdnichollsc.testnet - `near state jdnichollsc.testnet` - Get info of an account - `near keys jdnichollsc.testnet` - Get keys of an account (Check if the account is FullAccess, etc) - `yarn asb` - Compile code from **assembly** folder to WebAssembly - `yarn asb --wat` - Generate **.wat** file to see the binary files - `yarn asp --init` - Create library with unit tests - `yarn asp` - Run unit tests ## Packages - `aspect` - Framework para testing my-first-near-nft Smart Contract ================== A [smart contract] written in [AssemblyScript] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install [Node.js] ≥ 12 Exploring The Code ================== 1. The main smart contract code lives in `assembly/index.ts`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard AssemblyScript tests using [as-pect]. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [AssemblyScript]: https://www.assemblyscript.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [as-pect]: https://www.npmjs.com/package/@as-pect/cli my-first-near-nft ================== This [React] app was initialized with [create-near-app] Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `yarn install` 3. Run the local development server: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. Exploring The Code ================== 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2. The frontend code lives in the `/src` folder. `/src/index.html` is a great place to start exploring. Note that it loads in `/src/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `yarn run test`. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contract gets deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. 
It was installed to the local `node_modules` folder when you ran `yarn install`, but for best ergonomics you may want to install it globally: yarn install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `my-first-near-nft.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `my-first-near-nft.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account my-first-near-nft.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'my-first-near-nft.YOUR-NAME.testnet' Step 3: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. builds & deploys smart contract to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [React]: https://reactjs.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/docs/concepts/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages
mrpejker_nearspring1
.gitpod.yml README.md contract Cargo.toml README.md compile.js src lib.rs package.json src assets logo-black.svg logo-white.svg config.js global.css index.html index.js main.test.js utils.js wallet login index.html
nearspring1 Smart Contract ================== A [smart contract] written in [Rust] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install Rust with [correct target] Exploring The Code ================== 1. The main smart contract code lives in `src/lib.rs`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [Rust]: https://www.rust-lang.org/ [create-near-app]: https://github.com/near/create-near-app [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html nearspring1 ================== This app was initialized with [create-near-app] Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `yarn install` 3. Run the local development server: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. Exploring The Code ================== 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2. The frontend code lives in the `/src` folder. `/src/index.html` is a great place to start exploring. Note that it loads in `/src/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `yarn run test`. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contract gets deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `yarn install`, but for best ergonomics you may want to install it globally: yarn install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `nearspring1.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `nearspring1.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account nearspring1.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. 
Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'nearspring1.YOUR-NAME.testnet' Step 3: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. builds & deploys smart contract to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/docs/concepts/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages
mint-nguyen_near-transaction-example
package.json send-tokens-deconstructed.js send-tokens-easy.js
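There is no README in this repository, only the two scripts listed above. As a rough, hypothetical illustration of what a "send tokens the easy way" script usually looks like with near-api-js (the account IDs, amount, and network settings below are placeholders, not values taken from the repo):

```js
const os = require('os')
const path = require('path')
const { connect, keyStores, utils } = require('near-api-js')

async function main() {
  // reuse the credentials that `near login` stores under ~/.near-credentials
  const keyStore = new keyStores.UnencryptedFileSystemKeyStore(
    path.join(os.homedir(), '.near-credentials')
  )
  const near = await connect({
    networkId: 'testnet',
    nodeUrl: 'https://rpc.testnet.near.org',
    keyStore,
  })
  const sender = await near.account('sender.testnet')
  // amounts are denominated in yoctoNEAR; parseNearAmount converts from NEAR
  await sender.sendMoney('receiver.testnet', utils.format.parseNearAmount('1.5'))
}

main().catch(console.error)
```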
mlibre_cheat-sheet
.github workflows npm.yml .vscode settings.json Contents Health.md Lovely Tools.md ai generative ai.md langchain.md lobe-chat.md prompt.md readme.md blockchain Bitcoin btc.svg readme.md Cryptography readme.md Ethereum CLI.md Quorum readme.md assets eth.svg readme.md Hyperledger Getting Start.md readme.md LBRY readme.md NEAR CLI.md SDK.md readme.md simple-exchange MLB1-contract Cargo.toml README.md build.bat build.sh ft Cargo.toml src lib.rs rustfmt.toml test-contract-defi Cargo.toml src lib.rs tests sim main.rs no_macros.rs utils.rs with_macros.rs exchange-contract Cargo.toml src lib.rs readme.md web-ui babel.config.js package.json src App.js assets logo-black.svg logo-white.svg config.js global.css index.html index.js utils.js wallet login index.html Polygon PoS Bridge .eslintrc.js erc-1155-pos-brdige.js erc-20-pos-brdige.js erc-721-pos-brdige.js erc1155-pos-bridge.md erc20-pos-bridge.md erc721-pos-bridge.md package.json Smart Contracts .eslintrc.js bin Voter_abi.json main.js package.json readme.md readme.md readme.md docusaurus.md linux access.md automation.md disk-file.md log-monitoring.md multimedia.md other.md processes.md readme.md shell-scripting.md systemd.md text.md tools.md network basic.md dns.md other.md readme.md ssh.md vpn.md raspberry pi.md readme.md vscode.md docs 404.html Health index.html Lovely Tools index.html ai generative ai index.html index.html langchain index.html lobe-chat index.html prompt index.html assets css styles.f97fa848.css js 00386a24.dc9b2acb.js 0267278e.b3620519.js 05bd16ad.34fa1e5b.js 06c16bc1.2ee1aa9a.js 06def0a1.7278a8b5.js 0814b3ee.baefd90d.js 089ea00c.ec32d2ed.js 0d3f1f56.76f326e9.js 0d726ec9.1a73199b.js 11cdba9e.92543471.js 12c9fd0c.5756c226.js 12d34978.f67d2fbf.js 144286ec.4bd49ecc.js 14802848.6bdad7e2.js 16e94c2c.6d5c89c1.js 17896441.4486dbd3.js 182cc002.38f061ad.js 1b6a7de4.d38a9c60.js 1be4aa9f.e9baf098.js 1be78505.914d05d6.js 1f6c1e16.cc488c01.js 20ac2874.f8c500e3.js 217e7ab6.c40fd1d4.js 2572.40e853bf.js 26445734.b364d30b.js 28d9a6fb.41fab222.js 2b9b2b35.b2ce0b08.js 2c31e1fc.a17cf360.js 2d53f62c.40b6676a.js 2f46f4f6.fdc2225e.js 3328306b.8c4ea2b2.js 3720c009.25aff8eb.js 3a026971.d87f055f.js 3b33162a.de5789ec.js 3b3f9a17.2f6d3d7a.js 3c604bc8.d70a0a0d.js 3c7a1985.e2763ab9.js 439794d8.e69d3cd9.js 442b48ab.098d5081.js 44579dd5.84eab808.js 4611.77298230.js 4611.77298230.js.LICENSE.txt 4972.eae2a020.js 4c027927.458208ec.js 4f2b7581.28ab9649.js 4fae5413.cca50e5e.js 4fbaae46.9968eba8.js 52bb9c04.ce0b01ea.js 52e2a80b.11b3d258.js 53d607c0.ea6fb117.js 53f55093.73fda190.js 5447f8a1.c4bbb299.js 54510f97.58200f6f.js 549b1afd.4e541fc6.js 55960ee5.ab8df0d7.js 55d4b6a5.d43316e9.js 5618cd27.41942bab.js 5684.6c114feb.js 587bab49.d89d3f1a.js 5a97b260.ddccc8ab.js 5cd2cf3b.93fefbbe.js 5dee6bcf.e9ac0bd8.js 5eb7d76c.513964a7.js 600268cd.9fa3a9af.js 606da10e.97fb9b30.js 6731f580.0666951e.js 69d14787.86d53be9.js 6a3dfb58.e42899e7.js 6eac3654.1de59877.js 6fb182c9.5dc7674f.js 72c8d2e5.23d02825.js 73573f06.35703694.js 73dd2e7d.2cd46610.js 83f4e8b1.2ed19318.js 8467b7f5.e63714f6.js 8503b981.5155703f.js 866b8020.8de69c31.js 89b22097.4f607f1d.js 8d678e06.b21af63f.js 8f172175.e3897a35.js 8f2cd53c.239cc140.js 90371975.3121b66e.js 914beddb.79e03eb7.js 935512d6.eb975b05.js 935f2afb.51e7e5a3.js 94693fb1.d22608f8.js 96518a57.33df2d71.js 990145a0.110d3d9a.js 9ae8dec8.f6d028d6.js 9c5129ac.abb0499a.js 9ce8b3a5.788b4267.js 9d3a9e09.62cfbebb.js 9ea2d7c6.57b25d72.js 9fa3b5e9.a2307fdb.js a049fff7.891194c2.js a4ffaf38.c88505bf.js aa10d896.121a5d9f.js ad65d7aa.d1c6d769.js 
ad95a979.49fddc3c.js b49cb379.5b8c202d.js b731a8e8.f3e35356.js be697916.20cca151.js c4a8a0c6.155b1406.js c6461e50.475da86b.js cd5e1f2e.31549fbc.js ce00cef5.ad8d6f81.js ce09d5eb.6bbd23f3.js cf6f78a3.4301d528.js d73a239c.83b48252.js dc43b967.254ed92a.js dc48c437.39f4ac62.js df203c0f.9d93ac9c.js df684998.e50f1ec2.js e004505d.955905b1.js e3960513.9c906a5b.js e49b4f37.d17f6226.js e7c96db3.9f376a9f.js e8795368.5879001c.js eb9e3663.91b3fa55.js edcfdff1.3e0839eb.js ef4e2d6b.d0e69faf.js f116b37b.f49f5bef.js f4b21e4b.d170b1c3.js f7359c4a.c99bf356.js fab5a811.79a78897.js fc661e0a.4af9d01e.js fcf30ac0.ad40e181.js ff64dfed.f3c86078.js main.69116ce9.js main.69116ce9.js.LICENSE.txt runtime~main.7c845cc5.js blockchain Bitcoin index.html Cryptography index.html Ethereum CLI index.html Quorum index.html index.html Hyperledger Getting Start index.html index.html LBRY index.html NEAR CLI index.html SDK index.html index.html simple-exchange MLB1-contract index.html index.html Polygon PoS Bridge erc1155-pos-bridge index.html erc20-pos-bridge index.html erc721-pos-bridge index.html Smart Contracts index.html index.html index.html docusaurus index.html img logo.svg index.html linux access index.html automation index.html disk-file index.html index.html log-monitoring index.html multimedia index.html other index.html processes index.html shell-scripting index.html systemd index.html text index.html tools index.html network basic index.html dns index.html index.html other index.html ssh index.html raspberry pi index.html search-doc-1715577786063.json search-doc.json sitemap.xml tags access-control index.html ai index.html automatic index.html automation index.html backup index.html bash index.html basic index.html blockchain index.html cat index.html cheat index.html dd index.html dex index.html disk index.html dns index.html docusaurus index.html editor index.html fix index.html game index.html graphic index.html grep index.html gui index.html health index.html index.html ipv-6 index.html journalctl index.html langchain index.html less index.html linux index.html ln index.html log index.html manjaro index.html mlibre index.html monitor index.html monitoring index.html mount index.html near index.html network index.html open-vpn index.html permissions index.html port-forwarding index.html process index.html prompt index.html raspberry-pi index.html repair index.html restore index.html ring-buffer index.html rsync index.html script index.html service index.html sheet index.html shell index.html shutdown index.html socks index.html split index.html ssh index.html startup index.html swap index.html syslog index.html systemd index.html text index.html tools index.html tutorial index.html vpn index.html vscode index.html vulkan index.html windows-11 index.html wisdom-hub index.html xdg index.html zsh index.html vscode index.html docusaurus babel.config.js docs Health.md Lovely Tools.md ai generative ai.md langchain.md lobe-chat.md prompt.md readme.md blockchain Bitcoin btc.svg readme.md Cryptography readme.md Ethereum CLI.md Quorum readme.md assets eth.svg readme.md Hyperledger Getting Start.md readme.md LBRY readme.md NEAR CLI.md SDK.md readme.md simple-exchange MLB1-contract Cargo.toml README.md build.bat build.sh ft Cargo.toml src lib.rs rustfmt.toml test-contract-defi Cargo.toml src lib.rs tests sim main.rs no_macros.rs utils.rs with_macros.rs exchange-contract Cargo.toml src lib.rs readme.md web-ui babel.config.js package.json src App.js assets logo-black.svg logo-white.svg config.js global.css index.html index.js 
utils.js wallet login index.html Polygon PoS Bridge .eslintrc.js erc-1155-pos-brdige.js erc-20-pos-brdige.js erc-721-pos-brdige.js erc1155-pos-bridge.md erc20-pos-bridge.md erc721-pos-bridge.md package.json Smart Contracts .eslintrc.js bin Voter_abi.json main.js package.json readme.md readme.md readme.md docusaurus.md linux access.md automation.md disk-file.md log-monitoring.md multimedia.md other.md processes.md readme.md shell-scripting.md systemd.md text.md tools.md network basic.md dns.md other.md readme.md ssh.md vpn.md raspberry pi.md readme.md vscode.md docusaurus.config.js package-lock.json package.json sidebars.js src css custom.css static img logo.svg readme.md
Fungible Token (FT) =================== Example implementation of a [Fungible Token] contract which uses [near-contract-standards] and [simulation] tests. This is a contract-only example. [Fungible Token]: https://nomicon.io/Standards/FungibleToken/Core.html [near-contract-standards]: https://github.com/near/near-sdk-rs/tree/master/near-contract-standards [simulation]: https://github.com/near/near-sdk-rs/tree/master/near-sdk-sim Prerequisites ============= If you're using Gitpod, you can skip this step. 1. Make sure Rust is installed per the prerequisites in [`near-sdk-rs`](https://github.com/near/near-sdk-rs#pre-requisites) 2. Ensure `near-cli` is installed by running `near --version`. If not installed, install with: `npm install -g near-cli` ## Building To build run: ```bash ./build.sh ``` Using this contract =================== ### Quickest deploy You can build and deploy this smart contract to a development account. [Dev Accounts](https://docs.near.org/docs/concepts/account#dev-accounts) are auto-generated accounts to assist in developing and testing smart contracts. Please see the [Standard deploy](#standard-deploy) section for creating a more personalized account to deploy to. ```bash near dev-deploy --wasmFile res/fungible_token.wasm --helperUrl https://near-contract-helper.onrender.com ``` Behind the scenes, this is creating an account and deploying a contract to it. On the console, notice a message like: >Done deploying to dev-1234567890123 In this instance, the account is `dev-1234567890123`. A file has been created containing a key pair to the account, located at `neardev/dev-account`. To make the next few steps easier, we're going to set an environment variable containing this development account id and use that when copy/pasting commands. Run this command to the environment variable: ```bash source neardev/dev-account.env ``` You can tell if the environment variable is set correctly if your command line prints the account name after this command: ```bash echo $CONTRACT_NAME ``` The next command will initialize the contract using the `new` method: ```bash near call $CONTRACT_NAME new '{"owner_id": "'$CONTRACT_NAME'", "total_supply": "1000000000000000", "metadata": { "spec": "ft-1.0.0", "name": "Example Token Name", "symbol": "EXLT", "decimals": 8 }}' --accountId $CONTRACT_NAME ``` To get the fungible token metadata: ```bash near view $CONTRACT_NAME ft_metadata ``` ### Standard deploy This smart contract will get deployed to your NEAR account. For this example, please create a new NEAR account. Because NEAR allows the ability to upgrade contracts on the same account, initialization functions must be cleared. If you'd like to run this example on a NEAR account that has had prior contracts deployed, please use the `near-cli` command `near delete`, and then recreate it in Wallet. To create (or recreate) an account, please follow the directions on [NEAR Wallet](https://wallet.near.org/). Switch to `mainnet`. You can skip this step to use `testnet` as a default network. export NEAR_ENV=mainnet In the project root, log in to your newly created account with `near-cli` by following the instructions after this command: near login To make this tutorial easier to copy/paste, we're going to set an environment variable for your account id. 
In the command below, replace `MY_ACCOUNT_NAME` with the account name you just logged in with, including the `.near`: ID=MY_ACCOUNT_NAME You can tell the environment variable is set correctly if your command line prints the account name after this command: echo $ID Now we can deploy the compiled contract in this example to your account: near deploy --wasmFile res/fungible_token.wasm --accountId $ID The FT contract should be initialized before usage. You can read more about metadata at [nomicon.io](https://nomicon.io/Standards/FungibleToken/Metadata.html#reference-level-explanation). Modify the parameters and create a token: near call $ID new '{"owner_id": "'$ID'", "total_supply": "1000000000000000", "metadata": { "spec": "ft-1.0.0", "name": "Example Token Name", "symbol": "EXLT", "decimals": 8 }}' --accountId $ID Get metadata: near view $ID ft_metadata Transfer Example --------------- Let's set up an account to transfer some tokens to. This account will be a sub-account of the NEAR account you logged in with. near create-account bob.$ID --masterAccount $ID --initialBalance 1 Add a storage deposit for Bob's account: near call $ID storage_deposit '' --accountId bob.$ID --amount 0.00125 Check the balance of Bob's account; it should be `0` for now: near view $ID ft_balance_of '{"account_id": "'bob.$ID'"}' Transfer tokens to Bob from the contract that minted these fungible tokens; exactly 1 yoctoNEAR of deposit must be attached: near call $ID ft_transfer '{"receiver_id": "'bob.$ID'", "amount": "19"}' --accountId $ID --amount 0.000000000000000000000001 Check the balance of Bob again with the command from before and it will now return `19`. ## Testing As with many Rust libraries and contracts, there are tests in the main fungible token implementation at `ft/src/lib.rs`. Additionally, this project has [simulation] tests in `tests/sim`. Simulation tests allow testing cross-contract calls, which is crucial to ensuring that the `ft_transfer_call` function works properly. These simulation tests are the reason this project has the file structure it does. Note that the root project has a `Cargo.toml` which sets it up as a workspace. `ft` and `test-contract-defi` are both small & focused contract projects, the latter only existing for simulation tests. The root project imports `near-sdk-sim` and tests interaction between these contracts. You can run all these tests with one command: ```bash cargo test ``` If you want to run only simulation tests, you can use `cargo test simulate`, since all the simulation tests include "simulate" in their names. ## Notes - The maximum balance value is limited by U128 (`2**128 - 1`). - JSON calls should pass U128 as a base-10 string. E.g. "100". - This does not include escrow functionality, as `ft_transfer_call` provides a superior approach. An escrow system can, of course, be added as a separate contract or additional functionality within this contract. ## No AssemblyScript? [near-contract-standards] is currently Rust-only. We strongly suggest using this library to create your own Fungible Token contract to ensure it works as expected. Someday NEAR core or community contributors may provide a similar library for AssemblyScript, at which point this example will be updated to include both a Rust and AssemblyScript version. ## Contributing When making changes to the files in `ft` or `test-contract-defi`, remember to use `./build.sh` to compile all contracts and copy the output to the `res` folder. If you forget this, **the simulation tests will not use the latest versions**. 
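A minimal sketch of the recommended rebuild-and-test loop, assuming it is run from the repository root (the commands themselves come from this README):

```bash
# Recompile both contracts and copy the .wasm output into res/ so the
# simulation tests exercise the latest code.
./build.sh

# Run the unit tests and the simulation tests.
cargo test

# Run only the simulation tests (their names all contain "simulate").
cargo test simulate
```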
Note that if the `rust-toolchain` file in this repository changes, please make sure to update the `.gitpod.Dockerfile` to explicitly specify using that as default as well.
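Since the notes above recommend `ft_transfer_call` over a separate escrow contract, here is a hedged sketch of what such a call could look like with `near-cli`. The receiver account `defi.$ID` is hypothetical (any contract implementing `ft_on_transfer`, such as the bundled `test-contract-defi`, would do), and the `msg` payload is whatever that receiver expects:

```bash
# Hypothetical example: send 10 tokens to a receiver contract and let it react
# via ft_on_transfer. The amount is a U128 passed as a base-10 string, exactly
# 1 yoctoNEAR must be attached, and extra gas is needed for the cross-contract
# call. The receiver must also be registered via storage_deposit beforehand.
near call $ID ft_transfer_call \
  '{"receiver_id": "defi.'$ID'", "amount": "10", "msg": ""}' \
  --accountId bob.$ID \
  --amount 0.000000000000000000000001 \
  --gas 100000000000000
```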
Hsien-HsiuLiao_near-crossword-ex
.gitpod.yml Cargo.toml README.md build.sh frontend README.md package.json public index.html src App.css App.js config.js hardcoded-data.js index.css index.js utils.js src lib.rs
https://www.near-sdk.io/zero-to-hero/basics/overview https://stackoverflow.com/questions/56227766/why-must-a-wasm-library-in-rust-set-the-crate-type-to-cdylib Make the build script executable with `chmod 744 build.sh`, then run `./build.sh`; expect to see the compiled Wasm file copied to the res folder, instead of buried in the default folder structure Rust sets up. In the frontend folder, run `npm install react-crossword-near`, `npm install js-sha256`, and `npm install near-api-js`, then start the app with `env CONTRACT_NAME=nearcrossword.hliao.testnet npm run start` (a consolidated sketch of these steps follows the Create React App notes below). # Getting Started with Create React App This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app). ## Available Scripts In the project directory, you can run: ### `npm start` Runs the app in the development mode.\ Open [http://localhost:3000](http://localhost:3000) to view it in your browser. The page will reload when you make changes.\ You may also see any lint errors in the console. ### `npm test` Launches the test runner in the interactive watch mode.\ See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. ### `npm run build` Builds the app for production to the `build` folder.\ It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.\ Your app is ready to be deployed! See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. ### `npm run eject` **Note: this is a one-way operation. Once you `eject`, you can't go back!** If you aren't satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you're on your own. You don't have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn't feel obligated to use this feature. However we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it. ## Learn More You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). To learn React, check out the [React documentation](https://reactjs.org/). 
### Code Splitting This section has moved here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting) ### Analyzing the Bundle Size This section has moved here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size) ### Making a Progressive Web App This section has moved here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app) ### Advanced Configuration This section has moved here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration) ### Deployment This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment) ### `npm run build` fails to minify This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify)
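Pulling the build and run notes at the top of this README together, a minimal sketch of the local development loop could look like the following; the `frontend` directory name comes from the repository layout and `nearcrossword.hliao.testnet` is the contract account used in the notes above:

```bash
# Build the contract and copy the compiled Wasm into res/.
chmod 744 build.sh
./build.sh

# Install the frontend dependencies listed above.
cd frontend
npm install react-crossword-near js-sha256 near-api-js

# Point the frontend at the deployed crossword contract and start it.
env CONTRACT_NAME=nearcrossword.hliao.testnet npm run start
```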
ilblackdragon_near-contribute
Cargo.toml README.md build.sh contract Cargo.toml src contribution.rs contributor.rs dec_serde.rs entity.rs events.rs lib.rs dirty_deploy.sh seed.sh seed Cargo.toml src main.rs widgets README.md package-lock.json package.json types.d.ts
## Web3 Combinator UI # NEAR Contribute This smart contract serves as an extension to SocialDB to maintain the entity <> contributor relations. One can relate this to the LinkedIn graph of employment in Web2. ## Project structure This repository contains multiple key directories, namely: - **contract:** this is where the smart contract code lives - **seed:** this is a script for ingesting seed data into the deployed smart contract - **widgets:** this is where the front-end/BOS widget code lives ## Specification The `Entity` is the core object that augments the information in `SocialDB` with the specific context that this AccountId represents an entity. `EntityKind` represents what kind of entity this is: - Project - A Web3 project that can exist independently of legal organizations - Organization - A legal organization - DAO - Something between a project and an organization, managed by people Methods: | Function | Description | Permissions | | - | - | - | | `set_moderator(moderator_id: AccountId)` | Sets new moderator account | Moderator | | `set_entity(account_id: AccountId, entity: Entity)` | Sets full information about entity for given account | Moderator | | `add_entity(account_id: AccountId, kind: EntityKind, start_date: Timestamp)` | Add new entity of given kind (project, DAO, organization) and start date. Automatically adds the creator as a contributor with full permissions to edit | Anyone | | `admin_add_entity(account_id: AccountId, founder_id: AccountId, name: String, kind: EntityKind, start_date: Timestamp)` | Adds a new entity like the previous function, but instead of using the predecessor account as founder, uses `founder_id` | Moderator | | `get_entities(from: Option<U64>, limit: Option<U64>)` | Fetches all the entities from the state. (Optionally paginates if params are given) | Anyone | | `get_entity(account_id: AccountId)` | Gets details about a specific entity with a given account ID | Anyone | | `get_admin_entities(account_id: AccountId)` | Fetches all the entities that a given account ID is admin of | Anyone | | `check_is_entity(account_id: AccountId)` | Checks if the given account ID has an entity registered to it | Anyone | | `invite_contributor(entity_id: AccountId, contributor_id: AccountId, description: String, contribution_type: ContributionType, start_date: U64, permissions: HashSet<Permission>)` | Invites a contributor to an entity | Permission::Manager or above | | `accept_invite(account_id: AccountId)` | Accept the invite for contributing to entity with given account ID | Contributor who the invite is sent to | | `reject_invite(account_id: AccountId)` | Reject the invite for contributing to entity with given account ID | Contributor who the invite is sent to | | `get_entity_invites(account_id: AccountId)` | Fetches all the invites sent by the entity with given account ID | Anyone | | `get_contributor_invites(account_id: AccountId)` | Fetches all the invites sent to the contributor with given account ID | Anyone | | `get_invite(entity_id: AccountId, contributor_id: AccountId)` | Gets details about a specific invite with a given entity and contributor IDs | Anyone | | `request_contribution(entity_id: AccountId, description: String)` | Request to contribute to given entity. 
| Anyone | | `register(contribution_types: HashSet<ContributionType>, skills: HashSet<String>, resume: String)` | Register as a contributor using the provided details | Anyone | | `edit_contributor(contributor: Contributor)` | Edit your contributor profile with all the details | Anyone | | `get_contributors()` | Fetch all the contributors stored in the state | Anyone | | `check_is_contributor(account_id: AccountId)` | Check if the given account ID is registered as a contributor | Anyone | | `get_contributor(account_id: AccountId)` | Get the details of a contributor with the given account ID | Anyone | | `get_contribution_types()` | List out all the contribution types available in the contract | Anyone | | `post_contribution_need(entity_id: AccountId, description: String, contribution_type: ContributionType)` | Create a new need for given entity with a description and type | Permission::Manager or above | | `set_contribution_need(entity_id: AccountId, cid: String, need: ContributionNeed)` | Update a need for given entity | Permission::Manager or above | | `get_contribution_needs()` | Fetch all contribution needs | Anyone | | `get_entity_contribution_needs(account_id: AccountId)` | Fetch all contribution needs from the given entity | Anyone | | `get_admin_contribution_needs(account_id: AccountId)` | Fetch all contribution needs the given account can manage | Anyone | | `get_contribution_need(account_id: AccountId, cid: String)` | Get the details about the given need | Anyone | | `check_if_need_proposed(account_id: AccountId, cid: String)` | Check if the given need has a proposal from the predecessor account | Anyone | | `request_contribution(entity_id: AccountId, description: String, contribution_type: ContributionType, need: Option<String>)` | Propose a contribution to an entity as a contributor | Anyone | | `accept_contribution(entity_id: AccountId, contributor_id: AccountId, description: Option<String>, start_date: Option<U64>)` | Accept a contribution proposal/request. (Optionally update the description and start date) | Permission::Manager or above | | `reject_contribution(entity_id: AccountId, contributor_id: AccountId)` | Reject a contribution proposal/request | Permission::Manager or above | | `finish_contribution(entity_id: AccountId, contributor_id: AccountId, end_date: U64)` | Mark a contribution as ended and add an end date | Permission::Manager or above | | `get_conrtibutor_contributions(account_id: AccountId)` | Fetch all the contributions this contributor is participating in | Anyone | | `get_entity_contributions(account_id: AccountId)` | Fetch all the contributions this entity is participating in | Anyone | | `get_need_contributions(account_id: AccountId, cid: String)` | Fetch all contributions for the given need | Anyone | | `get_contribution(entity_id: AccountId, contributor_id: AccountId)` | Get the details about the given contribution | Anyone | | `get_entity_contribution_requests(account_id: AccountId)` | Fetch all the contribution requests | Anyone | | `get_contributor_contribution_requests(account_id: AccountId)` | Fetch all the contribution requests for the given contributor | Anyone | | `get_admin_contribution_requests(account_id: AccountId)` | Fetch all the contribution requests the given account can manage | Anyone | | `get_need_contribution_requests(account_id: AccountId, cid: String)` | Fetch all contribution requests for the given need | Anyone | | `get_conrtibution_request(entity_id: AccountId, contributor_id: AccountId)` | Get the details about the given request | Anyone |
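Most of these methods can be exercised directly with `near-cli` once the contract is deployed. A minimal sketch follows, assuming a hypothetical contract account `contribute.testnet` and a signer `alice.testnet`; the argument shapes follow the signatures in the table above, while the exact JSON serialization of `EntityKind`, `Timestamp`, and `U64` values depends on the contract's types:

```bash
# Hypothetical accounts for illustration only.
CONTRACT=contribute.testnet
ME=alice.testnet

# Register a new entity; the caller is automatically added as a contributor
# with full permissions. "Project" and the numeric start_date assume the
# default serde serialization of EntityKind and Timestamp.
near call $CONTRACT add_entity \
  '{"account_id": "'$ME'", "kind": "Project", "start_date": 0}' \
  --accountId $ME

# Page through the registered entities (U64 values are passed as strings).
near view $CONTRACT get_entities '{"from": "0", "limit": "10"}'

# Check whether an account has an entity registered to it.
near view $CONTRACT check_is_entity '{"account_id": "'$ME'"}'
```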
iOchando_fork_musicfeast_front
README.md assets sources icons account-mobile.svg account.svg advance-track.svg back-track.svg cancel.svg chats copy.svg chats-active.svg chats.svg close.svg faq copy.svg faq-active.svg faq.svg info.svg instagram.svg lupa.svg market-active.svg market.svg menu-active copy.svg menu-active-sidebar copy.svg menu-active-sidebar.svg menu-active.svg menu.svg pause-track.svg play-track.svg records.svg settings-active.svg settings.svg stats copy.svg stats-active.svg stats.svg success.svg twitch.svg twitter.svg logos discord.svg instagram.svg logo-mobile.svg logo.svg my-near-wallet-icon.svg near-orange.svg near-wallet-icon.svg near.svg ramper.svg sender-icon.svg twitter.svg youtube.svg i18n en.js es.js jsconfig.json middleware authenticated.js route-validator.js mixins computeds.js isMobile.js styles.js nuxt.config.js package.json plugins apexchart.js axios.js directives.js google-maps.js injects.js polyfills.js vue-debounce.js youtube.client.js serverMiddleware botdiscord config dataThegraph.js postgres.js index.js firstTest index.js services near-api.js ramper-api.js static fonts bebas_neue About.md store README.md index.js
# Music Feast ## Build Setup ```bash # install dependencies $ npm install # serve with hot reload at localhost:3000 $ npm run dev # build for production and launch server $ npm run build $ npm run start # generate static project $ npm run generate ``` For detailed explanation on how things work, check out the [documentation](https://nuxtjs.org). ## Special Directories You can create the following extra directories, some of which have special behaviors. Only `pages` is required; you can delete them if you don't want to use their functionality. ### `assets` The assets directory contains your uncompiled assets such as Stylus or Sass files, images, or fonts. More information about the usage of this directory in [the documentation](https://nuxtjs.org/docs/2.x/directory-structure/assets). ### `components` The components directory contains your Vue.js components. Components make up the different parts of your page and can be reused and imported into your pages, layouts and even other components. More information about the usage of this directory in [the documentation](https://nuxtjs.org/docs/2.x/directory-structure/components). ### `layouts` Layouts are a great help when you want to change the look and feel of your Nuxt app, whether you want to include a sidebar or have distinct layouts for mobile and desktop. More information about the usage of this directory in [the documentation](https://nuxtjs.org/docs/2.x/directory-structure/layouts). ### `pages` This directory contains your application views and routes. Nuxt will read all the `*.vue` files inside this directory and setup Vue Router automatically. More information about the usage of this directory in [the documentation](https://nuxtjs.org/docs/2.x/get-started/routing). ### `plugins` The plugins directory contains JavaScript plugins that you want to run before instantiating the root Vue.js Application. This is the place to add Vue plugins and to inject functions or constants. Every time you need to use `Vue.use()`, you should create a file in `plugins/` and add its path to plugins in `nuxt.config.js`. More information about the usage of this directory in [the documentation](https://nuxtjs.org/docs/2.x/directory-structure/plugins). ### `static` This directory contains your static files. Each file inside this directory is mapped to `/`. Example: `/static/robots.txt` is mapped as `/robots.txt`. More information about the usage of this directory in [the documentation](https://nuxtjs.org/docs/2.x/directory-structure/static). ### `store` This directory contains your Vuex store files. Creating a file in this directory automatically activates Vuex. More information about the usage of this directory in [the documentation](https://nuxtjs.org/docs/2.x/directory-structure/store). # STORE **This directory is not required, you can delete it if you don't want to use it.** This directory contains your Vuex Store files. Vuex Store option is implemented in the Nuxt.js framework. Creating a file in this directory automatically activates the option in the framework. More information about the usage of this directory in [the documentation](https://nuxtjs.org/guide/vuex-store).
mhmtacikel_ShortFilmFunding
App.js README.md as-pect.config.js asconfig.json assembly as_types.d.ts index.ts model.ts tsconfig.json config.ts index.html neardev dev-account.env node_modules .bin acorn.cmd asb.cmd asbuild.cmd asc.cmd asinit.cmd asp.cmd aspect.cmd assemblyscript-build.cmd detect-libc.cmd eslint.cmd esparse.cmd esvalidate.cmd hid-showdevices.cmd is-ci.cmd is-docker.cmd js-yaml.cmd mkdirp.cmd mustache.cmd ncp.cmd near-vm-as.cmd near-vm.cmd near.cmd nearley-railroad.cmd nearley-test.cmd nearley-unparse.cmd nearleyc.cmd node-gyp-build-optional.cmd node-gyp-build-test.cmd node-gyp-build.cmd node-which.cmd prebuild-install.cmd rc.cmd rimraf.cmd semver.cmd sha.js sha.js.cmd uuid.cmd wasm-opt.cmd @as-covers assembly CONTRIBUTING.md README.md index.ts package.json tsconfig.json core CONTRIBUTING.md README.md package.json glue README.md lib index.d.ts index.js package.json transform README.md lib index.d.ts index.js util.d.ts util.js node_modules assemblyscript README.md cli README.md asc.d.ts asc.js asc.json shim README.md fs.js path.js process.js transform.d.ts transform.js util colors.d.ts colors.js find.d.ts find.js mkdirp.d.ts mkdirp.js options.d.ts options.js utf8.d.ts utf8.js dist asc.js assemblyscript.d.ts assemblyscript.js sdk.js index.d.ts index.js lib loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json rtrace README.md bin rtplot.js index.d.ts index.js package.json umd index.d.ts index.js package.json node_modules .bin wasm-opt.cmd package-lock.json package.json std README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings Date.ts Math.ts Reflect.ts asyncify.ts console.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts polyfills.ts process.ts reference.ts regexp.ts rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts set.ts shared feature.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts uri.ts util casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts uri.ts vector.ts wasi index.ts portable.json portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json binaryen README.md index.d.ts package-lock.json package.json wasm.d.ts visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js toString.d.ts toString.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformRange.d.ts transformRange.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js node_modules .bin asc.cmd asinit.cmd package.json tsconfig.json package.json @as-pect assembly README.md assembly index.ts internal Actual.ts Expectation.ts Expected.ts Reflect.ts ReflectedValueType.ts Test.ts assert.ts call.ts comparison toIncludeComparison.ts toIncludeEqualComparison.ts log.ts noOp.ts package.json types as-pect.d.ts as-pect.portable.d.ts env.d.ts core README.md lib as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js reporter 
CombinationReporter.d.ts CombinationReporter.js EmptyReporter.d.ts EmptyReporter.js IReporter.d.ts IReporter.js SummaryReporter.d.ts SummaryReporter.js VerboseReporter.d.ts VerboseReporter.js test IWarning.d.ts IWarning.js TestContext.d.ts TestContext.js TestNode.d.ts TestNode.js transform assemblyscript.d.ts assemblyscript.js createAddReflectedValueKeyValuePairsMember.d.ts createAddReflectedValueKeyValuePairsMember.js createGenericTypeParameter.d.ts createGenericTypeParameter.js createStrictEqualsMember.d.ts createStrictEqualsMember.js emptyTransformer.d.ts emptyTransformer.js hash.d.ts hash.js index.d.ts index.js util IAspectExports.d.ts IAspectExports.js IWriteable.d.ts IWriteable.js ReflectedValue.d.ts ReflectedValue.js TestNodeType.d.ts TestNodeType.js rTrace.d.ts rTrace.js stringifyReflectedValue.d.ts stringifyReflectedValue.js timeDifference.d.ts timeDifference.js wasmTools.d.ts wasmTools.js package.json csv-reporter index.ts lib as-pect.csv-reporter.amd.d.ts as-pect.csv-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json json-reporter index.ts lib as-pect.json-reporter.amd.d.ts as-pect.json-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json snapshots __tests__ snapshot.spec.ts jest.config.js lib Snapshot.d.ts Snapshot.js SnapshotDiff.d.ts SnapshotDiff.js SnapshotDiffResult.d.ts SnapshotDiffResult.js as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js parser grammar.d.ts grammar.js node_modules .bin nearley-railroad.cmd nearley-test.cmd nearley-unparse.cmd nearleyc.cmd package.json src Snapshot.ts SnapshotDiff.ts SnapshotDiffResult.ts index.ts parser grammar.ts tsconfig.json @assemblyscript loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json @babel code-frame README.md lib index.js package.json helper-validator-identifier README.md lib identifier.js index.js keyword.js package.json scripts generate-identifier-regex.js highlight README.md lib index.js node_modules ansi-styles index.js package.json readme.md chalk index.js package.json readme.md templates.js types index.d.ts color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name .eslintrc.json README.md index.js package.json test.js escape-string-regexp index.js package.json readme.md has-flag index.js package.json readme.md supports-color browser.js index.js package.json readme.md package.json @eslint eslintrc CHANGELOG.md README.md conf config-schema.js environments.js eslint-all.js eslint-recommended.js lib cascading-config-array-factory.js config-array-factory.js config-array config-array.js config-dependency.js extracted-config.js ignore-pattern.js index.js override-tester.js flat-compat.js index.js shared ajv.js config-ops.js config-validator.js deprecation-warnings.js naming.js relative-module-resolver.js types.js node_modules .bin js-yaml.cmd package.json @humanwhocodes config-array README.md api.js package.json object-schema .eslintrc.js .github workflows nodejs-test.yml release-please.yml CHANGELOG.md README.md package.json src index.js merge-strategy.js object-schema.js validation-strategy.js tests merge-strategy.js object-schema.js validation-strategy.js @jest environment build index.d.ts index.js package.json fake-timers build index.d.ts index.js legacyFakeTimers.d.ts legacyFakeTimers.js modernFakeTimers.d.ts modernFakeTimers.js package.json types build Circus.d.ts Circus.js Config.d.ts Config.js Global.d.ts Global.js TestResult.d.ts TestResult.js Transform.d.ts Transform.js index.d.ts index.js 
package.json @ledgerhq devices README.md lib-es ble receiveAPDU.js sendAPDU.js hid-framing.js index.js scrambling.js lib ble receiveAPDU.js sendAPDU.js hid-framing.js index.js scrambling.js node_modules .bin semver.cmd package.json src ble receiveAPDU.js sendAPDU.js hid-framing.js index.js scrambling.js errors README.md dist helpers.d.ts index.cjs.js index.d.ts index.js package.json src helpers.ts index.ts hw-transport-node-hid-noevents README.md lib-es TransportNodeHid.d.ts TransportNodeHid.js lib TransportNodeHid.d.ts TransportNodeHid.js node_modules .bin hid-showdevices.cmd @ledgerhq devices README.md lib-es ble receiveAPDU.d.ts receiveAPDU.js sendAPDU.d.ts sendAPDU.js hid-framing.d.ts hid-framing.js index.d.ts index.js scrambling.d.ts scrambling.js lib ble receiveAPDU.d.ts receiveAPDU.js sendAPDU.d.ts sendAPDU.js hid-framing.d.ts hid-framing.js index.d.ts index.js scrambling.d.ts scrambling.js node_modules .bin semver.cmd package.json src ble receiveAPDU.ts sendAPDU.ts hid-framing.ts index.ts scrambling.ts tests identifyTargetId.test.js tsconfig.json errors README.md lib-es helpers.d.ts helpers.js index.d.ts index.js lib helpers.d.ts helpers.js index.d.ts index.js package.json src helpers.ts index.ts tsconfig.json hw-transport README.md lib-es Transport.d.ts Transport.js lib Transport.d.ts Transport.js package.json src Transport.ts tsconfig.json logs README.md lib-es index.d.ts index.js lib index.d.ts index.js package.json src index.ts tsconfig.json package.json src TransportNodeHid.ts tsconfig.json hw-transport-node-hid README.md lib-es TransportNodeHid.d.ts TransportNodeHid.js listenDevices.d.ts listenDevices.js lib TransportNodeHid.d.ts TransportNodeHid.js listenDevices.d.ts listenDevices.js node_modules .bin hid-showdevices.cmd @ledgerhq devices README.md lib-es ble receiveAPDU.d.ts receiveAPDU.js sendAPDU.d.ts sendAPDU.js hid-framing.d.ts hid-framing.js index.d.ts index.js scrambling.d.ts scrambling.js lib ble receiveAPDU.d.ts receiveAPDU.js sendAPDU.d.ts sendAPDU.js hid-framing.d.ts hid-framing.js index.d.ts index.js scrambling.d.ts scrambling.js node_modules .bin semver.cmd package.json src ble receiveAPDU.ts sendAPDU.ts hid-framing.ts index.ts scrambling.ts tests identifyTargetId.test.js tsconfig.json errors README.md lib-es helpers.d.ts helpers.js index.d.ts index.js lib helpers.d.ts helpers.js index.d.ts index.js package.json src helpers.ts index.ts tsconfig.json hw-transport README.md lib-es Transport.d.ts Transport.js lib Transport.d.ts Transport.js package.json src Transport.ts tsconfig.json logs README.md lib-es index.d.ts index.js lib index.d.ts index.js package.json src index.ts tsconfig.json package.json src TransportNodeHid.ts listenDevices.ts tsconfig.json hw-transport-u2f README.md lib-es TransportU2F.js lib TransportU2F.js package.json src TransportU2F.js hw-transport-webhid README.md flow webhid.js lib-es TransportWebHID.js lib TransportWebHID.js package.json src TransportWebHID.js hw-transport-webusb README.md flow webusb.js lib-es TransportWebUSB.js webusb.js lib TransportWebUSB.js webusb.js package.json src TransportWebUSB.js webusb.js hw-transport README.md lib-es Transport.js lib Transport.js package.json src Transport.js logs README.md lib-es index.js lib index.js package.json src index.js @sindresorhus is dist index.d.ts index.js package.json readme.md @sinonjs commons CHANGES.md README.md lib called-in-order.js called-in-order.test.js class-name.js class-name.test.js deprecated.js deprecated.test.js every.js every.test.js function-name.js 
function-name.test.js global.js global.test.js index.js index.test.js order-by-first-call.js order-by-first-call.test.js prototypes README.md array.js copy-prototype.js function.js index.js index.test.js map.js object.js set.js string.js type-of.js type-of.test.js value-to-string.js value-to-string.test.js package.json types called-in-order.d.ts class-name.d.ts deprecated.d.ts every.d.ts function-name.d.ts global.d.ts index.d.ts order-by-first-call.d.ts prototypes array.d.ts copy-prototype.d.ts function.d.ts index.d.ts map.d.ts object.d.ts set.d.ts string.d.ts type-of.d.ts value-to-string.d.ts fake-timers CHANGELOG.md README.md package.json src fake-timers-src.js @szmarczak http-timer README.md package.json source index.js @types istanbul-lib-coverage README.md index.d.ts package.json istanbul-lib-report README.md index.d.ts package.json istanbul-reports README.md index.d.ts package.json node README.md assert.d.ts assert strict.d.ts async_hooks.d.ts buffer.d.ts child_process.d.ts cluster.d.ts console.d.ts constants.d.ts crypto.d.ts dgram.d.ts diagnostics_channel.d.ts dns.d.ts dns promises.d.ts domain.d.ts events.d.ts fs.d.ts fs promises.d.ts globals.d.ts globals.global.d.ts http.d.ts http2.d.ts https.d.ts index.d.ts inspector.d.ts module.d.ts net.d.ts os.d.ts package.json path.d.ts perf_hooks.d.ts process.d.ts punycode.d.ts querystring.d.ts readline.d.ts repl.d.ts stream.d.ts stream consumers.d.ts promises.d.ts web.d.ts string_decoder.d.ts timers.d.ts timers promises.d.ts tls.d.ts trace_events.d.ts tty.d.ts url.d.ts util.d.ts v8.d.ts vm.d.ts wasi.d.ts worker_threads.d.ts zlib.d.ts stack-utils README.md index.d.ts package.json yargs-parser README.md index.d.ts package.json yargs README.md helpers.d.ts index.d.ts package.json yargs.d.ts acorn-jsx README.md index.d.ts index.js node_modules .bin acorn.cmd package.json xhtml.js acorn CHANGELOG.md README.md dist acorn.d.ts acorn.js acorn.mjs.d.ts bin.js package.json agent-base README.md dist src index.d.ts index.js promisify.d.ts promisify.js package.json src index.ts promisify.ts ajv .tonic_example.js README.md dist ajv.bundle.js ajv.min.js lib ajv.d.ts ajv.js cache.js compile async.js equal.js error_classes.js formats.js index.js resolve.js rules.js schema_obj.js ucs2length.js util.js data.js definition_schema.js dotjs README.md _limit.js _limitItems.js _limitLength.js _limitProperties.js allOf.js anyOf.js comment.js const.js contains.js custom.js dependencies.js enum.js format.js if.js index.js items.js multipleOf.js not.js oneOf.js pattern.js properties.js propertyNames.js ref.js required.js uniqueItems.js validate.js keyword.js refs data.json json-schema-draft-04.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json package.json scripts .eslintrc.yml bundle.js compile-dots.js ansi-align CHANGELOG.md README.md index.js package.json ansi-colors README.md index.js package.json symbols.js types index.d.ts ansi-regex index.d.ts index.js package.json readme.md ansi-styles index.d.ts index.js package.json readme.md aproba README.md index.js package.json are-we-there-yet CHANGES.md README.md index.js node_modules readable-stream .travis.yml CONTRIBUTING.md GOVERNANCE.md README.md doc wg-meetings 2015-01-30.md duplex-browser.js duplex.js lib _stream_duplex.js _stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js internal streams BufferList.js destroy.js stream-browser.js stream.js package.json passthrough.js readable-browser.js readable.js transform.js writable-browser.js writable.js 
safe-buffer README.md index.d.ts index.js package.json string_decoder .travis.yml README.md lib string_decoder.js package.json package.json tracker-base.js tracker-group.js tracker-stream.js tracker.js argparse CHANGELOG.md README.md index.js lib action.js action append.js append constant.js count.js help.js store.js store constant.js false.js true.js subparsers.js version.js action_container.js argparse.js argument error.js exclusive.js group.js argument_parser.js const.js help added_formatters.js formatter.js namespace.js utils.js package.json as-bignum README.md assembly __tests__ as-pect.d.ts i128.spec.as.ts safe_u128.spec.as.ts u128.spec.as.ts u256.spec.as.ts utils.ts fixed fp128.ts fp256.ts index.ts safe fp128.ts fp256.ts types.ts globals.ts index.ts integer i128.ts i256.ts index.ts safe i128.ts i256.ts i64.ts index.ts u128.ts u256.ts u64.ts u128.ts u256.ts tsconfig.json utils.ts package.json asbuild README.md dist cli.d.ts cli.js commands build.d.ts build.js fmt.d.ts fmt.js index.d.ts index.js init cmd.d.ts cmd.js files asconfigJson.d.ts asconfigJson.js aspecConfig.d.ts aspecConfig.js assembly_files.d.ts assembly_files.js eslintConfig.d.ts eslintConfig.js gitignores.d.ts gitignores.js index.d.ts index.js indexJs.d.ts indexJs.js packageJson.d.ts packageJson.js test_files.d.ts test_files.js index.d.ts index.js interfaces.d.ts interfaces.js run.d.ts run.js test.d.ts test.js index.d.ts index.js main.d.ts main.js utils.d.ts utils.js index.js node_modules .bin asc.cmd asinit.cmd asp.cmd aspect.cmd eslint.cmd @as-pect assembly README.md assembly index.ts internal Actual.ts Expectation.ts Expected.ts Reflect.ts ReflectedValueType.ts Test.ts assert.ts call.ts comparison toIncludeComparison.ts toIncludeEqualComparison.ts log.ts noOp.ts package.json types as-pect.d.ts as-pect.portable.d.ts env.d.ts cli README.md init as-pect.config.js env.d.ts example.spec.ts init-types.d.ts portable-types.d.ts lib as-pect.cli.amd.d.ts as-pect.cli.amd.js help.d.ts help.js index.d.ts index.js init.d.ts init.js portable.d.ts portable.js run.d.ts run.js test.d.ts test.js types.d.ts types.js util CommandLineArg.d.ts CommandLineArg.js IConfiguration.d.ts IConfiguration.js asciiArt.d.ts asciiArt.js collectReporter.d.ts collectReporter.js getTestEntryFiles.d.ts getTestEntryFiles.js removeFile.d.ts removeFile.js strings.d.ts strings.js writeFile.d.ts writeFile.js worklets ICommand.d.ts ICommand.js compiler.d.ts compiler.js package.json assemblyscript README.md cli README.md asc.d.ts asc.js asc.json shim README.md fs.js path.js process.js transform.d.ts transform.js util colors.d.ts colors.js find.d.ts find.js mkdirp.d.ts mkdirp.js options.d.ts options.js utf8.d.ts utf8.js dist asc.js assemblyscript.d.ts assemblyscript.js sdk.js index.d.ts index.js lib loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json rtrace README.md bin rtplot.js index.d.ts index.js package.json umd index.d.ts index.js package.json node_modules .bin wasm-opt.cmd package-lock.json package.json std README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings Date.ts Math.ts Reflect.ts asyncify.ts console.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts polyfills.ts process.ts reference.ts regexp.ts rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts 
set.ts shared feature.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts uri.ts util casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts uri.ts vector.ts wasi index.ts portable.json portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json binaryen README.md index.d.ts package-lock.json package.json wasm.d.ts camelcase index.d.ts index.js package.json readme.md cliui CHANGELOG.md LICENSE.txt README.md index.js package.json wrap-ansi index.js package.json readme.md y18n CHANGELOG.md README.md index.js package.json yargs-parser CHANGELOG.md LICENSE.txt README.md index.js lib tokenize-arg-string.js package.json yargs CHANGELOG.md README.md build lib apply-extends.d.ts apply-extends.js argsert.d.ts argsert.js command.d.ts command.js common-types.d.ts common-types.js completion-templates.d.ts completion-templates.js completion.d.ts completion.js is-promise.d.ts is-promise.js levenshtein.d.ts levenshtein.js middleware.d.ts middleware.js obj-filter.d.ts obj-filter.js parse-command.d.ts parse-command.js process-argv.d.ts process-argv.js usage.d.ts usage.js validation.d.ts validation.js yargs.d.ts yargs.js yerror.d.ts yerror.js index.js locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json yargs.js package.json ascii-table .travis.yml ascii-table.js ascii-table.min.js bower.json example simple.js simple.txt index.js package.json readme.md test.js assemblyscript-json .eslintrc.js .travis.yml README.md assembly JSON.ts decoder.ts encoder.ts index.ts tsconfig.json util index.ts index.js package.json temp-docs README.md classes decoderstate.md json.arr.md json.bool.md json.float.md json.integer.md json.null.md json.num.md json.obj.md json.str.md json.value.md jsondecoder.md jsonencoder.md jsonhandler.md throwingjsonhandler.md modules json.md assemblyscript-regex .eslintrc.js .github workflows benchmark.yml release.yml test.yml README.md as-pect.config.js asconfig.empty.json asconfig.json assembly __spec_tests__ generated.spec.ts __tests__ alterations.spec.ts as-pect.d.ts boundary-assertions.spec.ts capture-group.spec.ts character-classes.spec.ts character-sets.spec.ts characters.ts empty.ts quantifiers.spec.ts range-quantifiers.spec.ts regex.spec.ts utils.ts char.ts env.ts index.ts nfa matcher.ts nfa.ts types.ts walker.ts parser node.ts parser.ts string-iterator.ts walker.ts regexp.ts tsconfig.json util.ts benchmark benchmark.js package.json spec test-generator.js ts index.ts tsconfig.json assemblyscript-temporal .github workflows node.js.yml release.yml .vscode launch.json README.md as-pect.config.js asconfig.empty.json asconfig.json assembly __tests__ README.md as-pect.d.ts date.spec.ts duration.spec.ts empty.ts plaindate.spec.ts plaindatetime.spec.ts plainmonthday.spec.ts plaintime.spec.ts plainyearmonth.spec.ts timezone.spec.ts zoneddatetime.spec.ts constants.ts date.ts duration.ts enums.ts env.ts index.ts instant.ts now.ts plaindate.ts plaindatetime.ts plainmonthday.ts plaintime.ts plainyearmonth.ts timezone.ts tsconfig.json tz __tests__ index.spec.ts rule.spec.ts zone.spec.ts iana.ts index.ts rule.ts zone.ts utils.ts zoneddatetime.ts development.md package.json tzdb README.md iana theory.html zoneinfo2tdf.pl assemblyscript README.md bin asc.js asinit.js dist asc.d.ts asc.generated.d.ts 
asc.js assemblyscript.d.ts assemblyscript.generated.d.ts assemblyscript.js transform.d.ts transform.js web.html lib README.md binaryen.d.ts binaryen.js node_modules .bin wasm-opt.cmd wasm2js.cmd long README.md index.d.ts index.js package.json umd index.d.ts index.js package.json package.json std README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings asyncify.ts dom.ts node.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts performance.ts polyfills.ts process.ts reference.ts regexp.ts rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts set.ts shared feature.ts runtime.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts uri.ts util bytes.ts casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts uri.ts vector.ts wasi index.ts portable.json portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json util README.md browser fs.js module.js path.js process.js url.js cpu.d.ts cpu.js find.d.ts find.js node.d.ts node.js options.d.ts options.js terminal.d.ts terminal.js text.d.ts text.js tsconfig.json web.d.ts web.js astral-regex index.d.ts index.js package.json readme.md axios CHANGELOG.md README.md UPGRADE_GUIDE.md dist axios.js axios.min.js index.d.ts index.js lib adapters README.md http.js xhr.js axios.js cancel Cancel.js CancelToken.js isCancel.js core Axios.js InterceptorManager.js README.md buildFullPath.js createError.js dispatchRequest.js enhanceError.js mergeConfig.js settle.js transformData.js defaults.js helpers README.md bind.js buildURL.js combineURLs.js cookies.js deprecatedMethod.js isAbsoluteURL.js isURLSameOrigin.js normalizeHeaderName.js parseHeaders.js spread.js utils.js package.json balanced-match .github FUNDING.yml LICENSE.md README.md index.js package.json base-x LICENSE.md README.md package.json src index.d.ts index.js base64-js README.md base64js.min.js index.d.ts index.js package.json binary-install README.md example binary.js package.json run.js index.js node_modules .bin mkdirp.cmd rimraf.cmd package.json src binary.js binaryen README.md bin package.json index.d.ts package.json bindings LICENSE.md README.md bindings.js package.json bip39-light .travis.yml README.md index.js package.json test index.js readme.js vectors.json wordlist.json wordlists english.json bip39 CHANGELOG.md README.md node_modules @types node README.md assert.d.ts async_hooks.d.ts base.d.ts buffer.d.ts child_process.d.ts cluster.d.ts console.d.ts constants.d.ts crypto.d.ts dgram.d.ts dns.d.ts domain.d.ts events.d.ts fs.d.ts globals.d.ts http.d.ts http2.d.ts https.d.ts index.d.ts inspector.d.ts module.d.ts net.d.ts os.d.ts package.json path.d.ts perf_hooks.d.ts process.d.ts punycode.d.ts querystring.d.ts readline.d.ts repl.d.ts stream.d.ts string_decoder.d.ts timers.d.ts tls.d.ts trace_events.d.ts ts3.2 globals.d.ts index.d.ts util.d.ts tty.d.ts url.d.ts util.d.ts v8.d.ts vm.d.ts worker_threads.d.ts zlib.d.ts package.json src _wordlists.js index.js wordlists chinese_simplified.json chinese_traditional.json english.json french.json italian.json japanese.json korean.json spanish.json types _wordlists.d.ts index.d.ts wordlists.d.ts bl .travis.yml BufferList.js LICENSE.md README.md bl.js package.json 
test convert.js indexOf.js isBufferList.js test.js bn.js CHANGELOG.md README.md lib bn.js package.json borsh .eslintrc.yml .travis.yml LICENSE-MIT.txt README.md borsh-ts .eslintrc.yml index.ts test .eslintrc.yml fuzz borsh-roundtrip.js transaction-example enums.d.ts enums.js key_pair.d.ts key_pair.js serialize.d.ts serialize.js signer.d.ts signer.js transaction.d.ts transaction.js serialize.test.js lib index.d.ts index.js package.json tsconfig.json boxen index.d.ts index.js package.json readme.md brace-expansion README.md index.js package.json braces CHANGELOG.md README.md index.js lib compile.js constants.js expand.js parse.js stringify.js utils.js package.json bs58 CHANGELOG.md README.md index.js package.json buffer AUTHORS.md README.md index.d.ts index.js package.json cacheable-request README.md node_modules get-stream buffer-stream.js index.d.ts index.js package.json readme.md lowercase-keys index.d.ts index.js package.json readme.md package.json src index.js callsites index.d.ts index.js package.json readme.md camelcase index.d.ts index.js package.json readme.md capability Array.prototype.forEach.js Array.prototype.map.js Error.captureStackTrace.js Error.prototype.stack.js Function.prototype.bind.js Object.create.js Object.defineProperties.js Object.defineProperty.js Object.prototype.hasOwnProperty.js README.md arguments.callee.caller.js es5.js index.js lib CapabilityDetector.js definitions.js index.js package.json strict mode.js chalk index.d.ts package.json readme.md source index.js templates.js util.js chownr README.md chownr.js package.json ci-info CHANGELOG.md README.md index.js package.json vendors.json cipher-base .travis.yml README.md index.js package.json test.js cli-boxes boxes.json index.d.ts index.js package.json readme.md cliui CHANGELOG.md LICENSE.txt README.md build lib index.js string-utils.js package.json clone-response README.md package.json src index.js code-point-at index.js package.json readme.md color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name README.md index.js package.json commander CHANGELOG.md Readme.md index.js package.json typings index.d.ts concat-map .travis.yml example map.js index.js package.json test map.js configstore index.js package.json readme.md console-control-strings README.md index.js package.json core-util-is README.md lib util.js package.json create-hash .travis.yml README.md browser.js index.js md5.js node_modules .bin sha.js sha.js.cmd package.json test.js create-hmac README.md browser.js index.js legacy.js node_modules .bin sha.js sha.js.cmd package.json cross-spawn CHANGELOG.md README.md index.js lib enoent.js parse.js util escape.js readShebang.js resolveCommand.js node_modules .bin node-which.cmd package.json crypto-random-string index.d.ts index.js package.json readme.md csv-stringify README.md lib browser index.js sync.js es5 index.d.ts index.js sync.d.ts sync.js index.d.ts index.js sync.d.ts sync.js package.json debug README.md package.json src browser.js common.js index.js node.js decamelize index.js package.json readme.md decompress-response index.js package.json readme.md deep-extend CHANGELOG.md README.md index.js lib deep-extend.js package.json deep-is .travis.yml example cmp.js index.js package.json test NaN.js cmp.js neg-vs-pos-0.js defer-to-connect README.md dist index.d.ts index.js package.json define-lazy-prop index.d.ts index.js package.json readme.md delegates History.md Readme.md index.js package.json test index.js depd History.md Readme.md index.js lib browser index.js 
package.json detect-libc README.md bin detect-libc.js lib detect-libc.js package.json diff CONTRIBUTING.md README.md dist diff.js lib convert dmp.js xml.js diff array.js base.js character.js css.js json.js line.js sentence.js word.js index.es6.js index.js patch apply.js create.js merge.js parse.js util array.js distance-iterator.js params.js package.json release-notes.md runtime.js discontinuous-range .travis.yml README.md index.js package.json test main-test.js doctrine CHANGELOG.md README.md lib doctrine.js typed.js utility.js package.json dot-prop index.d.ts index.js package.json readme.md duplexer3 LICENSE.md README.md index.js package.json emoji-regex LICENSE-MIT.txt README.md es2015 index.js text.js index.d.ts index.js package.json text.js end-of-stream README.md index.js package.json enquirer CHANGELOG.md README.md index.d.ts index.js lib ansi.js combos.js completer.js interpolate.js keypress.js placeholder.js prompt.js prompts autocomplete.js basicauth.js confirm.js editable.js form.js index.js input.js invisible.js list.js multiselect.js numeral.js password.js quiz.js scale.js select.js snippet.js sort.js survey.js text.js toggle.js render.js roles.js state.js styles.js symbols.js theme.js timer.js types array.js auth.js boolean.js index.js number.js string.js utils.js package.json env-paths index.d.ts index.js package.json readme.md error-polyfill README.md index.js lib index.js non-v8 Frame.js FrameStringParser.js FrameStringSource.js index.js prepareStackTrace.js unsupported.js v8.js package.json escalade dist index.js index.d.ts package.json readme.md sync index.d.ts index.js escape-goat index.d.ts index.js package.json readme.md escape-string-regexp index.d.ts index.js package.json readme.md eslint-scope CHANGELOG.md README.md lib definition.js index.js pattern-visitor.js reference.js referencer.js scope-manager.js scope.js variable.js node_modules estraverse README.md estraverse.js gulpfile.js package.json package.json eslint-utils README.md index.js package.json eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json eslint CHANGELOG.md README.md bin eslint.js conf category-list.json config-schema.js default-cli-options.js eslint-all.js eslint-recommended.js replacements.json lib api.js cli-engine cli-engine.js file-enumerator.js formatters checkstyle.js codeframe.js compact.js html.js jslint-xml.js json-with-metadata.js json.js junit.js stylish.js table.js tap.js unix.js visualstudio.js hash.js index.js lint-result-cache.js load-rules.js xml-escape.js cli.js config default-config.js flat-config-array.js flat-config-schema.js rule-validator.js eslint eslint.js index.js init autoconfig.js config-file.js config-initializer.js config-rule.js npm-utils.js source-code-utils.js linter apply-disable-directives.js code-path-analysis code-path-analyzer.js code-path-segment.js code-path-state.js code-path.js debug-helpers.js fork-context.js id-generator.js config-comment-parser.js index.js interpolate.js linter.js node-event-generator.js report-translator.js rule-fixer.js rules.js safe-emitter.js source-code-fixer.js timing.js options.js rule-tester index.js rule-tester.js rules accessor-pairs.js array-bracket-newline.js array-bracket-spacing.js array-callback-return.js array-element-newline.js arrow-body-style.js arrow-parens.js arrow-spacing.js block-scoped-var.js block-spacing.js brace-style.js callback-return.js camelcase.js capitalized-comments.js class-methods-use-this.js comma-dangle.js comma-spacing.js comma-style.js complexity.js 
computed-property-spacing.js consistent-return.js consistent-this.js constructor-super.js curly.js default-case-last.js default-case.js default-param-last.js dot-location.js dot-notation.js eol-last.js eqeqeq.js for-direction.js func-call-spacing.js func-name-matching.js func-names.js func-style.js function-call-argument-newline.js function-paren-newline.js generator-star-spacing.js getter-return.js global-require.js grouped-accessor-pairs.js guard-for-in.js handle-callback-err.js id-blacklist.js id-denylist.js id-length.js id-match.js implicit-arrow-linebreak.js indent-legacy.js indent.js index.js init-declarations.js jsx-quotes.js key-spacing.js keyword-spacing.js line-comment-position.js linebreak-style.js lines-around-comment.js lines-around-directive.js lines-between-class-members.js max-classes-per-file.js max-depth.js max-len.js max-lines-per-function.js max-lines.js max-nested-callbacks.js max-params.js max-statements-per-line.js max-statements.js multiline-comment-style.js multiline-ternary.js new-cap.js new-parens.js newline-after-var.js newline-before-return.js newline-per-chained-call.js no-alert.js no-array-constructor.js no-async-promise-executor.js no-await-in-loop.js no-bitwise.js no-buffer-constructor.js no-caller.js no-case-declarations.js no-catch-shadow.js no-class-assign.js no-compare-neg-zero.js no-cond-assign.js no-confusing-arrow.js no-console.js no-const-assign.js no-constant-condition.js no-constructor-return.js no-continue.js no-control-regex.js no-debugger.js no-delete-var.js no-div-regex.js no-dupe-args.js no-dupe-class-members.js no-dupe-else-if.js no-dupe-keys.js no-duplicate-case.js no-duplicate-imports.js no-else-return.js no-empty-character-class.js no-empty-function.js no-empty-pattern.js no-empty.js no-eq-null.js no-eval.js no-ex-assign.js no-extend-native.js no-extra-bind.js no-extra-boolean-cast.js no-extra-label.js no-extra-parens.js no-extra-semi.js no-fallthrough.js no-floating-decimal.js no-func-assign.js no-global-assign.js no-implicit-coercion.js no-implicit-globals.js no-implied-eval.js no-import-assign.js no-inline-comments.js no-inner-declarations.js no-invalid-regexp.js no-invalid-this.js no-irregular-whitespace.js no-iterator.js no-label-var.js no-labels.js no-lone-blocks.js no-lonely-if.js no-loop-func.js no-loss-of-precision.js no-magic-numbers.js no-misleading-character-class.js no-mixed-operators.js no-mixed-requires.js no-mixed-spaces-and-tabs.js no-multi-assign.js no-multi-spaces.js no-multi-str.js no-multiple-empty-lines.js no-native-reassign.js no-negated-condition.js no-negated-in-lhs.js no-nested-ternary.js no-new-func.js no-new-object.js no-new-require.js no-new-symbol.js no-new-wrappers.js no-new.js no-nonoctal-decimal-escape.js no-obj-calls.js no-octal-escape.js no-octal.js no-param-reassign.js no-path-concat.js no-plusplus.js no-process-env.js no-process-exit.js no-promise-executor-return.js no-proto.js no-prototype-builtins.js no-redeclare.js no-regex-spaces.js no-restricted-exports.js no-restricted-globals.js no-restricted-imports.js no-restricted-modules.js no-restricted-properties.js no-restricted-syntax.js no-return-assign.js no-return-await.js no-script-url.js no-self-assign.js no-self-compare.js no-sequences.js no-setter-return.js no-shadow-restricted-names.js no-shadow.js no-spaced-func.js no-sparse-arrays.js no-sync.js no-tabs.js no-template-curly-in-string.js no-ternary.js no-this-before-super.js no-throw-literal.js no-trailing-spaces.js no-undef-init.js no-undef.js no-undefined.js no-underscore-dangle.js 
no-unexpected-multiline.js no-unmodified-loop-condition.js no-unneeded-ternary.js no-unreachable-loop.js no-unreachable.js no-unsafe-finally.js no-unsafe-negation.js no-unsafe-optional-chaining.js no-unused-expressions.js no-unused-labels.js no-unused-vars.js no-use-before-define.js no-useless-backreference.js no-useless-call.js no-useless-catch.js no-useless-computed-key.js no-useless-concat.js no-useless-constructor.js no-useless-escape.js no-useless-rename.js no-useless-return.js no-var.js no-void.js no-warning-comments.js no-whitespace-before-property.js no-with.js nonblock-statement-body-position.js object-curly-newline.js object-curly-spacing.js object-property-newline.js object-shorthand.js one-var-declaration-per-line.js one-var.js operator-assignment.js operator-linebreak.js padded-blocks.js padding-line-between-statements.js prefer-arrow-callback.js prefer-const.js prefer-destructuring.js prefer-exponentiation-operator.js prefer-named-capture-group.js prefer-numeric-literals.js prefer-object-spread.js prefer-promise-reject-errors.js prefer-reflect.js prefer-regex-literals.js prefer-rest-params.js prefer-spread.js prefer-template.js quote-props.js quotes.js radix.js require-atomic-updates.js require-await.js require-jsdoc.js require-unicode-regexp.js require-yield.js rest-spread-spacing.js semi-spacing.js semi-style.js semi.js sort-imports.js sort-keys.js sort-vars.js space-before-blocks.js space-before-function-paren.js space-in-parens.js space-infix-ops.js space-unary-ops.js spaced-comment.js strict.js switch-colon-spacing.js symbol-description.js template-curly-spacing.js template-tag-spacing.js unicode-bom.js use-isnan.js utils ast-utils.js fix-tracker.js keywords.js lazy-loading-rule-map.js patterns letters.js unicode index.js is-combining-character.js is-emoji-modifier.js is-regional-indicator-symbol.js is-surrogate-pair.js valid-jsdoc.js valid-typeof.js vars-on-top.js wrap-iife.js wrap-regex.js yield-star-spacing.js yoda.js shared ajv.js ast-utils.js config-validator.js deprecation-warnings.js logging.js relative-module-resolver.js runtime-info.js string-utils.js traverser.js types.js source-code index.js source-code.js token-store backward-token-comment-cursor.js backward-token-cursor.js cursor.js cursors.js decorative-cursor.js filter-cursor.js forward-token-comment-cursor.js forward-token-cursor.js index.js limit-cursor.js padded-token-cursor.js skip-cursor.js utils.js messages all-files-ignored.js extend-config-missing.js failed-to-read-json.js file-not-found.js no-config-found.js plugin-conflict.js plugin-invalid.js plugin-missing.js print-config-with-directory-path.js whitespace-found.js node_modules .bin js-yaml.cmd semver.cmd eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json package.json espree CHANGELOG.md README.md espree.js lib ast-node-types.js espree.js features.js options.js token-translator.js visitor-keys.js node_modules .bin acorn.cmd package.json esprima README.md bin esparse.js esvalidate.js dist esprima.js package.json esquery README.md dist esquery.esm.js esquery.esm.min.js esquery.js esquery.lite.js esquery.lite.min.js esquery.min.js license.txt package.json parser.js esrecurse README.md esrecurse.js gulpfile.babel.js package.json estraverse README.md estraverse.js gulpfile.js package.json esutils README.md lib ast.js code.js keyword.js utils.js package.json events .airtap.yml .github FUNDING.yml .travis.yml History.md Readme.md events.js package.json security.md tests add-listeners.js check-listener-leaks.js 
common.js errors.js events-list.js events-once.js index.js legacy-compat.js listener-count.js listeners-side-effects.js listeners.js max-listeners.js method-names.js modify-in-emit.js num-args.js once.js prepend.js remove-all-listeners.js remove-listeners.js set-max-listeners-side-effects.js special-event-names.js subclass.js symbols.js expand-template .travis.yml README.md index.js package.json test.js fast-deep-equal README.md es6 index.d.ts index.js react.d.ts react.js index.d.ts index.js package.json react.d.ts react.js fast-json-stable-stringify .eslintrc.yml .github FUNDING.yml .travis.yml README.md benchmark index.js test.json example key_cmp.js nested.js str.js value_cmp.js index.d.ts index.js package.json test cmp.js nested.js str.js to-json.js fast-levenshtein LICENSE.md README.md levenshtein.js package.json file-entry-cache README.md cache.js changelog.md package.json file-uri-to-path .travis.yml History.md README.md index.d.ts index.js package.json test test.js tests.json fill-range README.md index.js package.json find-up index.d.ts index.js package.json readme.md flagged-respawn README.md index.js lib is-v8flags.js remover.js reorder.js respawn.js package.json flat-cache README.md changelog.md node_modules .bin rimraf.cmd package.json src cache.js del.js utils.js flatted .github FUNDING.yml workflows node.js.yml README.md SPECS.md cjs index.js package.json es.js esm index.js index.js min.js package.json php flatted.php types.d.ts follow-redirects README.md http.js https.js index.js node_modules debug .coveralls.yml .travis.yml CHANGELOG.md README.md karma.conf.js node.js package.json src browser.js debug.js index.js node.js ms index.js license.md package.json readme.md package.json fs-constants README.md browser.js index.js package.json fs-minipass README.md index.js package.json fs.realpath README.md index.js old.js package.json functional-red-black-tree README.md bench test.js package.json rbtree.js test test.js gauge CHANGELOG.md README.md base-theme.js error.js has-color.js index.js node_modules ansi-regex index.js package.json readme.md is-fullwidth-code-point index.js package.json readme.md string-width index.js package.json readme.md strip-ansi index.js package.json readme.md package.json plumbing.js process.js progress-bar.js render-template.js set-immediate.js set-interval.js spin.js template-item.js theme-set.js themes.js wide-truncate.js get-caller-file LICENSE.md README.md index.d.ts index.js package.json get-stream buffer-stream.js index.js package.json readme.md github-from-package .travis.yml example package.json url.js index.js package.json test a.json b.json c.json d.json e.json url.js glob-parent CHANGELOG.md README.md index.js package.json glob README.md common.js glob.js package.json sync.js global-dirs index.d.ts index.js package.json readme.md globals globals.json index.d.ts index.js package.json readme.md got package.json readme.md source as-promise.js as-stream.js create.js errors.js get-response.js index.js known-hook-events.js merge.js normalize-arguments.js progress.js request-as-event-emitter.js utils deep-freeze.js get-body-size.js is-form-data.js timed-out.js url-to-options.js graceful-fs README.md clone.js graceful-fs.js legacy-streams.js package.json polyfills.js has-flag index.d.ts index.js package.json readme.md has-unicode README.md index.js package.json has-yarn index.d.ts index.js package.json readme.md hash-base README.md index.js package.json hasurl README.md index.js package.json homedir-polyfill README.md index.js package.json 
polyfill.js http-cache-semantics README.md index.js package.json http-errors HISTORY.md README.md index.js node_modules depd History.md Readme.md index.js lib browser index.js compat callsite-tostring.js event-listener-count.js index.js package.json package.json https-proxy-agent README.md dist agent.d.ts agent.js index.d.ts index.js parse-proxy-response.d.ts parse-proxy-response.js package.json ieee754 README.md index.d.ts index.js package.json ignore CHANGELOG.md README.md index.d.ts index.js legacy.js package.json import-fresh index.d.ts index.js package.json readme.md import-lazy index.js package.json readme.md imurmurhash README.md imurmurhash.js imurmurhash.min.js package.json inflight README.md inflight.js package.json inherits README.md inherits.js inherits_browser.js package.json ini README.md ini.js package.json ip-regex index.d.ts index.js package.json readme.md is-ci CHANGELOG.md README.md bin.js index.js package.json is-docker cli.js index.d.ts index.js package.json readme.md is-extglob README.md index.js package.json is-fullwidth-code-point index.d.ts index.js package.json readme.md is-glob README.md index.js package.json is-installed-globally index.d.ts index.js package.json readme.md is-npm index.d.ts index.js package.json readme.md is-number README.md index.js package.json is-obj index.d.ts index.js package.json readme.md is-path-inside index.d.ts index.js package.json readme.md is-typedarray LICENSE.md README.md index.js package.json test.js is-url .travis.yml History.md Readme.md index.js package.json test index.js is-wsl index.d.ts index.js node_modules .bin is-docker.cmd package.json readme.md is-yarn-global .travis.yml README.md index.js package.json is2 README.md index.js package.json tests.js isarray .travis.yml README.md component.json index.js package.json test.js isexe README.md index.js mode.js package.json test basic.js windows.js isobject README.md index.js package.json jest-environment-node build index.d.ts index.js package.json jest-message-util build index.d.ts index.js types.d.ts types.js node_modules @babel code-frame README.md lib index.js package.json package.json jest-mock README.md build index.d.ts index.js package.json jest-util build ErrorWithStack.d.ts ErrorWithStack.js clearLine.d.ts clearLine.js convertDescriptorToString.d.ts convertDescriptorToString.js createDirectory.d.ts createDirectory.js createProcessObject.d.ts createProcessObject.js deepCyclicCopy.d.ts deepCyclicCopy.js formatTime.d.ts formatTime.js globsToMatcher.d.ts globsToMatcher.js index.d.ts index.js installCommonGlobals.d.ts installCommonGlobals.js interopRequireDefault.d.ts interopRequireDefault.js isInteractive.d.ts isInteractive.js isPromise.d.ts isPromise.js pluralize.d.ts pluralize.js preRunMessage.d.ts preRunMessage.js replacePathSepForGlob.d.ts replacePathSepForGlob.js requireOrImportModule.d.ts requireOrImportModule.js setGlobal.d.ts setGlobal.js specialChars.d.ts specialChars.js testPathPatternToRegExp.d.ts testPathPatternToRegExp.js tryRealpath.d.ts tryRealpath.js node_modules ci-info CHANGELOG.md README.md index.d.ts index.js package.json vendors.json package.json js-base64 LICENSE.md README.md base64.d.ts base64.js package.json js-sha256 CHANGELOG.md LICENSE.txt README.md build sha256.min.js index.d.ts package.json src sha256.js js-tokens CHANGELOG.md README.md index.js package.json js-yaml CHANGELOG.md README.md bin js-yaml.js dist js-yaml.js js-yaml.min.js index.js lib js-yaml.js js-yaml common.js dumper.js exception.js loader.js mark.js schema.js schema core.js 
default_full.js default_safe.js failsafe.js json.js type.js type binary.js bool.js float.js int.js js function.js regexp.js undefined.js map.js merge.js null.js omap.js pairs.js seq.js set.js str.js timestamp.js node_modules .bin esparse.cmd esvalidate.cmd package.json json-buffer .travis.yml README.md index.js package.json test index.js json-schema-traverse .eslintrc.yml .travis.yml README.md index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js json-stable-stringify-without-jsonify .travis.yml example key_cmp.js nested.js str.js value_cmp.js index.js package.json test cmp.js nested.js replacer.js space.js str.js to-json.js keyv README.md package.json src index.js latest-version index.d.ts index.js package.json readme.md levn README.md lib cast.js index.js parse-string.js package.json line-column README.md lib line-column.js package.json locate-path index.d.ts index.js package.json readme.md lodash.clonedeep README.md index.js package.json lodash.merge README.md index.js package.json lodash.sortby README.md index.js package.json lodash.truncate README.md index.js package.json lodash README.md _DataView.js _Hash.js _LazyWrapper.js _ListCache.js _LodashWrapper.js _Map.js _MapCache.js _Promise.js _Set.js _SetCache.js _Stack.js _Symbol.js _Uint8Array.js _WeakMap.js _apply.js _arrayAggregator.js _arrayEach.js _arrayEachRight.js _arrayEvery.js _arrayFilter.js _arrayIncludes.js _arrayIncludesWith.js _arrayLikeKeys.js _arrayMap.js _arrayPush.js _arrayReduce.js _arrayReduceRight.js _arraySample.js _arraySampleSize.js _arrayShuffle.js _arraySome.js _asciiSize.js _asciiToArray.js _asciiWords.js _assignMergeValue.js _assignValue.js _assocIndexOf.js _baseAggregator.js _baseAssign.js _baseAssignIn.js _baseAssignValue.js _baseAt.js _baseClamp.js _baseClone.js _baseConforms.js _baseConformsTo.js _baseCreate.js _baseDelay.js _baseDifference.js _baseEach.js _baseEachRight.js _baseEvery.js _baseExtremum.js _baseFill.js _baseFilter.js _baseFindIndex.js _baseFindKey.js _baseFlatten.js _baseFor.js _baseForOwn.js _baseForOwnRight.js _baseForRight.js _baseFunctions.js _baseGet.js _baseGetAllKeys.js _baseGetTag.js _baseGt.js _baseHas.js _baseHasIn.js _baseInRange.js _baseIndexOf.js _baseIndexOfWith.js _baseIntersection.js _baseInverter.js _baseInvoke.js _baseIsArguments.js _baseIsArrayBuffer.js _baseIsDate.js _baseIsEqual.js _baseIsEqualDeep.js _baseIsMap.js _baseIsMatch.js _baseIsNaN.js _baseIsNative.js _baseIsRegExp.js _baseIsSet.js _baseIsTypedArray.js _baseIteratee.js _baseKeys.js _baseKeysIn.js _baseLodash.js _baseLt.js _baseMap.js _baseMatches.js _baseMatchesProperty.js _baseMean.js _baseMerge.js _baseMergeDeep.js _baseNth.js _baseOrderBy.js _basePick.js _basePickBy.js _baseProperty.js _basePropertyDeep.js _basePropertyOf.js _basePullAll.js _basePullAt.js _baseRandom.js _baseRange.js _baseReduce.js _baseRepeat.js _baseRest.js _baseSample.js _baseSampleSize.js _baseSet.js _baseSetData.js _baseSetToString.js _baseShuffle.js _baseSlice.js _baseSome.js _baseSortBy.js _baseSortedIndex.js _baseSortedIndexBy.js _baseSortedUniq.js _baseSum.js _baseTimes.js _baseToNumber.js _baseToPairs.js _baseToString.js _baseTrim.js _baseUnary.js _baseUniq.js _baseUnset.js _baseUpdate.js _baseValues.js _baseWhile.js _baseWrapperValue.js _baseXor.js _baseZipObject.js _cacheHas.js _castArrayLikeObject.js _castFunction.js _castPath.js _castRest.js _castSlice.js _charsEndIndex.js _charsStartIndex.js _cloneArrayBuffer.js _cloneBuffer.js _cloneDataView.js _cloneRegExp.js _cloneSymbol.js _cloneTypedArray.js 
_compareAscending.js _compareMultiple.js _composeArgs.js _composeArgsRight.js _copyArray.js _copyObject.js _copySymbols.js _copySymbolsIn.js _coreJsData.js _countHolders.js _createAggregator.js _createAssigner.js _createBaseEach.js _createBaseFor.js _createBind.js _createCaseFirst.js _createCompounder.js _createCtor.js _createCurry.js _createFind.js _createFlow.js _createHybrid.js _createInverter.js _createMathOperation.js _createOver.js _createPadding.js _createPartial.js _createRange.js _createRecurry.js _createRelationalOperation.js _createRound.js _createSet.js _createToPairs.js _createWrap.js _customDefaultsAssignIn.js _customDefaultsMerge.js _customOmitClone.js _deburrLetter.js _defineProperty.js _equalArrays.js _equalByTag.js _equalObjects.js _escapeHtmlChar.js _escapeStringChar.js _flatRest.js _freeGlobal.js _getAllKeys.js _getAllKeysIn.js _getData.js _getFuncName.js _getHolder.js _getMapData.js _getMatchData.js _getNative.js _getPrototype.js _getRawTag.js _getSymbols.js _getSymbolsIn.js _getTag.js _getValue.js _getView.js _getWrapDetails.js _hasPath.js _hasUnicode.js _hasUnicodeWord.js _hashClear.js _hashDelete.js _hashGet.js _hashHas.js _hashSet.js _initCloneArray.js _initCloneByTag.js _initCloneObject.js _insertWrapDetails.js _isFlattenable.js _isIndex.js _isIterateeCall.js _isKey.js _isKeyable.js _isLaziable.js _isMaskable.js _isMasked.js _isPrototype.js _isStrictComparable.js _iteratorToArray.js _lazyClone.js _lazyReverse.js _lazyValue.js _listCacheClear.js _listCacheDelete.js _listCacheGet.js _listCacheHas.js _listCacheSet.js _mapCacheClear.js _mapCacheDelete.js _mapCacheGet.js _mapCacheHas.js _mapCacheSet.js _mapToArray.js _matchesStrictComparable.js _memoizeCapped.js _mergeData.js _metaMap.js _nativeCreate.js _nativeKeys.js _nativeKeysIn.js _nodeUtil.js _objectToString.js _overArg.js _overRest.js _parent.js _reEscape.js _reEvaluate.js _reInterpolate.js _realNames.js _reorder.js _replaceHolders.js _root.js _safeGet.js _setCacheAdd.js _setCacheHas.js _setData.js _setToArray.js _setToPairs.js _setToString.js _setWrapToString.js _shortOut.js _shuffleSelf.js _stackClear.js _stackDelete.js _stackGet.js _stackHas.js _stackSet.js _strictIndexOf.js _strictLastIndexOf.js _stringSize.js _stringToArray.js _stringToPath.js _toKey.js _toSource.js _trimmedEndIndex.js _unescapeHtmlChar.js _unicodeSize.js _unicodeToArray.js _unicodeWords.js _updateWrapDetails.js _wrapperClone.js add.js after.js array.js ary.js assign.js assignIn.js assignInWith.js assignWith.js at.js attempt.js before.js bind.js bindAll.js bindKey.js camelCase.js capitalize.js castArray.js ceil.js chain.js chunk.js clamp.js clone.js cloneDeep.js cloneDeepWith.js cloneWith.js collection.js commit.js compact.js concat.js cond.js conforms.js conformsTo.js constant.js core.js core.min.js countBy.js create.js curry.js curryRight.js date.js debounce.js deburr.js defaultTo.js defaults.js defaultsDeep.js defer.js delay.js difference.js differenceBy.js differenceWith.js divide.js drop.js dropRight.js dropRightWhile.js dropWhile.js each.js eachRight.js endsWith.js entries.js entriesIn.js eq.js escape.js escapeRegExp.js every.js extend.js extendWith.js fill.js filter.js find.js findIndex.js findKey.js findLast.js findLastIndex.js findLastKey.js first.js flatMap.js flatMapDeep.js flatMapDepth.js flatten.js flattenDeep.js flattenDepth.js flip.js floor.js flow.js flowRight.js forEach.js forEachRight.js forIn.js forInRight.js forOwn.js forOwnRight.js fp.js fp F.js T.js __.js _baseConvert.js _convertBrowser.js _falseOptions.js _mapping.js 
_util.js add.js after.js all.js allPass.js always.js any.js anyPass.js apply.js array.js ary.js assign.js assignAll.js assignAllWith.js assignIn.js assignInAll.js assignInAllWith.js assignInWith.js assignWith.js assoc.js assocPath.js at.js attempt.js before.js bind.js bindAll.js bindKey.js camelCase.js capitalize.js castArray.js ceil.js chain.js chunk.js clamp.js clone.js cloneDeep.js cloneDeepWith.js cloneWith.js collection.js commit.js compact.js complement.js compose.js concat.js cond.js conforms.js conformsTo.js constant.js contains.js convert.js countBy.js create.js curry.js curryN.js curryRight.js curryRightN.js date.js debounce.js deburr.js defaultTo.js defaults.js defaultsAll.js defaultsDeep.js defaultsDeepAll.js defer.js delay.js difference.js differenceBy.js differenceWith.js dissoc.js dissocPath.js divide.js drop.js dropLast.js dropLastWhile.js dropRight.js dropRightWhile.js dropWhile.js each.js eachRight.js endsWith.js entries.js entriesIn.js eq.js equals.js escape.js escapeRegExp.js every.js extend.js extendAll.js extendAllWith.js extendWith.js fill.js filter.js find.js findFrom.js findIndex.js findIndexFrom.js findKey.js findLast.js findLastFrom.js findLastIndex.js findLastIndexFrom.js findLastKey.js first.js flatMap.js flatMapDeep.js flatMapDepth.js flatten.js flattenDeep.js flattenDepth.js flip.js floor.js flow.js flowRight.js forEach.js forEachRight.js forIn.js forInRight.js forOwn.js forOwnRight.js fromPairs.js function.js functions.js functionsIn.js get.js getOr.js groupBy.js gt.js gte.js has.js hasIn.js head.js identical.js identity.js inRange.js includes.js includesFrom.js indexBy.js indexOf.js indexOfFrom.js init.js initial.js intersection.js intersectionBy.js intersectionWith.js invert.js invertBy.js invertObj.js invoke.js invokeArgs.js invokeArgsMap.js invokeMap.js isArguments.js isArray.js isArrayBuffer.js isArrayLike.js isArrayLikeObject.js isBoolean.js isBuffer.js isDate.js isElement.js isEmpty.js isEqual.js isEqualWith.js isError.js isFinite.js isFunction.js isInteger.js isLength.js isMap.js isMatch.js isMatchWith.js isNaN.js isNative.js isNil.js isNull.js isNumber.js isObject.js isObjectLike.js isPlainObject.js isRegExp.js isSafeInteger.js isSet.js isString.js isSymbol.js isTypedArray.js isUndefined.js isWeakMap.js isWeakSet.js iteratee.js join.js juxt.js kebabCase.js keyBy.js keys.js keysIn.js lang.js last.js lastIndexOf.js lastIndexOfFrom.js lowerCase.js lowerFirst.js lt.js lte.js map.js mapKeys.js mapValues.js matches.js matchesProperty.js math.js max.js maxBy.js mean.js meanBy.js memoize.js merge.js mergeAll.js mergeAllWith.js mergeWith.js method.js methodOf.js min.js minBy.js mixin.js multiply.js nAry.js negate.js next.js noop.js now.js nth.js nthArg.js number.js object.js omit.js omitAll.js omitBy.js once.js orderBy.js over.js overArgs.js overEvery.js overSome.js pad.js padChars.js padCharsEnd.js padCharsStart.js padEnd.js padStart.js parseInt.js partial.js partialRight.js partition.js path.js pathEq.js pathOr.js paths.js pick.js pickAll.js pickBy.js pipe.js placeholder.js plant.js pluck.js prop.js propEq.js propOr.js property.js propertyOf.js props.js pull.js pullAll.js pullAllBy.js pullAllWith.js pullAt.js random.js range.js rangeRight.js rangeStep.js rangeStepRight.js rearg.js reduce.js reduceRight.js reject.js remove.js repeat.js replace.js rest.js restFrom.js result.js reverse.js round.js sample.js sampleSize.js seq.js set.js setWith.js shuffle.js size.js slice.js snakeCase.js some.js sortBy.js sortedIndex.js sortedIndexBy.js sortedIndexOf.js 
sortedLastIndex.js sortedLastIndexBy.js sortedLastIndexOf.js sortedUniq.js sortedUniqBy.js split.js spread.js spreadFrom.js startCase.js startsWith.js string.js stubArray.js stubFalse.js stubObject.js stubString.js stubTrue.js subtract.js sum.js sumBy.js symmetricDifference.js symmetricDifferenceBy.js symmetricDifferenceWith.js tail.js take.js takeLast.js takeLastWhile.js takeRight.js takeRightWhile.js takeWhile.js tap.js template.js templateSettings.js throttle.js thru.js times.js toArray.js toFinite.js toInteger.js toIterator.js toJSON.js toLength.js toLower.js toNumber.js toPairs.js toPairsIn.js toPath.js toPlainObject.js toSafeInteger.js toString.js toUpper.js transform.js trim.js trimChars.js trimCharsEnd.js trimCharsStart.js trimEnd.js trimStart.js truncate.js unapply.js unary.js unescape.js union.js unionBy.js unionWith.js uniq.js uniqBy.js uniqWith.js uniqueId.js unnest.js unset.js unzip.js unzipWith.js update.js updateWith.js upperCase.js upperFirst.js useWith.js util.js value.js valueOf.js values.js valuesIn.js where.js whereEq.js without.js words.js wrap.js wrapperAt.js wrapperChain.js wrapperLodash.js wrapperReverse.js wrapperValue.js xor.js xorBy.js xorWith.js zip.js zipAll.js zipObj.js zipObject.js zipObjectDeep.js zipWith.js fromPairs.js function.js functions.js functionsIn.js get.js groupBy.js gt.js gte.js has.js hasIn.js head.js identity.js inRange.js includes.js index.js indexOf.js initial.js intersection.js intersectionBy.js intersectionWith.js invert.js invertBy.js invoke.js invokeMap.js isArguments.js isArray.js isArrayBuffer.js isArrayLike.js isArrayLikeObject.js isBoolean.js isBuffer.js isDate.js isElement.js isEmpty.js isEqual.js isEqualWith.js isError.js isFinite.js isFunction.js isInteger.js isLength.js isMap.js isMatch.js isMatchWith.js isNaN.js isNative.js isNil.js isNull.js isNumber.js isObject.js isObjectLike.js isPlainObject.js isRegExp.js isSafeInteger.js isSet.js isString.js isSymbol.js isTypedArray.js isUndefined.js isWeakMap.js isWeakSet.js iteratee.js join.js kebabCase.js keyBy.js keys.js keysIn.js lang.js last.js lastIndexOf.js lodash.js lodash.min.js lowerCase.js lowerFirst.js lt.js lte.js map.js mapKeys.js mapValues.js matches.js matchesProperty.js math.js max.js maxBy.js mean.js meanBy.js memoize.js merge.js mergeWith.js method.js methodOf.js min.js minBy.js mixin.js multiply.js negate.js next.js noop.js now.js nth.js nthArg.js number.js object.js omit.js omitBy.js once.js orderBy.js over.js overArgs.js overEvery.js overSome.js package.json pad.js padEnd.js padStart.js parseInt.js partial.js partialRight.js partition.js pick.js pickBy.js plant.js property.js propertyOf.js pull.js pullAll.js pullAllBy.js pullAllWith.js pullAt.js random.js range.js rangeRight.js rearg.js reduce.js reduceRight.js reject.js release.md remove.js repeat.js replace.js rest.js result.js reverse.js round.js sample.js sampleSize.js seq.js set.js setWith.js shuffle.js size.js slice.js snakeCase.js some.js sortBy.js sortedIndex.js sortedIndexBy.js sortedIndexOf.js sortedLastIndex.js sortedLastIndexBy.js sortedLastIndexOf.js sortedUniq.js sortedUniqBy.js split.js spread.js startCase.js startsWith.js string.js stubArray.js stubFalse.js stubObject.js stubString.js stubTrue.js subtract.js sum.js sumBy.js tail.js take.js takeRight.js takeRightWhile.js takeWhile.js tap.js template.js templateSettings.js throttle.js thru.js times.js toArray.js toFinite.js toInteger.js toIterator.js toJSON.js toLength.js toLower.js toNumber.js toPairs.js toPairsIn.js toPath.js toPlainObject.js 
toSafeInteger.js toString.js toUpper.js transform.js trim.js trimEnd.js trimStart.js truncate.js unary.js unescape.js union.js unionBy.js unionWith.js uniq.js uniqBy.js uniqWith.js uniqueId.js unset.js unzip.js unzipWith.js update.js updateWith.js upperCase.js upperFirst.js util.js value.js valueOf.js values.js valuesIn.js without.js words.js wrap.js wrapperAt.js wrapperChain.js wrapperLodash.js wrapperReverse.js wrapperValue.js xor.js xorBy.js xorWith.js zip.js zipObject.js zipObjectDeep.js zipWith.js long README.md dist long.js index.js package.json src long.js lowercase-keys index.js package.json readme.md lru-cache README.md index.js package.json make-dir index.d.ts index.js node_modules .bin semver.cmd semver CHANGELOG.md README.md bin semver.js package.json semver.js package.json readme.md md5.js README.md index.js package.json micromatch README.md index.js package.json mimic-response index.js package.json readme.md minimatch README.md minimatch.js package.json minimist .travis.yml example parse.js index.js package.json test all_bool.js bool.js dash.js default_bool.js dotted.js kv_short.js long.js num.js parse.js parse_modified.js proto.js short.js stop_early.js unknown.js whitespace.js minipass README.md index.js package.json minizlib README.md constants.js index.js package.json mixpanel .travis.yml example.js history.md lib groups.js mixpanel-node.d.ts mixpanel-node.js people.js profile_helpers.js utils.js package.json readme.md test alias.js config.js groups.js import.js people.js send_request.js track.js utils.js mkdirp-classic README.md index.js package.json mkdirp bin cmd.js usage.txt index.js package.json moo README.md moo.js package.json ms index.js license.md package.json readme.md mustache CHANGELOG.md README.md mustache.js mustache.min.js package.json napi-build-utils README.md index.js index.md package.json natural-compare README.md index.js package.json ncp .travis.yml LICENSE.md README.md lib ncp.js package.json test ncp.js near-api-js README.md browser-exports.js dist near-api-js.js near-api-js.min.js lib account.d.ts account.js account_creator.d.ts account_creator.js account_multisig.d.ts account_multisig.js browser-connect.d.ts browser-connect.js browser-index.d.ts browser-index.js common-index.d.ts common-index.js connect.d.ts connect.js connection.d.ts connection.js constants.d.ts constants.js contract.d.ts contract.js generated rpc_error_schema.json index.d.ts index.js key_stores browser-index.d.ts browser-index.js browser_local_storage_key_store.d.ts browser_local_storage_key_store.js in_memory_key_store.d.ts in_memory_key_store.js index.d.ts index.js keystore.d.ts keystore.js merge_key_store.d.ts merge_key_store.js unencrypted_file_system_keystore.d.ts unencrypted_file_system_keystore.js near.d.ts near.js providers index.d.ts index.js json-rpc-provider.d.ts json-rpc-provider.js provider.d.ts provider.js res error_messages.d.ts error_messages.json signer.d.ts signer.js transaction.d.ts transaction.js utils enums.d.ts enums.js errors.d.ts errors.js exponential-backoff.d.ts exponential-backoff.js format.d.ts format.js index.d.ts index.js key_pair.d.ts key_pair.js network.d.ts network.js rpc_errors.d.ts rpc_errors.js serialize.d.ts serialize.js setup-node-fetch.d.ts setup-node-fetch.js web.d.ts web.js validators.d.ts validators.js wallet-account.d.ts wallet-account.js node_modules .bin mustache.cmd package.json near-cli CHANGELOG.md README.md bin near-cli.js commands add-key.js call.js create-account.js delete-key.js dev-deploy.js evm-call.js evm-dev-init.js 
evm-view.js generate-key.js proposals.js repl.js set-x-api-key.js tx-status.js validators.js view-state.js config.js context index.d.ts get-config.js index.js middleware abi.js base64-args.js initial-balance.js key-store.js ledger.js print-options.js seed-phrase.js node_modules .bin is-ci.cmd ncp.cmd rimraf.cmd uuid.cmd package.json test_environment.js utils capture-login-success.js check-credentials.js check-version.js connect.js deprecation-warning.js eventtracking.js exit-on-error.js explorer.js implicit-accountid.js inspect-response.js log-event.js readline.js settings.js validators-info.js verify-account.js x-api-key-settings.js near-hd-key README.md dist index.d.ts index.js utils.d.ts utils.js package.json near-ledger-js .travis.yml README.md demo demo.js index.html index.js package.json supportedTransports.js near-mock-vm assembly __tests__ main.ts context.ts index.ts outcome.ts vm.ts bin bin.js package.json pkg near_mock_vm.d.ts near_mock_vm.js package.json vm dist cli.d.ts cli.js context.d.ts context.js index.d.ts index.js memory.d.ts memory.js runner.d.ts runner.js utils.d.ts utils.js index.js near-sdk-as README.md as-pect.config.js as_types.d.ts asconfig.json asp.asconfig.json assembly __tests__ as-pect.d.ts assert.spec.ts avl-tree.spec.ts bignum.spec.ts contract.spec.ts contract.ts data.txt datetime.spec.ts empty.ts generic.ts includeBytes.spec.ts main.ts max-heap.spec.ts model.ts near.spec.ts persistent-set.spec.ts promise.spec.ts rollback.spec.ts roundtrip.spec.ts runtime.spec.ts unordered-map.spec.ts util.ts utils.spec.ts as_types.d.ts bindgen.ts index.ts json.lib.ts tsconfig.json vm __tests__ vm.include.ts index.ts compiler.js imports.js node_modules .bin near-vm-as.cmd out assembly __tests__ ason.ts model.ts ~lib as-bignum integer safe u128.ts package.json near-sdk-bindgen README.md assembly index.ts compiler.js dist JSONBuilder.d.ts JSONBuilder.js classExporter.d.ts classExporter.js index.d.ts index.js transformer.d.ts transformer.js typeChecker.d.ts typeChecker.js utils.d.ts utils.js index.js package.json near-sdk-core README.md asconfig.json assembly as_types.d.ts base58.ts base64.ts bignum.ts collections avlTree.ts index.ts maxHeap.ts persistentDeque.ts persistentMap.ts persistentSet.ts persistentUnorderedMap.ts persistentVector.ts util.ts contract.ts datetime.ts env env.ts index.ts runtime_api.ts index.ts logging.ts math.ts promise.ts storage.ts tsconfig.json util.ts docs assets css main.css js main.js search.json classes _sdk_core_assembly_collections_avltree_.avltree.html _sdk_core_assembly_collections_avltree_.avltreenode.html _sdk_core_assembly_collections_avltree_.childparentpair.html _sdk_core_assembly_collections_avltree_.nullable.html _sdk_core_assembly_collections_persistentdeque_.persistentdeque.html _sdk_core_assembly_collections_persistentmap_.persistentmap.html _sdk_core_assembly_collections_persistentset_.persistentset.html _sdk_core_assembly_collections_persistentunorderedmap_.persistentunorderedmap.html _sdk_core_assembly_collections_persistentvector_.persistentvector.html _sdk_core_assembly_contract_.context-1.html _sdk_core_assembly_contract_.contractpromise.html _sdk_core_assembly_contract_.contractpromiseresult.html _sdk_core_assembly_math_.rng.html _sdk_core_assembly_promise_.contractpromisebatch.html _sdk_core_assembly_storage_.storage-1.html globals.html index.html modules _sdk_core_assembly_base58_.base58.html _sdk_core_assembly_base58_.html _sdk_core_assembly_base64_.base64.html _sdk_core_assembly_base64_.html 
_sdk_core_assembly_collections_avltree_.html _sdk_core_assembly_collections_index_.collections.html _sdk_core_assembly_collections_index_.html _sdk_core_assembly_collections_persistentdeque_.html _sdk_core_assembly_collections_persistentmap_.html _sdk_core_assembly_collections_persistentset_.html _sdk_core_assembly_collections_persistentunorderedmap_.html _sdk_core_assembly_collections_persistentvector_.html _sdk_core_assembly_collections_util_.html _sdk_core_assembly_contract_.html _sdk_core_assembly_env_env_.env.html _sdk_core_assembly_env_env_.html _sdk_core_assembly_env_index_.html _sdk_core_assembly_env_runtime_api_.html _sdk_core_assembly_index_.html _sdk_core_assembly_logging_.html _sdk_core_assembly_logging_.logging.html _sdk_core_assembly_math_.html _sdk_core_assembly_math_.math.html _sdk_core_assembly_promise_.html _sdk_core_assembly_storage_.html _sdk_core_assembly_util_.html _sdk_core_assembly_util_.util.html node_modules .bin asb.cmd asbuild.cmd asc.cmd asinit.cmd asp.cmd aspect.cmd assemblyscript-build.cmd near-vm.cmd @as-pect assembly README.md assembly index.ts internal Actual.ts Expectation.ts Expected.ts Reflect.ts ReflectedValueType.ts Test.ts assert.ts call.ts comparison toIncludeComparison.ts toIncludeEqualComparison.ts log.ts noOp.ts package.json types as-pect.d.ts as-pect.portable.d.ts env.d.ts cli README.md init as-pect.config.js env.d.ts example.spec.ts init-types.d.ts portable-types.d.ts lib as-pect.cli.amd.d.ts as-pect.cli.amd.js help.d.ts help.js index.d.ts index.js init.d.ts init.js portable.d.ts portable.js run.d.ts run.js test.d.ts test.js types.d.ts types.js util CommandLineArg.d.ts CommandLineArg.js IConfiguration.d.ts IConfiguration.js asciiArt.d.ts asciiArt.js collectReporter.d.ts collectReporter.js getTestEntryFiles.d.ts getTestEntryFiles.js removeFile.d.ts removeFile.js strings.d.ts strings.js writeFile.d.ts writeFile.js worklets ICommand.d.ts ICommand.js compiler.d.ts compiler.js package.json core README.md lib as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js reporter CombinationReporter.d.ts CombinationReporter.js EmptyReporter.d.ts EmptyReporter.js IReporter.d.ts IReporter.js SummaryReporter.d.ts SummaryReporter.js VerboseReporter.d.ts VerboseReporter.js test IWarning.d.ts IWarning.js TestContext.d.ts TestContext.js TestNode.d.ts TestNode.js transform assemblyscript.d.ts assemblyscript.js createAddReflectedValueKeyValuePairsMember.d.ts createAddReflectedValueKeyValuePairsMember.js createGenericTypeParameter.d.ts createGenericTypeParameter.js createStrictEqualsMember.d.ts createStrictEqualsMember.js emptyTransformer.d.ts emptyTransformer.js hash.d.ts hash.js index.d.ts index.js util IAspectExports.d.ts IAspectExports.js IWriteable.d.ts IWriteable.js ReflectedValue.d.ts ReflectedValue.js TestNodeType.d.ts TestNodeType.js rTrace.d.ts rTrace.js stringifyReflectedValue.d.ts stringifyReflectedValue.js timeDifference.d.ts timeDifference.js wasmTools.d.ts wasmTools.js package.json assemblyscript README.md cli README.md asc.d.ts asc.js asc.json shim README.md fs.js path.js process.js transform.d.ts transform.js util colors.d.ts colors.js find.d.ts find.js mkdirp.d.ts mkdirp.js options.d.ts options.js utf8.d.ts utf8.js dist asc.js assemblyscript.d.ts assemblyscript.js sdk.js index.d.ts index.js lib loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json rtrace README.md bin rtplot.js index.d.ts index.js package.json umd index.d.ts index.js package.json node_modules .bin wasm-opt.cmd package-lock.json package.json std 
README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings Date.ts Math.ts Reflect.ts asyncify.ts console.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts polyfills.ts process.ts reference.ts regexp.ts rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts set.ts shared feature.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts uri.ts util casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts uri.ts vector.ts wasi index.ts portable.json portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json binaryen README.md index.d.ts package-lock.json package.json wasm.d.ts package.json near-sdk-simulator __tests__ avl-tree-contract.spec.ts cross.spec.ts empty.spec.ts exportAs.spec.ts singleton-no-constructor.spec.ts singleton.spec.ts asconfig.js asconfig.json assembly __tests__ avlTreeContract.ts empty.ts exportAs.ts model.ts sentences.ts singleton-fail.ts singleton-no-constructor.ts singleton.ts words.ts as_types.d.ts tsconfig.json dist bin.d.ts bin.js context.d.ts context.js index.d.ts index.js runtime.d.ts runtime.js types.d.ts types.js utils.d.ts utils.js jest.config.js out assembly __tests__ empty.ts exportAs.ts model.ts sentences.ts singleton copy.ts singleton-no-constructor.ts singleton.ts package.json src context.ts index.ts runtime.ts types.ts utils.ts tsconfig.json near-seed-phrase .eslintrc.yml .fossa.yml .travis.yml index.js package.json test index.test.js near-vm getBinary.js install.js package.json run.js uninstall.js nearley LICENSE.txt README.md bin nearley-railroad.js nearley-test.js nearley-unparse.js nearleyc.js lib compile.js generate.js lint.js nearley-language-bootstrapped.js nearley.js stream.js unparse.js package.json node-abi .github workflows update-abi.yml .travis.yml CODE_OF_CONDUCT.md CONTRIBUTING.md README.md abi_registry.json index.js node_modules .bin semver.cmd semver CHANGELOG.md README.md package.json semver.js package.json scripts update-abi-registry.js test index.js node-addon-api CHANGELOG.md LICENSE.md README.md index.js napi-inl.deprecated.h napi-inl.h napi.h nothing.c package-support.json package.json tools README.md check-napi.js clang-format.js conversion.js node-fetch LICENSE.md README.md browser.js lib index.es.js index.js package.json node-gyp-build README.md bin.js build-test.js index.js optional.js package.json node-hid Publishing.md README.md hidapi .appveyor.yml .builds alpine.yml archlinux.yml fedora-mingw.yml freebsd.yml .cirrus.yml AUTHORS.txt HACKING.txt LICENSE-bsd.txt LICENSE-gpl3.txt LICENSE-orig.txt LICENSE.txt README.md hidapi hidapi.h hidtest test.c libusb hid.c linux hid.c mac hid.c testgui copy_to_bundle.sh mac_support.h test.cpp testgui.sln windows hid.c hidapi.sln node_modules .bin prebuild-install.cmd nodehid.js package.json src buzzers.js powermate.js show-devices.js test-bigredbutton.js test-blink1.js test-buzzers.js test-ci.js test-macbookprotrackpad.js test-powermate.js test-ps3-rumbleled.js test-ps3.js test-teensyrawhid.js test-tinyusbrawhid.js testReadSync.js normalize-url index.d.ts index.js package.json readme.md npmlog CHANGELOG.md README.md log.js package.json number-is-nan index.js package.json readme.md o3 README.md 
index.js lib Class.js abstractMethod.js index.js package.json object-assign index.js package.json readme.md once README.md once.js package.json open index.d.ts index.js node_modules .bin is-docker.cmd package.json readme.md optionator CHANGELOG.md README.md lib help.js index.js util.js package.json p-cancelable index.d.ts index.js package.json readme.md p-limit index.d.ts index.js package.json readme.md p-locate index.d.ts index.js package.json readme.md p-try index.d.ts index.js package.json readme.md package-json index.d.ts index.js node_modules .bin semver.cmd semver CHANGELOG.md README.md bin semver.js package.json semver.js package.json readme.md parent-module index.js package.json readme.md parse-passwd README.md index.js package.json path-exists index.d.ts index.js package.json readme.md path-is-absolute index.js package.json readme.md path-key index.d.ts index.js package.json readme.md pbkdf2 README.md browser.js index.js lib async.js default-encoding.js precondition.js sync-browser.js sync.js to-buffer.js node_modules .bin sha.js sha.js.cmd package.json picomatch CHANGELOG.md README.md index.js lib constants.js parse.js picomatch.js scan.js utils.js package.json platform README.md package.json platform.js prebuild-install CHANGELOG.md CONTRIBUTING.md README.md asset.js bin.js download.js error.js help.txt index.js log.js node_modules .bin detect-libc.cmd rc.cmd package.json proxy.js rc.js util.js prelude-ls CHANGELOG.md README.md lib Func.js List.js Num.js Obj.js Str.js index.js package.json prepend-http index.js package.json readme.md pretty-format README.md build collections.d.ts collections.js index.d.ts index.js plugins AsymmetricMatcher.d.ts AsymmetricMatcher.js ConvertAnsi.d.ts ConvertAnsi.js DOMCollection.d.ts DOMCollection.js DOMElement.d.ts DOMElement.js Immutable.d.ts Immutable.js ReactElement.d.ts ReactElement.js ReactTestComponent.d.ts ReactTestComponent.js lib escapeHTML.d.ts escapeHTML.js markup.d.ts markup.js types.d.ts types.js node_modules ansi-styles index.d.ts index.js package.json readme.md package.json process-nextick-args index.js license.md package.json readme.md progress CHANGELOG.md Readme.md index.js lib node-progress.js package.json pump .travis.yml README.md index.js package.json test-browser.js test-node.js punycode LICENSE-MIT.txt README.md package.json punycode.es6.js punycode.js pupa index.d.ts index.js package.json readme.md railroad-diagrams README.md example.html generator.html package.json railroad-diagrams.css railroad-diagrams.js railroad_diagrams.py randexp README.md lib randexp.js package.json randombytes .travis.yml .zuul.yml README.md browser.js index.js package.json test.js rc README.md browser.js cli.js index.js lib utils.js node_modules ini README.md ini.js package.json strip-json-comments index.js package.json readme.md package.json test ini.js nested-env-vars.js test.js react-is README.md build-info.json cjs react-is.development.js react-is.production.min.js index.js package.json umd react-is.development.js react-is.production.min.js readable-stream CONTRIBUTING.md GOVERNANCE.md README.md errors-browser.js errors.js experimentalWarning.js lib _stream_duplex.js _stream_passthrough.js _stream_readable.js _stream_transform.js _stream_writable.js internal streams async_iterator.js buffer_list.js destroy.js end-of-stream.js from-browser.js from.js pipeline.js state.js stream-browser.js stream.js package.json readable-browser.js readable.js regexpp README.md index.d.ts index.js package.json registry-auth-token CHANGELOG.md README.md 
base64.js index.js node_modules .bin rc.cmd package.json registry-url.js registry-url index.d.ts index.js node_modules .bin rc.cmd package.json readme.md require-directory .travis.yml index.js package.json require-from-string index.js package.json readme.md require-main-filename CHANGELOG.md LICENSE.txt README.md index.js package.json resolve-from index.js package.json readme.md responselike README.md package.json src index.js ret README.md lib index.js positions.js sets.js types.js util.js package.json rimraf CHANGELOG.md README.md bin.js package.json rimraf.js ripemd160 CHANGELOG.md README.md index.js package.json rxjs AsyncSubject.d.ts AsyncSubject.js BehaviorSubject.d.ts BehaviorSubject.js InnerSubscriber.d.ts InnerSubscriber.js LICENSE.txt Notification.d.ts Notification.js Observable.d.ts Observable.js Observer.d.ts Observer.js Operator.d.ts Operator.js OuterSubscriber.d.ts OuterSubscriber.js README.md ReplaySubject.d.ts ReplaySubject.js Rx.d.ts Rx.js Scheduler.d.ts Scheduler.js Subject.d.ts Subject.js SubjectSubscription.d.ts SubjectSubscription.js Subscriber.d.ts Subscriber.js Subscription.d.ts Subscription.js _esm2015 LICENSE.txt README.md ajax index.js fetch index.js index.js internal-compatibility index.js internal AsyncSubject.js BehaviorSubject.js InnerSubscriber.js Notification.js Observable.js Observer.js Operator.js OuterSubscriber.js ReplaySubject.js Rx.js Scheduler.js Subject.js SubjectSubscription.js Subscriber.js Subscription.js config.js innerSubscribe.js observable ConnectableObservable.js SubscribeOnObservable.js bindCallback.js bindNodeCallback.js combineLatest.js concat.js defer.js dom AjaxObservable.js WebSocketSubject.js ajax.js fetch.js webSocket.js empty.js forkJoin.js from.js fromArray.js fromEvent.js fromEventPattern.js fromIterable.js fromPromise.js generate.js iif.js interval.js merge.js never.js of.js onErrorResumeNext.js pairs.js partition.js race.js range.js throwError.js timer.js using.js zip.js operators audit.js auditTime.js buffer.js bufferCount.js bufferTime.js bufferToggle.js bufferWhen.js catchError.js combineAll.js combineLatest.js concat.js concatAll.js concatMap.js concatMapTo.js count.js debounce.js debounceTime.js defaultIfEmpty.js delay.js delayWhen.js dematerialize.js distinct.js distinctUntilChanged.js distinctUntilKeyChanged.js elementAt.js endWith.js every.js exhaust.js exhaustMap.js expand.js filter.js finalize.js find.js findIndex.js first.js groupBy.js ignoreElements.js index.js isEmpty.js last.js map.js mapTo.js materialize.js max.js merge.js mergeAll.js mergeMap.js mergeMapTo.js mergeScan.js min.js multicast.js observeOn.js onErrorResumeNext.js pairwise.js partition.js pluck.js publish.js publishBehavior.js publishLast.js publishReplay.js race.js reduce.js refCount.js repeat.js repeatWhen.js retry.js retryWhen.js sample.js sampleTime.js scan.js sequenceEqual.js share.js shareReplay.js single.js skip.js skipLast.js skipUntil.js skipWhile.js startWith.js subscribeOn.js switchAll.js switchMap.js switchMapTo.js take.js takeLast.js takeUntil.js takeWhile.js tap.js throttle.js throttleTime.js throwIfEmpty.js timeInterval.js timeout.js timeoutWith.js timestamp.js toArray.js window.js windowCount.js windowTime.js windowToggle.js windowWhen.js withLatestFrom.js zip.js zipAll.js scheduled scheduleArray.js scheduleIterable.js scheduleObservable.js schedulePromise.js scheduled.js scheduler Action.js AnimationFrameAction.js AnimationFrameScheduler.js AsapAction.js AsapScheduler.js AsyncAction.js AsyncScheduler.js QueueAction.js QueueScheduler.js 
VirtualTimeScheduler.js animationFrame.js asap.js async.js queue.js symbol iterator.js observable.js rxSubscriber.js testing ColdObservable.js HotObservable.js SubscriptionLog.js SubscriptionLoggable.js TestMessage.js TestScheduler.js types.js util ArgumentOutOfRangeError.js EmptyError.js Immediate.js ObjectUnsubscribedError.js TimeoutError.js UnsubscriptionError.js applyMixins.js canReportError.js errorObject.js hostReportError.js identity.js isArray.js isArrayLike.js isDate.js isFunction.js isInteropObservable.js isIterable.js isNumeric.js isObject.js isObservable.js isPromise.js isScheduler.js noop.js not.js pipe.js root.js subscribeTo.js subscribeToArray.js subscribeToIterable.js subscribeToObservable.js subscribeToPromise.js subscribeToResult.js toSubscriber.js tryCatch.js operators index.js path-mapping.js testing index.js webSocket index.js _esm5 LICENSE.txt README.md ajax index.js fetch index.js index.js internal-compatibility index.js internal AsyncSubject.js BehaviorSubject.js InnerSubscriber.js Notification.js Observable.js Observer.js Operator.js OuterSubscriber.js ReplaySubject.js Rx.js Scheduler.js Subject.js SubjectSubscription.js Subscriber.js Subscription.js config.js innerSubscribe.js observable ConnectableObservable.js SubscribeOnObservable.js bindCallback.js bindNodeCallback.js combineLatest.js concat.js defer.js dom AjaxObservable.js WebSocketSubject.js ajax.js fetch.js webSocket.js empty.js forkJoin.js from.js fromArray.js fromEvent.js fromEventPattern.js fromIterable.js fromPromise.js generate.js iif.js interval.js merge.js never.js of.js onErrorResumeNext.js pairs.js partition.js race.js range.js throwError.js timer.js using.js zip.js operators audit.js auditTime.js buffer.js bufferCount.js bufferTime.js bufferToggle.js bufferWhen.js catchError.js combineAll.js combineLatest.js concat.js concatAll.js concatMap.js concatMapTo.js count.js debounce.js debounceTime.js defaultIfEmpty.js delay.js delayWhen.js dematerialize.js distinct.js distinctUntilChanged.js distinctUntilKeyChanged.js elementAt.js endWith.js every.js exhaust.js exhaustMap.js expand.js filter.js finalize.js find.js findIndex.js first.js groupBy.js ignoreElements.js index.js isEmpty.js last.js map.js mapTo.js materialize.js max.js merge.js mergeAll.js mergeMap.js mergeMapTo.js mergeScan.js min.js multicast.js observeOn.js onErrorResumeNext.js pairwise.js partition.js pluck.js publish.js publishBehavior.js publishLast.js publishReplay.js race.js reduce.js refCount.js repeat.js repeatWhen.js retry.js retryWhen.js sample.js sampleTime.js scan.js sequenceEqual.js share.js shareReplay.js single.js skip.js skipLast.js skipUntil.js skipWhile.js startWith.js subscribeOn.js switchAll.js switchMap.js switchMapTo.js take.js takeLast.js takeUntil.js takeWhile.js tap.js throttle.js throttleTime.js throwIfEmpty.js timeInterval.js timeout.js timeoutWith.js timestamp.js toArray.js window.js windowCount.js windowTime.js windowToggle.js windowWhen.js withLatestFrom.js zip.js zipAll.js scheduled scheduleArray.js scheduleIterable.js scheduleObservable.js schedulePromise.js scheduled.js scheduler Action.js AnimationFrameAction.js AnimationFrameScheduler.js AsapAction.js AsapScheduler.js AsyncAction.js AsyncScheduler.js QueueAction.js QueueScheduler.js VirtualTimeScheduler.js animationFrame.js asap.js async.js queue.js symbol iterator.js observable.js rxSubscriber.js testing ColdObservable.js HotObservable.js SubscriptionLog.js SubscriptionLoggable.js TestMessage.js TestScheduler.js types.js util ArgumentOutOfRangeError.js 
EmptyError.js Immediate.js ObjectUnsubscribedError.js TimeoutError.js UnsubscriptionError.js applyMixins.js canReportError.js errorObject.js hostReportError.js identity.js isArray.js isArrayLike.js isDate.js isFunction.js isInteropObservable.js isIterable.js isNumeric.js isObject.js isObservable.js isPromise.js isScheduler.js noop.js not.js pipe.js root.js subscribeTo.js subscribeToArray.js subscribeToIterable.js subscribeToObservable.js subscribeToPromise.js subscribeToResult.js toSubscriber.js tryCatch.js operators index.js path-mapping.js testing index.js webSocket index.js add observable bindCallback.d.ts bindCallback.js bindNodeCallback.d.ts bindNodeCallback.js combineLatest.d.ts combineLatest.js concat.d.ts concat.js defer.d.ts defer.js dom ajax.d.ts ajax.js webSocket.d.ts webSocket.js empty.d.ts empty.js forkJoin.d.ts forkJoin.js from.d.ts from.js fromEvent.d.ts fromEvent.js fromEventPattern.d.ts fromEventPattern.js fromPromise.d.ts fromPromise.js generate.d.ts generate.js if.d.ts if.js interval.d.ts interval.js merge.d.ts merge.js never.d.ts never.js of.d.ts of.js onErrorResumeNext.d.ts onErrorResumeNext.js pairs.d.ts pairs.js race.d.ts race.js range.d.ts range.js throw.d.ts throw.js timer.d.ts timer.js using.d.ts using.js zip.d.ts zip.js operator audit.d.ts audit.js auditTime.d.ts auditTime.js buffer.d.ts buffer.js bufferCount.d.ts bufferCount.js bufferTime.d.ts bufferTime.js bufferToggle.d.ts bufferToggle.js bufferWhen.d.ts bufferWhen.js catch.d.ts catch.js combineAll.d.ts combineAll.js combineLatest.d.ts combineLatest.js concat.d.ts concat.js concatAll.d.ts concatAll.js concatMap.d.ts concatMap.js concatMapTo.d.ts concatMapTo.js count.d.ts count.js debounce.d.ts debounce.js debounceTime.d.ts debounceTime.js defaultIfEmpty.d.ts defaultIfEmpty.js delay.d.ts delay.js delayWhen.d.ts delayWhen.js dematerialize.d.ts dematerialize.js distinct.d.ts distinct.js distinctUntilChanged.d.ts distinctUntilChanged.js distinctUntilKeyChanged.d.ts distinctUntilKeyChanged.js do.d.ts do.js elementAt.d.ts elementAt.js every.d.ts every.js exhaust.d.ts exhaust.js exhaustMap.d.ts exhaustMap.js expand.d.ts expand.js filter.d.ts filter.js finally.d.ts finally.js find.d.ts find.js findIndex.d.ts findIndex.js first.d.ts first.js groupBy.d.ts groupBy.js ignoreElements.d.ts ignoreElements.js isEmpty.d.ts isEmpty.js last.d.ts last.js let.d.ts let.js map.d.ts map.js mapTo.d.ts mapTo.js materialize.d.ts materialize.js max.d.ts max.js merge.d.ts merge.js mergeAll.d.ts mergeAll.js mergeMap.d.ts mergeMap.js mergeMapTo.d.ts mergeMapTo.js mergeScan.d.ts mergeScan.js min.d.ts min.js multicast.d.ts multicast.js observeOn.d.ts observeOn.js onErrorResumeNext.d.ts onErrorResumeNext.js pairwise.d.ts pairwise.js partition.d.ts partition.js pluck.d.ts pluck.js publish.d.ts publish.js publishBehavior.d.ts publishBehavior.js publishLast.d.ts publishLast.js publishReplay.d.ts publishReplay.js race.d.ts race.js reduce.d.ts reduce.js repeat.d.ts repeat.js repeatWhen.d.ts repeatWhen.js retry.d.ts retry.js retryWhen.d.ts retryWhen.js sample.d.ts sample.js sampleTime.d.ts sampleTime.js scan.d.ts scan.js sequenceEqual.d.ts sequenceEqual.js share.d.ts share.js shareReplay.d.ts shareReplay.js single.d.ts single.js skip.d.ts skip.js skipLast.d.ts skipLast.js skipUntil.d.ts skipUntil.js skipWhile.d.ts skipWhile.js startWith.d.ts startWith.js subscribeOn.d.ts subscribeOn.js switch.d.ts switch.js switchMap.d.ts switchMap.js switchMapTo.d.ts switchMapTo.js take.d.ts take.js takeLast.d.ts takeLast.js takeUntil.d.ts takeUntil.js 
takeWhile.d.ts takeWhile.js throttle.d.ts throttle.js throttleTime.d.ts throttleTime.js timeInterval.d.ts timeInterval.js timeout.d.ts timeout.js timeoutWith.d.ts timeoutWith.js timestamp.d.ts timestamp.js toArray.d.ts toArray.js toPromise.d.ts toPromise.js window.d.ts window.js windowCount.d.ts windowCount.js windowTime.d.ts windowTime.js windowToggle.d.ts windowToggle.js windowWhen.d.ts windowWhen.js withLatestFrom.d.ts withLatestFrom.js zip.d.ts zip.js zipAll.d.ts zipAll.js ajax index.d.ts index.js package.json bundles rxjs.umd.js rxjs.umd.min.js fetch index.d.ts index.js package.json index.d.ts index.js interfaces.d.ts interfaces.js internal-compatibility index.d.ts index.js package.json internal AsyncSubject.d.ts AsyncSubject.js BehaviorSubject.d.ts BehaviorSubject.js InnerSubscriber.d.ts InnerSubscriber.js Notification.d.ts Notification.js Observable.d.ts Observable.js Observer.d.ts Observer.js Operator.d.ts Operator.js OuterSubscriber.d.ts OuterSubscriber.js ReplaySubject.d.ts ReplaySubject.js Rx.d.ts Rx.js Scheduler.d.ts Scheduler.js Subject.d.ts Subject.js SubjectSubscription.d.ts SubjectSubscription.js Subscriber.d.ts Subscriber.js Subscription.d.ts Subscription.js config.d.ts config.js innerSubscribe.d.ts innerSubscribe.js observable ConnectableObservable.d.ts ConnectableObservable.js SubscribeOnObservable.d.ts SubscribeOnObservable.js bindCallback.d.ts bindCallback.js bindNodeCallback.d.ts bindNodeCallback.js combineLatest.d.ts combineLatest.js concat.d.ts concat.js defer.d.ts defer.js dom AjaxObservable.d.ts AjaxObservable.js WebSocketSubject.d.ts WebSocketSubject.js ajax.d.ts ajax.js fetch.d.ts fetch.js webSocket.d.ts webSocket.js empty.d.ts empty.js forkJoin.d.ts forkJoin.js from.d.ts from.js fromArray.d.ts fromArray.js fromEvent.d.ts fromEvent.js fromEventPattern.d.ts fromEventPattern.js fromIterable.d.ts fromIterable.js fromPromise.d.ts fromPromise.js generate.d.ts generate.js iif.d.ts iif.js interval.d.ts interval.js merge.d.ts merge.js never.d.ts never.js of.d.ts of.js onErrorResumeNext.d.ts onErrorResumeNext.js pairs.d.ts pairs.js partition.d.ts partition.js race.d.ts race.js range.d.ts range.js throwError.d.ts throwError.js timer.d.ts timer.js using.d.ts using.js zip.d.ts zip.js operators audit.d.ts audit.js auditTime.d.ts auditTime.js buffer.d.ts buffer.js bufferCount.d.ts bufferCount.js bufferTime.d.ts bufferTime.js bufferToggle.d.ts bufferToggle.js bufferWhen.d.ts bufferWhen.js catchError.d.ts catchError.js combineAll.d.ts combineAll.js combineLatest.d.ts combineLatest.js concat.d.ts concat.js concatAll.d.ts concatAll.js concatMap.d.ts concatMap.js concatMapTo.d.ts concatMapTo.js count.d.ts count.js debounce.d.ts debounce.js debounceTime.d.ts debounceTime.js defaultIfEmpty.d.ts defaultIfEmpty.js delay.d.ts delay.js delayWhen.d.ts delayWhen.js dematerialize.d.ts dematerialize.js distinct.d.ts distinct.js distinctUntilChanged.d.ts distinctUntilChanged.js distinctUntilKeyChanged.d.ts distinctUntilKeyChanged.js elementAt.d.ts elementAt.js endWith.d.ts endWith.js every.d.ts every.js exhaust.d.ts exhaust.js exhaustMap.d.ts exhaustMap.js expand.d.ts expand.js filter.d.ts filter.js finalize.d.ts finalize.js find.d.ts find.js findIndex.d.ts findIndex.js first.d.ts first.js groupBy.d.ts groupBy.js ignoreElements.d.ts ignoreElements.js index.d.ts index.js isEmpty.d.ts isEmpty.js last.d.ts last.js map.d.ts map.js mapTo.d.ts mapTo.js materialize.d.ts materialize.js max.d.ts max.js merge.d.ts merge.js mergeAll.d.ts mergeAll.js mergeMap.d.ts mergeMap.js mergeMapTo.d.ts 
mergeMapTo.js mergeScan.d.ts mergeScan.js min.d.ts min.js multicast.d.ts multicast.js observeOn.d.ts observeOn.js onErrorResumeNext.d.ts onErrorResumeNext.js pairwise.d.ts pairwise.js partition.d.ts partition.js pluck.d.ts pluck.js publish.d.ts publish.js publishBehavior.d.ts publishBehavior.js publishLast.d.ts publishLast.js publishReplay.d.ts publishReplay.js race.d.ts race.js reduce.d.ts reduce.js refCount.d.ts refCount.js repeat.d.ts repeat.js repeatWhen.d.ts repeatWhen.js retry.d.ts retry.js retryWhen.d.ts retryWhen.js sample.d.ts sample.js sampleTime.d.ts sampleTime.js scan.d.ts scan.js sequenceEqual.d.ts sequenceEqual.js share.d.ts share.js shareReplay.d.ts shareReplay.js single.d.ts single.js skip.d.ts skip.js skipLast.d.ts skipLast.js skipUntil.d.ts skipUntil.js skipWhile.d.ts skipWhile.js startWith.d.ts startWith.js subscribeOn.d.ts subscribeOn.js switchAll.d.ts switchAll.js switchMap.d.ts switchMap.js switchMapTo.d.ts switchMapTo.js take.d.ts take.js takeLast.d.ts takeLast.js takeUntil.d.ts takeUntil.js takeWhile.d.ts takeWhile.js tap.d.ts tap.js throttle.d.ts throttle.js throttleTime.d.ts throttleTime.js throwIfEmpty.d.ts throwIfEmpty.js timeInterval.d.ts timeInterval.js timeout.d.ts timeout.js timeoutWith.d.ts timeoutWith.js timestamp.d.ts timestamp.js toArray.d.ts toArray.js window.d.ts window.js windowCount.d.ts windowCount.js windowTime.d.ts windowTime.js windowToggle.d.ts windowToggle.js windowWhen.d.ts windowWhen.js withLatestFrom.d.ts withLatestFrom.js zip.d.ts zip.js zipAll.d.ts zipAll.js scheduled scheduleArray.d.ts scheduleArray.js scheduleIterable.d.ts scheduleIterable.js scheduleObservable.d.ts scheduleObservable.js schedulePromise.d.ts schedulePromise.js scheduled.d.ts scheduled.js scheduler Action.d.ts Action.js AnimationFrameAction.d.ts AnimationFrameAction.js AnimationFrameScheduler.d.ts AnimationFrameScheduler.js AsapAction.d.ts AsapAction.js AsapScheduler.d.ts AsapScheduler.js AsyncAction.d.ts AsyncAction.js AsyncScheduler.d.ts AsyncScheduler.js QueueAction.d.ts QueueAction.js QueueScheduler.d.ts QueueScheduler.js VirtualTimeScheduler.d.ts VirtualTimeScheduler.js animationFrame.d.ts animationFrame.js asap.d.ts asap.js async.d.ts async.js queue.d.ts queue.js symbol iterator.d.ts iterator.js observable.d.ts observable.js rxSubscriber.d.ts rxSubscriber.js testing ColdObservable.d.ts ColdObservable.js HotObservable.d.ts HotObservable.js SubscriptionLog.d.ts SubscriptionLog.js SubscriptionLoggable.d.ts SubscriptionLoggable.js TestMessage.d.ts TestMessage.js TestScheduler.d.ts TestScheduler.js types.d.ts types.js util ArgumentOutOfRangeError.d.ts ArgumentOutOfRangeError.js EmptyError.d.ts EmptyError.js Immediate.d.ts Immediate.js ObjectUnsubscribedError.d.ts ObjectUnsubscribedError.js TimeoutError.d.ts TimeoutError.js UnsubscriptionError.d.ts UnsubscriptionError.js applyMixins.d.ts applyMixins.js canReportError.d.ts canReportError.js errorObject.d.ts errorObject.js hostReportError.d.ts hostReportError.js identity.d.ts identity.js isArray.d.ts isArray.js isArrayLike.d.ts isArrayLike.js isDate.d.ts isDate.js isFunction.d.ts isFunction.js isInteropObservable.d.ts isInteropObservable.js isIterable.d.ts isIterable.js isNumeric.d.ts isNumeric.js isObject.d.ts isObject.js isObservable.d.ts isObservable.js isPromise.d.ts isPromise.js isScheduler.d.ts isScheduler.js noop.d.ts noop.js not.d.ts not.js pipe.d.ts pipe.js root.d.ts root.js subscribeTo.d.ts subscribeTo.js subscribeToArray.d.ts subscribeToArray.js subscribeToIterable.d.ts subscribeToIterable.js 
subscribeToObservable.d.ts subscribeToObservable.js subscribeToPromise.d.ts subscribeToPromise.js subscribeToResult.d.ts subscribeToResult.js toSubscriber.d.ts toSubscriber.js tryCatch.d.ts tryCatch.js migrations collection.json update-6_0_0 index.js observable ArrayLikeObservable.d.ts ArrayLikeObservable.js ArrayObservable.d.ts ArrayObservable.js BoundCallbackObservable.d.ts BoundCallbackObservable.js BoundNodeCallbackObservable.d.ts BoundNodeCallbackObservable.js ConnectableObservable.d.ts ConnectableObservable.js DeferObservable.d.ts DeferObservable.js EmptyObservable.d.ts EmptyObservable.js ErrorObservable.d.ts ErrorObservable.js ForkJoinObservable.d.ts ForkJoinObservable.js FromEventObservable.d.ts FromEventObservable.js FromEventPatternObservable.d.ts FromEventPatternObservable.js FromObservable.d.ts FromObservable.js GenerateObservable.d.ts GenerateObservable.js IfObservable.d.ts IfObservable.js IntervalObservable.d.ts IntervalObservable.js IteratorObservable.d.ts IteratorObservable.js NeverObservable.d.ts NeverObservable.js PairsObservable.d.ts PairsObservable.js PromiseObservable.d.ts PromiseObservable.js RangeObservable.d.ts RangeObservable.js ScalarObservable.d.ts ScalarObservable.js SubscribeOnObservable.d.ts SubscribeOnObservable.js TimerObservable.d.ts TimerObservable.js UsingObservable.d.ts UsingObservable.js bindCallback.d.ts bindCallback.js bindNodeCallback.d.ts bindNodeCallback.js combineLatest.d.ts combineLatest.js concat.d.ts concat.js defer.d.ts defer.js dom AjaxObservable.d.ts AjaxObservable.js WebSocketSubject.d.ts WebSocketSubject.js ajax.d.ts ajax.js webSocket.d.ts webSocket.js empty.d.ts empty.js forkJoin.d.ts forkJoin.js from.d.ts from.js fromArray.d.ts fromArray.js fromEvent.d.ts fromEvent.js fromEventPattern.d.ts fromEventPattern.js fromIterable.d.ts fromIterable.js fromPromise.d.ts fromPromise.js generate.d.ts generate.js if.d.ts if.js interval.d.ts interval.js merge.d.ts merge.js never.d.ts never.js of.d.ts of.js onErrorResumeNext.d.ts onErrorResumeNext.js pairs.d.ts pairs.js race.d.ts race.js range.d.ts range.js throw.d.ts throw.js timer.d.ts timer.js using.d.ts using.js zip.d.ts zip.js operator audit.d.ts audit.js auditTime.d.ts auditTime.js buffer.d.ts buffer.js bufferCount.d.ts bufferCount.js bufferTime.d.ts bufferTime.js bufferToggle.d.ts bufferToggle.js bufferWhen.d.ts bufferWhen.js catch.d.ts catch.js combineAll.d.ts combineAll.js combineLatest.d.ts combineLatest.js concat.d.ts concat.js concatAll.d.ts concatAll.js concatMap.d.ts concatMap.js concatMapTo.d.ts concatMapTo.js count.d.ts count.js debounce.d.ts debounce.js debounceTime.d.ts debounceTime.js defaultIfEmpty.d.ts defaultIfEmpty.js delay.d.ts delay.js delayWhen.d.ts delayWhen.js dematerialize.d.ts dematerialize.js distinct.d.ts distinct.js distinctUntilChanged.d.ts distinctUntilChanged.js distinctUntilKeyChanged.d.ts distinctUntilKeyChanged.js do.d.ts do.js elementAt.d.ts elementAt.js every.d.ts every.js exhaust.d.ts exhaust.js exhaustMap.d.ts exhaustMap.js expand.d.ts expand.js filter.d.ts filter.js finally.d.ts finally.js find.d.ts find.js findIndex.d.ts findIndex.js first.d.ts first.js groupBy.d.ts groupBy.js ignoreElements.d.ts ignoreElements.js isEmpty.d.ts isEmpty.js last.d.ts last.js let.d.ts let.js map.d.ts map.js mapTo.d.ts mapTo.js materialize.d.ts materialize.js max.d.ts max.js merge.d.ts merge.js mergeAll.d.ts mergeAll.js mergeMap.d.ts mergeMap.js mergeMapTo.d.ts mergeMapTo.js mergeScan.d.ts mergeScan.js min.d.ts min.js multicast.d.ts multicast.js observeOn.d.ts observeOn.js 
onErrorResumeNext.d.ts onErrorResumeNext.js pairwise.d.ts pairwise.js partition.d.ts partition.js pluck.d.ts pluck.js publish.d.ts publish.js publishBehavior.d.ts publishBehavior.js publishLast.d.ts publishLast.js publishReplay.d.ts publishReplay.js race.d.ts race.js reduce.d.ts reduce.js repeat.d.ts repeat.js repeatWhen.d.ts repeatWhen.js retry.d.ts retry.js retryWhen.d.ts retryWhen.js sample.d.ts sample.js sampleTime.d.ts sampleTime.js scan.d.ts scan.js sequenceEqual.d.ts sequenceEqual.js share.d.ts share.js shareReplay.d.ts shareReplay.js single.d.ts single.js skip.d.ts skip.js skipLast.d.ts skipLast.js skipUntil.d.ts skipUntil.js skipWhile.d.ts skipWhile.js startWith.d.ts startWith.js subscribeOn.d.ts subscribeOn.js switch.d.ts switch.js switchMap.d.ts switchMap.js switchMapTo.d.ts switchMapTo.js take.d.ts take.js takeLast.d.ts takeLast.js takeUntil.d.ts takeUntil.js takeWhile.d.ts takeWhile.js throttle.d.ts throttle.js throttleTime.d.ts throttleTime.js timeInterval.d.ts timeInterval.js timeout.d.ts timeout.js timeoutWith.d.ts timeoutWith.js timestamp.d.ts timestamp.js toArray.d.ts toArray.js toPromise.d.ts toPromise.js window.d.ts window.js windowCount.d.ts windowCount.js windowTime.d.ts windowTime.js windowToggle.d.ts windowToggle.js windowWhen.d.ts windowWhen.js withLatestFrom.d.ts withLatestFrom.js zip.d.ts zip.js zipAll.d.ts zipAll.js operators audit.d.ts audit.js auditTime.d.ts auditTime.js buffer.d.ts buffer.js bufferCount.d.ts bufferCount.js bufferTime.d.ts bufferTime.js bufferToggle.d.ts bufferToggle.js bufferWhen.d.ts bufferWhen.js catchError.d.ts catchError.js combineAll.d.ts combineAll.js combineLatest.d.ts combineLatest.js concat.d.ts concat.js concatAll.d.ts concatAll.js concatMap.d.ts concatMap.js concatMapTo.d.ts concatMapTo.js count.d.ts count.js debounce.d.ts debounce.js debounceTime.d.ts debounceTime.js defaultIfEmpty.d.ts defaultIfEmpty.js delay.d.ts delay.js delayWhen.d.ts delayWhen.js dematerialize.d.ts dematerialize.js distinct.d.ts distinct.js distinctUntilChanged.d.ts distinctUntilChanged.js distinctUntilKeyChanged.d.ts distinctUntilKeyChanged.js elementAt.d.ts elementAt.js every.d.ts every.js exhaust.d.ts exhaust.js exhaustMap.d.ts exhaustMap.js expand.d.ts expand.js filter.d.ts filter.js finalize.d.ts finalize.js find.d.ts find.js findIndex.d.ts findIndex.js first.d.ts first.js groupBy.d.ts groupBy.js ignoreElements.d.ts ignoreElements.js index.d.ts index.js isEmpty.d.ts isEmpty.js last.d.ts last.js map.d.ts map.js mapTo.d.ts mapTo.js materialize.d.ts materialize.js max.d.ts max.js merge.d.ts merge.js mergeAll.d.ts mergeAll.js mergeMap.d.ts mergeMap.js mergeMapTo.d.ts mergeMapTo.js mergeScan.d.ts mergeScan.js min.d.ts min.js multicast.d.ts multicast.js observeOn.d.ts observeOn.js onErrorResumeNext.d.ts onErrorResumeNext.js package.json pairwise.d.ts pairwise.js partition.d.ts partition.js pluck.d.ts pluck.js publish.d.ts publish.js publishBehavior.d.ts publishBehavior.js publishLast.d.ts publishLast.js publishReplay.d.ts publishReplay.js race.d.ts race.js reduce.d.ts reduce.js refCount.d.ts refCount.js repeat.d.ts repeat.js repeatWhen.d.ts repeatWhen.js retry.d.ts retry.js retryWhen.d.ts retryWhen.js sample.d.ts sample.js sampleTime.d.ts sampleTime.js scan.d.ts scan.js sequenceEqual.d.ts sequenceEqual.js share.d.ts share.js shareReplay.d.ts shareReplay.js single.d.ts single.js skip.d.ts skip.js skipLast.d.ts skipLast.js skipUntil.d.ts skipUntil.js skipWhile.d.ts skipWhile.js startWith.d.ts startWith.js subscribeOn.d.ts subscribeOn.js switchAll.d.ts 
switchAll.js switchMap.d.ts switchMap.js switchMapTo.d.ts switchMapTo.js take.d.ts take.js takeLast.d.ts takeLast.js takeUntil.d.ts takeUntil.js takeWhile.d.ts takeWhile.js tap.d.ts tap.js throttle.d.ts throttle.js throttleTime.d.ts throttleTime.js throwIfEmpty.d.ts throwIfEmpty.js timeInterval.d.ts timeInterval.js timeout.d.ts timeout.js timeoutWith.d.ts timeoutWith.js timestamp.d.ts timestamp.js toArray.d.ts toArray.js window.d.ts window.js windowCount.d.ts windowCount.js windowTime.d.ts windowTime.js windowToggle.d.ts windowToggle.js windowWhen.d.ts windowWhen.js withLatestFrom.d.ts withLatestFrom.js zip.d.ts zip.js zipAll.d.ts zipAll.js package.json scheduler animationFrame.d.ts animationFrame.js asap.d.ts asap.js async.d.ts async.js queue.d.ts queue.js src AsyncSubject.ts BehaviorSubject.ts InnerSubscriber.ts LICENSE.txt MiscJSDoc.ts Notification.ts Observable.ts Observer.ts Operator.ts OuterSubscriber.ts README.md ReplaySubject.ts Rx.global.js Rx.ts Scheduler.ts Subject.ts SubjectSubscription.ts Subscriber.ts Subscription.ts add observable bindCallback.ts bindNodeCallback.ts combineLatest.ts concat.ts defer.ts dom ajax.ts webSocket.ts empty.ts forkJoin.ts from.ts fromEvent.ts fromEventPattern.ts fromPromise.ts generate.ts if.ts interval.ts merge.ts never.ts of.ts onErrorResumeNext.ts pairs.ts race.ts range.ts throw.ts timer.ts using.ts zip.ts operator audit.ts auditTime.ts buffer.ts bufferCount.ts bufferTime.ts bufferToggle.ts bufferWhen.ts catch.ts combineAll.ts combineLatest.ts concat.ts concatAll.ts concatMap.ts concatMapTo.ts count.ts debounce.ts debounceTime.ts defaultIfEmpty.ts delay.ts delayWhen.ts dematerialize.ts distinct.ts distinctUntilChanged.ts distinctUntilKeyChanged.ts do.ts elementAt.ts every.ts exhaust.ts exhaustMap.ts expand.ts filter.ts finally.ts find.ts findIndex.ts first.ts groupBy.ts ignoreElements.ts isEmpty.ts last.ts let.ts map.ts mapTo.ts materialize.ts max.ts merge.ts mergeAll.ts mergeMap.ts mergeMapTo.ts mergeScan.ts min.ts multicast.ts observeOn.ts onErrorResumeNext.ts pairwise.ts partition.ts pluck.ts publish.ts publishBehavior.ts publishLast.ts publishReplay.ts race.ts reduce.ts repeat.ts repeatWhen.ts retry.ts retryWhen.ts sample.ts sampleTime.ts scan.ts sequenceEqual.ts share.ts shareReplay.ts single.ts skip.ts skipLast.ts skipUntil.ts skipWhile.ts startWith.ts subscribeOn.ts switch.ts switchMap.ts switchMapTo.ts take.ts takeLast.ts takeUntil.ts takeWhile.ts throttle.ts throttleTime.ts timeInterval.ts timeout.ts timeoutWith.ts timestamp.ts toArray.ts toPromise.ts window.ts windowCount.ts windowTime.ts windowToggle.ts windowWhen.ts withLatestFrom.ts zip.ts zipAll.ts ajax index.ts package.json fetch index.ts package.json index.ts interfaces.ts internal-compatibility index.ts package.json internal AsyncSubject.ts BehaviorSubject.ts InnerSubscriber.ts Notification.ts Observable.ts Observer.ts Operator.ts OuterSubscriber.ts ReplaySubject.ts Rx.ts Scheduler.ts Subject.ts SubjectSubscription.ts Subscriber.ts Subscription.ts config.ts innerSubscribe.ts observable ConnectableObservable.ts SubscribeOnObservable.ts bindCallback.ts bindNodeCallback.ts combineLatest.ts concat.ts defer.ts dom AjaxObservable.ts MiscJSDoc.ts WebSocketSubject.ts ajax.ts fetch.ts webSocket.ts empty.ts forkJoin.ts from.ts fromArray.ts fromEvent.ts fromEventPattern.ts fromIterable.ts fromObservable.ts fromPromise.ts generate.ts iif.ts interval.ts merge.ts never.ts of.ts onErrorResumeNext.ts pairs.ts partition.ts race.ts range.ts throwError.ts timer.ts using.ts zip.ts operators audit.ts 
auditTime.ts buffer.ts bufferCount.ts bufferTime.ts bufferToggle.ts bufferWhen.ts catchError.ts combineAll.ts combineLatest.ts concat.ts concatAll.ts concatMap.ts concatMapTo.ts count.ts debounce.ts debounceTime.ts defaultIfEmpty.ts delay.ts delayWhen.ts dematerialize.ts distinct.ts distinctUntilChanged.ts distinctUntilKeyChanged.ts elementAt.ts endWith.ts every.ts exhaust.ts exhaustMap.ts expand.ts filter.ts finalize.ts find.ts findIndex.ts first.ts groupBy.ts ignoreElements.ts index.ts isEmpty.ts last.ts map.ts mapTo.ts materialize.ts max.ts merge.ts mergeAll.ts mergeMap.ts mergeMapTo.ts mergeScan.ts min.ts multicast.ts observeOn.ts onErrorResumeNext.ts pairwise.ts partition.ts pluck.ts publish.ts publishBehavior.ts publishLast.ts publishReplay.ts race.ts reduce.ts refCount.ts repeat.ts repeatWhen.ts retry.ts retryWhen.ts sample.ts sampleTime.ts scan.ts sequenceEqual.ts share.ts shareReplay.ts single.ts skip.ts skipLast.ts skipUntil.ts skipWhile.ts startWith.ts subscribeOn.ts switchAll.ts switchMap.ts switchMapTo.ts take.ts takeLast.ts takeUntil.ts takeWhile.ts tap.ts throttle.ts throttleTime.ts throwIfEmpty.ts timeInterval.ts timeout.ts timeoutWith.ts timestamp.ts toArray.ts window.ts windowCount.ts windowTime.ts windowToggle.ts windowWhen.ts withLatestFrom.ts zip.ts zipAll.ts scheduled scheduleArray.ts scheduleIterable.ts scheduleObservable.ts schedulePromise.ts scheduled.ts scheduler Action.ts AnimationFrameAction.ts AnimationFrameScheduler.ts AsapAction.ts AsapScheduler.ts AsyncAction.ts AsyncScheduler.ts QueueAction.ts QueueScheduler.ts VirtualTimeScheduler.ts animationFrame.ts asap.ts async.ts queue.ts symbol iterator.ts observable.ts rxSubscriber.ts testing ColdObservable.ts HotObservable.ts SubscriptionLog.ts SubscriptionLoggable.ts TestMessage.ts TestScheduler.ts types.ts umd.ts util ArgumentOutOfRangeError.ts EmptyError.ts Immediate.ts ObjectUnsubscribedError.ts TimeoutError.ts UnsubscriptionError.ts applyMixins.ts canReportError.ts errorObject.ts hostReportError.ts identity.ts isArray.ts isArrayLike.ts isDate.ts isFunction.ts isInteropObservable.ts isIterable.ts isNumeric.ts isObject.ts isObservable.ts isPromise.ts isScheduler.ts noop.ts not.ts pipe.ts root.ts subscribeTo.ts subscribeToArray.ts subscribeToIterable.ts subscribeToObservable.ts subscribeToPromise.ts subscribeToResult.ts toSubscriber.ts tryCatch.ts observable ArrayLikeObservable.ts ArrayObservable.ts BoundCallbackObservable.ts BoundNodeCallbackObservable.ts ConnectableObservable.ts DeferObservable.ts EmptyObservable.ts ErrorObservable.ts ForkJoinObservable.ts FromEventObservable.ts FromEventPatternObservable.ts FromObservable.ts GenerateObservable.ts IfObservable.ts IntervalObservable.ts IteratorObservable.ts NeverObservable.ts PairsObservable.ts PromiseObservable.ts RangeObservable.ts ScalarObservable.ts SubscribeOnObservable.ts TimerObservable.ts UsingObservable.ts bindCallback.ts bindNodeCallback.ts combineLatest.ts concat.ts defer.ts dom AjaxObservable.ts WebSocketSubject.ts ajax.ts webSocket.ts empty.ts forkJoin.ts from.ts fromArray.ts fromEvent.ts fromEventPattern.ts fromIterable.ts fromPromise.ts generate.ts if.ts interval.ts merge.ts never.ts of.ts onErrorResumeNext.ts pairs.ts race.ts range.ts throw.ts timer.ts using.ts zip.ts operator audit.ts auditTime.ts buffer.ts bufferCount.ts bufferTime.ts bufferToggle.ts bufferWhen.ts catch.ts combineAll.ts combineLatest.ts concat.ts concatAll.ts concatMap.ts concatMapTo.ts count.ts debounce.ts debounceTime.ts defaultIfEmpty.ts delay.ts delayWhen.ts 
dematerialize.ts distinct.ts distinctUntilChanged.ts distinctUntilKeyChanged.ts do.ts elementAt.ts every.ts exhaust.ts exhaustMap.ts expand.ts filter.ts finally.ts find.ts findIndex.ts first.ts groupBy.ts ignoreElements.ts isEmpty.ts last.ts let.ts map.ts mapTo.ts materialize.ts max.ts merge.ts mergeAll.ts mergeMap.ts mergeMapTo.ts mergeScan.ts min.ts multicast.ts observeOn.ts onErrorResumeNext.ts pairwise.ts partition.ts pluck.ts publish.ts publishBehavior.ts publishLast.ts publishReplay.ts race.ts reduce.ts repeat.ts repeatWhen.ts retry.ts retryWhen.ts sample.ts sampleTime.ts scan.ts sequenceEqual.ts share.ts shareReplay.ts single.ts skip.ts skipLast.ts skipUntil.ts skipWhile.ts startWith.ts subscribeOn.ts switch.ts switchMap.ts switchMapTo.ts take.ts takeLast.ts takeUntil.ts takeWhile.ts throttle.ts throttleTime.ts timeInterval.ts timeout.ts timeoutWith.ts timestamp.ts toArray.ts toPromise.ts window.ts windowCount.ts windowTime.ts windowToggle.ts windowWhen.ts withLatestFrom.ts zip.ts zipAll.ts operators audit.ts auditTime.ts buffer.ts bufferCount.ts bufferTime.ts bufferToggle.ts bufferWhen.ts catchError.ts combineAll.ts combineLatest.ts concat.ts concatAll.ts concatMap.ts concatMapTo.ts count.ts debounce.ts debounceTime.ts defaultIfEmpty.ts delay.ts delayWhen.ts dematerialize.ts distinct.ts distinctUntilChanged.ts distinctUntilKeyChanged.ts elementAt.ts every.ts exhaust.ts exhaustMap.ts expand.ts filter.ts finalize.ts find.ts findIndex.ts first.ts groupBy.ts ignoreElements.ts index.ts isEmpty.ts last.ts map.ts mapTo.ts materialize.ts max.ts merge.ts mergeAll.ts mergeMap.ts mergeMapTo.ts mergeScan.ts min.ts multicast.ts observeOn.ts onErrorResumeNext.ts package.json pairwise.ts partition.ts pluck.ts publish.ts publishBehavior.ts publishLast.ts publishReplay.ts race.ts reduce.ts refCount.ts repeat.ts repeatWhen.ts retry.ts retryWhen.ts sample.ts sampleTime.ts scan.ts sequenceEqual.ts share.ts shareReplay.ts single.ts skip.ts skipLast.ts skipUntil.ts skipWhile.ts startWith.ts subscribeOn.ts switchAll.ts switchMap.ts switchMapTo.ts take.ts takeLast.ts takeUntil.ts takeWhile.ts tap.ts throttle.ts throttleTime.ts throwIfEmpty.ts timeInterval.ts timeout.ts timeoutWith.ts timestamp.ts toArray.ts window.ts windowCount.ts windowTime.ts windowToggle.ts windowWhen.ts withLatestFrom.ts zip.ts zipAll.ts scheduler animationFrame.ts asap.ts async.ts queue.ts symbol iterator.ts observable.ts rxSubscriber.ts testing index.ts package.json tsconfig.json util ArgumentOutOfRangeError.ts EmptyError.ts Immediate.ts ObjectUnsubscribedError.ts TimeoutError.ts UnsubscriptionError.ts applyMixins.ts errorObject.ts hostReportError.ts identity.ts isArray.ts isArrayLike.ts isDate.ts isFunction.ts isIterable.ts isNumeric.ts isObject.ts isObservable.ts isPromise.ts isScheduler.ts noop.ts not.ts pipe.ts root.ts subscribeTo.ts subscribeToArray.ts subscribeToIterable.ts subscribeToObservable.ts subscribeToPromise.ts subscribeToResult.ts toSubscriber.ts tryCatch.ts webSocket index.ts package.json symbol iterator.d.ts iterator.js observable.d.ts observable.js rxSubscriber.d.ts rxSubscriber.js testing index.d.ts index.js package.json util ArgumentOutOfRangeError.d.ts ArgumentOutOfRangeError.js EmptyError.d.ts EmptyError.js Immediate.d.ts Immediate.js ObjectUnsubscribedError.d.ts ObjectUnsubscribedError.js TimeoutError.d.ts TimeoutError.js UnsubscriptionError.d.ts UnsubscriptionError.js applyMixins.d.ts applyMixins.js errorObject.d.ts errorObject.js hostReportError.d.ts hostReportError.js identity.d.ts identity.js 
isArray.d.ts isArray.js isArrayLike.d.ts isArrayLike.js isDate.d.ts isDate.js isFunction.d.ts isFunction.js isIterable.d.ts isIterable.js isNumeric.d.ts isNumeric.js isObject.d.ts isObject.js isObservable.d.ts isObservable.js isPromise.d.ts isPromise.js isScheduler.d.ts isScheduler.js noop.d.ts noop.js not.d.ts not.js pipe.d.ts pipe.js root.d.ts root.js subscribeTo.d.ts subscribeTo.js subscribeToArray.d.ts subscribeToArray.js subscribeToIterable.d.ts subscribeToIterable.js subscribeToObservable.d.ts subscribeToObservable.js subscribeToPromise.d.ts subscribeToPromise.js subscribeToResult.d.ts subscribeToResult.js toSubscriber.d.ts toSubscriber.js tryCatch.d.ts tryCatch.js webSocket index.d.ts index.js package.json safe-buffer README.md index.d.ts index.js package.json semver-diff index.d.ts index.js node_modules .bin semver.cmd semver CHANGELOG.md README.md bin semver.js package.json semver.js package.json readme.md semver README.md bin semver.js classes comparator.js index.js range.js semver.js functions clean.js cmp.js coerce.js compare-build.js compare-loose.js compare.js diff.js eq.js gt.js gte.js inc.js lt.js lte.js major.js minor.js neq.js parse.js patch.js prerelease.js rcompare.js rsort.js satisfies.js sort.js valid.js index.js internal constants.js debug.js identifiers.js parse-options.js re.js package.json preload.js ranges gtr.js intersects.js ltr.js max-satisfying.js min-satisfying.js min-version.js outside.js simplify.js subset.js to-comparators.js valid.js set-blocking CHANGELOG.md LICENSE.txt README.md index.js package.json setprototypeof README.md index.d.ts index.js package.json test index.js sha.js .travis.yml README.md bin.js hash.js index.js package.json sha.js sha1.js sha224.js sha256.js sha384.js sha512.js test hash.js test.js vectors.js shebang-command index.js package.json readme.md shebang-regex index.d.ts index.js package.json readme.md signal-exit LICENSE.txt README.md index.js package.json signals.js simple-concat .travis.yml README.md index.js package.json test basic.js simple-get README.md index.js node_modules decompress-response index.d.ts index.js package.json readme.md mimic-response index.d.ts index.js package.json readme.md package.json slash index.d.ts index.js package.json readme.md slice-ansi index.js package.json readme.md sprintf-js README.md bower.json demo angular.html dist angular-sprintf.min.js sprintf.min.js gruntfile.js package.json src angular-sprintf.js sprintf.js test test.js stack-utils index.js node_modules escape-string-regexp index.d.ts index.js package.json readme.md package.json readme.md statuses HISTORY.md README.md codes.json index.js package.json stoppable lib stoppable.js package.json readme.md string-width index.d.ts index.js package.json readme.md string_decoder README.md lib string_decoder.js package.json strip-ansi index.d.ts index.js package.json readme.md strip-json-comments index.d.ts index.js package.json readme.md supports-color browser.js index.js package.json readme.md table README.md dist src alignSpanningCell.d.ts alignSpanningCell.js alignString.d.ts alignString.js alignTableData.d.ts alignTableData.js calculateCellHeight.d.ts calculateCellHeight.js calculateMaximumColumnWidths.d.ts calculateMaximumColumnWidths.js calculateOutputColumnWidths.d.ts calculateOutputColumnWidths.js calculateRowHeights.d.ts calculateRowHeights.js calculateSpanningCellWidth.d.ts calculateSpanningCellWidth.js createStream.d.ts createStream.js drawBorder.d.ts drawBorder.js drawContent.d.ts drawContent.js drawRow.d.ts drawRow.js drawTable.d.ts 
drawTable.js generated validators.d.ts validators.js getBorderCharacters.d.ts getBorderCharacters.js index.d.ts index.js injectHeaderConfig.d.ts injectHeaderConfig.js makeRangeConfig.d.ts makeRangeConfig.js makeStreamConfig.d.ts makeStreamConfig.js makeTableConfig.d.ts makeTableConfig.js mapDataUsingRowHeights.d.ts mapDataUsingRowHeights.js padTableData.d.ts padTableData.js schemas config.json shared.json streamConfig.json spanningCellManager.d.ts spanningCellManager.js stringifyTableData.d.ts stringifyTableData.js table.d.ts table.js truncateTableData.d.ts truncateTableData.js types api.d.ts api.js internal.d.ts internal.js utils.d.ts utils.js validateConfig.d.ts validateConfig.js validateSpanningCellConfig.d.ts validateSpanningCellConfig.js validateTableData.d.ts validateTableData.js wrapCell.d.ts wrapCell.js wrapString.d.ts wrapString.js wrapWord.d.ts wrapWord.js node_modules ajv .runkit_example.js README.md dist 2019.d.ts 2019.js 2020.d.ts 2020.js ajv.d.ts ajv.js compile codegen code.d.ts code.js index.d.ts index.js scope.d.ts scope.js errors.d.ts errors.js index.d.ts index.js jtd parse.d.ts parse.js serialize.d.ts serialize.js types.d.ts types.js names.d.ts names.js ref_error.d.ts ref_error.js resolve.d.ts resolve.js rules.d.ts rules.js util.d.ts util.js validate applicability.d.ts applicability.js boolSchema.d.ts boolSchema.js dataType.d.ts dataType.js defaults.d.ts defaults.js index.d.ts index.js keyword.d.ts keyword.js subschema.d.ts subschema.js core.d.ts core.js jtd.d.ts jtd.js refs data.json json-schema-2019-09 index.d.ts index.js meta applicator.json content.json core.json format.json meta-data.json validation.json schema.json json-schema-2020-12 index.d.ts index.js meta applicator.json content.json core.json format-annotation.json meta-data.json unevaluated.json validation.json schema.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json jtd-schema.d.ts jtd-schema.js runtime equal.d.ts equal.js parseJson.d.ts parseJson.js quote.d.ts quote.js re2.d.ts re2.js timestamp.d.ts timestamp.js ucs2length.d.ts ucs2length.js uri.d.ts uri.js validation_error.d.ts validation_error.js standalone index.d.ts index.js instance.d.ts instance.js types index.d.ts index.js json-schema.d.ts json-schema.js jtd-schema.d.ts jtd-schema.js vocabularies applicator additionalItems.d.ts additionalItems.js additionalProperties.d.ts additionalProperties.js allOf.d.ts allOf.js anyOf.d.ts anyOf.js contains.d.ts contains.js dependencies.d.ts dependencies.js dependentSchemas.d.ts dependentSchemas.js if.d.ts if.js index.d.ts index.js items.d.ts items.js items2020.d.ts items2020.js not.d.ts not.js oneOf.d.ts oneOf.js patternProperties.d.ts patternProperties.js prefixItems.d.ts prefixItems.js properties.d.ts properties.js propertyNames.d.ts propertyNames.js thenElse.d.ts thenElse.js code.d.ts code.js core id.d.ts id.js index.d.ts index.js ref.d.ts ref.js discriminator index.d.ts index.js types.d.ts types.js draft2020.d.ts draft2020.js draft7.d.ts draft7.js dynamic dynamicAnchor.d.ts dynamicAnchor.js dynamicRef.d.ts dynamicRef.js index.d.ts index.js recursiveAnchor.d.ts recursiveAnchor.js recursiveRef.d.ts recursiveRef.js errors.d.ts errors.js format format.d.ts format.js index.d.ts index.js jtd discriminator.d.ts discriminator.js elements.d.ts elements.js enum.d.ts enum.js error.d.ts error.js index.d.ts index.js metadata.d.ts metadata.js nullable.d.ts nullable.js optionalProperties.d.ts optionalProperties.js properties.d.ts properties.js ref.d.ts ref.js type.d.ts type.js union.d.ts 
union.js values.d.ts values.js metadata.d.ts metadata.js next.d.ts next.js unevaluated index.d.ts index.js unevaluatedItems.d.ts unevaluatedItems.js unevaluatedProperties.d.ts unevaluatedProperties.js validation const.d.ts const.js dependentRequired.d.ts dependentRequired.js enum.d.ts enum.js index.d.ts index.js limitContains.d.ts limitContains.js limitItems.d.ts limitItems.js limitLength.d.ts limitLength.js limitNumber.d.ts limitNumber.js limitProperties.d.ts limitProperties.js multipleOf.d.ts multipleOf.js pattern.d.ts pattern.js required.d.ts required.js uniqueItems.d.ts uniqueItems.js lib 2019.ts 2020.ts ajv.ts compile codegen code.ts index.ts scope.ts errors.ts index.ts jtd parse.ts serialize.ts types.ts names.ts ref_error.ts resolve.ts rules.ts util.ts validate applicability.ts boolSchema.ts dataType.ts defaults.ts index.ts keyword.ts subschema.ts core.ts jtd.ts refs data.json json-schema-2019-09 index.ts meta applicator.json content.json core.json format.json meta-data.json validation.json schema.json json-schema-2020-12 index.ts meta applicator.json content.json core.json format-annotation.json meta-data.json unevaluated.json validation.json schema.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json jtd-schema.ts runtime equal.ts parseJson.ts quote.ts re2.ts timestamp.ts ucs2length.ts uri.ts validation_error.ts standalone index.ts instance.ts types index.ts json-schema.ts jtd-schema.ts vocabularies applicator additionalItems.ts additionalProperties.ts allOf.ts anyOf.ts contains.ts dependencies.ts dependentSchemas.ts if.ts index.ts items.ts items2020.ts not.ts oneOf.ts patternProperties.ts prefixItems.ts properties.ts propertyNames.ts thenElse.ts code.ts core id.ts index.ts ref.ts discriminator index.ts types.ts draft2020.ts draft7.ts dynamic dynamicAnchor.ts dynamicRef.ts index.ts recursiveAnchor.ts recursiveRef.ts errors.ts format format.ts index.ts jtd discriminator.ts elements.ts enum.ts error.ts index.ts metadata.ts nullable.ts optionalProperties.ts properties.ts ref.ts type.ts union.ts values.ts metadata.ts next.ts unevaluated index.ts unevaluatedItems.ts unevaluatedProperties.ts validation const.ts dependentRequired.ts enum.ts index.ts limitContains.ts limitItems.ts limitLength.ts limitNumber.ts limitProperties.ts multipleOf.ts pattern.ts required.ts uniqueItems.ts package.json json-schema-traverse .eslintrc.yml .github FUNDING.yml workflows build.yml publish.yml README.md index.d.ts index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js package.json tar-fs .travis.yml README.md index.js package.json test fixtures a hello.txt b a test.txt index.js tar-stream README.md extract.js headers.js index.js pack.js package.json sandbox.js tar README.md index.js lib create.js extract.js get-write-flag.js header.js high-level-opt.js large-numbers.js list.js mkdir.js mode-fix.js normalize-windows-path.js pack.js parse.js path-reservations.js pax.js read-entry.js replace.js strip-absolute-path.js strip-trailing-slashes.js types.js unpack.js update.js warn-mixin.js winchars.js write-entry.js node_modules .bin mkdirp.cmd package.json tcp-port-used README.md index.js node_modules debug README.md package.json src browser.js common.js index.js node.js package.json test.js text-encoding-utf-8 LICENSE.md README.md lib encoding.js encoding.lib.js package.json src encoding.js polyfill.js text-table .travis.yml example align.js center.js dotalign.js doubledot.js table.js index.js package.json test align.js ansi-colors.js center.js dotalign.js 
doubledot.js table.js to-readable-stream index.js package.json readme.md to-regex-range README.md index.js package.json toidentifier HISTORY.md README.md index.js package.json tr46 index.js lib mappingTable.json package.json ts-mixer CHANGELOG.md README.md dist cjs decorator.js index.js mixin-tracking.js mixins.js proxy.js settings.js types.js util.js esm index.js index.min.js types decorator.d.ts index.d.ts mixin-tracking.d.ts mixins.d.ts proxy.d.ts settings.d.ts types.d.ts util.d.ts package.json tslib CopyrightNotice.txt LICENSE.txt README.md modules index.js package.json package.json test validateModuleExportsMatchCommonJS index.js package.json tslib.d.ts tslib.es6.html tslib.es6.js tslib.html tslib.js tunnel-agent README.md index.js package.json tweetnacl AUTHORS.md CHANGELOG.md PULL_REQUEST_TEMPLATE.md README.md nacl-fast.js nacl-fast.min.js nacl.d.ts nacl.js nacl.min.js package.json type-check README.md lib check.js index.js parse-type.js package.json type-detect README.md index.js package.json type-detect.js type-fest base.d.ts index.d.ts package.json readme.md source async-return-type.d.ts asyncify.d.ts basic.d.ts conditional-except.d.ts conditional-keys.d.ts conditional-pick.d.ts entries.d.ts entry.d.ts except.d.ts fixed-length-array.d.ts iterable-element.d.ts literal-union.d.ts merge-exclusive.d.ts merge.d.ts mutable.d.ts opaque.d.ts package-json.d.ts partial-deep.d.ts promisable.d.ts promise-value.d.ts readonly-deep.d.ts require-at-least-one.d.ts require-exactly-one.d.ts set-optional.d.ts set-required.d.ts set-return-type.d.ts stringified.d.ts tsconfig-json.d.ts union-to-intersection.d.ts utilities.d.ts value-of.d.ts ts41 camel-case.d.ts delimiter-case.d.ts index.d.ts kebab-case.d.ts pascal-case.d.ts snake-case.d.ts typedarray-to-buffer .airtap.yml .travis.yml README.md index.js package.json test basic.js u2f-api .travis.yml README.md index.d.ts index.js lib google-u2f-api.js u2f-api.js package.json scripts test.sh test.in setup.js tsconfig.json u2f-api index.ts u3 README.md index.js lib cache.js eachCombination.js index.js package.json unique-string index.d.ts index.js package.json readme.md universal-url README.md browser.js index.js node_modules tr46 LICENSE.md README.md index.js lib mappingTable.json regexes.js package.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js URLSearchParams-impl.js URLSearchParams.js infra.js public-api.js url-state-machine.js urlencoded.js utils.js package.json package.json update-notifier check.js index.js node_modules .bin is-ci.cmd semver.cmd package.json readme.md uri-js README.md dist es5 uri.all.d.ts uri.all.js uri.all.min.d.ts uri.all.min.js esnext index.d.ts index.js regexps-iri.d.ts regexps-iri.js regexps-uri.d.ts regexps-uri.js schemes http.d.ts http.js https.d.ts https.js mailto.d.ts mailto.js urn-uuid.d.ts urn-uuid.js urn.d.ts urn.js ws.d.ts ws.js wss.d.ts wss.js uri.d.ts uri.js util.d.ts util.js package.json url-parse-lax index.js package.json readme.md usb CHANGELOG.md README.md libusb .private README.txt bd.cmd bm.sh bwince.cmd post-rewrite.sh pre-commit.sh wbs.txt wbs_wince.txt .travis.yml INSTALL_WIN.txt README.md Xcode config.h android config.h appveyor.yml appveyor_cygwin.bat appveyor_minGW.bat autogen.sh bootstrap.sh examples dpfp.c dpfp_threaded.c ezusb.c ezusb.h fxload.c getopt getopt.c getopt.h getopt1.c hotplugtest.c listdevs.c sam3u_benchmark.c testlibusb.c xusb.c libusb core.c descriptor.c hotplug.c hotplug.h io.c libusb.h libusbi.h os 
darwin_usb.c darwin_usb.h haiku_pollfs.cpp haiku_usb.h haiku_usb_backend.cpp haiku_usb_raw.cpp haiku_usb_raw.h linux_netlink.c linux_udev.c linux_usbfs.c linux_usbfs.h netbsd_usb.c openbsd_usb.c poll_posix.c poll_posix.h poll_windows.c poll_windows.h sunos_usb.c sunos_usb.h threads_posix.c threads_posix.h threads_windows.c threads_windows.h wince_usb.c wince_usb.h windows_common.h windows_nt_common.c windows_nt_common.h windows_nt_shared_types.h windows_usbdk.c windows_usbdk.h windows_winusb.c windows_winusb.h strerror.c sync.c version.h version_nano.h msvc appveyor.bat config.h ddk_build.cmd errno.h inttypes.h libusb_2005.sln libusb_2010.sln libusb_2012.sln libusb_2013.sln libusb_2015.sln libusb_2017.sln libusb_wince.sln missing.c missing.h stdint.h tests libusb_testlib.h stress.c testlib.c travis-autogen.sh libusb_config config.h node_modules .bin node-gyp-build-optional.cmd node-gyp-build-test.cmd node-gyp-build.cmd node-addon-api LICENSE.md README.md index.js napi-inl.deprecated.h napi-inl.h napi.h nothing.c package-support.json package.json tools README.md check-napi.js clang-format.js conversion.js eslint-format.js package.json src helpers.h node_usb.h uv_async_queue.h test usb.coffee usb.js util-deprecate History.md README.md browser.js node.js package.json uuid CHANGELOG.md CONTRIBUTING.md LICENSE.md README.md dist esm-browser index.js md5.js nil.js parse.js regex.js rng.js sha1.js stringify.js v1.js v3.js v35.js v4.js v5.js validate.js version.js esm-node index.js md5.js nil.js parse.js regex.js rng.js sha1.js stringify.js v1.js v3.js v35.js v4.js v5.js validate.js version.js index.js md5-browser.js md5.js nil.js parse.js regex.js rng-browser.js rng.js sha1-browser.js sha1.js stringify.js umd uuid.min.js uuidNIL.min.js uuidParse.min.js uuidStringify.min.js uuidValidate.min.js uuidVersion.min.js uuidv1.min.js uuidv3.min.js uuidv4.min.js uuidv5.min.js uuid-bin.js v1.js v3.js v35.js v4.js v5.js validate.js version.js package.json v8-compile-cache CHANGELOG.md README.md package.json v8-compile-cache.js v8flags README.md config-path.js index.js package.json visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js public-api.js url-state-machine.js utils.js package.json which-module CHANGELOG.md README.md index.js package.json which CHANGELOG.md README.md package.json which.js wide-align README.md align.js package.json widest-line index.d.ts index.js package.json readme.md word-wrap README.md index.d.ts index.js package.json wrap-ansi index.js package.json readme.md wrappy README.md package.json wrappy.js write-file-atomic CHANGELOG.md README.md index.js package.json xdg-basedir index.d.ts index.js package.json readme.md y18n CHANGELOG.md README.md build lib cjs.js index.js platform-shims node.js package.json yallist README.md iterator.js package.json yallist.js yargs-parser CHANGELOG.md LICENSE.txt README.md browser.js build lib index.js string-utils.js 
tokenize-arg-string.js yargs-parser-types.js yargs-parser.js package.json yargs CHANGELOG.md README.md build lib argsert.js command.js completion-templates.js completion.js middleware.js parse-command.js typings common-types.js yargs-parser-types.js usage.js utils apply-extends.js is-promise.js levenshtein.js obj-filter.js process-argv.js set-blocking.js which-module.js validation.js yargs-factory.js yerror.js helpers index.js package.json locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json | Inspector mode only ----------- features not yet implemented issues with the tests differences between PCRE and JS regex | | | | | nested-env-vars package-lock.json package.json scripts 1.dev-deploy.sh 2.create-shortfilm.sh 3.remove-shortfilm.sh 4.fund-shortfilm.sh 5.send-fund.sh 6.find-shortfilm.sh 7.list-shortfilms.sh tests index.js
bs58 ==== [![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58) JavaScript component to compute base 58 encoding. This encoding is typically used for cryptocurrencies such as Bitcoin. **Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library. Install ------- npm i --save bs58 API --- ### encode(input) `input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`. **example**: ```js const bs58 = require('bs58') const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex') const address = bs58.encode(bytes) console.log(address) // => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS ``` ### decode(input) `input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html). **example**: ```js const bs58 = require('bs58') const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS' const bytes = bs58.decode(address) console.log(bytes.toString('hex')) // => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187 ``` Hack / Test ----------- Uses JavaScript standard style. Read more: [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Credits ------- - [Mike Hearn](https://github.com/mikehearn) for original Java implementation - [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript - [Stephan Pair](https://github.com/gasteve) for buffer improvements - [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib - [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency License ------- MIT # pretty-format Stringify any JavaScript value. - Serialize built-in JavaScript types. - Serialize application-specific data types with built-in or user-defined plugins.
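Before the installation and usage details below, a quick sketch of the default output (an assumed example, not taken verbatim from the package docs; it only exercises the `format` export shown later):

```js
const {format: prettyFormat} = require('pretty-format');

// Built-in types such as Map are rendered in a readable, diff-friendly form.
console.log(prettyFormat(new Map([['a', 1]])));
/*
Map {
  "a" => 1,
}
*/
```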
## Installation ```sh $ yarn add pretty-format ``` ## Usage ```js const {format: prettyFormat} = require('pretty-format'); // CommonJS ``` ```js import {format as prettyFormat} from 'pretty-format'; // ES2015 modules ``` ```js const val = {object: {}}; val.circularReference = val; val[Symbol('foo')] = 'foo'; val.map = new Map([['prop', 'value']]); val.array = [-0, Infinity, NaN]; console.log(prettyFormat(val)); /* Object { "array": Array [ -0, Infinity, NaN, ], "circularReference": [Circular], "map": Map { "prop" => "value", }, "object": Object {}, Symbol(foo): "foo", } */ ``` ## Usage with options ```js function onClick() {} console.log(prettyFormat(onClick)); /* [Function onClick] */ const options = { printFunctionName: false, }; console.log(prettyFormat(onClick, options)); /* [Function] */ ``` <!-- prettier-ignore --> | key | type | default | description | | :-------------------- | :-------- | :--------- | :------------------------------------------------------ | | `callToJSON` | `boolean` | `true` | call `toJSON` method (if it exists) on objects | | `compareKeys` | `function`| `undefined`| compare function used when sorting object keys | | `escapeRegex` | `boolean` | `false` | escape special characters in regular expressions | | `escapeString` | `boolean` | `true` | escape special characters in strings | | `highlight` | `boolean` | `false` | highlight syntax with colors in terminal (some plugins) | | `indent` | `number` | `2` | spaces in each level of indentation | | `maxDepth` | `number` | `Infinity` | levels to print in arrays, objects, elements, and so on | | `min` | `boolean` | `false` | minimize added space: no indentation nor line breaks | | `plugins` | `array` | `[]` | plugins to serialize application-specific data types | | `printBasicPrototype` | `boolean` | `false` | print the prototype for plain objects and arrays | | `printFunctionName` | `boolean` | `true` | include or omit the name of a function | | `theme` | `object` | | colors to highlight syntax in terminal | Property values of `theme` are from [ansi-styles colors](https://github.com/chalk/ansi-styles#colors) ```js const DEFAULT_THEME = { comment: 'gray', content: 'reset', prop: 'yellow', tag: 'cyan', value: 'green', }; ``` ## Usage with plugins The `pretty-format` package provides some built-in plugins, including: - `ReactElement` for elements from `react` - `ReactTestComponent` for test objects from `react-test-renderer` ```js // CommonJS const React = require('react'); const renderer = require('react-test-renderer'); const {format: prettyFormat, plugins} = require('pretty-format'); const {ReactElement, ReactTestComponent} = plugins; ``` ```js // ES2015 modules and destructuring assignment import React from 'react'; import renderer from 'react-test-renderer'; import {plugins, format as prettyFormat} from 'pretty-format'; const {ReactElement, ReactTestComponent} = plugins; ``` ```js const onClick = () => {}; const element = React.createElement('button', {onClick}, 'Hello World'); const formatted1 = prettyFormat(element, { plugins: [ReactElement], printFunctionName: false, }); const formatted2 = prettyFormat(renderer.create(element).toJSON(), { plugins: [ReactTestComponent], printFunctionName: false, }); /* <button onClick=[Function] > Hello World </button> */ ``` ## Usage in Jest For snapshot tests, Jest uses `pretty-format` with options that include some of its built-in plugins. For this purpose, plugins are also known as **snapshot serializers**. 
To serialize application-specific data types, you can add modules to `devDependencies` of a project, and then: In an **individual** test file, you can add a module as follows. It precedes any modules from Jest configuration. ```js import serializer from 'my-serializer-module'; expect.addSnapshotSerializer(serializer); // tests which have `expect(value).toMatchSnapshot()` assertions ``` For **all** test files, you can specify modules in Jest configuration. They precede built-in plugins for React, HTML, and Immutable.js data types. For example, in a `package.json` file: ```json { "jest": { "snapshotSerializers": ["my-serializer-module"] } } ``` ## Writing plugins A plugin is a JavaScript object. If `options` has a `plugins` array: for the first plugin whose `test(val)` method returns a truthy value, `prettyFormat(val, options)` returns the result from either: - the `serialize(val, …)` method of the **improved** interface (available in **version 21** or later) - the `print(val, …)` method of the **original** interface (if the plugin does not have a `serialize` method) ### test Write `test` so it can receive a `val` argument of any type. To serialize **objects** which have certain properties, a guarded expression like `val != null && …` or the more concise `val && …` prevents the following errors: - `TypeError: Cannot read property 'whatever' of null` - `TypeError: Cannot read property 'whatever' of undefined` For example, the `test` method of the built-in `ReactElement` plugin: ```js const elementSymbol = Symbol.for('react.element'); const test = val => val && val.$$typeof === elementSymbol; ``` Pay attention to efficiency in `test` because `pretty-format` calls it often. ### serialize The **improved** interface is available in **version 21** or later. Write `serialize` to return a string, given the arguments: - `val` which “passed the test” - unchanging `config` object: derived from `options` - current `indentation` string: concatenate to `indent` from `config` - current `depth` number: compare to `maxDepth` from `config` - current `refs` array: find circular references in objects - `printer` callback function: serialize children ### config <!-- prettier-ignore --> | key | type | description | | :------------------ | :-------- | :------------------------------------------------------ | | `callToJSON` | `boolean` | call `toJSON` method (if it exists) on objects | | `compareKeys` | `function`| compare function used when sorting object keys | | `colors` | `Object` | escape codes for colors to highlight syntax | | `escapeRegex` | `boolean` | escape special characters in regular expressions | | `escapeString` | `boolean` | escape special characters in strings | | `indent` | `string` | spaces in each level of indentation | | `maxDepth` | `number` | levels to print in arrays, objects, elements, and so on | | `min` | `boolean` | minimize added space: no indentation nor line breaks | | `plugins` | `array` | plugins to serialize application-specific data types | | `printFunctionName` | `boolean` | include or omit the name of a function | | `spacingInner` | `string` | spacing to separate items in a list | | `spacingOuter` | `string` | spacing to enclose a list of items | Each property of `colors` in `config` corresponds to a property of `theme` in `options`: - the key is the same (for example, `tag`) - the value in `colors` is an object with `open` and `close` properties whose values are escape codes from [ansi-styles](https://github.com/chalk/ansi-styles) for the color value in `theme` (for example, `'cyan'`) Some
properties in `config` are derived from `min` in `options`: - `spacingInner` and `spacingOuter` are **newline** if `min` is `false` - `spacingInner` is **space** and `spacingOuter` is **empty string** if `min` is `true` ### Example of serialize and test This plugin is a pattern you can apply to serialize composite data types. Side note: `pretty-format` does not need a plugin to serialize arrays. ```js // We reused more code when we factored out a function for child items // that is independent of depth, name, and enclosing punctuation (see below). const SEPARATOR = ','; function serializeItems(items, config, indentation, depth, refs, printer) { if (items.length === 0) { return ''; } const indentationItems = indentation + config.indent; return ( config.spacingOuter + items .map( item => indentationItems + printer(item, config, indentationItems, depth, refs), // callback ) .join(SEPARATOR + config.spacingInner) + (config.min ? '' : SEPARATOR) + // following the last item config.spacingOuter + indentation ); } const plugin = { test(val) { return Array.isArray(val); }, serialize(array, config, indentation, depth, refs, printer) { const name = array.constructor.name; return ++depth > config.maxDepth ? '[' + name + ']' : (config.min ? '' : name + ' ') + '[' + serializeItems(array, config, indentation, depth, refs, printer) + ']'; }, }; ``` ```js const val = { filter: 'completed', items: [ { text: 'Write test', completed: true, }, { text: 'Write serialize', completed: true, }, ], }; ``` ```js console.log( prettyFormat(val, { plugins: [plugin], }), ); /* Object { "filter": "completed", "items": Array [ Object { "completed": true, "text": "Write test", }, Object { "completed": true, "text": "Write serialize", }, ], } */ ``` ```js console.log( prettyFormat(val, { indent: 4, plugins: [plugin], }), ); /* Object { "filter": "completed", "items": Array [ Object { "completed": true, "text": "Write test", }, Object { "completed": true, "text": "Write serialize", }, ], } */ ``` ```js console.log( prettyFormat(val, { maxDepth: 1, plugins: [plugin], }), ); /* Object { "filter": "completed", "items": [Array], } */ ``` ```js console.log( prettyFormat(val, { min: true, plugins: [plugin], }), ); /* {"filter": "completed", "items": [{"completed": true, "text": "Write test"}, {"completed": true, "text": "Write serialize"}]} */ ``` ### print The **original** interface is adequate for plugins: - that **do not** depend on options other than `highlight` or `min` - that **do not** depend on `depth` or `refs` in recursive traversal, and - if values either - do **not** require indentation, or - do **not** occur as children of JavaScript data structures (for example, array) Write `print` to return a string, given the arguments: - `val` which “passed the test” - current `printer(valChild)` callback function: serialize children - current `indenter(lines)` callback function: indent lines at the next level - unchanging `config` object: derived from `options` - unchanging `colors` object: derived from `options` The 3 properties of `config` are `min` in `options` and: - `spacing` and `edgeSpacing` are **newline** if `min` is `false` - `spacing` is **space** and `edgeSpacing` is **empty string** if `min` is `true` Each property of `colors` corresponds to a property of `theme` in `options`: - the key is the same (for example, `tag`) - the value in `colors` is an object with `open` and `close` properties whose values are escape codes from [ansi-styles](https://github.com/chalk/ansi-styles) for the color value in `theme` (for
example, `'cyan'`) ### Example of print and test This plugin prints functions with the **number of named arguments**, excluding any rest argument. ```js const plugin = { print(val) { return `[Function ${val.name || 'anonymous'} ${val.length}]`; }, test(val) { return typeof val === 'function'; }, }; ``` ```js const val = { onClick(event) {}, render() {}, }; prettyFormat(val, { plugins: [plugin], }); /* Object { "onClick": [Function onClick 1], "render": [Function render 0], } */ prettyFormat(val); /* Object { "onClick": [Function onClick], "render": [Function render], } */ ``` This plugin **ignores** the `printFunctionName` option. That limitation of the original `print` interface is a reason to use the improved `serialize` interface, described above. ```js prettyFormat(val, { plugins: [plugin], printFunctionName: false, }); /* Object { "onClick": [Function onClick 1], "render": [Function render 0], } */ prettyFormat(val, { printFunctionName: false, }); /* Object { "onClick": [Function], "render": [Function], } */ ``` # Node.js ABI [![Build Status](https://travis-ci.org/lgeiger/node-abi.svg?branch=v1.0.0)](https://travis-ci.org/lgeiger/node-abi) [![Greenkeeper badge](https://badges.greenkeeper.io/lgeiger/node-abi.svg)](https://greenkeeper.io/) Get the Node ABI for a given target and runtime, and vice versa. ## Installation ``` npm install node-abi ``` ## Usage ```javascript const nodeAbi = require('node-abi') nodeAbi.getAbi('7.2.0', 'node') // '51' nodeAbi.getAbi('1.4.10', 'electron') // '50' nodeAbi.getTarget('51', 'node') // '7.2.0' nodeAbi.getTarget('50', 'electron') // '1.4.15' nodeAbi.allTargets // [ // { runtime: 'node', target: '0.10.48', abi: '11', lts: false }, // { runtime: 'node', target: '0.12.17', abi: '14', lts: false }, // { runtime: 'node', target: '4.6.1', abi: '46', lts: true }, // { runtime: 'node', target: '5.12.0', abi: '47', lts: false }, // { runtime: 'node', target: '6.9.4', abi: '48', lts: true }, // { runtime: 'node', target: '7.4.0', abi: '51', lts: false }, // { runtime: 'electron', target: '1.0.2', abi: '47', lts: false }, // { runtime: 'electron', target: '1.2.8', abi: '48', lts: false }, // { runtime: 'electron', target: '1.3.13', abi: '49', lts: false }, // { runtime: 'electron', target: '1.4.15', abi: '50', lts: false } // ] nodeAbi.deprecatedTargets nodeAbi.supportedTargets nodeAbi.additionalTargets nodeAbi.futureTargets // ... ``` ## References - https://github.com/lgeiger/electron-abi - https://nodejs.org/en/download/releases/ - https://github.com/nodejs/Release # set-blocking [![Build Status](https://travis-ci.org/yargs/set-blocking.svg)](https://travis-ci.org/yargs/set-blocking) [![NPM version](https://img.shields.io/npm/v/set-blocking.svg)](https://www.npmjs.com/package/set-blocking) [![Coverage Status](https://coveralls.io/repos/yargs/set-blocking/badge.svg?branch=)](https://coveralls.io/r/yargs/set-blocking?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) Set blocking `stdio` and `stderr`, ensuring that terminal output does not truncate. ```js const setBlocking = require('set-blocking') setBlocking(true) console.log(someLargeStringToOutput) ``` ## Historical Context/Word of Warning This was created as a shim to address the bug discussed in [node #6456](https://github.com/nodejs/node/issues/6456). This bug crops up on newer versions of Node.js (`0.12+`), truncating terminal output.
You should be mindful of the side-effects caused by using `set-blocking`: * if your module sets blocking to `true`, it will affect other modules consuming your library. In [yargs](https://github.com/yargs/yargs/blob/master/yargs.js#L653) we only call `setBlocking(true)` once we already know we are about to call `process.exit(code)`. * this patch will not apply to subprocesses spawned with `isTTY = true`; this is the [default `spawn()` behavior](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options). ## License ISC <p align="center"> <a href="http://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # flagged-respawn [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] A tool for respawning node binaries when special flags are present. ## What is it? Say you wrote a command line tool that runs arbitrary javascript (e.g. task runner, test framework, etc). For the sake of discussion, let's pretend it's a testing harness you've named `testify`. Everything is going splendidly until one day you decide to test some code that relies on a feature behind a v8 flag in node (`--harmony`, for example). Without much thought, you run `testify --harmony spec tests.js`. It doesn't work. After digging around for a bit, you realize this produces a [`process.argv`](http://nodejs.org/docs/latest/api/process.html#process_process_argv) of: `['node', '/usr/local/bin/test', '--harmony', 'spec', 'tests.js']` Crap. The `--harmony` flag is in the wrong place! It should be applied to the **node** command, not our binary. What we actually wanted was this: `['node', '--harmony', '/usr/local/bin/test', 'spec', 'tests.js']` Flagged-respawn fixes this problem and handles all the edge cases respawning creates, such as: - Providing a method to determine if a respawn is needed. - Piping stderr/stdout from the child into the parent. - Making the parent process exit with the same code as the child. - If the child is killed, making the parent exit with the same signal. To see it in action, clone this repository and run `npm install` / `npm run respawn` / `npm run nospawn`. ## Sample Usage ```js #!/usr/bin/env node const flaggedRespawn = require('flagged-respawn'); // get a list of all possible v8 flags for the running version of node const v8flags = require('v8flags').fetch(); flaggedRespawn(v8flags, process.argv, function (ready, child) { if (ready) { console.log('Running!'); // your cli code here } else { console.log('Special flags found, respawning.'); } if (process.pid !== child.pid) { console.log('Respawned to PID:', child.pid); } }); ``` ## API ### <u>flaggedRespawn(flags, argv, [ forcedFlags, ] callback) : Void</u> Respawns the script itself when *argv* contains a special flag listed in *flags* and/or *forcedFlags* is not empty. Because members of *flags* and *forcedFlags* are passed to the `node` command, each of them needs to be a node flag or a V8 flag. #### Forbid respawning If the `--no-respawning` flag is given in *argv*, this function does not respawn even if *argv* contains members of *flags* or *forcedFlags* is not empty. (This flag is also used internally to prevent respawning more than once.)
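As a minimal sketch of the optional *forcedFlags* argument and the `--no-respawning` escape hatch described above (the flag value here is illustrative and assumed, not prescribed by the package; see the parameter tables below for the callback signature):

```js
#!/usr/bin/env node
const flaggedRespawn = require('flagged-respawn');
// mirrors the sample usage above: fetch the v8 flags known to this node version
const v8flags = require('v8flags').fetch();

// Always respawn with an extra node flag, in addition to any special flags the
// user put on the command line. Passing --no-respawning suppresses the respawn.
flaggedRespawn(v8flags, process.argv, ['--max-old-space-size=4096'], function (ready, proc, argv) {
  if (ready) {
    // Either no respawn was needed, or we are the respawned child:
    // argv no longer contains the special flags or --no-respawning.
    console.log('CLI running with argv:', argv);
  } else {
    console.log('Respawning with forced flags, child PID:', proc.pid);
  }
});
```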
#### Parameters: | Parameter | Type | Description | |:--------------|:------:|:----------------------------------------------------| | *flags* | Array | An array of node flags and V8 flags that trigger a respawn when present in *argv*. | | *argv* | Array | Command line arguments to respawn with. | | *forcedFlags* | Array or String | An array of node/V8 flags (or a string containing a single flag) that always force a respawn. | | *callback* | function | A function called either when no respawn is needed or after the respawn has completed. | * **<u><i>callback</i>(ready, proc, argv) : Void</u>** The *callback* function is called whether the process respawned or not, and the two cases can be distinguished by its *ready* argument. (*ready* indicates whether the current process spawned a child process (`false`) or not (`true`); it does not indicate whether the current process is itself a spawned child. *ready* is `true` inside a spawned child process.) *argv* is the array of command line arguments used for the respawn (when *ready* is false), or the arguments passed to the current process with the members of *flags* and `--no-respawning` removed (when *ready* is true). **Parameters:** | Parameter | Type | Description | |:----------|:-------:|:--------------------------| | *ready* | boolean | True if no respawn is taking place and the main function is ready to execute. | | *proc* | object | Child process object if respawned, otherwise the current process object. | | *argv* | Array | An array of command line arguments. | ## License MIT [downloads-image]: http://img.shields.io/npm/dm/flagged-respawn.svg [npm-url]: https://www.npmjs.com/package/flagged-respawn [npm-image]: http://img.shields.io/npm/v/flagged-respawn.svg [travis-url]: https://travis-ci.org/gulpjs/flagged-respawn [travis-image]: http://img.shields.io/travis/gulpjs/flagged-respawn.svg?label=travis-ci [appveyor-url]: https://ci.appveyor.com/project/gulpjs/flagged-respawn [appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/flagged-respawn.svg?label=appveyor [coveralls-url]: https://coveralls.io/r/gulpjs/flagged-respawn [coveralls-image]: http://img.shields.io/coveralls/gulpjs/flagged-respawn/master.svg [gitter-url]: https://gitter.im/gulpjs/gulp [gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg # Punycode.js [![Build status](https://travis-ci.org/bestiejs/punycode.js.svg?branch=master)](https://travis-ci.org/bestiejs/punycode.js) [![Code coverage status](http://img.shields.io/codecov/c/github/bestiejs/punycode.js.svg)](https://codecov.io/gh/bestiejs/punycode.js) [![Dependency status](https://gemnasium.com/bestiejs/punycode.js.svg)](https://gemnasium.com/bestiejs/punycode.js) Punycode.js is a robust Punycode converter that fully complies with [RFC 3492](https://tools.ietf.org/html/rfc3492) and [RFC 5891](https://tools.ietf.org/html/rfc5891). This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm: * [The C example code from RFC 3492](https://tools.ietf.org/html/rfc3492#appendix-C) * [`punycode.c` by _Markus W.
Scherer_ (IBM)](http://opensource.apple.com/source/ICU/ICU-400.42/icuSources/common/punycode.c) * [`punycode.c` by _Ben Noordhuis_](https://github.com/bnoordhuis/punycode/blob/master/punycode.c) * [JavaScript implementation by _some_](http://stackoverflow.com/questions/183485/can-anyone-recommend-a-good-free-javascript-for-punycode-to-unicode-conversion/301287#301287) * [`punycode.js` by _Ben Noordhuis_](https://github.com/joyent/node/blob/426298c8c1c0d5b5224ac3658c41e7c2a3fe9377/lib/punycode.js) (note: [not fully compliant](https://github.com/joyent/node/issues/2072)) This project was [bundled](https://github.com/joyent/node/blob/master/lib/punycode.js) with Node.js from [v0.6.2+](https://github.com/joyent/node/compare/975f1930b1...61e796decc) until [v7](https://github.com/nodejs/node/pull/7941) (soft-deprecated). The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see [v1.4.1](https://github.com/bestiejs/punycode.js/releases/tag/v1.4.1). ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install punycode --save ``` In [Node.js](https://nodejs.org/): ```js const punycode = require('punycode'); ``` ## API ### `punycode.decode(string)` Converts a Punycode string of ASCII symbols to a string of Unicode symbols. ```js // decode domain name parts punycode.decode('maana-pta'); // 'mañana' punycode.decode('--dqo34k'); // '☃-⌘' ``` ### `punycode.encode(string)` Converts a string of Unicode symbols to a Punycode string of ASCII symbols. ```js // encode domain name parts punycode.encode('mañana'); // 'maana-pta' punycode.encode('☃-⌘'); // '--dqo34k' ``` ### `punycode.toUnicode(input)` Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode. ```js // decode domain names punycode.toUnicode('xn--maana-pta.com'); // → 'mañana.com' punycode.toUnicode('xn----dqo34k.com'); // → '☃-⌘.com' // decode email addresses punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'); // → 'джумла@джpумлатест.bрфa' ``` ### `punycode.toASCII(input)` Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII. ```js // encode domain names punycode.toASCII('mañana.com'); // → 'xn--maana-pta.com' punycode.toASCII('☃-⌘.com'); // → 'xn----dqo34k.com' // encode email addresses punycode.toASCII('джумла@джpумлатест.bрфa'); // → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq' ``` ### `punycode.ucs2` #### `punycode.ucs2.decode(string)` Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16. ```js punycode.ucs2.decode('abc'); // → [0x61, 0x62, 0x63] // surrogate pair for U+1D306 TETRAGRAM FOR CENTRE: punycode.ucs2.decode('\uD834\uDF06'); // → [0x1D306] ``` #### `punycode.ucs2.encode(codePoints)` Creates a string based on an array of numeric code point values. 
```js punycode.ucs2.encode([0x61, 0x62, 0x63]); // → 'abc' punycode.ucs2.encode([0x1D306]); // → '\uD834\uDF06' ``` ### `punycode.version` A string representing the current Punycode.js version number. ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License Punycode.js is available under the [MIT](https://mths.be/mit) license. tcp-port-used ============= A simple Node.js module to check if a TCP port is currently in use. It returns a deferred promise from the q library. ## Installation npm install tcp-port-used ## Examples To check a port's state: var tcpPortUsed = require('tcp-port-used'); tcpPortUsed.check(44201, '127.0.0.1') .then(function(inUse) { console.log('Port 44201 usage: '+inUse); }, function(err) { console.error('Error on check:', err.message); }); To wait until a port on localhost is available: tcpPortUsed.waitUntilFree(44203, 500, 4000) .then(function() { console.log('Port 44203 is now free.'); }, function(err) { console.log('Error:', err.message); }); To wait until a port on a host is available: tcpPortUsed.waitUntilFreeOnHost(44203, 'some.host.com', 500, 4000) .then(function() { console.log('Port 44203 on some.host.com is now free.'); }, function(err) { console.log('Error:', err.message); }); To wait until a port on localhost is accepting connections: tcpPortUsed.waitUntilUsed(44204, 500, 4000) .then(function() { console.log('Port 44204 is now in use.'); }, function(err) { console.log('Error:', err.message); }); To wait until a port on a host is accepting connections: tcpPortUsed.waitUntilUsedOnHost(44204, 'some.host.com', 500, 4000) .then(function() { console.log('Port 44204 on some.host.com is now in use.'); }, function(err) { console.log('Error:', err.message); }); To wait until a port on a host is in a specific state: var inUse = true; // wait until the port is in use tcpPortUsed.waitForStatus(44204, 'some.host.com', inUse, 500, 4000) .then(function() { console.log('Port 44204 on some.host.com is now in use.'); }, function(err) { console.log('Error:', err.message); }); ## API ### check(port [, host]) Checks if a TCP port is in use by attempting to connect to the port on host. If no host is specified, the module uses '127.0.0.1' (localhost). When the promise is resolved, it carries a parameter `inUse`: true means the port is in use and false means it is free. **Parameters:** * **Number|Object** *port* The port whose availability you want to check. If an object, it must contain all the parameters as properties. * **String** *host* The host name or IP address of the host. Default, if not defined: '127.0.0.1' **Returns:** **Object** A deferred promise from the q module. ### waitUntilFree(port [, retryTimeMs] [, timeOutMs]) Returns a deferred promise and fulfills it only when the localhost socket is free. Will retry on an interval specified in retryTimeMs until the timeout. If not defined the retryTime is 200 ms and the timeout is 2000 ms. **Parameters:** * **Number|Object** *port* a valid TCP port number. If an object, it must contain all the parameters as properties. * **Number** *[retryTimeMs]* the retry interval in milliseconds - default is 100 ms. * **Number** *[timeOutMs]* the amount of time to wait until the port is free. Default 300 ms. **Returns:** **Object** A deferred promise from the q module.
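The API entries above note that *port* may instead be an object carrying all parameters as properties. A hedged sketch of that form, assuming the property names mirror the documented argument names:

```js
var tcpPortUsed = require('tcp-port-used');

// Same check as tcpPortUsed.check(44201, '127.0.0.1'), written with an options
// object; the property names here are assumed, not taken from the docs above.
tcpPortUsed.check({ port: 44201, host: '127.0.0.1' })
    .then(function(inUse) {
        console.log('Port 44201 in use:', inUse);
    }, function(err) {
        console.error('Error on check:', err.message);
    });
```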
### waitUntilFreeOnHost(port [, host] [, retryTimeMs] [, timeOutMs]) Returns a deferred promise and fulfills it only when the socket on the given host is free. Will retry on an interval specified in retryTimeMs until the timeout. If not defined the retryTime is 200 ms and the timeout is 2000 ms. If the host is not defined, the module uses the default '127.0.0.1'. **Parameters:** * **Number|Object** *port* a valid TCP port number. If an object, it must contain all the parameters as properties. * **String** *host* The host name or IP address of the host. Default, if not defined: '127.0.0.1' * **Number** *[retryTimeMs]* the retry interval in milliseconds - default is 100 ms. * **Number** *[timeOutMs]* the amount of time to wait until the port is free. Default 300 ms. **Returns:** **Object** A deferred promise from the q module. ### waitUntilUsed(port [, retryTimeMs] [, timeOutMs]) Returns a deferred promise and fulfills it only when the socket is accepting connections. Will retry on an interval specified in retryTimeMs until the timeout. If not defined the retryTime is 200 ms and the timeout is 2000 ms. **Parameters:** * **Number|Object** *port* a valid TCP port number. If an object, it must contain all the parameters as properties. * **Number** *[retryTimeMs]* the retry interval in milliseconds - default is 100 ms. * **Number** *[timeOutMs]* the amount of time to wait until the port is in use. Default 300 ms. **Returns:** **Object** A deferred promise from the q module. ### waitUntilUsedOnHost(port [, host] [, retryTimeMs] [, timeOutMs]) Returns a deferred promise and fulfills it only when the socket is accepting connections. Will retry on an interval specified in retryTimeMs until the timeout. If not defined the retryTime is 200 ms and the timeout is 2000 ms. If the host is not defined, the module uses the default '127.0.0.1'. **Parameters:** * **Number|Object** *port* a valid TCP port number. If an object, it must contain all the parameters as properties. * **String** *host* The host name or IP address of the host. Default, if not defined: '127.0.0.1' * **Number** *[retryTimeMs]* the retry interval in milliseconds - default is 100 ms. * **Number** *[timeOutMs]* the amount of time to wait until the port is in use. Default 300 ms. **Returns:** **Object** A deferred promise from the q module. ### waitForStatus(port, host, status [, retryTimeMs] [, timeOutMs]) Waits until the port on host matches the boolean status in terms of use. If the status is true, the promise defers until the port is in use. If the status is false the promise defers until the port is free. If the host is undefined or null, the module uses the default '127.0.0.1'. Also, if not defined the retryTime is 200 ms and the timeout is 2000 ms. **Parameters:** * **Number|Object** *port* a valid TCP port number. If an object, it must contain all the parameters as properties. * **String** *host* The host name or IP address of the host. Default, if not defined: '127.0.0.1' * **Boolean** *status* A boolean describing the condition to wait for in terms of "in use." True indicates wait until the port is in use. False indicates wait until the port is free. * **Number** *[retryTimeMs]* the retry interval in milliseconds - default is 100 ms. * **Number** *[timeOutMs]* the amount of time to wait until the port reaches the desired state. Default 300 ms. **Returns:** **Object** A deferred promise from the q module.
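The examples earlier only wait for a port to become used; passing `false` as the *status* argument waits for the opposite condition. A small sketch using the signature documented above:

```js
var tcpPortUsed = require('tcp-port-used');

// Wait until port 44204 on some.host.com is free, polling every 500 ms
// and giving up after 4000 ms.
tcpPortUsed.waitForStatus(44204, 'some.host.com', false, 500, 4000)
    .then(function() {
        console.log('Port 44204 on some.host.com is now free.');
    }, function(err) {
        console.log('Error:', err.message);
    });
```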
## License The MIT License (MIT) Copyright (c) 2013 jut-io Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Library ======= Additional packages provided by the main package. | Package | Description |------------------------------------|------------------------- | [@assemblyscript/loader](./loader) | Module loader utility | [@assemblyscript/rtrace](./rtrace) | Runtime tracing utility | binaryen | Binaryen proxy The Binaryen proxy herein is imported accross the code base and forwards the `binaryen` npm package by default. It can be modified to use a custom build, for example for testing purposes. # fill-range [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/fill-range.svg?style=flat)](https://www.npmjs.com/package/fill-range) [![NPM monthly downloads](https://img.shields.io/npm/dm/fill-range.svg?style=flat)](https://npmjs.org/package/fill-range) [![NPM total downloads](https://img.shields.io/npm/dt/fill-range.svg?style=flat)](https://npmjs.org/package/fill-range) [![Linux Build Status](https://img.shields.io/travis/jonschlinkert/fill-range.svg?style=flat&label=Travis)](https://travis-ci.org/jonschlinkert/fill-range) > Fill in a range of numbers or letters, optionally passing an increment or `step` to use, or create a regex-compatible range with `options.toRegex` Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save fill-range ``` ## Usage Expands numbers and letters, optionally using a `step` as the last argument. _(Numbers may be defined as JavaScript numbers or strings)_. ```js const fill = require('fill-range'); // fill(from, to[, step, options]); console.log(fill('1', '10')); //=> ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10'] console.log(fill('1', '10', { toRegex: true })); //=> [1-9]|10 ``` **Params** * `from`: **{String|Number}** the number or letter to start with * `to`: **{String|Number}** the number or letter to end with * `step`: **{String|Number|Object|Function}** Optionally pass a [step](#optionsstep) to use. * `options`: **{Object|Function}**: See all available [options](#options) ## Examples By default, an array of values is returned. 
**Alphabetical ranges** ```js console.log(fill('a', 'e')); //=> ['a', 'b', 'c', 'd', 'e'] console.log(fill('A', 'E')); //=> [ 'A', 'B', 'C', 'D', 'E' ] ``` **Numerical ranges** Numbers can be defined as actual numbers or strings. ```js console.log(fill(1, 5)); //=> [ 1, 2, 3, 4, 5 ] console.log(fill('1', '5')); //=> [ 1, 2, 3, 4, 5 ] ``` **Negative ranges** Numbers can be defined as actual numbers or strings. ```js console.log(fill('-5', '-1')); //=> [ '-5', '-4', '-3', '-2', '-1' ] console.log(fill('-5', '5')); //=> [ '-5', '-4', '-3', '-2', '-1', '0', '1', '2', '3', '4', '5' ] ``` **Steps (increments)** ```js // numerical ranges with increments console.log(fill('0', '25', 4)); //=> [ '0', '4', '8', '12', '16', '20', '24' ] console.log(fill('0', '25', 5)); //=> [ '0', '5', '10', '15', '20', '25' ] console.log(fill('0', '25', 6)); //=> [ '0', '6', '12', '18', '24' ] // alphabetical ranges with increments console.log(fill('a', 'z', 4)); //=> [ 'a', 'e', 'i', 'm', 'q', 'u', 'y' ] console.log(fill('a', 'z', 5)); //=> [ 'a', 'f', 'k', 'p', 'u', 'z' ] console.log(fill('a', 'z', 6)); //=> [ 'a', 'g', 'm', 's', 'y' ] ``` ## Options ### options.step **Type**: `number` (formatted as a string or number) **Default**: `undefined` **Description**: The increment to use for the range. Can be used with letters or numbers. **Example(s)** ```js // numbers console.log(fill('1', '10', 2)); //=> [ '1', '3', '5', '7', '9' ] console.log(fill('1', '10', 3)); //=> [ '1', '4', '7', '10' ] console.log(fill('1', '10', 4)); //=> [ '1', '5', '9' ] // letters console.log(fill('a', 'z', 5)); //=> [ 'a', 'f', 'k', 'p', 'u', 'z' ] console.log(fill('a', 'z', 7)); //=> [ 'a', 'h', 'o', 'v' ] console.log(fill('a', 'z', 9)); //=> [ 'a', 'j', 's' ] ``` ### options.strictRanges **Type**: `boolean` **Default**: `false` **Description**: By default, `null` is returned when an invalid range is passed. Enable this option to throw a `RangeError` on invalid ranges. **Example(s)** The following are all invalid: ```js fill('1.1', '2'); // decimals not supported in ranges fill('a', '2'); // incompatible range values fill(1, 10, 'foo'); // invalid "step" argument ``` ### options.stringify **Type**: `boolean` **Default**: `undefined` **Description**: Cast all returned values to strings. By default, integers are returned as numbers. **Example(s)** ```js console.log(fill(1, 5)); //=> [ 1, 2, 3, 4, 5 ] console.log(fill(1, 5, { stringify: true })); //=> [ '1', '2', '3', '4', '5' ] ``` ### options.toRegex **Type**: `boolean` **Default**: `undefined` **Description**: Create a regex-compatible source string, instead of expanding values to an array. **Example(s)** ```js // alphabetical range console.log(fill('a', 'e', { toRegex: true })); //=> '[a-e]' // alphabetical with step console.log(fill('a', 'z', 3, { toRegex: true })); //=> 'a|d|g|j|m|p|s|v|y' // numerical range console.log(fill('1', '100', { toRegex: true })); //=> '[1-9]|[1-9][0-9]|100' // numerical range with zero padding console.log(fill('000001', '100000', { toRegex: true })); //=> '0{5}[1-9]|0{4}[1-9][0-9]|0{3}[1-9][0-9]{2}|0{2}[1-9][0-9]{3}|0[1-9][0-9]{4}|100000' ``` ### options.transform **Type**: `function` **Default**: `undefined` **Description**: Customize each value in the returned array (or [string](#optionstoRegex)). _(you can also pass this function as the last argument to `fill()`)_. 
**Example(s)** ```js // add zero padding console.log(fill(1, 5, value => String(value).padStart(4, '0'))); //=> ['0001', '0002', '0003', '0004', '0005'] ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Contributors | **Commits** | **Contributor** | | --- | --- | | 116 | [jonschlinkert](https://github.com/jonschlinkert) | | 4 | [paulmillr](https://github.com/paulmillr) | | 2 | [realityking](https://github.com/realityking) | | 2 | [bluelovers](https://github.com/bluelovers) | | 1 | [edorivai](https://github.com/edorivai) | | 1 | [wtgtybhertgeghgtwtg](https://github.com/wtgtybhertgeghgtwtg) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) Please consider supporting me on Patreon, or [start your own Patreon page](https://patreon.com/invite/bxpbvm)! <a href="https://www.patreon.com/jonschlinkert"> <img src="https://c5.patreon.com/external/logo/become_a_patron_button@2x.png" height="50"> </a> ### License Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on April 08, 2019._ # string_decoder ***Node-core v8.9.4 string_decoder for userland*** [![NPM](https://nodei.co/npm/string_decoder.png?downloads=true&downloadRank=true)](https://nodei.co/npm/string_decoder/) [![NPM](https://nodei.co/npm-dl/string_decoder.png?&months=6&height=3)](https://nodei.co/npm/string_decoder/) ```bash npm install --save string_decoder ``` ***Node-core string_decoder for userland*** This package is a mirror of the string_decoder implementation in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.9.4/docs/api/). As of version 1.0.0 **string_decoder** uses semantic versioning. ## Previous versions Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10. ## Update The *build/* directory contains a build script that will scrape the source from the [nodejs/node](https://github.com/nodejs/node) repo given a specific Node version. ## Streams Working Group `string_decoder` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. 
* Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. See [readable-stream](https://github.com/nodejs/readable-stream) for more details. Standard library ================ Standard library components for use with `tsc` (portable) and `asc` (assembly). Base configurations (.json) and definition files (.d.ts) are relevant to `tsc` only and not used by `asc`. # inflight Add callbacks to requests in flight to avoid async duplication ## USAGE ```javascript var inflight = require('inflight') // some request that does some stuff function req(key, callback) { // key is any random string. like a url or filename or whatever. // // will return either a falsey value, indicating that the // request for this key is already in flight, or a new callback // which when called will call all callbacks passed to inflightk // with the same key callback = inflight(key, callback) // If we got a falsey value back, then there's already a req going if (!callback) return // this is where you'd fetch the url or whatever // callback is also once()-ified, so it can safely be assigned // to multiple events etc. First call wins. setTimeout(function() { callback(null, key) }, 100) } // only assigns a single setTimeout // when it dings, all cbs get called req('foo', cb1) req('foo', cb2) req('foo', cb3) req('foo', cb4) ``` # get-caller-file [![Build Status](https://travis-ci.org/stefanpenner/get-caller-file.svg?branch=master)](https://travis-ci.org/stefanpenner/get-caller-file) [![Build status](https://ci.appveyor.com/api/projects/status/ol2q94g1932cy14a/branch/master?svg=true)](https://ci.appveyor.com/project/embercli/get-caller-file/branch/master) This is a utility, which allows a function to figure out from which file it was invoked. It does so by inspecting v8's stack trace at the time it is invoked. Inspired by http://stackoverflow.com/questions/13227489 *note: this relies on Node/V8 specific APIs, as such other runtimes may not work* ## Installation ```bash yarn add get-caller-file ``` ## Usage Given: ```js // ./foo.js const getCallerFile = require('get-caller-file'); module.exports = function() { return getCallerFile(); // figures out who called it }; ``` ```js // index.js const foo = require('./foo'); foo() // => /full/path/to/this/file/index.js ``` ## Options: * `getCallerFile(position = 2)`: where position is stack frame whos fileName we want. # json-schema-traverse Traverse JSON Schema passing each schema object to callback [![build](https://github.com/epoberezkin/json-schema-traverse/workflows/build/badge.svg)](https://github.com/epoberezkin/json-schema-traverse/actions?query=workflow%3Abuild) [![npm](https://img.shields.io/npm/v/json-schema-traverse)](https://www.npmjs.com/package/json-schema-traverse) [![coverage](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master) ## Install ``` npm install json-schema-traverse ``` ## Usage ```javascript const traverse = require('json-schema-traverse'); const schema = { properties: { foo: {type: 'string'}, bar: {type: 'integer'} } }; traverse(schema, {cb}); // cb is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. 
{type: 'integer'} // Or: traverse(schema, {cb: {pre, post}}); // pre is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // // post is called 3 times with: // 1. {type: 'string'} // 2. {type: 'integer'} // 3. root schema ``` Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved, they are passed as is. Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed. Callback is passed these parameters: - _schema_: the current schema object - _JSON pointer_: from the root schema to the current schema object - _root schema_: the schema passed to `traverse` object - _parent JSON pointer_: from the root schema to the parent schema object (see below) - _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.) - _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema - _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'` ## Traverse objects in all unknown keywords ```javascript const traverse = require('json-schema-traverse'); const schema = { mySchema: { minimum: 1, maximum: 2 } }; traverse(schema, {allKeys: true, cb}); // cb is called 2 times with: // 1. root schema // 2. mySchema ``` Without option `allKeys: true` callback will be called only with root schema. ## Enterprise support json-schema-traverse package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-json-schema-traverse?utm_source=npm-json-schema-traverse&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. ## License [MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE) # fast-json-stable-stringify Deterministic `JSON.stringify()` - a faster version of [@substack](https://github.com/substack)'s json-stable-strigify without [jsonify](https://github.com/substack/jsonify). You can also pass in a custom comparison function. [![Build Status](https://travis-ci.org/epoberezkin/fast-json-stable-stringify.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-json-stable-stringify) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-json-stable-stringify/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-json-stable-stringify?branch=master) # example ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; console.log(stringify(obj)); ``` output: ``` {"a":3,"b":[{"x":4,"y":5,"z":6},7],"c":8} ``` # methods ``` js var stringify = require('fast-json-stable-stringify') ``` ## var str = stringify(obj, opts) Return a deterministic stringified string `str` from the object `obj`. 
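To make the word "deterministic" concrete: two objects with the same entries but different key insertion order stringify to the same string. A small sketch:

```js
var stringify = require('fast-json-stable-stringify');

var a = { x: 1, y: 2 };
var b = { y: 2, x: 1 };

// Keys are sorted before serialization, so insertion order does not matter.
console.log(stringify(a)); // '{"x":1,"y":2}'
console.log(stringify(a) === stringify(b)); // true
```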
## options ### cmp If `opts` is given, you can supply an `opts.cmp` to have a custom comparison function for object keys. Your function `opts.cmp` is called with these parameters: ``` js opts.cmp({ key: akey, value: avalue }, { key: bkey, value: bvalue }) ``` For example, to sort on the object key names in reverse order you could write: ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; var s = stringify(obj, function (a, b) { return a.key < b.key ? 1 : -1; }); console.log(s); ``` which results in the output string: ``` {"c":8,"b":[{"z":6,"y":5,"x":4},7],"a":3} ``` Or if you wanted to sort on the object values in reverse order, you could write: ``` var stringify = require('fast-json-stable-stringify'); var obj = { d: 6, c: 5, b: [{z:3,y:2,x:1},9], a: 10 }; var s = stringify(obj, function (a, b) { return a.value < b.value ? 1 : -1; }); console.log(s); ``` which outputs: ``` {"d":6,"c":5,"b":[{"z":3,"y":2,"x":1},9],"a":10} ``` ### cycles Pass `true` in `opts.cycles` to stringify circular property as `__cycle__` - the result will not be a valid JSON string in this case. TypeError will be thrown in case of circular object without this option. # install With [npm](https://npmjs.org) do: ``` npm install fast-json-stable-stringify ``` # benchmark To run benchmark (requires Node.js 6+): ``` node benchmark ``` Results: ``` fast-json-stable-stringify x 17,189 ops/sec ±1.43% (83 runs sampled) json-stable-stringify x 13,634 ops/sec ±1.39% (85 runs sampled) fast-stable-stringify x 20,212 ops/sec ±1.20% (84 runs sampled) faster-stable-stringify x 15,549 ops/sec ±1.12% (84 runs sampled) The fastest is fast-stable-stringify ``` ## Enterprise support fast-json-stable-stringify package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-json-stable-stringify?utm_source=npm-fast-json-stable-stringify&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. # license [MIT](https://github.com/epoberezkin/fast-json-stable-stringify/blob/master/LICENSE) # eslint-visitor-keys [![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys) Constants and utilities about visitor keys to traverse AST. ## 💿 Installation Use [npm] to install. ```bash $ npm install eslint-visitor-keys ``` ### Requirements - [Node.js] 4.0.0 or later. ## 📖 Usage ```js const evk = require("eslint-visitor-keys") ``` ### evk.KEYS > type: `{ [type: string]: string[] | undefined }` Visitor keys. This keys are frozen. This is an object. Keys are the type of [ESTree] nodes. Their values are an array of property names which have child nodes. 
For example: ``` console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"] ``` ### evk.getKeys(node) > type: `(node: object) => string[]` Get the visitor keys of a given AST node. This is similar to `Object.keys(node)` of the ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`. This will be used to traverse unknown nodes. For example: ``` const node = { type: "AssignmentExpression", left: { type: "Identifier", name: "foo" }, right: { type: "Literal", value: 0 } } console.log(evk.getKeys(node)) // → ["type", "left", "right"] ``` ### evk.unionWith(additionalKeys) > type: `(additionalKeys: object) => { [type: string]: string[] | undefined }` Makes the union set of `evk.KEYS` and the given keys. - `additionalKeys` comes first, then `evk.KEYS` is concatenated after it. - Duplicate keys are removed, keeping the first occurrence. For example: ``` console.log(evk.unionWith({ MethodDefinition: ["decorators"] })) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... } ``` ## 📰 Change log See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases). ## 🍻 Contributing Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/). ### Development commands - `npm test` runs tests and measures code coverage. - `npm run lint` checks the source code with ESLint. - `npm run coverage` opens the code coverage report of the previous test with your default browser. - `npm run release` publishes this package to the [npm] registry. [npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree # <img src="./logo.png" alt="bn.js" width="160" height="160" /> > BigNum in pure javascript [![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js) ## Install `npm install --save bn.js` ## Usage ```js const BN = require('bn.js'); var a = new BN('dead', 16); var b = new BN('101010', 2); var res = a.add(b); console.log(res.toString(10)); // 57047 ``` **Note**: decimals are not supported in this library. ## Notation ### Prefixes There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name: * `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs * `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like `mod()`. In such cases, if the result would be negative, the modulo is added to the result to make it positive ### Postfixes * `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported. * `rn` - both argument and return value of the function are plain JavaScript Numbers. Decimals are not supported. ### Examples * `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a` * `a.umod(b)` - reduce `a` modulo `b`, returning positive value * `a.iushln(13)` - shift bits of `a` left by 13 ## Instructions Prefixes/postfixes are put in parens at the end of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian).
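As a quick illustration of the `endian` argument accepted by the conversion utilities listed below, here is a small sketch:

```js
const BN = require('bn.js');

const n = new BN('0102030405', 16);

// Same number, two byte orders; the second argument zero-pads the output to 6 bytes.
console.log(n.toArray('be', 6)); // [ 0, 1, 2, 3, 4, 5 ]
console.log(n.toArray('le', 6)); // [ 5, 4, 3, 2, 1, 0 ]
```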
### Utilities * `a.clone()` - clone number * `a.toString(base, length)` - convert to base-string and pad with zeroes * `a.toNumber()` - convert to Javascript Number (limited to 53 bits) * `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`) * `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding * `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array` * `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)` * `a.bitLength()` - get number of bits occupied * `a.zeroBits()` - return number of less-significant consequent zero bits (example: `1010000` has 4 zero bits) * `a.byteLength()` - return number of bytes occupied * `a.isNeg()` - true if the number is negative * `a.isEven()` - no comments * `a.isOdd()` - no comments * `a.isZero()` - no comments * `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`) * `a.lt(b)` - `a` less than `b` (`n`) * `a.lte(b)` - `a` less than or equals `b` (`n`) * `a.gt(b)` - `a` greater than `b` (`n`) * `a.gte(b)` - `a` greater than or equals `b` (`n`) * `a.eq(b)` - `a` equals `b` (`n`) * `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width * `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width * `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance * `BN.max(a, b)` - return `a` if `a` bigger than `b` * `BN.min(a, b)` - return `a` if `a` less than `b` ### Arithmetics * `a.neg()` - negate sign (`i`) * `a.abs()` - absolute value (`i`) * `a.add(b)` - addition (`i`, `n`, `in`) * `a.sub(b)` - subtraction (`i`, `n`, `in`) * `a.mul(b)` - multiply (`i`, `n`, `in`) * `a.sqr()` - square (`i`) * `a.pow(b)` - raise `a` to the power of `b` * `a.div(b)` - divide (`divn`, `idivn`) * `a.mod(b)` - reduct (`u`, `n`) (but no `umodn`) * `a.divmod(b)` - quotient and modulus obtained by dividing * `a.divRound(b)` - rounded division ### Bit operations * `a.or(b)` - or (`i`, `u`, `iu`) * `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future) * `a.xor(b)` - xor (`i`, `u`, `iu`) * `a.setn(b, value)` - set specified bit to `value` * `a.shln(b)` - shift left (`i`, `u`, `iu`) * `a.shrn(b)` - shift right (`i`, `u`, `iu`) * `a.testn(b)` - test if specified bit is set * `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`) * `a.bincn(b)` - add `1 << b` to the number * `a.notn(w)` - not (for the width specified by `w`) (`i`) ### Reduction * `a.gcd(b)` - GCD * `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., gcd: ... }`) * `a.invm(b)` - inverse `a` modulo `b` ## Fast reduction When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using special algorithm for [Mersenne Prime][1]. ### Reduction context To enable this tricks one should create a reduction context: ```js var red = BN.red(num); ``` where `num` is just a BN instance. Or: ```js var red = BN.red(primeName); ``` Where `primeName` is either of these [Mersenne Primes][1]: * `'k256'` * `'p224'` * `'p192'` * `'p25519'` Or: ```js var red = BN.mont(num); ``` To reduce numbers with [Montgomery trick][0]. 
`.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`. ### Converting numbers Before performing anything in reduction context - numbers should be converted to it. Usually, this means that one should: * Convert inputs to reducted ones * Operate on them in reduction context * Convert outputs back from the reduction context Here is how one may convert numbers to `red`: ```js var redA = a.toRed(red); ``` Where `red` is a reduction context created using instructions above Here is how to convert them back: ```js var a = redA.fromRed(); ``` ### Red instructions Most of the instructions from the very start of this readme have their counterparts in red context: * `a.redAdd(b)`, `a.redIAdd(b)` * `a.redSub(b)`, `a.redISub(b)` * `a.redShl(num)` * `a.redMul(b)`, `a.redIMul(b)` * `a.redSqr()`, `a.redISqr()` * `a.redSqrt()` - square root modulo reduction context's prime * `a.redInvm()` - modular inverse of the number * `a.redNeg()` * `a.redPow(b)` - modular exponentiation ### Number Size Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers. ## LICENSE This software is licensed under the MIT License. [0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication [1]: https://en.wikipedia.org/wiki/Mersenne_prime # tar-fs filesystem bindings for [tar-stream](https://github.com/mafintosh/tar-stream). ``` npm install tar-fs ``` [![build status](https://secure.travis-ci.org/mafintosh/tar-fs.png)](http://travis-ci.org/mafintosh/tar-fs) ## Usage tar-fs allows you to pack directories into tarballs and extract tarballs into directories. It doesn't gunzip for you, so if you want to extract a `.tar.gz` with this you'll need to use something like [gunzip-maybe](https://github.com/mafintosh/gunzip-maybe) in addition to this. ``` js var tar = require('tar-fs') var fs = require('fs') // packing a directory tar.pack('./my-directory').pipe(fs.createWriteStream('my-tarball.tar')) // extracting a directory fs.createReadStream('my-other-tarball.tar').pipe(tar.extract('./my-other-directory')) ``` To ignore various files when packing or extracting add a ignore function to the options. `ignore` is also an alias for `filter`. Additionally you get `header` if you use ignore while extracting. That way you could also filter by metadata. ``` js var pack = tar.pack('./my-directory', { ignore: function(name) { return path.extname(name) === '.bin' // ignore .bin files when packing } }) var extract = tar.extract('./my-other-directory', { ignore: function(name) { return path.extname(name) === '.bin' // ignore .bin files inside the tarball when extracing } }) var extractFilesDirs = tar.extract('./my-other-other-directory', { ignore: function(_, header) { // pass files & directories, ignore e.g. 
symlinks return header.type !== 'file' && header.type !== 'directory' } }) ``` You can also specify which entries to pack using the `entries` option ```js var pack = tar.pack('./my-directory', { entries: ['file1', 'subdir/file2'] // only the specific entries will be packed }) ``` If you want to modify the headers when packing/extracting add a map function to the options ``` js var pack = tar.pack('./my-directory', { map: function(header) { header.name = 'prefixed/'+header.name return header } }) var extract = tar.extract('./my-directory', { map: function(header) { header.name = 'another-prefix/'+header.name return header } }) ``` Similarly you can use `mapStream` incase you wanna modify the input/output file streams ``` js var pack = tar.pack('./my-directory', { mapStream: function(fileStream, header) { // NOTE: the returned stream HAS to have the same length as the input stream. // If not make sure to update the size in the header passed in here. if (path.extname(header.name) === '.js') { return fileStream.pipe(someTransform) } return fileStream; } }) var extract = tar.extract('./my-directory', { mapStream: function(fileStream, header) { if (path.extname(header.name) === '.js') { return fileStream.pipe(someTransform) } return fileStream; } }) ``` Set `options.fmode` and `options.dmode` to ensure that files/directories extracted have the corresponding modes ``` js var extract = tar.extract('./my-directory', { dmode: parseInt(555, 8), // all dirs should be readable fmode: parseInt(444, 8) // all files should be readable }) ``` It can be useful to use `dmode` and `fmode` if you are packing/unpacking tarballs between *nix/windows to ensure that all files/directories unpacked are readable. Alternatively you can set `options.readable` and/or `options.writable` to set the dmode and fmode to readable/writable. ``` js var extract = tar.extract('./my-directory', { readable: true, // all dirs and files should be readable writable: true, // all dirs and files should be writable }) ``` Set `options.strict` to `false` if you want to ignore errors due to unsupported entry types (like device files) To dereference symlinks (pack the contents of the symlink instead of the link itself) set `options.dereference` to `true`. ## Copy a directory Copying a directory with permissions and mtime intact is as simple as ``` js tar.pack('source-directory').pipe(tar.extract('dest-directory')) ``` ## Interaction with [`tar-stream`](https://github.com/mafintosh/tar-stream) Use `finalize: false` and the `finish` hook to leave the pack stream open for further entries (see [`tar-stream#pack`](https://github.com/mafintosh/tar-stream#packing)), and use `pack` to pass an existing pack stream. ``` js var mypack = tar.pack('./my-directory', { finalize: false, finish: function(sameAsMypack) { mypack.entry({name: 'generated-file.txt'}, "hello") tar.pack('./other-directory', { pack: sameAsMypack }) } }) ``` ## Performance Packing and extracting a 6.1 GB with 2496 directories and 2398 files yields the following results on my Macbook Air. [See the benchmark here](https://gist.github.com/mafintosh/8102201) * tar-fs: 34.261 seconds * [node-tar](https://github.com/isaacs/node-tar): 366.123 seconds (or 10x slower) ## License MIT # tslib This is a runtime library for [TypeScript](http://www.typescriptlang.org/) that contains all of the TypeScript helper functions. This library is primarily used by the `--importHelpers` flag in TypeScript. 
When using `--importHelpers`, a module that uses helper functions like `__extends` and `__assign` in the following emitted file: ```ts var __assign = (this && this.__assign) || Object.assign || function(t) { for (var s, i = 1, n = arguments.length; i < n; i++) { s = arguments[i]; for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p]; } return t; }; exports.x = {}; exports.y = __assign({}, exports.x); ``` will instead be emitted as something like the following: ```ts var tslib_1 = require("tslib"); exports.x = {}; exports.y = tslib_1.__assign({}, exports.x); ``` Because this can avoid duplicate declarations of things like `__extends`, `__assign`, etc., this means delivering users smaller files on average, as well as less runtime overhead. For optimized bundles with TypeScript, you should absolutely consider using `tslib` and `--importHelpers`. # Installing For the latest stable version, run: ## npm ```sh # TypeScript 2.3.3 or later npm install tslib # TypeScript 2.3.2 or earlier npm install tslib@1.6.1 ``` ## yarn ```sh # TypeScript 2.3.3 or later yarn add tslib # TypeScript 2.3.2 or earlier yarn add tslib@1.6.1 ``` ## bower ```sh # TypeScript 2.3.3 or later bower install tslib # TypeScript 2.3.2 or earlier bower install tslib@1.6.1 ``` ## JSPM ```sh # TypeScript 2.3.3 or later jspm install tslib # TypeScript 2.3.2 or earlier jspm install tslib@1.6.1 ``` # Usage Set the `importHelpers` compiler option on the command line: ``` tsc --importHelpers file.ts ``` or in your tsconfig.json: ```json { "compilerOptions": { "importHelpers": true } } ``` #### For bower and JSPM users You will need to add a `paths` mapping for `tslib`, e.g. For Bower users: ```json { "compilerOptions": { "module": "amd", "importHelpers": true, "baseUrl": "./", "paths": { "tslib" : ["bower_components/tslib/tslib.d.ts"] } } } ``` For JSPM users: ```json { "compilerOptions": { "module": "system", "importHelpers": true, "baseUrl": "./", "paths": { "tslib" : ["jspm_packages/npm/tslib@1.[version].0/tslib.d.ts"] } } } ``` # Contribute There are many ways to [contribute](https://github.com/Microsoft/TypeScript/blob/master/CONTRIBUTING.md) to TypeScript. * [Submit bugs](https://github.com/Microsoft/TypeScript/issues) and help us verify fixes as they are checked in. * Review the [source code changes](https://github.com/Microsoft/TypeScript/pulls). * Engage with other TypeScript users and developers on [StackOverflow](http://stackoverflow.com/questions/tagged/typescript). * Join the [#typescript](http://twitter.com/#!/search/realtime/%23typescript) discussion on Twitter. * [Contribute bug fixes](https://github.com/Microsoft/TypeScript/blob/master/CONTRIBUTING.md). # Documentation * [Quick tutorial](http://www.typescriptlang.org/Tutorial) * [Programming handbook](http://www.typescriptlang.org/Handbook) * [Homepage](http://www.typescriptlang.org/) # jest-mock ## API ```js import {ModuleMocker} from 'jest-mock'; ``` ### `constructor(global)` Creates a new module mocker that generates mocks as if they were created in an environment with the given global object. ### `generateFromMetadata(metadata)` Generates a mock based on the given metadata (Metadata for the mock in the schema returned by the getMetadata method of this module). Mocks treat functions specially, and all mock functions have additional members, described in the documentation for `fn` in this module. One important note: function prototypes are handled specially by this mocking framework. 
For functions with prototypes, when called as a constructor, the mock will install mocked function members on the instance. This allows different instances of the same constructor to have different values for its mocks member and its return values. ### `getMetadata(component)` Inspects the argument and returns its schema in the following recursive format: ``` { type: ... members: {} } ``` Where type is one of `array`, `object`, `function`, or `ref`, and members is an optional dictionary where the keys are member names and the values are metadata objects. Function prototypes are defined by defining metadata for the `member.prototype` of the function. The type of a function prototype should always be `object`. For instance, a class might be defined like this: ```js const classDef = { type: 'function', members: { staticMethod: {type: 'function'}, prototype: { type: 'object', members: { instanceMethod: {type: 'function'}, }, }, }, }; ``` Metadata may also contain references to other objects defined within the same metadata object. The metadata for the referent must be marked with `refID` key and an arbitrary value. The referrer must be marked with a `ref` key that has the same value as object with refID that it refers to. For instance, this metadata blob: ```js const refID = { type: 'object', refID: 1, members: { self: {ref: 1}, }, }; ``` defines an object with a slot named `self` that refers back to the object. ### `fn` Generates a stand-alone function with members that help drive unit tests or confirm expectations. Specifically, functions returned by this method have the following members: ##### `.mock` An object with three members, `calls`, `instances` and `invocationCallOrder`, which are all lists. The items in the `calls` list are the arguments with which the function was called. The "instances" list stores the value of 'this' for each call to the function. This is useful for retrieving instances from a constructor. The `invocationCallOrder` lists the order in which the mock was called in relation to all mock calls, starting at 1. ##### `.mockReturnValueOnce(value)` Pushes the given value onto a FIFO queue of return values for the function. ##### `.mockReturnValue(value)` Sets the default return value for the function. ##### `.mockImplementationOnce(function)` Pushes the given mock implementation onto a FIFO queue of mock implementations for the function. ##### `.mockImplementation(function)` Sets the default mock implementation for the function. ##### `.mockReturnThis()` Syntactic sugar for .mockImplementation(function() {return this;}) In case both `mockImplementationOnce()/mockImplementation()` and `mockReturnValueOnce()/mockReturnValue()` are called. The priority of which to use is based on what is the last call: - if the last call is mockReturnValueOnce() or mockReturnValue(), use the specific return value or default return value. If specific return values are used up or no default return value is set, fall back to try mockImplementation(); - if the last call is mockImplementationOnce() or mockImplementation(), run the specific implementation and return the result or run default implementation and return the result. 
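A minimal sketch of these priority rules, using the `ModuleMocker` API described above (the values are illustrative only):

```js
import {ModuleMocker} from 'jest-mock';

const mocker = new ModuleMocker(globalThis);
const fn = mocker.fn(() => 'implementation');

fn.mockReturnValueOnce('queued');

fn(); // 'queued' (the one-shot return value is consumed first)
fn(); // 'implementation' (no default return value is set, so the mock
      // falls back to the implementation)
```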
# NEAR CLI (command line interface) [![Build Status](https://travis-ci.com/near/near-cli.svg?branch=master)](https://travis-ci.com/near/near-cli) [![Gitpod Ready-to-Code](https://img.shields.io/badge/Gitpod-Ready--to--Code-blue?logo=gitpod)](https://gitpod.io/#https://github.com/near/near-cli) NEAR CLI is a Node.js application that relies on [`near-api-js`](https://github.com/near/near-api-js) to connect to and interact with the NEAR blockchain. Create accounts, access keys, sign & send transactions with this versatile command line interface tool. **Note:** Node.js version 10+ is required to run NEAR CLI. ## Release notes **Release notes and unreleased changes can be found in the [CHANGELOG](CHANGELOG.md)** ## Overview _Click on a command for more information and examples._ | Command | Description | | ----------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | | **ACCESS KEYS** | | | [`near login`](#near-login) | stores a full access key locally using [NEAR Wallet](https://wallet.testnet.near.org/) | | [`near keys`](#near-keys) | displays all access keys and their details for a given account | | [`near generate-key`](#near-generate-key) | generates a local key pair **or** shows public key & [implicit account](http://docs.near.org/docs/roles/integrator/implicit-accounts) | | [`near add-key`](#near-add-key) | adds a new access key to an account | | [`near delete-key`](#near-delete-key) | deletes an access key from an account | | **ACCOUNTS** | | | [`near create-account`](#near-create-account) | creates an account | | [`near state`](#near-state) | shows general details of an account | | [`near keys`](#near-keys) | displays all access keys for a given account | | [`near send`](#near-send) | sends tokens from one account to another | | [`near delete`](#near-delete) | deletes an account and transfers remaining balance to a beneficiary account | | **CONTRACTS** | | | [`near deploy`](#near-deploy) | deploys a smart contract to the NEAR blockchain | | [`near dev-deploy`](#near-dev-deploy) | creates a development account and deploys a contract to it _(`testnet` only)_ | | [`near call`](#near-call) | makes a contract call which can invoke `change` _or_ `view` methods | | [`near view`](#near-view) | makes a contract call which can **only** invoke a `view` method | | **TRANSACTIONS** | | | [`near tx-status`](#near-tx-status) | queries a transaction's status by `txHash` | | **VALIDATORS** | | | [`near validators current`](#near-validators-current) | displays current [epoch](http://docs.near.org/docs/concepts/epoch) validator pool details | | [`near validators next`](#near-validators-next) | displays validator details for the next [epoch](http://docs.near.org/docs/concepts/epoch) | | [`near proposals`](#near-proposals) | displays validator proposals for the [epoch](http://docs.near.org/docs/concepts/epoch) _after_ next | | **REPL** | | | [`near repl`](#near-repl) | launches an interactive connection to the NEAR blockchain ([REPL](https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop)) | | | can also run a JS/TS file which exports an async main function that takes a [context](./context/index.d.ts) object | [ [**OPTIONS**](#options) ] > For EVM support see [Project Aurora's](https://aurora.dev) [`aurora-cli`](https://github.com/aurora-is-near/aurora-cli). --- ## Setup ### Installation > Make sure you have a current version of `npm` and `NodeJS` installed. 
#### Mac and Linux

1. Install `npm` and `node` using a package manager like `nvm`, as there are sometimes issues using Ledger due to how OS X handles node packages related to USB devices. [[click here]](https://nodejs.org/en/download/package-manager/)
2. Ensure you have installed Node version 12 or above.
3. Install `near-cli` globally by running:

```bash
npm install -g near-cli
```

#### Windows

> For Windows users, we recommend using Windows Subsystem for Linux (`WSL`).

1. Install `WSL` [[click here]](https://docs.microsoft.com/en-us/windows/wsl/install-manual#downloading-distros)
2. Install `npm` [[click here]](https://www.npmjs.com/get-npm)
3. Install `Node.js` [[click here]](https://nodejs.org/en/download/package-manager/)
4. Change the `npm` default directory [[click here]](https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally#manually-change-npms-default-directory)
   - This is to avoid any permission issues with `WSL`
5. Open `WSL` and install `near-cli` globally by running:

```bash
npm install -g near-cli
```

---

### Network selection

> The default network for `near-cli` is `testnet`.

- You can change the network by prepending an environment variable to your command.

```bash
NEAR_ENV=betanet near send ...
```

- Alternatively, you can set up a global environment variable by running:

```bash
export NEAR_ENV=mainnet
```

---

### Custom RPC server selection

You can set a custom RPC server URL by setting one of these environment variables:

```bash
NEAR_CLI_MAINNET_RPC_SERVER_URL
NEAR_CLI_TESTNET_RPC_SERVER_URL
NEAR_CLI_BETANET_RPC_SERVER_URL
NEAR_CLI_GUILDNET_RPC_SERVER_URL
NEAR_CLI_LOCALNET_RPC_SERVER_URL
NEAR_CLI_CI_RPC_SERVER_URL
```

Clear them if you want to go back to the default RPC server.

Example:

```bash
export NEAR_CLI_TESTNET_RPC_SERVER_URL=<put_your_rpc_server_url_here>
```

---

### RPC server API Keys

Some RPC servers may require that you provide a valid API key to use them. You can set the `x-api-key` header for a server by running the following command:

```bash
near set-api-key <rpc-server-url> <api-key>
```

This API key will be saved in the config and used for each command you execute with this RPC URL.

---

## Access Keys

### `near login`

> locally stores a full access key of an account you created with [NEAR Wallet](https://wallet.testnet.near.org/).

- arguments: `none`
- options: `default`

**Example:**

```bash
near login
```

#### Access Key Location:

- Once complete, your access key will be stored locally in a hidden directory called `.near-credentials`
  - This directory is located at the root of your `HOME` directory:
    - `~/.near-credentials` _(MAC / Linux)_
    - `C:\Users\YOUR_ACCOUNT\.near-credentials` _(Windows)_
- Inside `.near-credentials`, access keys are organized in network subdirectories:
  - `default` _for `testnet`_
  - `betanet`
  - `mainnet`
- These network subdirectories contain `.JSON` objects with an:
  - `account_id`
  - `private_key`
  - `public_key`

**Example:**

```json
{
  "account_id": "example-acct.testnet",
  "public_key": "ed25519:7ns2AZVaG8XZrFrgRw7g8qhgddNTN64Zkz7Eo8JBnV5g",
  "private_key": "ed25519:4Ijd3vNUmdWJ4L922BxcsGN1aDrdpvUHEgqLQAUSLmL7S2qE9tYR9fqL6DqabGGDxCSHkKwdaAGNcHJ2Sfd"
}
```

---

### `near keys`

> Displays all access keys for a given account.
- arguments: `accountId` - options: `default` **Example:** ```bash near keys client.chainlink.testnet ``` <details> <summary> <strong>Example Response</strong> </summary> <p> ``` Keys for account client.chainlink.testnet [ { public_key: 'ed25519:4wrVrZbHrurMYgkcyusfvSJGLburmaw7m3gmCApxgvY4', access_key: { nonce: 97, permission: 'FullAccess' } }, { public_key: 'ed25519:H9k5eiU4xXS3M4z8HzKJSLaZdqGdGwBG49o7orNC4eZW', access_key: { nonce: 88, permission: { FunctionCall: { allowance: '18483247987345065500000000', receiver_id: 'client.chainlink.testnet', method_names: [ 'get_token_price', [length]: 1 ] } } } }, [length]: 2 ] ``` </p> </details> --- ### `near generate-key` > Creates a key pair locally in `.near-credentials` **or** displays public key from Ledger or seed phrase. - arguments: `accountId` or `none` - options: `--useLedgerKey`, `--seedPhrase`, or `--seedPath` **Note:** There are several ways to use `generate-key` that return very different results. Please reference the examples below for further details. --- #### 1) `near generate-key` > Creates a key pair locally in `.near-credentials` with an [implicit account](http://docs.near.org/docs/roles/integrator/implicit-accounts) as the accountId. _(hash representation of the public key)_ ```bash near generate-key ``` <details> <summary><strong>Example Response</strong></summary> <p> ```bash Key pair with ed25519:33Vn9VtNEtWQPPd1f4jf5HzJ5weLcvGHU8oz7o5UnPqy public key for an account "1e5b1346bdb4fc5ccd465f6757a9082a84bcacfd396e7d80b0c726252fe8b3e8" ``` </p> </details> --- #### 2) `near generate-key accountId` > Creates a key pair locally in `.near-credentials` with an `accountId` that you specify. **Note:** This does NOT create an account with this name, and will overwrite an existing `.json` file with the same name. ```bash near generate-key example.testnet ``` <details> <summary><strong>Example Response</strong></summary> <p> ```bash Key pair with ed25519:CcH3oMEFg8tpJLekyvF7Wp49G81K3QLhGbaWEFwtCjht public key for an account "example.testnet" ``` </p> </details> --- #### 3a) `near generate-key --useLedgerKey` > Uses a connected Ledger device to display a public key and [implicit account](http://docs.near.org/docs/roles/integrator/implicit-accounts) using the default HD path (`"44'/397'/0'/0'/1'"`) ```bash near generate-key --useLedgerKey ``` You should then see the following prompt to confirm this request on your Ledger device: Make sure to connect your Ledger and open NEAR app Waiting for confirmation on Ledger... After confirming the request on your Ledger device, a public key and implicit accountId will be displayed. <details> <summary><strong>Example Response</strong></summary> <p> ```bash Using public key: ed25519:B22RP10g695wyeRvKIWv61NjmQZEkWTMzAYgdfx6oSeB2 Implicit account: 42c320xc20739fd9a6bqf2f89z61rd14efe5d3de234199bc771235a4bb8b0e1 ``` </p> </details> --- #### 3b) `near generate-key --useLedgerKey="HD path you specify"` > Uses a connected Ledger device to display a public key and [implicit account](http://docs.near.org/docs/roles/integrator/implicit-accounts) using a custom HD path. ```bash near generate-key --useLedgerKey="44'/397'/0'/0'/2'" ``` You should then see the following prompt to confirm this request on your Ledger device: Make sure to connect your Ledger and open NEAR app Waiting for confirmation on Ledger... After confirming the request on your Ledger device, a public key and implicit accountId will be displayed. 
<details>
<summary><strong>Example Response</strong></summary>
<p>

```bash
Using public key: ed25519:B22RP10g695wye3dfa32rDjmQZEkWTMzAYgCX6oSeB2
Implicit account: 42c320xc20739ASD9a6bqf2Dsaf289z61rd14efe5d3de23213789009afDsd5bb8b0e1
```

</p>
</details>

---

#### 4a) `near generate-key --seedPhrase="your seed phrase"`

> Uses a seed phrase to display a public key and [implicit account](http://docs.near.org/docs/roles/integrator/implicit-accounts)

```bash
near generate-key --seedPhrase="cow moon right send now cool dense quark pretty see light after"
```

<details>
<summary><strong>Example Response</strong></summary>
<p>

Key pair with ed25519:GkMNfc92fwM1AmwH1MTjF4b7UZuceamsq96XPkHsQ9vi public key for an account "e9fa50ac20522987a87e566fcd6febdc97bd35c8c489999ca8aff465c56969c3"

</p>
</details>

---

#### 4b) `near generate-key accountId --seedPhrase="your seed phrase"`

> Uses a seed phrase to display a public key **without** the [implicit account](http://docs.near.org/docs/roles/integrator/implicit-accounts).

```bash
near generate-key example.testnet --seedPhrase="cow moon right send now cool dense quark pretty see light after"
```

<details>
<summary><strong>Example Response</strong></summary>
<p>

Key pair with ed25519:GkMNfc92fwM1AmwH1MTjF4b7UZuceamsq96XPkHsQ9vi public key for an account "example.testnet"

</p>
</details>

---

### `near add-key`

> Adds either a **full access** or **function access** key to a given account.

**Note:** You will use an _existing_ full access key for the account you would like to add a _new_ key to. ([`near login`](http://docs.near.org/docs/tools/near-cli#near-login))

#### 1) add a `full access` key

- arguments: `accountId` `publicKey`

**Example:**

```bash
near add-key example-acct.testnet Cxg2wgFYrdLTEkMu6j5D6aEZqTb3kXbmJygS48ZKbo1S
```

<details>
<summary><strong>Example Response</strong></summary>
<p>

Adding full access key = Cxg2wgFYrdLTEkMu6j5D6aEZqTb3kXbmJygS48ZKbo1S to example-acct.testnet.

Transaction Id EwU1ooEvkR42HvGoJHu5ou3xLYT3JcgQwFV3fAwevGJg
To see the transaction in the transaction explorer, please open this url in your browser
https://explorer.testnet.near.org/transactions/EwU1ooEvkR42HvGoJHu5ou3xLYT3JcgQwFV3fAwevGJg

</p>
</details>

#### 2) add a `function access` key

- arguments: `accountId` `publicKey` `--contract-id`
- options: `--method-names` `--allowance`

> `accountId` is the account you are adding the key to
>
> `--contract-id` is the contract you are allowing methods to be called on
>
> `--method-names` are optional and if omitted, all methods of the `--contract-id` can be called.
>
> `--allowance` is the amount of Ⓝ the key is allowed to spend on gas fees _only_. If omitted, the key will only be able to call view methods.

**Note:** Each transaction made with this key will have gas fees deducted from the initial allowance, and once it runs out a new key must be issued.

**Example:**

```bash
near add-key example-acct.testnet GkMNfc92fwM1AmwH1MTjF4b7UZuceamsq96XPkHsQ9vi --contract-id example-contract.testnet --method-names example_method --allowance 30000000000
```

<details>
<summary><strong>Example Response</strong></summary>
<p>

Adding function call access key = GkMNfc92fwM1AmwH1MTjF4b7UZuceamsq96XPkHsQ9vi to example-acct.testnet.
Transaction Id H2BQL9fXVmdTbwkXcMFfZ7qhZqC8fFhsA8KDHFdT9q2r To see the transaction in the transaction explorer, please open this url in your browser https://explorer.testnet.near.org/transactions/H2BQL9fXVmdTbwkXcMFfZ7qhZqC8fFhsA8KDHFdT9q2r </p> </details> --- ### `near delete-key` > Deletes an existing key for a given account. - arguments: `accountId` `publicKey` - options: `default` **Note:** You will need separate full access key for the account you would like to delete a key from. ([`near login`](http://docs.near.org/docs/tools/near-cli#near-login)) **Example:** ```bash near delete-key example-acct.testnet Cxg2wgFYrdLTEkMu6j5D6aEZqTb3kXbmJygS48ZKbo1S ``` <details> <summary><strong>Example Response</strong></summary> <p> Transaction Id 4PwW7vjzTCno7W433nu4ieA6FvsAjp7zNFwicNLKjQFT To see the transaction in the transaction explorer, please open this url in your browser https://explorer.testnet.near.org/transactions/4PwW7vjzTCno7W433nu4ieA6FvsAjp7zNFwicNLKjQFT </p> </details> --- ## Accounts ### `near create-account` > Creates an account using a `--masterAccount` that will pay for the account's creation and any initial balance. - arguments: `accountId` `--masterAccount` - options: `--initialBalance` **Note:** You will only be able to create subaccounts of the `--masterAccount` unless the name of the new account is ≥ 32 characters. **Example**: ```bash near create-account 12345678901234567890123456789012 --masterAccount example-acct.testnet ``` **Subaccount example:** ```bash near create-account sub-acct.example-acct.testnet --masterAccount example-acct.testnet ``` **Example using `--initialBalance`:** ```bash near create-account sub-acct2.example-acct.testnet --masterAccount example-acct.testnet --initialBalance 10 ``` <details> <summary><strong>Example Response</strong></summary> <p> Saving key to '/HOME_DIR/.near-credentials/default/sub-acct2.example-acct.testnet.json' Account sub-acct2.example-acct.testnet for network "default" was created. </p> </details> --- ### `near state` > Shows details of an account's state. - arguments: `accountId` - options: `default` **Example:** ```bash near state example.testnet ``` <details> <summary><strong>Example Response</strong></summary> <p> ```json { "amount": "99999999303364037168535000", "locked": "0", "code_hash": "G1PCjeQbvbUsJ8piXNb7Yg6dn3mfivDQN7QkvsVuMt4e", "storage_usage": 53528, "storage_paid_at": 0, "block_height": 21577354, "block_hash": "AWu1mrT3eMJLjqyhNHvMKrrbahN6DqcNxXanB5UH1RjB", "formattedAmount": "99.999999303364037168535" } ``` </p> </details> --- ### `near send` > Sends NEAR tokens (Ⓝ) from one account to another. - arguments: `senderId` `receiverId` `amount` - options: `default` **Note:** You will need a full access key for the sending account. ([`near login`](http://docs.near.org/docs/tools/near-cli#near-login)) **Example:** ```bash near send sender.testnet receiver.testnet 10 ``` <details> <summary><strong>Example Response</strong></summary> <p> Sending 10 NEAR to receiver.testnet from sender.testnet Transaction Id BYTr6WNyaEy2ykAiQB9P5VvTyrJcFk6Yw95HPhXC6KfN To see the transaction in the transaction explorer, please open this url in your browser https://explorer.testnet.near.org/transactions/BYTr6WNyaEy2ykAiQB9P5VvTyrJcFk6Yw95HPhXC6KfN </p> </details> --- ### `near delete` > Deletes an account and transfers remaining balance to a beneficiary account. 
- arguments: `accountId` `beneficiaryId` - options: `default` **Example:** ```bash near delete sub-acct2.example-acct.testnet example-acct.testnet ``` <details> <summary><strong>Example Response</strong></summary> <p> Deleting account. Account id: sub-acct2.example-acct.testnet, node: https://rpc.testnet.near.org, helper: https://helper.testnet.near.org, beneficiary: example-acct.testnet Transaction Id 4x8xohER1E3yxeYdXPfG8GvXin1ShiaroqE5GdCd5YxX To see the transaction in the transaction explorer, please open this url in your browser https://explorer.testnet.near.org/transactions/4x8xohER1E3yxeYdXPfG8GvXin1ShiaroqE5GdCd5YxX Account sub-acct2.example-acct.testnet for network "default" was deleted. </p> </details> --- ## Contracts ### `near deploy` > Deploys a smart contract to a given accountId. - arguments: `accountId` `.wasmFile` - options: `initFunction` `initArgs` `initGas` `initDeposit` **Note:** You will need a full access key for the account you are deploying the contract to. ([`near login`](http://docs.near.org/docs/tools/near-cli#near-login)) **Example:** ```bash near deploy --accountId example-contract.testnet --wasmFile out/example.wasm ``` **Initialize Example:** ```bash near deploy --accountId example-contract.testnet --wasmFile out/example.wasm --initFunction new --initArgs '{"owner_id": "example-contract.testnet", "total_supply": "10000000"}' ``` <details> <summary><strong>Example Response</strong></summary> <p> Starting deployment. Account id: example-contract.testnet, node: https://rpc.testnet.near.org, helper: https://helper.testnet.near.org, file: main.wasm Transaction Id G8GhhPuujMHTRnwursPXE1Lv5iUZ8WUecwiST1PcKWMt To see the transaction in the transaction explorer, please open this url in your browser https://explorer.testnet.near.org/transactions/G8GhhPuujMHTRnwursPXE1Lv5iUZ8WUecwiST1PcKWMt Done deploying to example-contract.testnet </p> </details> ### `near dev-deploy` > Creates a development account and deploys a smart contract to it. No access keys needed. **_(`testnet` only)_** - options: `wasmFile`, `initFunction`, `initArgs`, `initGas`, `initDeposit`, `initialBalance`, `force` **Example:** ```bash near dev-deploy --wasmFile out/example.wasm ``` **Initialize Example:** ```bash near dev-deploy --wasmFile out/example.wasm --initFunction new --initArgs '{"owner_id": "example-contract.testnet", "total_supply": "10000000"}' ``` <details> <summary><strong>Example Response</strong></summary> <p> Starting deployment. Account id: dev-1603749005325-6432576, node: https://rpc.testnet.near.org, helper: https://helper.testnet.near.org, file: out/main.wasm Transaction Id 5nixQT87KeN3eZFX7zwBLUAKSY4nyjhwzLF27SWWKkAp To see the transaction in the transaction explorer, please open this url in your browser https://explorer.testnet.near.org/transactions/5nixQT87KeN3eZFX7zwBLUAKSY4nyjhwzLF27SWWKkAp Done deploying to dev-1603749005325-6432576 </p> </details> --- ### `near call` > makes a contract call which can modify _or_ view state. **Note:** Contract calls require a transaction fee (gas) so you will need an access key for the `--accountId` that will be charged. 
([`near login`](http://docs.near.org/docs/tools/near-cli#near-login)) - arguments: `contractName` `method_name` `{ args }` `--accountId` - options: `--gas` `--deposit` **Example:** ```bash near call guest-book.testnet addMessage '{"text": "Aloha"}' --account-id example-acct.testnet ``` <details> <summary><strong>Example Response</strong></summary> <p> Scheduling a call: guest-book.testnet.addMessage({"text": "Aloha"}) Transaction Id FY8hBam2iyQfdHkdR1dp6w5XEPJzJSosX1wUeVPyUvVK To see the transaction in the transaction explorer, please open this url in your browser https://explorer.testnet.near.org/transactions/FY8hBam2iyQfdHkdR1dp6w5XEPJzJSosX1wUeVPyUvVK '' </p> </details> --- ### `near view` > Makes a contract call which can **only** view state. _(Call is free of charge)_ - arguments: `contractName` `method_name` `{ args }` - options: `default` **Example:** ```bash near view guest-book.testnet getMessages '{}' ``` <details> <summary><strong>Example Response</strong></summary> <p> View call: guest-book.testnet.getMessages({}) [ { premium: false, sender: 'waverlymaven.testnet', text: 'TGIF' }, { premium: true, sender: 'waverlymaven.testnet', text: 'Hello from New York 🌈' }, { premium: false, sender: 'fhr.testnet', text: 'Hi' }, { premium: true, sender: 'eugenethedream', text: 'test' }, { premium: false, sender: 'dongri.testnet', text: 'test' }, { premium: false, sender: 'dongri.testnet', text: 'hello' }, { premium: true, sender: 'dongri.testnet', text: 'hey' }, { premium: false, sender: 'hirokihori.testnet', text: 'hello' }, { premium: true, sender: 'eugenethedream', text: 'hello' }, { premium: false, sender: 'example-acct.testnet', text: 'Aloha' }, [length]: 10 ] </p> </details> --- ## NEAR EVM Contracts ### `near evm-view` > Makes an EVM contract call which can **only** view state. _(Call is free of charge)_ - arguments: `evmAccount` `contractName` `methodName` `[arguments]` `--abi` `--accountId` - options: `default` **Example:** ```bash near evm-view evm 0x89dfB1Cd61F05ad3971EC1f83056Fd9793c2D521 getAdopters '[]' --abi /path/to/contract/abi/Adoption.json --accountId test.near ``` <details> <summary><strong>Example Response</strong></summary> <p> ```json [ "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000", "0xCBdA96B3F2B8eb962f97AE50C3852CA976740e2B", "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000" ] ``` </p> </details> --- ### `near evm-call (deprecated)` > makes an EVM contract call which can modify _or_ view state. **Note:** Contract calls require a transaction fee (gas) so you will need an access key for the `--accountId` that will be charged. 
([`near login`](http://docs.near.org/docs/tools/near-cli#near-login)) - arguments: `evmAccount` `contractName` `methodName` `[arguments]` `--abi` `--accountId` - options: `default` (`--gas` and `--deposit` coming soon…) **Example:** ```bash near evm-call evm 0x89dfB1Cd61F05ad3971EC1f83056Fd9793c2D521 adopt '["6"]' --abi /path/to/contract/abi/Adoption.json --accountId test.near ``` <details> <summary><strong>Example Response</strong></summary> <p> Scheduling a call inside evm EVM: 0x89dfB1Cd61F05ad3971EC1f83056Fd9793c2D521.adopt() with args [ '6' ] </p> </details> --- ### `near evm-dev-init` > Used for running EVM tests — creates a given number of test accounts on the desired network using a master NEAR account - arguments: `accountId` - options: `numAccounts` ```bash NEAR_ENV=betanet near evm-dev-init you.betanet 3 ``` The above will create 3 subaccounts of `you.betanet`. This is useful for tests that require multiple accounts, for instance, sending fungible tokens back and forth. If the `3` value were to be omitted, it would use the default of 5. --- ## Transactions ### `near tx-status` > Queries transaction status by hash and accountId. - arguments: `txHash` `--accountId` - options: `default` **Example:** ```bash near tx-status FY8hBam2iyQfdHkdR1dp6w5XEPJzJSosX1wUeVPyUvVK --accountId guest-book.testnet ``` <details> <summary><strong>Example Response</strong></summary> <p> ```json Transaction guest-book.testnet:FY8hBam2iyQfdHkdR1dp6w5XEPJzJSosX1wUeVPyUvVK { status: { SuccessValue: '' }, transaction: { signer_id: 'example-acct.testnet', public_key: 'ed25519:AXZZKnp6ZcWXyRNdy8FztYrniKf1qt6YZw6mCCReXrDB', nonce: 20, receiver_id: 'guest-book.testnet', actions: [ { FunctionCall: { method_name: 'addMessage', args: 'eyJ0ZXh0IjoiQWxvaGEifQ==', gas: 300000000000000, deposit: '0' } }, [length]: 1 ], signature: 'ed25519:5S6nZXPU72nzgAsTQLmAFfdVSykdKHWhtPMb5U7duacfPdUjrj8ipJxuRiWkZ4yDodvDNt92wcHLJxGLsyNEsZNB', hash: 'FY8hBam2iyQfdHkdR1dp6w5XEPJzJSosX1wUeVPyUvVK' }, transaction_outcome: { proof: [ [length]: 0 ], block_hash: '6nsjvzt6C52SSuJ8UvfaXTsdrUwcx8JtHfnUj8XjdKy1', id: 'FY8hBam2iyQfdHkdR1dp6w5XEPJzJSosX1wUeVPyUvVK', outcome: { logs: [ [length]: 0 ], receipt_ids: [ '7n6wjMgpoBTp22ScLHxeMLzcCvN8Vf5FUuC9PMmCX6yU', [length]: 1 ], gas_burnt: 2427979134284, tokens_burnt: '242797913428400000000', executor_id: 'example-acct.testnet', status: { SuccessReceiptId: '7n6wjMgpoBTp22ScLHxeMLzcCvN8Vf5FUuC9PMmCX6yU' } } }, receipts_outcome: [ { proof: [ [length]: 0 ], block_hash: 'At6QMrBuFQYgEPAh6fuRBmrTAe9hXTY1NzAB5VxTH1J2', id: '7n6wjMgpoBTp22ScLHxeMLzcCvN8Vf5FUuC9PMmCX6yU', outcome: { logs: [ [length]: 0 ], receipt_ids: [ 'FUttfoM2odAhKNQrJ8F4tiBpQJPYu66NzFbxRKii294e', [length]: 1 ], gas_burnt: 3559403233496, tokens_burnt: '355940323349600000000', executor_id: 'guest-book.testnet', status: { SuccessValue: '' } } }, { proof: [ [length]: 0 ], block_hash: 'J7KjpMPzAqE7iX82FAQT3qERDs6UR1EAqBLPJXBzoLCk', id: 'FUttfoM2odAhKNQrJ8F4tiBpQJPYu66NzFbxRKii294e', outcome: { logs: [ [length]: 0 ], receipt_ids: [ [length]: 0 ], gas_burnt: 0, tokens_burnt: '0', executor_id: 'example-acct.testnet', status: { SuccessValue: '' } } }, [length]: 2 ] } ``` </p> </details> --- ## Validators ### `near validators current` > Displays details of current validators. 
> > - amount staked > - number of seats > - percentage of uptime > - expected block production > - blocks actually produced - arguments: `current` - options: `default` **Example:** ```bash near validators current ``` **Example for `mainnet`:** ```bash NEAR_ENV=mainnet near validators current ``` <details> <summary><strong>Example Response</strong></summary> <p> ```bash Validators (total: 49, seat price: 1,976,588): .--------------------------------------------------------------------------------------------------------------------. | Validator Id | Stake | Seats | % Online | Blocks produced | Blocks expected | |----------------------------------------------|------------|---------|----------|-----------------|-----------------| | cryptium.poolv1.near | 13,945,727 | 7 | 100% | 1143 | 1143 | | astro-stakers.poolv1.near | 11,660,189 | 5 | 100% | 817 | 817 | | blockdaemon.poolv1.near | 11,542,867 | 5 | 76.74% | 627 | 817 | | zavodil.poolv1.near | 11,183,187 | 5 | 100% | 818 | 818 | | bisontrails.poolv1.near | 10,291,696 | 5 | 99.38% | 810 | 815 | | dokiacapital.poolv1.near | 7,906,352 | 3 | 99.54% | 650 | 653 | | chorusone.poolv1.near | 7,480,508 | 3 | 100% | 490 | 490 | | figment.poolv1.near | 6,931,070 | 3 | 100% | 489 | 489 | | stardust.poolv1.near | 6,401,678 | 3 | 100% | 491 | 491 | | anonymous.poolv1.near | 6,291,821 | 3 | 97.55% | 479 | 491 | | d1.poolv1.near | 6,265,109 | 3 | 100% | 491 | 491 | | near8888.poolv1.near | 6,202,968 | 3 | 99.38% | 486 | 489 | | rekt.poolv1.near | 5,950,212 | 3 | 100% | 490 | 490 | | epic.poolv1.near | 5,639,256 | 2 | 100% | 326 | 326 | | fresh.poolv1.near | 5,460,410 | 2 | 100% | 327 | 327 | | buildlinks.poolv1.near | 4,838,398 | 2 | 99.38% | 325 | 327 | | jubi.poolv1.near | 4,805,921 | 2 | 100% | 326 | 326 | | openshards.poolv1.near | 4,644,553 | 2 | 100% | 326 | 326 | | jazza.poolv1.near | 4,563,432 | 2 | 100% | 327 | 327 | | northernlights.poolv1.near | 4,467,978 | 2 | 99.39% | 326 | 328 | | inotel.poolv1.near | 4,427,152 | 2 | 100% | 327 | 327 | | baziliknear.poolv1.near | 4,261,142 | 2 | 100% | 328 | 328 | | stakesabai.poolv1.near | 4,242,618 | 2 | 100% | 326 | 326 | | everstake.poolv1.near | 4,234,552 | 2 | 100% | 327 | 327 | | stakin.poolv1.near | 4,071,704 | 2 | 100% | 327 | 327 | | certusone.poolv1.near | 3,734,505 | 1 | 100% | 164 | 164 | | lux.poolv1.near | 3,705,394 | 1 | 100% | 163 | 163 | | staked.poolv1.near | 3,683,365 | 1 | 100% | 164 | 164 | | lunanova.poolv1.near | 3,597,231 | 1 | 100% | 163 | 163 | | appload.poolv1.near | 3,133,163 | 1 | 100% | 163 | 163 | | smart-stake.poolv1.near | 3,095,711 | 1 | 100% | 164 | 164 | | artemis.poolv1.near | 3,009,462 | 1 | 99.39% | 163 | 164 | | moonlet.poolv1.near | 2,790,296 | 1 | 100% | 163 | 163 | | nearfans.poolv1.near | 2,771,137 | 1 | 100% | 163 | 163 | | nodeasy.poolv1.near | 2,692,745 | 1 | 99.39% | 163 | 164 | | erm.poolv1.near | 2,653,524 | 1 | 100% | 164 | 164 | | zkv_staketosupportprivacy.poolv1.near | 2,548,343 | 1 | 99.39% | 163 | 164 | | dsrvlabs.poolv1.near | 2,542,925 | 1 | 100% | 164 | 164 | | 08investinwomen_runbybisontrails.poolv1.near | 2,493,123 | 1 | 100% | 163 | 163 | | electric.poolv1.near | 2,400,532 | 1 | 99.39% | 163 | 164 | | sparkpool.poolv1.near | 2,378,191 | 1 | 100% | 163 | 163 | | hashquark.poolv1.near | 2,376,424 | 1 | 100% | 164 | 164 | | masternode24.poolv1.near | 2,355,634 | 1 | 100% | 164 | 164 | | sharpdarts.poolv1.near | 2,332,398 | 1 | 99.38% | 162 | 163 | | fish.poolv1.near | 2,315,249 | 1 | 100% | 163 | 163 | | ashert.poolv1.near | 2,103,327 | 1 | 97.56% | 
160 | 164 | | 01node.poolv1.near | 2,058,200 | 1 | 100% | 163 | 163 | | finoa.poolv1.near | 2,012,304 | 1 | 100% | 163 | 163 | | majlovesreg.poolv1.near | 2,005,032 | 1 | 100% | 164 | 164 | '--------------------------------------------------------------------------------------------------------------------' ``` </p> </details> --- ### `near validators next` > Displays details for the next round of validators. > > - total number of seats available > - seat price > - amount staked > - number of seats assigned per validator - arguments: `next` - options: `default` **Example:** ```bash near validators next ``` **Example for `mainnet`:** ```bash NEAR_ENV=mainnet near validators next ``` <details> <summary><strong>Example Response</strong></summary> <p> ```bash Next validators (total: 49, seat price: 1,983,932): .----------------------------------------------------------------------------------------------. | Status | Validator | Stake | Seats | |----------|----------------------------------------------|--------------------------|---------| | Rewarded | cryptium.poolv1.near | 13,945,727 -> 14,048,816 | 7 | | Rewarded | astro-stakers.poolv1.near | 11,660,189 -> 11,704,904 | 5 | | Rewarded | blockdaemon.poolv1.near | 11,542,867 -> 11,545,942 | 5 | | Rewarded | zavodil.poolv1.near | 11,183,187 -> 11,204,123 | 5 | | Rewarded | bisontrails.poolv1.near | 10,291,696 -> 10,297,923 | 5 | | Rewarded | dokiacapital.poolv1.near | 7,906,352 -> 8,097,275 | 4 | | Rewarded | chorusone.poolv1.near | 7,480,508 -> 7,500,576 | 3 | | Rewarded | figment.poolv1.near | 6,931,070 -> 6,932,916 | 3 | | Rewarded | stardust.poolv1.near | 6,401,678 -> 6,449,363 | 3 | | Rewarded | anonymous.poolv1.near | 6,291,821 -> 6,293,497 | 3 | | Rewarded | d1.poolv1.near | 6,265,109 -> 6,266,777 | 3 | | Rewarded | near8888.poolv1.near | 6,202,968 -> 6,204,620 | 3 | | Rewarded | rekt.poolv1.near | 5,950,212 -> 5,951,797 | 2 | | Rewarded | epic.poolv1.near | 5,639,256 -> 5,640,758 | 2 | | Rewarded | fresh.poolv1.near | 5,460,410 -> 5,461,811 | 2 | | Rewarded | buildlinks.poolv1.near | 4,838,398 -> 4,839,686 | 2 | | Rewarded | jubi.poolv1.near | 4,805,921 -> 4,807,201 | 2 | | Rewarded | openshards.poolv1.near | 4,644,553 -> 4,776,692 | 2 | | Rewarded | jazza.poolv1.near | 4,563,432 -> 4,564,648 | 2 | | Rewarded | northernlights.poolv1.near | 4,467,978 -> 4,469,168 | 2 | | Rewarded | inotel.poolv1.near | 4,427,152 -> 4,428,331 | 2 | | Rewarded | baziliknear.poolv1.near | 4,261,142 -> 4,290,338 | 2 | | Rewarded | stakesabai.poolv1.near | 4,242,618 -> 4,243,748 | 2 | | Rewarded | everstake.poolv1.near | 4,234,552 -> 4,235,679 | 2 | | Rewarded | stakin.poolv1.near | 4,071,704 -> 4,072,773 | 2 | | Rewarded | certusone.poolv1.near | 3,734,505 -> 3,735,500 | 1 | | Rewarded | lux.poolv1.near | 3,705,394 -> 3,716,381 | 1 | | Rewarded | staked.poolv1.near | 3,683,365 -> 3,684,346 | 1 | | Rewarded | lunanova.poolv1.near | 3,597,231 -> 3,597,836 | 1 | | Rewarded | appload.poolv1.near | 3,133,163 -> 3,152,302 | 1 | | Rewarded | smart-stake.poolv1.near | 3,095,711 -> 3,096,509 | 1 | | Rewarded | artemis.poolv1.near | 3,009,462 -> 3,010,265 | 1 | | Rewarded | moonlet.poolv1.near | 2,790,296 -> 2,948,565 | 1 | | Rewarded | nearfans.poolv1.near | 2,771,137 -> 2,771,875 | 1 | | Rewarded | nodeasy.poolv1.near | 2,692,745 -> 2,693,463 | 1 | | Rewarded | erm.poolv1.near | 2,653,524 -> 2,654,231 | 1 | | Rewarded | dsrvlabs.poolv1.near | 2,542,925 -> 2,571,865 | 1 | | Rewarded | zkv_staketosupportprivacy.poolv1.near | 2,548,343 -> 2,549,022 | 1 | | Rewarded | 
08investinwomen_runbybisontrails.poolv1.near | 2,493,123 -> 2,493,787 | 1 | | Rewarded | masternode24.poolv1.near | 2,355,634 -> 2,456,226 | 1 | | Rewarded | fish.poolv1.near | 2,315,249 -> 2,415,831 | 1 | | Rewarded | electric.poolv1.near | 2,400,532 -> 2,401,172 | 1 | | Rewarded | sparkpool.poolv1.near | 2,378,191 -> 2,378,824 | 1 | | Rewarded | hashquark.poolv1.near | 2,376,424 -> 2,377,057 | 1 | | Rewarded | sharpdarts.poolv1.near | 2,332,398 -> 2,332,948 | 1 | | Rewarded | ashert.poolv1.near | 2,103,327 -> 2,103,887 | 1 | | Rewarded | 01node.poolv1.near | 2,058,200 -> 2,058,760 | 1 | | Rewarded | finoa.poolv1.near | 2,012,304 -> 2,015,808 | 1 | | Rewarded | majlovesreg.poolv1.near | 2,005,032 -> 2,005,566 | 1 | '----------------------------------------------------------------------------------------------' ``` </p> </details> --- ### `near proposals` > Displays validator proposals for [epoch](http://docs.near.org/docs/concepts/epoch) after next. > > - expected seat price > - status of proposals > - previous amount staked and new amount that _will_ be staked > - amount of seats assigned per validator - arguments: `none` - options: `default` **Example:** ```bash near proposals ``` **Example for `mainnet`:** ```bash NEAR_ENV=mainnet near proposals ``` <details> <summary><strong>Example Response</strong></summary> <p> ```bash Proposals for the epoch after next (new: 51, passing: 49, expected seat price = 1,983,932) .--------------------------------------------------------------------------------------------------------. | Status | Validator | Stake => New Stake | Seats | |--------------------|----------------------------------------------|--------------------------|---------| | Proposal(Accepted) | cryptium.poolv1.near | 13,945,727 => 14,041,766 | 7 | | Proposal(Accepted) | astro-stakers.poolv1.near | 11,660,189 => 11,705,673 | 5 | | Proposal(Accepted) | blockdaemon.poolv1.near | 11,542,867 => 11,545,942 | 5 | | Proposal(Accepted) | zavodil.poolv1.near | 11,183,187 => 11,207,805 | 5 | | Proposal(Accepted) | bisontrails.poolv1.near | 10,291,696 => 10,300,978 | 5 | | Proposal(Accepted) | dokiacapital.poolv1.near | 7,906,352 => 8,097,275 | 4 | | Proposal(Accepted) | chorusone.poolv1.near | 7,480,508 => 7,568,268 | 3 | | Proposal(Accepted) | figment.poolv1.near | 6,931,070 => 6,932,916 | 3 | | Proposal(Accepted) | stardust.poolv1.near | 6,401,678 => 6,449,363 | 3 | | Proposal(Accepted) | anonymous.poolv1.near | 6,291,821 => 6,293,497 | 3 | | Proposal(Accepted) | d1.poolv1.near | 6,265,109 => 6,266,777 | 3 | | Proposal(Accepted) | near8888.poolv1.near | 6,202,968 => 6,204,620 | 3 | | Proposal(Accepted) | rekt.poolv1.near | 5,950,212 => 5,951,797 | 2 | | Proposal(Accepted) | epic.poolv1.near | 5,639,256 => 5,640,758 | 2 | | Proposal(Accepted) | fresh.poolv1.near | 5,460,410 => 5,461,811 | 2 | | Proposal(Accepted) | buildlinks.poolv1.near | 4,838,398 => 4,839,686 | 2 | | Proposal(Accepted) | jubi.poolv1.near | 4,805,921 => 4,807,201 | 2 | | Proposal(Accepted) | openshards.poolv1.near | 4,644,553 => 4,776,692 | 2 | | Proposal(Accepted) | jazza.poolv1.near | 4,563,432 => 4,564,648 | 2 | | Proposal(Accepted) | northernlights.poolv1.near | 4,467,978 => 4,469,168 | 2 | | Proposal(Accepted) | inotel.poolv1.near | 4,427,152 => 4,428,331 | 2 | | Proposal(Accepted) | baziliknear.poolv1.near | 4,261,142 => 4,290,891 | 2 | | Proposal(Accepted) | stakesabai.poolv1.near | 4,242,618 => 4,243,748 | 2 | | Proposal(Accepted) | everstake.poolv1.near | 4,234,552 => 4,235,679 | 2 | | Proposal(Accepted) | 
stakin.poolv1.near | 4,071,704 => 4,072,773 | 2 | | Proposal(Accepted) | certusone.poolv1.near | 3,734,505 => 3,735,500 | 1 | | Proposal(Accepted) | lux.poolv1.near | 3,705,394 => 3,716,381 | 1 | | Proposal(Accepted) | staked.poolv1.near | 3,683,365 => 3,684,346 | 1 | | Proposal(Accepted) | lunanova.poolv1.near | 3,597,231 => 3,597,836 | 1 | | Proposal(Accepted) | appload.poolv1.near | 3,133,163 => 3,152,302 | 1 | | Proposal(Accepted) | smart-stake.poolv1.near | 3,095,711 => 3,096,509 | 1 | | Proposal(Accepted) | artemis.poolv1.near | 3,009,462 => 3,010,265 | 1 | | Proposal(Accepted) | moonlet.poolv1.near | 2,790,296 => 2,948,565 | 1 | | Proposal(Accepted) | nearfans.poolv1.near | 2,771,137 => 2,771,875 | 1 | | Proposal(Accepted) | nodeasy.poolv1.near | 2,692,745 => 2,693,463 | 1 | | Proposal(Accepted) | erm.poolv1.near | 2,653,524 => 2,654,231 | 1 | | Proposal(Accepted) | dsrvlabs.poolv1.near | 2,542,925 => 2,571,865 | 1 | | Proposal(Accepted) | zkv_staketosupportprivacy.poolv1.near | 2,548,343 => 2,549,022 | 1 | | Proposal(Accepted) | 08investinwomen_runbybisontrails.poolv1.near | 2,493,123 => 2,493,787 | 1 | | Proposal(Accepted) | masternode24.poolv1.near | 2,355,634 => 2,456,226 | 1 | | Proposal(Accepted) | fish.poolv1.near | 2,315,249 => 2,415,831 | 1 | | Proposal(Accepted) | electric.poolv1.near | 2,400,532 => 2,401,172 | 1 | | Proposal(Accepted) | sparkpool.poolv1.near | 2,378,191 => 2,378,824 | 1 | | Proposal(Accepted) | hashquark.poolv1.near | 2,376,424 => 2,377,057 | 1 | | Proposal(Accepted) | sharpdarts.poolv1.near | 2,332,398 => 2,332,948 | 1 | | Proposal(Accepted) | ashert.poolv1.near | 2,103,327 => 2,103,887 | 1 | | Proposal(Accepted) | 01node.poolv1.near | 2,058,200 => 2,059,314 | 1 | | Proposal(Accepted) | finoa.poolv1.near | 2,012,304 => 2,015,808 | 1 | | Proposal(Accepted) | majlovesreg.poolv1.near | 2,005,032 => 2,005,566 | 1 | | Proposal(Declined) | huobipool.poolv1.near | 1,666,976 | 0 | | Proposal(Declined) | hb436_pool.poolv1.near | 500,030 | 0 | '--------------------------------------------------------------------------------------------------------' ``` </p> </details> --- ## REPL ### `near repl` > Launches NEAR [REPL](https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop) _(an interactive JavaScript programming invironment)_ connected to NEAR. - arguments: `none` - options: `--accountId`, `--script` To launch, run: ```bash near repl ``` - You will then be shown a prompt `>` and can begin interacting with NEAR. - Try typing the following into your prompt that converts NEAR (Ⓝ) into yoctoNEAR (10^-24): ```bash nearAPI.utils.format.parseNearAmount('1000') ``` > You can also use an `--accountId` with `near repl`. The `script` argument allows you to pass the path to a javascript/typescript file that exports a `main` function taking a [`Context`](./context/index.d.ts) as an argument. Anything passed after `--` is passed to the script as the `argv` argument. Note: you will need to add `near-cli` as a dependency in order to import the types. e.g. ```ts import { Context } from "near-cli/context"; ``` **Example:** ```bash near repl --accountId example-acct.testnet ``` - Then try console logging `account` after the `>` prompt. 
```bash console.log(account) ``` Or in a JS files ```js module.exports.main = async function main({account, near, nearAPI, argv}) { console.log(account); } ``` <details> <summary><strong>Example Response</strong></summary> <p> ```json Account { accessKeyByPublicKeyCache: {}, connection: Connection { networkId: 'default', provider: JsonRpcProvider { connection: [Object] }, signer: InMemorySigner { keyStore: [MergeKeyStore] } }, accountId: 'example-acct.testnet', _ready: Promise { undefined }, _state: { amount: '98786165075093615800000000', locked: '0', code_hash: '11111111111111111111111111111111', storage_usage: 741, storage_paid_at: 0, block_height: 21661252, block_hash: 'HbAj25dTzP3ssYjNRHov9BQ72UxpHGVqZK1mZwGdGNbo' } } ``` </p> </details> > You can also get a private key's public key. - First, declare a `privateKey` variable: ```js const myPrivateKey = "3fKM9Rr7LHyzhhzmmedXLvc59rayfh1oUYS3VfUcxwpAFQZtdx1G9aTY6i8hG9mQtYoycTEFTBtatgNKHRtYamrS"; ``` - Then run: ```js nearAPI.KeyPair.fromString(myPrivateKey).publicKey.toString(); ``` With NEAR REPL you have complete access to [`near-api-js`](https://github.com/near/near-api-js) to help you develop on the NEAR platform. --- ## Options | Option | Description | | -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | | `--help` | Show help [boolean] | | `--version` | Show version number [boolean] | | `--nodeUrl, --node_url` | NEAR node URL [string] [default: "https://rpc.testnet.near.org"] | | `--networkId, --network_id`| NEAR network ID, allows using different keys based on network [string] [default: "testnet"] | | `--helperUrl` | NEAR contract helper URL [string] | | `--keyPath` | Path to master account key [string] | | `--accountId, --account_id`| Unique identifier for the account [string] | | `--useLedgerKey` | Use Ledger for signing with given HD key path [string] [default: "44'/397'/0'/0'/1'"] | | `--seedPhrase` | Seed phrase mnemonic [string] | | `--seedPath` | HD path derivation [string] [default: "m/44'/397'/0'"] | | `--walletUrl` | Website for NEAR Wallet [string] | | `--contractName` | Account name of contract [string] | | `--masterAccount` | Master account used when creating new accounts [string] | | `--helperAccount` | Expected top-level account for a network [string] | | `-v, --verbose` | Prints out verbose output [boolean] [default: false] | |`-f, --force` | Forcefully execute the desired action even if it is unsafe to do so [boolean] [default: false]| > Got a question? <a href="https://stackoverflow.com/questions/tagged/nearprotocol"> <h8>Ask it on StackOverflow!</h8></a> ## License This repository is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See [LICENSE](LICENSE) and [LICENSE-APACHE](LICENSE-APACHE) for details. [![NPM version][npm-image]][npm-url] [![build status][travis-image]][travis-url] [![Test coverage][coveralls-image]][coveralls-url] [![Downloads][downloads-image]][downloads-url] [![Join the chat at https://gitter.im/eslint/doctrine](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/eslint/doctrine?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) # Doctrine Doctrine is a [JSDoc](http://usejsdoc.org) parser that parses documentation comments from JavaScript (you need to pass in the comment, not a whole JavaScript file). 
## Installation

You can install Doctrine using [npm](https://npmjs.com):

```
$ npm install doctrine --save-dev
```

Doctrine can also be used in web browsers using [Browserify](http://browserify.org).

## Usage

Require doctrine inside of your JavaScript:

```js
var doctrine = require("doctrine");
```

### parse()

The primary method is `parse()`, which accepts two arguments: the JSDoc comment to parse and an optional options object. The available options are:

* `unwrap` - set to `true` to delete the leading `/**`, any `*` that begins a line, and the trailing `*/` from the source text. Default: `false`.
* `tags` - an array of tags to return. When specified, Doctrine returns only tags in this array. For example, if `tags` is `["param"]`, then only `@param` tags will be returned. Default: `null`.
* `recoverable` - set to `true` to keep parsing even when syntax errors occur. Default: `false`.
* `sloppy` - set to `true` to allow optional parameters to be specified in brackets (`@param {string} [foo]`). Default: `false`.
* `lineNumbers` - set to `true` to add `lineNumber` to each node, specifying the line on which the node is found in the source. Default: `false`.
* `range` - set to `true` to add `range` to each node, specifying the start and end index of the node in the original comment. Default: `false`.

Here's a simple example:

```js
var ast = doctrine.parse(
    [
        "/**",
        " * This function comment is parsed by doctrine",
        " * @param {{ok:String}} userName",
        "*/"
    ].join('\n'), { unwrap: true });
```

This example returns the following AST:

```
{
    "description": "This function comment is parsed by doctrine",
    "tags": [
        {
            "title": "param",
            "description": null,
            "type": {
                "type": "RecordType",
                "fields": [
                    {
                        "type": "FieldType",
                        "key": "ok",
                        "value": {
                            "type": "NameExpression",
                            "name": "String"
                        }
                    }
                ]
            },
            "name": "userName"
        }
    ]
}
```

See the [demo page](http://eslint.org/doctrine/demo/) for more detail.

## Team

These folks keep the project moving and are resources for help:

* Nicholas C. Zakas ([@nzakas](https://github.com/nzakas)) - project lead
* Yusuke Suzuki ([@constellation](https://github.com/constellation)) - reviewer

## Contributing

Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/doctrine/issues).

## Frequently Asked Questions

### Can I pass a whole JavaScript file to Doctrine?

No. Doctrine can only parse JSDoc comments, so you'll need to pass just the JSDoc comment to Doctrine in order for it to work.

### License

#### doctrine

Copyright JS Foundation and other contributors, https://js.foundation

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

#### esprima

some functions are derived from esprima

Copyright (C) 2012, 2011 [Ariya Hidayat](http://ariya.ofilabs.com/about) (twitter: [@ariyahidayat](http://twitter.com/ariyahidayat)) and other contributors.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

#### closure-compiler

some extensions are derived from closure-compiler

Apache License Version 2.0, January 2004 http://www.apache.org/licenses/

### Where to ask for help?

Join our [Chatroom](https://gitter.im/eslint/doctrine)

[npm-image]: https://img.shields.io/npm/v/doctrine.svg?style=flat-square
[npm-url]: https://www.npmjs.com/package/doctrine
[travis-image]: https://img.shields.io/travis/eslint/doctrine/master.svg?style=flat-square
[travis-url]: https://travis-ci.org/eslint/doctrine
[coveralls-image]: https://img.shields.io/coveralls/eslint/doctrine/master.svg?style=flat-square
[coveralls-url]: https://coveralls.io/r/eslint/doctrine?branch=master
[downloads-image]: http://img.shields.io/npm/dm/doctrine.svg?style=flat-square
[downloads-url]: https://www.npmjs.com/package/doctrine

[![NPM registry](https://img.shields.io/npm/v/as-bignum.svg?style=for-the-badge)](https://www.npmjs.com/package/as-bignum)[![Build Status](https://img.shields.io/travis/com/MaxGraey/as-bignum/master?style=for-the-badge)](https://travis-ci.com/MaxGraey/as-bignum)[![NPM license](https://img.shields.io/badge/license-Apache%202.0-ba68c8.svg?style=for-the-badge)](LICENSE.md)

## WebAssembly fixed-length big numbers written in [AssemblyScript](https://github.com/AssemblyScript/assemblyscript)

### Status: Work in progress

Provides wide numeric types such as `u128`, `u256`, `i128`, `i256`, as well as fixed-point types, along with their arithmetic operations. The `safe` namespace contains equivalents with overflow/underflow traps. All of these types are useful for economic and cryptographic use cases and provide deterministic behavior.
### Install > yarn add as-bignum or > npm i as-bignum ### Usage via AssemblyScript ```ts import { u128 } from "as-bignum"; declare function logF64(value: f64): void; declare function logU128(hi: u64, lo: u64): void; var a = u128.One; var b = u128.from(-32); // same as u128.from<i32>(-32) var c = new u128(0x1, -0xF); var d = u128.from(0x0123456789ABCDEF); // same as u128.from<i64>(0x0123456789ABCDEF) var e = u128.from('0x0123456789ABCDEF01234567'); var f = u128.fromString('11100010101100101', 2); // same as u128.from('0b11100010101100101') var r = d / c + (b << 5) + e; logF64(r.as<f64>()); logU128(r.hi, r.lo); ``` ### Usage via JavaScript/Typescript ```ts TODO ``` ### List of types - [x] [`u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u128.ts) unsigned type (tested) - [ ] [`u256`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u256.ts) unsigned type (very basic) - [ ] `i128` signed type - [ ] `i256` signed type --- - [x] [`safe.u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/safe/u128.ts) unsigned type (tested) - [ ] `safe.u256` unsigned type - [ ] `safe.i128` signed type - [ ] `safe.i256` signed type --- - [ ] [`fp128<Q>`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/fixed/fp128.ts) generic fixed point signed type٭ (very basic for now) - [ ] `fp256<Q>` generic fixed point signed type٭ --- - [ ] `safe.fp128<Q>` generic fixed point signed type٭ - [ ] `safe.fp256<Q>` generic fixed point signed type٭ ٭ _typename_ `Q` _is a type representing count of fractional bits_ # clone-response > Clone a Node.js HTTP response stream [![Build Status](https://travis-ci.org/lukechilds/clone-response.svg?branch=master)](https://travis-ci.org/lukechilds/clone-response) [![Coverage Status](https://coveralls.io/repos/github/lukechilds/clone-response/badge.svg?branch=master)](https://coveralls.io/github/lukechilds/clone-response?branch=master) [![npm](https://img.shields.io/npm/dm/clone-response.svg)](https://www.npmjs.com/package/clone-response) [![npm](https://img.shields.io/npm/v/clone-response.svg)](https://www.npmjs.com/package/clone-response) Returns a new stream and copies over all properties and methods from the original response giving you a complete duplicate. This is useful in situations where you need to consume the response stream but also want to pass an unconsumed stream somewhere else to be consumed later. ## Install ```shell npm install --save clone-response ``` ## Usage ```js const http = require('http'); const cloneResponse = require('clone-response'); http.get('http://example.com', response => { const clonedResponse = cloneResponse(response); response.pipe(process.stdout); setImmediate(() => { // The response stream has already been consumed by the time this executes, // however the cloned response stream is still available. doSomethingWithResponse(clonedResponse); }); }); ``` Please bear in mind that the process of cloning a stream consumes it. However, you can consume a stream multiple times in the same tick, therefore allowing you to create multiple clones. e.g: ```js const clone1 = cloneResponse(response); const clone2 = cloneResponse(response); // response can still be consumed in this tick but cannot be consumed if passed // into any async callbacks. clone1 and clone2 can be passed around and be // consumed in the future. ``` ## API ### cloneResponse(response) Returns a clone of the passed in response. 
#### response Type: `stream` A [Node.js HTTP response stream](https://nodejs.org/api/http.html#http_class_http_incomingmessage) to clone. ## License MIT © Luke Childs # assemblyscript-json ![npm version](https://img.shields.io/npm/v/assemblyscript-json) ![npm downloads per month](https://img.shields.io/npm/dm/assemblyscript-json) JSON encoder / decoder for AssemblyScript. Special thanks to https://github.com/MaxGraey/bignum.wasm for basic unit testing infra for AssemblyScript. ## Installation `assemblyscript-json` is available as a [npm package](https://www.npmjs.com/package/assemblyscript-json). You can install `assemblyscript-json` in your AssemblyScript project by running: `npm install --save assemblyscript-json` ## Usage ### Parsing JSON ```typescript import { JSON } from "assemblyscript-json"; // Parse an object using the JSON object let jsonObj: JSON.Obj = <JSON.Obj>(JSON.parse('{"hello": "world", "value": 24}')); // We can then use the .getX functions to read from the object if you know it's type // This will return the appropriate JSON.X value if the key exists, or null if the key does not exist let worldOrNull: JSON.Str | null = jsonObj.getString("hello"); // This will return a JSON.Str or null if (worldOrNull != null) { // use .valueOf() to turn the high level JSON.Str type into a string let world: string = worldOrNull.valueOf(); } let numOrNull: JSON.Num | null = jsonObj.getNum("value"); if (numOrNull != null) { // use .valueOf() to turn the high level JSON.Num type into a f64 let value: f64 = numOrNull.valueOf(); } // If you don't know the value type, get the parent JSON.Value let valueOrNull: JSON.Value | null = jsonObj.getValue("hello"); if (valueOrNull != null) { let value = <JSON.Value>valueOrNull; // Next we could figure out what type we are if(value.isString) { // value.isString would be true, so we can cast to a string let innerString = (<JSON.Str>value).valueOf(); let jsonString = (<JSON.Str>value).stringify(); // Do something with string value } } ``` ### Encoding JSON ```typescript import { JSONEncoder } from "assemblyscript-json"; // Create encoder let encoder = new JSONEncoder(); // Construct necessary object encoder.pushObject("obj"); encoder.setInteger("int", 10); encoder.setString("str", ""); encoder.popObject(); // Get serialized data let json: Uint8Array = encoder.serialize(); // Or get serialized data as string let jsonString: string = encoder.stringify(); assert(jsonString, '"obj": {"int": 10, "str": ""}'); // True! ``` ### Custom JSON Deserializers ```typescript import { JSONDecoder, JSONHandler } from "assemblyscript-json"; // Events need to be received by custom object extending JSONHandler. // NOTE: All methods are optional to implement. 
class MyJSONEventsHandler extends JSONHandler { setString(name: string, value: string): void { // Handle field } setBoolean(name: string, value: bool): void { // Handle field } setNull(name: string): void { // Handle field } setInteger(name: string, value: i64): void { // Handle field } setFloat(name: string, value: f64): void { // Handle field } pushArray(name: string): bool { // Handle array start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popArray(): void { // Handle array end } pushObject(name: string): bool { // Handle object start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popObject(): void { // Handle object end } } // Create decoder let decoder = new JSONDecoder<MyJSONEventsHandler>(new MyJSONEventsHandler()); // Create a byte buffer of our JSON. NOTE: Deserializers work on UTF8 string buffers. let jsonString = '{"hello": "world"}'; let jsonBuffer = Uint8Array.wrap(String.UTF8.encode(jsonString)); // Parse JSON decoder.deserialize(jsonBuffer); // This will send events to MyJSONEventsHandler ``` Feel free to look through the [tests](https://github.com/nearprotocol/assemblyscript-json/tree/master/assembly/__tests__) for more usage examples. ## Reference Documentation Reference API Documentation can be found in the [docs directory](./docs). ## License [MIT](./LICENSE) semver(1) -- The semantic versioner for npm =========================================== ## Install ```bash npm install semver ```` ## Usage As a node module: ```js const semver = require('semver') semver.valid('1.2.3') // '1.2.3' semver.valid('a.b.c') // null semver.clean(' =v1.2.3 ') // '1.2.3' semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true semver.gt('1.2.3', '9.8.7') // false semver.lt('1.2.3', '9.8.7') // true semver.minVersion('>=1.0.0') // '1.0.0' semver.valid(semver.coerce('v2')) // '2.0.0' semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7' ``` As a command-line utility: ``` $ semver -h A JavaScript implementation of the https://semver.org/ specification Copyright Isaac Z. Schlueter Usage: semver [options] <version> [<version> [...]] Prints valid versions sorted by SemVer precedence Options: -r --range <range> Print versions that match the specified range. -i --increment [<level>] Increment a version by the specified level. Level can be one of: major, minor, patch, premajor, preminor, prepatch, or prerelease. Default level is 'patch'. Only one version may be specified. --preid <identifier> Identifier to be used to prefix premajor, preminor, prepatch or prerelease version increments. -l --loose Interpret versions and ranges loosely -p --include-prerelease Always include prerelease versions in range matching -c --coerce Coerce a string into SemVer if possible (does not imply --loose) --rtl Coerce version strings right to left --ltr Coerce version strings left to right (default) Program exits successfully if any valid version satisfies all supplied ranges, and prints all satisfying versions. If no satisfying versions are found, then exits failure. Versions are printed in ascending order, so supplying multiple versions to the utility will just sort them. ``` ## Versions A "version" is described by the `v2.0.0` specification found at <https://semver.org/>. A leading `"="` or `"v"` character is stripped off and ignored. 
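The prefix-stripping behavior can be seen directly with the same calls shown in the usage section above; a small sketch:

```js
const semver = require('semver')

// A leading "v" or "=" is stripped before the version is parsed.
semver.valid('v1.2.3')       // '1.2.3'
semver.clean('  =v1.2.3  ')  // '1.2.3'
```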
## Ranges A `version range` is a set of `comparators` which specify versions that satisfy the range. A `comparator` is composed of an `operator` and a `version`. The set of primitive `operators` is: * `<` Less than * `<=` Less than or equal to * `>` Greater than * `>=` Greater than or equal to * `=` Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included. For example, the comparator `>=1.2.7` would match the versions `1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` or `1.1.0`. Comparators can be joined by whitespace to form a `comparator set`, which is satisfied by the **intersection** of all of the comparators it includes. A range is composed of one or more comparator sets, joined by `||`. A version matches a range if and only if every comparator in at least one of the `||`-separated comparator sets is satisfied by the version. For example, the range `>=1.2.7 <1.3.0` would match the versions `1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, or `1.1.0`. The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, `1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. ### Prerelease Tags If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then it will only be allowed to satisfy comparator sets if at least one comparator with the same `[major, minor, patch]` tuple also has a prerelease tag. For example, the range `>1.2.3-alpha.3` would be allowed to match the version `1.2.3-alpha.7`, but it would *not* be satisfied by `3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater than" `1.2.3-alpha.3` according to the SemVer sort rules. The version range only accepts prerelease tags on the `1.2.3` version. The version `3.4.5` *would* satisfy the range, because it does not have a prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author's design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics. Second, a user who has opted into using a prerelease version has clearly indicated the intent to use *that specific* set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the *next* set of prerelease versions. Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the `includePrerelease` flag on the options object to any [functions](https://github.com/npm/node-semver#functions) that do range matching. #### Prerelease Identifiers The method `.inc` takes an additional `identifier` string argument that will append the value of the string as a prerelease identifier: ```javascript semver.inc('1.2.3', 'prerelease', 'beta') // '1.2.4-beta.0' ``` command-line example: ```bash $ semver 1.2.3 -i prerelease --preid beta 1.2.4-beta.0 ``` Which then can be used to increment further: ```bash $ semver 1.2.4-beta.0 -i prerelease 1.2.4-beta.1 ``` ### Advanced Range Syntax Advanced range syntax desugars to primitive comparators in deterministic ways. Advanced ranges may be combined in the same way as primitive comparators using white space or `||`. 
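Before looking at the advanced forms, here is a small sketch of how the range and prerelease rules described above behave when checked with `satisfies()` (the ranges are the same examples used earlier in this section):

```js
const semver = require('semver')

// Comparator sets: every comparator in at least one "||"-separated set must match.
semver.satisfies('1.2.8', '>=1.2.7 <1.3.0')          // true
semver.satisfies('1.2.8', '1.2.7 || >=1.2.9 <2.0.0') // false

// Prerelease versions only satisfy ranges whose comparators share the same
// [major, minor, patch] tuple and carry a prerelease tag...
semver.satisfies('1.2.3-alpha.7', '>1.2.3-alpha.3') // true
semver.satisfies('3.4.5-alpha.9', '>1.2.3-alpha.3') // false

// ...unless the includePrerelease option is set.
semver.satisfies('3.4.5-alpha.9', '>1.2.3-alpha.3', { includePrerelease: true }) // true
```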
#### Hyphen Ranges `X.Y.Z - A.B.C` Specifies an inclusive set. * `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes. * `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts. * `1.2.3 - 2.3` := `>=1.2.3 <2.4.0` * `1.2.3 - 2` := `>=1.2.3 <3.0.0` #### X-Ranges `1.2.x` `1.X` `1.2.*` `*` Any of `X`, `x`, or `*` may be used to "stand in" for one of the numeric values in the `[major, minor, patch]` tuple. * `*` := `>=0.0.0` (Any version satisfies) * `1.x` := `>=1.0.0 <2.0.0` (Matching major version) * `1.2.x` := `>=1.2.0 <1.3.0` (Matching major and minor versions) A partial version range is treated as an X-Range, so the special character is in fact optional. * `""` (empty string) := `*` := `>=0.0.0` * `1` := `1.x.x` := `>=1.0.0 <2.0.0` * `1.2` := `1.2.x` := `>=1.2.0 <1.3.0` #### Tilde Ranges `~1.2.3` `~1.2` `~1` Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not. * `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0` * `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0` (Same as `1.2.x`) * `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0` (Same as `1.x`) * `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0` * `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0` (Same as `0.2.x`) * `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0` (Same as `0.x`) * `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. #### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` Allows changes that do not modify the left-most non-zero element in the `[major, minor, patch]` tuple. In other words, this allows patch and minor updates for versions `1.0.0` and above, patch updates for versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. Many authors treat a `0.x` version as if the `x` were the major "breaking-change" indicator. Caret ranges are ideal when an author may make breaking changes between `0.2.4` and `0.3.0` releases, which is a common practice. However, it presumes that there will *not* be breaking changes between `0.2.4` and `0.2.5`. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices. * `^1.2.3` := `>=1.2.3 <2.0.0` * `^0.2.3` := `>=0.2.3 <0.3.0` * `^0.0.3` := `>=0.0.3 <0.0.4` * `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. * `^0.0.3-beta` := `>=0.0.3-beta <0.0.4` Note that prereleases in the `0.0.3` version *only* will be allowed, if they are greater than or equal to `beta`. So, `0.0.3-pr.2` would be allowed. When parsing caret ranges, a missing `patch` value desugars to the number `0`, but will allow flexibility within that value, even if the major and minor versions are both `0`. 
* `^1.2.x` := `>=1.2.0 <2.0.0` * `^0.0.x` := `>=0.0.0 <0.1.0` * `^0.0` := `>=0.0.0 <0.1.0` A missing `minor` and `patch` values will desugar to zero, but also allow flexibility within those values, even if the major version is zero. * `^1.x` := `>=1.0.0 <2.0.0` * `^0.x` := `>=0.0.0 <1.0.0` ### Range Grammar Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors: ```bnf range-set ::= range ( logical-or range ) * logical-or ::= ( ' ' ) * '||' ( ' ' ) * range ::= hyphen | simple ( ' ' simple ) * | '' hyphen ::= partial ' - ' partial simple ::= primitive | partial | tilde | caret primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )? xr ::= 'x' | 'X' | '*' | nr nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) * tilde ::= '~' partial caret ::= '^' partial qualifier ::= ( '-' pre )? ( '+' build )? pre ::= parts build ::= parts parts ::= part ( '.' part ) * part ::= nr | [-0-9A-Za-z]+ ``` ## Functions All methods and classes take a final `options` object argument. All options in this object are `false` by default. The options supported are: - `loose` Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the `options` argument is a boolean value instead of an object, it is interpreted to be the `loose` param. - `includePrerelease` Set to suppress the [default behavior](https://github.com/npm/node-semver#prerelease-tags) of excluding prerelease tagged versions from ranges unless they are explicitly opted into. Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse. * `valid(v)`: Return the parsed version, or null if it's not valid. * `inc(v, release)`: Return the version incremented by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if it's not valid * `premajor` in one call will bump the version up to the next major version and down to a prerelease of that major version. `preminor`, and `prepatch` work the same way. * If called from a non-prerelease version, the `prerelease` will work the same as `prepatch`. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it. * `prerelease(v)`: Returns an array of prerelease components, or null if none exist. Example: `prerelease('1.2.3-alpha.1') -> ['alpha', 1]` * `major(v)`: Return the major version number. * `minor(v)`: Return the minor version number. * `patch(v)`: Return the patch version number. * `intersects(r1, r2, loose)`: Return true if the two supplied ranges or comparators intersect. * `parse(v)`: Attempt to parse a string as a semantic version, returning either a `SemVer` object or `null`. ### Comparison * `gt(v1, v2)`: `v1 > v2` * `gte(v1, v2)`: `v1 >= v2` * `lt(v1, v2)`: `v1 < v2` * `lte(v1, v2)`: `v1 <= v2` * `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent, even if they're not the exact same string. You already know how to compare strings. * `neq(v1, v2)`: `v1 != v2` The opposite of `eq`. * `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call the corresponding function above. `"==="` and `"!=="` do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided. * `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if `v2` is greater. 
Sorts in ascending order if passed to `Array.sort()`. * `rcompare(v1, v2)`: The reverse of compare. Sorts an array of versions in descending order when passed to `Array.sort()`. * `compareBuild(v1, v2)`: The same as `compare` but considers `build` when two versions are equal. Sorts in ascending order if passed to `Array.sort()`. * `diff(v1, v2)`: Returns the difference between two versions by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if the versions are the same. ### Comparators * `intersects(comparator)`: Return true if the comparators intersect ### Ranges * `validRange(range)`: Return the valid range or null if it's not valid * `satisfies(version, range)`: Return true if the version satisfies the range. * `maxSatisfying(versions, range)`: Return the highest version in the list that satisfies the range, or `null` if none of them do. * `minSatisfying(versions, range)`: Return the lowest version in the list that satisfies the range, or `null` if none of them do. * `minVersion(range)`: Return the lowest version that can possibly match the given range. * `gtr(version, range)`: Return `true` if version is greater than all the versions possible in the range. * `ltr(version, range)`: Return `true` if version is less than all the versions possible in the range. * `outside(version, range, hilo)`: Return true if the version is outside the bounds of the range in either the high or low direction. The `hilo` argument must be either the string `'>'` or `'<'`. (This is the function called by `gtr` and `ltr`.) * `intersects(range)`: Return true if any of the range's comparators intersect Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, *or* satisfy a range! For example, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9` until `2.0.0`, so the version `1.2.10` would not be greater than the range (because `2.0.1` satisfies, which is higher), nor less than the range (since `1.2.8` satisfies, which is lower), and it also does not satisfy the range. If you want to know if a version satisfies or does not satisfy a range, use the `satisfies(version, range)` function. ### Coercion * `coerce(version, options)`: Coerces a string to semver if possible. This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., `1`, `1.2`, `1.2.3`) up to the max permitted length (256 characters). Longer versions are simply truncated (`4.6.3.9.2-alpha2` becomes `4.6.3`). All surrounding text is simply ignored (`v3.4 replaces v3.3.1` becomes `3.4.0`). Only text which lacks digits will fail coercion (`version one` is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (`10000000000000000.4.7.4` becomes `4.7.4`). The maximum value for any semver component is `Number.MAX_SAFE_INTEGER || (2**53 - 1)`; higher value components are invalid (`9999999999999999.4.7.4` is likely invalid). If the `options.rtl` flag is set, then `coerce` will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, `1.2.3.4` will return `2.3.4` in rtl mode, not `4.0.0`. `1.2.3/4` will return `4.0.0`, because the `4` is not a part of any other overlapping SemVer tuple.
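For example (a minimal sketch mirroring the coercion rules above):

```js
const semver = require('semver')

semver.valid(semver.coerce('v3.4 replaces v3.3.1'))   // '3.4.0'
semver.valid(semver.coerce('4.6.3.9.2-alpha2'))       // '4.6.3'
semver.coerce('version one')                          // null
semver.valid(semver.coerce('1.2.3.4', { rtl: true })) // '2.3.4'
```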
### Clean * `clean(version)`: Clean a string to be a valid semver if possible This will return a cleaned and trimmed semver version. If the provided version is not valid a null will be returned. This does not work for ranges. ex. * `s.clean(' = v 2.1.5foo')`: `null` * `s.clean(' = v 2.1.5foo', { loose: true })`: `'2.1.5-foo'` * `s.clean(' = v 2.1.5-foo')`: `null` * `s.clean(' = v 2.1.5-foo', { loose: true })`: `'2.1.5-foo'` * `s.clean('=v2.1.5')`: `'2.1.5'` * `s.clean(' =v2.1.5')`: `2.1.5` * `s.clean(' 2.1.5 ')`: `'2.1.5'` * `s.clean('~1.0.0')`: `null` # end-of-stream A node module that calls a callback when a readable/writable/duplex stream has completed or failed. npm install end-of-stream [![Build status](https://travis-ci.org/mafintosh/end-of-stream.svg?branch=master)](https://travis-ci.org/mafintosh/end-of-stream) ## Usage Simply pass a stream and a callback to the `eos`. Both legacy streams, streams2 and stream3 are supported. ``` js var eos = require('end-of-stream'); eos(readableStream, function(err) { // this will be set to the stream instance if (err) return console.log('stream had an error or closed early'); console.log('stream has ended', this === readableStream); }); eos(writableStream, function(err) { if (err) return console.log('stream had an error or closed early'); console.log('stream has finished', this === writableStream); }); eos(duplexStream, function(err) { if (err) return console.log('stream had an error or closed early'); console.log('stream has ended and finished', this === duplexStream); }); eos(duplexStream, {readable:false}, function(err) { if (err) return console.log('stream had an error or closed early'); console.log('stream has finished but might still be readable'); }); eos(duplexStream, {writable:false}, function(err) { if (err) return console.log('stream had an error or closed early'); console.log('stream has ended but might still be writable'); }); eos(readableStream, {error:false}, function(err) { // do not treat emit('error', err) as a end-of-stream }); ``` ## License MIT ## Related `end-of-stream` is part of the [mississippi stream utility collection](https://github.com/maxogden/mississippi) which includes more useful stream modules similar to this one. # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. 
Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows command prompt notes ##### CMD On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Example: ```cmd set DEBUG=* & node app.js ``` ##### PowerShell (VS Code default) PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Example: ```cmd $env:DEBUG='app';node app.js ``` Then, run the program to be debugged as usual. npm script example: ```js "windowsDebug": "@powershell -Command $env:DEBUG='*';node app.js", ``` ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see when the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well, the "+NNNms" will show you how much time was spent between calls. 
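For example (a minimal sketch; the `http` namespace, URL, and timings are placeholders), bracketing a request with two `debug()` calls makes the elapsed time show up as the trailing "+NNNms":

```js
const debug = require('debug')('http');
const https = require('https');

debug('requesting resource'); // http requesting resource +0ms
https.get('https://example.com/', (res) => {
  // the suffix shows the time elapsed since the previous debug() call
  debug('response status %d', res.statusCode); // http response status 200 +NNNms
});
```

Run it with something like `DEBUG=http node app.js` to see the output.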
<img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debuggers you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. | | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. 
For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Extend You can simply extend debugger ```js const log = require('debug')('auth'); //creates new debug instance with extended namespace const logSign = log.extend('sign'); const logLogin = log.extend('login'); log('hello'); // auth hello logSign('hello'); //auth:sign hello logLogin('hello'); //auth:login hello ``` ## Set dynamically You can also enable debug dynamically by calling the `enable()` method : ```js let debug = require('debug'); console.log(1, debug.enabled('test')); debug.enable('test'); console.log(2, debug.enabled('test')); debug.disable(); console.log(3, debug.enabled('test')); ``` print : ``` 1 false 2 true 3 false ``` Usage : `enable(namespaces)` `namespaces` can include modes separated by a colon and wildcards. Note that calling `enable()` completely overrides previously set DEBUG variable : ``` $ DEBUG=foo node -e 'var dbg = require("debug"); dbg.enable("bar"); console.log(dbg.enabled("foo"))' => false ``` `disable()` Will disable all namespaces. The functions returns the namespaces currently enabled (and skipped). This can be useful if you want to disable debugging temporarily without knowing what was enabled to begin with. For example: ```js let debug = require('debug'); debug.enable('foo:*,-foo:bar'); let namespaces = debug.disable(); debug.enable(namespaces); ``` Note: There is no guarantee that the string will be identical to the initial enable string, but semantically they will be identical. ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... 
} ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. [[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a 
href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a 
href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # Platform.js v1.3.5 A platform detection library that works on nearly all JavaScript platforms. 
## Disclaimer Platform.js is for informational purposes only & **not** intended as a substitution for feature detection/inference checks. ## Documentation * [doc/README.md](https://github.com/bestiejs/platform.js/blob/master/doc/README.md#readme) * [wiki/Changelog](https://github.com/bestiejs/platform.js/wiki/Changelog) * [wiki/Roadmap](https://github.com/bestiejs/platform.js/wiki/Roadmap) * [platform.js demo](https://bestiejs.github.io/platform.js/) (See also [whatsmyua.info](https://www.whatsmyua.info/) for comparisons between platform.js and other platform detection libraries) ## Installation In a browser: ```html <script src="platform.js"></script> ``` In an AMD loader: ```js require(['platform'], function(platform) {/*…*/}); ``` Using npm: ```shell $ npm i --save platform ``` In Node.js: ```js var platform = require('platform'); ``` Usage example: ```js // on IE10 x86 platform preview running in IE7 compatibility mode on Windows 7 64 bit edition platform.name; // 'IE' platform.version; // '10.0' platform.layout; // 'Trident' platform.os; // 'Windows Server 2008 R2 / 7 x64' platform.description; // 'IE 10.0 x86 (platform preview; running in IE 7 mode) on Windows Server 2008 R2 / 7 x64' // or on an iPad platform.name; // 'Safari' platform.version; // '5.1' platform.product; // 'iPad' platform.manufacturer; // 'Apple' platform.layout; // 'WebKit' platform.os; // 'iOS 5.0' platform.description; // 'Safari 5.1 on Apple iPad (iOS 5.0)' // or parsing a given UA string var info = platform.parse('Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7.2; en; rv:2.0) Gecko/20100101 Firefox/4.0 Opera 11.52'); info.name; // 'Opera' info.version; // '11.52' info.layout; // 'Presto' info.os; // 'Mac OS X 10.7.2' info.description; // 'Opera 11.52 (identifying as Firefox 4.0) on Mac OS X 10.7.2' ``` ## Support Tested in Chrome 82-83, Firefox 77-78, IE 11, Edge 82-83, Safari 12-13, Node.js 4-14, & PhantomJS 2.1.1. ## BestieJS Platform.js is part of the BestieJS *“Best in Class”* module collection. This means we promote solid browser/environment support, ES5+ precedents, unit testing, & plenty of documentation. # simple-concat [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/simple-concat/master.svg [travis-url]: https://travis-ci.org/feross/simple-concat [npm-image]: https://img.shields.io/npm/v/simple-concat.svg [npm-url]: https://npmjs.org/package/simple-concat [downloads-image]: https://img.shields.io/npm/dm/simple-concat.svg [downloads-url]: https://npmjs.org/package/simple-concat [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com ### Super-minimalist version of [`concat-stream`](https://github.com/maxogden/concat-stream). Less than 15 lines! ## install ``` npm install simple-concat ``` ## usage This example is longer than the implementation. ```js var s = new stream.PassThrough() concat(s, function (err, buf) { if (err) throw err console.error(buf) }) s.write('abc') setTimeout(function () { s.write('123') }, 10) setTimeout(function () { s.write('456') }, 20) setTimeout(function () { s.end('789') }, 30) ``` ## license MIT. Copyright (c) [Feross Aboukhadijeh](http://feross.org). # AssemblyScript Rtrace A tiny utility to sanitize the AssemblyScript runtime. Records allocations and frees performed by the runtime and emits an error if something is off. 
Also checks for leaks. Instructions ------------ Compile your module that uses the full or half runtime with `--use ASC_RTRACE=1 --explicitStart` and include an instance of this module as the import named `rtrace`. ```js const rtrace = new Rtrace({ onerror(err, info) { // handle error }, oninfo(msg) { // print message, optional }, getMemory() { // obtain the module's memory, // e.g. with --explicitStart: return instance.exports.memory; } }); const { module, instance } = await WebAssembly.instantiate(..., rtrace.install({ ...imports... }) ); instance.exports._start(); ... if (rtrace.active) { let leakCount = rtrace.check(); if (leakCount) { // handle error } } ``` Note that references in globals which are not cleared before collection is performed appear as leaks, including their inner members. For example, a TypedArray would leak both itself and its backing ArrayBuffer in this case. This is perfectly normal, and clearing all globals avoids it. # hasurl [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] > Determine whether Node.js' native [WHATWG `URL`](https://nodejs.org/api/url.html#url_the_whatwg_url_api) implementation is available. ## Installation [Node.js](http://nodejs.org/) `>= 4` is required. To install, type this at the command line: ```shell npm install hasurl ``` ## Usage ```js const hasURL = require('hasurl'); if (hasURL()) { // supported } else { // fallback } ``` [npm-image]: https://img.shields.io/npm/v/hasurl.svg [npm-url]: https://npmjs.org/package/hasurl [travis-image]: https://img.shields.io/travis/stevenvachon/hasurl.svg [travis-url]: https://travis-ci.org/stevenvachon/hasurl # Acorn-JSX [![Build Status](https://travis-ci.org/acornjs/acorn-jsx.svg?branch=master)](https://travis-ci.org/acornjs/acorn-jsx) [![NPM version](https://img.shields.io/npm/v/acorn-jsx.svg)](https://www.npmjs.org/package/acorn-jsx) This is a plugin for [Acorn](http://marijnhaverbeke.nl/acorn/) - a tiny, fast JavaScript parser, written completely in JavaScript. It was created as an experimental, faster alternative [React.js JSX](http://facebook.github.io/react/docs/jsx-in-depth.html) parser. Later, it replaced the [official parser](https://github.com/facebookarchive/esprima) and these days is used by many prominent development tools. ## Transpiler Please note that this tool only parses source code to JSX AST, which is useful for various language tools and services. If you want to transpile your code to regular ES5-compliant JavaScript with source map, check out [Babel](https://babeljs.io/) and [Buble](https://buble.surge.sh/) transpilers which use `acorn-jsx` under the hood. ## Usage Requiring this module provides you with an Acorn plugin that you can use like this: ```javascript var acorn = require("acorn"); var jsx = require("acorn-jsx"); acorn.Parser.extend(jsx()).parse("my(<jsx/>, 'code');"); ``` Note that the official spec doesn't support mixing XML namespaces and object-style access in tag names (#27) like in `<namespace:Object.Property />`, so it was deprecated in `acorn-jsx@3.0`. If you still want to opt in to supporting such constructions, you can pass the following option: ```javascript acorn.Parser.extend(jsx({ allowNamespacedObjects: true })) ``` Also, since most apps use the pure React transformer, a new option was introduced that allows prohibiting namespaces completely: ```javascript acorn.Parser.extend(jsx({ allowNamespaces: false })) ``` Note that by default `allowNamespaces` is enabled for spec compliance.
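Putting the pieces above together, here is a minimal sketch (the source string and the `ecmaVersion` option are illustrative):

```javascript
const acorn = require("acorn");
const jsx = require("acorn-jsx");

// Most React-style code bases can prohibit namespaces entirely
const JSXParser = acorn.Parser.extend(jsx({ allowNamespaces: false }));

const ast = JSXParser.parse("my(<jsx/>, 'code');", { ecmaVersion: 2020 });
console.log(ast.body[0].expression.arguments[0].type); // "JSXElement"
```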
## License This plugin is issued under the [MIT license](./LICENSE). binaryen.js =========== **binaryen.js** is a port of [Binaryen](https://github.com/WebAssembly/binaryen) to the Web, allowing you to generate [WebAssembly](https://webassembly.org) using a JavaScript API. <a href="https://github.com/AssemblyScript/binaryen.js/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/binaryen.js/Build/master?label=build&logo=github" alt="Build status" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen.svg?label=latest&color=007acc&logo=npm" alt="npm version" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen/nightly.svg?label=nightly&color=007acc&logo=npm" alt="npm nightly version" /></a> Usage ----- ``` $> npm install binaryen ``` ```js var binaryen = require("binaryen"); // Create a module with a single function var myModule = new binaryen.Module(); myModule.addFunction("add", binaryen.createType([ binaryen.i32, binaryen.i32 ]), binaryen.i32, [ binaryen.i32 ], myModule.block(null, [ myModule.local.set(2, myModule.i32.add( myModule.local.get(0, binaryen.i32), myModule.local.get(1, binaryen.i32) ) ), myModule.return( myModule.local.get(2, binaryen.i32) ) ]) ); myModule.addFunctionExport("add", "add"); // Optimize the module using default passes and levels myModule.optimize(); // Validate the module if (!myModule.validate()) throw new Error("validation error"); // Generate text format and binary var textData = myModule.emitText(); var wasmData = myModule.emitBinary(); // Example usage with the WebAssembly API var compiled = new WebAssembly.Module(wasmData); var instance = new WebAssembly.Instance(compiled, {}); console.log(instance.exports.add(41, 1)); ``` The buildbot also publishes nightly versions once a day if there have been changes. The latest nightly can be installed through ``` $> npm install binaryen@nightly ``` or you can use one of the [previous versions](https://github.com/AssemblyScript/binaryen.js/tags) instead if necessary. ### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/AssemblyScript/binaryen.js@VERSION/index.js` * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/binaryen@VERSION/index.js` * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/binaryen@VERSION/index.js` Replace `VERSION` with a [specific version](https://github.com/AssemblyScript/binaryen.js/releases) or omit it (not recommended in production) to use master/latest. API --- **Please note** that the Binaryen API is evolving fast and that definitions and documentation provided by the package tend to get out of sync despite our best efforts. It's a bot after all. If you rely on binaryen.js and spot an issue, please consider sending a PR our way by updating [index.d.ts](./index.d.ts) and [README.md](./README.md) to reflect the [current API](https://github.com/WebAssembly/binaryen/blob/master/src/js/binaryen.js-post.js). 
<!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ### Contents - [Types](#types) - [Module construction](#module-construction) - [Module manipulation](#module-manipulation) - [Module validation](#module-validation) - [Module optimization](#module-optimization) - [Module creation](#module-creation) - [Expression construction](#expression-construction) - [Control flow](#control-flow) - [Variable accesses](#variable-accesses) - [Integer operations](#integer-operations) - [Floating point operations](#floating-point-operations) - [Datatype conversions](#datatype-conversions) - [Function calls](#function-calls) - [Linear memory accesses](#linear-memory-accesses) - [Host operations](#host-operations) - [Vector operations 🦄](#vector-operations-) - [Atomic memory accesses 🦄](#atomic-memory-accesses-) - [Atomic read-modify-write operations 🦄](#atomic-read-modify-write-operations-) - [Atomic wait and notify operations 🦄](#atomic-wait-and-notify-operations-) - [Sign extension operations 🦄](#sign-extension-operations-) - [Multi-value operations 🦄](#multi-value-operations-) - [Exception handling operations 🦄](#exception-handling-operations-) - [Reference types operations 🦄](#reference-types-operations-) - [Expression manipulation](#expression-manipulation) - [Relooper](#relooper) - [Source maps](#source-maps) - [Debugging](#debugging) <!-- END doctoc generated TOC please keep comment here to allow auto update --> [Future features](http://webassembly.org/docs/future-features/) 🦄 might not be supported by all runtimes. ### Types * **none**: `Type`<br /> The none type, e.g., `void`. * **i32**: `Type`<br /> 32-bit integer type. * **i64**: `Type`<br /> 64-bit integer type. * **f32**: `Type`<br /> 32-bit float type. * **f64**: `Type`<br /> 64-bit float (double) type. * **v128**: `Type`<br /> 128-bit vector type. 🦄 * **funcref**: `Type`<br /> A function reference. 🦄 * **anyref**: `Type`<br /> Any host reference. 🦄 * **nullref**: `Type`<br /> A null reference. 🦄 * **exnref**: `Type`<br /> An exception reference. 🦄 * **unreachable**: `Type`<br /> Special type indicating unreachable code when obtaining information about an expression. * **auto**: `Type`<br /> Special type used in **Module#block** exclusively. Lets the API figure out a block's result type automatically. * **createType**(types: `Type[]`): `Type`<br /> Creates a multi-value type from an array of types. * **expandType**(type: `Type`): `Type[]`<br /> Expands a multi-value type to an array of types. ### Module construction * new **Module**()<br /> Constructs a new module. * **parseText**(text: `string`): `Module`<br /> Creates a module from Binaryen's s-expression text format (not official stack-style text format). * **readBinary**(data: `Uint8Array`): `Module`<br /> Creates a module from binary data. ### Module manipulation * Module#**addFunction**(name: `string`, params: `Type`, results: `Type`, vars: `Type[]`, body: `ExpressionRef`): `FunctionRef`<br /> Adds a function. `vars` indicate additional locals, in the given order. * Module#**getFunction**(name: `string`): `FunctionRef`<br /> Gets a function, by name, * Module#**removeFunction**(name: `string`): `void`<br /> Removes a function, by name. * Module#**getNumFunctions**(): `number`<br /> Gets the number of functions within the module. * Module#**getFunctionByIndex**(index: `number`): `FunctionRef`<br /> Gets the function at the specified index. 
* Module#**addFunctionImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, params: `Type`, results: `Type`): `void`<br /> Adds a function import. * Module#**addTableImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a table import. There's just one table for now, using name `"0"`. * Module#**addMemoryImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a memory import. There's just one memory for now, using name `"0"`. * Module#**addGlobalImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, globalType: `Type`): `void`<br /> Adds a global variable import. Imported globals must be immutable. * Module#**addFunctionExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a function export. * Module#**addTableExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a table export. There's just one table for now, using name `"0"`. * Module#**addMemoryExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a memory export. There's just one memory for now, using name `"0"`. * Module#**addGlobalExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a global variable export. Exported globals must be immutable. * Module#**getNumExports**(): `number`<br /> Gets the number of exports witin the module. * Module#**getExportByIndex**(index: `number`): `ExportRef`<br /> Gets the export at the specified index. * Module#**removeExport**(externalName: `string`): `void`<br /> Removes an export, by external name. * Module#**addGlobal**(name: `string`, type: `Type`, mutable: `number`, value: `ExpressionRef`): `GlobalRef`<br /> Adds a global instance variable. * Module#**getGlobal**(name: `string`): `GlobalRef`<br /> Gets a global, by name, * Module#**removeGlobal**(name: `string`): `void`<br /> Removes a global, by name. * Module#**setFunctionTable**(initial: `number`, maximum: `number`, funcs: `string[]`, offset?: `ExpressionRef`): `void`<br /> Sets the contents of the function table. There's just one table for now, using name `"0"`. * Module#**getFunctionTable**(): `{ imported: boolean, segments: TableElement[] }`<br /> Gets the contents of the function table. * TableElement#**offset**: `ExpressionRef` * TableElement#**names**: `string[]` * Module#**setMemory**(initial: `number`, maximum: `number`, exportName: `string | null`, segments: `MemorySegment[]`, flags?: `number[]`, shared?: `boolean`): `void`<br /> Sets the memory. There's just one memory for now, using name `"0"`. Providing `exportName` also creates a memory export. * MemorySegment#**offset**: `ExpressionRef` * MemorySegment#**data**: `Uint8Array` * MemorySegment#**passive**: `boolean` * Module#**getNumMemorySegments**(): `number`<br /> Gets the number of memory segments within the module. * Module#**getMemorySegmentInfoByIndex**(index: `number`): `MemorySegmentInfo`<br /> Gets information about the memory segment at the specified index. * MemorySegmentInfo#**offset**: `number` * MemorySegmentInfo#**data**: `Uint8Array` * MemorySegmentInfo#**passive**: `boolean` * Module#**setStart**(start: `FunctionRef`): `void`<br /> Sets the start function. * Module#**getFeatures**(): `Features`<br /> Gets the WebAssembly features enabled for this module. Note that the return value may be a bitmask indicating multiple features. 
Possible feature flags are: * Features.**MVP**: `Features` * Features.**Atomics**: `Features` * Features.**BulkMemory**: `Features` * Features.**MutableGlobals**: `Features` * Features.**NontrappingFPToInt**: `Features` * Features.**SignExt**: `Features` * Features.**SIMD128**: `Features` * Features.**ExceptionHandling**: `Features` * Features.**TailCall**: `Features` * Features.**ReferenceTypes**: `Features` * Features.**Multivalue**: `Features` * Features.**All**: `Features` * Module#**setFeatures**(features: `Features`): `void`<br /> Sets the WebAssembly features enabled for this module. * Module#**addCustomSection**(name: `string`, contents: `Uint8Array`): `void`<br /> Adds a custom section to the binary. * Module#**autoDrop**(): `void`<br /> Enables automatic insertion of `drop` operations where needed. Lets you not worry about dropping when creating your code. * **getFunctionInfo**(ftype: `FunctionRef`: `FunctionInfo`<br /> Obtains information about a function. * FunctionInfo#**name**: `string` * FunctionInfo#**module**: `string | null` (if imported) * FunctionInfo#**base**: `string | null` (if imported) * FunctionInfo#**params**: `Type` * FunctionInfo#**results**: `Type` * FunctionInfo#**vars**: `Type` * FunctionInfo#**body**: `ExpressionRef` * **getGlobalInfo**(global: `GlobalRef`): `GlobalInfo`<br /> Obtains information about a global. * GlobalInfo#**name**: `string` * GlobalInfo#**module**: `string | null` (if imported) * GlobalInfo#**base**: `string | null` (if imported) * GlobalInfo#**type**: `Type` * GlobalInfo#**mutable**: `boolean` * GlobalInfo#**init**: `ExpressionRef` * **getExportInfo**(export_: `ExportRef`): `ExportInfo`<br /> Obtains information about an export. * ExportInfo#**kind**: `ExternalKind` * ExportInfo#**name**: `string` * ExportInfo#**value**: `string` Possible `ExternalKind` values are: * **ExternalFunction**: `ExternalKind` * **ExternalTable**: `ExternalKind` * **ExternalMemory**: `ExternalKind` * **ExternalGlobal**: `ExternalKind` * **ExternalEvent**: `ExternalKind` * **getEventInfo**(event: `EventRef`): `EventInfo`<br /> Obtains information about an event. * EventInfo#**name**: `string` * EventInfo#**module**: `string | null` (if imported) * EventInfo#**base**: `string | null` (if imported) * EventInfo#**attribute**: `number` * EventInfo#**params**: `Type` * EventInfo#**results**: `Type` * **getSideEffects**(expr: `ExpressionRef`, features: `FeatureFlags`): `SideEffects`<br /> Gets the side effects of the specified expression. * SideEffects.**None**: `SideEffects` * SideEffects.**Branches**: `SideEffects` * SideEffects.**Calls**: `SideEffects` * SideEffects.**ReadsLocal**: `SideEffects` * SideEffects.**WritesLocal**: `SideEffects` * SideEffects.**ReadsGlobal**: `SideEffects` * SideEffects.**WritesGlobal**: `SideEffects` * SideEffects.**ReadsMemory**: `SideEffects` * SideEffects.**WritesMemory**: `SideEffects` * SideEffects.**ImplicitTrap**: `SideEffects` * SideEffects.**IsAtomic**: `SideEffects` * SideEffects.**Throws**: `SideEffects` * SideEffects.**Any**: `SideEffects` ### Module validation * Module#**validate**(): `boolean`<br /> Validates the module. Returns `true` if valid, otherwise prints validation errors and returns `false`. ### Module optimization * Module#**optimize**(): `void`<br /> Optimizes the module using the default optimization passes. * Module#**optimizeFunction**(func: `FunctionRef | string`): `void`<br /> Optimizes a single function using the default optimization passes. 
* Module#**runPasses**(passes: `string[]`): `void`<br /> Runs the specified passes on the module. * Module#**runPassesOnFunction**(func: `FunctionRef | string`, passes: `string[]`): `void`<br /> Runs the specified passes on a single function. * **getOptimizeLevel**(): `number`<br /> Gets the currently set optimize level. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **setOptimizeLevel**(level: `number`): `void`<br /> Sets the optimization level to use. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **getShrinkLevel**(): `number`<br /> Gets the currently set shrink level. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **setShrinkLevel**(level: `number`): `void`<br /> Sets the shrink level to use. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **getDebugInfo**(): `boolean`<br /> Gets whether generating debug information is currently enabled or not. * **setDebugInfo**(on: `boolean`): `void`<br /> Enables or disables debug information in emitted binaries. * **getLowMemoryUnused**(): `boolean`<br /> Gets whether the low 1K of memory can be considered unused when optimizing. * **setLowMemoryUnused**(on: `boolean`): `void`<br /> Enables or disables whether the low 1K of memory can be considered unused when optimizing. * **getPassArgument**(key: `string`): `string | null`<br /> Gets the value of the specified arbitrary pass argument. * **setPassArgument**(key: `string`, value: `string | null`): `void`<br /> Sets the value of the specified arbitrary pass argument. Removes the respective argument if `value` is `null`. * **clearPassArguments**(): `void`<br /> Clears all arbitrary pass arguments. * **getAlwaysInlineMaxSize**(): `number`<br /> Gets the function size at which we always inline. * **setAlwaysInlineMaxSize**(size: `number`): `void`<br /> Sets the function size at which we always inline. * **getFlexibleInlineMaxSize**(): `number`<br /> Gets the function size which we inline when functions are lightweight. * **setFlexibleInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when functions are lightweight. * **getOneCallerInlineMaxSize**(): `number`<br /> Gets the function size which we inline when there is only one caller. * **setOneCallerInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when there is only one caller. ### Module creation * Module#**emitBinary**(): `Uint8Array`<br /> Returns the module in binary format. * Module#**emitBinary**(sourceMapUrl: `string | null`): `BinaryWithSourceMap`<br /> Returns the module in binary format with its source map. If `sourceMapUrl` is `null`, source map generation is skipped. * BinaryWithSourceMap#**binary**: `Uint8Array` * BinaryWithSourceMap#**sourceMap**: `string | null` * Module#**emitText**(): `string`<br /> Returns the module in Binaryen's s-expression text format (not official stack-style text format). * Module#**emitAsmjs**(): `string`<br /> Returns the [asm.js](http://asmjs.org/) representation of the module. * Module#**dispose**(): `void`<br /> Releases the resources held by the module once it isn't needed anymore. ### Expression construction #### [Control flow](http://webassembly.org/docs/semantics/#control-constructs-and-instructions) * Module#**block**(label: `string | null`, children: `ExpressionRef[]`, resultType?: `Type`): `ExpressionRef`<br /> Creates a block. `resultType` defaults to `none`. 
* Module#**if**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse?: `ExpressionRef`): `ExpressionRef`<br /> Creates an if or if/else combination. * Module#**loop**(label: `string | null`, body: `ExpressionRef`): `ExpressionRef`<br /> Creates a loop. * Module#**br**(label: `string`, condition?: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a branch (br) to a label. * Module#**switch**(labels: `string[]`, defaultLabel: `string`, condition: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a switch (br_table). * Module#**nop**(): `ExpressionRef`<br /> Creates a no-operation (nop) instruction. * Module#**return**(value?: `ExpressionRef`): `ExpressionRef` Creates a return. * Module#**unreachable**(): `ExpressionRef`<br /> Creates an [unreachable](http://webassembly.org/docs/semantics/#unreachable) instruction that will always trap. * Module#**drop**(value: `ExpressionRef`): `ExpressionRef`<br /> Creates a [drop](http://webassembly.org/docs/semantics/#type-parametric-operators) of a value. * Module#**select**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse: `ExpressionRef`, type?: `Type`): `ExpressionRef`<br /> Creates a [select](http://webassembly.org/docs/semantics/#type-parametric-operators) of one of two values. #### [Variable accesses](http://webassembly.org/docs/semantics/#local-variables) * Module#**local.get**(index: `number`, type: `Type`): `ExpressionRef`<br /> Creates a local.get for the local at the specified index. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**local.set**(index: `number`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a local.set for the local at the specified index. * Module#**local.tee**(index: `number`, value: `ExpressionRef`, type: `Type`): `ExpressionRef`<br /> Creates a local.tee for the local at the specified index. A tee differs from a set in that the value remains on the stack. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**global.get**(name: `string`, type: `Type`): `ExpressionRef`<br /> Creates a global.get for the global with the specified name. Note that we must specify the type here as we may not have created the global being accessed yet. * Module#**global.set**(name: `string`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a global.set for the global with the specified name. 
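Taken together, the control-flow and variable-access builders above (plus `i32.add` from the integer operations listed next) are what you use to assemble function bodies. The sketch below is illustrative only: it assumes the `binaryen` npm package and the `Module` constructor, `addFunction`/`addFunctionExport` and `createType` helpers covered elsewhere in this reference, and mirrors the library's introductory example rather than prescribing an API.

```javascript
// Illustrative sketch (assumes `npm install binaryen`): build a module with a
// single exported "adder" function using the expression builders above.
var binaryen = require('binaryen');

var module = new binaryen.Module();

// (func $adder (param i32 i32) (result i32) (local i32) ...)
module.addFunction('adder',
  binaryen.createType([binaryen.i32, binaryen.i32]), // params
  binaryen.i32,                                      // result
  [binaryen.i32],                                    // one extra local
  module.block(null, [
    // local 2 = local 0 + local 1
    module.local.set(2,
      module.i32.add(
        module.local.get(0, binaryen.i32),
        module.local.get(1, binaryen.i32)
      )
    ),
    module.return(module.local.get(2, binaryen.i32))
  ])
);
module.addFunctionExport('adder', 'adder');

module.optimize();                 // default optimization passes
if (!module.validate()) throw new Error('validation failed');
console.log(module.emitText());    // s-expression text format
```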
#### [Integer operations](http://webassembly.org/docs/semantics/#32-bit-integer-operators) * Module#i32.**const**(value: `number`): `ExpressionRef` * Module#i32.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i64.**const**(low: `number`, high: `number`): `ExpressionRef` * Module#i64.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` 
* Module#i64.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Floating point operations](http://webassembly.org/docs/semantics/#floating-point-operators) * Module#f32.**const**(value: `number`): `ExpressionRef` * Module#f32.**const_bits**(value: `number`): `ExpressionRef` * Module#f32.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#f64.**const**(value: `number`): `ExpressionRef` * Module#f64.**const_bits**(value: `number`): `ExpressionRef` * Module#f64.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**ceil**(value: `ExpressionRef`): `ExpressionRef` * 
Module#f64.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Datatype conversions](http://webassembly.org/docs/semantics/#datatype-conversions-truncations-reinterpretations-promotions-and-demotions) * Module#i32.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**wrap**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**demote**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**promote**(value: `ExpressionRef`): `ExpressionRef` #### [Function calls](http://webassembly.org/docs/semantics/#calls) * Module#**call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef` Creates a call to a function. 
Note that we must specify the return type here as we may not have created the function being called yet. * Module#**return_call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef`<br /> Like **call**, but creates a tail-call. 🦄 * Module#**call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Similar to **call**, but calls indirectly, i.e., via a function pointer, so an expression replaces the name as the called value. * Module#**return_call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Like **call_indirect**, but creates a tail-call. 🦄 #### [Linear memory accesses](http://webassembly.org/docs/semantics/#linear-memory-accesses) * Module#i32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> > * Module#i64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store32**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Host operations](http://webassembly.org/docs/semantics/#resizing) * Module#**memory.size**(): `ExpressionRef` * Module#**memory.grow**(value: `number`): `ExpressionRef` #### [Vector 
operations](https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md) 🦄 * Module#v128.**const**(bytes: `Uint8Array`): `ExpressionRef` * Module#v128.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#v128.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#v128.**not**(value: `ExpressionRef`): `ExpressionRef` * Module#v128.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**andnot**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**bitselect**(left: `ExpressionRef`, right: `ExpressionRef`, cond: `ExpressionRef`): `ExpressionRef` > * Module#i8x16.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_u**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i16x8.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_u**(value: 
`ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**dot_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): 
`ExpressionRef` * Module#i64x2.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#f32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**lt**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#v8x16.**shuffle**(left: `ExpressionRef`, right: `ExpressionRef`, mask: `Uint8Array`): `ExpressionRef` * Module#v8x16.**swizzle**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v8x16.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v16x8.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v32x4.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v64x2.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` #### [Atomic memory accesses](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#atomic-memory-accesses) 🦄 * Module#i32.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load32_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store32**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` 
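The instructions in sections marked 🦄 are post-MVP proposals and are only accepted by validation and emitted when the corresponding feature flags (see `Module#setFeatures` above) are enabled. Below is a hedged sketch; the address is a placeholder and the memory/import setup is omitted, so the fragment is illustrative rather than complete.

```javascript
// Illustrative sketch: enable post-MVP features before creating 🦄 instructions.
// Feature flags are bit flags, so several can be combined; memory setup omitted.
var binaryen = require('binaryen');

var module = new binaryen.Module();
module.setFeatures(binaryen.Features.Atomics | binaryen.Features.SignExt);

// With Atomics enabled, atomic accesses are built like their plain counterparts,
// e.g. an i32 atomic load from a (placeholder) constant address:
var load = module.i32.atomic.load(0, module.i32.const(8));

// Expressions report their result type regardless of validation state.
console.log(binaryen.getExpressionType(load) === binaryen.i32); // true
```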
#### [Atomic read-modify-write operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#read-modify-write) 🦄 * Module#i32.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.add**(offset: 
`number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` #### [Atomic wait and notify operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#wait-and-notify-operators) 🦄 * Module#i32.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#**atomic.notify**(ptr: `ExpressionRef`, notifyCount: `ExpressionRef`): `ExpressionRef` * Module#**atomic.fence**(): `ExpressionRef` #### [Sign extension operations](https://github.com/WebAssembly/sign-extension-ops/blob/master/proposals/sign-extension-ops/Overview.md) 🦄 * Module#i32.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend32_s**(value: `ExpressionRef`): `ExpressionRef` #### [Multi-value 
operations](https://github.com/WebAssembly/multi-value/blob/master/proposals/multi-value/Overview.md) 🦄 Note that these are pseudo instructions enabling Binaryen to reason about multiple values on the stack. * Module#**push**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**pop**(): `ExpressionRef` * Module#i64.**pop**(): `ExpressionRef` * Module#f32.**pop**(): `ExpressionRef` * Module#f64.**pop**(): `ExpressionRef` * Module#v128.**pop**(): `ExpressionRef` * Module#funcref.**pop**(): `ExpressionRef` * Module#anyref.**pop**(): `ExpressionRef` * Module#nullref.**pop**(): `ExpressionRef` * Module#exnref.**pop**(): `ExpressionRef` * Module#tuple.**make**(elements: `ExpressionRef[]`): `ExpressionRef` * Module#tuple.**extract**(tuple: `ExpressionRef`, index: `number`): `ExpressionRef` #### [Exception handling operations](https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md) 🦄 * Module#**try**(body: `ExpressionRef`, catchBody: `ExpressionRef`): `ExpressionRef` * Module#**throw**(event: `string`, operands: `ExpressionRef[]`): `ExpressionRef` * Module#**rethrow**(exnref: `ExpressionRef`): `ExpressionRef` * Module#**br_on_exn**(label: `string`, event: `string`, exnref: `ExpressionRef`): `ExpressionRef` > * Module#**addEvent**(name: `string`, attribute: `number`, params: `Type`, results: `Type`): `Event` * Module#**getEvent**(name: `string`): `Event` * Module#**removeEvent**(name: `string`): `void` * Module#**addEventImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, attribute: `number`, params: `Type`, results: `Type`): `void` * Module#**addEventExport**(internalName: `string`, externalName: `string`): `ExportRef` #### [Reference types operations](https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md) 🦄 * Module#ref.**null**(): `ExpressionRef` * Module#ref.**is_null**(value: `ExpressionRef`): `ExpressionRef` * Module#ref.**func**(name: `string`): `ExpressionRef` ### Expression manipulation * **getExpressionId**(expr: `ExpressionRef`): `ExpressionId`<br /> Gets the id (kind) of the specified expression.
Possible values are: * **InvalidId**: `ExpressionId` * **BlockId**: `ExpressionId` * **IfId**: `ExpressionId` * **LoopId**: `ExpressionId` * **BreakId**: `ExpressionId` * **SwitchId**: `ExpressionId` * **CallId**: `ExpressionId` * **CallIndirectId**: `ExpressionId` * **LocalGetId**: `ExpressionId` * **LocalSetId**: `ExpressionId` * **GlobalGetId**: `ExpressionId` * **GlobalSetId**: `ExpressionId` * **LoadId**: `ExpressionId` * **StoreId**: `ExpressionId` * **ConstId**: `ExpressionId` * **UnaryId**: `ExpressionId` * **BinaryId**: `ExpressionId` * **SelectId**: `ExpressionId` * **DropId**: `ExpressionId` * **ReturnId**: `ExpressionId` * **HostId**: `ExpressionId` * **NopId**: `ExpressionId` * **UnreachableId**: `ExpressionId` * **AtomicCmpxchgId**: `ExpressionId` * **AtomicRMWId**: `ExpressionId` * **AtomicWaitId**: `ExpressionId` * **AtomicNotifyId**: `ExpressionId` * **AtomicFenceId**: `ExpressionId` * **SIMDExtractId**: `ExpressionId` * **SIMDReplaceId**: `ExpressionId` * **SIMDShuffleId**: `ExpressionId` * **SIMDTernaryId**: `ExpressionId` * **SIMDShiftId**: `ExpressionId` * **SIMDLoadId**: `ExpressionId` * **MemoryInitId**: `ExpressionId` * **DataDropId**: `ExpressionId` * **MemoryCopyId**: `ExpressionId` * **MemoryFillId**: `ExpressionId` * **RefNullId**: `ExpressionId` * **RefIsNullId**: `ExpressionId` * **RefFuncId**: `ExpressionId` * **TryId**: `ExpressionId` * **ThrowId**: `ExpressionId` * **RethrowId**: `ExpressionId` * **BrOnExnId**: `ExpressionId` * **PushId**: `ExpressionId` * **PopId**: `ExpressionId` * **getExpressionType**(expr: `ExpressionRef`): `Type`<br /> Gets the type of the specified expression. * **getExpressionInfo**(expr: `ExpressionRef`): `ExpressionInfo`<br /> Obtains information about an expression, always including: * Info#**id**: `ExpressionId` * Info#**type**: `Type` Additional properties depend on the expression's `id` and are usually equivalent to the respective parameters when creating such an expression: * BlockInfo#**name**: `string` * BlockInfo#**children**: `ExpressionRef[]` > * IfInfo#**condition**: `ExpressionRef` * IfInfo#**ifTrue**: `ExpressionRef` * IfInfo#**ifFalse**: `ExpressionRef | null` > * LoopInfo#**name**: `string` * LoopInfo#**body**: `ExpressionRef` > * BreakInfo#**name**: `string` * BreakInfo#**condition**: `ExpressionRef | null` * BreakInfo#**value**: `ExpressionRef | null` > * SwitchInfo#**names**: `string[]` * SwitchInfo#**defaultName**: `string | null` * SwitchInfo#**condition**: `ExpressionRef` * SwitchInfo#**value**: `ExpressionRef | null` > * CallInfo#**target**: `string` * CallInfo#**operands**: `ExpressionRef[]` > * CallImportInfo#**target**: `string` * CallImportInfo#**operands**: `ExpressionRef[]` > * CallIndirectInfo#**target**: `ExpressionRef` * CallIndirectInfo#**operands**: `ExpressionRef[]` > * LocalGetInfo#**index**: `number` > * LocalSetInfo#**isTee**: `boolean` * LocalSetInfo#**index**: `number` * LocalSetInfo#**value**: `ExpressionRef` > * GlobalGetInfo#**name**: `string` > * GlobalSetInfo#**name**: `string` * GlobalSetInfo#**value**: `ExpressionRef` > * LoadInfo#**isAtomic**: `boolean` * LoadInfo#**isSigned**: `boolean` * LoadInfo#**offset**: `number` * LoadInfo#**bytes**: `number` * LoadInfo#**align**: `number` * LoadInfo#**ptr**: `ExpressionRef` > * StoreInfo#**isAtomic**: `boolean` * StoreInfo#**offset**: `number` * StoreInfo#**bytes**: `number` * StoreInfo#**align**: `number` * StoreInfo#**ptr**: `ExpressionRef` * StoreInfo#**value**: `ExpressionRef` > * ConstInfo#**value**: `number | { low: number, high: number 
}` > * UnaryInfo#**op**: `number` * UnaryInfo#**value**: `ExpressionRef` > * BinaryInfo#**op**: `number` * BinaryInfo#**left**: `ExpressionRef` * BinaryInfo#**right**: `ExpressionRef` > * SelectInfo#**ifTrue**: `ExpressionRef` * SelectInfo#**ifFalse**: `ExpressionRef` * SelectInfo#**condition**: `ExpressionRef` > * DropInfo#**value**: `ExpressionRef` > * ReturnInfo#**value**: `ExpressionRef | null` > * NopInfo > * UnreachableInfo > * HostInfo#**op**: `number` * HostInfo#**nameOperand**: `string | null` * HostInfo#**operands**: `ExpressionRef[]` > * AtomicRMWInfo#**op**: `number` * AtomicRMWInfo#**bytes**: `number` * AtomicRMWInfo#**offset**: `number` * AtomicRMWInfo#**ptr**: `ExpressionRef` * AtomicRMWInfo#**value**: `ExpressionRef` > * AtomicCmpxchgInfo#**bytes**: `number` * AtomicCmpxchgInfo#**offset**: `number` * AtomicCmpxchgInfo#**ptr**: `ExpressionRef` * AtomicCmpxchgInfo#**expected**: `ExpressionRef` * AtomicCmpxchgInfo#**replacement**: `ExpressionRef` > * AtomicWaitInfo#**ptr**: `ExpressionRef` * AtomicWaitInfo#**expected**: `ExpressionRef` * AtomicWaitInfo#**timeout**: `ExpressionRef` * AtomicWaitInfo#**expectedType**: `Type` > * AtomicNotifyInfo#**ptr**: `ExpressionRef` * AtomicNotifyInfo#**notifyCount**: `ExpressionRef` > * AtomicFenceInfo > * SIMDExtractInfo#**op**: `Op` * SIMDExtractInfo#**vec**: `ExpressionRef` * SIMDExtractInfo#**index**: `ExpressionRef` > * SIMDReplaceInfo#**op**: `Op` * SIMDReplaceInfo#**vec**: `ExpressionRef` * SIMDReplaceInfo#**index**: `ExpressionRef` * SIMDReplaceInfo#**value**: `ExpressionRef` > * SIMDShuffleInfo#**left**: `ExpressionRef` * SIMDShuffleInfo#**right**: `ExpressionRef` * SIMDShuffleInfo#**mask**: `Uint8Array` > * SIMDTernaryInfo#**op**: `Op` * SIMDTernaryInfo#**a**: `ExpressionRef` * SIMDTernaryInfo#**b**: `ExpressionRef` * SIMDTernaryInfo#**c**: `ExpressionRef` > * SIMDShiftInfo#**op**: `Op` * SIMDShiftInfo#**vec**: `ExpressionRef` * SIMDShiftInfo#**shift**: `ExpressionRef` > * SIMDLoadInfo#**op**: `Op` * SIMDLoadInfo#**offset**: `number` * SIMDLoadInfo#**align**: `number` * SIMDLoadInfo#**ptr**: `ExpressionRef` > * MemoryInitInfo#**segment**: `number` * MemoryInitInfo#**dest**: `ExpressionRef` * MemoryInitInfo#**offset**: `ExpressionRef` * MemoryInitInfo#**size**: `ExpressionRef` > * MemoryDropInfo#**segment**: `number` > * MemoryCopyInfo#**dest**: `ExpressionRef` * MemoryCopyInfo#**source**: `ExpressionRef` * MemoryCopyInfo#**size**: `ExpressionRef` > * MemoryFillInfo#**dest**: `ExpressionRef` * MemoryFillInfo#**value**: `ExpressionRef` * MemoryFillInfo#**size**: `ExpressionRef` > * TryInfo#**body**: `ExpressionRef` * TryInfo#**catchBody**: `ExpressionRef` > * RefNullInfo > * RefIsNullInfo#**value**: `ExpressionRef` > * RefFuncInfo#**func**: `string` > * ThrowInfo#**event**: `string` * ThrowInfo#**operands**: `ExpressionRef[]` > * RethrowInfo#**exnref**: `ExpressionRef` > * BrOnExnInfo#**name**: `string` * BrOnExnInfo#**event**: `string` * BrOnExnInfo#**exnref**: `ExpressionRef` > * PopInfo > * PushInfo#**value**: `ExpressionRef` * **emitText**(expression: `ExpressionRef`): `string`<br /> Emits the expression in Binaryen's s-expression text format (not official stack-style text format). * **copyExpression**(expression: `ExpressionRef`): `ExpressionRef`<br /> Creates a deep copy of an expression. ### Relooper * new **Relooper**()<br /> Constructs a relooper instance. This lets you provide an arbitrary CFG, and the relooper will structure it for WebAssembly. 
* Relooper#**addBlock**(code: `ExpressionRef`): `RelooperBlockRef`<br />
  Adds a new block to the CFG, containing the provided code as its body.
* Relooper#**addBranch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, condition: `ExpressionRef`, code: `ExpressionRef`): `void`<br />
  Adds a branch from a block to another block, with a condition (or nothing, if this is the default branch to take from the origin - each block must have one such branch), and optional code to execute on the branch (useful for phis).
* Relooper#**addBlockWithSwitch**(code: `ExpressionRef`, condition: `ExpressionRef`): `RelooperBlockRef`<br />
  Adds a new block, which ends with a switch/br_table, with provided code and condition (that determines where we go in the switch).
* Relooper#**addBranchForSwitch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, indexes: `number[]`, code: `ExpressionRef`): `void`<br />
  Adds a branch from a block ending in a switch, to another block, using an array of indexes that determine where to go, and optional code to execute on the branch.
* Relooper#**renderAndDispose**(entry: `RelooperBlockRef`, labelHelper: `number`, module: `Module`): `ExpressionRef`<br />
  Renders and cleans up the Relooper instance. Call this after you have created all the blocks and branches, giving it the entry block (where control flow begins), a label helper variable (an index of a local we can use, necessary for irreducible control flow), and the module. This returns an expression - normal WebAssembly code - that you can use normally anywhere.

### Source maps

* Module#**addDebugInfoFileName**(filename: `string`): `number`<br />
  Adds a debug info file name to the module and returns its index.
* Module#**getDebugInfoFileName**(index: `number`): `string | null`<br />
  Gets the name of the debug info file at the specified index.
* Module#**setDebugLocation**(func: `FunctionRef`, expr: `ExpressionRef`, fileIndex: `number`, lineNumber: `number`, columnNumber: `number`): `void`<br />
  Sets the debug location of the specified `ExpressionRef` within the specified `FunctionRef`.

### Debugging

* Module#**interpret**(): `void`<br />
  Runs the module in the interpreter, calling the start function.

discontinuous-range
===================

```
DiscontinuousRange(1, 10).subtract(4, 6); // [ 1-3, 7-10 ]
```

[![Build Status](https://travis-ci.org/dtudury/discontinuous-range.png)](https://travis-ci.org/dtudury/discontinuous-range)

This is a pretty simple module that exists to service another project, so its documentation is sparse. Reading the tests to see how it works may help; otherwise, here's an example that pretty much sums it up.

### Example

```
var all_numbers = new DiscontinuousRange(1, 100);
var bad_numbers = DiscontinuousRange(13).add(8).add(60, 80);
var good_numbers = all_numbers.clone().subtract(bad_numbers);
console.log(good_numbers.toString()); //[ 1-7, 9-12, 14-59, 81-100 ]
var random_good_number = good_numbers.index(Math.floor(Math.random() * good_numbers.length));
```

iMurmurHash.js
==============

An incremental implementation of the MurmurHash3 (32-bit) hashing algorithm for JavaScript based on [Gary Court's implementation](https://github.com/garycourt/murmurhash-js) with [kazuyukitanimura's modifications](https://github.com/kazuyukitanimura/murmurhash-js).
This version works significantly faster than the non-incremental version if you need to hash many small strings into a single hash, since string concatenation (to build the single string to pass the non-incremental version) is fairly costly. In one case tested, using the incremental version was about 50% faster than concatenating 5-10 strings and then hashing.

Installation
------------

To use iMurmurHash in the browser, [download the latest version](https://raw.github.com/jensyt/imurmurhash-js/master/imurmurhash.min.js) and include it as a script on your site.

```html
<script type="text/javascript" src="/scripts/imurmurhash.min.js"></script>
<script>
// Your code here, access iMurmurHash using the global object MurmurHash3
</script>
```

---

To use iMurmurHash in Node.js, install the module using NPM:

```bash
npm install imurmurhash
```

Then simply include it in your scripts:

```javascript
MurmurHash3 = require('imurmurhash');
```

Quick Example
-------------

```javascript
// Create the initial hash
var hashState = MurmurHash3('string');

// Incrementally add text
hashState.hash('more strings');
hashState.hash('even more strings');

// All calls can be chained if desired
hashState.hash('and').hash('some').hash('more');

// Get a result
hashState.result(); // returns 0xe4ccfe6b
```

Functions
---------

### MurmurHash3 ([string], [seed])

Get a hash state object, optionally initialized with the given _string_ and _seed_. _Seed_ must be a positive integer if provided. Calling this function without the `new` keyword will return a cached state object that has been reset. This is safe to use as long as the object is only used from a single thread and no other hashes are created while operating on this one. If this constraint cannot be met, you can use `new` to create a new state object. For example:

```javascript
// Use the cached object, calling the function again will return the same
// object (but reset, so the current state would be lost)
hashState = MurmurHash3();
...

// Create a new object that can be safely used however you wish. Calling the
// function again will simply return a new state object, and no state loss
// will occur, at the cost of creating more objects.
hashState = new MurmurHash3();
```

Both methods can be mixed however you like if you have different use cases.

---

### MurmurHash3.prototype.hash (string)

Incrementally add _string_ to the hash. This can be called as many times as you want for the hash state object, including after a call to `result()`. Returns `this` so calls can be chained.

---

### MurmurHash3.prototype.result ()

Get the result of the hash as a 32-bit positive integer. This performs the tail and finalizer portions of the algorithm, but does not store the result in the state object. This means that it is perfectly safe to get results and then continue adding strings via `hash`.

```javascript
// Do the whole string at once
MurmurHash3('this is a test string').result(); // 0x70529328

// Do part of the string, get a result, then the other part
var m = MurmurHash3('this is a');
m.result(); // 0xbfc4f834
m.hash(' test string').result(); // 0x70529328 (same as above)
```

---

### MurmurHash3.prototype.reset ([seed])

Reset the state object for reuse, optionally using the given _seed_ (defaults to 0 like the constructor). Returns `this` so calls can be chained.
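The functions above compose naturally when the input arrives in pieces. Here is a short sketch using only the API documented above; the chunk contents are made up purely for illustration:

```javascript
var MurmurHash3 = require('imurmurhash');

// Hypothetical chunked input, e.g. pieces of a larger string received over time.
var chunks = ['user:42|', 'action:login|', 'ts:1700000000'];

// Hash the chunks incrementally instead of concatenating them first.
var state = new MurmurHash3();
chunks.forEach(function (chunk) {
  state.hash(chunk);
});
var key = state.result(); // 32-bit positive integer

// Reuse the same state object for the next record (optionally with a seed).
state.reset();
var nextKey = state.hash('user:43|').hash('action:logout|').result();
```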
--- License (MIT) ------------- Copyright (c) 2013 Gary Court, Jens Taylor Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # core-util-is The `util.is*` functions introduced in Node v0.12. <!-- -- This file is auto-generated from README_js.md. Changes should be made there. --> # uuid [![CI](https://github.com/uuidjs/uuid/workflows/CI/badge.svg)](https://github.com/uuidjs/uuid/actions?query=workflow%3ACI) [![Browser](https://github.com/uuidjs/uuid/workflows/Browser/badge.svg)](https://github.com/uuidjs/uuid/actions?query=workflow%3ABrowser) For the creation of [RFC4122](http://www.ietf.org/rfc/rfc4122.txt) UUIDs - **Complete** - Support for RFC4122 version 1, 3, 4, and 5 UUIDs - **Cross-platform** - Support for ... - CommonJS, [ECMAScript Modules](#ecmascript-modules) and [CDN builds](#cdn-builds) - Node 8, 10, 12, 14 - Chrome, Safari, Firefox, Edge, IE 11 browsers - Webpack and rollup.js module bundlers - [React Native / Expo](#react-native--expo) - **Secure** - Cryptographically-strong random values - **Small** - Zero-dependency, small footprint, plays nice with "tree shaking" packagers - **CLI** - Includes the [`uuid` command line](#command-line) utility **Upgrading from `uuid@3.x`?** Your code is probably okay, but check out [Upgrading From `uuid@3.x`](#upgrading-from-uuid3x) for details. ## Quickstart To create a random UUID... **1. Install** ```shell npm install uuid ``` **2. Create a UUID** (ES6 module syntax) ```javascript import { v4 as uuidv4 } from 'uuid'; uuidv4(); // ⇨ '9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d' ``` ... or using CommonJS syntax: ```javascript const { v4: uuidv4 } = require('uuid'); uuidv4(); // ⇨ '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed' ``` For timestamp UUIDs, namespace UUIDs, and other options read on ... 
## API Summary | | | | | --- | --- | --- | | [`uuid.NIL`](#uuidnil) | The nil UUID string (all zeros) | New in `uuid@8.3` | | [`uuid.parse()`](#uuidparsestr) | Convert UUID string to array of bytes | New in `uuid@8.3` | | [`uuid.stringify()`](#uuidstringifyarr-offset) | Convert array of bytes to UUID string | New in `uuid@8.3` | | [`uuid.v1()`](#uuidv1options-buffer-offset) | Create a version 1 (timestamp) UUID | | | [`uuid.v3()`](#uuidv3name-namespace-buffer-offset) | Create a version 3 (namespace w/ MD5) UUID | | | [`uuid.v4()`](#uuidv4options-buffer-offset) | Create a version 4 (random) UUID | | | [`uuid.v5()`](#uuidv5name-namespace-buffer-offset) | Create a version 5 (namespace w/ SHA-1) UUID | | | [`uuid.validate()`](#uuidvalidatestr) | Test a string to see if it is a valid UUID | New in `uuid@8.3` | | [`uuid.version()`](#uuidversionstr) | Detect RFC version of a UUID | New in `uuid@8.3` | ## API ### uuid.NIL The nil UUID string (all zeros). Example: ```javascript import { NIL as NIL_UUID } from 'uuid'; NIL_UUID; // ⇨ '00000000-0000-0000-0000-000000000000' ``` ### uuid.parse(str) Convert UUID string to array of bytes | | | | --------- | ---------------------------------------- | | `str` | A valid UUID `String` | | _returns_ | `Uint8Array[16]` | | _throws_ | `TypeError` if `str` is not a valid UUID | Note: Ordering of values in the byte arrays used by `parse()` and `stringify()` follows the left &Rarr; right order of hex-pairs in UUID strings. As shown in the example below. Example: ```javascript import { parse as uuidParse } from 'uuid'; // Parse a UUID const bytes = uuidParse('6ec0bd7f-11c0-43da-975e-2a8ad9ebae0b'); // Convert to hex strings to show byte order (for documentation purposes) [...bytes].map((v) => v.toString(16).padStart(2, '0')); // ⇨ // [ // '6e', 'c0', 'bd', '7f', // '11', 'c0', '43', 'da', // '97', '5e', '2a', '8a', // 'd9', 'eb', 'ae', '0b' // ] ``` ### uuid.stringify(arr[, offset]) Convert array of bytes to UUID string | | | | -------------- | ---------------------------------------------------------------------------- | | `arr` | `Array`-like collection of 16 values (starting from `offset`) between 0-255. | | [`offset` = 0] | `Number` Starting index in the Array | | _returns_ | `String` | | _throws_ | `TypeError` if a valid UUID string cannot be generated | Note: Ordering of values in the byte arrays used by `parse()` and `stringify()` follows the left &Rarr; right order of hex-pairs in UUID strings. As shown in the example below. 
Example: ```javascript import { stringify as uuidStringify } from 'uuid'; const uuidBytes = [ 0x6e, 0xc0, 0xbd, 0x7f, 0x11, 0xc0, 0x43, 0xda, 0x97, 0x5e, 0x2a, 0x8a, 0xd9, 0xeb, 0xae, 0x0b, ]; uuidStringify(uuidBytes); // ⇨ '6ec0bd7f-11c0-43da-975e-2a8ad9ebae0b' ``` ### uuid.v1([options[, buffer[, offset]]]) Create an RFC version 1 (timestamp) UUID | | | | --- | --- | | [`options`] | `Object` with one or more of the following properties: | | [`options.node` ] | RFC "node" field as an `Array[6]` of byte values (per 4.1.6) | | [`options.clockseq`] | RFC "clock sequence" as a `Number` between 0 - 0x3fff | | [`options.msecs`] | RFC "timestamp" field (`Number` of milliseconds, unix epoch) | | [`options.nsecs`] | RFC "timestamp" field (`Number` of nanseconds to add to `msecs`, should be 0-10,000) | | [`options.random`] | `Array` of 16 random bytes (0-255) | | [`options.rng`] | Alternative to `options.random`, a `Function` that returns an `Array` of 16 random bytes (0-255) | | [`buffer`] | `Array \| Buffer` If specified, uuid will be written here in byte-form, starting at `offset` | | [`offset` = 0] | `Number` Index to start writing UUID bytes in `buffer` | | _returns_ | UUID `String` if no `buffer` is specified, otherwise returns `buffer` | | _throws_ | `Error` if more than 10M UUIDs/sec are requested | Note: The default [node id](https://tools.ietf.org/html/rfc4122#section-4.1.6) (the last 12 digits in the UUID) is generated once, randomly, on process startup, and then remains unchanged for the duration of the process. Note: `options.random` and `options.rng` are only meaningful on the very first call to `v1()`, where they may be passed to initialize the internal `node` and `clockseq` fields. Example: ```javascript import { v1 as uuidv1 } from 'uuid'; uuidv1(); // ⇨ '2c5ea4c0-4067-11e9-8bad-9b1deb4d3b7d' ``` Example using `options`: ```javascript import { v1 as uuidv1 } from 'uuid'; const v1options = { node: [0x01, 0x23, 0x45, 0x67, 0x89, 0xab], clockseq: 0x1234, msecs: new Date('2011-11-01').getTime(), nsecs: 5678, }; uuidv1(v1options); // ⇨ '710b962e-041c-11e1-9234-0123456789ab' ``` ### uuid.v3(name, namespace[, buffer[, offset]]) Create an RFC version 3 (namespace w/ MD5) UUID API is identical to `v5()`, but uses "v3" instead. &#x26a0;&#xfe0f; Note: Per the RFC, "_If backward compatibility is not an issue, SHA-1 [Version 5] is preferred_." 
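Although `v3()` has no example of its own above, its usage mirrors `v5()`. A minimal sketch (assuming the RFC `DNS` and `URL` namespaces are exposed on `v3` the same way they are on `v5`, i.e. as `v3.DNS` and `v3.URL`):

```javascript
import { v3 as uuidv3 } from 'uuid';

// Deterministic: the same name + namespace always yields the same v3 UUID.
uuidv3('hello.example.com', uuidv3.DNS);

// Any valid UUID string can serve as a custom namespace.
const MY_NAMESPACE = '1b671a64-40d5-491e-99b0-da01ff1f3341';
uuidv3('Hello, World!', MY_NAMESPACE);
```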
### uuid.v4([options[, buffer[, offset]]]) Create an RFC version 4 (random) UUID | | | | --- | --- | | [`options`] | `Object` with one or more of the following properties: | | [`options.random`] | `Array` of 16 random bytes (0-255) | | [`options.rng`] | Alternative to `options.random`, a `Function` that returns an `Array` of 16 random bytes (0-255) | | [`buffer`] | `Array \| Buffer` If specified, uuid will be written here in byte-form, starting at `offset` | | [`offset` = 0] | `Number` Index to start writing UUID bytes in `buffer` | | _returns_ | UUID `String` if no `buffer` is specified, otherwise returns `buffer` | Example: ```javascript import { v4 as uuidv4 } from 'uuid'; uuidv4(); // ⇨ '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed' ``` Example using predefined `random` values: ```javascript import { v4 as uuidv4 } from 'uuid'; const v4options = { random: [ 0x10, 0x91, 0x56, 0xbe, 0xc4, 0xfb, 0xc1, 0xea, 0x71, 0xb4, 0xef, 0xe1, 0x67, 0x1c, 0x58, 0x36, ], }; uuidv4(v4options); // ⇨ '109156be-c4fb-41ea-b1b4-efe1671c5836' ``` ### uuid.v5(name, namespace[, buffer[, offset]]) Create an RFC version 5 (namespace w/ SHA-1) UUID | | | | --- | --- | | `name` | `String \| Array` | | `namespace` | `String \| Array[16]` Namespace UUID | | [`buffer`] | `Array \| Buffer` If specified, uuid will be written here in byte-form, starting at `offset` | | [`offset` = 0] | `Number` Index to start writing UUID bytes in `buffer` | | _returns_ | UUID `String` if no `buffer` is specified, otherwise returns `buffer` | Note: The RFC `DNS` and `URL` namespaces are available as `v5.DNS` and `v5.URL`. Example with custom namespace: ```javascript import { v5 as uuidv5 } from 'uuid'; // Define a custom namespace. Readers, create your own using something like // https://www.uuidgenerator.net/ const MY_NAMESPACE = '1b671a64-40d5-491e-99b0-da01ff1f3341'; uuidv5('Hello, World!', MY_NAMESPACE); // ⇨ '630eb68f-e0fa-5ecc-887a-7c7a62614681' ``` Example with RFC `URL` namespace: ```javascript import { v5 as uuidv5 } from 'uuid'; uuidv5('https://www.w3.org/', uuidv5.URL); // ⇨ 'c106a26a-21bb-5538-8bf2-57095d1976c1' ``` ### uuid.validate(str) Test a string to see if it is a valid UUID | | | | --------- | --------------------------------------------------- | | `str` | `String` to validate | | _returns_ | `true` if string is a valid UUID, `false` otherwise | Example: ```javascript import { validate as uuidValidate } from 'uuid'; uuidValidate('not a UUID'); // ⇨ false uuidValidate('6ec0bd7f-11c0-43da-975e-2a8ad9ebae0b'); // ⇨ true ``` Using `validate` and `version` together it is possible to do per-version validation, e.g. validate for only v4 UUIds. ```javascript import { version as uuidVersion } from 'uuid'; import { validate as uuidValidate } from 'uuid'; function uuidValidateV4(uuid) { return uuidValidate(uuid) && uuidVersion(uuid) === 4; } const v1Uuid = 'd9428888-122b-11e1-b85c-61cd3cbb3210'; const v4Uuid = '109156be-c4fb-41ea-b1b4-efe1671c5836'; uuidValidateV4(v4Uuid); // ⇨ true uuidValidateV4(v1Uuid); // ⇨ false ``` ### uuid.version(str) Detect RFC version of a UUID | | | | --------- | ---------------------------------------- | | `str` | A valid UUID `String` | | _returns_ | `Number` The RFC version of the UUID | | _throws_ | `TypeError` if `str` is not a valid UUID | Example: ```javascript import { version as uuidVersion } from 'uuid'; uuidVersion('45637ec4-c85f-11ea-87d0-0242ac130003'); // ⇨ 1 uuidVersion('6ec0bd7f-11c0-43da-975e-2a8ad9ebae0b'); // ⇨ 4 ``` ## Command Line UUIDs can be generated from the command line using `uuid`. 
```shell $ uuid ddeb27fb-d9a0-4624-be4d-4615062daed4 ``` The default is to generate version 4 UUIDS, however the other versions are supported. Type `uuid --help` for details: ```shell $ uuid --help Usage: uuid uuid v1 uuid v3 <name> <namespace uuid> uuid v4 uuid v5 <name> <namespace uuid> uuid --help Note: <namespace uuid> may be "URL" or "DNS" to use the corresponding UUIDs defined by RFC4122 ``` ## ECMAScript Modules This library comes with [ECMAScript Modules](https://www.ecma-international.org/ecma-262/6.0/#sec-modules) (ESM) support for Node.js versions that support it ([example](./examples/node-esmodules/)) as well as bundlers like [rollup.js](https://rollupjs.org/guide/en/#tree-shaking) ([example](./examples/browser-rollup/)) and [webpack](https://webpack.js.org/guides/tree-shaking/) ([example](./examples/browser-webpack/)) (targeting both, Node.js and browser environments). ```javascript import { v4 as uuidv4 } from 'uuid'; uuidv4(); // ⇨ '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed' ``` To run the examples you must first create a dist build of this library in the module root: ```shell npm run build ``` ## CDN Builds ### ECMAScript Modules To load this module directly into modern browsers that [support loading ECMAScript Modules](https://caniuse.com/#feat=es6-module) you can make use of [jspm](https://jspm.org/): ```html <script type="module"> import { v4 as uuidv4 } from 'https://jspm.dev/uuid'; console.log(uuidv4()); // ⇨ '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed' </script> ``` ### UMD To load this module directly into older browsers you can use the [UMD (Universal Module Definition)](https://github.com/umdjs/umd) builds from any of the following CDNs: **Using [UNPKG](https://unpkg.com/uuid@latest/dist/umd/)**: ```html <script src="https://unpkg.com/uuid@latest/dist/umd/uuidv4.min.js"></script> ``` **Using [jsDelivr](https://cdn.jsdelivr.net/npm/uuid@latest/dist/umd/)**: ```html <script src="https://cdn.jsdelivr.net/npm/uuid@latest/dist/umd/uuidv4.min.js"></script> ``` **Using [cdnjs](https://cdnjs.com/libraries/uuid)**: ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/uuid/8.1.0/uuidv4.min.js"></script> ``` These CDNs all provide the same [`uuidv4()`](#uuidv4options-buffer-offset) method: ```html <script> uuidv4(); // ⇨ '55af1e37-0734-46d8-b070-a1e42e4fc392' </script> ``` Methods for the other algorithms ([`uuidv1()`](#uuidv1options-buffer-offset), [`uuidv3()`](#uuidv3name-namespace-buffer-offset) and [`uuidv5()`](#uuidv5name-namespace-buffer-offset)) are available from the files `uuidv1.min.js`, `uuidv3.min.js` and `uuidv5.min.js` respectively. ## "getRandomValues() not supported" This error occurs in environments where the standard [`crypto.getRandomValues()`](https://developer.mozilla.org/en-US/docs/Web/API/Crypto/getRandomValues) API is not supported. This issue can be resolved by adding an appropriate polyfill: ### React Native / Expo 1. Install [`react-native-get-random-values`](https://github.com/LinusU/react-native-get-random-values#readme) 1. Import it _before_ `uuid`. Since `uuid` might also appear as a transitive dependency of some other imports it's safest to just import `react-native-get-random-values` as the very first thing in your entry point: ```javascript import 'react-native-get-random-values'; import { v4 as uuidv4 } from 'uuid'; ``` Note: If you are using Expo, you must be using at least `react-native-get-random-values@1.5.0` and `expo@39.0.0`. 
### Web Workers / Service Workers (Edge <= 18)

[In Edge <= 18, Web Crypto is not supported in Web Workers or Service Workers](https://caniuse.com/#feat=cryptography) and we are not aware of a polyfill (let us know if you find one, please).

## Upgrading From `uuid@7.x`

### Only Named Exports Supported When Using with Node.js ESM

`uuid@7.x` did not come with native ECMAScript Module (ESM) support for Node.js. Importing it in Node.js ESM consequently imported the CommonJS source with a default export. This library now comes with true Node.js ESM support and only provides named exports.

Instead of doing:

```javascript
import uuid from 'uuid';
uuid.v4();
```

you will now have to use the named exports:

```javascript
import { v4 as uuidv4 } from 'uuid';
uuidv4();
```

### Deep Requires No Longer Supported

Deep requires like `require('uuid/v4')` [which have been deprecated in `uuid@7.x`](#deep-requires-now-deprecated) are no longer supported.

## Upgrading From `uuid@3.x`

"_Wait... what happened to `uuid@4.x` - `uuid@6.x`?!?_"

In order to avoid confusion with RFC [version 4](#uuidv4options-buffer-offset) and [version 5](#uuidv5name-namespace-buffer-offset) UUIDs, and a possible [version 6](http://gh.peabody.io/uuidv6/), releases 4 thru 6 of this module have been skipped.

### Deep Requires Now Deprecated

`uuid@3.x` encouraged the use of deep requires to minimize the bundle size of browser builds:

```javascript
const uuidv4 = require('uuid/v4'); // <== NOW DEPRECATED!
uuidv4();
```

As of `uuid@7.x` this library now provides ECMAScript modules builds, which allow packagers like Webpack and Rollup to do "tree-shaking" to remove dead code. Instead, use the `import` syntax:

```javascript
import { v4 as uuidv4 } from 'uuid';
uuidv4();
```

... or for CommonJS:

```javascript
const { v4: uuidv4 } = require('uuid');
uuidv4();
```

### Default Export Removed

`uuid@3.x` was exporting the Version 4 UUID method as a default export:

```javascript
const uuid = require('uuid'); // <== REMOVED!
```

This usage pattern was already discouraged in `uuid@3.x` and has been removed in `uuid@7.x`.

----

Markdown generated from [README_js.md](README_js.md) by [![RunMD Logo](http://i.imgur.com/h0FVyzU.png)](https://github.com/broofa/runmd)

# Tools

## clang-format

The clang-format checking tool is designed to check changed lines of code compared to given git-refs.

## Migration Script

The migration tool is designed to reduce repetitive work in the migration process. However, the script does not aim to convert everything for you. There are usually some small fixes and some major reconstruction required.

### How To Use

To run the conversion script, first make sure you have the latest `node-addon-api` in your `node_modules` directory:

```
npm install node-addon-api
```

Then run the script, passing it your project directory:

```
node ./node_modules/node-addon-api/tools/conversion.js ./
```

Once it finishes, recompile and debug anything the script missed.

### Quick Fixes

Here is a list of things that can be fixed easily:

1. Change a method's return type to `void` if it does not return a value to JavaScript.
2. Use `.` instead of `->` to access attributes or invoke member functions on a `Napi::Object`.
3. Change `Napi::New(env, value);` to `Napi::[Type]::New(env, value);`.

### Major Reconstructions

The implementation of `Napi::ObjectWrap` is significantly different from NAN's. `Napi::ObjectWrap` takes a pointer to the wrapped object and creates a reference to the wrapped object inside the `ObjectWrap` constructor.
`Napi::ObjectWrap` also associates wrapped object's instance methods to Javascript module instead of static methods like NAN. So if you use Nan::ObjectWrap in your module, you will need to execute the following steps. 1. Convert your [ClassName]::New function to a constructor function that takes a `Napi::CallbackInfo`. Declare it as ``` [ClassName](const Napi::CallbackInfo& info); ``` and define it as ``` [ClassName]::[ClassName](const Napi::CallbackInfo& info) : Napi::ObjectWrap<[ClassName]>(info){ ... } ``` This way, the `Napi::ObjectWrap` constructor will be invoked after the object has been instantiated and `Napi::ObjectWrap` can use the `this` pointer to create a reference to the wrapped object. 2. Move your original constructor code into the new constructor. Delete your original constructor. 3. In your class initialization function, associate native methods in the following way. ``` Napi::FunctionReference constructor; void [ClassName]::Init(Napi::Env env, Napi::Object exports, Napi::Object module) { Napi::HandleScope scope(env); Napi::Function ctor = DefineClass(env, "Canvas", { InstanceMethod<&[ClassName]::Func1>("Func1"), InstanceMethod<&[ClassName]::Func2>("Func2"), InstanceAccessor<&[ClassName]::ValueGetter>("Value"), StaticMethod<&[ClassName]::StaticMethod>("MethodName"), InstanceValue("Value", Napi::[Type]::New(env, value)), }); constructor = Napi::Persistent(ctor); constructor .SuppressDestruct(); exports.Set("[ClassName]", ctor); } ``` 4. In function where you need to Unwrap the ObjectWrap in NAN like `[ClassName]* native = Nan::ObjectWrap::Unwrap<[ClassName]>(info.This());`, use `this` pointer directly as the unwrapped object as each ObjectWrap instance is associated with a unique object instance. If you still find issues after following this guide, please leave us an issue describing your problem and we will try to resolve it. # eslint-visitor-keys [![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys) Constants and utilities about visitor keys to traverse AST. ## 💿 Installation Use [npm] to install. ```bash $ npm install eslint-visitor-keys ``` ### Requirements - [Node.js] 10.0.0 or later. ## 📖 Usage ```js const evk = require("eslint-visitor-keys") ``` ### evk.KEYS > type: `{ [type: string]: string[] | undefined }` Visitor keys. This keys are frozen. This is an object. Keys are the type of [ESTree] nodes. Their values are an array of property names which have child nodes. For example: ``` console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"] ``` ### evk.getKeys(node) > type: `(node: object) => string[]` Get the visitor keys of a given AST node. This is similar to `Object.keys(node)` of ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`. This will be used to traverse unknown nodes. 
For example: ``` const node = { type: "AssignmentExpression", left: { type: "Identifier", name: "foo" }, right: { type: "Literal", value: 0 } } console.log(evk.getKeys(node)) // → ["type", "left", "right"] ``` ### evk.unionWith(additionalKeys) > type: `(additionalKeys: object) => { [type: string]: string[] | undefined }` Make the union set with `evk.KEYS` and the given keys. - The order of keys is, `additionalKeys` is at first, then `evk.KEYS` is concatenated after that. - It removes duplicated keys as keeping the first one. For example: ``` console.log(evk.unionWith({ MethodDefinition: ["decorators"] })) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... } ``` ## 📰 Change log See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases). ## 🍻 Contributing Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/). ### Development commands - `npm test` runs tests and measures code coverage. - `npm run lint` checks source codes with ESLint. - `npm run coverage` opens the code coverage report of the previous test with your default browser. - `npm run release` publishes this package to [npm] registory. [npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. 
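Besides the `DEBUG` environment variable, the enable state can also be toggled programmatically, which is convenient in tests. A minimal sketch (assuming the `enable`, `disable` and `enabled` helpers exported by recent versions of `debug`):

```js
const debug = require('debug');

debug.enabled('worker:a');   // false unless DEBUG already matches

debug.enable('worker:*');    // accepts the same namespace syntax as DEBUG
debug.enabled('worker:a');   // true

debug.disable();             // turn all namespaces back off
```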
Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows note On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Note that PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Then, run the program to be debugged as usual. ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see when the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well, the "+NNNms" will show you how much time was spent between calls. <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debuggers you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. 
For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. | | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! 
log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. [[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" 
target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. 
[[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # word-wrap [![NPM version](https://img.shields.io/npm/v/word-wrap.svg?style=flat)](https://www.npmjs.com/package/word-wrap) [![NPM monthly downloads](https://img.shields.io/npm/dm/word-wrap.svg?style=flat)](https://npmjs.org/package/word-wrap) [![NPM total downloads](https://img.shields.io/npm/dt/word-wrap.svg?style=flat)](https://npmjs.org/package/word-wrap) [![Linux Build Status](https://img.shields.io/travis/jonschlinkert/word-wrap.svg?style=flat&label=Travis)](https://travis-ci.org/jonschlinkert/word-wrap) > Wrap words to a specified length. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save word-wrap ``` ## Usage ```js var wrap = require('word-wrap'); wrap('Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.'); ``` Results in: ``` Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. ``` ## Options ![image](https://cloud.githubusercontent.com/assets/383994/6543728/7a381c08-c4f6-11e4-8b7d-b6ba197569c9.png) ### options.width Type: `Number` Default: `50` The width of the text before wrapping to a new line. 
**Example:** ```js wrap(str, {width: 60}); ``` ### options.indent Type: `String` Default: `` (two spaces) The string to use at the beginning of each line. **Example:** ```js wrap(str, {indent: ' '}); ``` ### options.newline Type: `String` Default: `\n` The string to use at the end of each line. **Example:** ```js wrap(str, {newline: '\n\n'}); ``` ### options.escape Type: `function` Default: `function(str){return str;}` An escape function to run on each line after splitting them. **Example:** ```js var xmlescape = require('xml-escape'); wrap(str, { escape: function(string){ return xmlescape(string); } }); ``` ### options.trim Type: `Boolean` Default: `false` Trim trailing whitespace from the returned string. This option is included since `.trim()` would also strip the leading indentation from the first line. **Example:** ```js wrap(str, {trim: true}); ``` ### options.cut Type: `Boolean` Default: `false` Break a word between any two letters when the word is longer than the specified width. **Example:** ```js wrap(str, {cut: true}); ``` ## About ### Related projects * [common-words](https://www.npmjs.com/package/common-words): Updated list (JSON) of the 100 most common words in the English language. Useful for… [more](https://github.com/jonschlinkert/common-words) | [homepage](https://github.com/jonschlinkert/common-words "Updated list (JSON) of the 100 most common words in the English language. Useful for excluding these words from arrays.") * [shuffle-words](https://www.npmjs.com/package/shuffle-words): Shuffle the words in a string and optionally the letters in each word using the… [more](https://github.com/jonschlinkert/shuffle-words) | [homepage](https://github.com/jonschlinkert/shuffle-words "Shuffle the words in a string and optionally the letters in each word using the Fisher-Yates algorithm. Useful for creating test fixtures, benchmarking samples, etc.") * [unique-words](https://www.npmjs.com/package/unique-words): Return the unique words in a string or array. | [homepage](https://github.com/jonschlinkert/unique-words "Return the unique words in a string or array.") * [wordcount](https://www.npmjs.com/package/wordcount): Count the words in a string. Support for english, CJK and Cyrillic. | [homepage](https://github.com/jonschlinkert/wordcount "Count the words in a string. Support for english, CJK and Cyrillic.") ### Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Contributors | **Commits** | **Contributor** | | --- | --- | | 43 | [jonschlinkert](https://github.com/jonschlinkert) | | 2 | [lordvlad](https://github.com/lordvlad) | | 2 | [hildjj](https://github.com/hildjj) | | 1 | [danilosampaio](https://github.com/danilosampaio) | | 1 | [2fd](https://github.com/2fd) | | 1 | [toddself](https://github.com/toddself) | | 1 | [wolfgang42](https://github.com/wolfgang42) | | 1 | [zachhale](https://github.com/zachhale) | ### Building docs _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` ### Running tests Running and reviewing unit tests is a great way to get familiarized with a library and its API. 
You can install dependencies and run tests with the following command:

```sh
$ npm install && npm test
```

### Author

**Jon Schlinkert**

* [github/jonschlinkert](https://github.com/jonschlinkert)
* [twitter/jonschlinkert](https://twitter.com/jonschlinkert)

### License

Copyright © 2017, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE).

***

_This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.6.0, on June 02, 2017._

# Short Film Funding Project

This project grew out of the idea that social development is possible by revealing the potential of individuals and sharing inspiring stories with society. The project, which can also be seen as a social responsibility initiative, aims to highlight the importance of short films and to fund them.

Demo video: https://www.loom.com/share/4ff317d682124c199721b89a444b6a3f

# Usage

To deploy the contract for development, follow these steps:

1. Clone this repo locally
2. Run `yarn` to install dependencies
3. Run `./scripts/1.dev-deploy.sh` to deploy the contract (this uses `near dev-deploy`)

Your contract is now ready to use.

# To use the contract, you can do any of the following:

- Run `./scripts/2.create-shortfilm.sh` to create a short film entry, as the director who wants people to fund it
- Run `./scripts/3.remove-shortfilm.sh` to delete a short film, as the director
- Run `./scripts/4.fund-shortfilm.sh` to fund a short film, as a funder
- Run `./scripts/5.send-fund.sh` to transfer the accumulated funds to the director's wallet
- Run `./scripts/6.find-shortfilm.sh` to find a short film
- Run `./scripts/7.list-shortfilms.sh` to list the short films

# ansi-colors

[![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/ansi-colors.svg?style=flat)](https://www.npmjs.com/package/ansi-colors) [![NPM monthly downloads](https://img.shields.io/npm/dm/ansi-colors.svg?style=flat)](https://npmjs.org/package/ansi-colors) [![NPM total downloads](https://img.shields.io/npm/dt/ansi-colors.svg?style=flat)](https://npmjs.org/package/ansi-colors) [![Linux Build Status](https://img.shields.io/travis/doowb/ansi-colors.svg?style=flat&label=Travis)](https://travis-ci.org/doowb/ansi-colors)

> Easily add ANSI colors to your text and symbols in the terminal. A faster drop-in replacement for chalk, kleur and turbocolor (without the dependencies and rendering bugs).

Please consider following this project's author, [Brian Woodward](https://github.com/doowb), and consider starring the project to show your :heart: and support.

## Install

Install with [npm](https://www.npmjs.com/):

```sh
$ npm install --save ansi-colors
```

![image](https://user-images.githubusercontent.com/383994/39635445-8a98a3a6-4f8b-11e8-89c1-068c45d4fff8.png)

## Why use this?

ansi-colors is _the fastest Node.js library for terminal styling_. A more performant drop-in replacement for chalk, with no dependencies.

* _Blazing fast_ - Fastest terminal styling library in node.js, 10-20x faster than chalk!
* _Drop-in replacement_ for [chalk](https://github.com/chalk/chalk).
* _No dependencies_ (Chalk has 7 dependencies in its tree!)
* _Safe_ - Does not modify the `String.prototype` like [colors](https://github.com/Marak/colors.js).
* Supports [nested colors](#nested-colors), **and does not have the [nested styling bug](#nested-styling-bug) that is present in [colorette](https://github.com/jorgebucaran/colorette), [chalk](https://github.com/chalk/chalk), and [kleur](https://github.com/lukeed/kleur)**. * Supports [chained colors](#chained-colors). * [Toggle color support](#toggle-color-support) on or off. ## Usage ```js const c = require('ansi-colors'); console.log(c.red('This is a red string!')); console.log(c.green('This is a red string!')); console.log(c.cyan('This is a cyan string!')); console.log(c.yellow('This is a yellow string!')); ``` ![image](https://user-images.githubusercontent.com/383994/39653848-a38e67da-4fc0-11e8-89ae-98c65ebe9dcf.png) ## Chained colors ```js console.log(c.bold.red('this is a bold red message')); console.log(c.bold.yellow.italic('this is a bold yellow italicized message')); console.log(c.green.bold.underline('this is a bold green underlined message')); ``` ![image](https://user-images.githubusercontent.com/383994/39635780-7617246a-4f8c-11e8-89e9-05216cc54e38.png) ## Nested colors ```js console.log(c.yellow(`foo ${c.red.bold('red')} bar ${c.cyan('cyan')} baz`)); ``` ![image](https://user-images.githubusercontent.com/383994/39635817-8ed93d44-4f8c-11e8-8afd-8c3ea35f5fbe.png) ### Nested styling bug `ansi-colors` does not have the nested styling bug found in [colorette](https://github.com/jorgebucaran/colorette), [chalk](https://github.com/chalk/chalk), and [kleur](https://github.com/lukeed/kleur). ```js const { bold, red } = require('ansi-styles'); console.log(bold(`foo ${red.dim('bar')} baz`)); const colorette = require('colorette'); console.log(colorette.bold(`foo ${colorette.red(colorette.dim('bar'))} baz`)); const kleur = require('kleur'); console.log(kleur.bold(`foo ${kleur.red.dim('bar')} baz`)); const chalk = require('chalk'); console.log(chalk.bold(`foo ${chalk.red.dim('bar')} baz`)); ``` **Results in the following** (sans icons and labels) ![image](https://user-images.githubusercontent.com/383994/47280326-d2ee0580-d5a3-11e8-9611-ea6010f0a253.png) ## Toggle color support Easily enable/disable colors. ```js const c = require('ansi-colors'); // disable colors manually c.enabled = false; // or use a library to automatically detect support c.enabled = require('color-support').hasBasic; console.log(c.red('I will only be colored red if the terminal supports colors')); ``` ## Strip ANSI codes Use the `.unstyle` method to strip ANSI codes from a string. ```js console.log(c.unstyle(c.blue.bold('foo bar baz'))); //=> 'foo bar baz' ``` ## Available styles **Note** that bright and bright-background colors are not always supported. | Colors | Background Colors | Bright Colors | Bright Background Colors | | ------- | ----------------- | ------------- | ------------------------ | | black | bgBlack | blackBright | bgBlackBright | | red | bgRed | redBright | bgRedBright | | green | bgGreen | greenBright | bgGreenBright | | yellow | bgYellow | yellowBright | bgYellowBright | | blue | bgBlue | blueBright | bgBlueBright | | magenta | bgMagenta | magentaBright | bgMagentaBright | | cyan | bgCyan | cyanBright | bgCyanBright | | white | bgWhite | whiteBright | bgWhiteBright | | gray | | | | | grey | | | | _(`gray` is the U.S. spelling, `grey` is more commonly used in the Canada and U.K.)_ ### Style modifiers * dim * **bold** * hidden * _italic_ * underline * inverse * ~~strikethrough~~ * reset ## Aliases Create custom aliases for styles. 
```js const colors = require('ansi-colors'); colors.alias('primary', colors.yellow); colors.alias('secondary', colors.bold); console.log(colors.primary.secondary('Foo')); ``` ## Themes A theme is an object of custom aliases. ```js const colors = require('ansi-colors'); colors.theme({ danger: colors.red, dark: colors.dim.gray, disabled: colors.gray, em: colors.italic, heading: colors.bold.underline, info: colors.cyan, muted: colors.dim, primary: colors.blue, strong: colors.bold, success: colors.green, underline: colors.underline, warning: colors.yellow }); // Now, we can use our custom styles alongside the built-in styles! console.log(colors.danger.strong.em('Error!')); console.log(colors.warning('Heads up!')); console.log(colors.info('Did you know...')); console.log(colors.success.bold('It worked!')); ``` ## Performance **Libraries tested** * ansi-colors v3.0.4 * chalk v2.4.1 ### Mac > MacBook Pro, Intel Core i7, 2.3 GHz, 16 GB. **Load time** Time it takes to load the first time `require()` is called: * ansi-colors - `1.915ms` * chalk - `12.437ms` **Benchmarks** ``` # All Colors ansi-colors x 173,851 ops/sec ±0.42% (91 runs sampled) chalk x 9,944 ops/sec ±2.53% (81 runs sampled))) # Chained colors ansi-colors x 20,791 ops/sec ±0.60% (88 runs sampled) chalk x 2,111 ops/sec ±2.34% (83 runs sampled) # Nested colors ansi-colors x 59,304 ops/sec ±0.98% (92 runs sampled) chalk x 4,590 ops/sec ±2.08% (82 runs sampled) ``` ### Windows > Windows 10, Intel Core i7-7700k CPU @ 4.2 GHz, 32 GB **Load time** Time it takes to load the first time `require()` is called: * ansi-colors - `1.494ms` * chalk - `11.523ms` **Benchmarks** ``` # All Colors ansi-colors x 193,088 ops/sec ±0.51% (95 runs sampled)) chalk x 9,612 ops/sec ±3.31% (77 runs sampled))) # Chained colors ansi-colors x 26,093 ops/sec ±1.13% (94 runs sampled) chalk x 2,267 ops/sec ±2.88% (80 runs sampled)) # Nested colors ansi-colors x 67,747 ops/sec ±0.49% (93 runs sampled) chalk x 4,446 ops/sec ±3.01% (82 runs sampled)) ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [ansi-wrap](https://www.npmjs.com/package/ansi-wrap): Create ansi colors by passing the open and close codes. | [homepage](https://github.com/jonschlinkert/ansi-wrap "Create ansi colors by passing the open and close codes.") * [strip-color](https://www.npmjs.com/package/strip-color): Strip ANSI color codes from a string. No dependencies. | [homepage](https://github.com/jonschlinkert/strip-color "Strip ANSI color codes from a string. 
No dependencies.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 48 | [jonschlinkert](https://github.com/jonschlinkert) | | 42 | [doowb](https://github.com/doowb) | | 6 | [lukeed](https://github.com/lukeed) | | 2 | [Silic0nS0ldier](https://github.com/Silic0nS0ldier) | | 1 | [dwieeb](https://github.com/dwieeb) | | 1 | [jorgebucaran](https://github.com/jorgebucaran) | | 1 | [madhavarshney](https://github.com/madhavarshney) | | 1 | [chapterjason](https://github.com/chapterjason) | ### Author **Brian Woodward** * [GitHub Profile](https://github.com/doowb) * [Twitter Profile](https://twitter.com/doowb) * [LinkedIn Profile](https://linkedin.com/in/woodwardbrian) ### License Copyright © 2019, [Brian Woodward](https://github.com/doowb). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on July 01, 2019._ # pump pump is a small node module that pipes streams together and destroys all of them if one of them closes. ``` npm install pump ``` [![build status](http://img.shields.io/travis/mafintosh/pump.svg?style=flat)](http://travis-ci.org/mafintosh/pump) ## What problem does it solve? When using standard `source.pipe(dest)`, source will _not_ be destroyed if dest emits close or an error. You are also not able to provide a callback to tell when the pipe has finished. pump does these two things for you. ## Usage Simply pass the streams you want to pipe together to pump, and add an optional callback. ``` js var pump = require('pump') var fs = require('fs') var source = fs.createReadStream('/dev/random') var dest = fs.createWriteStream('/dev/null') pump(source, dest, function(err) { console.log('pipe finished', err) }) setTimeout(function() { dest.destroy() // when dest is closed pump will destroy source }, 1000) ``` You can use pump to pipe more than two streams together as well. ``` js var transform = someTransformStream() pump(source, transform, anotherTransform, dest, function(err) { console.log('pipe finished', err) }) ``` If `source`, `transform`, `anotherTransform` or `dest` closes, all of them will be destroyed. Similarly to `stream.pipe()`, `pump()` returns the last stream passed in, so you can do: ``` return pump(s1, s2) // returns s2 ``` If you want to return a stream that combines *both* s1 and s2 into a single stream, use [pumpify](https://github.com/mafintosh/pumpify) instead. ## License MIT ## Related `pump` is part of the [mississippi stream utility collection](https://github.com/maxogden/mississippi) which includes more useful stream modules similar to this one. agent-base ========== ### Turn a function into an [`http.Agent`][http.Agent] instance [![Build Status](https://github.com/TooTallNate/node-agent-base/workflows/Node%20CI/badge.svg)](https://github.com/TooTallNate/node-agent-base/actions?workflow=Node+CI) This module provides an `http.Agent` generator. That is, you pass it an async callback function, and it returns a new `http.Agent` instance that will invoke the given callback function when sending outbound HTTP requests. #### Some subclasses: Here are some more interesting uses of `agent-base`. Send a pull request to list yours!
* [`http-proxy-agent`][http-proxy-agent]: An HTTP(s) proxy `http.Agent` implementation for HTTP endpoints * [`https-proxy-agent`][https-proxy-agent]: An HTTP(s) proxy `http.Agent` implementation for HTTPS endpoints * [`pac-proxy-agent`][pac-proxy-agent]: A PAC file proxy `http.Agent` implementation for HTTP and HTTPS * [`socks-proxy-agent`][socks-proxy-agent]: A SOCKS proxy `http.Agent` implementation for HTTP and HTTPS Installation ------------ Install with `npm`: ``` bash $ npm install agent-base ``` Example ------- Here's a minimal example that creates a new `net.Socket` connection to the server for every HTTP request (i.e. the equivalent of the `agent: false` option): ```js var net = require('net'); var tls = require('tls'); var url = require('url'); var http = require('http'); var agent = require('agent-base'); var endpoint = 'http://nodejs.org/api/'; var parsed = url.parse(endpoint); // This is the important part! parsed.agent = agent(function (req, opts) { var socket; // `secureEndpoint` is true when using the https module if (opts.secureEndpoint) { socket = tls.connect(opts); } else { socket = net.connect(opts); } return socket; }); // Everything else works just like normal... http.get(parsed, function (res) { console.log('"response" event!', res.headers); res.pipe(process.stdout); }); ``` Returning a Promise or using an `async` function is also supported: ```js agent(async function (req, opts) { await sleep(1000); // etc… }); ``` Return another `http.Agent` instance to "pass through" the responsibility for that HTTP request to that agent: ```js agent(function (req, opts) { return opts.secureEndpoint ? https.globalAgent : http.globalAgent; }); ``` API --- ## Agent(Function callback[, Object options]) → [http.Agent][] Creates a base `http.Agent` that will execute the callback function `callback` for every HTTP request that it is used as the `agent` for. The callback function is responsible for creating a `stream.Duplex` instance of some kind that will be used as the underlying socket in the HTTP request. The `options` object accepts the following properties: * `timeout` - Number - Timeout for the `callback()` function in milliseconds. Defaults to Infinity (optional). The callback function should have the following signature: ### callback(http.ClientRequest req, Object options, Function cb) → undefined The ClientRequest `req` can be accessed to read request headers and the path, etc. The `options` object contains the options passed to the `http.request()`/`https.request()` function call, and is formatted to be directly passed to `net.connect()`/`tls.connect()`, or however else you want a Socket to be created. Pass the created socket to the callback function `cb`, and the HTTP request will proceed. If the `https` module is used to invoke the HTTP request, then the `secureEndpoint` property on `options` _will be set to `true`_.
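For illustration, here is a minimal sketch of the three-argument callback form described above. It assumes an error-first `cb(err, socket)` convention, and the `myAgent` name is made up for this example:

```js
var net = require('net');
var tls = require('tls');
var agent = require('agent-base');

// Hand the created socket back through `cb` instead of returning it.
var myAgent = agent(function (req, opts, cb) {
  if (opts.secureEndpoint) {
    cb(null, tls.connect(opts));
  } else {
    cb(null, net.connect(opts));
  }
});
```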
License ------- (The MIT License) Copyright (c) 2013 Nathan Rajlich &lt;nathan@tootallnate.net&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. [http-proxy-agent]: https://github.com/TooTallNate/node-http-proxy-agent [https-proxy-agent]: https://github.com/TooTallNate/node-https-proxy-agent [pac-proxy-agent]: https://github.com/TooTallNate/node-pac-proxy-agent [socks-proxy-agent]: https://github.com/TooTallNate/node-socks-proxy-agent [http.Agent]: https://nodejs.org/api/http.html#http_class_http_agent <h1 align="center">Picomatch</h1> <p align="center"> <a href="https://npmjs.org/package/picomatch"> <img src="https://img.shields.io/npm/v/picomatch.svg" alt="version"> </a> <a href="https://github.com/micromatch/picomatch/actions?workflow=Tests"> <img src="https://github.com/micromatch/picomatch/workflows/Tests/badge.svg" alt="test status"> </a> <a href="https://coveralls.io/github/micromatch/picomatch"> <img src="https://img.shields.io/coveralls/github/micromatch/picomatch/master.svg" alt="coverage status"> </a> <a href="https://npmjs.org/package/picomatch"> <img src="https://img.shields.io/npm/dm/picomatch.svg" alt="downloads"> </a> </p> <br> <br> <p align="center"> <strong>Blazing fast and accurate glob matcher written in JavaScript.</strong></br> <em>No dependencies and full support for standard and extended Bash glob features, including braces, extglobs, POSIX brackets, and regular expressions.</em> </p> <br> <br> ## Why picomatch? * **Lightweight** - No dependencies * **Minimal** - Tiny API surface. Main export is a function that takes a glob pattern and returns a matcher function. * **Fast** - Loads in about 2ms (that's several times faster than a [single frame of a HD movie](http://www.endmemo.com/sconvert/framespersecondframespermillisecond.php) at 60fps) * **Performant** - Use the returned matcher function to speed up repeat matching (like when watching files) * **Accurate matching** - Using wildcards (`*` and `?`), globstars (`**`) for nested directories, [advanced globbing](#advanced-globbing) with extglobs, braces, and POSIX brackets, and support for escaping special characters with `\` or quotes. * **Well tested** - Thousands of unit tests See the [library comparison](#library-comparisons) to other libraries. 
<br> <br> ## Table of Contents <details><summary> Click to expand </summary> - [Install](#install) - [Usage](#usage) - [API](#api) * [picomatch](#picomatch) * [.test](#test) * [.matchBase](#matchbase) * [.isMatch](#ismatch) * [.parse](#parse) * [.scan](#scan) * [.compileRe](#compilere) * [.makeRe](#makere) * [.toRegex](#toregex) - [Options](#options) * [Picomatch options](#picomatch-options) * [Scan Options](#scan-options) * [Options Examples](#options-examples) - [Globbing features](#globbing-features) * [Basic globbing](#basic-globbing) * [Advanced globbing](#advanced-globbing) * [Braces](#braces) * [Matching special characters as literals](#matching-special-characters-as-literals) - [Library Comparisons](#library-comparisons) - [Benchmarks](#benchmarks) - [Philosophies](#philosophies) - [About](#about) * [Author](#author) * [License](#license) _(TOC generated by [verb](https://github.com/verbose/verb) using [markdown-toc](https://github.com/jonschlinkert/markdown-toc))_ </details> <br> <br> ## Install Install with [npm](https://www.npmjs.com/): ```sh npm install --save picomatch ``` <br> ## Usage The main export is a function that takes a glob pattern and an options object and returns a function for matching strings. ```js const pm = require('picomatch'); const isMatch = pm('*.js'); console.log(isMatch('abcd')); //=> false console.log(isMatch('a.js')); //=> true console.log(isMatch('a.md')); //=> false console.log(isMatch('a/b.js')); //=> false ``` <br> ## API ### [picomatch](lib/picomatch.js#L32) Creates a matcher function from one or more glob patterns. The returned function takes a string to match as its first argument, and returns true if the string is a match. The returned matcher function also takes a boolean as the second argument that, when true, returns an object with additional information. **Params** * `globs` **{String|Array}**: One or more glob patterns. * `options` **{Object=}** * `returns` **{Function=}**: Returns a matcher function. **Example** ```js const picomatch = require('picomatch'); // picomatch(glob[, options]); const isMatch = picomatch('*.!(*a)'); console.log(isMatch('a.a')); //=> false console.log(isMatch('a.b')); //=> true ``` ### [.test](lib/picomatch.js#L117) Test `input` with the given `regex`. This is used by the main `picomatch()` function to test the input string. **Params** * `input` **{String}**: String to test. * `regex` **{RegExp}** * `returns` **{Object}**: Returns an object with matching info. **Example** ```js const picomatch = require('picomatch'); // picomatch.test(input, regex[, options]); console.log(picomatch.test('foo/bar', /^(?:([^/]*?)\/([^/]*?))$/)); // { isMatch: true, match: [ 'foo/bar', 'foo', 'bar' ], output: 'foo/bar' } ``` ### [.matchBase](lib/picomatch.js#L161) Match the basename of a filepath. **Params** * `input` **{String}**: String to test. * `glob` **{RegExp|String}**: Glob pattern or regex created by [.makeRe](#makeRe). * `returns` **{Boolean}** **Example** ```js const picomatch = require('picomatch'); // picomatch.matchBase(input, glob[, options]); console.log(picomatch.matchBase('foo/bar.js', '*.js')); // true ``` ### [.isMatch](lib/picomatch.js#L183) Returns true if **any** of the given glob `patterns` match the specified `string`. **Params** * **{String|Array}**: str The string to test. * **{String|Array}**: patterns One or more glob patterns to use for matching. * **{Object}**: See available [options](#options).
* `returns` **{Boolean}**: Returns true if any patterns match `str` **Example** ```js const picomatch = require('picomatch'); // picomatch.isMatch(string, patterns[, options]); console.log(picomatch.isMatch('a.a', ['b.*', '*.a'])); //=> true console.log(picomatch.isMatch('a.a', 'b.*')); //=> false ``` ### [.parse](lib/picomatch.js#L199) Parse a glob pattern to create the source string for a regular expression. **Params** * `pattern` **{String}** * `options` **{Object}** * `returns` **{Object}**: Returns an object with useful properties and output to be used as a regex source string. **Example** ```js const picomatch = require('picomatch'); const result = picomatch.parse(pattern[, options]); ``` ### [.scan](lib/picomatch.js#L231) Scan a glob pattern to separate the pattern into segments. **Params** * `input` **{String}**: Glob pattern to scan. * `options` **{Object}** * `returns` **{Object}**: Returns an object with **Example** ```js const picomatch = require('picomatch'); // picomatch.scan(input[, options]); const result = picomatch.scan('!./foo/*.js'); console.log(result); { prefix: '!./', input: '!./foo/*.js', start: 3, base: 'foo', glob: '*.js', isBrace: false, isBracket: false, isGlob: true, isExtglob: false, isGlobstar: false, negated: true } ``` ### [.compileRe](lib/picomatch.js#L245) Compile a regular expression from the `state` object returned by the [parse()](#parse) method. **Params** * `state` **{Object}** * `options` **{Object}** * `returnOutput` **{Boolean}**: Intended for implementors, this argument allows you to return the raw output from the parser. * `returnState` **{Boolean}**: Adds the state to a `state` property on the returned regex. Useful for implementors and debugging. * `returns` **{RegExp}** ### [.makeRe](lib/picomatch.js#L286) Create a regular expression from a parsed glob pattern. **Params** * `state` **{String}**: The object returned from the `.parse` method. * `options` **{Object}** * `returnOutput` **{Boolean}**: Implementors may use this argument to return the compiled output, instead of a regular expression. This is not exposed on the options to prevent end-users from mutating the result. * `returnState` **{Boolean}**: Implementors may use this argument to return the state from the parsed glob with the returned regular expression. * `returns` **{RegExp}**: Returns a regex created from the given pattern. **Example** ```js const picomatch = require('picomatch'); const state = picomatch.parse('*.js'); // picomatch.compileRe(state[, options]); console.log(picomatch.compileRe(state)); //=> /^(?:(?!\.)(?=.)[^/]*?\.js)$/ ``` ### [.toRegex](lib/picomatch.js#L321) Create a regular expression from the given regex source string. **Params** * `source` **{String}**: Regular expression source string. * `options` **{Object}** * `returns` **{RegExp}** **Example** ```js const picomatch = require('picomatch'); // picomatch.toRegex(source[, options]); const { output } = picomatch.parse('*.js'); console.log(picomatch.toRegex(output)); //=> /^(?:(?!\.)(?=.)[^/]*?\.js)$/ ``` <br> ## Options ### Picomatch options The following options may be used with the main `picomatch()` function or any of the methods on the picomatch API. | **Option** | **Type** | **Default value** | **Description** | | --- | --- | --- | --- | | `basename` | `boolean` | `false` | If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. 
| | `bash` | `boolean` | `false` | Follow bash matching rules more strictly - disallows backslashes as escape characters, and treats single stars as globstars (`**`). | | `capture` | `boolean` | `undefined` | Return regex matches in supporting methods. | | `contains` | `boolean` | `undefined` | Allows glob to match any part of the given string(s). | | `cwd` | `string` | `process.cwd()` | Current working directory. Used by `picomatch.split()` | | `debug` | `boolean` | `undefined` | Debug regular expressions when an error is thrown. | | `dot` | `boolean` | `false` | Enable dotfile matching. By default, dotfiles are ignored unless a `.` is explicitly defined in the pattern, or `options.dot` is true | | `expandRange` | `function` | `undefined` | Custom function for expanding ranges in brace patterns, such as `{a..z}`. The function receives the range values as two arguments, and it must return a string to be used in the generated regex. It's recommended that returned strings be wrapped in parentheses. | | `failglob` | `boolean` | `false` | Throws an error if no matches are found. Based on the bash option of the same name. | | `fastpaths` | `boolean` | `true` | To speed up processing, full parsing is skipped for a handful common glob patterns. Disable this behavior by setting this option to `false`. | | `flags` | `string` | `undefined` | Regex flags to use in the generated regex. If defined, the `nocase` option will be overridden. | | [format](#optionsformat) | `function` | `undefined` | Custom function for formatting the returned string. This is useful for removing leading slashes, converting Windows paths to Posix paths, etc. | | `ignore` | `array\|string` | `undefined` | One or more glob patterns for excluding strings that should not be matched from the result. | | `keepQuotes` | `boolean` | `false` | Retain quotes in the generated regex, since quotes may also be used as an alternative to backslashes. | | `literalBrackets` | `boolean` | `undefined` | When `true`, brackets in the glob pattern will be escaped so that only literal brackets will be matched. | | `matchBase` | `boolean` | `false` | Alias for `basename` | | `maxLength` | `boolean` | `65536` | Limit the max length of the input string. An error is thrown if the input string is longer than this value. | | `nobrace` | `boolean` | `false` | Disable brace matching, so that `{a,b}` and `{1..3}` would be treated as literal characters. | | `nobracket` | `boolean` | `undefined` | Disable matching with regex brackets. | | `nocase` | `boolean` | `false` | Make matching case-insensitive. Equivalent to the regex `i` flag. Note that this option is overridden by the `flags` option. | | `nodupes` | `boolean` | `true` | Deprecated, use `nounique` instead. This option will be removed in a future major release. By default duplicates are removed. Disable uniquification by setting this option to false. | | `noext` | `boolean` | `false` | Alias for `noextglob` | | `noextglob` | `boolean` | `false` | Disable support for matching with extglobs (like `+(a\|b)`) | | `noglobstar` | `boolean` | `false` | Disable support for matching nested directories with globstars (`**`) | | `nonegate` | `boolean` | `false` | Disable support for negating with leading `!` | | `noquantifiers` | `boolean` | `false` | Disable support for regex quantifiers (like `a{1,2}`) and treat them as brace patterns to be expanded. | | [onIgnore](#optionsonIgnore) | `function` | `undefined` | Function to be called on ignored items. 
| | [onMatch](#optionsonMatch) | `function` | `undefined` | Function to be called on matched items. | | [onResult](#optionsonResult) | `function` | `undefined` | Function to be called on all items, regardless of whether or not they are matched or ignored. | | `posix` | `boolean` | `false` | Support POSIX character classes ("posix brackets"). | | `posixSlashes` | `boolean` | `undefined` | Convert all slashes in file paths to forward slashes. This does not convert slashes in the glob pattern itself | | `prepend` | `boolean` | `undefined` | String to prepend to the generated regex used for matching. | | `regex` | `boolean` | `false` | Use regular expression rules for `+` (instead of matching literal `+`), and for stars that follow closing parentheses or brackets (as in `)*` and `]*`). | | `strictBrackets` | `boolean` | `undefined` | Throw an error if brackets, braces, or parens are imbalanced. | | `strictSlashes` | `boolean` | `undefined` | When true, picomatch won't match trailing slashes with single stars. | | `unescape` | `boolean` | `undefined` | Remove backslashes preceding escaped characters in the glob pattern. By default, backslashes are retained. | | `unixify` | `boolean` | `undefined` | Alias for `posixSlashes`, for backwards compatibility. | picomatch has automatic detection for regex positive and negative lookbehinds. If the pattern contains a negative lookbehind, you must be using Node.js >= 8.10 or else picomatch will throw an error. ### Scan Options In addition to the main [picomatch options](#picomatch-options), the following options may also be used with the [.scan](#scan) method. | **Option** | **Type** | **Default value** | **Description** | | --- | --- | --- | --- | | `tokens` | `boolean` | `false` | When `true`, the returned object will include an array of tokens (objects), representing each path "segment" in the scanned glob pattern | | `parts` | `boolean` | `false` | When `true`, the returned object will include an array of strings representing each path "segment" in the scanned glob pattern. This is automatically enabled when `options.tokens` is true | **Example** ```js const picomatch = require('picomatch'); const result = picomatch.scan('!./foo/*.js', { tokens: true }); console.log(result); // { // prefix: '!./', // input: '!./foo/*.js', // start: 3, // base: 'foo', // glob: '*.js', // isBrace: false, // isBracket: false, // isGlob: true, // isExtglob: false, // isGlobstar: false, // negated: true, // maxDepth: 2, // tokens: [ // { value: '!./', depth: 0, isGlob: false, negated: true, isPrefix: true }, // { value: 'foo', depth: 1, isGlob: false }, // { value: '*.js', depth: 1, isGlob: true } // ], // slashes: [ 2, 6 ], // parts: [ 'foo', '*.js' ] // } ``` <br> ### Options Examples #### options.expandRange **Type**: `function` **Default**: `undefined` Custom function for expanding ranges in brace patterns. The [fill-range](https://github.com/jonschlinkert/fill-range) library is ideal for this purpose, or you can use custom code to do whatever you need. 
**Example** The following example shows how to create a glob that matches a folder whose name is a zero-padded number from `01` to `25`: ```js const fill = require('fill-range'); const regex = pm.makeRe('foo/{01..25}/bar', { expandRange(a, b) { return `(${fill(a, b, { toRegex: true })})`; } }); console.log(regex); //=> /^(?:foo\/((?:0[1-9]|1[0-9]|2[0-5]))\/bar)$/ console.log(regex.test('foo/00/bar')) // false console.log(regex.test('foo/01/bar')) // true console.log(regex.test('foo/10/bar')) // true console.log(regex.test('foo/22/bar')) // true console.log(regex.test('foo/25/bar')) // true console.log(regex.test('foo/26/bar')) // false ``` #### options.format **Type**: `function` **Default**: `undefined` Custom function for formatting strings before they're matched. **Example** ```js // strip leading './' from strings const format = str => str.replace(/^\.\//, ''); const isMatch = picomatch('foo/*.js', { format }); console.log(isMatch('./foo/bar.js')); //=> true ``` #### options.onMatch ```js const onMatch = ({ glob, regex, input, output }) => { console.log({ glob, regex, input, output }); }; const isMatch = picomatch('*', { onMatch }); isMatch('foo'); isMatch('bar'); isMatch('baz'); ``` #### options.onIgnore ```js const onIgnore = ({ glob, regex, input, output }) => { console.log({ glob, regex, input, output }); }; const isMatch = picomatch('*', { onIgnore, ignore: 'f*' }); isMatch('foo'); isMatch('bar'); isMatch('baz'); ``` #### options.onResult ```js const onResult = ({ glob, regex, input, output }) => { console.log({ glob, regex, input, output }); }; const isMatch = picomatch('*', { onResult, ignore: 'f*' }); isMatch('foo'); isMatch('bar'); isMatch('baz'); ``` <br> <br> ## Globbing features * [Basic globbing](#basic-globbing) (Wildcard matching) * [Advanced globbing](#advanced-globbing) (extglobs, posix brackets, brace matching) ### Basic globbing | **Character** | **Description** | | --- | --- | | `*` | Matches any character zero or more times, excluding path separators. Does _not match_ path separators or hidden files or directories ("dotfiles"), unless explicitly enabled by setting the `dot` option to `true`. | | `**` | Matches any character zero or more times, including path separators. Note that `**` will only match path separators (`/`, and `\\` on Windows) when they are the only characters in a path segment. Thus, `foo**/bar` is equivalent to `foo*/bar`, and `foo/a**b/bar` is equivalent to `foo/a*b/bar`, and _more than two_ consecutive stars in a glob path segment are regarded as _a single star_. Thus, `foo/***/bar` is equivalent to `foo/*/bar`. | | `?` | Matches any character excluding path separators one time. Does _not match_ path separators or leading dots. | | `[abc]` | Matches any characters inside the brackets. For example, `[abc]` would match the characters `a`, `b` or `c`, and nothing else. | #### Matching behavior vs. Bash Picomatch's matching features and expected results in unit tests are based on Bash's unit tests and the Bash 4.3 specification, with the following exceptions: * Bash will match `foo/bar/baz` with `*`. Picomatch only matches nested directories with `**`. * Bash greedily matches with negated extglobs. For example, Bash 4.3 says that `!(foo)*` should match `foo` and `foobar`, since the trailing `*` backtracks to match the preceding pattern. This is very memory-inefficient, and IMHO, also incorrect. Picomatch would return `false` for both `foo` and `foobar`.
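Both exceptions can be checked directly; this short sketch simply restates the results described above in code form:

```js
const pm = require('picomatch');

// Bash matches 'foo/bar/baz' with '*'; picomatch needs '**' to cross slashes.
console.log(pm.isMatch('foo/bar/baz', '*'));  //=> false
console.log(pm.isMatch('foo/bar/baz', '**')); //=> true

// Negated extglobs are not matched greedily.
console.log(pm.isMatch('foo', '!(foo)*'));    //=> false
console.log(pm.isMatch('foobar', '!(foo)*')); //=> false
```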
<br> ### Advanced globbing * [extglobs](#extglobs) * [POSIX brackets](#posix-brackets) * [Braces](#brace-expansion) #### Extglobs | **Pattern** | **Description** | | --- | --- | | `@(pattern)` | Match _only one_ consecutive occurrence of `pattern` | | `*(pattern)` | Match _zero or more_ consecutive occurrences of `pattern` | | `+(pattern)` | Match _one or more_ consecutive occurrences of `pattern` | | `?(pattern)` | Match _zero or **one**_ consecutive occurrences of `pattern` | | `!(pattern)` | Match _anything but_ `pattern` | **Examples** ```js const pm = require('picomatch'); // *(pattern) matches ZERO or more of "pattern" console.log(pm.isMatch('a', 'a*(z)')); // true console.log(pm.isMatch('az', 'a*(z)')); // true console.log(pm.isMatch('azzz', 'a*(z)')); // true // +(pattern) matches ONE or more of "pattern" console.log(pm.isMatch('a', 'a+(z)')); // false console.log(pm.isMatch('az', 'a+(z)')); // true console.log(pm.isMatch('azzz', 'a+(z)')); // true // supports multiple extglobs console.log(pm.isMatch('foo.bar', '!(foo).!(bar)')); // false // supports nested extglobs console.log(pm.isMatch('foo.bar', '!(!(foo)).!(!(bar))')); // true ``` #### POSIX brackets POSIX classes are disabled by default. Enable this feature by setting the `posix` option to true. **Enable POSIX bracket support** ```js console.log(pm.makeRe('[[:word:]]+', { posix: true })); //=> /^(?:(?=.)[A-Za-z0-9_]+\/?)$/ ``` **Supported POSIX classes** The following named POSIX bracket expressions are supported: * `[:alnum:]` - Alphanumeric characters, equivalent to `[a-zA-Z0-9]`. * `[:alpha:]` - Alphabetical characters, equivalent to `[a-zA-Z]`. * `[:ascii:]` - ASCII characters, equivalent to `[\\x00-\\x7F]`. * `[:blank:]` - Space and tab characters, equivalent to `[ \\t]`. * `[:cntrl:]` - Control characters, equivalent to `[\\x00-\\x1F\\x7F]`. * `[:digit:]` - Numerical digits, equivalent to `[0-9]`. * `[:graph:]` - Graph characters, equivalent to `[\\x21-\\x7E]`. * `[:lower:]` - Lowercase letters, equivalent to `[a-z]`. * `[:print:]` - Print characters, equivalent to `[\\x20-\\x7E ]`. * `[:punct:]` - Punctuation and symbols, equivalent to `[\\-!"#$%&\'()\\*+,./:;<=>?@[\\]^_`{|}~]`. * `[:space:]` - Extended space characters, equivalent to `[ \\t\\r\\n\\v\\f]`. * `[:upper:]` - Uppercase letters, equivalent to `[A-Z]`. * `[:word:]` - Word characters (letters, numbers and underscores), equivalent to `[A-Za-z0-9_]`. * `[:xdigit:]` - Hexadecimal digits, equivalent to `[A-Fa-f0-9]`. See the [Bash Reference Manual](https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html) for more information. ### Braces Picomatch does not do brace expansion. For [brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html) and advanced matching with braces, use [micromatch](https://github.com/micromatch/micromatch) instead. Picomatch has very basic support for braces. ### Matching special characters as literals If you want to match any of the following special characters literally in a filepath, they must be escaped in the glob pattern with backslashes or quotes: **Special Characters** Some characters that are used for matching in regular expressions are also regarded as valid file path characters on some platforms.
To match any of the following characters as literals: `$^*+?()[]`. **Example** ```js console.log(pm.makeRe('foo/bar \\(1\\)')); ``` <br> <br> ## Library Comparisons The following table shows which features are supported by [minimatch](https://github.com/isaacs/minimatch), [micromatch](https://github.com/micromatch/micromatch), [picomatch](https://github.com/micromatch/picomatch), [nanomatch](https://github.com/micromatch/nanomatch), [extglob](https://github.com/micromatch/extglob), [braces](https://github.com/micromatch/braces), and [expand-brackets](https://github.com/micromatch/expand-brackets). | **Feature** | `minimatch` | `micromatch` | `picomatch` | `nanomatch` | `extglob` | `braces` | `expand-brackets` | | --- | --- | --- | --- | --- | --- | --- | --- | | Wildcard matching (`*?+`) | ✔ | ✔ | ✔ | ✔ | - | - | - | | Advanced globbing | ✔ | ✔ | ✔ | - | - | - | - | | Brace _matching_ | ✔ | ✔ | ✔ | - | - | ✔ | - | | Brace _expansion_ | ✔ | ✔ | - | - | - | ✔ | - | | Extglobs | partial | ✔ | ✔ | - | ✔ | - | - | | Posix brackets | - | ✔ | ✔ | - | - | - | ✔ | | Regular expression syntax | - | ✔ | ✔ | ✔ | ✔ | - | ✔ | | File system operations | - | - | - | - | - | - | - | <br> <br> ## Benchmarks Performance comparison of picomatch and minimatch. ``` # .makeRe star picomatch x 1,993,050 ops/sec ±0.51% (91 runs sampled) minimatch x 627,206 ops/sec ±1.96% (87 runs sampled) # .makeRe star; dot=true picomatch x 1,436,640 ops/sec ±0.62% (91 runs sampled) minimatch x 525,876 ops/sec ±0.60% (88 runs sampled) # .makeRe globstar picomatch x 1,592,742 ops/sec ±0.42% (90 runs sampled) minimatch x 962,043 ops/sec ±1.76% (91 runs sampled) # .makeRe globstars picomatch x 1,615,199 ops/sec ±0.35% (94 runs sampled) minimatch x 477,179 ops/sec ±1.33% (91 runs sampled) # .makeRe with leading star picomatch x 1,220,856 ops/sec ±0.40% (92 runs sampled) minimatch x 453,564 ops/sec ±1.43% (94 runs sampled) # .makeRe - basic braces picomatch x 392,067 ops/sec ±0.70% (90 runs sampled) minimatch x 99,532 ops/sec ±2.03% (87 runs sampled) ``` <br> <br> ## Philosophies The goal of this library is to be blazing fast, without compromising on accuracy. **Accuracy** The number one goal of this library is accuracy. However, it's not unusual for different glob implementations to have different rules for matching behavior, even with simple wildcard matching. It gets increasingly complicated when different features are combined, like when extglobs are combined with globstars, braces, slashes, and so on: `!(**/{a,b,*/c})`. Thus, given that there is no canonical glob specification to use as a single source of truth when differences of opinion arise regarding behavior, sometimes we have to implement our best judgement and rely on feedback from users to make improvements. **Performance** Although this library performs well in benchmarks, and in most cases it's faster than other popular libraries we benchmarked against, we will always choose accuracy over performance. It's not helpful to anyone if our library is faster at returning the wrong answer. <br> <br> ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). Please read the [contributing guide](.github/contributing.md) for advice on opening issues, pull requests, and coding standards.
</details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2017-present, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). # <img src="docs_app/assets/Rx_Logo_S.png" alt="RxJS Logo" width="86" height="86"> RxJS: Reactive Extensions For JavaScript [![CircleCI](https://circleci.com/gh/ReactiveX/rxjs/tree/6.x.svg?style=svg)](https://circleci.com/gh/ReactiveX/rxjs/tree/6.x) [![npm version](https://badge.fury.io/js/%40reactivex%2Frxjs.svg)](http://badge.fury.io/js/%40reactivex%2Frxjs) [![Join the chat at https://gitter.im/Reactive-Extensions/RxJS](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/Reactive-Extensions/RxJS?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) # RxJS 6 Stable ### MIGRATION AND RELEASE INFORMATION: Find out how to update to v6, **automatically update your TypeScript code**, and more! - [Current home is MIGRATION.md](./docs_app/content/guide/v6/migration.md) ### FOR V 5.X PLEASE GO TO [THE 5.0 BRANCH](https://github.com/ReactiveX/rxjs/tree/5.x) Reactive Extensions Library for JavaScript. This is a rewrite of [Reactive-Extensions/RxJS](https://github.com/Reactive-Extensions/RxJS) and is the latest production-ready version of RxJS. This rewrite is meant to have better performance, better modularity, better debuggable call stacks, while staying mostly backwards compatible, with some breaking changes that reduce the API surface. [Apache 2.0 License](LICENSE.txt) - [Code of Conduct](CODE_OF_CONDUCT.md) - [Contribution Guidelines](CONTRIBUTING.md) - [Maintainer Guidelines](doc_app/content/maintainer-guidelines.md) - [API Documentation](https://rxjs.dev/) ## Versions In This Repository - [master](https://github.com/ReactiveX/rxjs/commits/master) - This is all of the current, unreleased work, which is against v6 of RxJS right now - [stable](https://github.com/ReactiveX/rxjs/commits/stable) - This is the branch for the latest version you'd get if you do `npm install rxjs` ## Important By contributing or commenting on issues in this repository, whether you've read them or not, you're agreeing to the [Contributor Code of Conduct](CODE_OF_CONDUCT.md). Much like traffic laws, ignorance doesn't grant you immunity. ## Installation and Usage ### ES6 via npm ```sh npm install rxjs ``` It's recommended to pull in the Observable creation methods you need directly from `'rxjs'` as shown below with `range`. And you can pull in any operator you need from one spot, under `'rxjs/operators'`. 
```ts import { range } from "rxjs"; import { map, filter } from "rxjs/operators"; range(1, 200) .pipe( filter(x => x % 2 === 1), map(x => x + x) ) .subscribe(x => console.log(x)); ``` Here, we're using the built-in `pipe` method on Observables to combine operators. See [pipeable operators](https://github.com/ReactiveX/rxjs/blob/master/doc/pipeable-operators.md) for more information. ### CommonJS via npm To install this library for CommonJS (CJS) usage, use the following command: ```sh npm install rxjs ``` (Note: destructuring available in Node 8+) ```js const { range } = require('rxjs'); const { map, filter } = require('rxjs/operators'); range(1, 200).pipe( filter(x => x % 2 === 1), map(x => x + x) ).subscribe(x => console.log(x)); ``` ### CDN For CDN, you can use [unpkg](https://unpkg.com/): https://unpkg.com/rxjs/bundles/rxjs.umd.min.js The global namespace for rxjs is `rxjs`: ```js const { range } = rxjs; const { map, filter } = rxjs.operators; range(1, 200) .pipe( filter(x => x % 2 === 1), map(x => x + x) ) .subscribe(x => console.log(x)); ``` ## Goals - Smaller overall bundle sizes - Provide better performance than preceding versions of RxJS - To model/follow the [Observable Spec Proposal](https://github.com/zenparsing/es-observable) - Provide more modular file structure in a variety of formats - Provide more debuggable call stacks than preceding versions of RxJS ## Building/Testing - `npm run build_all` - builds everything - `npm test` - runs tests - `npm run test_no_cache` - runs tests with `ts-node` set to false ## Performance Tests Run `npm run build_perf` or `npm run perf` to run the performance tests with `protractor`. Run `npm run perf_micro [operator]` to run a micro performance test benchmarking a single operator. ## Adding documentation We appreciate all contributions to the documentation of any type. All of the information needed to get the docs app up and running locally as well as how to contribute can be found in the [documentation directory](./docs_app). ## Generating PNG marble diagrams The script `npm run tests2png` requires some native packages installed locally: `imagemagick`, `graphicsmagick`, and `ghostscript`. For Mac OS X with [Homebrew](http://brew.sh/): - `brew install imagemagick` - `brew install graphicsmagick` - `brew install ghostscript` - You may need to install the Ghostscript fonts manually: - Download the tarball from the [gs-fonts project](https://sourceforge.net/projects/gs-fonts) - `mkdir -p /usr/local/share/ghostscript && tar zxvf /path/to/ghostscript-fonts.tar.gz -C /usr/local/share/ghostscript` For Debian Linux: - `sudo add-apt-repository ppa:dhor/myway` - `apt-get install imagemagick` - `apt-get install graphicsmagick` - `apt-get install ghostscript` For Windows and other Operating Systems, check the download instructions here: - http://imagemagick.org - http://www.graphicsmagick.org - http://www.ghostscript.com/ # isarray `Array#isArray` for older browsers.
[![build status](https://secure.travis-ci.org/juliangruber/isarray.svg)](http://travis-ci.org/juliangruber/isarray) [![downloads](https://img.shields.io/npm/dm/isarray.svg)](https://www.npmjs.org/package/isarray) [![browser support](https://ci.testling.com/juliangruber/isarray.png) ](https://ci.testling.com/juliangruber/isarray) ## Usage ```js var isArray = require('isarray'); console.log(isArray([])); // => true console.log(isArray({})); // => false ``` ## Installation With [npm](http://npmjs.org) do ```bash $ npm install isarray ``` Then bundle for the browser with [browserify](https://github.com/substack/browserify). With [component](http://component.io) do ```bash $ component install juliangruber/isarray ``` ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # assemblyscript-regex A regex engine for AssemblyScript. [AssemblyScript](https://www.assemblyscript.org/) is a new language, based on TypeScript, that runs on WebAssembly. AssemblyScript has a lightweight standard library, but lacks support for Regular Expressions. The project fills that gap! This project exposes an API that mirrors the JavaScript [RegExp](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp) class: ```javascript const regex = new RegExp("fo*", "g"); const str = "table football, foul"; let match: Match | null = regex.exec(str); while (match != null) { // first iteration // match.index = 6 // match.matches[0] = "foo" // second iteration // match.index = 16 // match.matches[0] = "fo" match = regex.exec(str); } ``` ## Project status The initial focus of this implementation has been feature support and functionality over performance. It currently supports a sufficient number of regex features to be considered useful, including most character classes, common assertions, groups, alternations, capturing groups and quantifiers. The next phase of development will focus on more extensive testing and performance. The project currently has reasonable unit test coverage, focussed on positive and negative test cases on a per-feature basis. It also includes a more exhaustive test suite with test cases borrowed from another regex library. ### Feature support Based on the classification within the [MDN cheatsheet](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions/Cheatsheet) **Character sets** - [x] .
- [x] \d - [x] \D - [x] \w - [x] \W - [x] \s - [x] \S - [x] \t - [x] \r - [x] \n - [x] \v - [x] \f - [ ] [\b] - [ ] \0 - [ ] \cX - [x] \xhh - [x] \uhhhh - [ ] \u{hhhh} or \u{hhhhh} - [x] \ **Assertions** - [x] ^ - [x] $ - [ ] \b - [ ] \B **Other assertions** - [ ] x(?=y) Lookahead assertion - [ ] x(?!y) Negative lookahead assertion - [ ] (?<=y)x Lookbehind assertion - [ ] (?<!y)x Negative lookbehind assertion **Groups and ranges** - [x] x|y - [x] [xyz][a-c] - [x] [^xyz][^a-c] - [x] (x) capturing group - [ ] \n back reference - [ ] (?<Name>x) named capturing group - [x] (?:x) Non-capturing group **Quantifiers** - [x] x\* - [x] x+ - [x] x? - [x] x{n} - [x] x{n,} - [x] x{n,m} - [ ] x\*? / x+? / ... **RegExp** - [x] global - [ ] sticky - [x] case insensitive - [x] multiline - [x] dotAll - [ ] unicode ### Development This project is open source, MIT licenced and your contributions are very much welcomed. To get started, check out the repository and install dependencies: ``` $ npm install ``` A few general points about the tools and processes this project uses: - This project uses prettier for code formatting and eslint to provide additional syntactic checks. These are both run on `npm test` and as part of the CI build. - The unit tests are executed using [as-pect](https://github.com/jtenner/as-pect) - a native AssemblyScript test runner - The specification tests are within the `spec` folder. The `npm run test:generate` target transforms these tests into as-pect tests which execute as part of the standard build / test cycle - In order to support improved debugging you can execute this library as TypeScript (rather than WebAssembly), via the `npm run tsrun` target. # y18n [![Build Status][travis-image]][travis-url] [![Coverage Status][coveralls-image]][coveralls-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js var __ = require('y18n').__ console.log(__('my awesome string %s', 'foo')) ``` output: `my awesome string foo` _using tagged template literals_ ```js var __ = require('y18n').__ var str = 'foo' console.log(__`my awesome string ${str}`) ``` output: `my awesome string foo` _pluralization support:_ ```js var __n = require('y18n').__n console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')) ``` output: `2 fishes foo` ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. 
This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## License ISC [travis-url]: https://travis-ci.org/yargs/y18n [travis-image]: https://img.shields.io/travis/yargs/y18n.svg [coveralls-url]: https://coveralls.io/github/yargs/y18n [coveralls-image]: https://img.shields.io/coveralls/yargs/y18n.svg [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard An ini format parser and serializer for node. Sections are treated as nested objects. Items before the first heading are saved on the object directly. ## Usage Consider an ini-file `config.ini` that looks like this: ; this comment is being ignored scope = global [database] user = dbuser password = dbpassword database = use_this_database [paths.default] datadir = /var/lib/data array[] = first value array[] = second value array[] = third value You can read, manipulate and write the ini-file like so: var fs = require('fs') , ini = require('ini') var config = ini.parse(fs.readFileSync('./config.ini', 'utf-8')) config.scope = 'local' config.database.database = 'use_another_database' config.paths.default.tmpdir = '/tmp' delete config.paths.default.datadir config.paths.default.array.push('fourth value') fs.writeFileSync('./config_modified.ini', ini.stringify(config, { section: 'section' })) This will result in a file called `config_modified.ini` being written to the filesystem with the following content: [section] scope=local [section.database] user=dbuser password=dbpassword database=use_another_database [section.paths.default] tmpdir=/tmp array[]=first value array[]=second value array[]=third value array[]=fourth value ## API ### decode(inistring) Decode the ini-style formatted `inistring` into a nested object. ### parse(inistring) Alias for `decode(inistring)` ### encode(object, [options]) Encode the object `object` into an ini-style formatted string. If the optional parameter `section` is given, then all top-level properties of the object are put into this section and the `section`-string is prepended to all sub-sections, see the usage example above. The `options` object may contain the following: * `section` A string which will be the first `section` in the encoded ini data. Defaults to none. * `whitespace` Boolean to specify whether to put whitespace around the `=` character. By default, whitespace is omitted, to be friendly to some persnickety old parsers that don't tolerate it well. But some find that it's more human-readable and pretty with the whitespace. For backwards compatibility reasons, if a `string` option is passed in, then it is assumed to be the `section` value. ### stringify(object, [options]) Alias for `encode(object, [options])` ### safe(val) Escapes the string `val` such that it is safe to be used as a key or value in an ini-file. Basically escapes quotes.
For example ini.safe('"unsafe string"') would result in "\"unsafe string\"" ### unsafe(val) Unescapes the string `val` # fs.realpath A backwards-compatible fs.realpath for Node v6 and above In Node v6, the JavaScript implementation of fs.realpath was replaced with a faster (but less resilient) native implementation. That raises new and platform-specific errors and cannot handle long or excessively symlink-looping paths. This module handles those cases by detecting the new errors and falling back to the JavaScript implementation. On versions of Node prior to v6, it has no effect. ## USAGE ```js var rp = require('fs.realpath') // async version rp.realpath(someLongAndLoopingPath, function (er, real) { // the ELOOP was handled, but it was a bit slower }) // sync version var real = rp.realpathSync(someLongAndLoopingPath) // monkeypatch at your own risk! // This replaces the fs.realpath/fs.realpathSync builtins rp.monkeypatch() // un-do the monkeypatching rp.unmonkeypatch() ``` Standard library ================ Standard library components for use with `tsc` (portable) and `asc` (assembly). Base configurations (.json) and definition files (.d.ts) are relevant to `tsc` only and not used by `asc`. # ieee754 [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/ieee754/master.svg [travis-url]: https://travis-ci.org/feross/ieee754 [npm-image]: https://img.shields.io/npm/v/ieee754.svg [npm-url]: https://npmjs.org/package/ieee754 [downloads-image]: https://img.shields.io/npm/dm/ieee754.svg [downloads-url]: https://npmjs.org/package/ieee754 [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com [![saucelabs][saucelabs-image]][saucelabs-url] [saucelabs-image]: https://saucelabs.com/browser-matrix/ieee754.svg [saucelabs-url]: https://saucelabs.com/u/ieee754 ### Read/write IEEE754 floating point numbers from/to a Buffer or array-like object. ## install ``` npm install ieee754 ``` ## methods `var ieee754 = require('ieee754')` The `ieee754` object has the following functions: ``` ieee754.read = function (buffer, offset, isLE, mLen, nBytes) ieee754.write = function (buffer, value, offset, isLE, mLen, nBytes) ``` The arguments mean the following: - buffer = the buffer - offset = offset into the buffer - value = value to set (only for `write`) - isLe = is little endian? - mLen = mantissa length - nBytes = number of bytes ## what is ieee754? The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point computation. [Read more](http://en.wikipedia.org/wiki/IEEE_floating_point). ## license BSD 3 Clause. Copyright (c) 2008, Fair Oaks Labs, Inc. 
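To make the ieee754 `read`/`write` signatures documented above concrete, here is a small sketch that round-trips a float64 through a Node.js `Buffer` (a mantissa length of 52 and 8 bytes correspond to a double; the values are chosen only for illustration):

```js
const ieee754 = require('ieee754');

// Write 3.14159 as a little-endian float64, then read it back.
const buf = Buffer.alloc(8);
ieee754.write(buf, 3.14159, 0, true, 52, 8);
console.log(ieee754.read(buf, 0, true, 52, 8)); //=> 3.14159
```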
[![npm version](https://img.shields.io/npm/v/eslint.svg)](https://www.npmjs.com/package/eslint) [![Downloads](https://img.shields.io/npm/dm/eslint.svg)](https://www.npmjs.com/package/eslint) [![Build Status](https://github.com/eslint/eslint/workflows/CI/badge.svg)](https://github.com/eslint/eslint/actions) [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint.svg?type=shield)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint?ref=badge_shield) <br /> [![Open Collective Backers](https://img.shields.io/opencollective/backers/eslint)](https://opencollective.com/eslint) [![Open Collective Sponsors](https://img.shields.io/opencollective/sponsors/eslint)](https://opencollective.com/eslint) [![Follow us on Twitter](https://img.shields.io/twitter/follow/geteslint?label=Follow&style=social)](https://twitter.com/intent/user?screen_name=geteslint) # ESLint [Website](https://eslint.org) | [Configuring](https://eslint.org/docs/user-guide/configuring) | [Rules](https://eslint.org/docs/rules/) | [Contributing](https://eslint.org/docs/developer-guide/contributing) | [Reporting Bugs](https://eslint.org/docs/developer-guide/contributing/reporting-bugs) | [Code of Conduct](https://eslint.org/conduct) | [Twitter](https://twitter.com/geteslint) | [Mailing List](https://groups.google.com/group/eslint) | [Chat Room](https://eslint.org/chat) ESLint is a tool for identifying and reporting on patterns found in ECMAScript/JavaScript code. In many ways, it is similar to JSLint and JSHint with a few exceptions: * ESLint uses [Espree](https://github.com/eslint/espree) for JavaScript parsing. * ESLint uses an AST to evaluate patterns in code. * ESLint is completely pluggable, every single rule is a plugin and you can add more at runtime. ## Table of Contents 1. [Installation and Usage](#installation-and-usage) 2. [Configuration](#configuration) 3. [Code of Conduct](#code-of-conduct) 4. [Filing Issues](#filing-issues) 5. [Frequently Asked Questions](#faq) 6. [Releases](#releases) 7. [Security Policy](#security-policy) 8. [Semantic Versioning Policy](#semantic-versioning-policy) 9. [Stylistic Rule Updates](#stylistic-rule-updates) 10. [License](#license) 11. [Team](#team) 12. [Sponsors](#sponsors) 13. [Technology Sponsors](#technology-sponsors) ## <a name="installation-and-usage"></a>Installation and Usage Prerequisites: [Node.js](https://nodejs.org/) (`^10.12.0`, or `>=12.0.0`) built with SSL support. (If you are using an official Node.js distribution, SSL is always built in.) You can install ESLint using npm: ``` $ npm install eslint --save-dev ``` You should then set up a configuration file: ``` $ ./node_modules/.bin/eslint --init ``` After that, you can run ESLint on any file or directory like this: ``` $ ./node_modules/.bin/eslint yourfile.js ``` ## <a name="configuration"></a>Configuration After running `eslint --init`, you'll have a `.eslintrc` file in your directory. In it, you'll see some rules configured like this: ```json { "rules": { "semi": ["error", "always"], "quotes": ["error", "double"] } } ``` The names `"semi"` and `"quotes"` are the names of [rules](https://eslint.org/docs/rules) in ESLint. 
The first value is the error level of the rule and can be one of these values: * `"off"` or `0` - turn the rule off * `"warn"` or `1` - turn the rule on as a warning (doesn't affect exit code) * `"error"` or `2` - turn the rule on as an error (exit code will be 1) The three error levels allow you fine-grained control over how ESLint applies rules (for more configuration options and details, see the [configuration docs](https://eslint.org/docs/user-guide/configuring)). ## <a name="code-of-conduct"></a>Code of Conduct ESLint adheres to the [JS Foundation Code of Conduct](https://eslint.org/conduct). ## <a name="filing-issues"></a>Filing Issues Before filing an issue, please be sure to read the guidelines for what you're reporting: * [Bug Report](https://eslint.org/docs/developer-guide/contributing/reporting-bugs) * [Propose a New Rule](https://eslint.org/docs/developer-guide/contributing/new-rules) * [Proposing a Rule Change](https://eslint.org/docs/developer-guide/contributing/rule-changes) * [Request a Change](https://eslint.org/docs/developer-guide/contributing/changes) ## <a name="faq"></a>Frequently Asked Questions ### I'm using JSCS, should I migrate to ESLint? Yes. [JSCS has reached end of life](https://eslint.org/blog/2016/07/jscs-end-of-life) and is no longer supported. We have prepared a [migration guide](https://eslint.org/docs/user-guide/migrating-from-jscs) to help you convert your JSCS settings to an ESLint configuration. We are now at or near 100% compatibility with JSCS. If you try ESLint and believe we are not yet compatible with a JSCS rule/configuration, please create an issue (mentioning that it is a JSCS compatibility issue) and we will evaluate it as per our normal process. ### Does Prettier replace ESLint? No, ESLint does both traditional linting (looking for problematic patterns) and style checking (enforcement of conventions). You can use ESLint for everything, or you can combine both using Prettier to format your code and ESLint to catch possible errors. ### Why can't ESLint find my plugins? * Make sure your plugins (and ESLint) are both in your project's `package.json` as devDependencies (or dependencies, if your project uses ESLint at runtime). * Make sure you have run `npm install` and all your dependencies are installed. * Make sure your plugins' peerDependencies have been installed as well. You can use `npm view eslint-plugin-myplugin peerDependencies` to see what peer dependencies `eslint-plugin-myplugin` has. ### Does ESLint support JSX? Yes, ESLint natively supports parsing JSX syntax (this must be enabled in [configuration](https://eslint.org/docs/user-guide/configuring)). Please note that supporting JSX syntax *is not* the same as supporting React. React applies specific semantics to JSX syntax that ESLint doesn't recognize. We recommend using [eslint-plugin-react](https://www.npmjs.com/package/eslint-plugin-react) if you are using React and want React semantics. ### What ECMAScript versions does ESLint support? ESLint has full support for ECMAScript 3, 5 (default), 2015, 2016, 2017, 2018, 2019, and 2020. You can set your desired ECMAScript syntax (and other settings, like global variables or your target environments) through [configuration](https://eslint.org/docs/user-guide/configuring). ### What about experimental features? ESLint's parser only officially supports the latest final ECMAScript standard. 
We will make changes to core rules in order to avoid crashes on stage 3 ECMAScript syntax proposals (as long as they are implemented using the correct experimental ESTree syntax). We may make changes to core rules to better work with language extensions (such as JSX, Flow, and TypeScript) on a case-by-case basis. In other cases (including if rules need to warn on more or fewer cases due to new syntax, rather than just not crashing), we recommend you use other parsers and/or rule plugins. If you are using Babel, you can use the [babel-eslint](https://github.com/babel/babel-eslint) parser and [eslint-plugin-babel](https://github.com/babel/eslint-plugin-babel) to use any option available in Babel. Once a language feature has been adopted into the ECMAScript standard (stage 4 according to the [TC39 process](https://tc39.github.io/process-document/)), we will accept issues and pull requests related to the new feature, subject to our [contributing guidelines](https://eslint.org/docs/developer-guide/contributing). Until then, please use the appropriate parser and plugin(s) for your experimental feature. ### Where to ask for help? Join our [Mailing List](https://groups.google.com/group/eslint) or [Chatroom](https://eslint.org/chat). ### Why doesn't ESLint lock dependency versions? Lock files like `package-lock.json` are helpful for deployed applications. They ensure that dependencies are consistent between environments and across deployments. Packages like `eslint` that get published to the npm registry do not include lock files. `npm install eslint` as a user will respect version constraints in ESLint's `package.json`. ESLint and its dependencies will be included in the user's lock file if one exists, but ESLint's own lock file would not be used. We intentionally don't lock dependency versions so that we have the latest compatible dependency versions in development and CI that our users get when installing ESLint in a project. The Twilio blog has a [deeper dive](https://www.twilio.com/blog/lockfiles-nodejs) to learn more. ## <a name="releases"></a>Releases We have scheduled releases every two weeks on Friday or Saturday. You can follow a [release issue](https://github.com/eslint/eslint/issues?q=is%3Aopen+is%3Aissue+label%3Arelease) for updates about the scheduling of any particular release. ## <a name="security-policy"></a>Security Policy ESLint takes security seriously. We work hard to ensure that ESLint is safe for everyone and that security issues are addressed quickly and responsibly. Read the full [security policy](https://github.com/eslint/.github/blob/master/SECURITY.md). ## <a name="semantic-versioning-policy"></a>Semantic Versioning Policy ESLint follows [semantic versioning](https://semver.org). However, due to the nature of ESLint as a code quality tool, it's not always clear when a minor or major version bump occurs. To help clarify this for everyone, we've defined the following semantic versioning policy for ESLint: * Patch release (intended to not break your lint build) * A bug fix in a rule that results in ESLint reporting fewer linting errors. * A bug fix to the CLI or core (including formatters). * Improvements to documentation. * Non-user-facing changes such as refactoring code, adding, deleting, or modifying tests, and increasing test coverage. * Re-releasing after a failed release (i.e., publishing a release that doesn't work for anyone). * Minor release (might break your lint build) * A bug fix in a rule that results in ESLint reporting more linting errors. 
* A new rule is created. * A new option to an existing rule that does not result in ESLint reporting more linting errors by default. * A new addition to an existing rule to support a newly-added language feature (within the last 12 months) that will result in ESLint reporting more linting errors by default. * An existing rule is deprecated. * A new CLI capability is created. * New capabilities to the public API are added (new classes, new methods, new arguments to existing methods, etc.). * A new formatter is created. * `eslint:recommended` is updated and will result in strictly fewer linting errors (e.g., rule removals). * Major release (likely to break your lint build) * `eslint:recommended` is updated and may result in new linting errors (e.g., rule additions, most rule option updates). * A new option to an existing rule that results in ESLint reporting more linting errors by default. * An existing formatter is removed. * Part of the public API is removed or changed in an incompatible way. The public API includes: * Rule schemas * Configuration schema * Command-line options * Node.js API * Rule, formatter, parser, plugin APIs According to our policy, any minor update may report more linting errors than the previous release (ex: from a bug fix). As such, we recommend using the tilde (`~`) in `package.json` e.g. `"eslint": "~3.1.0"` to guarantee the results of your builds. ## <a name="stylistic-rule-updates"></a>Stylistic Rule Updates Stylistic rules are frozen according to [our policy](https://eslint.org/blog/2020/05/changes-to-rules-policies) on how we evaluate new rules and rule changes. This means: * **Bug fixes**: We will still fix bugs in stylistic rules. * **New ECMAScript features**: We will also make sure stylistic rules are compatible with new ECMAScript features. * **New options**: We will **not** add any new options to stylistic rules unless an option is the only way to fix a bug or support a newly-added ECMAScript feature. ## <a name="license"></a>License [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint.svg?type=large)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint?ref=badge_large) ## <a name="team"></a>Team These folks keep the project moving and are resources for help. <!-- NOTE: This section is autogenerated. Do not manually edit.--> <!--teamstart--> ### Technical Steering Committee (TSC) The people who manage releases, review feature requests, and meet regularly to ensure ESLint is properly maintained. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/nzakas"> <img src="https://github.com/nzakas.png?s=75" width="75" height="75"><br /> Nicholas C. Zakas </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/btmills"> <img src="https://github.com/btmills.png?s=75" width="75" height="75"><br /> Brandon Mills </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/mdjermanovic"> <img src="https://github.com/mdjermanovic.png?s=75" width="75" height="75"><br /> Milos Djermanovic </a> </td></tr></tbody></table> ### Reviewers The people who review and implement new features. 
<table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/mysticatea"> <img src="https://github.com/mysticatea.png?s=75" width="75" height="75"><br /> Toru Nagashima </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/aladdin-add"> <img src="https://github.com/aladdin-add.png?s=75" width="75" height="75"><br /> 薛定谔的猫 </a> </td></tr></tbody></table> ### Committers The people who review and fix bugs and help triage issues. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/brettz9"> <img src="https://github.com/brettz9.png?s=75" width="75" height="75"><br /> Brett Zamir </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/bmish"> <img src="https://github.com/bmish.png?s=75" width="75" height="75"><br /> Bryan Mishkin </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/g-plane"> <img src="https://github.com/g-plane.png?s=75" width="75" height="75"><br /> Pig Fang </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/anikethsaha"> <img src="https://github.com/anikethsaha.png?s=75" width="75" height="75"><br /> Anix </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/yeonjuan"> <img src="https://github.com/yeonjuan.png?s=75" width="75" height="75"><br /> YeonJuan </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/snitin315"> <img src="https://github.com/snitin315.png?s=75" width="75" height="75"><br /> Nitin Kumar </a> </td></tr></tbody></table> <!--teamend--> ## <a name="sponsors"></a>Sponsors The following companies, organizations, and individuals support ESLint's ongoing maintenance and development. [Become a Sponsor](https://opencollective.com/eslint) to get your logo on our README and website. <!-- NOTE: This section is autogenerated. 
Do not manually edit.--> <!--sponsorsstart--> <h3>Platinum Sponsors</h3> <p><a href="https://automattic.com"><img src="https://images.opencollective.com/photomatt/d0ef3e1/logo.png" alt="Automattic" height="undefined"></a></p><h3>Gold Sponsors</h3> <p><a href="https://nx.dev"><img src="https://images.opencollective.com/nx/0efbe42/logo.png" alt="Nx (by Nrwl)" height="96"></a> <a href="https://google.com/chrome"><img src="https://images.opencollective.com/chrome/dc55bd4/logo.png" alt="Chrome's Web Framework & Tools Performance Fund" height="96"></a> <a href="https://www.salesforce.com"><img src="https://images.opencollective.com/salesforce/ca8f997/logo.png" alt="Salesforce" height="96"></a> <a href="https://www.airbnb.com/"><img src="https://images.opencollective.com/airbnb/d327d66/logo.png" alt="Airbnb" height="96"></a> <a href="https://coinbase.com"><img src="https://avatars.githubusercontent.com/u/1885080?v=4" alt="Coinbase" height="96"></a> <a href="https://substack.com/"><img src="https://avatars.githubusercontent.com/u/53023767?v=4" alt="Substack" height="96"></a></p><h3>Silver Sponsors</h3> <p><a href="https://retool.com/"><img src="https://images.opencollective.com/retool/98ea68e/logo.png" alt="Retool" height="64"></a> <a href="https://liftoff.io/"><img src="https://images.opencollective.com/liftoff/5c4fa84/logo.png" alt="Liftoff" height="64"></a></p><h3>Bronze Sponsors</h3> <p><a href="https://www.crosswordsolver.org/anagram-solver/"><img src="https://images.opencollective.com/anagram-solver/2666271/logo.png" alt="Anagram Solver" height="32"></a> <a href="null"><img src="https://images.opencollective.com/bugsnag-stability-monitoring/c2cef36/logo.png" alt="Bugsnag Stability Monitoring" height="32"></a> <a href="https://mixpanel.com"><img src="https://images.opencollective.com/mixpanel/cd682f7/logo.png" alt="Mixpanel" height="32"></a> <a href="https://www.vpsserver.com"><img src="https://images.opencollective.com/vpsservercom/logo.png" alt="VPS Server" height="32"></a> <a href="https://icons8.com"><img src="https://images.opencollective.com/icons8/7fa1641/logo.png" alt="Icons8: free icons, photos, illustrations, and music" height="32"></a> <a href="https://discord.com"><img src="https://images.opencollective.com/discordapp/f9645d9/logo.png" alt="Discord" height="32"></a> <a href="https://themeisle.com"><img src="https://images.opencollective.com/themeisle/d5592fe/logo.png" alt="ThemeIsle" height="32"></a> <a href="https://www.firesticktricks.com"><img src="https://images.opencollective.com/fire-stick-tricks/b8fbe2c/logo.png" alt="Fire Stick Tricks" height="32"></a> <a href="https://www.practiceignition.com"><img src="https://avatars.githubusercontent.com/u/5753491?v=4" alt="Practice Ignition" height="32"></a></p> <!--sponsorsend--> ## <a name="technology-sponsors"></a>Technology Sponsors * Site search ([eslint.org](https://eslint.org)) is sponsored by [Algolia](https://www.algolia.com) * Hosting for ([eslint.org](https://eslint.org)) is sponsored by [Netlify](https://www.netlify.com) * Password management is sponsored by [1Password](https://www.1password.com) # eslint-utils [![npm version](https://img.shields.io/npm/v/eslint-utils.svg)](https://www.npmjs.com/package/eslint-utils) [![Downloads/month](https://img.shields.io/npm/dm/eslint-utils.svg)](http://www.npmtrends.com/eslint-utils) [![Build Status](https://github.com/mysticatea/eslint-utils/workflows/CI/badge.svg)](https://github.com/mysticatea/eslint-utils/actions) [![Coverage 
Status](https://codecov.io/gh/mysticatea/eslint-utils/branch/master/graph/badge.svg)](https://codecov.io/gh/mysticatea/eslint-utils) [![Dependency Status](https://david-dm.org/mysticatea/eslint-utils.svg)](https://david-dm.org/mysticatea/eslint-utils)

## 🏁 Goal

This package provides utility functions and classes for making ESLint custom rules. For example:

- [getStaticValue](https://eslint-utils.mysticatea.dev/api/ast-utils.html#getstaticvalue) evaluates the static value of an AST node.
- [ReferenceTracker](https://eslint-utils.mysticatea.dev/api/scope-utils.html#referencetracker-class) tracks references to the members of modules/globals, handling assignments and destructuring.

## 📖 Usage

See the [documentation](https://eslint-utils.mysticatea.dev/).

## 📰 Changelog

See [releases](https://github.com/mysticatea/eslint-utils/releases).

## ❤️ Contributing

Contributions are welcome! Please use GitHub's Issues/PRs.

### Development Tools

- `npm test` runs tests and measures coverage.
- `npm run clean` removes the coverage result of the `npm test` command.
- `npm run coverage` shows the coverage result of the last `npm test` command.
- `npm run lint` runs ESLint.
- `npm run watch` runs tests on each file change.

text-encoding-utf-8
==============

This is a **partial** polyfill for the [Encoding Living Standard](https://encoding.spec.whatwg.org/) API for the Web, allowing encoding and decoding of textual data to and from Typed Array buffers for binary data in JavaScript. This is a fork of [text-encoding](https://github.com/inexorabletash/text-encoding) that **only** supports **UTF-8**. Basic examples and tests are included.

### Install ###

There are a few ways you can get the `text-encoding-utf-8` library.

#### Node ####

`text-encoding-utf-8` is on `npm`. Simply run:

```sh
npm install text-encoding-utf-8
```

Or add it to your `package.json` dependencies.

### HTML Page Usage ###

```html
<script src="encoding.js"></script>
```

### API Overview ###

Basic Usage

```js
var uint8array = TextEncoder(encoding).encode(string);
var string = TextDecoder(encoding).decode(uint8array);
```

Streaming Decode

```js
var string = "", decoder = TextDecoder(encoding), buffer;
while (buffer = next_chunk()) {
  string += decoder.decode(buffer, {stream:true});
}
string += decoder.decode(); // finish the stream
```

### Encodings ###

Only `utf-8` and `UTF-8` are supported.

### Non-Standard Behavior ###

Only `utf-8` and `UTF-8` are supported.

### Motivation

Binary size matters, especially on a mobile phone. Safari on iOS does not support TextDecoder or TextEncoder.

TweetNaCl.js
============

Port of [TweetNaCl](http://tweetnacl.cr.yp.to) / [NaCl](http://nacl.cr.yp.to/) to JavaScript for modern browsers and Node.js. Public domain.
[![Build Status](https://travis-ci.org/dchest/tweetnacl-js.svg?branch=master) ](https://travis-ci.org/dchest/tweetnacl-js) Demo: <https://dchest.github.io/tweetnacl-js/> Documentation ============= * [Overview](#overview) * [Audits](#audits) * [Installation](#installation) * [Examples](#examples) * [Usage](#usage) * [Public-key authenticated encryption (box)](#public-key-authenticated-encryption-box) * [Secret-key authenticated encryption (secretbox)](#secret-key-authenticated-encryption-secretbox) * [Scalar multiplication](#scalar-multiplication) * [Signatures](#signatures) * [Hashing](#hashing) * [Random bytes generation](#random-bytes-generation) * [Constant-time comparison](#constant-time-comparison) * [System requirements](#system-requirements) * [Development and testing](#development-and-testing) * [Benchmarks](#benchmarks) * [Contributors](#contributors) * [Who uses it](#who-uses-it) Overview -------- The primary goal of this project is to produce a translation of TweetNaCl to JavaScript which is as close as possible to the original C implementation, plus a thin layer of idiomatic high-level API on top of it. There are two versions, you can use either of them: * `nacl.js` is the port of TweetNaCl with minimum differences from the original + high-level API. * `nacl-fast.js` is like `nacl.js`, but with some functions replaced with faster versions. (Used by default when importing NPM package.) Audits ------ TweetNaCl.js has been audited by [Cure53](https://cure53.de/) in January-February 2017 (audit was sponsored by [Deletype](https://deletype.com)): > The overall outcome of this audit signals a particularly positive assessment > for TweetNaCl-js, as the testing team was unable to find any security > problems in the library. It has to be noted that this is an exceptionally > rare result of a source code audit for any project and must be seen as a true > testament to a development proceeding with security at its core. > > To reiterate, the TweetNaCl-js project, the source code was found to be > bug-free at this point. > > [...] > > In sum, the testing team is happy to recommend the TweetNaCl-js project as > likely one of the safer and more secure cryptographic tools among its > competition. [Read full audit report](https://cure53.de/tweetnacl.pdf) Installation ------------ You can install TweetNaCl.js via a package manager: [Yarn](https://yarnpkg.com/): $ yarn add tweetnacl [NPM](https://www.npmjs.org/): $ npm install tweetnacl or [download source code](https://github.com/dchest/tweetnacl-js/releases). Examples -------- You can find usage examples in our [wiki](https://github.com/dchest/tweetnacl-js/wiki/Examples). Usage ----- All API functions accept and return bytes as `Uint8Array`s. If you need to encode or decode strings, use functions from <https://github.com/dchest/tweetnacl-util-js> or one of the more robust codec packages. In Node.js v4 and later `Buffer` objects are backed by `Uint8Array`s, so you can freely pass them to TweetNaCl.js functions as arguments. The returned objects are still `Uint8Array`s, so if you need `Buffer`s, you'll have to convert them manually; make sure to convert using copying: `Buffer.from(array)` (or `new Buffer(array)` in Node.js v4 or earlier), instead of sharing: `Buffer.from(array.buffer)` (or `new Buffer(array.buffer)` Node 4 or earlier), because some functions return subarrays of their buffers. ### Public-key authenticated encryption (box) Implements *x25519-xsalsa20-poly1305*. 
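For orientation before the per-function reference below, here is a minimal round-trip sketch using the box API (variable names are illustrative; the functions are the ones documented in the entries that follow):

```js
const nacl = require("tweetnacl");

// Each party has a key pair; Alice encrypts to Bob's public key.
const alice = nacl.box.keyPair();
const bob = nacl.box.keyPair();

const nonce = nacl.randomBytes(nacl.box.nonceLength); // must be unique per message
const message = new Uint8Array([104, 101, 108, 108, 111]); // "hello"

const boxed = nacl.box(message, nonce, bob.publicKey, alice.secretKey);
const opened = nacl.box.open(boxed, nonce, alice.publicKey, bob.secretKey);
// `opened` contains the original bytes; it is `null` if authentication fails.
```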
#### nacl.box.keyPair() Generates a new random key pair for box and returns it as an object with `publicKey` and `secretKey` members: { publicKey: ..., // Uint8Array with 32-byte public key secretKey: ... // Uint8Array with 32-byte secret key } #### nacl.box.keyPair.fromSecretKey(secretKey) Returns a key pair for box with public key corresponding to the given secret key. #### nacl.box(message, nonce, theirPublicKey, mySecretKey) Encrypts and authenticates message using peer's public key, our secret key, and the given nonce, which must be unique for each distinct message for a key pair. Returns an encrypted and authenticated message, which is `nacl.box.overheadLength` longer than the original message. #### nacl.box.open(box, nonce, theirPublicKey, mySecretKey) Authenticates and decrypts the given box with peer's public key, our secret key, and the given nonce. Returns the original message, or `null` if authentication fails. #### nacl.box.before(theirPublicKey, mySecretKey) Returns a precomputed shared key which can be used in `nacl.box.after` and `nacl.box.open.after`. #### nacl.box.after(message, nonce, sharedKey) Same as `nacl.box`, but uses a shared key precomputed with `nacl.box.before`. #### nacl.box.open.after(box, nonce, sharedKey) Same as `nacl.box.open`, but uses a shared key precomputed with `nacl.box.before`. #### Constants ##### nacl.box.publicKeyLength = 32 Length of public key in bytes. ##### nacl.box.secretKeyLength = 32 Length of secret key in bytes. ##### nacl.box.sharedKeyLength = 32 Length of precomputed shared key in bytes. ##### nacl.box.nonceLength = 24 Length of nonce in bytes. ##### nacl.box.overheadLength = 16 Length of overhead added to box compared to original message. ### Secret-key authenticated encryption (secretbox) Implements *xsalsa20-poly1305*. #### nacl.secretbox(message, nonce, key) Encrypts and authenticates message using the key and the nonce. The nonce must be unique for each distinct message for this key. Returns an encrypted and authenticated message, which is `nacl.secretbox.overheadLength` longer than the original message. #### nacl.secretbox.open(box, nonce, key) Authenticates and decrypts the given secret box using the key and the nonce. Returns the original message, or `null` if authentication fails. #### Constants ##### nacl.secretbox.keyLength = 32 Length of key in bytes. ##### nacl.secretbox.nonceLength = 24 Length of nonce in bytes. ##### nacl.secretbox.overheadLength = 16 Length of overhead added to secret box compared to original message. ### Scalar multiplication Implements *x25519*. #### nacl.scalarMult(n, p) Multiplies an integer `n` by a group element `p` and returns the resulting group element. #### nacl.scalarMult.base(n) Multiplies an integer `n` by a standard group element and returns the resulting group element. #### Constants ##### nacl.scalarMult.scalarLength = 32 Length of scalar in bytes. ##### nacl.scalarMult.groupElementLength = 32 Length of group element in bytes. ### Signatures Implements [ed25519](http://ed25519.cr.yp.to). #### nacl.sign.keyPair() Generates new random key pair for signing and returns it as an object with `publicKey` and `secretKey` members: { publicKey: ..., // Uint8Array with 32-byte public key secretKey: ... // Uint8Array with 64-byte secret key } #### nacl.sign.keyPair.fromSecretKey(secretKey) Returns a signing key pair with public key corresponding to the given 64-byte secret key. The secret key must have been generated by `nacl.sign.keyPair` or `nacl.sign.keyPair.fromSeed`. 
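As a quick sketch of how the two signing constructors above relate (key material here is random):

```js
const nacl = require("tweetnacl");

const pair = nacl.sign.keyPair();                                 // fresh random key pair
const restored = nacl.sign.keyPair.fromSecretKey(pair.secretKey); // rebuilt from the secret key

// The 64-byte secret key embeds the public key, so both pairs match.
console.log(nacl.verify(pair.publicKey, restored.publicKey));     // true
```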
#### nacl.sign.keyPair.fromSeed(seed) Returns a new signing key pair generated deterministically from a 32-byte seed. The seed must contain enough entropy to be secure. This method is not recommended for general use: instead, use `nacl.sign.keyPair` to generate a new key pair from a random seed. #### nacl.sign(message, secretKey) Signs the message using the secret key and returns a signed message. #### nacl.sign.open(signedMessage, publicKey) Verifies the signed message and returns the message without signature. Returns `null` if verification failed. #### nacl.sign.detached(message, secretKey) Signs the message using the secret key and returns a signature. #### nacl.sign.detached.verify(message, signature, publicKey) Verifies the signature for the message and returns `true` if verification succeeded or `false` if it failed. #### Constants ##### nacl.sign.publicKeyLength = 32 Length of signing public key in bytes. ##### nacl.sign.secretKeyLength = 64 Length of signing secret key in bytes. ##### nacl.sign.seedLength = 32 Length of seed for `nacl.sign.keyPair.fromSeed` in bytes. ##### nacl.sign.signatureLength = 64 Length of signature in bytes. ### Hashing Implements *SHA-512*. #### nacl.hash(message) Returns SHA-512 hash of the message. #### Constants ##### nacl.hash.hashLength = 64 Length of hash in bytes. ### Random bytes generation #### nacl.randomBytes(length) Returns a `Uint8Array` of the given length containing random bytes of cryptographic quality. **Implementation note** TweetNaCl.js uses the following methods to generate random bytes, depending on the platform it runs on: * `window.crypto.getRandomValues` (WebCrypto standard) * `window.msCrypto.getRandomValues` (Internet Explorer 11) * `crypto.randomBytes` (Node.js) If the platform doesn't provide a suitable PRNG, the following functions, which require random numbers, will throw exception: * `nacl.randomBytes` * `nacl.box.keyPair` * `nacl.sign.keyPair` Other functions are deterministic and will continue working. If a platform you are targeting doesn't implement secure random number generator, but you somehow have a cryptographically-strong source of entropy (not `Math.random`!), and you know what you are doing, you can plug it into TweetNaCl.js like this: nacl.setPRNG(function(x, n) { // ... copy n random bytes into x ... }); Note that `nacl.setPRNG` *completely replaces* internal random byte generator with the one provided. ### Constant-time comparison #### nacl.verify(x, y) Compares `x` and `y` in constant time and returns `true` if their lengths are non-zero and equal, and their contents are equal. Returns `false` if either of the arguments has zero length, or arguments have different lengths, or their contents differ. System requirements ------------------- TweetNaCl.js supports modern browsers that have a cryptographically secure pseudorandom number generator and typed arrays, including the latest versions of: * Chrome * Firefox * Safari (Mac, iOS) * Internet Explorer 11 Other systems: * Node.js Development and testing ------------------------ Install NPM modules needed for development: $ npm install To build minified versions: $ npm run build Tests use minified version, so make sure to rebuild it every time you change `nacl.js` or `nacl-fast.js`. ### Testing To run tests in Node.js: $ npm run test-node By default all tests described here work on `nacl.min.js`. To test other versions, set environment variable `NACL_SRC` to the file name you want to test. 
For example, the following command will test fast minified version: $ NACL_SRC=nacl-fast.min.js npm run test-node To run full suite of tests in Node.js, including comparing outputs of JavaScript port to outputs of the original C version: $ npm run test-node-all To prepare tests for browsers: $ npm run build-test-browser and then open `test/browser/test.html` (or `test/browser/test-fast.html`) to run them. To run tests in both Node and Electron: $ npm test ### Benchmarking To run benchmarks in Node.js: $ npm run bench $ NACL_SRC=nacl-fast.min.js npm run bench To run benchmarks in a browser, open `test/benchmark/bench.html` (or `test/benchmark/bench-fast.html`). Benchmarks ---------- For reference, here are benchmarks from MacBook Pro (Retina, 13-inch, Mid 2014) laptop with 2.6 GHz Intel Core i5 CPU (Intel) in Chrome 53/OS X and Xiaomi Redmi Note 3 smartphone with 1.8 GHz Qualcomm Snapdragon 650 64-bit CPU (ARM) in Chrome 52/Android: | | nacl.js Intel | nacl-fast.js Intel | nacl.js ARM | nacl-fast.js ARM | | ------------- |:-------------:|:-------------------:|:-------------:|:-----------------:| | salsa20 | 1.3 MB/s | 128 MB/s | 0.4 MB/s | 43 MB/s | | poly1305 | 13 MB/s | 171 MB/s | 4 MB/s | 52 MB/s | | hash | 4 MB/s | 34 MB/s | 0.9 MB/s | 12 MB/s | | secretbox 1K | 1113 op/s | 57583 op/s | 334 op/s | 14227 op/s | | box 1K | 145 op/s | 718 op/s | 37 op/s | 368 op/s | | scalarMult | 171 op/s | 733 op/s | 56 op/s | 380 op/s | | sign | 77 op/s | 200 op/s | 20 op/s | 61 op/s | | sign.open | 39 op/s | 102 op/s | 11 op/s | 31 op/s | (You can run benchmarks on your devices by clicking on the links at the bottom of the [home page](https://tweetnacl.js.org)). In short, with *nacl-fast.js* and 1024-byte messages you can expect to encrypt and authenticate more than 57000 messages per second on a typical laptop or more than 14000 messages per second on a $170 smartphone, sign about 200 and verify 100 messages per second on a laptop or 60 and 30 messages per second on a smartphone, per CPU core (with Web Workers you can do these operations in parallel), which is good enough for most applications. Contributors ------------ See AUTHORS.md file. Third-party libraries based on TweetNaCl.js ------------------------------------------- * [forward-secrecy](https://github.com/alax/forward-secrecy) — Axolotl ratchet implementation * [nacl-stream](https://github.com/dchest/nacl-stream-js) - streaming encryption * [tweetnacl-auth-js](https://github.com/dchest/tweetnacl-auth-js) — implementation of [`crypto_auth`](http://nacl.cr.yp.to/auth.html) * [tweetnacl-sealed-box](https://github.com/whs/tweetnacl-sealed-box) — implementation of [`sealed boxes`](https://download.libsodium.org/doc/public-key_cryptography/sealed_boxes.html) * [chloride](https://github.com/dominictarr/chloride) - unified API for various NaCl modules Who uses it ----------- Some notable users of TweetNaCl.js: * [GitHub](https://github.com) * [MEGA](https://github.com/meganz/webclient) * [Stellar](https://www.stellar.org/) * [miniLock](https://github.com/kaepora/miniLock) [![build status](https://app.travis-ci.com/dankogai/js-base64.svg)](https://app.travis-ci.com/github/dankogai/js-base64) # base64.js Yet another [Base64] transcoder. [Base64]: http://en.wikipedia.org/wiki/Base64 ## Install ```shell $ npm install --save js-base64 ``` ## Usage ### In Browser Locally… ```html <script src="base64.js"></script> ``` … or Directly from CDN. In which case you don't even need to install. 
```html
<script src="https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.min.js"></script>
```

This good old way loads `Base64` in the global context (`window`). Though `Base64.noConflict()` is made available, you should consider using an ES6 module to avoid tainting `window`.

### As an ES6 Module

locally…

```javascript
import { Base64 } from 'js-base64';
```

```javascript
// or if you prefer no Base64 namespace
import { encode, decode } from 'js-base64';
```

or even remotely.

```html
<script type="module">
// note jsdelivr.net does not automatically minify .mjs
import { Base64 } from 'https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.mjs';
</script>
```

```html
<script type="module">
// or if you prefer no Base64 namespace
import { encode, decode } from 'https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.mjs';
</script>
```

### node.js (commonjs)

```javascript
const {Base64} = require('js-base64');
```

Unlike the case above, the global context is no longer modified. You can also use [esm] to `import` instead of `require`.

[esm]: https://github.com/standard-things/esm

```javascript
require = require('esm')(module);
import {Base64} from 'js-base64';
```

## SYNOPSIS

```javascript
let latin = 'dankogai';
let utf8 = '小飼弾';
let u8s = new Uint8Array([100,97,110,107,111,103,97,105]);
Base64.encode(latin);             // ZGFua29nYWk=
Base64.encode(latin, true);       // ZGFua29nYWk skips padding
Base64.encodeURI(latin);          // ZGFua29nYWk
Base64.btoa(latin);               // ZGFua29nYWk=
Base64.btoa(utf8);                // raises exception
Base64.fromUint8Array(u8s);       // ZGFua29nYWk=
Base64.fromUint8Array(u8s, true); // ZGFua29nYWk which is URI safe
Base64.encode(utf8);              // 5bCP6aO85by+
Base64.encode(utf8, true);        // 5bCP6aO85by-
Base64.encodeURI(utf8);           // 5bCP6aO85by-
```

```javascript
Base64.decode( 'ZGFua29nYWk=');      // dankogai
Base64.decode( 'ZGFua29nYWk');       // dankogai
Base64.atob(   'ZGFua29nYWk=');      // dankogai
Base64.atob(   '5bCP6aO85by+');      // mojibake, since atob() yields raw bytes rather than UTF-8 text
Base64.toUint8Array('ZGFua29nYWk=');// u8s above
Base64.decode( '5bCP6aO85by+');      // 小飼弾
// note .decodeURI() is unnecessary since it accepts both flavors
Base64.decode( '5bCP6aO85by-');      // 小飼弾
```

```javascript
Base64.isValid(0);      // false: 0 is not a string
Base64.isValid('');     // true: a valid Base64-encoded empty byte
Base64.isValid('ZA=='); // true: a valid Base64-encoded 'd'
Base64.isValid('Z A='); // true: whitespaces are okay
Base64.isValid('ZA');   // true: padding ='s can be omitted
Base64.isValid('++');   // true: can be non URL-safe
Base64.isValid('--');   // true: or URL-safe
Base64.isValid('+-');   // false: can't mix both
```

### Built-in Extensions

By default `Base64` leaves built-in prototypes untouched. But you can extend them as below.
```javascript
// you have to explicitly extend String.prototype
Base64.extendString();
// once extended, you can do the following
'dankogai'.toBase64();        // ZGFua29nYWk=
'小飼弾'.toBase64();           // 5bCP6aO85by+
'小飼弾'.toBase64(true);       // 5bCP6aO85by-
'小飼弾'.toBase64URI();        // 5bCP6aO85by- an alias of .toBase64(true)
'小飼弾'.toBase64URL();        // 5bCP6aO85by- an alias of .toBase64URI()
'ZGFua29nYWk='.fromBase64();  // dankogai
'5bCP6aO85by+'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.toUint8Array();// u8s above
```

```javascript
// you have to explicitly extend Uint8Array.prototype
Base64.extendUint8Array();
// once extended, you can do the following
u8s.toBase64();    // 'ZGFua29nYWk='
u8s.toBase64URI(); // 'ZGFua29nYWk'
u8s.toBase64URL(); // 'ZGFua29nYWk' an alias of .toBase64URI()
```

```javascript
// extend all at once
Base64.extendBuiltins()
```

## `.decode()` vs `.atob` (and `.encode()` vs `btoa()`)

Suppose you have:

```
var pngBase64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=";
```

which is a Base64-encoded 1x1 transparent PNG. **DO NOT USE** `Base64.decode(pngBase64)`. Use `Base64.atob(pngBase64)` instead. `Base64.decode()` decodes to a UTF-8 string, while `Base64.atob()` decodes to bytes, which is compatible with the browser built-in `atob()` (which is absent in node.js). The same rule applies to the opposite direction. Or even better, use `Base64.toUint8Array(pngBase64)`.

### If you really, really need an ES5 version

You can transpile it to an ES5 version that runs on IE versions before 11. Do the following in your shell.

```shell
$ make base64.es5.js
```

## Brief History

* Since version 3.3 it is written in TypeScript. Now `base64.mjs` is compiled from `base64.ts`, then `base64.js` is generated from `base64.mjs`.
* Since version 3.7 `base64.js` is ES5-compatible again (hence IE11-compatible).
* Since 3.0 `js-base64` switched to an ES2015 module, so it is no longer compatible with legacy browsers like IE (see above).

# BIP39

[![Build Status](https://travis-ci.org/bitcoinjs/bip39.png?branch=master)](https://travis-ci.org/bitcoinjs/bip39) [![NPM](https://img.shields.io/npm/v/bip39.svg)](https://www.npmjs.org/package/bip39) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard)

JavaScript implementation of [Bitcoin BIP39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki): Mnemonic code for generating deterministic keys.

## Reminder for developers

***Please remember to allow recovery from mnemonic phrases that have invalid checksums (or for which you don't have the wordlist).***

When a checksum is invalid, warn the user that the phrase is not something generated by your app, and ask if they would like to use it anyway. This way, your app only needs to hold the wordlists for your supported languages, but you can recover phrases made by other apps in other languages.

However, there should be other checks in place, such as checking to make sure the user is inputting 12 words or more separated by spaces, i.e. `phrase.trim().split(/\s+/g).length >= 12`.

## Removing wordlists from webpack/browserify

Browserify/Webpack bundles can get very large if you include all the wordlists, so you can now exclude wordlists to make your bundle lighter. For example, if we want to exclude all wordlists besides chinese_simplified, you could build using the browserify command below.
```bash $ browserify -r bip39 -s bip39 \ --exclude=./wordlists/english.json \ --exclude=./wordlists/japanese.json \ --exclude=./wordlists/spanish.json \ --exclude=./wordlists/italian.json \ --exclude=./wordlists/french.json \ --exclude=./wordlists/korean.json \ --exclude=./wordlists/chinese_traditional.json \ > bip39.browser.js ``` This will create a bundle that only contains the chinese_simplified wordlist, and it will be the default wordlist for all calls without explicit wordlists. You can also do this in Webpack using the `IgnorePlugin`. Here is an example of excluding all non-English wordlists ```javascript ... plugins: [ new webpack.IgnorePlugin(/^\.\/(?!english)/, /bip39\/src\/wordlists$/), ], ... ``` This is how it will look in the browser console. ```javascript > bip39.entropyToMnemonic('00000000000000000000000000000000') "的 的 的 的 的 的 的 的 的 的 的 在" > bip39.wordlists.chinese_simplified Array(2048) [ "的", "一", "是", "在", "不", "了", "有", "和", "人", "这", … ] > bip39.wordlists.english undefined > bip39.wordlists.japanese undefined > bip39.wordlists.spanish undefined > bip39.wordlists.italian undefined > bip39.wordlists.french undefined > bip39.wordlists.korean undefined > bip39.wordlists.chinese_traditional undefined ``` For a list of supported wordlists check the wordlists folder. The name of the json file (minus the extension) is the name of the key to access the wordlist. You can also change the default wordlist at runtime if you dislike the wordlist you were given as default. ```javascript > bip39.entropyToMnemonic('00000000000000000000000000000fff') "あいこくしん あいこくしん あいこくしん あいこくしん あいこくしん あいこくしん あいこくしん あいこくしん あいこくしん あいこくしん あまい ろんり" > bip39.setDefaultWordlist('italian') undefined > bip39.entropyToMnemonic('00000000000000000000000000000fff') "abaco abaco abaco abaco abaco abaco abaco abaco abaco abaco aforisma zibetto" ``` ## Installation ``` bash npm install bip39 ``` ## Examples ``` js // Generate a random mnemonic (uses crypto.randomBytes under the hood), defaults to 128-bits of entropy const mnemonic = bip39.generateMnemonic() // => 'seed sock milk update focus rotate barely fade car face mechanic mercy' bip39.mnemonicToSeedSync('basket actual').toString('hex') // => '5cf2d4a8b0355e90295bdfc565a022a409af063d5365bb57bf74d9528f494bfa4400f53d8349b80fdae44082d7f9541e1dba2b003bcfec9d0d53781ca676651f' bip39.mnemonicToSeedSync('basket actual') // => <Buffer 5c f2 d4 a8 b0 35 5e 90 29 5b df c5 65 a0 22 a4 09 af 06 3d 53 65 bb 57 bf 74 d9 52 8f 49 4b fa 44 00 f5 3d 83 49 b8 0f da e4 40 82 d7 f9 54 1e 1d ba 2b ...> // mnemonicToSeed has an synchronous version // mnemonicToSeedSync is less performance oriented bip39.mnemonicToSeed('basket actual').then(console.log) // => <Buffer 5c f2 d4 a8 b0 35 5e 90 29 5b df c5 65 a0 22 a4 09 af 06 3d 53 65 bb 57 bf 74 d9 52 8f 49 4b fa 44 00 f5 3d 83 49 b8 0f da e4 40 82 d7 f9 54 1e 1d ba 2b ...> bip39.mnemonicToSeed('basket actual').then(bytes => bytes.toString('hex')).then(console.log) // => '5cf2d4a8b0355e90295bdfc565a022a409af063d5365bb57bf74d9528f494bfa4400f53d8349b80fdae44082d7f9541e1dba2b003bcfec9d0d53781ca676651f' bip39.mnemonicToSeedSync('basket actual', 'a password') // => <Buffer 46 16 a4 4f 2c 90 b9 69 02 14 b8 fd 43 5b b4 14 62 43 de 10 7b 30 87 59 0a 3b b8 d3 1b 2f 3a ef ab 1d 4b 52 6d 21 e5 0a 04 02 3d 7a d0 66 43 ea 68 3b ... 
> bip39.validateMnemonic(mnemonic) // => true bip39.validateMnemonic('basket actual') // => false ``` ``` js const bip39 = require('bip39') // defaults to BIP39 English word list // uses HEX strings for entropy const mnemonic = bip39.entropyToMnemonic('00000000000000000000000000000000') // => abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about // reversible bip39.mnemonicToEntropy(mnemonic) // => '00000000000000000000000000000000' ``` wide-align ---------- A wide-character aware text alignment function for use in terminals / on the console. ### Usage ``` var align = require('wide-align') // Note that if you view this on a unicode console, all of the slashes are // aligned. This is because on a console, all narrow characters are // an en wide and all wide characters are an em. In browsers, this isn't // held to and wide characters like "古" can be less than two narrow // characters even with a fixed width font. console.log(align.center('abc', 10)) // ' abc ' console.log(align.center('古古古', 10)) // ' 古古古 ' console.log(align.left('abc', 10)) // 'abc ' console.log(align.left('古古古', 10)) // '古古古 ' console.log(align.right('abc', 10)) // ' abc' console.log(align.right('古古古', 10)) // ' 古古古' ``` ### Functions #### `align.center(str, length)` → `str` Returns *str* with spaces added to both sides such that that it is *length* chars long and centered in the spaces. #### `align.left(str, length)` → `str` Returns *str* with spaces to the right such that it is *length* chars long. ### `align.right(str, length)` → `str` Returns *str* with spaces to the left such that it is *length* chars long. ### Origins These functions were originally taken from [cliui](https://npmjs.com/package/cliui). Changes include switching to the MUCH faster pad generation function from [lodash](https://npmjs.com/package/lodash), making center alignment pad both sides and adding left alignment. # lodash.truncate v4.4.2 The [lodash](https://lodash.com/) method `_.truncate` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.truncate ``` In Node.js: ```js var truncate = require('lodash.truncate'); ``` See the [documentation](https://lodash.com/docs#truncate) or [package source](https://github.com/lodash/lodash/blob/4.4.2-npm-packages/lodash.truncate) for more details. # `asbuild` [![Stars](https://img.shields.io/github/stars/AssemblyScript/asbuild.svg?style=social&maxAge=3600&label=Star)](https://github.com/AssemblyScript/asbuild/stargazers) *A simple build tool for [AssemblyScript](https://assemblyscript.org) projects, similar to `cargo`, etc.* ## 🚩 Table of Contents - [Installing](#-installing) - [Usage](#-usage) - [`asb init`](#asb-init---create-an-empty-project) - [`asb test`](#asb-test---run-as-pect-tests) - [`asb fmt`](#asb-fmt---format-as-files-using-eslint) - [`asb run`](#asb-run---run-a-wasi-binary) - [`asb build`](#asb-build---compile-the-project-using-asc) - [Background](#-background) ## 🔧 Installing Install it globally ``` npm install -g asbuild ``` Or, locally as dev dependencies ``` npm install --save-dev asbuild ``` ## 💡 Usage ``` Build tool for AssemblyScript projects. Usage: asb [command] [options] Commands: asb Alias of build command, to maintain back-ward compatibility [default] asb build Compile a local package and all of its dependencies [aliases: compile, make] asb init [baseDir] Create a new AS package in an given directory asb test Run as-pect tests asb fmt [paths..] 
This utility formats current module using eslint. [aliases: format, lint] Options: --version Show version number [boolean] --help Show help [boolean] ``` ### `asb init` - Create an empty project ``` asb init [baseDir] Create a new AS package in an given directory Positionals: baseDir Create a sample AS project in this directory [string] [default: "."] Options: --version Show version number [boolean] --help Show help [boolean] --yes Skip the interactive prompt [boolean] [default: false] ``` ### `asb test` - Run as-pect tests ``` asb test Run as-pect tests USAGE: asb test [options] -- [aspect_options] Options: --version Show version number [boolean] --help Show help [boolean] --verbose, --vv Print out arguments passed to as-pect [boolean] [default: false] ``` ### `asb fmt` - Format AS files using ESlint ``` asb fmt [paths..] This utility formats current module using eslint. Positionals: paths Paths to format [array] [default: ["."]] Initialisation: --init Generates recommended eslint config for AS Projects [boolean] Miscellaneous --lint, --dry-run Tries to fix problems without saving the changes to the file system [boolean] [default: false] Options: --version Show version number [boolean] --help Show help ``` ### `asb run` - Run a WASI binary ``` asb run Run a WASI binary USAGE: asb run [options] [binary path] -- [binary options] Positionals: binary path to Wasm binary [string] [required] Options: --version Show version number [boolean] --help Show help [boolean] --preopen, -p comma separated list of directories to open. [default: "."] ``` ### `asb build` - Compile the project using asc ``` asb build Compile a local package and all of its dependencies USAGE: asb build [entry_file] [options] -- [asc_options] Options: --version Show version number [boolean] --help Show help [boolean] --baseDir, -d Base directory of project. [string] [default: "."] --config, -c Path to asconfig file [string] [default: "./asconfig.json"] --wat Output wat file to outDir [boolean] [default: false] --outDir Directory to place built binaries. Default "./build/<target>/" [string] --target Target for compilation [string] [default: "release"] --verbose Print out arguments passed to asc [boolean] [default: false] Examples: asb build Build release of 'assembly/index.ts to build/release/packageName.wasm asb build --target release Build a release binary asb build -- --measure Pass argument to 'asc' ``` #### Defaults ##### Project structure ``` project/ package.json asconfig.json assembly/ index.ts build/ release/ project.wasm debug/ project.wasm ``` - If no entry file passed and no `entry` field is in `asconfig.json`, `project/assembly/index.ts` is assumed. - `asconfig.json` allows for options for different compile targets, e.g. release, debug, etc. `asc` defaults to the release target. - The default build directory is `./build`, and artifacts are placed at `./build/<target>/packageName.wasm`. ##### Workspaces If a `workspace` field is added to a top level `asconfig.json` file, then each path in the array is built and placed into the top level `outDir`. For example, `asconfig.json`: ```json { "workspaces": ["a", "b"] } ``` Running `asb` in the directory below will use the top level build directory to place all the binaries. 
``` project/ package.json asconfig.json a/ asconfig.json assembly/ index.ts b/ asconfig.json assembly/ index.ts build/ release/ a.wasm b.wasm debug/ a.wasm b.wasm ``` To see an example in action check out the [test workspace](./tests/build_test) ## 📖 Background Asbuild started as wrapper around `asc` to provide an easier CLI interface and now has been extened to support other commands like `init`, `test` and `fmt` just like `cargo` to become a one stop build tool for AS Projects. ## 📜 License This library is provided under the open-source [MIT license](https://choosealicense.com/licenses/mit/). # hash-base [![NPM Package](https://img.shields.io/npm/v/hash-base.svg?style=flat-square)](https://www.npmjs.org/package/hash-base) [![Build Status](https://img.shields.io/travis/crypto-browserify/hash-base.svg?branch=master&style=flat-square)](https://travis-ci.org/crypto-browserify/hash-base) [![Dependency status](https://img.shields.io/david/crypto-browserify/hash-base.svg?style=flat-square)](https://david-dm.org/crypto-browserify/hash-base#info=dependencies) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Abstract base class to inherit from if you want to create streams implementing the same API as node crypto [Hash][1] (for [Cipher][2] / [Decipher][3] check [crypto-browserify/cipher-base][4]). ## Example ```js const HashBase = require('hash-base') const inherits = require('inherits') // our hash function is XOR sum of all bytes function MyHash () { HashBase.call(this, 1) // in bytes this._sum = 0x00 } inherits(MyHash, HashBase) MyHash.prototype._update = function () { for (let i = 0; i < this._block.length; ++i) this._sum ^= this._block[i] } MyHash.prototype._digest = function () { return this._sum } const data = Buffer.from([ 0x00, 0x42, 0x01 ]) const hash = new MyHash().update(data).digest() console.log(hash) // => 67 ``` You also can check [source code](index.js) or [crypto-browserify/md5.js][5] ## LICENSE MIT [1]: https://nodejs.org/api/crypto.html#crypto_class_hash [2]: https://nodejs.org/api/crypto.html#crypto_class_cipher [3]: https://nodejs.org/api/crypto.html#crypto_class_decipher [4]: https://github.com/crypto-browserify/cipher-base [5]: https://github.com/crypto-browserify/md5.js base64-js ========= `base64-js` does basic base64 encoding/decoding in pure JS. [![build status](https://secure.travis-ci.org/beatgammit/base64-js.png)](http://travis-ci.org/beatgammit/base64-js) Many browsers already have base64 encoding/decoding functionality, but it is for text data, not all-purpose binary data. Sometimes encoding/decoding binary data in the browser is useful, and that is what this module does. ## install With [npm](https://npmjs.org) do: `npm install base64-js` and `var base64js = require('base64-js')` For use in web browsers do: `<script src="base64js.min.js"></script>` [Get supported base64-js with the Tidelift Subscription](https://tidelift.com/subscription/pkg/npm-base64-js?utm_source=npm-base64-js&utm_medium=referral&utm_campaign=readme) ## methods `base64js` has three exposed functions, `byteLength`, `toByteArray` and `fromByteArray`, which both take a single argument. 
* `byteLength` - Takes a base64 string and returns length of byte array * `toByteArray` - Takes a base64 string and returns a byte array * `fromByteArray` - Takes a byte array and returns a base64 string ## license MIT [![npm version](https://img.shields.io/npm/v/espree.svg)](https://www.npmjs.com/package/espree) [![Build Status](https://travis-ci.org/eslint/espree.svg?branch=master)](https://travis-ci.org/eslint/espree) [![npm downloads](https://img.shields.io/npm/dm/espree.svg)](https://www.npmjs.com/package/espree) [![Bountysource](https://www.bountysource.com/badge/tracker?tracker_id=9348450)](https://www.bountysource.com/trackers/9348450-eslint?utm_source=9348450&utm_medium=shield&utm_campaign=TRACKER_BADGE) # Espree Espree started out as a fork of [Esprima](http://esprima.org) v1.2.2, the last stable published released of Esprima before work on ECMAScript 6 began. Espree is now built on top of [Acorn](https://github.com/ternjs/acorn), which has a modular architecture that allows extension of core functionality. The goal of Espree is to produce output that is similar to Esprima with a similar API so that it can be used in place of Esprima. ## Usage Install: ``` npm i espree ``` And in your Node.js code: ```javascript const espree = require("espree"); const ast = espree.parse(code); ``` ## API ### `parse()` `parse` parses the given code and returns a abstract syntax tree (AST). It takes two parameters. - `code` [string]() - the code which needs to be parsed. - `options (Optional)` [Object]() - read more about this [here](#options). ```javascript const espree = require("espree"); const ast = espree.parse(code, options); ``` **Example :** ```js const ast = espree.parse('let foo = "bar"', { ecmaVersion: 6 }); console.log(ast); ``` <details><summary>Output</summary> <p> ``` Node { type: 'Program', start: 0, end: 15, body: [ Node { type: 'VariableDeclaration', start: 0, end: 15, declarations: [Array], kind: 'let' } ], sourceType: 'script' } ``` </p> </details> ### `tokenize()` `tokenize` returns the tokens of a given code. It takes two parameters. - `code` [string]() - the code which needs to be parsed. - `options (Optional)` [Object]() - read more about this [here](#options). Even if `options` is empty or undefined or `options.tokens` is `false`, it assigns it to `true` in order to get the `tokens` array **Example :** ```js const tokens = espree.tokenize('let foo = "bar"', { ecmaVersion: 6 }); console.log(tokens); ``` <details><summary>Output</summary> <p> ``` Token { type: 'Keyword', value: 'let', start: 0, end: 3 }, Token { type: 'Identifier', value: 'foo', start: 4, end: 7 }, Token { type: 'Punctuator', value: '=', start: 8, end: 9 }, Token { type: 'String', value: '"bar"', start: 10, end: 15 } ``` </p> </details> ### `version` Returns the current `espree` version ### `VisitorKeys` Returns all visitor keys for traversing the AST from [eslint-visitor-keys](https://github.com/eslint/eslint-visitor-keys) ### `latestEcmaVersion` Returns the latest ECMAScript supported by `espree` ### `supportedEcmaVersions` Returns an array of all supported ECMAScript versions ## Options ```js const options = { // attach range information to each node range: false, // attach line/column location information to each node loc: false, // create a top-level comments array containing all comments comment: false, // create a top-level tokens array containing all tokens tokens: false, // Set to 3, 5 (default), 6, 7, 8, 9, 10, 11, or 12 to specify the version of ECMAScript syntax you want to use. 
// You can also set to 2015 (same as 6), 2016 (same as 7), 2017 (same as 8), 2018 (same as 9), 2019 (same as 10), 2020 (same as 11), or 2021 (same as 12) to use the year-based naming. ecmaVersion: 5, // specify which type of script you're parsing ("script" or "module") sourceType: "script", // specify additional language features ecmaFeatures: { // enable JSX parsing jsx: false, // enable return in global scope globalReturn: false, // enable implied strict mode (if ecmaVersion >= 5) impliedStrict: false } } ``` ## Esprima Compatibility Going Forward The primary goal is to produce the exact same AST structure and tokens as Esprima, and that takes precedence over anything else. (The AST structure being the [ESTree](https://github.com/estree/estree) API with JSX extensions.) Separate from that, Espree may deviate from what Esprima outputs in terms of where and how comments are attached, as well as what additional information is available on AST nodes. That is to say, Espree may add more things to the AST nodes than Esprima does but the overall AST structure produced will be the same. Espree may also deviate from Esprima in the interface it exposes. ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/espree/issues). Espree is licensed under a permissive BSD 2-clause license. ## Security Policy We work hard to ensure that Espree is safe for everyone and that security issues are addressed quickly and responsibly. Read the full [security policy](https://github.com/eslint/.github/blob/master/SECURITY.md). ## Build Commands * `npm test` - run all linting and tests * `npm run lint` - run all linting * `npm run browserify` - creates a version of Espree that is usable in a browser ## Differences from Espree 2.x * The `tokenize()` method does not use `ecmaFeatures`. Any string will be tokenized completely based on ECMAScript 6 semantics. * Trailing whitespace no longer is counted as part of a node. * `let` and `const` declarations are no longer parsed by default. You must opt-in by using an `ecmaVersion` newer than `5` or setting `sourceType` to `module`. * The `esparse` and `esvalidate` binary scripts have been removed. * There is no `tolerant` option. We will investigate adding this back in the future. ## Known Incompatibilities In an effort to help those wanting to transition from other parsers to Espree, the following is a list of noteworthy incompatibilities with other parsers. These are known differences that we do not intend to change. ### Esprima 1.2.2 * Esprima counts trailing whitespace as part of each AST node while Espree does not. In Espree, the end of a node is where the last token occurs. * Espree does not parse `let` and `const` declarations by default. * Error messages returned for parsing errors are different. * There are two addition properties on every node and token: `start` and `end`. These represent the same data as `range` and are used internally by Acorn. ### Esprima 2.x * Esprima 2.x uses a different comment attachment algorithm that results in some comments being added in different places than Espree. The algorithm Espree uses is the same one used in Esprima 1.2.2. 
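To make the `let`/`const` default-parsing behavior noted above concrete, here is a small sketch (the exact error message may vary between versions):

```js
const espree = require("espree");

// With the default ecmaVersion (5), `let` declarations are not parsed.
try {
  espree.parse("let answer = 42;");
} catch (e) {
  console.log(e.message); // a parsing error
}

// Opting in to ES2015 syntax parses the declaration as expected.
const ast = espree.parse("let answer = 42;", { ecmaVersion: 6 });
console.log(ast.body[0].kind); // "let"
```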
## Frequently Asked Questions ### Why another parser [ESLint](http://eslint.org) had been relying on Esprima as its parser from the beginning. While that was fine when the JavaScript language was evolving slowly, the pace of development increased dramatically and Esprima had fallen behind. ESLint, like many other tools reliant on Esprima, has been stuck in using new JavaScript language features until Esprima updates, and that caused our users frustration. We decided the only way for us to move forward was to create our own parser, bringing us inline with JSHint and JSLint, and allowing us to keep implementing new features as we need them. We chose to fork Esprima instead of starting from scratch in order to move as quickly as possible with a compatible API. With Espree 2.0.0, we are no longer a fork of Esprima but rather a translation layer between Acorn and Esprima syntax. This allows us to put work back into a community-supported parser (Acorn) that is continuing to grow and evolve while maintaining an Esprima-compatible parser for those utilities still built on Esprima. ### Have you tried working with Esprima? Yes. Since the start of ESLint, we've regularly filed bugs and feature requests with Esprima and will continue to do so. However, there are some different philosophies around how the projects work that need to be worked through. The initial goal was to have Espree track Esprima and eventually merge the two back together, but we ultimately decided that building on top of Acorn was a better choice due to Acorn's plugin support. ### Why don't you just use Acorn? Acorn is a great JavaScript parser that produces an AST that is compatible with Esprima. Unfortunately, ESLint relies on more than just the AST to do its job. It relies on Esprima's tokens and comment attachment features to get a complete picture of the source code. We investigated switching to Acorn, but the inconsistencies between Esprima and Acorn created too much work for a project like ESLint. We are building on top of Acorn, however, so that we can contribute back and help make Acorn even better. ### What ECMAScript features do you support? Espree supports all ECMAScript 2020 features and partially supports ECMAScript 2021 features. Because ECMAScript 2021 is still under development, we are implementing features as they are finalized. Currently, Espree supports: * [Logical Assignment Operators](https://github.com/tc39/proposal-logical-assignment) * [Numeric Separators](https://github.com/tc39/proposal-numeric-separator) See [finished-proposals.md](https://github.com/tc39/proposals/blob/master/finished-proposals.md) to know what features are finalized. ### How do you determine which experimental features to support? In general, we do not support experimental JavaScript features. We may make exceptions from time to time depending on the maturity of the features. # Statuses [![NPM Version][npm-image]][npm-url] [![NPM Downloads][downloads-image]][downloads-url] [![Node.js Version][node-version-image]][node-version-url] [![Build Status][travis-image]][travis-url] [![Test Coverage][coveralls-image]][coveralls-url] HTTP status utility for node. 
This module provides a list of status codes and messages sourced from a few different projects:

* The [IANA Status Code Registry](https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml)
* The [Node.js project](https://nodejs.org/)
* The [NGINX project](https://www.nginx.com/)
* The [Apache HTTP Server project](https://httpd.apache.org/)

## Installation

This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally):

```sh
$ npm install statuses
```

## API

<!-- eslint-disable no-unused-vars -->

```js
var status = require('statuses')
```

### var code = status(Integer || String)

If `Integer` or `String` is a valid HTTP code or status message, then the appropriate `code` will be returned. Otherwise, an error will be thrown.

<!-- eslint-disable no-undef -->

```js
status(403) // => 403
status('403') // => 403
status('forbidden') // => 403
status('Forbidden') // => 403
status(306) // throws, as it's not supported by node.js
```

### status.STATUS_CODES

Returns an object which maps status codes to status messages, in the same format as the [Node.js http module](https://nodejs.org/dist/latest/docs/api/http.html#http_http_status_codes).

### status.codes

Returns an array of all the status codes as `Integer`s.

### var msg = status[code]

Map of `code` to `status message`. `undefined` for invalid `code`s.

<!-- eslint-disable no-undef, no-unused-expressions -->

```js
status[404] // => 'Not Found'
```

### var code = status[msg]

Map of `status message` to `code`. `msg` can either be title-cased or lower-cased. `undefined` for invalid `status message`s.

<!-- eslint-disable no-undef, no-unused-expressions -->

```js
status['not found'] // => 404
status['Not Found'] // => 404
```

### status.redirect[code]

Returns `true` if a status code is a valid redirect status.

<!-- eslint-disable no-undef, no-unused-expressions -->

```js
status.redirect[200] // => undefined
status.redirect[301] // => true
```

### status.empty[code]

Returns `true` if a status code expects an empty body.

<!-- eslint-disable no-undef, no-unused-expressions -->

```js
status.empty[200] // => undefined
status.empty[204] // => true
status.empty[304] // => true
```

### status.retry[code]

Returns `true` if you should retry the request.

<!-- eslint-disable no-undef, no-unused-expressions -->

```js
status.retry[501] // => undefined
status.retry[503] // => true
```

[npm-image]: https://img.shields.io/npm/v/statuses.svg
[npm-url]: https://npmjs.org/package/statuses
[node-version-image]: https://img.shields.io/node/v/statuses.svg
[node-version-url]: https://nodejs.org/en/download
[travis-image]: https://img.shields.io/travis/jshttp/statuses.svg
[travis-url]: https://travis-ci.org/jshttp/statuses
[coveralls-image]: https://img.shields.io/coveralls/jshttp/statuses.svg
[coveralls-url]: https://coveralls.io/r/jshttp/statuses?branch=master
[downloads-image]: https://img.shields.io/npm/dm/statuses.svg
[downloads-url]: https://npmjs.org/package/statuses

# Tools

## clang-format

The clang-format checking tool is designed to check changed lines of code compared to the given git-refs.

## Migration Script

The migration tool is designed to reduce repetitive work in the migration process. However, the script does not aim to convert everything for you. There are usually some small fixes and major reconstruction required.
### How To Use

To run the conversion script, first make sure you have the latest `node-addon-api` in your `node_modules` directory:

```
npm install node-addon-api
```

Then run the script, passing your project directory:

```
node ./node_modules/node-addon-api/tools/conversion.js ./
```

After the script finishes, recompile and debug anything it missed.

### Quick Fixes

Here is the list of things that can be fixed easily.

1. Change a method's return type to `void` if it doesn't return a value to JavaScript.
2. Use `.` instead of `->` to access attributes or invoke member functions on a `Napi::Object`.
3. Change `Napi::New(env, value);` to `Napi::[Type]::New(env, value);`.

### Major Reconstructions

The implementation of `Napi::ObjectWrap` is significantly different from NAN's. `Napi::ObjectWrap` takes a pointer to the wrapped object and creates a reference to the wrapped object inside the ObjectWrap constructor. `Napi::ObjectWrap` also associates the wrapped object's instance methods with the JavaScript module, rather than using static methods as NAN does.

So if you use `Nan::ObjectWrap` in your module, you will need to take the following steps.

1. Convert your [ClassName]::New function to a constructor function that takes a `Napi::CallbackInfo`. Declare it as

```
[ClassName](const Napi::CallbackInfo& info);
```

and define it as

```
[ClassName]::[ClassName](const Napi::CallbackInfo& info) : Napi::ObjectWrap<[ClassName]>(info) {
  ...
}
```

This way, the `Napi::ObjectWrap` constructor will be invoked after the object has been instantiated and `Napi::ObjectWrap` can use the `this` pointer to create a reference to the wrapped object.

2. Move your original constructor code into the new constructor. Delete your original constructor.
3. In your class initialization function, associate native methods in the following way.

```
Napi::FunctionReference constructor;

void [ClassName]::Init(Napi::Env env, Napi::Object exports, Napi::Object module) {
  Napi::HandleScope scope(env);
  Napi::Function ctor = DefineClass(env, "[ClassName]", {
    InstanceMethod<&[ClassName]::Func1>("Func1"),
    InstanceMethod<&[ClassName]::Func2>("Func2"),
    InstanceAccessor<&[ClassName]::ValueGetter>("Value"),
    StaticMethod<&[ClassName]::StaticMethod>("MethodName"),
    InstanceValue("Value", Napi::[Type]::New(env, value)),
  });
  constructor = Napi::Persistent(ctor);
  constructor.SuppressDestruct();
  exports.Set("[ClassName]", ctor);
}
```

4. Wherever you needed to unwrap the ObjectWrap in NAN, e.g. `[ClassName]* native = Nan::ObjectWrap::Unwrap<[ClassName]>(info.This());`, use the `this` pointer directly as the unwrapped object, since each ObjectWrap instance is associated with a unique object instance.

If you still find issues after following this guide, please leave us an issue describing your problem and we will try to resolve it.

# lodash v4.17.21

The [Lodash](https://lodash.com/) library exported as [Node.js](https://nodejs.org/) modules.

## Installation

Using npm:

```shell
$ npm i -g npm
$ npm i --save lodash
```

In Node.js:

```js
// Load the full build.
var _ = require('lodash');
// Load the core build.
var _ = require('lodash/core');
// Load the FP build for immutable auto-curried iteratee-first data-last methods.
var fp = require('lodash/fp');

// Load method categories.
var array = require('lodash/array');
var object = require('lodash/fp/object');

// Cherry-pick methods for smaller browserify/rollup/webpack bundles.
var at = require('lodash/at'); var curryN = require('lodash/fp/curryN'); ``` See the [package source](https://github.com/lodash/lodash/tree/4.17.21-npm) for more details. **Note:**<br> Install [n_](https://www.npmjs.com/package/n_) for Lodash use in the Node.js < 6 REPL. ## Support Tested in Chrome 74-75, Firefox 66-67, IE 11, Edge 18, Safari 11-12, & Node.js 8-12.<br> Automated [browser](https://saucelabs.com/u/lodash) & [CI](https://travis-ci.org/lodash/lodash/) test runs are available. # ts-mixer [version-badge]: https://badgen.net/npm/v/ts-mixer [version-link]: https://npmjs.com/package/ts-mixer [build-badge]: https://img.shields.io/github/workflow/status/tannerntannern/ts-mixer/ts-mixer%20CI [build-link]: https://github.com/tannerntannern/ts-mixer/actions [ts-versions]: https://badgen.net/badge/icon/3.8,3.9,4.0,4.1,4.2?icon=typescript&label&list=| [node-versions]: https://badgen.net/badge/node/10%2C12%2C14/blue/?list=| [![npm version][version-badge]][version-link] [![github actions][build-badge]][build-link] [![TS Versions][ts-versions]][build-link] [![Node.js Versions][node-versions]][build-link] [![Minified Size](https://badgen.net/bundlephobia/min/ts-mixer)](https://bundlephobia.com/result?p=ts-mixer) [![Conventional Commits](https://badgen.net/badge/conventional%20commits/1.0.0/yellow)](https://conventionalcommits.org) ## Overview `ts-mixer` brings mixins to TypeScript. "Mixins" to `ts-mixer` are just classes, so you already know how to write them, and you can probably mix classes from your favorite library without trouble. The mixin problem is more nuanced than it appears. I've seen countless code snippets that work for certain situations, but fail in others. `ts-mixer` tries to take the best from all these solutions while accounting for the situations you might not have considered. [Quick start guide](#quick-start) ### Features * mixes plain classes * mixes classes that extend other classes * mixes classes that were mixed with `ts-mixer` * supports static properties * supports protected/private properties (the popular function-that-returns-a-class solution does not) * mixes abstract classes (with caveats [[1](#caveats)]) * mixes generic classes (with caveats [[2](#caveats)]) * supports class, method, and property decorators (with caveats [[3, 6](#caveats)]) * mostly supports the complexity presented by constructor functions (with caveats [[4](#caveats)]) * comes with an `instanceof`-like replacement (with caveats [[5, 6](#caveats)]) * [multiple mixing strategies](#settings) (ES6 proxies vs hard copy) ### Caveats 1. Mixing abstract classes requires a bit of a hack that may break in future versions of TypeScript. See [mixing abstract classes](#mixing-abstract-classes) below. 2. Mixing generic classes requires a more cumbersome notation, but it's still possible. See [mixing generic classes](#mixing-generic-classes) below. 3. Using decorators in mixed classes also requires a more cumbersome notation. See [mixing with decorators](#mixing-with-decorators) below. 4. ES6 made it impossible to use `.apply(...)` on class constructors (or any means of calling them without `new`), which makes it impossible for `ts-mixer` to pass the proper `this` to your constructors. This may or may not be an issue for your code, but there are options to work around it. See [dealing with constructors](#dealing-with-constructors) below. 5. `ts-mixer` does not support `instanceof` for mixins, but it does offer a replacement. See the [hasMixin function](#hasmixin) for more details. 6. 
Certain features (specifically, `@decorator` and `hasMixin`) make use of ES6 `Map`s, which means you must either use ES6+ or polyfill `Map` to use them. If you don't need these features, you should be fine without. ## Quick Start ### Installation ``` $ npm install ts-mixer ``` or if you prefer [Yarn](https://yarnpkg.com): ``` $ yarn add ts-mixer ``` ### Basic Example ```typescript import { Mixin } from 'ts-mixer'; class Foo { protected makeFoo() { return 'foo'; } } class Bar { protected makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { public makeFooBar() { return this.makeFoo() + this.makeBar(); } } const fooBar = new FooBar(); console.log(fooBar.makeFooBar()); // "foobar" ``` ## Special Cases ### Mixing Abstract Classes Abstract classes, by definition, cannot be constructed, which means they cannot take on the type, `new(...args) => any`, and by extension, are incompatible with `ts-mixer`. BUT, you can "trick" TypeScript into giving you all the benefits of an abstract class without making it technically abstract. The trick is just some strategic `// @ts-ignore`'s: ```typescript import { Mixin } from 'ts-mixer'; // note that Foo is not marked as an abstract class class Foo { // @ts-ignore: "Abstract methods can only appear within an abstract class" public abstract makeFoo(): string; } class Bar { public makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { // we still get all the benefits of abstract classes here, because TypeScript // will still complain if this method isn't implemented public makeFoo() { return 'foo'; } } ``` Do note that while this does work quite well, it is a bit of a hack and I can't promise that it will continue to work in future TypeScript versions. ### Mixing Generic Classes Frustratingly, it is _impossible_ for generic parameters to be referenced in base class expressions. No matter what, you will eventually run into `Base class expressions cannot reference class type parameters.` The way to get around this is to leverage [declaration merging](https://www.typescriptlang.org/docs/handbook/declaration-merging.html), and a slightly different mixing function from ts-mixer: `mix`. It works exactly like `Mixin`, except it's a decorator, which means it doesn't affect the type information of the class being decorated. See it in action below: ```typescript import { mix } from 'ts-mixer'; class Foo<T> { public fooMethod(input: T): T { return input; } } class Bar<T> { public barMethod(input: T): T { return input; } } interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { } @mix(Foo, Bar) class FooBar<T1, T2> { public fooBarMethod(input1: T1, input2: T2) { return [this.fooMethod(input1), this.barMethod(input2)]; } } ``` Key takeaways from this example: * `interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { }` makes sure `FooBar` has the typing we want, thanks to declaration merging * `@mix(Foo, Bar)` wires things up "on the JavaScript side", since the interface declaration has nothing to do with runtime behavior. * The reason we have to use the `mix` decorator is that the typing produced by `Mixin(Foo, Bar)` would conflict with the typing of the interface. `mix` has no effect "on the TypeScript side," thus avoiding type conflicts. ### Mixing with Decorators Popular libraries such as [class-validator](https://github.com/typestack/class-validator) and [TypeORM](https://github.com/typeorm/typeorm) use decorators to add functionality. Unfortunately, `ts-mixer` has no way of knowing what these libraries do with the decorators behind the scenes. 
So if you want these decorators to be "inherited" with classes you plan to mix, you first have to wrap them with a special `decorate` function exported by `ts-mixer`. Here's an example using `class-validator`: ```typescript import { IsBoolean, IsIn, validate } from 'class-validator'; import { Mixin, decorate } from 'ts-mixer'; class Disposable { @decorate(IsBoolean()) // instead of @IsBoolean() isDisposed: boolean = false; } class Statusable { @decorate(IsIn(['red', 'green'])) // instead of @IsIn(['red', 'green']) status: string = 'green'; } class ExtendedObject extends Mixin(Disposable, Statusable) {} const extendedObject = new ExtendedObject(); extendedObject.status = 'blue'; validate(extendedObject).then(errors => { console.log(errors); }); ``` ### Dealing with Constructors As mentioned in the [caveats section](#caveats), ES6 disallowed calling constructor functions without `new`. This means that the only way for `ts-mixer` to mix instance properties is to instantiate each base class separately, then copy the instance properties into a common object. The consequence of this is that constructors mixed by `ts-mixer` will _not_ receive the proper `this`. **This very well may not be an issue for you!** It only means that your constructors need to be "mostly pure" in terms of how they handle `this`. Specifically, your constructors cannot produce [side effects](https://en.wikipedia.org/wiki/Side_effect_%28computer_science%29) involving `this`, _other than adding properties to `this`_ (the most common side effect in JavaScript constructors). If you simply cannot eliminate `this` side effects from your constructor, there is a workaround available: `ts-mixer` will automatically forward constructor parameters to a predesignated init function (`settings.initFunction`) if it's present on the class. Unlike constructors, functions can be called with an arbitrary `this`, so this predesignated init function _will_ have the proper `this`. Here's a basic example: ```typescript import { Mixin, settings } from 'ts-mixer'; settings.initFunction = 'init'; class Person { public static allPeople: Set<Person> = new Set(); protected init() { Person.allPeople.add(this); } } type PartyAffiliation = 'democrat' | 'republican'; class PoliticalParticipant { public static democrats: Set<PoliticalParticipant> = new Set(); public static republicans: Set<PoliticalParticipant> = new Set(); public party: PartyAffiliation; // note that these same args will also be passed to init function public constructor(party: PartyAffiliation) { this.party = party; } protected init(party: PartyAffiliation) { if (party === 'democrat') PoliticalParticipant.democrats.add(this); else PoliticalParticipant.republicans.add(this); } } class Voter extends Mixin(Person, PoliticalParticipant) {} const v1 = new Voter('democrat'); const v2 = new Voter('democrat'); const v3 = new Voter('republican'); const v4 = new Voter('republican'); ``` Note the above `.add(this)` statements. These would not work as expected if they were placed in the constructor instead, since `this` is not the same between the constructor and `init`, as explained above. ## Other Features ### hasMixin As mentioned above, `ts-mixer` does not support `instanceof` for mixins. While it is possible to implement [custom `instanceof` behavior](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/hasInstance), this library does not do so because it would require modifying the source classes, which is deliberately avoided. 
You can fill this missing functionality with `hasMixin(instance, mixinClass)` instead. See the below example:

```typescript
import { Mixin, hasMixin } from 'ts-mixer';

class Foo {}
class Bar {}
class FooBar extends Mixin(Foo, Bar) {}

const instance = new FooBar();

// doesn't work with instanceof...
console.log(instance instanceof FooBar) // true
console.log(instance instanceof Foo)    // false
console.log(instance instanceof Bar)    // false

// but everything works nicely with hasMixin!
console.log(hasMixin(instance, FooBar)) // true
console.log(hasMixin(instance, Foo))    // true
console.log(hasMixin(instance, Bar))    // true
```

`hasMixin(instance, mixinClass)` will work anywhere that `instance instanceof mixinClass` works. Additionally, like `instanceof`, you get the same [type narrowing benefits](https://www.typescriptlang.org/docs/handbook/advanced-types.html#instanceof-type-guards):

```typescript
if (hasMixin(instance, Foo)) {
    // inferred type of instance is "Foo"
}

if (hasMixin(instance, Bar)) {
    // inferred type of instance is "Bar"
}
```

## Settings

ts-mixer has multiple strategies for mixing classes, which can be configured by modifying `settings` from ts-mixer. For example:

```typescript
import { settings, Mixin } from 'ts-mixer';

settings.prototypeStrategy = 'proxy';

// then use `Mixin` as normal...
```

### `settings.prototypeStrategy`
* Determines how ts-mixer will mix class prototypes together
* Possible values:
    - `'copy'` (default) - Copies all methods from the classes being mixed into a new prototype object. (This will include all methods up the prototype chains as well.) This is the default for ES5 compatibility, but it has the downside of stale references. For example, if you mix `Foo` and `Bar` to make `FooBar`, then redefine a method on `Foo`, `FooBar` will not have the latest methods from `Foo`. If this is not a concern for you, `'copy'` is the best value for this setting.
    - `'proxy'` - Uses an ES6 Proxy to "soft mix" prototypes. Unlike `'copy'`, updates to the base classes _will_ be reflected in the mixed class, which may be desirable. The downside is that method access is not as performant, nor is it ES5 compatible.

### `settings.staticsStrategy`
* Determines how static properties are inherited
* Possible values:
    - `'copy'` (default) - Simply copies all properties (minus `prototype`) from the base classes/constructor functions onto the mixed class. Like `settings.prototypeStrategy = 'copy'`, this strategy also suffers from stale references, but shouldn't be a concern if you don't redefine static methods after mixing.
    - `'proxy'` - Similar to `settings.prototypeStrategy`, proxies static method access to the base classes. Has the same benefits/downsides.
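For a rough sketch of the stale-reference trade-off described for the `'copy'` strategies above (the class and method names here are illustrative, not from the ts-mixer docs):

```js
import { Mixin, settings } from 'ts-mixer';

settings.prototypeStrategy = 'copy'; // the default

class Foo { makeFoo() { return 'foo'; } }
class Bar { makeBar() { return 'bar'; } }
class FooBar extends Mixin(Foo, Bar) {}

// Redefining a method on Foo *after* mixing...
Foo.prototype.makeFoo = function () { return 'new foo'; };

// ...is not visible through the mixed class under 'copy', because FooBar got
// its own copy of the original method. Under 'proxy' the update would be
// reflected, at the cost of slower method access.
console.log(new FooBar().makeFoo()); // 'foo' with 'copy', 'new foo' with 'proxy'
```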
### `settings.initFunction`
* If set, `ts-mixer` will automatically call the function with this name upon construction
* Possible values:
    - `null` (default) - disables the behavior
    - a string - function name to call upon construction
* Read more about why you would want this in [dealing with constructors](#dealing-with-constructors)

### `settings.decoratorInheritance`
* Determines how decorators are inherited from classes passed to `Mixin(...)`
* Possible values:
    - `'deep'` (default) - Deeply inherits decorators from all given classes and their ancestors
    - `'direct'` - Only inherits decorators defined directly on the given classes
    - `'none'` - Skips decorator inheritance

# Author

Tanner Nielsen <tannerntannern@gmail.com>
* Website - [tannernielsen.com](http://tannernielsen.com)
* Github - [tannerntannern](https://github.com/tannerntannern)

# near-sdk-core

This package contains a convenient interface for interacting with NEAR's host runtime. To see the functions that are provided by the host node, see [`env.ts`](./assembly/env/env.ts).

# assemblyscript-json

## Table of contents

### Namespaces

- [JSON](modules/json.md)

### Classes

- [DecoderState](classes/decoderstate.md)
- [JSONDecoder](classes/jsondecoder.md)
- [JSONEncoder](classes/jsonencoder.md)
- [JSONHandler](classes/jsonhandler.md)
- [ThrowingJSONHandler](classes/throwingjsonhandler.md)

# sha.js

[![NPM Package](https://img.shields.io/npm/v/sha.js.svg?style=flat-square)](https://www.npmjs.org/package/sha.js)
[![Build Status](https://img.shields.io/travis/crypto-browserify/sha.js.svg?branch=master&style=flat-square)](https://travis-ci.org/crypto-browserify/sha.js)
[![Dependency status](https://img.shields.io/david/crypto-browserify/sha.js.svg?style=flat-square)](https://david-dm.org/crypto-browserify/sha.js#info=dependencies)
[![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard)

Node-style `SHA` in pure JavaScript.

```js
var shajs = require('sha.js')

console.log(shajs('sha256').update('42').digest('hex')) // => 73475cb40a568e8da8a045ced110137e159f890ac4da883b6b17dc651b3a8049
console.log(new shajs.sha256().update('42').digest('hex')) // => 73475cb40a568e8da8a045ced110137e159f890ac4da883b6b17dc651b3a8049

var sha256stream = shajs('sha256')
sha256stream.end('42')
console.log(sha256stream.read().toString('hex')) // => 73475cb40a568e8da8a045ced110137e159f890ac4da883b6b17dc651b3a8049
```

## supported hashes

`sha.js` currently implements:

- SHA (SHA-0) -- **legacy, do not use in new systems**
- SHA-1 -- **legacy, do not use in new systems**
- SHA-224
- SHA-256
- SHA-384
- SHA-512

## Not an actual stream

Note, this doesn't actually implement a stream, but wrapping this in a stream is trivial. It does update incrementally, so you can hash things larger than RAM, as it uses a constant amount of memory (except when using base64 or utf8 encoding, see code comments).

## Acknowledgements

This work is derived from Paul Johnston's [A JavaScript implementation of the Secure Hash Algorithm](http://pajhome.org.uk/crypt/md5/sha1.html).

## LICENSE

[MIT](LICENSE)

# is-extglob

[![NPM version](https://img.shields.io/npm/v/is-extglob.svg?style=flat)](https://www.npmjs.com/package/is-extglob) [![NPM downloads](https://img.shields.io/npm/dm/is-extglob.svg?style=flat)](https://npmjs.org/package/is-extglob) [![Build Status](https://img.shields.io/travis/jonschlinkert/is-extglob.svg?style=flat)](https://travis-ci.org/jonschlinkert/is-extglob)

> Returns true if a string has an extglob.
## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-extglob ``` ## Usage ```js var isExtglob = require('is-extglob'); ``` **True** ```js isExtglob('?(abc)'); isExtglob('@(abc)'); isExtglob('!(abc)'); isExtglob('*(abc)'); isExtglob('+(abc)'); ``` **False** Escaped extglobs: ```js isExtglob('\\?(abc)'); isExtglob('\\@(abc)'); isExtglob('\\!(abc)'); isExtglob('\\*(abc)'); isExtglob('\\+(abc)'); ``` Everything else... ```js isExtglob('foo.js'); isExtglob('!foo.js'); isExtglob('*.js'); isExtglob('**/abc.js'); isExtglob('abc/*.js'); isExtglob('abc/(aaa|bbb).js'); isExtglob('abc/[a-z].js'); isExtglob('abc/{a,b}.js'); isExtglob('abc/?.js'); isExtglob('abc.js'); isExtglob('abc/def/ghi.js'); ``` ## History **v2.0** Adds support for escaping. Escaped exglobs no longer return true. ## About ### Related projects * [has-glob](https://www.npmjs.com/package/has-glob): Returns `true` if an array has a glob pattern. | [homepage](https://github.com/jonschlinkert/has-glob "Returns `true` if an array has a glob pattern.") * [is-glob](https://www.npmjs.com/package/is-glob): Returns `true` if the given string looks like a glob pattern or an extglob pattern… [more](https://github.com/jonschlinkert/is-glob) | [homepage](https://github.com/jonschlinkert/is-glob "Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a bet") * [micromatch](https://www.npmjs.com/package/micromatch): Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | [homepage](https://github.com/jonschlinkert/micromatch "Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch.") ### Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Building docs _(This document was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme) (a [verb](https://github.com/verbose/verb) generator), please don't edit the readme directly. Any changes to the readme must be made in [.verb.md](.verb.md).)_ To generate the readme and API documentation with [verb](https://github.com/verbose/verb): ```sh $ npm install -g verb verb-generate-readme && verb ``` ### Running tests Install dev dependencies: ```sh $ npm install -d && npm test ``` ### Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](http://twitter.com/jonschlinkert) ### License Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT license](https://github.com/jonschlinkert/is-extglob/blob/master/LICENSE). 
***

_This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.1.31, on October 12, 2016._

# ripemd160

[![NPM Package](https://img.shields.io/npm/v/ripemd160.svg?style=flat-square)](https://www.npmjs.org/package/ripemd160)
[![Build Status](https://img.shields.io/travis/crypto-browserify/ripemd160.svg?branch=master&style=flat-square)](https://travis-ci.org/crypto-browserify/ripemd160)
[![Dependency status](https://img.shields.io/david/crypto-browserify/ripemd160.svg?style=flat-square)](https://david-dm.org/crypto-browserify/ripemd160#info=dependencies)
[![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard)

Node-style `ripemd160` in pure JavaScript.

## Example

```js
var RIPEMD160 = require('ripemd160')

console.log(new RIPEMD160().update('42').digest('hex')) // => 0df020ba32aa9b8b904471ff582ce6b579bf8bc8

var ripemd160stream = new RIPEMD160()
ripemd160stream.end('42')
console.log(ripemd160stream.read().toString('hex')) // => 0df020ba32aa9b8b904471ff582ce6b579bf8bc8
```

## LICENSE

MIT

cipher-base
===

[![Build Status](https://travis-ci.org/crypto-browserify/cipher-base.svg)](https://travis-ci.org/crypto-browserify/cipher-base)

Abstract base class to inherit from if you want to create streams implementing the same API as Node crypto streams.

Requires you to implement two methods: `_final` and `_update`. `_update` takes a buffer and should return a buffer; `_final` takes no arguments and should return a buffer.

The constructor takes one optional argument, a string. If it is present, the object is switched into hash mode (i.e. the kind of object you get from `crypto.createHash` or `crypto.createSign`): the final method is named after the string you passed instead of `final`, and `update` returns `this`.

[![codecov](https://codecov.io/gh/alepop/ed25519-hd-key/branch/master/graph/badge.svg)](https://codecov.io/gh/alepop/ed25519-hd-key)

ed25519 HD Key
=====

Key Derivation for `ed25519`
------------

[SLIP-0010](https://github.com/satoshilabs/slips/blob/master/slip-0010.md) - Specification

Installation
------------

    npm i --save ed25519-hd-key

Usage
-----

**example:**

```js
const { derivePath, getMasterKeyFromSeed, getPublicKey } = require('ed25519-hd-key')

const hexSeed = 'fffcf9f6f3f0edeae7e4e1dedbd8d5d2cfccc9c6c3c0bdbab7b4b1aeaba8a5a29f9c999693908d8a8784817e7b7875726f6c696663605d5a5754514e4b484542';

const { key, chainCode } = getMasterKeyFromSeed(hexSeed);
console.log(key.toString('hex')) // => 2b4be7f19ee27bbf30c667b642d5f4aa69fd169872f8fc3059c08ebae2eb19e7
console.log(chainCode.toString('hex')); // => 90046a93de5380a72b5e45010748567d5ea02bbf6522f979e05c0d8d8ca9fffb

const { key: derivedKey, chainCode: derivedChainCode } = derivePath("m/0'/2147483647'", hexSeed);
console.log(derivedKey.toString('hex')) // => ea4f5bfe8694d8bb74b7b59404632fd5968b774ed545e810de9c32a4fb4192f4
console.log(derivedChainCode.toString('hex')); // => 138f0b2551bcafeca6ff2aa88ba8ed0ed8de070841f0c4ef0165df8181eaad7f

console.log(getPublicKey(derivedKey).toString('hex')) // => 005ba3b9ac6e90e83effcd25ac4e58a1365a9e35a3d3ae5eb07b9e4d90bcf7506d
```

Tests
-----

```
npm test
```

References
----------

[SLIP-0010](https://github.com/satoshilabs/slips/blob/master/slip-0010.md)

[BIP-0032](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki)

[BIP-0044](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)

These files are compiled dot templates from the `dot` folder. Do NOT edit them directly; edit the templates and run `npm run build` from the main ajv folder.
# responselike > A response-like object for mocking a Node.js HTTP response stream [![Build Status](https://travis-ci.org/lukechilds/responselike.svg?branch=master)](https://travis-ci.org/lukechilds/responselike) [![Coverage Status](https://coveralls.io/repos/github/lukechilds/responselike/badge.svg?branch=master)](https://coveralls.io/github/lukechilds/responselike?branch=master) [![npm](https://img.shields.io/npm/dm/responselike.svg)](https://www.npmjs.com/package/responselike) [![npm](https://img.shields.io/npm/v/responselike.svg)](https://www.npmjs.com/package/responselike) Returns a streamable response object similar to a [Node.js HTTP response stream](https://nodejs.org/api/http.html#http_class_http_incomingmessage). Useful for formatting cached responses so they can be consumed by code expecting a real response. ## Install ```shell npm install --save responselike ``` Or if you're just using for testing you'll want: ```shell npm install --save-dev responselike ``` ## Usage ```js const Response = require('responselike'); const response = new Response(200, { foo: 'bar' }, Buffer.from('Hi!'), 'https://example.com'); response.statusCode; // 200 response.headers; // { foo: 'bar' } response.body; // <Buffer 48 69 21> response.url; // 'https://example.com' response.pipe(process.stdout); // Hi! ``` ## API ### new Response(statusCode, headers, body, url) Returns a streamable response object similar to a [Node.js HTTP response stream](https://nodejs.org/api/http.html#http_class_http_incomingmessage). #### statusCode Type: `number` HTTP response status code. #### headers Type: `object` HTTP headers object. Keys will be automatically lowercased. #### body Type: `buffer` A Buffer containing the response body. The Buffer contents will be streamable but is also exposed directly as `response.body`. #### url Type: `string` Request URL string. ## License MIT © Luke Childs binaryen.js =========== **binaryen.js** is a port of [Binaryen](https://github.com/WebAssembly/binaryen) to the Web, allowing you to generate [WebAssembly](https://webassembly.org) using a JavaScript API. 
<a href="https://github.com/AssemblyScript/binaryen.js/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/binaryen.js/Build/main?label=build&logo=github" alt="Build status" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen.svg?label=latest&color=007acc&logo=npm" alt="npm version" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen/nightly.svg?label=nightly&color=007acc&logo=npm" alt="npm nightly version" /></a> Usage ----- ``` $> npm install binaryen ``` ```js import binaryen from "binaryen"; // Create a module with a single function var myModule = new binaryen.Module(); myModule.addFunction("add", binaryen.createType([ binaryen.i32, binaryen.i32 ]), binaryen.i32, [ binaryen.i32 ], myModule.block(null, [ myModule.local.set(2, myModule.i32.add( myModule.local.get(0, binaryen.i32), myModule.local.get(1, binaryen.i32) ) ), myModule.return( myModule.local.get(2, binaryen.i32) ) ]) ); myModule.addFunctionExport("add", "add"); // Optimize the module using default passes and levels myModule.optimize(); // Validate the module if (!myModule.validate()) throw new Error("validation error"); // Generate text format and binary var textData = myModule.emitText(); var wasmData = myModule.emitBinary(); // Example usage with the WebAssembly API var compiled = new WebAssembly.Module(wasmData); var instance = new WebAssembly.Instance(compiled, {}); console.log(instance.exports.add(41, 1)); ``` The buildbot also publishes nightly versions once a day if there have been changes. The latest nightly can be installed through ``` $> npm install binaryen@nightly ``` or you can use one of the [previous versions](https://github.com/AssemblyScript/binaryen.js/tags) instead if necessary. ### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/AssemblyScript/binaryen.js@VERSION/index.js` * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/binaryen@VERSION/index.js` * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/binaryen@VERSION/index.js` Replace `VERSION` with a [specific version](https://github.com/AssemblyScript/binaryen.js/releases) or omit it (not recommended in production) to use main/latest. ### Command line The package includes Node.js builds of [wasm-opt](https://github.com/WebAssembly/binaryen#wasm-opt) and [wasm2js](https://github.com/WebAssembly/binaryen#wasm2js). API --- **Please note** that the Binaryen API is evolving fast and that definitions and documentation provided by the package tend to get out of sync despite our best efforts. It's a bot after all. If you rely on binaryen.js and spot an issue, please consider sending a PR our way by updating [index.d.ts](./index.d.ts) and [README.md](./README.md) to reflect the [current API](https://github.com/WebAssembly/binaryen/blob/main/src/js/binaryen.js-post.js). 
<!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ### Contents - [Types](#types) - [Module construction](#module-construction) - [Module manipulation](#module-manipulation) - [Module validation](#module-validation) - [Module optimization](#module-optimization) - [Module creation](#module-creation) - [Expression construction](#expression-construction) - [Control flow](#control-flow) - [Variable accesses](#variable-accesses) - [Integer operations](#integer-operations) - [Floating point operations](#floating-point-operations) - [Datatype conversions](#datatype-conversions) - [Function calls](#function-calls) - [Linear memory accesses](#linear-memory-accesses) - [Host operations](#host-operations) - [Vector operations 🦄](#vector-operations-) - [Atomic memory accesses 🦄](#atomic-memory-accesses-) - [Atomic read-modify-write operations 🦄](#atomic-read-modify-write-operations-) - [Atomic wait and notify operations 🦄](#atomic-wait-and-notify-operations-) - [Sign extension operations 🦄](#sign-extension-operations-) - [Multi-value operations 🦄](#multi-value-operations-) - [Exception handling operations 🦄](#exception-handling-operations-) - [Reference types operations 🦄](#reference-types-operations-) - [Bulk memory operations 🦄](#bulk-memory-operations-) - [Expression manipulation](#expression-manipulation) - [Relooper](#relooper) - [Source maps](#source-maps) - [Debugging](#debugging) <!-- END doctoc generated TOC please keep comment here to allow auto update --> [Future features](http://webassembly.org/docs/future-features/) 🦄 might not be supported by all runtimes. ### Types * **none**: `Type`<br /> The none type, e.g., `void`. * **i32**: `Type`<br /> 32-bit integer type. * **i64**: `Type`<br /> 64-bit integer type. * **f32**: `Type`<br /> 32-bit float type. * **f64**: `Type`<br /> 64-bit float (double) type. * **v128**: `Type`<br /> 128-bit vector type. 🦄 * **funcref**: `Type`<br /> A function reference. 🦄 * **anyref**: `Type`<br /> Any host reference. 🦄 * **nullref**: `Type`<br /> A null reference. 🦄 * **exnref**: `Type`<br /> An exception reference. 🦄 * **unreachable**: `Type`<br /> Special type indicating unreachable code when obtaining information about an expression. * **auto**: `Type`<br /> Special type used in **Module#block** exclusively. Lets the API figure out a block's result type automatically. * **createType**(types: `Type[]`): `Type`<br /> Creates a multi-value type from an array of types. * **expandType**(type: `Type`): `Type[]`<br /> Expands a multi-value type to an array of types. ### Module construction * new **Module**()<br /> Constructs a new module. * **parseText**(text: `string`): `Module`<br /> Creates a module from Binaryen's s-expression text format (not official stack-style text format). * **readBinary**(data: `Uint8Array`): `Module`<br /> Creates a module from binary data. ### Module manipulation * Module#**addFunction**(name: `string`, params: `Type`, results: `Type`, vars: `Type[]`, body: `ExpressionRef`): `FunctionRef`<br /> Adds a function. `vars` indicate additional locals, in the given order. * Module#**getFunction**(name: `string`): `FunctionRef`<br /> Gets a function, by name, * Module#**removeFunction**(name: `string`): `void`<br /> Removes a function, by name. * Module#**getNumFunctions**(): `number`<br /> Gets the number of functions within the module. * Module#**getFunctionByIndex**(index: `number`): `FunctionRef`<br /> Gets the function at the specified index. 
* Module#**addFunctionImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, params: `Type`, results: `Type`): `void`<br /> Adds a function import. * Module#**addTableImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a table import. There's just one table for now, using name `"0"`. * Module#**addMemoryImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a memory import. There's just one memory for now, using name `"0"`. * Module#**addGlobalImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, globalType: `Type`): `void`<br /> Adds a global variable import. Imported globals must be immutable. * Module#**addFunctionExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a function export. * Module#**addTableExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a table export. There's just one table for now, using name `"0"`. * Module#**addMemoryExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a memory export. There's just one memory for now, using name `"0"`. * Module#**addGlobalExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a global variable export. Exported globals must be immutable. * Module#**getNumExports**(): `number`<br /> Gets the number of exports witin the module. * Module#**getExportByIndex**(index: `number`): `ExportRef`<br /> Gets the export at the specified index. * Module#**removeExport**(externalName: `string`): `void`<br /> Removes an export, by external name. * Module#**addGlobal**(name: `string`, type: `Type`, mutable: `number`, value: `ExpressionRef`): `GlobalRef`<br /> Adds a global instance variable. * Module#**getGlobal**(name: `string`): `GlobalRef`<br /> Gets a global, by name, * Module#**removeGlobal**(name: `string`): `void`<br /> Removes a global, by name. * Module#**setFunctionTable**(initial: `number`, maximum: `number`, funcs: `string[]`, offset?: `ExpressionRef`): `void`<br /> Sets the contents of the function table. There's just one table for now, using name `"0"`. * Module#**getFunctionTable**(): `{ imported: boolean, segments: TableElement[] }`<br /> Gets the contents of the function table. * TableElement#**offset**: `ExpressionRef` * TableElement#**names**: `string[]` * Module#**setMemory**(initial: `number`, maximum: `number`, exportName: `string | null`, segments: `MemorySegment[]`, shared?: `boolean`): `void`<br /> Sets the memory. There's just one memory for now, using name `"0"`. Providing `exportName` also creates a memory export. * MemorySegment#**offset**: `ExpressionRef` * MemorySegment#**data**: `Uint8Array` * MemorySegment#**passive**: `boolean` * Module#**getNumMemorySegments**(): `number`<br /> Gets the number of memory segments within the module. * Module#**getMemorySegmentInfoByIndex**(index: `number`): `MemorySegmentInfo`<br /> Gets information about the memory segment at the specified index. * MemorySegmentInfo#**offset**: `number` * MemorySegmentInfo#**data**: `Uint8Array` * MemorySegmentInfo#**passive**: `boolean` * Module#**setStart**(start: `FunctionRef`): `void`<br /> Sets the start function. * Module#**getFeatures**(): `Features`<br /> Gets the WebAssembly features enabled for this module. Note that the return value may be a bitmask indicating multiple features. 
Possible feature flags are: * Features.**MVP**: `Features` * Features.**Atomics**: `Features` * Features.**BulkMemory**: `Features` * Features.**MutableGlobals**: `Features` * Features.**NontrappingFPToInt**: `Features` * Features.**SignExt**: `Features` * Features.**SIMD128**: `Features` * Features.**ExceptionHandling**: `Features` * Features.**TailCall**: `Features` * Features.**ReferenceTypes**: `Features` * Features.**Multivalue**: `Features` * Features.**All**: `Features` * Module#**setFeatures**(features: `Features`): `void`<br /> Sets the WebAssembly features enabled for this module. * Module#**addCustomSection**(name: `string`, contents: `Uint8Array`): `void`<br /> Adds a custom section to the binary. * Module#**autoDrop**(): `void`<br /> Enables automatic insertion of `drop` operations where needed. Lets you not worry about dropping when creating your code. * **getFunctionInfo**(ftype: `FunctionRef`: `FunctionInfo`<br /> Obtains information about a function. * FunctionInfo#**name**: `string` * FunctionInfo#**module**: `string | null` (if imported) * FunctionInfo#**base**: `string | null` (if imported) * FunctionInfo#**params**: `Type` * FunctionInfo#**results**: `Type` * FunctionInfo#**vars**: `Type` * FunctionInfo#**body**: `ExpressionRef` * **getGlobalInfo**(global: `GlobalRef`): `GlobalInfo`<br /> Obtains information about a global. * GlobalInfo#**name**: `string` * GlobalInfo#**module**: `string | null` (if imported) * GlobalInfo#**base**: `string | null` (if imported) * GlobalInfo#**type**: `Type` * GlobalInfo#**mutable**: `boolean` * GlobalInfo#**init**: `ExpressionRef` * **getExportInfo**(export_: `ExportRef`): `ExportInfo`<br /> Obtains information about an export. * ExportInfo#**kind**: `ExternalKind` * ExportInfo#**name**: `string` * ExportInfo#**value**: `string` Possible `ExternalKind` values are: * **ExternalFunction**: `ExternalKind` * **ExternalTable**: `ExternalKind` * **ExternalMemory**: `ExternalKind` * **ExternalGlobal**: `ExternalKind` * **ExternalEvent**: `ExternalKind` * **getEventInfo**(event: `EventRef`): `EventInfo`<br /> Obtains information about an event. * EventInfo#**name**: `string` * EventInfo#**module**: `string | null` (if imported) * EventInfo#**base**: `string | null` (if imported) * EventInfo#**attribute**: `number` * EventInfo#**params**: `Type` * EventInfo#**results**: `Type` * **getSideEffects**(expr: `ExpressionRef`, features: `FeatureFlags`): `SideEffects`<br /> Gets the side effects of the specified expression. * SideEffects.**None**: `SideEffects` * SideEffects.**Branches**: `SideEffects` * SideEffects.**Calls**: `SideEffects` * SideEffects.**ReadsLocal**: `SideEffects` * SideEffects.**WritesLocal**: `SideEffects` * SideEffects.**ReadsGlobal**: `SideEffects` * SideEffects.**WritesGlobal**: `SideEffects` * SideEffects.**ReadsMemory**: `SideEffects` * SideEffects.**WritesMemory**: `SideEffects` * SideEffects.**ImplicitTrap**: `SideEffects` * SideEffects.**IsAtomic**: `SideEffects` * SideEffects.**Throws**: `SideEffects` * SideEffects.**Any**: `SideEffects` ### Module validation * Module#**validate**(): `boolean`<br /> Validates the module. Returns `true` if valid, otherwise prints validation errors and returns `false`. ### Module optimization * Module#**optimize**(): `void`<br /> Optimizes the module using the default optimization passes. * Module#**optimizeFunction**(func: `FunctionRef | string`): `void`<br /> Optimizes a single function using the default optimization passes. 
* Module#**runPasses**(passes: `string[]`): `void`<br /> Runs the specified passes on the module. * Module#**runPassesOnFunction**(func: `FunctionRef | string`, passes: `string[]`): `void`<br /> Runs the specified passes on a single function. * **getOptimizeLevel**(): `number`<br /> Gets the currently set optimize level. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **setOptimizeLevel**(level: `number`): `void`<br /> Sets the optimization level to use. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **getShrinkLevel**(): `number`<br /> Gets the currently set shrink level. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **setShrinkLevel**(level: `number`): `void`<br /> Sets the shrink level to use. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **getDebugInfo**(): `boolean`<br /> Gets whether generating debug information is currently enabled or not. * **setDebugInfo**(on: `boolean`): `void`<br /> Enables or disables debug information in emitted binaries. * **getLowMemoryUnused**(): `boolean`<br /> Gets whether the low 1K of memory can be considered unused when optimizing. * **setLowMemoryUnused**(on: `boolean`): `void`<br /> Enables or disables whether the low 1K of memory can be considered unused when optimizing. * **getPassArgument**(key: `string`): `string | null`<br /> Gets the value of the specified arbitrary pass argument. * **setPassArgument**(key: `string`, value: `string | null`): `void`<br /> Sets the value of the specified arbitrary pass argument. Removes the respective argument if `value` is `null`. * **clearPassArguments**(): `void`<br /> Clears all arbitrary pass arguments. * **getAlwaysInlineMaxSize**(): `number`<br /> Gets the function size at which we always inline. * **setAlwaysInlineMaxSize**(size: `number`): `void`<br /> Sets the function size at which we always inline. * **getFlexibleInlineMaxSize**(): `number`<br /> Gets the function size which we inline when functions are lightweight. * **setFlexibleInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when functions are lightweight. * **getOneCallerInlineMaxSize**(): `number`<br /> Gets the function size which we inline when there is only one caller. * **setOneCallerInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when there is only one caller. ### Module creation * Module#**emitBinary**(): `Uint8Array`<br /> Returns the module in binary format. * Module#**emitBinary**(sourceMapUrl: `string | null`): `BinaryWithSourceMap`<br /> Returns the module in binary format with its source map. If `sourceMapUrl` is `null`, source map generation is skipped. * BinaryWithSourceMap#**binary**: `Uint8Array` * BinaryWithSourceMap#**sourceMap**: `string | null` * Module#**emitText**(): `string`<br /> Returns the module in Binaryen's s-expression text format (not official stack-style text format). * Module#**emitAsmjs**(): `string`<br /> Returns the [asm.js](http://asmjs.org/) representation of the module. * Module#**dispose**(): `void`<br /> Releases the resources held by the module once it isn't needed anymore. ### Expression construction #### [Control flow](http://webassembly.org/docs/semantics/#control-constructs-and-instructions) * Module#**block**(label: `string | null`, children: `ExpressionRef[]`, resultType?: `Type`): `ExpressionRef`<br /> Creates a block. `resultType` defaults to `none`. 
* Module#**if**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse?: `ExpressionRef`): `ExpressionRef`<br /> Creates an if or if/else combination. * Module#**loop**(label: `string | null`, body: `ExpressionRef`): `ExpressionRef`<br /> Creates a loop. * Module#**br**(label: `string`, condition?: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a branch (br) to a label. * Module#**switch**(labels: `string[]`, defaultLabel: `string`, condition: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a switch (br_table). * Module#**nop**(): `ExpressionRef`<br /> Creates a no-operation (nop) instruction. * Module#**return**(value?: `ExpressionRef`): `ExpressionRef` Creates a return. * Module#**unreachable**(): `ExpressionRef`<br /> Creates an [unreachable](http://webassembly.org/docs/semantics/#unreachable) instruction that will always trap. * Module#**drop**(value: `ExpressionRef`): `ExpressionRef`<br /> Creates a [drop](http://webassembly.org/docs/semantics/#type-parametric-operators) of a value. * Module#**select**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse: `ExpressionRef`, type?: `Type`): `ExpressionRef`<br /> Creates a [select](http://webassembly.org/docs/semantics/#type-parametric-operators) of one of two values. #### [Variable accesses](http://webassembly.org/docs/semantics/#local-variables) * Module#**local.get**(index: `number`, type: `Type`): `ExpressionRef`<br /> Creates a local.get for the local at the specified index. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**local.set**(index: `number`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a local.set for the local at the specified index. * Module#**local.tee**(index: `number`, value: `ExpressionRef`, type: `Type`): `ExpressionRef`<br /> Creates a local.tee for the local at the specified index. A tee differs from a set in that the value remains on the stack. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**global.get**(name: `string`, type: `Type`): `ExpressionRef`<br /> Creates a global.get for the global with the specified name. Note that we must specify the type here as we may not have created the global being accessed yet. * Module#**global.set**(name: `string`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a global.set for the global with the specified name. 
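Putting the variable-access builders above to work, here is a rough sketch of a function that increments a global counter; the module layout and names are illustrative only:

```js
import binaryen from "binaryen";

const m = new binaryen.Module();

// A mutable i32 global, initialized to 0.
m.addGlobal("counter", binaryen.i32, true, m.i32.const(0));

// bump(): adds 1 to the global and returns the new value. local.tee keeps
// the incremented value in local 0 so it can be returned afterwards.
m.addFunction("bump", binaryen.none, binaryen.i32, [binaryen.i32],
  m.block(null, [
    m.global.set("counter",
      m.local.tee(0,
        m.i32.add(m.global.get("counter", binaryen.i32), m.i32.const(1)),
        binaryen.i32
      )
    ),
    m.return(m.local.get(0, binaryen.i32))
  ])
);
m.addFunctionExport("bump", "bump");
```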
#### [Integer operations](http://webassembly.org/docs/semantics/#32-bit-integer-operators) * Module#i32.**const**(value: `number`): `ExpressionRef` * Module#i32.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i64.**const**(low: `number`, high: `number`): `ExpressionRef` * Module#i64.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` 
* Module#i64.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Floating point operations](http://webassembly.org/docs/semantics/#floating-point-operators) * Module#f32.**const**(value: `number`): `ExpressionRef` * Module#f32.**const_bits**(value: `number`): `ExpressionRef` * Module#f32.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#f64.**const**(value: `number`): `ExpressionRef` * Module#f64.**const_bits**(value: `number`): `ExpressionRef` * Module#f64.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**ceil**(value: `ExpressionRef`): `ExpressionRef` * 
Module#f64.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Datatype conversions](http://webassembly.org/docs/semantics/#datatype-conversions-truncations-reinterpretations-promotions-and-demotions) * Module#i32.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**wrap**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**demote**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**promote**(value: `ExpressionRef`): `ExpressionRef` #### [Function calls](http://webassembly.org/docs/semantics/#calls) * Module#**call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef` Creates a call to a function. 
Note that we must specify the return type here as we may not have created the function being called yet. * Module#**return_call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef`<br /> Like **call**, but creates a tail-call. 🦄 * Module#**call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Similar to **call**, but calls indirectly, i.e., via a function pointer, so an expression replaces the name as the called value. * Module#**return_call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Like **call_indirect**, but creates a tail-call. 🦄 #### [Linear memory accesses](http://webassembly.org/docs/semantics/#linear-memory-accesses) * Module#i32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> > * Module#i64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store32**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Host operations](http://webassembly.org/docs/semantics/#resizing) * Module#**memory.size**(): `ExpressionRef` * Module#**memory.grow**(value: `number`): `ExpressionRef` #### [Vector 
operations](https://github.com/WebAssembly/simd/blob/main/proposals/simd/SIMD.md) 🦄 * Module#v128.**const**(bytes: `Uint8Array`): `ExpressionRef` * Module#v128.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#v128.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#v128.**not**(value: `ExpressionRef`): `ExpressionRef` * Module#v128.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**andnot**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**bitselect**(left: `ExpressionRef`, right: `ExpressionRef`, cond: `ExpressionRef`): `ExpressionRef` > * Module#i8x16.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_u**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i16x8.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_u**(value: 
`ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**dot_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): 
`ExpressionRef` * Module#i64x2.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#f32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**lt**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#v8x16.**shuffle**(left: `ExpressionRef`, right: `ExpressionRef`, mask: `Uint8Array`): `ExpressionRef` * Module#v8x16.**swizzle**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v8x16.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v16x8.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v32x4.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v64x2.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` #### [Atomic memory accesses](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#atomic-memory-accesses) 🦄 * Module#i32.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load32_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store32**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` 
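
Before the read-modify-write operators, here is a minimal sketch (not part of the original reference) showing how the builders above compose: integer operators, `call`, and the linear-memory load/store accessors, assembled into a module that is validated and printed as text. It assumes the `binaryen` npm package plus the module-level helpers (`addFunction`, `addFunctionExport`, `setMemory`, `createType`) described elsewhere in this README; treat the exact signatures as illustrative rather than authoritative.

```js
// Minimal sketch: build `add(a, b)` plus a memory round-trip, then print the text.
var binaryen = require("binaryen");

var myModule = new binaryen.Module();
myModule.setMemory(1, 1, "memory"); // one 64KiB page, exported as "memory"

var ii = binaryen.createType([binaryen.i32, binaryen.i32]);

// (func $add (param i32 i32) (result i32) (i32.add (local.get 0) (local.get 1)))
myModule.addFunction("add", ii, binaryen.i32, [],
  myModule.i32.add(
    myModule.local.get(0, binaryen.i32),
    myModule.local.get(1, binaryen.i32)
  )
);
myModule.addFunctionExport("add", "add");

// (func $bump (result i32)): load the i32 at address 0, add 1 via a call to $add,
// store it back, and return the new value (local 0 is a scratch var, not a param)
myModule.addFunction("bump", binaryen.none, binaryen.i32, [binaryen.i32],
  myModule.block(null, [
    myModule.local.set(0,
      myModule.call("add", [
        myModule.i32.load(0, 4, myModule.i32.const(0)),
        myModule.i32.const(1)
      ], binaryen.i32)
    ),
    myModule.i32.store(0, 4, myModule.i32.const(0), myModule.local.get(0, binaryen.i32)),
    myModule.local.get(0, binaryen.i32)
  ], binaryen.i32)
);
myModule.addFunctionExport("bump", "bump");

if (!myModule.validate()) throw new Error("validation failed");
console.log(myModule.emitText());
```

Printing the s-expression text right after building is a cheap sanity check that the expression tree was wired the way you intended.
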
#### [Atomic read-modify-write operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#read-modify-write) 🦄 * Module#i32.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.add**(offset: 
`number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` #### [Atomic wait and notify operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#wait-and-notify-operators) 🦄 * Module#memory.**atomic.wait32**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#memory.**atomic.wait64**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#memory**atomic.notify**(ptr: `ExpressionRef`, notifyCount: `ExpressionRef`): `ExpressionRef` * Module#**atomic.fence**(): `ExpressionRef` #### [Sign extension operations](https://github.com/WebAssembly/sign-extension-ops/blob/master/proposals/sign-extension-ops/Overview.md) 🦄 * Module#i32.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend32_s**(value: `ExpressionRef`): `ExpressionRef` 
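
As a companion sketch for the atomic operators above, here is a shared-memory counter built with `i32.atomic.rmw.add`. The feature flag (`binaryen.Features.Atomics`), the `setFeatures` call, and the trailing `shared` argument to `setMemory` are assumptions about binaryen.js that are not shown in this excerpt, so verify them against your binaryen version before relying on them.

```js
var binaryen = require("binaryen");

var myModule = new binaryen.Module();
myModule.setFeatures(binaryen.Features.Atomics); // assumption: feature flags are exposed like this
myModule.setMemory(1, 1, "memory", [], true);    // assumption: trailing `true` marks the memory as shared

// i32.atomic.rmw.add(offset, ptr, value): fetch-and-add 1 at address 0, returning the old value
myModule.addFunction("increment", binaryen.none, binaryen.i32, [],
  myModule.i32.atomic.rmw.add(0, myModule.i32.const(0), myModule.i32.const(1))
);
myModule.addFunctionExport("increment", "increment");

console.log(myModule.emitText());
```
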
#### [Multi-value operations](https://github.com/WebAssembly/multi-value/blob/master/proposals/multi-value/Overview.md) 🦄

Note that these are pseudo instructions enabling Binaryen to reason about multiple values on the stack.

* Module#**push**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32.**pop**(): `ExpressionRef`
* Module#i64.**pop**(): `ExpressionRef`
* Module#f32.**pop**(): `ExpressionRef`
* Module#f64.**pop**(): `ExpressionRef`
* Module#v128.**pop**(): `ExpressionRef`
* Module#funcref.**pop**(): `ExpressionRef`
* Module#anyref.**pop**(): `ExpressionRef`
* Module#nullref.**pop**(): `ExpressionRef`
* Module#exnref.**pop**(): `ExpressionRef`
* Module#tuple.**make**(elements: `ExpressionRef[]`): `ExpressionRef`
* Module#tuple.**extract**(tuple: `ExpressionRef`, index: `number`): `ExpressionRef`

#### [Exception handling operations](https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md) 🦄

* Module#**try**(name: `string`, body: `ExpressionRef`, catchTags: `string[]`, catchBodies: `ExpressionRef[]`, delegateTarget: `string`): `ExpressionRef`
* Module#**throw**(event: `string`, operands: `ExpressionRef[]`): `ExpressionRef`
* Module#**rethrow**(exnref: `ExpressionRef`): `ExpressionRef`
* Module#**br_on_exn**(label: `string`, event: `string`, exnref: `ExpressionRef`): `ExpressionRef`

>

* Module#**addEvent**(name: `string`, attribute: `number`, params: `Type`, results: `Type`): `Event`
* Module#**getEvent**(name: `string`): `Event`
* Module#**removeEvent**(name: `string`): `void`
* Module#**addEventImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, attribute: `number`, params: `Type`, results: `Type`): `void`
* Module#**addEventExport**(internalName: `string`, externalName: `string`): `ExportRef`

#### [Reference types operations](https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md) 🦄

* Module#ref.**null**(): `ExpressionRef`
* Module#ref.**is_null**(value: `ExpressionRef`): `ExpressionRef`
* Module#ref.**func**(name: `string`): `ExpressionRef`

#### [Bulk memory operations](https://github.com/WebAssembly/bulk-memory-operations/blob/master/proposals/bulk-memory-operations/Overview.md) 🦄

* Module#memory.**init**(segment: `number`, dest: `ExpressionRef`, offset: `ExpressionRef`, size: `ExpressionRef`): `ExpressionRef`
* Module#memory.**copy**(dest: `ExpressionRef`, source: `ExpressionRef`, size: `ExpressionRef`): `ExpressionRef`
* Module#memory.**fill**(dest: `ExpressionRef`, value: `ExpressionRef`, size: `ExpressionRef`): `ExpressionRef`

### Expression manipulation

* **getExpressionId**(expr: `ExpressionRef`): `ExpressionId`<br />
  Gets the id (kind) of the specified expression.
Possible values are: * **InvalidId**: `ExpressionId` * **BlockId**: `ExpressionId` * **IfId**: `ExpressionId` * **LoopId**: `ExpressionId` * **BreakId**: `ExpressionId` * **SwitchId**: `ExpressionId` * **CallId**: `ExpressionId` * **CallIndirectId**: `ExpressionId` * **LocalGetId**: `ExpressionId` * **LocalSetId**: `ExpressionId` * **GlobalGetId**: `ExpressionId` * **GlobalSetId**: `ExpressionId` * **LoadId**: `ExpressionId` * **StoreId**: `ExpressionId` * **ConstId**: `ExpressionId` * **UnaryId**: `ExpressionId` * **BinaryId**: `ExpressionId` * **SelectId**: `ExpressionId` * **DropId**: `ExpressionId` * **ReturnId**: `ExpressionId` * **HostId**: `ExpressionId` * **NopId**: `ExpressionId` * **UnreachableId**: `ExpressionId` * **AtomicCmpxchgId**: `ExpressionId` * **AtomicRMWId**: `ExpressionId` * **AtomicWaitId**: `ExpressionId` * **AtomicNotifyId**: `ExpressionId` * **AtomicFenceId**: `ExpressionId` * **SIMDExtractId**: `ExpressionId` * **SIMDReplaceId**: `ExpressionId` * **SIMDShuffleId**: `ExpressionId` * **SIMDTernaryId**: `ExpressionId` * **SIMDShiftId**: `ExpressionId` * **SIMDLoadId**: `ExpressionId` * **MemoryInitId**: `ExpressionId` * **DataDropId**: `ExpressionId` * **MemoryCopyId**: `ExpressionId` * **MemoryFillId**: `ExpressionId` * **RefNullId**: `ExpressionId` * **RefIsNullId**: `ExpressionId` * **RefFuncId**: `ExpressionId` * **TryId**: `ExpressionId` * **ThrowId**: `ExpressionId` * **RethrowId**: `ExpressionId` * **BrOnExnId**: `ExpressionId` * **PushId**: `ExpressionId` * **PopId**: `ExpressionId` * **getExpressionType**(expr: `ExpressionRef`): `Type`<br /> Gets the type of the specified expression. * **getExpressionInfo**(expr: `ExpressionRef`): `ExpressionInfo`<br /> Obtains information about an expression, always including: * Info#**id**: `ExpressionId` * Info#**type**: `Type` Additional properties depend on the expression's `id` and are usually equivalent to the respective parameters when creating such an expression: * BlockInfo#**name**: `string` * BlockInfo#**children**: `ExpressionRef[]` > * IfInfo#**condition**: `ExpressionRef` * IfInfo#**ifTrue**: `ExpressionRef` * IfInfo#**ifFalse**: `ExpressionRef | null` > * LoopInfo#**name**: `string` * LoopInfo#**body**: `ExpressionRef` > * BreakInfo#**name**: `string` * BreakInfo#**condition**: `ExpressionRef | null` * BreakInfo#**value**: `ExpressionRef | null` > * SwitchInfo#**names**: `string[]` * SwitchInfo#**defaultName**: `string | null` * SwitchInfo#**condition**: `ExpressionRef` * SwitchInfo#**value**: `ExpressionRef | null` > * CallInfo#**target**: `string` * CallInfo#**operands**: `ExpressionRef[]` > * CallImportInfo#**target**: `string` * CallImportInfo#**operands**: `ExpressionRef[]` > * CallIndirectInfo#**target**: `ExpressionRef` * CallIndirectInfo#**operands**: `ExpressionRef[]` > * LocalGetInfo#**index**: `number` > * LocalSetInfo#**isTee**: `boolean` * LocalSetInfo#**index**: `number` * LocalSetInfo#**value**: `ExpressionRef` > * GlobalGetInfo#**name**: `string` > * GlobalSetInfo#**name**: `string` * GlobalSetInfo#**value**: `ExpressionRef` > * LoadInfo#**isAtomic**: `boolean` * LoadInfo#**isSigned**: `boolean` * LoadInfo#**offset**: `number` * LoadInfo#**bytes**: `number` * LoadInfo#**align**: `number` * LoadInfo#**ptr**: `ExpressionRef` > * StoreInfo#**isAtomic**: `boolean` * StoreInfo#**offset**: `number` * StoreInfo#**bytes**: `number` * StoreInfo#**align**: `number` * StoreInfo#**ptr**: `ExpressionRef` * StoreInfo#**value**: `ExpressionRef` > * ConstInfo#**value**: `number | { low: number, high: number 
}` > * UnaryInfo#**op**: `number` * UnaryInfo#**value**: `ExpressionRef` > * BinaryInfo#**op**: `number` * BinaryInfo#**left**: `ExpressionRef` * BinaryInfo#**right**: `ExpressionRef` > * SelectInfo#**ifTrue**: `ExpressionRef` * SelectInfo#**ifFalse**: `ExpressionRef` * SelectInfo#**condition**: `ExpressionRef` > * DropInfo#**value**: `ExpressionRef` > * ReturnInfo#**value**: `ExpressionRef | null` > * NopInfo > * UnreachableInfo > * HostInfo#**op**: `number` * HostInfo#**nameOperand**: `string | null` * HostInfo#**operands**: `ExpressionRef[]` > * AtomicRMWInfo#**op**: `number` * AtomicRMWInfo#**bytes**: `number` * AtomicRMWInfo#**offset**: `number` * AtomicRMWInfo#**ptr**: `ExpressionRef` * AtomicRMWInfo#**value**: `ExpressionRef` > * AtomicCmpxchgInfo#**bytes**: `number` * AtomicCmpxchgInfo#**offset**: `number` * AtomicCmpxchgInfo#**ptr**: `ExpressionRef` * AtomicCmpxchgInfo#**expected**: `ExpressionRef` * AtomicCmpxchgInfo#**replacement**: `ExpressionRef` > * AtomicWaitInfo#**ptr**: `ExpressionRef` * AtomicWaitInfo#**expected**: `ExpressionRef` * AtomicWaitInfo#**timeout**: `ExpressionRef` * AtomicWaitInfo#**expectedType**: `Type` > * AtomicNotifyInfo#**ptr**: `ExpressionRef` * AtomicNotifyInfo#**notifyCount**: `ExpressionRef` > * AtomicFenceInfo > * SIMDExtractInfo#**op**: `Op` * SIMDExtractInfo#**vec**: `ExpressionRef` * SIMDExtractInfo#**index**: `ExpressionRef` > * SIMDReplaceInfo#**op**: `Op` * SIMDReplaceInfo#**vec**: `ExpressionRef` * SIMDReplaceInfo#**index**: `ExpressionRef` * SIMDReplaceInfo#**value**: `ExpressionRef` > * SIMDShuffleInfo#**left**: `ExpressionRef` * SIMDShuffleInfo#**right**: `ExpressionRef` * SIMDShuffleInfo#**mask**: `Uint8Array` > * SIMDTernaryInfo#**op**: `Op` * SIMDTernaryInfo#**a**: `ExpressionRef` * SIMDTernaryInfo#**b**: `ExpressionRef` * SIMDTernaryInfo#**c**: `ExpressionRef` > * SIMDShiftInfo#**op**: `Op` * SIMDShiftInfo#**vec**: `ExpressionRef` * SIMDShiftInfo#**shift**: `ExpressionRef` > * SIMDLoadInfo#**op**: `Op` * SIMDLoadInfo#**offset**: `number` * SIMDLoadInfo#**align**: `number` * SIMDLoadInfo#**ptr**: `ExpressionRef` > * MemoryInitInfo#**segment**: `number` * MemoryInitInfo#**dest**: `ExpressionRef` * MemoryInitInfo#**offset**: `ExpressionRef` * MemoryInitInfo#**size**: `ExpressionRef` > * MemoryDropInfo#**segment**: `number` > * MemoryCopyInfo#**dest**: `ExpressionRef` * MemoryCopyInfo#**source**: `ExpressionRef` * MemoryCopyInfo#**size**: `ExpressionRef` > * MemoryFillInfo#**dest**: `ExpressionRef` * MemoryFillInfo#**value**: `ExpressionRef` * MemoryFillInfo#**size**: `ExpressionRef` > * TryInfo#**body**: `ExpressionRef` * TryInfo#**catchBody**: `ExpressionRef` > * RefNullInfo > * RefIsNullInfo#**value**: `ExpressionRef` > * RefFuncInfo#**func**: `string` > * ThrowInfo#**event**: `string` * ThrowInfo#**operands**: `ExpressionRef[]` > * RethrowInfo#**exnref**: `ExpressionRef` > * BrOnExnInfo#**name**: `string` * BrOnExnInfo#**event**: `string` * BrOnExnInfo#**exnref**: `ExpressionRef` > * PopInfo > * PushInfo#**value**: `ExpressionRef` * **emitText**(expression: `ExpressionRef`): `string`<br /> Emits the expression in Binaryen's s-expression text format (not official stack-style text format). * **copyExpression**(expression: `ExpressionRef`): `ExpressionRef`<br /> Creates a deep copy of an expression. ### Relooper * new **Relooper**()<br /> Constructs a relooper instance. This lets you provide an arbitrary CFG, and the relooper will structure it for WebAssembly. 
* Relooper#**addBlock**(code: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block to the CFG, containing the provided code as its body. * Relooper#**addBranch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, condition: `ExpressionRef`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block to another block, with a condition (or nothing, if this is the default branch to take from the origin - each block must have one such branch), and optional code to execute on the branch (useful for phis). * Relooper#**addBlockWithSwitch**(code: `ExpressionRef`, condition: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block, which ends with a switch/br_table, with provided code and condition (that determines where we go in the switch). * Relooper#**addBranchForSwitch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, indexes: `number[]`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block ending in a switch, to another block, using an array of indexes that determine where to go, and optional code to execute on the branch. * Relooper#**renderAndDispose**(entry: `RelooperBlockRef`, labelHelper: `number`, module: `Module`): `ExpressionRef`<br /> Renders and cleans up the Relooper instance. Call this after you have created all the blocks and branches, giving it the entry block (where control flow begins), a label helper variable (an index of a local we can use, necessary for irreducible control flow), and the module. This returns an expression - normal WebAssembly code - that you can use normally anywhere. ### Source maps * Module#**addDebugInfoFileName**(filename: `string`): `number`<br /> Adds a debug info file name to the module and returns its index. * Module#**getDebugInfoFileName**(index: `number`): `string | null` <br /> Gets the name of the debug info file at the specified index. * Module#**setDebugLocation**(func: `FunctionRef`, expr: `ExpressionRef`, fileIndex: `number`, lineNumber: `number`, columnNumber: `number`): `void`<br /> Sets the debug location of the specified `ExpressionRef` within the specified `FunctionRef`. ### Debugging * Module#**interpret**(): `void`<br /> Runs the module in the interpreter, calling the start function. # registry-auth-token [![npm version](http://img.shields.io/npm/v/registry-auth-token.svg?style=flat-square)](http://browsenpm.org/package/registry-auth-token)[![Build Status](http://img.shields.io/travis/rexxars/registry-auth-token/main.svg?style=flat-square)](https://travis-ci.org/rexxars/registry-auth-token) Get the auth token set for an npm registry from `.npmrc`. Also allows fetching the configured registry URL for a given npm scope. ## Installing ``` npm install --save registry-auth-token ``` ## Usage Returns an object containing `token` and `type`, or `undefined` if no token can be found. `type` can be either `Bearer` or `Basic`. 
```js
var getAuthToken = require('registry-auth-token')
var getRegistryUrl = require('registry-auth-token/registry-url')

// Get auth token and type for default `registry` set in `.npmrc`
console.log(getAuthToken()) // {token: 'someToken', type: 'Bearer'}

// Get auth token for a specific registry URL
console.log(getAuthToken('//registry.foo.bar'))

// Find the registry auth token for a given URL (with deep path):
// If registry is at `//some.host/registry`
// URL passed is `//some.host/registry/deep/path`
// Will find the token for the closest matching path; `//some.host/registry`
console.log(getAuthToken('//some.host/registry/deep/path', {recursive: true}))

// Find the configured registry url for scope `@foobar`.
// Falls back to the global registry if not defined.
console.log(getRegistryUrl('@foobar'))

// Use the npm config that is passed in
console.log(getRegistryUrl('http://registry.foobar.eu/', {
  npmrc: {
    'registry': 'http://registry.foobar.eu/',
    '//registry.foobar.eu/:_authToken': 'qar'
  }
}))
```

## Return value

```js
// If auth info can be found:
{token: 'someToken', type: 'Bearer'}

// Or:
{token: 'someOtherToken', type: 'Basic'}

// Or, if nothing is found:
undefined
```

## Security

Please be careful when using this. Leaking your auth token is dangerous.

## License

MIT-licensed. See LICENSE.

<h1 align=center>
  <a href="http://chaijs.com" title="Chai Documentation">
    <img alt="ChaiJS" src="http://chaijs.com/img/chai-logo.png"/>
    type-detect
  </a>
</h1>
<br>
<p align=center>
  Improved typeof detection for <a href="http://nodejs.org">node</a> and the browser.
</p>
<p align=center>
  <a href="./LICENSE">
    <img alt="license:mit" src="https://img.shields.io/badge/license-mit-green.svg?style=flat-square" />
  </a>
  <a href="https://github.com/chaijs/type-detect/releases">
    <img alt="tag:?" src="https://img.shields.io/github/tag/chaijs/type-detect.svg?style=flat-square" />
  </a>
  <a href="https://travis-ci.org/chaijs/type-detect">
    <img alt="build:?" src="https://img.shields.io/travis/chaijs/type-detect/master.svg?style=flat-square" />
  </a>
  <a href="https://coveralls.io/r/chaijs/type-detect">
    <img alt="coverage:?" src="https://img.shields.io/coveralls/chaijs/type-detect/master.svg?style=flat-square" />
  </a>
  <a href="https://www.npmjs.com/packages/type-detect">
    <img alt="npm:?" src="https://img.shields.io/npm/v/type-detect.svg?style=flat-square" />
  </a>
  <a href="https://www.npmjs.com/packages/type-detect">
    <img alt="dependencies:?" src="https://img.shields.io/npm/dm/type-detect.svg?style=flat-square" />
  </a>
  <a href="">
    <img alt="devDependencies:?"
src="https://img.shields.io/david/chaijs/type-detect.svg?style=flat-square" /> </a> <br/> <table> <tr><th colspan=6>Supported Browsers</th></tr> <tr> <th align=center><img src="https://camo.githubusercontent.com/ab586f11dfcb49bf5f2c2fa9adadc5e857de122a/687474703a2f2f73766773686172652e636f6d2f692f3278532e737667" alt=""> Chrome</th> <th align=center><img src="https://camo.githubusercontent.com/98cca3108c18dcfaa62667b42046540c6822cdac/687474703a2f2f73766773686172652e636f6d2f692f3279352e737667" alt=""> Edge</th> <th align=center><img src="https://camo.githubusercontent.com/acdcb09840a9e1442cbaf1b684f95ab3c3f41cf4/687474703a2f2f73766773686172652e636f6d2f692f3279462e737667" alt=""> Firefox</th> <th align=center><img src="https://camo.githubusercontent.com/728f8cb0bee9ed58ab85e39266f1152c53e0dffd/687474703a2f2f73766773686172652e636f6d2f692f3278342e737667" alt=""> Safari</th> <th align=center><img src="https://camo.githubusercontent.com/96a2317034dee0040d0a762e7a30c3c650c45aac/687474703a2f2f73766773686172652e636f6d2f692f3279532e737667" alt=""> IE</th> </tr><tr> <td align=center>✅</td> <td align=center>✅</td> <td align=center>✅</td> <td align=center>✅</td> <td align=center>9, 10, 11</td> </tr> </table> <br> <a href="https://chai-slack.herokuapp.com/"> <img alt="Join the Slack chat" src="https://img.shields.io/badge/slack-join%20chat-E2206F.svg?style=flat-square" /> </a> <a href="https://gitter.im/chaijs/chai"> <img alt="Join the Gitter chat" src="https://img.shields.io/badge/gitter-join%20chat-D0104D.svg?style=flat-square" /> </a> </p> ## What is Type-Detect? Type Detect is a module which you can use to detect the type of a given object. It returns a string representation of the object's type, either using [`typeof`](http://www.ecma-international.org/ecma-262/6.0/index.html#sec-typeof-operator) or [`@@toStringTag`](http://www.ecma-international.org/ecma-262/6.0/index.html#sec-symbol.tostringtag). It also normalizes some object names for consistency among browsers. ## Why? The `typeof` operator will only specify primitive values; everything else is `"object"` (including `null`, arrays, regexps, etc). Many developers use `Object.prototype.toString()` - which is a fine alternative and returns many more types (null returns `[object Null]`, Arrays as `[object Array]`, regexps as `[object RegExp]` etc). Sadly, `Object.prototype.toString` is slow, and buggy. By slow - we mean it is slower than `typeof`. By buggy - we mean that some values (like Promises, the global object, iterators, dataviews, a bunch of HTML elements) all report different things in different browsers. `type-detect` fixes all of the shortcomings with `Object.prototype.toString`. We have extra code to speed up checks of JS and DOM objects, as much as 20-30x faster for some values. `type-detect` also fixes any consistencies with these objects. ## Installation ### Node.js `type-detect` is available on [npm](http://npmjs.org). To install it, type: $ npm install type-detect ### Browsers You can also use it within the browser; install via npm and use the `type-detect.js` file found within the download. For example: ```html <script src="./node_modules/type-detect/type-detect.js"></script> ``` ## Usage The primary export of `type-detect` is function that can serve as a replacement for `typeof`. The results of this function will be more specific than that of native `typeof`. 
```js
var type = require('type-detect');
```

#### array

```js
assert(type([]) === 'Array');
assert(type(new Array()) === 'Array');
```

#### regexp

```js
assert(type(/a-z/gi) === 'RegExp');
assert(type(new RegExp('a-z')) === 'RegExp');
```

#### function

```js
assert(type(function () {}) === 'function');
```

#### arguments

```js
(function () {
  assert(type(arguments) === 'Arguments');
})();
```

#### date

```js
assert(type(new Date) === 'Date');
```

#### number

```js
assert(type(1) === 'number');
assert(type(1.234) === 'number');
assert(type(-1) === 'number');
assert(type(-1.234) === 'number');
assert(type(Infinity) === 'number');
assert(type(NaN) === 'number');
assert(type(new Number(1)) === 'Number'); // note - the object version has a capital N
```

#### string

```js
assert(type('hello world') === 'string');
assert(type(new String('hello')) === 'String'); // note - the object version has a capital S
```

#### null

```js
assert(type(null) === 'null');
assert(type(undefined) !== 'null');
```

#### undefined

```js
assert(type(undefined) === 'undefined');
assert(type(null) !== 'undefined');
```

#### object

```js
var Noop = function () {};
assert(type({}) === 'Object');
assert(type(Noop) !== 'Object');
assert(type(new Noop) === 'Object');
assert(type(new Object) === 'Object');
```

#### ECMA6 Types

All new ECMAScript 2015 objects are also supported, such as Promises and Symbols:

```js
assert(type(new Map()) === 'Map');
assert(type(new WeakMap()) === 'WeakMap');
assert(type(new Set()) === 'Set');
assert(type(new WeakSet()) === 'WeakSet');
assert(type(Symbol()) === 'symbol');
assert(type(new Promise(callback)) === 'Promise');
assert(type(new Int8Array()) === 'Int8Array');
assert(type(new Uint8Array()) === 'Uint8Array');
assert(type(new Uint8ClampedArray()) === 'Uint8ClampedArray');
assert(type(new Int16Array()) === 'Int16Array');
assert(type(new Uint16Array()) === 'Uint16Array');
assert(type(new Int32Array()) === 'Int32Array');
assert(type(new Uint32Array()) === 'Uint32Array');
assert(type(new Float32Array()) === 'Float32Array');
assert(type(new Float64Array()) === 'Float64Array');
assert(type(new ArrayBuffer()) === 'ArrayBuffer');
assert(type(new DataView(arrayBuffer)) === 'DataView');
```

Also, if you use `Symbol.toStringTag` to change an Object's return value of the `toString()` method, `type()` will return this value, e.g.:

```js
var myObject = {};
myObject[Symbol.toStringTag] = 'myCustomType';
assert(type(myObject) === 'myCustomType');
```

# json-buffer

JSON functions that can convert buffers!

[![build status](https://secure.travis-ci.org/dominictarr/json-buffer.png)](http://travis-ci.org/dominictarr/json-buffer) [![testling badge](https://ci.testling.com/dominictarr/json-buffer.png)](https://ci.testling.com/dominictarr/json-buffer)

JSON mangles buffers by converting to an array... which isn't helpful. json-buffer converts to base64 instead, and converts base64 back to a buffer.

```js
var JSONB = require('json-buffer')
var Buffer = require('buffer').Buffer
var str = JSONB.stringify(new Buffer('hello there!'))
console.log(JSONB.parse(str)) //GET a BUFFER back
```

## License

MIT

# lru cache

A cache object that deletes the least-recently-used items.
[![Build Status](https://travis-ci.org/isaacs/node-lru-cache.svg?branch=master)](https://travis-ci.org/isaacs/node-lru-cache) [![Coverage Status](https://coveralls.io/repos/isaacs/node-lru-cache/badge.svg?service=github)](https://coveralls.io/github/isaacs/node-lru-cache) ## Installation: ```javascript npm install lru-cache --save ``` ## Usage: ```javascript var LRU = require("lru-cache") , options = { max: 500 , length: function (n, key) { return n * 2 + key.length } , dispose: function (key, n) { n.close() } , maxAge: 1000 * 60 * 60 } , cache = new LRU(options) , otherCache = new LRU(50) // sets just the max size cache.set("key", "value") cache.get("key") // "value" // non-string keys ARE fully supported // but note that it must be THE SAME object, not // just a JSON-equivalent object. var someObject = { a: 1 } cache.set(someObject, 'a value') // Object keys are not toString()-ed cache.set('[object Object]', 'a different value') assert.equal(cache.get(someObject), 'a value') // A similar object with same keys/values won't work, // because it's a different object identity assert.equal(cache.get({ a: 1 }), undefined) cache.reset() // empty the cache ``` If you put more stuff in it, then items will fall out. If you try to put an oversized thing in it, then it'll fall out right away. ## Options * `max` The maximum size of the cache, checked by applying the length function to all values in the cache. Not setting this is kind of silly, since that's the whole purpose of this lib, but it defaults to `Infinity`. Setting it to a non-number or negative number will throw a `TypeError`. Setting it to 0 makes it be `Infinity`. * `maxAge` Maximum age in ms. Items are not pro-actively pruned out as they age, but if you try to get an item that is too old, it'll drop it and return undefined instead of giving it to you. Setting this to a negative value will make everything seem old! Setting it to a non-number will throw a `TypeError`. * `length` Function that is used to calculate the length of stored items. If you're storing strings or buffers, then you probably want to do something like `function(n, key){return n.length}`. The default is `function(){return 1}`, which is fine if you want to store `max` like-sized things. The item is passed as the first argument, and the key is passed as the second argumnet. * `dispose` Function that is called on items when they are dropped from the cache. This can be handy if you want to close file descriptors or do other cleanup tasks when items are no longer accessible. Called with `key, value`. It's called *before* actually removing the item from the internal cache, so if you want to immediately put it back in, you'll have to do that in a `nextTick` or `setTimeout` callback or it won't do anything. * `stale` By default, if you set a `maxAge`, it'll only actually pull stale items out of the cache when you `get(key)`. (That is, it's not pre-emptively doing a `setTimeout` or anything.) If you set `stale:true`, it'll return the stale value before deleting it. If you don't set this, then it'll return `undefined` when you try to get a stale entry, as if it had already been deleted. * `noDisposeOnSet` By default, if you set a `dispose()` method, then it'll be called whenever a `set()` operation overwrites an existing key. If you set this option, `dispose()` will only be called when a key falls out of the cache, not when it is overwritten. 
* `updateAgeOnGet` When using time-expiring entries with `maxAge`, setting this to `true` will make each item's effective time update to the current time whenever it is retrieved from cache, causing it to not expire. (It can still fall out of cache based on recency of use, of course.) ## API * `set(key, value, maxAge)` * `get(key) => value` Both of these will update the "recently used"-ness of the key. They do what you think. `maxAge` is optional and overrides the cache `maxAge` option if provided. If the key is not found, `get()` will return `undefined`. The key and val can be any value. * `peek(key)` Returns the key value (or `undefined` if not found) without updating the "recently used"-ness of the key. (If you find yourself using this a lot, you *might* be using the wrong sort of data structure, but there are some use cases where it's handy.) * `del(key)` Deletes a key out of the cache. * `reset()` Clear the cache entirely, throwing away all values. * `has(key)` Check if a key is in the cache, without updating the recent-ness or deleting it for being stale. * `forEach(function(value,key,cache), [thisp])` Just like `Array.prototype.forEach`. Iterates over all the keys in the cache, in order of recent-ness. (Ie, more recently used items are iterated over first.) * `rforEach(function(value,key,cache), [thisp])` The same as `cache.forEach(...)` but items are iterated over in reverse order. (ie, less recently used items are iterated over first.) * `keys()` Return an array of the keys in the cache. * `values()` Return an array of the values in the cache. * `length` Return total length of objects in cache taking into account the `length` options function. * `itemCount` Return total quantity of objects currently in cache. Note that `stale` (see options) items are returned as part of this item count. * `dump()` Return an array of the cache entries ready for serialization and usage with `destinationCache.load(arr)`. * `load(cacheEntriesArray)` Loads another cache entries array, obtained with `sourceCache.dump()`, into the cache. The destination cache is reset before loading new entries. * `prune()` Manually iterates over the entire cache, proactively pruning old entries. # AssemblyScript Rtrace A tiny utility to sanitize the AssemblyScript runtime. Records allocations and frees performed by the runtime and emits an error if something is off. Also checks for leaks. Instructions ------------ Compile your module that uses the full or half runtime with `--use ASC_RTRACE=1 --explicitStart` and include an instance of this module as the import named `rtrace`. ```js const rtrace = new Rtrace({ onerror(err, info) { // handle error }, oninfo(msg) { // print message, optional }, getMemory() { // obtain the module's memory, // e.g. with --explicitStart: return instance.exports.memory; } }); const { module, instance } = await WebAssembly.instantiate(..., rtrace.install({ ...imports... }) ); instance.exports._start(); ... if (rtrace.active) { let leakCount = rtrace.check(); if (leakCount) { // handle error } } ``` Note that references in globals which are not cleared before collection is performed appear as leaks, including their inner members. A TypedArray would leak itself and its backing ArrayBuffer in this case for example. This is perfectly normal and clearing all globals avoids this. # randexp.js randexp will generate a random string that matches a given RegExp Javascript object. 
[![Build Status](https://secure.travis-ci.org/fent/randexp.js.svg)](http://travis-ci.org/fent/randexp.js) [![Dependency Status](https://david-dm.org/fent/randexp.js.svg)](https://david-dm.org/fent/randexp.js) [![codecov](https://codecov.io/gh/fent/randexp.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/randexp.js) # Usage ```js var RandExp = require('randexp'); // supports grouping and piping new RandExp(/hello+ (world|to you)/).gen(); // => hellooooooooooooooooooo world // sets and ranges and references new RandExp(/<([a-z]\w{0,20})>foo<\1>/).gen(); // => <m5xhdg>foo<m5xhdg> // wildcard new RandExp(/random stuff: .+/).gen(); // => random stuff: l3m;Hf9XYbI [YPaxV>U*4-_F!WXQh9>;rH3i l!8.zoh?[utt1OWFQrE ^~8zEQm]~tK // ignore case new RandExp(/xxx xtreme dragon warrior xxx/i).gen(); // => xxx xtReME dRAGON warRiOR xXX // dynamic regexp shortcut new RandExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i'); // is the same as new RandExp(new RegExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i')); ``` If you're only going to use `gen()` once with a regexp and want slightly shorter syntax for it ```js var randexp = require('randexp').randexp; randexp(/[1-6]/); // 4 randexp('great|good( job)?|excellent'); // great ``` If you miss the old syntax ```js require('randexp').sugar(); /yes|no|maybe|i don't know/.gen(); // maybe ``` # Motivation Regular expressions are used in every language, every programmer is familiar with them. Regex can be used to easily express complex strings. What better way to generate a random string than with a language you can use to express the string you want? Thanks to [String-Random](http://search.cpan.org/~steve/String-Random-0.22/lib/String/Random.pm) for giving me the idea to make this in the first place and [randexp](https://github.com/benburkert/randexp) for the sweet `.gen()` syntax. # Default Range The default generated character range includes printable ASCII. In order to add or remove characters, a `defaultRange` attribute is exposed. you can `subtract(from, to)` and `add(from, to)` ```js var randexp = new RandExp(/random stuff: .+/); randexp.defaultRange.subtract(32, 126); randexp.defaultRange.add(0, 65535); randexp.gen(); // => random stuff: 湐箻ໜ䫴␩⶛㳸長���邓蕲뤀쑡篷皇硬剈궦佔칗븛뀃匫鴔事좍ﯣ⭼ꝏ䭍詳蒂䥂뽭 ``` # Custom PRNG The default randomness is provided by `Math.random()`. If you need to use a seedable or cryptographic PRNG, you can override `RandExp.prototype.randInt` or `randexp.randInt` (where `randexp` is an instance of `RandExp`). `randInt(from, to)` accepts an inclusive range and returns a randomly selected number within that range. # Infinite Repetitionals Repetitional tokens such as `*`, `+`, and `{3,}` have an infinite max range. In this case, randexp looks at its min and adds 100 to it to get a useable max value. If you want to use another int other than 100 you can change the `max` property in `RandExp.prototype` or the RandExp instance. ```js var randexp = new RandExp(/no{1,}/); randexp.max = 1000000; ``` With `RandExp.sugar()` ```js var regexp = /(hi)*/; regexp.max = 1000000; ``` # Bad Regular Expressions There are some regular expressions which can never match any string. * Ones with badly placed positionals such as `/a^/` and `/$c/m`. Randexp will ignore positional tokens. * Back references to non-existing groups like `/(a)\1\2/`. Randexp will ignore those references, returning an empty string for them. If the group exists only after the reference is used such as in `/\1 (hey)/`, it will too be ignored. 
* Custom negated character sets with two sets inside that cancel each other out. Example: `/[^\w\W]/`. If you give this to randexp, it will return an empty string for this set since it can't match anything. # Projects based on randexp.js ## JSON-Schema Faker Use generators to populate JSON Schema samples. See: [jsf on github](https://github.com/json-schema-faker/json-schema-faker/) and [jsf demo page](http://json-schema-faker.js.org/). # Install ### Node.js npm install randexp ### Browser Download the [minified version](https://github.com/fent/randexp.js/releases) from the latest release. # Tests Tests are written with [mocha](https://mochajs.org) ```bash npm test ``` # License MIT # file-entry-cache > Super simple cache for file metadata, useful for processes that work on a given series of files and that only need to repeat the job on the changed ones since the previous run of the process [![NPM Version](http://img.shields.io/npm/v/file-entry-cache.svg?style=flat)](https://npmjs.org/package/file-entry-cache) [![Build Status](http://img.shields.io/travis/royriojas/file-entry-cache.svg?style=flat)](https://travis-ci.org/royriojas/file-entry-cache) ## install ```bash npm i --save file-entry-cache ``` ## Usage The module exposes two functions `create` and `createFromFile`. ## `create(cacheName, [directory, useCheckSum])` - **cacheName**: the name of the cache to be created - **directory**: Optional, the directory to load the cache from - **useCheckSum**: Whether to use an md5 checksum to verify if the file changed. If false, the default will be to use the mtime and size of the file. ## `createFromFile(pathToCache, [useCheckSum])` - **pathToCache**: the path to the cache file (this combines the cache name and directory) - **useCheckSum**: Whether to use an md5 checksum to verify if the file changed. If false, the default will be to use the mtime and size of the file. ```js // loads the cache, if one does not exist for the given // Id a new one will be prepared to be created var fileEntryCache = require('file-entry-cache'); var cache = fileEntryCache.create('testCache'); var files = expand('../fixtures/*.txt'); // the first time this method is called, it will return all the files var oFiles = cache.getUpdatedFiles(files); // this will persist this to disk checking each file stats and // updating the meta attributes `size` and `mtime`. // custom fields could also be added to the meta object and will be persisted // in order to retrieve them later cache.reconcile(); // use this if you want the non visited file entries to be kept in the cache // for more than one execution // // cache.reconcile( true /* noPrune */) // on a second run var cache2 = fileEntryCache.create('testCache'); // will return now only the files that were modified or none // if no files were modified previous to the execution of this function var oFiles = cache2.getUpdatedFiles(files); // if you want to prevent a file from being considered non modified // something useful if a file failed some sort of validation // you can then remove the entry from the cache doing cache.removeEntry('path/to/file'); // path to file should be the same path of the file received on `getUpdatedFiles` // that will effectively make the file appear again as modified until the validation is passed. 
// In that case you should not remove it from the cache // if you need all the files, so you can determine what to do with the changed ones // you can call var oFiles = cache.normalizeEntries(files); // oFiles will be an array of objects like the following entry = { key: 'some/name/file', // the path to the file changed: true, // if the file was changed since previous run meta: { size: 3242, // the size of the file mtime: 231231231, // the modification time of the file data: {} // some extra field stored for this file (useful to save the result of a transformation on the file) } } ``` ## Motivation for this module I needed a super simple and dumb **in-memory cache** with optional disk persistence (write-back cache) in order to make a script that will beautify files with `esformatter` to execute only on the files that were changed since the last run. In doing so the process of beautifying files was reduced from several seconds to a small fraction of a second. This module uses [flat-cache](https://www.npmjs.com/package/flat-cache), a super simple `key/value` cache storage with optional file persistence. The main idea is to read the files when the task begins, apply the transforms required, and if the process succeeds, then store the new state of the files. The next time `getChangedFiles` is requested, this module will return only the files that were modified, making the process end faster. This module could also be used by processes that modify the files applying a transform; in that case the result of the transform could be stored in the `meta` field of the entries. Anything added to the meta field will be persisted. Those processes won't need to call `getChangedFiles`; they will instead call `normalizeEntries`, which will return the entries with a `changed` field that can be used to determine if the file was changed or not. If it was not changed, the stored transformed data could be used instead of actually applying the transformation, saving time when only a few files have changed. In the worst case scenario all the files will be processed. In the best case scenario only a few of them will be processed. ## Important notes - The values set on the meta attribute of the entries should be `stringify-able` ones if possible; flat-cache uses `circular-json` to try to persist circular structures, but this should be considered experimental. The best results are always obtained with non-circular values. - All the changes to the cache state are done in memory first and only persisted after `reconcile()`. ## License MIT # yargs-parser ![ci](https://github.com/yargs/yargs-parser/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/yargs-parser) The mighty option parser used by [yargs](https://github.com/yargs/yargs). Visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. 
<img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/main/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js const argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```console $ node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js const argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```console { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js const parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## Deno Example As of `v19` `yargs-parser` supports [Deno](https://github.com/denoland/deno): ```typescript import parser from "https://deno.land/x/yargs_parser/deno.ts"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` ## ESM Example As of `v19` `yargs-parser` supports ESM (_both in Node.js and in the browser_): **Node.js:** ```js import parser from 'yargs-parser' const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` **Browsers:** ```html <!doctype html> <body> <script type="module"> import parser from "https://unpkg.com/yargs-parser@19.0.0/browser.js"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) </script> </body> ``` ## API ### parser(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. 
* `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```console $ node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```console $ node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```console $ node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```console $ node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```console $ node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```console $ node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```console $ node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```console $ node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### parse positional numbers * default: `true` * key: `parse-positional-numbers` Should positional keys that look like numbers be treated as such. ```console $ node example.js 99.3 { _: [99.3] } ``` _if disabled:_ ```console $ node example.js 99.3 { _: ['99.3'] } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```console $ node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```console $ node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. 
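Since `combine-arrays` only matters when a key is supplied both on the command line and via configuration, here is a minimal, hypothetical sketch of the behaviour (the key name, values, and exact merged output are illustrative assumptions, not taken from the yargs docs):

```js
const parser = require('yargs-parser')

// `tag` is declared as an array option and is also present in a config object;
// with `combine-arrays: true` the values from both sources are merged.
const argv = parser('--tag from-cli', {
  array: ['tag'],
  configObjects: [{ tag: ['from-config'] }],
  configuration: { 'combine-arrays': true }
})

console.log(argv.tag) // expected to contain both 'from-config' and 'from-cli'
```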
### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```console $ node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```console $ node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```console $ node example --arr 1 2 { _: [], arr: [1, 2] } ``` _if disabled:_ ```console $ node example --arr 1 2 { _: [2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```console $ node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```console $ node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```console $ node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```console $ node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```console $ node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```console $ node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. 
_If disabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC are-we-there-yet ---------------- Track complex hierarchies of asynchronous task completion statuses. This is intended to give you a way of recording and reporting the progress of the big recursive fan-out and gather type workflows that are so common in async. What you do with this completion data is up to you, but the most common use case is to feed it to one of the many progress bar modules. Most progress bar modules include a rudimentary version of this, but my needs were more complex. Usage ===== ```javascript var TrackerGroup = require("are-we-there-yet").TrackerGroup var top = new TrackerGroup("program") var single = top.newItem("one thing", 100) single.completeWork(20) console.log(top.completed()) // 0.2 fs.stat("file", function(er, stat) { if (er) throw er var stream = top.newStream("file", stat.size) console.log(top.completed()) // now 0.1 as single is 50% of the job and is 20% complete // and 50% * 20% == 10% fs.createReadStream("file").pipe(stream).on("data", function (chunk) { // do stuff with chunk }) top.on("change", function (name) { // called each time a chunk is read from "file" // top.completed() will start at 0.1 and fill up to 0.6 as the file is read }) }) ``` Shared Methods ============== * var completed = tracker.completed() Implemented in: `Tracker`, `TrackerGroup`, `TrackerStream` Returns the ratio of completed work to work to be done. Range of 0 to 1. * tracker.finish() Implemented in: `Tracker`, `TrackerGroup` Marks the tracker as completed. With a TrackerGroup this marks all of its components as completed. Marks all of the components of this tracker as finished, which in turn means that `tracker.completed()` for this will now be 1. This will result in one or more `change` events being emitted. Events ====== All tracker objects emit `change` events with the following arguments: ``` function (name, completed, tracker) ``` `name` is the name of the tracker that originally emitted the event, or if it didn't have one, the first containing tracker group that had one. `completed` is the percent complete (as returned by `tracker.completed()` method). `tracker` is the tracker object that you are listening for events on. TrackerGroup ============ * var tracker = new TrackerGroup(**name**) * **name** *(optional)* - The name of this tracker group, used in change notifications if the component updating didn't have a name. Defaults to undefined. Creates a new empty tracker aggregation group. These are trackers whose completion status is determined by the completion status of other trackers. 
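As a quick illustration of how a group's completion is derived from its members (a minimal sketch; the item names and work amounts are made up, following the same arithmetic as the Usage example above):

```javascript
var TrackerGroup = require("are-we-there-yet").TrackerGroup

var group = new TrackerGroup("build")
var compile = group.newItem("compile", 200) // weight defaults to 1
var link = group.newItem("link", 100)       // weight defaults to 1

compile.completeWork(100)      // compile is now 50% done, link is 0% done
console.log(group.completed()) // 0.25 -- half of one of two equally weighted units
```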
* tracker.addUnit(**otherTracker**, **weight**) * **otherTracker** - Any of the other are-we-there-yet tracker objects * **weight** *(optional)* - The weight to give the tracker, defaults to 1. Adds the **otherTracker** to this aggregation group. The weight determines how long you expect this tracker to take to complete in proportion to other units. So for instance, if you add one tracker with a weight of 1 and another with a weight of 2, you're saying the second will take twice as long to complete as the first. As such, the first will account for 33% of the completion of this tracker and the second will account for the other 67%. Returns **otherTracker**. * var subGroup = tracker.newGroup(**name**, **weight**) The above is exactly equivalent to: ```javascript var subGroup = tracker.addUnit(new TrackerGroup(name), weight) ``` * var subItem = tracker.newItem(**name**, **todo**, **weight**) The above is exactly equivalent to: ```javascript var subItem = tracker.addUnit(new Tracker(name, todo), weight) ``` * var subStream = tracker.newStream(**name**, **todo**, **weight**) The above is exactly equivalent to: ```javascript var subStream = tracker.addUnit(new TrackerStream(name, todo), weight) ``` * console.log( tracker.debug() ) Returns a tree showing the completion of this tracker group and all of its children, including recursively entering all of the children. Tracker ======= * var tracker = new Tracker(**name**, **todo**) * **name** *(optional)* The name of this counter to report in change events. Defaults to undefined. * **todo** *(optional)* The amount of work todo (a number). Defaults to 0. Ordinarily these are constructed as a part of a tracker group (via `newItem`). * var completed = tracker.completed() Returns the ratio of completed work to work to be done. Range of 0 to 1. If total work to be done is 0 then it will return 0. * tracker.addWork(**todo**) * **todo** A number to add to the amount of work to be done. Increases the amount of work to be done, thus decreasing the completion percentage. Triggers a `change` event. * tracker.completeWork(**completed**) * **completed** A number to add to the work complete Increase the amount of work complete, thus increasing the completion percentage. Will never increase the work completed past the amount of work todo. That is, percentages > 100% are not allowed. Triggers a `change` event. * tracker.finish() Marks this tracker as finished, tracker.completed() will now be 1. Triggers a `change` event. TrackerStream ============= * var tracker = new TrackerStream(**name**, **size**, **options**) * **name** *(optional)* The name of this counter to report in change events. Defaults to undefined. * **size** *(optional)* The number of bytes being sent through this stream. * **options** *(optional)* A hash of stream options The tracker stream object is a pass through stream that updates an internal tracker object each time a block passes through. It's intended to track downloads, file extraction and other related activities. You use it by piping your data source into it and then using it as your data source. If your data has a length attribute then that's used as the amount of work completed when the chunk is passed through. If it does not (eg, object streams) then each chunk counts as completing 1 unit of work, so your size should be the total number of objects being streamed. * tracker.addWork(**todo**) * **todo** Increase the expected overall size by **todo** bytes. Increases the amount of work to be done, thus decreasing the completion percentage. 
Triggers a `change` event. Like `chown -R`. Takes the same arguments as `fs.chown()` # napi-build-utils [![npm](https://img.shields.io/npm/v/napi-build-utils.svg)](https://www.npmjs.com/package/napi-build-utils) ![Node version](https://img.shields.io/node/v/prebuild.svg) [![Build Status](https://travis-ci.org/inspiredware/napi-build-utils.svg?branch=master)](https://travis-ci.org/inspiredware/napi-build-utils) [![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg)](http://standardjs.com/) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) A set of utilities to assist developers of tools that build [N-API](https://nodejs.org/api/n-api.html#n_api_n_api) native add-ons. ## Background This module is targeted to developers creating tools that build N-API native add-ons. It implements a set of functions that aid in determining the N-API version supported by the currently running Node instance and the set of N-API versions against which the N-API native add-on is designed to be built. Other functions determine whether a particular N-API version can be built and can issue console warnings for unsupported N-API versions. Unlike the modules this code is designed to facilitate building, this module is written entirely in JavaScript. ## Quick start ```bash $ npm install napi-build-utils ``` The module exports a set of functions documented [here](./index.md). For example: ```javascript var napiBuildUtils = require('napi-build-utils'); var napiVersion = napiBuildUtils.getNapiVersion(); // N-API version supported by Node, or undefined. ``` ## Declaring supported N-API versions Native modules that are designed to work with [N-API](https://nodejs.org/api/n-api.html#n_api_n_api) must explicitly declare the N-API version(s) against which they are coded to build. This is accomplished by including a `binary.napi_versions` property in the module's `package.json` file. For example: ```json "binary": { "napi_versions": [2,3] } ``` In the absence of a need to compile against a specific N-API version, the value `3` is a good choice as this is the N-API version that was supported when N-API left experimental status. Modules that are built against a specific N-API version will continue to operate indefinitely, even as later versions of N-API are introduced. ## Support If you run into problems or limitations, please file an issue and we'll take a look. Pull requests are also welcome. # <img src="docs_app/assets/Rx_Logo_S.png" alt="RxJS Logo" width="86" height="86"> RxJS: Reactive Extensions For JavaScript [![CircleCI](https://circleci.com/gh/ReactiveX/rxjs/tree/6.x.svg?style=svg)](https://circleci.com/gh/ReactiveX/rxjs/tree/6.x) [![npm version](https://badge.fury.io/js/%40reactivex%2Frxjs.svg)](http://badge.fury.io/js/%40reactivex%2Frxjs) [![Join the chat at https://gitter.im/Reactive-Extensions/RxJS](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/Reactive-Extensions/RxJS?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) # RxJS 6 Stable ### MIGRATION AND RELEASE INFORMATION: Find out how to update to v6, **automatically update your TypeScript code**, and more! - [Current home is MIGRATION.md](./docs_app/content/guide/v6/migration.md) ### FOR V 5.X PLEASE GO TO [THE 5.0 BRANCH](https://github.com/ReactiveX/rxjs/tree/5.x) Reactive Extensions Library for JavaScript. 
This is a rewrite of [Reactive-Extensions/RxJS](https://github.com/Reactive-Extensions/RxJS) and is the latest production-ready version of RxJS. This rewrite is meant to have better performance, better modularity, better debuggable call stacks, while staying mostly backwards compatible, with some breaking changes that reduce the API surface. [Apache 2.0 License](LICENSE.txt) - [Code of Conduct](CODE_OF_CONDUCT.md) - [Contribution Guidelines](CONTRIBUTING.md) - [Maintainer Guidelines](doc_app/content/maintainer-guidelines.md) - [API Documentation](https://rxjs.dev/) ## Versions In This Repository - [master](https://github.com/ReactiveX/rxjs/commits/master) - This is all of the current, unreleased work, which is against v6 of RxJS right now - [stable](https://github.com/ReactiveX/rxjs/commits/stable) - This is the branch for the latest version you'd get if you do `npm install rxjs` ## Important By contributing or commenting on issues in this repository, whether you've read them or not, you're agreeing to the [Contributor Code of Conduct](CODE_OF_CONDUCT.md). Much like traffic laws, ignorance doesn't grant you immunity. ## Installation and Usage ### ES6 via npm ```sh npm install rxjs ``` It's recommended to pull in the Observable creation methods you need directly from `'rxjs'` as shown below with `range`. And you can pull in any operator you need from one spot, under `'rxjs/operators'`. ```ts import { range } from "rxjs"; import { map, filter } from "rxjs/operators"; range(1, 200) .pipe( filter(x => x % 2 === 1), map(x => x + x) ) .subscribe(x => console.log(x)); ``` Here, we're using the built-in `pipe` method on Observables to combine operators. See [pipeable operators](https://github.com/ReactiveX/rxjs/blob/master/doc/pipeable-operators.md) for more information. ### CommonJS via npm To install this library for CommonJS (CJS) usage, use the following command: ```sh npm install rxjs ``` (Note: destructuring available in Node 8+) ```js const { range } = require('rxjs'); const { map, filter } = require('rxjs/operators'); range(1, 200).pipe( filter(x => x % 2 === 1), map(x => x + x) ).subscribe(x => console.log(x)); ``` ### CDN For CDN, you can use [unpkg](https://unpkg.com/): https://unpkg.com/rxjs/bundles/rxjs.umd.min.js The global namespace for rxjs is `rxjs`: ```js const { range } = rxjs; const { map, filter } = rxjs.operators; range(1, 200) .pipe( filter(x => x % 2 === 1), map(x => x + x) ) .subscribe(x => console.log(x)); ``` ## Goals - Smaller overall bundles sizes - Provide better performance than preceding versions of RxJS - To model/follow the [Observable Spec Proposal](https://github.com/zenparsing/es-observable) to the observable - Provide more modular file structure in a variety of formats - Provide more debuggable call stacks than preceding versions of RxJS ## Building/Testing - `npm run build_all` - builds everything - `npm test` - runs tests - `npm run test_no_cache` - run test with `ts-node` set to false ## Performance Tests Run `npm run build_perf` or `npm run perf` to run the performance tests with `protractor`. Run `npm run perf_micro [operator]` to run micro performance test benchmarking operator. ## Adding documentation We appreciate all contributions to the documentation of any type. All of the information needed to get the docs app up and running locally as well as how to contribute can be found in the [documentation directory](./docs_app). 
## Generating PNG marble diagrams The script `npm run tests2png` requires some native packages installed locally: `imagemagick`, `graphicsmagick`, and `ghostscript`. For Mac OS X with [Homebrew](http://brew.sh/): - `brew install imagemagick` - `brew install graphicsmagick` - `brew install ghostscript` - You may need to install the Ghostscript fonts manually: - Download the tarball from the [gs-fonts project](https://sourceforge.net/projects/gs-fonts) - `mkdir -p /usr/local/share/ghostscript && tar zxvf /path/to/ghostscript-fonts.tar.gz -C /usr/local/share/ghostscript` For Debian Linux: - `sudo add-apt-repository ppa:dhor/myway` - `apt-get install imagemagick` - `apt-get install graphicsmagick` - `apt-get install ghostscript` For Windows and other Operating Systems, check the download instructions here: - http://imagemagick.org - http://www.graphicsmagick.org - http://www.ghostscript.com/ # ci-info Get details about the current Continuous Integration environment. Please [open an issue](https://github.com/watson/ci-info/issues/new?template=ci-server-not-detected.md) if your CI server isn't properly detected :) [![npm](https://img.shields.io/npm/v/ci-info.svg)](https://www.npmjs.com/package/ci-info) [![Build status](https://travis-ci.org/watson/ci-info.svg?branch=master)](https://travis-ci.org/watson/ci-info) [![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg?style=flat)](https://github.com/feross/standard) ## Installation ```bash npm install ci-info --save ``` ## Usage ```js var ci = require('ci-info') if (ci.isCI) { console.log('The name of the CI server is:', ci.name) } else { console.log('This program is not running on a CI server') } ``` ## Supported CI tools Officially supported CI servers: | Name | Constant | isPR | |------|----------|------| | [AWS CodeBuild](https://aws.amazon.com/codebuild/) | `ci.CODEBUILD` | 🚫 | | [AppVeyor](http://www.appveyor.com) | `ci.APPVEYOR` | ✅ | | [Azure Pipelines](https://azure.microsoft.com/en-us/services/devops/pipelines/) | `ci.AZURE_PIPELINES` | ✅ | | [Bamboo](https://www.atlassian.com/software/bamboo) by Atlassian | `ci.BAMBOO` | 🚫 | | [Bitbucket Pipelines](https://bitbucket.org/product/features/pipelines) | `ci.BITBUCKET` | ✅ | | [Bitrise](https://www.bitrise.io/) | `ci.BITRISE` | ✅ | | [Buddy](https://buddy.works/) | `ci.BUDDY` | ✅ | | [Buildkite](https://buildkite.com) | `ci.BUILDKITE` | ✅ | | [CircleCI](http://circleci.com) | `ci.CIRCLE` | ✅ | | [Cirrus CI](https://cirrus-ci.org) | `ci.CIRRUS` | ✅ | | [Codeship](https://codeship.com) | `ci.CODESHIP` | 🚫 | | [Drone](https://drone.io) | `ci.DRONE` | ✅ | | [dsari](https://github.com/rfinnie/dsari) | `ci.DSARI` | 🚫 | | [GitLab CI](https://about.gitlab.com/gitlab-ci/) | `ci.GITLAB` | 🚫 | | [GoCD](https://www.go.cd/) | `ci.GOCD` | 🚫 | | [Hudson](http://hudson-ci.org) | `ci.HUDSON` | 🚫 | | [Jenkins CI](https://jenkins-ci.org) | `ci.JENKINS` | ✅ | | [Magnum CI](https://magnum-ci.com) | `ci.MAGNUM` | 🚫 | | [Netlify CI](https://www.netlify.com/) | `ci.NETLIFY` | ✅ | | [Sail CI](https://sail.ci/) | `ci.SAIL` | ✅ | | [Semaphore](https://semaphoreci.com) | `ci.SEMAPHORE` | ✅ | | [Shippable](https://www.shippable.com/) | `ci.SHIPPABLE` | ✅ | | [Solano CI](https://www.solanolabs.com/) | `ci.SOLANO` | ✅ | | [Strider CD](https://strider-cd.github.io/) | `ci.STRIDER` | 🚫 | | [TaskCluster](http://docs.taskcluster.net) | `ci.TASKCLUSTER` | 🚫 | | [TeamCity](https://www.jetbrains.com/teamcity/) by JetBrains | `ci.TEAMCITY` | 🚫 | | [Travis CI](http://travis-ci.org) | `ci.TRAVIS` | 
✅ | ## API ### `ci.name` Returns a string containing the name of the CI server the code is running on. If the CI server is not detected, it returns `null`. Don't depend on the value of this string not to change for a specific vendor. If you find yourself writing `ci.name === 'Travis CI'`, you most likely want to use `ci.TRAVIS` instead. ### `ci.isCI` Returns a boolean. Will be `true` if the code is running on a CI server, otherwise `false`. Some CI servers not listed here might still trigger the `ci.isCI` boolean to be set to `true` if they use certain vendor neutral environment variables. In those cases `ci.name` will be `null` and no vendor specific boolean will be set to `true`. ### `ci.isPR` Returns a boolean if PR detection is supported for the current CI server. Will be `true` if a PR is being tested, otherwise `false`. If PR detection is not supported for the current CI server, the value will be `null`. ### `ci.<VENDOR-CONSTANT>` A vendor specific boolean constant is exposed for each supported CI vendor. A constant will be `true` if the code is determined to run on the given CI server, otherwise `false`. Examples of vendor constants are `ci.TRAVIS` or `ci.APPVEYOR`. For a complete list, see the support table above. Deprecated vendor constants that will be removed in the next major release: - `ci.TDDIUM` (Solano CI) This has been renamed `ci.SOLANO` ## License [MIT](LICENSE) [Build]: http://img.shields.io/travis/litejs/natural-compare-lite.png [Coverage]: http://img.shields.io/coveralls/litejs/natural-compare-lite.png [1]: https://travis-ci.org/litejs/natural-compare-lite [2]: https://coveralls.io/r/litejs/natural-compare-lite [npm package]: https://npmjs.org/package/natural-compare-lite [GitHub repo]: https://github.com/litejs/natural-compare-lite @version 1.4.0 @date 2015-10-26 @stability 3 - Stable Natural Compare &ndash; [![Build][]][1] [![Coverage][]][2] =============== Compare strings containing a mix of letters and numbers in the way a human being would in sort order. This is described as a "natural ordering". ```text Standard sorting: Natural order sorting: img1.png img1.png img10.png img2.png img12.png img10.png img2.png img12.png ``` String.naturalCompare returns a number indicating whether a reference string comes before, comes after, or is the same as the given string in sort order. Use it with the built-in sort() function. ### Installation - In browser ```html <script src=min.natural-compare.js></script> ``` - In node.js: `npm install natural-compare-lite` ```javascript require("natural-compare-lite") ``` ### Usage ```javascript // Simple case sensitive example var a = ["z1.doc", "z10.doc", "z17.doc", "z2.doc", "z23.doc", "z3.doc"]; a.sort(String.naturalCompare); // ["z1.doc", "z2.doc", "z3.doc", "z10.doc", "z17.doc", "z23.doc"] // Use wrapper function for case insensitivity a.sort(function(a, b){ return String.naturalCompare(a.toLowerCase(), b.toLowerCase()); }) // In most cases we want to sort an array of objects var a = [ {"street":"350 5th Ave", "room":"A-1021"} , {"street":"350 5th Ave", "room":"A-21046-b"} ]; // sort by street, then by room a.sort(function(a, b){ return String.naturalCompare(a.street, b.street) || String.naturalCompare(a.room, b.room); }) // When text transformation is needed (eg toLowerCase()), // it is best for performance to keep // the transformed key in that object. // There is no need to do text transformation // on each comparison when sorting. 
var a = [ {"make":"Audi", "model":"A6"} , {"make":"Kia", "model":"Rio"} ]; // sort by make, then by model a.map(function(car){ car.sort_key = (car.make + " " + car.model).toLowerCase(); }) a.sort(function(a, b){ return String.naturalCompare(a.sort_key, b.sort_key); }) ``` - Works well with dates in ISO format eg "Rev 2012-07-26.doc". ### Custom alphabet It is possible to configure a custom alphabet to achieve a desired order. ```javascript // Estonian alphabet String.alphabet = "ABDEFGHIJKLMNOPRSŠZŽTUVÕÄÖÜXYabdefghijklmnoprsšzžtuvõäöüxy" ["t", "z", "x", "õ"].sort(String.naturalCompare) // ["z", "t", "õ", "x"] // Russian alphabet String.alphabet = "АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдеёжзийклмнопрстуфхцчшщъыьэюя" ["Ё", "А", "Б"].sort(String.naturalCompare) // ["А", "Б", "Ё"] ``` External links -------------- - [GitHub repo][https://github.com/litejs/natural-compare-lite] - [jsperf test](http://jsperf.com/natural-sort-2/12) Licence ------- Copyright (c) 2012-2015 Lauri Rooden &lt;lauri@rooden.ee&gt; [The MIT License](http://lauri.rooden.ee/mit-license.txt) # near-api-js [![Build Status](https://travis-ci.com/near/near-api-js.svg?branch=master)](https://travis-ci.com/near/near-api-js) [![Gitpod Ready-to-Code](https://img.shields.io/badge/Gitpod-Ready--to--Code-blue?logo=gitpod)](https://gitpod.io/#https://github.com/near/near-api-js) A JavaScript/TypeScript library for development of DApps on the NEAR platform # Documentation [Read the TypeDoc API documentation](https://near.github.io/near-api-js/) --- # Examples ## [Quick Reference](https://github.com/near/near-api-js/blob/master/examples/quick-reference.md) _(Cheat sheet / quick reference)_ ## [Cookbook](https://github.com/near/near-api-js/blob/master/examples/cookbook/README.md) _(Common use cases / more complex examples)_ --- # Contribute to this library 1. Install dependencies yarn 2. Run continuous build with: yarn build -- -w # Publish Prepare `dist` version by running: yarn dist When publishing to npm use [np](https://github.com/sindresorhus/np). --- # Integration Test Start the node by following instructions from [nearcore](https://github.com/nearprotocol/nearcore), then yarn test Tests use sample contract from `near-hello` npm package, see https://github.com/nearprotocol/near-hello # Update error schema Follow next steps: 1. [Change hash for the commit with errors in the nearcore](https://github.com/near/near-api-js/blob/master/gen_error_types.js#L7-L9) 2. Fetch new schema: `node fetch_error_schema.js` 3. `yarn build` to update `lib/**.js` files # License This repository is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See [LICENSE](LICENSE) and [LICENSE-APACHE](LICENSE-APACHE) for details. Compiler frontend for node.js ============================= Usage ----- For an up to date list of available command line options, see: ``` $> asc --help ``` API --- The API accepts the same options as the CLI but also lets you override stdout and stderr and/or provide a callback. Example: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { asc.main([ "myModule.ts", "--binaryFile", "myModule.wasm", "--optimize", "--sourceMap", "--measure" ], { stdout: process.stdout, stderr: process.stderr }, function(err) { if (err) throw err; ... }); }); ``` Available command line options can also be obtained programmatically: ```js const options = require("assemblyscript/cli/asc.json"); ... 
``` You can also compile a source string directly, for example in a browser environment: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { const { binary, text, stdout, stderr } = asc.compileString(`...`, { optimize: 2 }); }); ... ``` # lodash.sortby v4.7.0 The [lodash](https://lodash.com/) method `_.sortBy` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.sortby ``` In Node.js: ```js var sortBy = require('lodash.sortby'); ``` See the [documentation](https://lodash.com/docs#sortBy) or [package source](https://github.com/lodash/lodash/blob/4.7.0-npm-packages/lodash.sortby) for more details. The AssemblyScript Runtime ========================== The runtime provides the functionality necessary to dynamically allocate and deallocate memory of objects, arrays and buffers, as well as collect garbage that is no longer used. The current implementation is either a Two-Color Mark & Sweep (TCMS) garbage collector that must be called manually when the execution stack is unwound or an Incremental Tri-Color Mark & Sweep (ITCMS) garbage collector that is fully automated with a shadow stack, implemented on top of a Two-Level Segregate Fit (TLSF) memory manager. It's not designed to be the fastest of its kind, but intentionally focuses on simplicity and ease of integration until we can replace it with the real deal, i.e. Wasm GC. Interface --------- ### Garbage collector / `--exportRuntime` * **__new**(size: `usize`, id: `u32` = 0): `usize`<br /> Dynamically allocates a GC object of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. GC-allocated objects cannot be used with `__realloc` and `__free`. * **__pin**(ptr: `usize`): `usize`<br /> Pins the object pointed to by `ptr` externally so it and its directly reachable members and indirectly reachable objects do not become garbage collected. * **__unpin**(ptr: `usize`): `void`<br /> Unpins the object pointed to by `ptr` externally so it can become garbage collected. * **__collect**(): `void`<br /> Performs a full garbage collection. ### Internals * **__alloc**(size: `usize`): `usize`<br /> Dynamically allocates a chunk of memory of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. * **__realloc**(ptr: `usize`, size: `usize`): `usize`<br /> Dynamically changes the size of a chunk of memory, possibly moving it to a new address. * **__free**(ptr: `usize`): `void`<br /> Frees a dynamically allocated chunk of memory by its address. * **__renew**(ptr: `usize`, size: `usize`): `usize`<br /> Like `__realloc`, but for `__new`ed GC objects. * **__link**(parentPtr: `usize`, childPtr: `usize`, expectMultiple: `bool`): `void`<br /> Introduces a link from a parent object to a child object, i.e. upon `parent.field = child`. * **__visit**(ptr: `usize`, cookie: `u32`): `void`<br /> Concrete visitor implementation called during traversal. Cookie can be used to indicate one of multiple operations. * **__visit_globals**(cookie: `u32`): `void`<br /> Calls `__visit` on each global that is of a managed type. * **__visit_members**(ptr: `usize`, cookie: `u32`): `void`<br /> Calls `__visit` on each member of the object pointed to by `ptr`. * **__typeinfo**(id: `u32`): `RTTIFlags`<br /> Obtains the runtime type information for objects with the specified runtime id. 
Runtime type information is a set of flags indicating whether a type is managed, an array or similar, and what the relevant alignments are when creating an instance externally, etc. * **__instanceof**(ptr: `usize`, classId: `u32`): `bool`<br /> Tests if the object pointed to by `ptr` is an instance of the specified class id. ITCMS / `--runtime incremental` ----- The Incremental Tri-Color Mark & Sweep garbage collector maintains a separate shadow stack of managed values in the background to achieve full automation. Maintaining another stack introduces some overhead compared to the simpler Two-Color Mark & Sweep garbage collector, but makes it independent of whether the execution stack is unwound or not when it is invoked, so the garbage collector can run interleaved with the program. There are several constants one can experiment with to tweak ITCMS's automation: * `--use ASC_GC_GRANULARITY=1024`<br /> How often to interrupt. The default of 1024 means "interrupt each 1024 bytes allocated". * `--use ASC_GC_STEPFACTOR=200`<br /> How long to interrupt. The default of 200% means "run at double the speed of allocations". * `--use ASC_GC_IDLEFACTOR=200`<br /> How long to idle. The default of 200% means "wait for memory to double before kicking in again". * `--use ASC_GC_MARKCOST=1`<br /> How costly it is to mark one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. * `--use ASC_GC_SWEEPCOST=10`<br /> How costly it is to sweep one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. TCMS / `--runtime minimal` ---- If automation and low pause times aren't strictly necessary, using the Two-Color Mark & Sweep garbage collector instead by invoking collection manually at appropriate times when the execution stack is unwound may be more performant, as it is simpler and has less overhead. The execution stack is typically unwound when invoking the collector externally, at a place that is not indirectly called from Wasm. STUB / `--runtime stub` ---- The stub is a maximally minimal runtime substitute, consisting of a simple and fast bump allocator with no means of freeing up memory again, except when freeing the respective most recently allocated object on top of the bump. Useful where memory is not a concern, and/or where it is sufficient to destroy the whole module including any potential garbage after execution. See also: [Garbage collection](https://www.assemblyscript.org/garbage-collection.html) # Near Bindings Generator Transforms the AssemblyScript AST to serialize exported functions and add `encode` and `decode` functions for generating and parsing JSON strings. ## Using via CLI After installing it (`npm install nearprotocol/near-bindgen-as`), add it to the CLI arguments of the AssemblyScript compiler as follows: ```bash asc <file> --transform near-bindgen-as ... ``` This module also adds a binary `near-asc` which adds the default arguments required to build NEAR contracts as well as the transformer. ```bash near-asc <input file> <output file> ``` ## Using a script to compile Another way is to add a file such as `asconfig.js`: ```js const compile = require("near-bindgen-as/compiler").compile; compile("assembly/index.ts", // input file "out/index.wasm", // output file [ // "-O1", // Optional arguments "--debug", "--measure" ], // Prints out the final cli arguments passed to compiler. {verbose: true} ); ``` It can then be built with `node asconfig.js`. There is an example of this in the test directory. 
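If you prefer driving the compiler from JavaScript without the `compile()` helper, a rough sketch using the `asc.main` API documented earlier in this file could look like the following (the file names and the extra flags are illustrative assumptions):

```js
const asc = require("assemblyscript/cli/asc");

asc.ready.then(() => {
  asc.main([
    "assembly/index.ts",                 // contract entry file (assumed path)
    "--transform", "near-bindgen-as",    // apply the bindings transform
    "--binaryFile", "out/index.wasm",    // where to write the wasm output
    "--measure"
  ], {
    stdout: process.stdout,
    stderr: process.stderr
  }, function (err) {
    if (err) throw err;
  });
});
```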
# cacheable-request > Wrap native HTTP requests with RFC compliant cache support [![Build Status](https://travis-ci.org/lukechilds/cacheable-request.svg?branch=master)](https://travis-ci.org/lukechilds/cacheable-request) [![Coverage Status](https://coveralls.io/repos/github/lukechilds/cacheable-request/badge.svg?branch=master)](https://coveralls.io/github/lukechilds/cacheable-request?branch=master) [![npm](https://img.shields.io/npm/dm/cacheable-request.svg)](https://www.npmjs.com/package/cacheable-request) [![npm](https://img.shields.io/npm/v/cacheable-request.svg)](https://www.npmjs.com/package/cacheable-request) [RFC 7234](http://httpwg.org/specs/rfc7234.html) compliant HTTP caching for native Node.js HTTP/HTTPS requests. Caching works out of the box in memory or is easily pluggable with a wide range of storage adapters. **Note:** This is a low level wrapper around the core HTTP modules, it's not a high level request library. ## Features - Only stores cacheable responses as defined by RFC 7234 - Fresh cache entries are served directly from cache - Stale cache entries are revalidated with `If-None-Match`/`If-Modified-Since` headers - 304 responses from revalidation requests use cached body - Updates `Age` header on cached responses - Can completely bypass cache on a per request basis - In memory cache by default - Official support for Redis, MongoDB, SQLite, PostgreSQL and MySQL storage adapters - Easily plug in your own or third-party storage adapters - If DB connection fails, cache is automatically bypassed ([disabled by default](#optsautomaticfailover)) - Adds cache support to any existing HTTP code with minimal changes - Uses [http-cache-semantics](https://github.com/pornel/http-cache-semantics) internally for HTTP RFC 7234 compliance ## Install ```shell npm install cacheable-request ``` ## Usage ```js const http = require('http'); const CacheableRequest = require('cacheable-request'); // Then instead of const req = http.request('http://example.com', cb); req.end(); // You can do const cacheableRequest = new CacheableRequest(http.request); const cacheReq = cacheableRequest('http://example.com', cb); cacheReq.on('request', req => req.end()); // Future requests to 'example.com' will be returned from cache if still valid // You pass in any other http.request API compatible method to be wrapped with cache support: const cacheableRequest = new CacheableRequest(https.request); const cacheableRequest = new CacheableRequest(electron.net); ``` ## Storage Adapters `cacheable-request` uses [Keyv](https://github.com/lukechilds/keyv) to support a wide range of storage adapters. For example, to use Redis as a cache backend, you just need to install the official Redis Keyv storage adapter: ``` npm install @keyv/redis ``` And then you can pass `CacheableRequest` your connection string: ```js const cacheableRequest = new CacheableRequest(http.request, 'redis://user:pass@localhost:6379'); ``` [View all official Keyv storage adapters.](https://github.com/lukechilds/keyv#official-storage-adapters) Keyv also supports anything that follows the Map API so it's easy to write your own storage adapter or use a third-party solution. 
e.g The following are all valid storage adapters ```js const storageAdapter = new Map(); // or const storageAdapter = require('./my-storage-adapter'); // or const QuickLRU = require('quick-lru'); const storageAdapter = new QuickLRU({ maxSize: 1000 }); const cacheableRequest = new CacheableRequest(http.request, storageAdapter); ``` View the [Keyv docs](https://github.com/lukechilds/keyv) for more information on how to use storage adapters. ## API ### new cacheableRequest(request, [storageAdapter]) Returns the provided request function wrapped with cache support. #### request Type: `function` Request function to wrap with cache support. Should be [`http.request`](https://nodejs.org/api/http.html#http_http_request_options_callback) or a similar API compatible request function. #### storageAdapter Type: `Keyv storage adapter`<br> Default: `new Map()` A [Keyv](https://github.com/lukechilds/keyv) storage adapter instance, or connection string if using with an official Keyv storage adapter. ### Instance #### cacheableRequest(opts, [cb]) Returns an event emitter. ##### opts Type: `object`, `string` - Any of the default request functions options. - Any [`http-cache-semantics`](https://github.com/kornelski/http-cache-semantics#constructor-options) options. - Any of the following: ###### opts.cache Type: `boolean`<br> Default: `true` If the cache should be used. Setting this to false will completely bypass the cache for the current request. ###### opts.strictTtl Type: `boolean`<br> Default: `false` If set to `true` once a cached resource has expired it is deleted and will have to be re-requested. If set to `false` (default), after a cached resource's TTL expires it is kept in the cache and will be revalidated on the next request with `If-None-Match`/`If-Modified-Since` headers. ###### opts.maxTtl Type: `number`<br> Default: `undefined` Limits TTL. The `number` represents milliseconds. ###### opts.automaticFailover Type: `boolean`<br> Default: `false` When set to `true`, if the DB connection fails we will automatically fallback to a network request. DB errors will still be emitted to notify you of the problem even though the request callback may succeed. ###### opts.forceRefresh Type: `boolean`<br> Default: `false` Forces refreshing the cache. If the response could be retrieved from the cache, it will perform a new request and override the cache instead. ##### cb Type: `function` The callback function which will receive the response as an argument. The response can be either a [Node.js HTTP response stream](https://nodejs.org/api/http.html#http_class_http_incomingmessage) or a [responselike object](https://github.com/lukechilds/responselike). The response will also have a `fromCache` property set with a boolean value. ##### .on('request', request) `request` event to get the request object of the request. **Note:** This event will only fire if an HTTP request is actually made, not when a response is retrieved from cache. However, you should always handle the `request` event to end the request and handle any potential request errors. ##### .on('response', response) `response` event to get the response object from the HTTP request or cache. ##### .on('error', error) `error` event emitted in case of an error with the cache. Errors emitted here will be an instance of `CacheableRequest.RequestError` or `CacheableRequest.CacheError`. You will only ever receive a `RequestError` if the request function throws (normally caused by invalid user input). 
Normal request errors should be handled inside the `request` event. To properly handle all error scenarios you should use the following pattern: ```js cacheableRequest('example.com', cb) .on('error', err => { if (err instanceof CacheableRequest.CacheError) { handleCacheError(err); // Cache error } else if (err instanceof CacheableRequest.RequestError) { handleRequestError(err); // Request function thrown } }) .on('request', req => { req.on('error', handleRequestError); // Request error emitted req.end(); }); ``` **Note:** Database connection errors are emitted here, however `cacheable-request` will attempt to re-request the resource and bypass the cache on a connection error. Therefore a database connection error doesn't necessarily mean the request won't be fulfilled. ## License MIT © Luke Childs # minimatch A minimal matching utility. [![Build Status](https://travis-ci.org/isaacs/minimatch.svg?branch=master)](http://travis-ci.org/isaacs/minimatch) This is the matching library used internally by npm. It works by converting glob expressions into JavaScript `RegExp` objects. ## Usage ```javascript var minimatch = require("minimatch") minimatch("bar.foo", "*.foo") // true! minimatch("bar.foo", "*.bar") // false! minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy! ``` ## Features Supports these glob features: * Brace Expansion * Extended glob matching * "Globstar" `**` matching See: * `man sh` * `man bash` * `man 3 fnmatch` * `man 5 gitignore` ## Minimatch Class Create a minimatch object by instantiating the `minimatch.Minimatch` class. ```javascript var Minimatch = require("minimatch").Minimatch var mm = new Minimatch(pattern, options) ``` ### Properties * `pattern` The original pattern the minimatch object represents. * `options` The options supplied to the constructor. * `set` A 2-dimensional array of regexp or string expressions. Each row in the array corresponds to a brace-expanded pattern. Each item in the row corresponds to a single path-part. For example, the pattern `{a,b/c}/d` would expand to a set of patterns like: [ [ a, d ] , [ b, c, d ] ] If a portion of the pattern doesn't have any "magic" in it (that is, it's something like `"foo"` rather than `fo*o?`), then it will be left as a string rather than converted to a regular expression. * `regexp` Created by the `makeRe` method. A single regular expression expressing the entire pattern. This is useful in cases where you wish to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled. * `negate` True if the pattern is negated. * `comment` True if the pattern is a comment. * `empty` True if the pattern is `""`. ### Methods * `makeRe` Generate the `regexp` member if necessary, and return it. Will return `false` if the pattern is invalid. * `match(fname)` Return true if the filename matches the pattern, or false otherwise. * `matchOne(fileArray, patternArray, partial)` Take a `/`-split filename, and match it against a single row in the `regExpSet`. This method is mainly for internal use, but is exposed so that it can be used by a glob-walker that needs to avoid excessive filesystem calls. All other methods are internal, and will be called as necessary. ### minimatch(path, pattern, options) Main export. Tests a path against the pattern using the options. ```javascript var isJS = minimatch(file, "*.js", { matchBase: true }) ``` ### minimatch.filter(pattern, options) Returns a function that tests its supplied argument, suitable for use with `Array.filter`. 
Example: ```javascript var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true})) ``` ### minimatch.match(list, pattern, options) Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself. ```javascript var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})) ``` ### minimatch.makeRe(pattern, options) Make a regular expression object from the pattern. ## Options All options are `false` by default. ### debug Dump a ton of stuff to stderr. ### nobrace Do not expand `{a,b}` and `{1..3}` brace sets. ### noglobstar Disable `**` matching against multiple folder names. ### dot Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot. Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot` is set. ### noext Disable "extglob" style patterns like `+(a|b)`. ### nocase Perform a case-insensitive match. ### nonull When a match is not found by `minimatch.match`, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches. ### matchBase If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. ### nocomment Suppress the behavior of treating `#` at the start of a pattern as a comment. ### nonegate Suppress the behavior of treating a leading `!` character as negation. ### flipNegate Returns from negate expressions the same as if they were not negated. (Ie, true on a hit, false on a miss.) ### partial Compare a partial path to a pattern. As long as the parts of the path that are present are not contradicted by the pattern, it will be treated as a match. This is useful in applications where you're walking through a folder structure, and don't yet have the full path, but want to ensure that you do not walk down paths that can never be a match. For example, ```js minimatch('/a/b', '/a/*/c/d', { partial: true }) // true, might be /a/b/c/d minimatch('/a/b', '/**/d', { partial: true }) // true, might be /a/b/.../d minimatch('/x/y/z', '/a/**/z', { partial: true }) // false, because x !== a ``` ### allowWindowsEscape Windows path separator `\` is by default converted to `/`, which prohibits the usage of `\` as a escape character. This flag skips that behavior and allows using the escape character. ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional. If the pattern starts with a `!` character, then it is negated. Set the `nonegate` flag to suppress this behavior, and treat leading `!` characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like `!(a|B)`. Multiple `!` characters at the start of a pattern will negate the pattern multiple times. If a pattern starts with `#`, then it is treated as a comment, and will not match anything. Use `\#` to match a literal `#` at the start of a line, or set the `nocomment` flag to suppress this behavior. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.1, where `**` only has special significance if it is the only thing in a path part. 
That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. If an escaped pattern has no matches, and the `nonull` flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, `minimatch.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. # http-errors [![NPM Version][npm-version-image]][npm-url] [![NPM Downloads][npm-downloads-image]][node-url] [![Node.js Version][node-image]][node-url] [![Build Status][ci-image]][ci-url] [![Test Coverage][coveralls-image]][coveralls-url] Create HTTP errors for Express, Koa, Connect, etc. with ease. ## Install This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): ```bash $ npm install http-errors ``` ## Example ```js var createError = require('http-errors') var express = require('express') var app = express() app.use(function (req, res, next) { if (!req.user) return next(createError(401, 'Please login to view this page.')) next() }) ``` ## API This is the current API, currently extracted from Koa and subject to change. ### Error Properties - `expose` - can be used to signal if `message` should be sent to the client, defaulting to `false` when `status` >= 500 - `headers` - can be an object of header names to values to be sent to the client, defaulting to `undefined`. When defined, the key names should all be lower-cased - `message` - the traditional error message, which should be kept short and all single line - `status` - the status code of the error, mirroring `statusCode` for general compatibility - `statusCode` - the status code of the error, defaulting to `500` ### createError([status], [message], [properties]) Create a new error object with the given message `msg`. The error object inherits from `createError.HttpError`. ```js var err = createError(404, 'This video does not exist!') ``` - `status: 500` - the status code as a number - `message` - the message of the error, defaulting to node's text for that status code. - `properties` - custom properties to attach to the object ### createError([status], [error], [properties]) Extend the given `error` object with `createError.HttpError` properties. This will not alter the inheritance of the given `error` object, and the modified `error` object is the return value. <!-- eslint-disable no-redeclare --> ```js fs.readFile('foo.txt', function (err, buf) { if (err) { if (err.code === 'ENOENT') { var httpError = createError(404, err, { expose: false }) } else { var httpError = createError(500, err) } } }) ``` - `status` - the status code as a number - `error` - the error object to extend - `properties` - custom properties to attach to the object ### createError.isHttpError(val) Determine if the provided `val` is an `HttpError`. This will return `true` if the error inherits from the `HttpError` constructor of this module or matches the "duck type" for an error this module creates. 
All outputs from the `createError` factory will return `true` for this function, including if an non-`HttpError` was passed into the factory. ### new createError\[code || name\](\[msg]\)) Create a new error object with the given message `msg`. The error object inherits from `createError.HttpError`. ```js var err = new createError.NotFound() ``` - `code` - the status code as a number - `name` - the name of the error as a "bumpy case", i.e. `NotFound` or `InternalServerError`. #### List of all constructors |Status Code|Constructor Name | |-----------|-----------------------------| |400 |BadRequest | |401 |Unauthorized | |402 |PaymentRequired | |403 |Forbidden | |404 |NotFound | |405 |MethodNotAllowed | |406 |NotAcceptable | |407 |ProxyAuthenticationRequired | |408 |RequestTimeout | |409 |Conflict | |410 |Gone | |411 |LengthRequired | |412 |PreconditionFailed | |413 |PayloadTooLarge | |414 |URITooLong | |415 |UnsupportedMediaType | |416 |RangeNotSatisfiable | |417 |ExpectationFailed | |418 |ImATeapot | |421 |MisdirectedRequest | |422 |UnprocessableEntity | |423 |Locked | |424 |FailedDependency | |425 |UnorderedCollection | |426 |UpgradeRequired | |428 |PreconditionRequired | |429 |TooManyRequests | |431 |RequestHeaderFieldsTooLarge | |451 |UnavailableForLegalReasons | |500 |InternalServerError | |501 |NotImplemented | |502 |BadGateway | |503 |ServiceUnavailable | |504 |GatewayTimeout | |505 |HTTPVersionNotSupported | |506 |VariantAlsoNegotiates | |507 |InsufficientStorage | |508 |LoopDetected | |509 |BandwidthLimitExceeded | |510 |NotExtended | |511 |NetworkAuthenticationRequired| ## License [MIT](LICENSE) [ci-image]: https://badgen.net/github/checks/jshttp/http-errors/master?label=ci [ci-url]: https://github.com/jshttp/http-errors/actions?query=workflow%3Aci [coveralls-image]: https://badgen.net/coveralls/c/github/jshttp/http-errors/master [coveralls-url]: https://coveralls.io/r/jshttp/http-errors?branch=master [node-image]: https://badgen.net/npm/node/http-errors [node-url]: https://nodejs.org/en/download [npm-downloads-image]: https://badgen.net/npm/dm/http-errors [npm-url]: https://npmjs.org/package/http-errors [npm-version-image]: https://badgen.net/npm/v/http-errors [travis-image]: https://badgen.net/travis/jshttp/http-errors/master [travis-url]: https://travis-ci.org/jshttp/http-errors # npmlog The logger util that npm uses. This logger is very basic. It does the logging for npm. It supports custom levels and colored output. By default, logs are written to stderr. If you want to send log messages to outputs other than streams, then you can change the `log.stream` member, or you can just listen to the events that it emits, and do whatever you want with them. # Installation ```console npm install npmlog --save ``` # Basic Usage ```javascript var log = require('npmlog') // additional stuff ---------------------------+ // message ----------+ | // prefix ----+ | | // level -+ | | | // v v v v log.info('fyi', 'I have a kitty cat: %j', myKittyCat) ``` ## log.level * {String} The level to display logs at. Any logs at or above this level will be displayed. The special level `silent` will prevent anything from being displayed ever. ## log.record * {Array} An array of all the log messages that have been entered. ## log.maxRecordSize * {Number} The maximum number of records to keep. If log.record gets bigger than 10% over this value, then it is sliced down to 90% of this value. The reason for the 10% window is so that it doesn't have to resize a large array on every log entry. 
## log.prefixStyle * {Object} A style object that specifies how prefixes are styled. (See below) ## log.headingStyle * {Object} A style object that specifies how the heading is styled. (See below) ## log.heading * {String} Default: "" If set, a heading that is printed at the start of every line. ## log.stream * {Stream} Default: `process.stderr` The stream where output is written. ## log.enableColor() Force colors to be used on all messages, regardless of the output stream. ## log.disableColor() Disable colors on all messages. ## log.enableProgress() Enable the display of log activity spinner and progress bar ## log.disableProgress() Disable the display of a progress bar ## log.enableUnicode() Force the unicode theme to be used for the progress bar. ## log.disableUnicode() Disable the use of unicode in the progress bar. ## log.setGaugeTemplate(template) Set a template for outputting the progress bar. See the [gauge documentation] for details. [gauge documentation]: https://npmjs.com/package/gauge ## log.setGaugeThemeset(themes) Select a themeset to pick themes from for the progress bar. See the [gauge documentation] for details. ## log.pause() Stop emitting messages to the stream, but do not drop them. ## log.resume() Emit all buffered messages that were written while paused. ## log.log(level, prefix, message, ...) * `level` {String} The level to emit the message at * `prefix` {String} A string prefix. Set to "" to skip. * `message...` Arguments to `util.format` Emit a log message at the specified level. ## log\[level](prefix, message, ...) For example, * log.silly(prefix, message, ...) * log.verbose(prefix, message, ...) * log.info(prefix, message, ...) * log.http(prefix, message, ...) * log.warn(prefix, message, ...) * log.error(prefix, message, ...) Like `log.log(level, prefix, message, ...)`. In this way, each level is given a shorthand, so you can do `log.info(prefix, message)`. ## log.addLevel(level, n, style, disp) * `level` {String} Level indicator * `n` {Number} The numeric level * `style` {Object} Object with fg, bg, inverse, etc. * `disp` {String} Optional replacement for `level` in the output. Sets up a new level with a shorthand function and so forth. Note that if the number is `Infinity`, then setting the level to that will cause all log messages to be suppressed. If the number is `-Infinity`, then the only way to show it is to enable all log messages. ## log.newItem(name, todo, weight) * `name` {String} Optional; progress item name. * `todo` {Number} Optional; total amount of work to be done. Default 0. * `weight` {Number} Optional; the weight of this item relative to others. Default 1. This adds a new `are-we-there-yet` item tracker to the progress tracker. The object returned has the `log[level]` methods but is otherwise an `are-we-there-yet` `Tracker` object. ## log.newStream(name, todo, weight) This adds a new `are-we-there-yet` stream tracker to the progress tracker. The object returned has the `log[level]` methods but is otherwise an `are-we-there-yet` `TrackerStream` object. ## log.newGroup(name, weight) This adds a new `are-we-there-yet` tracker group to the progress tracker. The object returned has the `log[level]` methods but is otherwise an `are-we-there-yet` `TrackerGroup` object. # Events Events are all emitted with the message object. * `log` Emitted for all messages * `log.<level>` Emitted for all messages with the `<level>` level. * `<prefix>` Messages with prefixes also emit their prefix as an event. 
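Because every message is also emitted as an event (see the list above), log output can be consumed programmatically. A minimal sketch, assuming only the `npmlog` API documented in this README:

```js
var log = require('npmlog')

// log is an event emitter: the events listed above can be observed directly.
log.on('log.error', function (msg) {
  // msg is a message object ({ id, level, prefix, message, messageRaw });
  // forward it to monitoring, write it to a file, etc.
})

// addLevel sets up a custom level plus its shorthand method...
log.addLevel('success', 3001, { fg: 'green', bold: true })

// ...so this emits both a 'log' and a 'log.success' event.
log.success('deploy', 'finished without problems')
```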
# Style Objects Style objects can have the following fields: * `fg` {String} Color for the foreground text * `bg` {String} Color for the background * `bold`, `inverse`, `underline` {Boolean} Set the associated property * `bell` {Boolean} Make a noise (This is pretty annoying, probably.) # Message Objects Every log event is emitted with a message object, and the `log.record` list contains all of them that have been created. They have the following fields: * `id` {Number} * `level` {String} * `prefix` {String} * `message` {String} Result of `util.format()` * `messageRaw` {Array} Arguments to `util.format()` # Blocking TTYs We use [`set-blocking`](https://npmjs.com/package/set-blocking) to set stderr and stdout blocking if they are tty's and have the setBlocking call. This is a work around for an issue in early versions of Node.js 6.x, which made stderr and stdout non-blocking on OSX. (They are always blocking Windows and were never blocking on Linux.) `npmlog` needs them to be blocking so that it can allow output to stdout and stderr to be interlaced. # v8-compile-cache [![Build Status](https://travis-ci.org/zertosh/v8-compile-cache.svg?branch=master)](https://travis-ci.org/zertosh/v8-compile-cache) `v8-compile-cache` attaches a `require` hook to use [V8's code cache](https://v8project.blogspot.com/2015/07/code-caching.html) to speed up instantiation time. The "code cache" is the work of parsing and compiling done by V8. The ability to tap into V8 to produce/consume this cache was introduced in [Node v5.7.0](https://nodejs.org/en/blog/release/v5.7.0/). ## Usage 1. Add the dependency: ```sh $ npm install --save v8-compile-cache ``` 2. Then, in your entry module add: ```js require('v8-compile-cache'); ``` **Requiring `v8-compile-cache` in Node <5.7.0 is a noop – but you need at least Node 4.0.0 to support the ES2015 syntax used by `v8-compile-cache`.** ## Options Set the environment variable `DISABLE_V8_COMPILE_CACHE=1` to disable the cache. Cache directory is defined by environment variable `V8_COMPILE_CACHE_CACHE_DIR` or defaults to `<os.tmpdir()>/v8-compile-cache-<V8_VERSION>`. ## Internals Cache files are suffixed `.BLOB` and `.MAP` corresponding to the entry module that required `v8-compile-cache`. The cache is _entry module specific_ because it is faster to load the entire code cache into memory at once, than it is to read it from disk on a file-by-file basis. ## Benchmarks See https://github.com/zertosh/v8-compile-cache/tree/master/bench. **Load Times:** | Module | Without Cache | With Cache | | ---------------- | -------------:| ----------:| | `babel-core` | `218ms` | `185ms` | | `yarn` | `153ms` | `113ms` | | `yarn` (bundled) | `228ms` | `105ms` | _^ Includes the overhead of loading the cache itself._ ## Acknowledgements * `FileSystemBlobStore` and `NativeCompileCache` are based on Atom's implementation of their v8 compile cache: - https://github.com/atom/atom/blob/b0d7a8a/src/file-system-blob-store.js - https://github.com/atom/atom/blob/b0d7a8a/src/native-compile-cache.js * `mkdirpSync` is based on: - https://github.com/substack/node-mkdirp/blob/f2003bb/index.js#L55-L98 The AssemblyScript Runtime ========================== The runtime provides the functionality necessary to dynamically allocate and deallocate memory of objects, arrays and buffers, as well as collect garbage that is no longer used. 
The current implementation is either a Two-Color Mark & Sweep (TCMS) garbage collector that must be called manually when the execution stack is unwound or an Incremental Tri-Color Mark & Sweep (ITCMS) garbage collector that is fully automated with a shadow stack, implemented on top of a Two-Level Segregate Fit (TLSF) memory manager. It's not designed to be the fastest of its kind, but intentionally focuses on simplicity and ease of integration until we can replace it with the real deal, i.e. Wasm GC. Interface --------- ### Garbage collector / `--exportRuntime` * **__new**(size: `usize`, id: `u32` = 0): `usize`<br /> Dynamically allocates a GC object of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. GC-allocated objects cannot be used with `__realloc` and `__free`. * **__pin**(ptr: `usize`): `usize`<br /> Pins the object pointed to by `ptr` externally so it and its directly reachable members and indirectly reachable objects do not become garbage collected. * **__unpin**(ptr: `usize`): `void`<br /> Unpins the object pointed to by `ptr` externally so it can become garbage collected. * **__collect**(): `void`<br /> Performs a full garbage collection. ### Internals * **__alloc**(size: `usize`): `usize`<br /> Dynamically allocates a chunk of memory of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. * **__realloc**(ptr: `usize`, size: `usize`): `usize`<br /> Dynamically changes the size of a chunk of memory, possibly moving it to a new address. * **__free**(ptr: `usize`): `void`<br /> Frees a dynamically allocated chunk of memory by its address. * **__renew**(ptr: `usize`, size: `usize`): `usize`<br /> Like `__realloc`, but for `__new`ed GC objects. * **__link**(parentPtr: `usize`, childPtr: `usize`, expectMultiple: `bool`): `void`<br /> Introduces a link from a parent object to a child object, i.e. upon `parent.field = child`. * **__visit**(ptr: `usize`, cookie: `u32`): `void`<br /> Concrete visitor implementation called during traversal. Cookie can be used to indicate one of multiple operations. * **__visit_globals**(cookie: `u32`): `void`<br /> Calls `__visit` on each global that is of a managed type. * **__visit_members**(ptr: `usize`, cookie: `u32`): `void`<br /> Calls `__visit` on each member of the object pointed to by `ptr`. * **__typeinfo**(id: `u32`): `RTTIFlags`<br /> Obtains the runtime type information for objects with the specified runtime id. Runtime type information is a set of flags indicating whether a type is managed, an array or similar, and what the relevant alignments when creating an instance externally are etc. * **__instanceof**(ptr: `usize`, classId: `u32`): `bool`<br /> Tests if the object pointed to by `ptr` is an instance of the specified class id. ITCMS / `--runtime incremental` ----- The Incremental Tri-Color Mark & Sweep garbage collector maintains a separate shadow stack of managed values in the background to achieve full automation. Maintaining another stack introduces some overhead compared to the simpler Two-Color Mark & Sweep garbage collector, but makes it independent of whether the execution stack is unwound or not when it is invoked, so the garbage collector can run interleaved with the program. There are several constants one can experiment with to tweak ITCMS's automation: * `--use ASC_GC_GRANULARITY=1024`<br /> How often to interrupt. 
The default of 1024 means "interrupt each 1024 bytes allocated". * `--use ASC_GC_STEPFACTOR=200`<br /> How long to interrupt. The default of 200% means "run at double the speed of allocations". * `--use ASC_GC_IDLEFACTOR=200`<br /> How long to idle. The default of 200% means "wait for memory to double before kicking in again". * `--use ASC_GC_MARKCOST=1`<br /> How costly it is to mark one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. * `--use ASC_GC_SWEEPCOST=10`<br /> How costly it is to sweep one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. TCMS / `--runtime minimal` ---- If automation and low pause times aren't strictly necessary, using the Two-Color Mark & Sweep garbage collector instead by invoking collection manually at appropriate times when the execution stack is unwound may be more performant as it simpler and has less overhead. The execution stack is typically unwound when invoking the collector externally, at a place that is not indirectly called from Wasm. STUB / `--runtime stub` ---- The stub is a maximally minimal runtime substitute, consisting of a simple and fast bump allocator with no means of freeing up memory again, except when freeing the respective most recently allocated object on top of the bump. Useful where memory is not a concern, and/or where it is sufficient to destroy the whole module including any potential garbage after execution. See also: [Garbage collection](https://www.assemblyscript.org/garbage-collection.html) # md5.js [![NPM Package](https://img.shields.io/npm/v/md5.js.svg?style=flat-square)](https://www.npmjs.org/package/md5.js) [![Build Status](https://img.shields.io/travis/crypto-browserify/md5.js.svg?branch=master&style=flat-square)](https://travis-ci.org/crypto-browserify/md5.js) [![Dependency status](https://img.shields.io/david/crypto-browserify/md5.js.svg?style=flat-square)](https://david-dm.org/crypto-browserify/md5.js#info=dependencies) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Node style `md5` on pure JavaScript. From [NIST SP 800-131A][1]: *md5 is no longer acceptable where collision resistance is required such as digital signatures.* ## Example ```js var MD5 = require('md5.js') console.log(new MD5().update('42').digest('hex')) // => a1d0c6e83f027327d8461063f4ac58a6 var md5stream = new MD5() md5stream.end('42') console.log(md5stream.read().toString('hex')) // => a1d0c6e83f027327d8461063f4ac58a6 ``` ## LICENSE [MIT](LICENSE) [1]: http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-131Ar1.pdf # regexpp [![npm version](https://img.shields.io/npm/v/regexpp.svg)](https://www.npmjs.com/package/regexpp) [![Downloads/month](https://img.shields.io/npm/dm/regexpp.svg)](http://www.npmtrends.com/regexpp) [![Build Status](https://github.com/mysticatea/regexpp/workflows/CI/badge.svg)](https://github.com/mysticatea/regexpp/actions) [![codecov](https://codecov.io/gh/mysticatea/regexpp/branch/master/graph/badge.svg)](https://codecov.io/gh/mysticatea/regexpp) [![Dependency Status](https://david-dm.org/mysticatea/regexpp.svg)](https://david-dm.org/mysticatea/regexpp) A regular expression parser for ECMAScript. ## 💿 Installation ```bash $ npm install regexpp ``` - require Node.js 8 or newer. ## 📖 Usage ```ts import { AST, RegExpParser, RegExpValidator, RegExpVisitor, parseRegExpLiteral, validateRegExpLiteral, visitRegExpAST } from "regexpp" ``` ### parseRegExpLiteral(source, options?) 
Parse a given regular expression literal and return its AST. This is equivalent to `new RegExpParser(options).parseLiteral(source)`.

- **Parameters:**
  - `source` (`string | RegExp`) The source code to parse.
  - `options?` ([`RegExpParser.Options`]) The options to parse.
- **Return:**
  - The AST of the regular expression.

### validateRegExpLiteral(source, options?)

Validate a given regular expression literal. This is equivalent to `new RegExpValidator(options).validateLiteral(source)`.

- **Parameters:**
  - `source` (`string`) The source code to validate.
  - `options?` ([`RegExpValidator.Options`]) The options to validate.

### visitRegExpAST(ast, handlers)

Visit each node of a given AST. This is equivalent to `new RegExpVisitor(handlers).visit(ast)`.

- **Parameters:**
  - `ast` ([`AST.Node`]) The AST to visit.
  - `handlers` ([`RegExpVisitor.Handlers`]) The callbacks.

### RegExpParser

#### new RegExpParser(options?)

- **Parameters:**
  - `options?` ([`RegExpParser.Options`]) The options to parse.

#### parser.parseLiteral(source, start?, end?)

Parse a regular expression literal.

- **Parameters:**
  - `source` (`string`) The source code to parse. E.g. `"/abc/g"`.
  - `start?` (`number`) The start index in the source code. Default is `0`.
  - `end?` (`number`) The end index in the source code. Default is `source.length`.
- **Return:**
  - The AST of the regular expression.

#### parser.parsePattern(source, start?, end?, uFlag?)

Parse a regular expression pattern.

- **Parameters:**
  - `source` (`string`) The source code to parse. E.g. `"abc"`.
  - `start?` (`number`) The start index in the source code. Default is `0`.
  - `end?` (`number`) The end index in the source code. Default is `source.length`.
  - `uFlag?` (`boolean`) The flag to enable Unicode mode.
- **Return:**
  - The AST of the regular expression pattern.

#### parser.parseFlags(source, start?, end?)

Parse regular expression flags.

- **Parameters:**
  - `source` (`string`) The source code to parse. E.g. `"gim"`.
  - `start?` (`number`) The start index in the source code. Default is `0`.
  - `end?` (`number`) The end index in the source code. Default is `source.length`.
- **Return:**
  - The AST of the regular expression flags.

### RegExpValidator

#### new RegExpValidator(options)

- **Parameters:**
  - `options` ([`RegExpValidator.Options`]) The options to validate.

#### validator.validateLiteral(source, start, end)

Validate a regular expression literal.

- **Parameters:**
  - `source` (`string`) The source code to validate.
  - `start?` (`number`) The start index in the source code. Default is `0`.
  - `end?` (`number`) The end index in the source code. Default is `source.length`.

#### validator.validatePattern(source, start, end, uFlag)

Validate a regular expression pattern.

- **Parameters:**
  - `source` (`string`) The source code to validate.
  - `start?` (`number`) The start index in the source code. Default is `0`.
  - `end?` (`number`) The end index in the source code. Default is `source.length`.
  - `uFlag?` (`boolean`) The flag to enable Unicode mode.

#### validator.validateFlags(source, start, end)

Validate regular expression flags.

- **Parameters:**
  - `source` (`string`) The source code to validate.
  - `start?` (`number`) The start index in the source code. Default is `0`.
  - `end?` (`number`) The end index in the source code. Default is `source.length`.

### RegExpVisitor

#### new RegExpVisitor(handlers)

- **Parameters:**
  - `handlers` ([`RegExpVisitor.Handlers`]) The callbacks.

#### visitor.visit(ast)

Visit each node of the given AST.
- **Parameters:** - `ast` ([`AST.Node`]) The AST to visit. ## 📰 Changelog - [GitHub Releases](https://github.com/mysticatea/regexpp/releases) ## 🍻 Contributing Welcome contributing! Please use GitHub's Issues/PRs. ### Development Tools - `npm test` runs tests and measures coverage. - `npm run build` compiles TypeScript source code to `index.js`, `index.js.map`, and `index.d.ts`. - `npm run clean` removes the temporary files which are created by `npm test` and `npm run build`. - `npm run lint` runs ESLint. - `npm run update:test` updates test fixtures. - `npm run update:ids` updates `src/unicode/ids.ts`. - `npm run watch` runs tests with `--watch` option. [`AST.Node`]: src/ast.ts#L4 [`RegExpParser.Options`]: src/parser.ts#L539 [`RegExpValidator.Options`]: src/validator.ts#L127 [`RegExpVisitor.Handlers`]: src/visitor.ts#L204 # libusb [![Build Status](https://travis-ci.org/libusb/libusb.svg?branch=master)](https://travis-ci.org/libusb/libusb) [![Build status](https://ci.appveyor.com/api/projects/status/xvrfam94jii4a6lw?svg=true)](https://ci.appveyor.com/project/LudovicRousseau/libusb) [![Coverity Scan Build Status](https://scan.coverity.com/projects/2180/badge.svg)](https://scan.coverity.com/projects/libusb-libusb) libusb is a library for USB device access from Linux, macOS, Windows, OpenBSD/NetBSD and Haiku userspace. It is written in C (Haiku backend in C++) and licensed under the GNU Lesser General Public License version 2.1 or, at your option, any later version (see [COPYING](COPYING)). libusb is abstracted internally in such a way that it can hopefully be ported to other operating systems. Please see the [PORTING](PORTING) file for more information. libusb homepage: http://libusb.info/ Developers will wish to consult the API documentation: http://api.libusb.info Use the mailing list for questions, comments, etc: http://mailing-list.libusb.info - Hans de Goede <hdegoede@redhat.com> - Xiaofan Chen <xiaofanc@gmail.com> - Ludovic Rousseau <ludovic.rousseau@gmail.com> - Nathan Hjelm <hjelmn@cs.unm.edu> - Chris Dickens <christopher.a.dickens@gmail.com> (Please use the mailing list rather than mailing developers directly) # parse-passwd [![NPM version](https://img.shields.io/npm/v/parse-passwd.svg?style=flat)](https://www.npmjs.com/package/parse-passwd) [![NPM downloads](https://img.shields.io/npm/dm/parse-passwd.svg?style=flat)](https://npmjs.org/package/parse-passwd) [![Linux Build Status](https://img.shields.io/travis/doowb/parse-passwd.svg?style=flat&label=Travis)](https://travis-ci.org/doowb/parse-passwd) [![Windows Build Status](https://img.shields.io/appveyor/ci/doowb/parse-passwd.svg?style=flat&label=AppVeyor)](https://ci.appveyor.com/project/doowb/parse-passwd) > Parse a passwd file into a list of users. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save parse-passwd ``` ## Usage ```js var parse = require('parse-passwd'); ``` ## API **Example** ```js // assuming '/etc/passwd' contains: // doowb:*:123:123:Brian Woodward:/Users/doowb:/bin/bash console.log(parse(fs.readFileSync('/etc/passwd', 'utf8'))); //=> [ //=> { //=> username: 'doowb', //=> password: '*', //=> uid: '123', //=> gid: '123', //=> gecos: 'Brian Woodward', //=> homedir: '/Users/doowb', //=> shell: '/bin/bash' //=> } //=> ] ``` **Params** * `content` **{String}**: Content of a passwd file to parse. * `returns` **{Array}**: Array of user objects parsed from the content. ## About ### Contributing Pull requests and stars are always welcome. 
For bugs and feature requests, [please create an issue](../../issues/new). Please read the [contributing guide](contributing.md) for advice on opening issues, pull requests, and coding standards.

### Building docs

_(This document was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme) (a [verb](https://github.com/verbose/verb) generator), please don't edit the readme directly. Any changes to the readme must be made in [.verb.md](.verb.md).)_

To generate the readme and API documentation with [verb](https://github.com/verbose/verb):

```sh
$ npm install -g verb verb-generate-readme && verb
```

### Running tests

Install dev dependencies:

```sh
$ npm install -d && npm test
```

### Author

**Brian Woodward**

* [github/doowb](https://github.com/doowb)
* [twitter/doowb](http://twitter.com/doowb)

### License

Copyright © 2016, [Brian Woodward](https://github.com/doowb). Released under the [MIT license](LICENSE).

***

_This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.2.0, on October 19, 2016._

# balanced-match

Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well!

[![build status](https://secure.travis-ci.org/juliangruber/balanced-match.svg)](http://travis-ci.org/juliangruber/balanced-match)
[![downloads](https://img.shields.io/npm/dm/balanced-match.svg)](https://www.npmjs.org/package/balanced-match)
[![testling badge](https://ci.testling.com/juliangruber/balanced-match.png)](https://ci.testling.com/juliangruber/balanced-match)

## Example

Get the first matching pair of braces:

```js
var balanced = require('balanced-match');

console.log(balanced('{', '}', 'pre{in{nested}}post'));
console.log(balanced('{', '}', 'pre{first}between{second}post'));
console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post'));
```

The matches are:

```bash
$ node example.js
{ start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' }
{ start: 3, end: 9, pre: 'pre', body: 'first', post: 'between{second}post' }
{ start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' }
```

## API

### var m = balanced(a, b, str)

For the first non-nested matching pair of `a` and `b` in `str`, return an object with these keys:

* **start** the index of the first match of `a`
* **end** the index of the matching `b`
* **pre** the preamble, `a` and `b` not included
* **body** the match, `a` and `b` not included
* **post** the postscript, `a` and `b` not included

If there's no match, `undefined` will be returned.

If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`.

### var r = balanced.range(a, b, str)

For the first non-nested matching pair of `a` and `b` in `str`, return an array with indexes: `[ <a index>, <b index> ]`.

If there's no match, `undefined` will be returned.

If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`.

## Installation

With [npm](https://npmjs.org) do:

```bash
npm install balanced-match
```

## Security contact information

To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure.
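To make the difference between the two calls in the API section above concrete, here is a small sketch; the expected outputs follow directly from the documented example string and return shapes:

```js
var balanced = require('balanced-match');

// balanced() returns the object form documented above...
console.log(balanced('{', '}', 'pre{in{nested}}post'));
// => { start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' }

// ...while balanced.range() returns only the index pair [ <a index>, <b index> ].
console.log(balanced.range('{', '}', 'pre{in{nested}}post'));
// => [ 3, 14 ]
```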
## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # debug [![Build Status](https://travis-ci.org/debug-js/debug.svg?branch=master)](https://travis-ci.org/debug-js/debug) [![Coverage Status](https://coveralls.io/repos/github/debug-js/debug/badge.svg?branch=master)](https://coveralls.io/github/debug-js/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. 
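For instance, assuming the _app.js_ and _worker.js_ examples above, a minimal sketch of enabling just those namespaces from code rather than from the shell (the shell equivalent would be `DEBUG=http,worker:* node app.js`); note that `process.env.DEBUG` has to be set before `debug` is first required, because the environment is read when the module loads:

```js
// Equivalent to running: DEBUG=http,worker:* node app.js
// This must run before require('debug') is evaluated anywhere in the process.
process.env.DEBUG = 'http,worker:*';

const debug = require('debug')('http');
debug('the http namespace is enabled, so this message is printed');
```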
Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows command prompt notes ##### CMD On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Example: ```cmd set DEBUG=* & node app.js ``` ##### PowerShell (VS Code default) PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Example: ```cmd $env:DEBUG='app';node app.js ``` Then, run the program to be debugged as usual. npm script example: ```js "windowsDebug": "@powershell -Command $env:DEBUG='*';node app.js", ``` ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see when the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well, the "+NNNms" will show you how much time was spent between calls. <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debuggers you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard. 
Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. | | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` In Chromium-based web browsers (e.g. Brave, Chrome, and Electron), the JavaScript console will—by default—only show messages logged by `debug` if the "Verbose" log level is _enabled_. 
<img width="647" src="https://user-images.githubusercontent.com/7143133/152083257-29034707-c42c-4959-8add-3cee850e6fcf.png">

## Output streams

By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method:

Example [_stdout.js_](./examples/node/stdout.js):

```js
var debug = require('debug');
var error = debug('app:error');

// by default stderr is used
error('goes to stderr!');

var log = debug('app:log');
// set this namespace to log via console.log
log.log = console.log.bind(console); // don't forget to bind to console!
log('goes to stdout');
error('still goes to stderr!');

// set all output to go via console.info
// overrides all per-namespace log settings
debug.log = console.info.bind(console);
error('now goes to stdout via console.info');
log('still goes to stdout, but via console.info now');
```

## Extend

You can simply extend a debugger:

```js
const log = require('debug')('auth');

// creates new debug instance with extended namespace
const logSign = log.extend('sign');
const logLogin = log.extend('login');

log('hello'); // auth hello
logSign('hello'); // auth:sign hello
logLogin('hello'); // auth:login hello
```

## Set dynamically

You can also enable debug dynamically by calling the `enable()` method:

```js
let debug = require('debug');

console.log(1, debug.enabled('test'));

debug.enable('test');
console.log(2, debug.enabled('test'));

debug.disable();
console.log(3, debug.enabled('test'));
```

This prints:

```
1 false
2 true
3 false
```

Usage: `enable(namespaces)`

`namespaces` can include modes separated by a colon and wildcards.

Note that calling `enable()` completely overrides the previously set DEBUG variable:

```
$ DEBUG=foo node -e 'var dbg = require("debug"); dbg.enable("bar"); console.log(dbg.enabled("foo"))'
=> false
```

`disable()` will disable all namespaces. The function returns the namespaces currently enabled (and skipped). This can be useful if you want to disable debugging temporarily without knowing what was enabled to begin with.

For example:

```js
let debug = require('debug');
debug.enable('foo:*,-foo:bar');
let namespaces = debug.disable();
debug.enable(namespaces);
```

Note: There is no guarantee that the string will be identical to the initial enable string, but semantically they will be identical.

## Checking whether a debug target is enabled

After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property:

```javascript
const debug = require('debug')('http');

if (debug.enabled) {
  // do stuff...
}
```

You can also manually toggle this property to force the debug instance to be enabled or disabled.

## Usage in child processes

Due to the way `debug` detects if the output is a TTY or not, colors are not shown in child processes when `stderr` is piped. A solution is to pass the `DEBUG_COLORS=1` environment variable to the child process.

For example:

```javascript
worker = fork(WORKER_WRAP_PATH, [workerPath], {
  stdio: [
    /* stdin: */ 0,
    /* stdout: */ 'pipe',
    /* stderr: */ 'pipe',
    'ipc',
  ],
  env: Object.assign({}, process.env, {
    DEBUG_COLORS: 1 // without this setting, colors won't be shown
  }),
});

worker.stderr.pipe(process.stderr, { end: false });
```

## Authors

- TJ Holowaychuk
- Nathan Rajlich
- Andrew Rhyne
- Josh Junon

## Backers

Support us with a monthly donation and help us continue our activities.
[[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img 
src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;tj@vision-media.ca&gt; Copyright (c) 2018-2021 Josh Junon Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # defer-to-connect > The safe way to handle the `connect` socket event [![Coverage Status](https://coveralls.io/repos/github/szmarczak/defer-to-connect/badge.svg?branch=master)](https://coveralls.io/github/szmarczak/defer-to-connect?branch=master) Once you receive the socket, it may be already connected (or disconnected).<br> To avoid checking that, use `defer-to-connect`. It'll do that for you. 
## Usage

```js
const deferToConnect = require('defer-to-connect');

deferToConnect(socket, () => {
    console.log('Connected!');
});
```

## API

### deferToConnect(socket, connectListener)

Calls `connectListener()` when connected.

### deferToConnect(socket, listeners)

#### listeners

An object representing `connect`, `secureConnect` and `close` properties.

Calls `connect()` when the socket is connected.<br>
Calls `secureConnect()` when the socket is securely connected.<br>
Calls `close()` when the socket is destroyed.

## License

MIT

# Javascript Error Polyfill

[![Build Status](https://travis-ci.org/inf3rno/error-polyfill.png?branch=master)](https://travis-ci.org/inf3rno/error-polyfill)

Implementing the [V8 Stack Trace API](https://github.com/v8/v8/wiki/Stack-Trace-API) in non-V8 environments as much as possible

## Installation

```bash
npm install error-polyfill
```

```bash
bower install error-polyfill
```

### Environment compatibility

Tested on the following environments:

Windows 7
 - **Node.js** 9.6
 - **Chrome** 64.0
 - **Firefox** 58.0
 - **Internet Explorer** 10.0, 11.0
 - **PhantomJS** 2.1
 - **Opera** 51.0

Travis
 - **Node.js** 8, 9
 - **Chrome**
 - **Firefox**
 - **PhantomJS**

The polyfill might work on other environments too due to its adaptive design.

I use [Karma](https://github.com/karma-runner/karma) with [Browserify](https://github.com/substack/node-browserify) to test the framework in browsers.

### Requirements

ES5 support is required; without it the lib throws an Error and stops working. The ES5 features are tested by the [capability](https://github.com/inf3rno/capability) lib at run time.

Classes are created by the [o3](https://github.com/inf3rno/o3) lib.

Utility functions are implemented in the [u3](https://github.com/inf3rno/u3) lib.

## API documentation

### Usage

In this documentation I used the framework as follows:

```js
require("error-polyfill");
// <- your code here
```

It is recommended to require the polyfill in your main script.

### Getting a past stack trace with `Error.getStackTrace`

This static method is not part of the V8 Stack Trace API, but it is recommended to **use `Error.getStackTrace(throwable)` instead of `throwable.stack`** to get the stack trace of Error instances!

Explanation: In non-V8 environments we cannot replace the default stack generation algorithm, so we need a workaround that generates the stack when somebody tries to access it. The original stack string is parsed and the result is properly formatted when you access the stack through the `Error.getStackTrace` method.

Arguments and return values:

 - The `throwable` argument should be an `Error` (descendant) instance, but it can be an `Object` instance as well.
 - The return value is the generated `stack` of the `throwable` argument.

Example:

```js
try {
    theNotDefinedFunction();
}
catch (error) {
    console.log(Error.getStackTrace(error));
    // ReferenceError: theNotDefinedFunction is not defined
    // at ...
    // ...
}
```

### Capturing the present stack trace with `Error.captureStackTrace`

The `Error.captureStackTrace(throwable [, terminator])` sets the present stack above the `terminator` on the `throwable`.

Arguments and return values:

 - The `throwable` argument should be an instance of an `Error` descendant, but it can be an `Object` instance as well. It is recommended to use `Error` descendant instances instead of inline objects, because we can recognize them by type, e.g. `error instanceof UserError`.
 - The optional `terminator` argument should be a `Function`.
Only the calls before this function will be reported in the stack, so without a `terminator` argument the last call in the stack will be the call of `Error.captureStackTrace`.
 - There is no return value; the `stack` will be set on the `throwable`, so you will be able to access it using `Error.getStackTrace`. The format of the stack depends on the `Error.prepareStackTrace` implementation.

Example:

```js
var UserError = function (message){
    this.name = "UserError";
    this.message = message;
    Error.captureStackTrace(this, this.constructor);
};
UserError.prototype = Object.create(Error.prototype);

function codeSmells(){
    throw new UserError("What's going on?!");
}

codeSmells();
// UserError: What's going on?!
// at codeSmells (myModule.js:23:1)
// ...
```

Limitations: In the current implementation the `terminator` can only be the caller of `Error.captureStackTrace`. This will change soon, but in certain conditions, e.g. when using strict mode (`"use strict";`), it is not possible to access the information necessary to implement this feature. You will get an empty `frames` array and a `warning` in `Error.prepareStackTrace` when the stack parser encounters such conditions.

### Formatting the stack trace with `Error.prepareStackTrace`

The `Error.prepareStackTrace(throwable, frames [, warnings])` formats the stack `frames` and returns the `stack` value for `Error.captureStackTrace` or `Error.getStackTrace`. The native implementation returns a stack string, but you can override that by setting a new function value.

Arguments and return values:

 - The `throwable` argument is an `Error` or `Object` instance coming from `Error.captureStackTrace` or from the creation of a new `Error` instance. Be aware that in some environments you need to throw that instance to get a parsable stack. Without that you will get only a `warning` when trying to access the stack with `Error.getStackTrace`.
 - The `frames` argument is an array of `Frame` instances. Each `frame` represents a function call in the stack. You can use these frames to build a stack string. To access information about individual frames you can use the following methods.
   - `frame.toString()` - Returns the string representation of the frame, e.g. `codeSmells (myModule.js:23:1)`.
   - `frame.getThis()` - **Cannot be supported.** Returns the context of the call; only V8 environments support this natively.
   - `frame.getTypeName()` - **Not implemented yet.** Returns the type name of the context; for the global namespace it is `Window` in Chrome.
   - `frame.getFunction()` - Returns the called function, or `undefined` in strict mode.
   - `frame.getFunctionName()` - **Not implemented yet.** Returns the name of the called function.
   - `frame.getMethodName()` - **Not implemented yet.** Returns the method name if the called function is a method of an object.
   - `frame.getFileName()` - **Not implemented yet.** Returns the file name where the function was called.
   - `frame.getLineNumber()` - **Not implemented yet.** Returns the line at which the function was called in the file.
   - `frame.getColumnNumber()` - **Not implemented yet.** Returns the column at which the function was called in the file. This information is not always available.
   - `frame.getEvalOrigin()` - **Not implemented yet.** Returns the origin of an `eval` call.
   - `frame.isTopLevel()` - **Not implemented yet.** Returns whether the function was called from the top level.
   - `frame.isEval()` - **Not implemented yet.** Returns whether the called function was `eval`.
   - `frame.isNative()` - **Not implemented yet.** Returns whether the called function was native.
   - `frame.isConstructor()` - **Not implemented yet.** Returns whether the called function was a constructor.
 - The optional `warnings` argument contains warning messages coming from the stack parser. It is not part of the V8 Stack Trace API.
 - The return value will be the stack you can access with `Error.getStackTrace(throwable)`. If it is an object, it is recommended to add a `toString` method, so you will be able to read it in the console.

Example:

```js
Error.prepareStackTrace = function (throwable, frames, warnings) {
    var string = "";
    string += throwable.name || "Error";
    string += ": " + (throwable.message || "");
    if (warnings instanceof Array)
        for (var warningIndex in warnings) {
            var warning = warnings[warningIndex];
            string += "\n # " + warning;
        }
    for (var frameIndex in frames) {
        var frame = frames[frameIndex];
        string += "\n at " + frame.toString();
    }
    return string;
};
```

### Stack trace size limits with `Error.stackTraceLimit`

**Not implemented yet.**

You can set size limits on the stack trace, so you won't have any problems because of too long stack traces.

Example:

```js
Error.stackTraceLimit = 10;
```

### Handling uncaught errors and rejections

**Not implemented yet.**

## Differences between environments and modes

Since there is no Stack Trace API standard, every browser solves this problem differently. I try to document what I've found about these differences in as much detail as possible, so it will be easier to follow the code.

Overriding the `error.stack` property with custom Stack instances

 - in Node.js and Chrome, `Error.prepareStackTrace()` can override every `error.stack` automatically right at creation
 - in Firefox, Internet Explorer and Opera you cannot automatically override every `error.stack` of native errors
 - in PhantomJS you cannot override the `error.stack` property of native errors; it is not configurable

Capturing the current stack trace

 - in Node.js, Chrome, Firefox and Opera the stack property is added by instantiating a native error
 - in Node.js and Chrome the stack creation is lazy loaded and cached, so `Error.prepareStackTrace()` is called only on the first access
 - in Node.js and Chrome the current stack can be added to any object with `Error.captureStackTrace()`
 - in Internet Explorer the stack is created by throwing a native error
 - in PhantomJS the stack is created by throwing any object, but not a primitive

Accessing the stack

 - in Node.js, Chrome, Firefox, Internet Explorer, Opera and PhantomJS you can use the `error.stack` property
 - in old Opera you have to use the `error.stacktrace` property to get the stack

Prefixes and postfixes on the stack string

 - in Node.js, Chrome, Internet Explorer and Opera you have the `error.name` and the `error.message` in a `{name}: {message}` format at the beginning of the stack string
 - in Firefox and PhantomJS the stack string does not contain the `error.name` and the `error.message`
 - in Firefox you have an empty line at the end of the stack string

Accessing the stack frames array

 - in Node.js and Chrome you can access the frame objects directly by overriding `Error.prepareStackTrace()`
 - in Firefox, Internet Explorer, PhantomJS, and Opera you need to parse the stack string in order to get the frames

The structure of the frame string

 - in Node.js and Chrome
   - the frame string of calling a function from a module: `thirdFn (http://localhost/myModule.js:45:29)`
   - the frame strings contain an ` at ` prefix, which is
not present in the `frame.toString()` output, so it is added by `stack.toString()`
 - in Firefox
   - the frame string of calling a function from a module: `thirdFn@http://localhost/myModule.js:45:29`
 - in Internet Explorer
   - the frame string of calling a function from a module: ` at thirdFn (http://localhost/myModule.js:45:29)`
 - in PhantomJS
   - the frame string of calling a function from a module: `thirdFn@http://localhost/myModule.js:45:29`
 - in Opera
   - the frame string of calling a function from a module: ` at thirdFn (http://localhost/myModule.js:45)`

Accessing information by individual frames

 - in Node.js and Chrome, `frame.getThis()` and `frame.getFunction()` return `undefined` for frames originating from [strict mode](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode) code
 - in Firefox, Internet Explorer, PhantomJS, and Opera the context of the function calls is not accessible, so `frame.getThis()` cannot be implemented
 - in Firefox, Internet Explorer, PhantomJS, and Opera functions are not accessible with `arguments.callee.caller` for frames originating from strict mode, so for these frames `frame.getFunction()` can return only `undefined` (this is consistent with V8 behavior)

## License

MIT - 2016 Jánszky László Lajos

# prebuild-install

> **A command line tool to easily install prebuilt binaries for multiple versions of Node.js & Electron on a specific platform.**
> By default it downloads prebuilt binaries from a GitHub release.

[![npm](https://img.shields.io/npm/v/prebuild-install.svg)](https://www.npmjs.com/package/prebuild-install)
![Node version](https://img.shields.io/node/v/prebuild-install.svg)
[![Test](https://github.com/prebuild/prebuild-install/actions/workflows/test.yml/badge.svg)](https://github.com/prebuild/prebuild-install/actions/workflows/test.yml)
[![david](https://david-dm.org/prebuild/prebuild-install.svg)](https://david-dm.org/prebuild/prebuild-install)
[![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg)](http://standardjs.com/)

## Note

**Instead of [`prebuild`](https://github.com/prebuild/prebuild) paired with [`prebuild-install`](https://github.com/prebuild/prebuild-install), we recommend [`prebuildify`](https://github.com/prebuild/prebuildify) paired with [`node-gyp-build`](https://github.com/prebuild/node-gyp-build).**

With `prebuildify`, all prebuilt binaries are shipped inside the package that is published to npm, which means there's no need for a separate download step like you find in `prebuild`. The irony of this approach is that it is faster to download all prebuilt binaries for every platform when they are bundled than it is to download a single prebuilt binary via an install script.

Upsides:

1. No extra download step, making it more reliable and faster to install.
2. Supports changing runtime versions locally and using the same install between Node.js and Electron. Reinstalling or rebuilding is not necessary, as all prebuilt binaries are in the npm tarball and the correct one is simply picked at runtime.
3. The `node-gyp-build` runtime dependency is dependency-free and will remain so out of principle, because introducing dependencies would negate the shorter install time.
4. Prebuilt binaries work even if npm install scripts are disabled.
5. The npm package checksum covers prebuilt binaries too.

Downsides:

1. The installed npm package is larger on disk. Using [Node-API](https://nodejs.org/api/n-api.html) alleviates this because Node-API binaries are runtime-agnostic and forward-compatible.
2. Publishing is mildly more complicated, because `npm publish` must be done after compiling and fetching prebuilt binaries (typically in CI).
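For comparison with the `prebuild-install` setup described below, a minimal sketch of the recommended `prebuildify`/`node-gyp-build` approach might look like this (the package layout and flags are illustrative; check the `prebuildify` and `node-gyp-build` docs for the exact options):

```js
// package.json (sketch):
//   "scripts": {
//     "install": "node-gyp-build",      // falls back to compiling if no prebuild matches
//     "prebuild": "prebuildify --napi"  // run in CI before `npm publish`
//   }

// index.js - load whichever prebuilt (or locally built) binary matches this platform/runtime
const binding = require('node-gyp-build')(__dirname);

module.exports = binding;
```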
## Usage

Use [`prebuild`](https://github.com/prebuild/prebuild) to create and upload prebuilt binaries. Then change your package.json install script to:

```json
{
  "scripts": {
    "install": "prebuild-install || node-gyp rebuild"
  }
}
```

### Help

```
prebuild-install [options]

  --download    -d  [url]     (download prebuilds, no url means github)
  --target      -t  version   (version to install for)
  --runtime     -r  runtime   (Node runtime [node, napi or electron] to build or install for, default is node)
  --path        -p  path      (make a prebuild-install here)
  --token       -T  gh-token  (github token for private repos)
  --arch            arch      (target CPU architecture, see Node OS module docs, default is current arch)
  --platform        platform  (target platform, see Node OS module docs, default is current platform)
  --tag-prefix <prefix>       (github tag prefix, default is "v")
  --build-from-source         (skip prebuild download)
  --verbose                   (log verbosely)
  --libc                      (use provided libc rather than system default)
  --debug                     (set Debug or Release configuration)
  --version                   (print prebuild-install version and exit)
```

When `prebuild-install` is run via an `npm` script, the options `--build-from-source`, `--debug`, `--download`, `--target`, `--runtime`, `--arch` and `--platform` may be passed through via arguments given to the `npm` command.

Alternatively you can set the environment variables `npm_config_build_from_source=true`, `npm_config_platform`, `npm_config_arch`, `npm_config_target` and `npm_config_runtime`.

### Private Repositories

`prebuild-install` supports downloading prebuilds from private GitHub repositories using the `-T <github-token>` option:

```
$ prebuild-install -T <github-token>
```

If you don't want to use the token on the CLI you can put it in `~/.prebuild-installrc`:

```
token=<github-token>
```

Alternatively you can specify it in the `prebuild-install_token` environment variable.

Note that using a GitHub token uses the API to resolve the correct release, meaning that you are subject to the [GitHub Rate Limit](https://developer.github.com/v3/rate_limit/).

### Create GitHub Token

To create a token:

- Go to [this page](https://github.com/settings/tokens)
- Click the `Generate new token` button
- Give the token a name and click the `Generate token` button, see below

![prebuild-token](https://cloud.githubusercontent.com/assets/13285808/20844584/d0b85268-b8c0-11e6-8b08-2b19522165a9.png)

The default scopes should be fine.

### Custom binaries

The end user can override the binary download location through environment variables in their .npmrc file. The variable name must match the pattern `% your package name %_binary_host` or `% your package name %_binary_host_mirror`. For example:

```
leveldown_binary_host=http://overriden-host.com/overriden-path
```

Note that the package version subpath and file name will still be appended. So if you are installing `leveldown@1.2.3` the resulting url will be:

```
http://overriden-host.com/overriden-path/v1.2.3/leveldown-v1.2.3-node-v57-win32-x64.tar.gz
```

#### Local prebuilds

If you want to use prebuilds from your local filesystem, you can use the `% your package name %_local_prebuilds` .npmrc variable to set a path to the folder containing prebuilds.
For example:

```
leveldown_local_prebuilds=/path/to/prebuilds
```

This option will look directly in that folder for bundles created with `prebuild`, for example:

```
/path/to/prebuilds/leveldown-v1.2.3-node-v57-win32-x64.tar.gz
```

Non-absolute paths resolve relative to the directory of the package invoking prebuild-install, e.g. for nested dependencies.

### Cache

All prebuilt binaries are cached to minimize traffic. `prebuild-install` first picks binaries from the cache, and if no binary could be found there, it will be downloaded. Depending on the environment, the cache folder is determined in the following order:

- `${npm_config_cache}/_prebuilds`
- `${APP_DATA}/npm-cache/_prebuilds`
- `${HOME}/.npm/_prebuilds`

## License

MIT

# is-yarn-global

[![](https://img.shields.io/travis/LitoMore/is-yarn-global/master.svg)](https://travis-ci.org/LitoMore/is-yarn-global)
[![](https://img.shields.io/npm/v/is-yarn-global.svg)](https://www.npmjs.com/package/is-yarn-global)
[![](https://img.shields.io/npm/l/is-yarn-global.svg)](https://github.com/LitoMore/is-yarn-global/blob/master/LICENSE)
[![](https://img.shields.io/badge/code_style-XO-5ed9c7.svg)](https://github.com/sindresorhus/xo)

Check if installed by yarn globally without any `fs` calls

## Install

```bash
$ npm install is-yarn-global
```

## Usage

Just require it in your package.

```javascript
const isYarnGlobal = require('is-yarn-global');

console.log(isYarnGlobal());
```

## License

MIT © [LitoMore](https://github.com/LitoMore)

### Estraverse

[![Build Status](https://secure.travis-ci.org/estools/estraverse.svg)](http://travis-ci.org/estools/estraverse)

Estraverse ([estraverse](http://github.com/estools/estraverse)) provides [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) traversal functions from the [esmangle project](http://github.com/estools/esmangle).

### Documentation

You can find usage docs at the [wiki page](https://github.com/estools/estraverse/wiki/Usage).

### Example Usage

The following code will output all variables declared at the root of a file.

```javascript
estraverse.traverse(ast, {
    enter: function (node, parent) {
        if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration')
            return estraverse.VisitorOption.Skip;
    },
    leave: function (node, parent) {
        if (node.type == 'VariableDeclarator')
            console.log(node.id.name);
    }
});
```

We can use the `this.skip`, `this.remove` and `this.break` functions instead of using `Skip`, `Remove` and `Break`.

```javascript
estraverse.traverse(ast, {
    enter: function (node) {
        this.break();
    }
});
```

Estraverse also provides an `estraverse.replace` function. When a node is returned from `enter`/`leave`, the current node is replaced with it.

```javascript
result = estraverse.replace(tree, {
    enter: function (node) {
        // Replace it with replaced.
        if (node.type === 'Literal')
            return replaced;
    }
});
```

By passing a `visitor.keys` mapping, we can extend estraverse's traversing functionality.

```javascript
// This tree contains a user-defined `TestExpression` node.
var tree = {
    type: 'TestExpression',

    // This 'argument' is the property containing the other **node**.
    argument: {
        type: 'Literal',
        value: 20
    },

    // This 'extended' is the property not containing the other **node**.
    extended: true
};
estraverse.traverse(tree, {
    enter: function (node) { },

    // Extending the existing traversing rules.
    keys: {
        // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ]
        TestExpression: ['argument']
    }
});
```

By passing the `visitor.fallback` option, we can control the behavior when encountering unknown nodes.
```javascript
// This tree contains a user-defined `TestExpression` node.
var tree = {
    type: 'TestExpression',

    // This 'argument' is the property containing the other **node**.
    argument: {
        type: 'Literal',
        value: 20
    },

    // This 'extended' is the property not containing the other **node**.
    extended: true
};
estraverse.traverse(tree, {
    enter: function (node) { },

    // Iterating the child **nodes** of unknown nodes.
    fallback: 'iteration'
});
```

When `visitor.fallback` is a function, we can determine which keys to visit on each node.

```javascript
// This tree contains a user-defined `TestExpression` node.
var tree = {
    type: 'TestExpression',

    // This 'argument' is the property containing the other **node**.
    argument: {
        type: 'Literal',
        value: 20
    },

    // This 'extended' is the property not containing the other **node**.
    extended: true
};
estraverse.traverse(tree, {
    enter: function (node) { },

    // Skip the `argument` property of each node
    fallback: function(node) {
        return Object.keys(node).filter(function(key) {
            return key !== 'argument';
        });
    }
});
```

### License

Copyright (C) 2012-2016 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

# axios // helpers

The modules found in `helpers/` should be generic modules that are _not_ specific to the domain logic of axios. These modules could theoretically be published to npm on their own and consumed by other modules or apps. Some examples of generic modules are things like:

- Browser polyfills
- Managing cookies
- Parsing HTTP headers

Shims used when bundling asc for browser usage.
[![NPM version](https://img.shields.io/npm/v/esprima.svg)](https://www.npmjs.com/package/esprima)
[![npm download](https://img.shields.io/npm/dm/esprima.svg)](https://www.npmjs.com/package/esprima)
[![Build Status](https://img.shields.io/travis/jquery/esprima/master.svg)](https://travis-ci.org/jquery/esprima)
[![Coverage Status](https://img.shields.io/codecov/c/github/jquery/esprima/master.svg)](https://codecov.io/github/jquery/esprima)

**Esprima** ([esprima.org](http://esprima.org), BSD license) is a high performance, standard-compliant [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) parser written in ECMAScript (also popularly known as [JavaScript](https://en.wikipedia.org/wiki/JavaScript)). Esprima is created and maintained by [Ariya Hidayat](https://twitter.com/ariyahidayat), with the help of [many contributors](https://github.com/jquery/esprima/contributors).

### Features

- Full support for ECMAScript 2017 ([ECMA-262 8th Edition](http://www.ecma-international.org/publications/standards/Ecma-262.htm))
- Sensible [syntax tree format](https://github.com/estree/estree/blob/master/es5.md) as standardized by [ESTree project](https://github.com/estree/estree)
- Experimental support for [JSX](https://facebook.github.io/jsx/), a syntax extension for [React](https://facebook.github.io/react/)
- Optional tracking of syntax node location (index-based and line-column)
- [Heavily tested](http://esprima.org/test/ci.html) (~1500 [unit tests](https://github.com/jquery/esprima/tree/master/test/fixtures) with [full code coverage](https://codecov.io/github/jquery/esprima))

### API

Esprima can be used to perform [lexical analysis](https://en.wikipedia.org/wiki/Lexical_analysis) (tokenization) or [syntactic analysis](https://en.wikipedia.org/wiki/Parsing) (parsing) of a JavaScript program.

A simple example on Node.js REPL:

```javascript
> var esprima = require('esprima');
> var program = 'const answer = 42';

> esprima.tokenize(program);
[ { type: 'Keyword', value: 'const' },
  { type: 'Identifier', value: 'answer' },
  { type: 'Punctuator', value: '=' },
  { type: 'Numeric', value: '42' } ]

> esprima.parseScript(program);
{ type: 'Program',
  body: [ { type: 'VariableDeclaration', declarations: [Object], kind: 'const' } ],
  sourceType: 'script' }
```

For more information, please read the [complete documentation](http://esprima.org/doc).

# fs-minipass

Filesystem streams based on [minipass](http://npm.im/minipass).

4 classes are exported:

- ReadStream
- ReadStreamSync
- WriteStream
- WriteStreamSync

When using `ReadStreamSync`, all of the data is made available immediately upon consuming the stream. Nothing is buffered in memory when the stream is constructed. If the stream is piped to a writer, then it will synchronously `read()` and emit data into the writer as fast as the writer can consume it. (That is, it will respect backpressure.) If you call `stream.read()` then it will read the entire file and return the contents.

When using `WriteStreamSync`, every write is flushed to the file synchronously. If your writes all come in a single tick, then it'll write it all out in a single tick. It's as synchronous as you are.

The async versions work much like their node builtin counterparts, with the exception of introducing significantly less Stream machinery overhead.

## USAGE

It's just streams, you pipe them or read() them or write() to them.
```js
const fsm = require('fs-minipass')
const readStream = new fsm.ReadStream('file.txt')
const writeStream = new fsm.WriteStream('output.txt')
writeStream.write('some file header or whatever\n')
readStream.pipe(writeStream)
```

## ReadStream(path, options)

Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option.

Options:

- `fd` Pass in a numeric file descriptor, if the file is already open.
- `readSize` The size of reads to do, defaults to 16MB
- `size` The size of the file, if known. Prevents zero-byte read() call at the end.
- `autoClose` Set to `false` to prevent the file descriptor from being closed when the file is done being read.

## WriteStream(path, options)

Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option.

Options:

- `fd` Pass in a numeric file descriptor, if the file is already open.
- `mode` The mode to create the file with. Defaults to `0o666`.
- `start` The position in the file to start writing. If not specified, then the file will start writing at position zero, and be truncated by default.
- `autoClose` Set to `false` to prevent the file descriptor from being closed when the stream is ended.
- `flags` Flags to use when opening the file. Irrelevant if `fd` is passed in, since the file won't be opened in that case. Defaults to `'a'` if a `pos` is specified, or `'w'` otherwise.

# sprintf.js

**sprintf.js** is a complete open source JavaScript sprintf implementation for the *browser* and *node.js*.

Its prototype is simple:

    string sprintf(string format, [mixed arg1 [, mixed arg2 [, ...]]])

The placeholders in the format string are marked by `%` and are followed by one or more of these elements, in this order (a combined example follows the list):

* An optional number followed by a `$` sign that selects which argument index to use for the value. If not specified, arguments will be placed in the same order as the placeholders in the input string.
* An optional `+` sign that forces the result to be preceded by a plus or minus sign on numeric values. By default, only the `-` sign is used on negative numbers.
* An optional padding specifier that says what character to use for padding (if specified). Possible values are `0` or any other character preceded by a `'` (single quote). The default is to pad with *spaces*.
* An optional `-` sign, that causes sprintf to left-align the result of this placeholder. The default is to right-align the result.
* An optional number, that says how many characters the result should have. If the value to be returned is shorter than this number, the result will be padded. When used with the `j` (JSON) type specifier, the padding length specifies the tab size used for indentation.
* An optional precision modifier, consisting of a `.` (dot) followed by a number, that says how many digits should be displayed for floating point numbers. When used with the `g` type specifier, it specifies the number of significant digits. When used on a string, it causes the result to be truncated.
* A type specifier that can be any of:
    * `%` — yields a literal `%` character
    * `b` — yields an integer as a binary number
    * `c` — yields an integer as the character with that ASCII value
    * `d` or `i` — yields an integer as a signed decimal number
    * `e` — yields a float using scientific notation
    * `u` — yields an integer as an unsigned decimal number
    * `f` — yields a float as is; see notes on precision above
    * `g` — yields a float as is; see notes on precision above
    * `o` — yields an integer as an octal number
    * `s` — yields a string as is
    * `x` — yields an integer as a hexadecimal number (lower-case)
    * `X` — yields an integer as a hexadecimal number (upper-case)
    * `j` — yields a JavaScript object or array as a JSON encoded string
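To illustrate how these elements combine, here is a small hedged example; the outputs in the comments are what the rules above should produce, but verify them against the sprintf-js version you install:

    var sprintf = require("sprintf-js").sprintf

    sprintf("%+d", 42)          // "+42"        force the sign
    sprintf("%05.2f", 3.14159)  // "03.14"      width 5, zero-padded, 2 decimals
    sprintf("%-8s|", "ok")      // "ok      |"  left-aligned in 8 characters
    sprintf("%'#10s", "pad")    // "#######pad" custom pad character
    sprintf("%x", 255)          // "ff"         lower-case hexadecimal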
## JavaScript `vsprintf`

`vsprintf` is the same as `sprintf` except that it accepts an array of arguments, rather than a variable number of arguments:

    vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"])

## Argument swapping

You can also swap the arguments. That is, the order of the placeholders doesn't have to match the order of the arguments. You can do that by simply indicating in the format string which arguments the placeholders refer to:

    sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants")

And, of course, you can repeat the placeholders without having to increase the number of arguments.

## Named arguments

Format strings may contain replacement fields rather than positional placeholders. Instead of referring to a certain argument, you can now refer to a certain key within an object. Replacement fields are surrounded by round parentheses - `(` and `)` - and begin with a keyword that refers to a key:

    var user = {
        name: "Dolly"
    }
    sprintf("Hello %(name)s", user) // Hello Dolly

Keywords in replacement fields can be optionally followed by any number of keywords or indexes:

    var users = [
        {name: "Dolly"},
        {name: "Molly"},
        {name: "Polly"}
    ]
    sprintf("Hello %(users[0].name)s, %(users[1].name)s and %(users[2].name)s", {users: users}) // Hello Dolly, Molly and Polly

Note: mixing positional and named placeholders is not (yet) supported

## Computed values

You can pass in a function as a dynamic value and it will be invoked (with no arguments) in order to compute the value on-the-fly.

    sprintf("Current timestamp: %d", Date.now) // Current timestamp: 1398005382890
    sprintf("Current date and time: %s", function() { return new Date().toString() })

# AngularJS

You can now use `sprintf` and `vsprintf` (also aliased as `fmt` and `vfmt` respectively) in your AngularJS projects. See `demo/`.

# Installation

## Via Bower

    bower install sprintf

## Or as a node.js module

    npm install sprintf-js

### Usage

    var sprintf = require("sprintf-js").sprintf,
        vsprintf = require("sprintf-js").vsprintf

    sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants")
    vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"])

# License

**sprintf.js** is licensed under the terms of the 3-clause BSD license.

gauge
=====

A nearly stateless terminal based horizontal gauge / progress bar.

```javascript
var Gauge = require("gauge")

var gauge = new Gauge()

gauge.show("test", 0.20)

gauge.pulse("this")

gauge.hide()
```

![](gauge-demo.gif)

### CHANGES FROM 1.x

Gauge 2.x is a breaking release, please see the [changelog] for details on what's changed if you were previously a user of this module.
[changelog]: CHANGELOG.md

### THE GAUGE CLASS

This is the typical interface to the module; it provides a pretty fire-and-forget interface to displaying your status information.

```
var Gauge = require("gauge")

var gauge = new Gauge([stream], [options])
```

* **stream** – *(optional, default STDERR)* A stream that progress bar updates are to be written to. Gauge honors backpressure and will pause most writing if it is indicated.
* **options** – *(optional)* An option object.

Constructs a new gauge. Gauges are drawn on a single line, and are not drawn if **stream** isn't a tty and a tty isn't explicitly provided.

If **stream** is a terminal or if you pass in **tty** to **options** then we will detect terminal resizes and redraw to fit. We do this by watching for `resize` events on the tty. (To work around a bug in versions of Node prior to 2.5.0, we watch for them on stdout if the tty is stderr.) Resizes to larger window sizes will be clean, but shrinking the window will always result in some cruft.

**IMPORTANT:** If you previously were passing in a non-tty stream but you still want output (for example, a stream wrapped by the `ansi` module) then you need to pass in the **tty** option below, as `gauge` needs access to the underlying tty in order to do things like terminal resizes and terminal width detection.

The **options** object can have the following properties, all of which are optional (a combined example follows the list):

* **updateInterval**: How often gauge updates should be drawn, in milliseconds.
* **fixedFramerate**: Defaults to false on node 0.8, true on everything else. When this is true a timer is created to trigger once every `updateInterval` ms; when false, updates are printed as soon as they come in, but updates more often than `updateInterval` are ignored. The reason 0.8 doesn't have this set to true is that it can't `unref` its timer and so it would stop your program from exiting. If you want to use this feature with 0.8 just make sure you call `gauge.disable()` before you expect your program to exit.
* **themes**: A themeset to use when selecting the theme to use. Defaults to `gauge/themes`, see the [themes] documentation for details.
* **theme**: Select a theme for use, it can be a:
  * Theme object, in which case **themes** is not used.
  * The name of a theme, which will be looked up in the current *themes* object.
  * A configuration object with any of `hasUnicode`, `hasColor` or `platform` keys, which will be used to override our guesses when making a default theme selection.

  If no theme is selected then a default is picked using a combination of our best guesses at your OS, color support and unicode support.
* **template**: Describes what you want your gauge to look like. The default is what npm uses. Detailed [documentation] is later in this document.
* **hideCursor**: Defaults to true. If true, then the cursor will be hidden while the gauge is displayed.
* **tty**: The tty that you're ultimately writing to. Defaults to the same as **stream**. This is used for detecting the width of the terminal and resizes. The width used is `tty.columns - 1`. If no tty is available then a width of `79` is assumed.
* **enabled**: Defaults to true if `tty` is a TTY, false otherwise. If true the gauge starts enabled. If disabled then all update commands are ignored and no gauge will be printed until you call `.enable()`.
* **Plumbing**: The class to use to actually generate the gauge for printing. This defaults to `require('gauge/plumbing')` and ordinarily you shouldn't need to override this.
* **cleanupOnExit**: Defaults to true. Ordinarily we register an exit handler to make sure your cursor is turned back on and the progress bar erased when your process exits, even if you Ctrl-C out or otherwise exit unexpectedly. You can disable this and it won't register the exit handler.

[has-unicode]: https://www.npmjs.com/package/has-unicode
[themes]: #themes
[documentation]: #templates
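To make the options above concrete, here is a hedged sketch of a constructor call combining several of them (the specific values are illustrative, not defaults):

```javascript
var Gauge = require("gauge")

var gauge = new Gauge(process.stderr, {
  updateInterval: 100,                        // redraw at most every 100 ms
  hideCursor: true,                           // hide the cursor while the gauge is shown
  theme: {hasUnicode: true, hasColor: true},  // let the themeset pick a matching default
  template: [                                 // same shape as the default template shown later
    {type: 'progressbar', length: 20},
    {type: 'activityIndicator', kerning: 1, length: 1},
    {type: 'section', kerning: 1, default: ''}
  ]
})

gauge.show("building", 0.5)
```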
#### `gauge.show(section | status, [completed])`

The first argument is either the section, the name of the current thing contributing to progress, or an object with keys like **section**, **subsection** & **completed** (or any others you have types for in a custom template). If you don't want to update or set any of these you can pass `null` and it will be ignored.

The second argument is the percent completed as a value between 0 and 1. Without it, completion is just not updated.

You'll also note that completion can be passed in as part of a status object as the first argument. If both it and the completed argument are passed in, the completed argument wins.

#### `gauge.hide([cb])`

Removes the gauge from the terminal. Optionally, callback `cb` after IO has had an opportunity to happen (currently this just means after `setImmediate` has called back.)

It turns out this is important when you're pausing the progress bar on one filehandle and printing to another; otherwise (with a big enough print) node can end up printing the "end progress bar" bits to the progress bar filehandle while other stuff is printing to another filehandle. These getting interleaved can cause corruption in some terminals.

#### `gauge.pulse([subsection])`

* **subsection** – *(optional)* The specific thing that triggered this pulse

Spins the spinner in the gauge to show output. If **subsection** is included then it will be combined with the last name passed to `gauge.show`.

#### `gauge.disable()`

Hides the gauge and ignores further calls to `show` or `pulse`.

#### `gauge.enable()`

Shows the gauge and resumes updating when `show` or `pulse` is called.

#### `gauge.isEnabled()`

Returns true if the gauge is enabled.

#### `gauge.setThemeset(themes)`

Change the themeset to select a theme from. The same as the `themes` option used in the constructor. The theme will be reselected from this themeset.

#### `gauge.setTheme(theme)`

Change the active theme; it will be displayed with the next show or pulse. This can be:

* Theme object, in which case **themes** is not used.
* The name of a theme, which will be looked up in the current *themes* object.
* A configuration object with any of `hasUnicode`, `hasColor` or `platform` keys, which will be used to override our guesses when making a default theme selection.

If no theme is selected then a default is picked using a combination of our best guesses at your OS, color support and unicode support.

#### `gauge.setTemplate(template)`

Change the active template; it will be displayed with the next show or pulse.

### Tracking Completion

If you have more than one thing going on that you want to track completion of, you may find the related [are-we-there-yet] helpful. Its `change` event can be wired up to the `show` method to get a more traditional progress bar interface, as sketched below.

[are-we-there-yet]: https://www.npmjs.com/package/are-we-there-yet
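A hedged sketch of that wiring might look like this (the `TrackerGroup` API shown here is from are-we-there-yet's documentation; double-check the event signature against the version you install):

```javascript
var Gauge = require("gauge")
var TrackerGroup = require("are-we-there-yet").TrackerGroup

var gauge = new Gauge()
var top = new TrackerGroup()

// redraw the gauge whenever overall completion changes
top.on("change", function (name, completed) {
  gauge.show(name, completed)
})

var files = top.newItem("files", 10) // 10 units of work
files.completeWork(3)                // gauge now shows roughly 30% for "files"
```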
### THEMES

```
var themes = require('gauge/themes')

// fetch the default color unicode theme for this platform
var ourTheme = themes({hasUnicode: true, hasColor: true})

// fetch the default non-color unicode theme for osx
var ourTheme = themes({hasUnicode: true, hasColor: false, platform: 'darwin'})

// create a new theme based on the color ascii theme for this platform
// that brackets the progress bar with arrows
var ourTheme = themes.newTheme(themes({hasUnicode: false, hasColor: true}), {
  preProgressbar: '→',
  postProgressbar: '←'
})
```

The object returned by `gauge/themes` is an instance of the `ThemeSet` class.

```
var ThemeSet = require('gauge/theme-set')
var themes = new ThemeSet()
// or
var themes = require('gauge/themes')
var mythemes = themes.newThemeset() // creates a new themeset based on the default themes
```

#### themes(opts)
#### themes.getDefault(opts)

The themeset can be called as a function (or via `getDefault`) to fetch the default theme based on platform, unicode and color support.

Options is an object with the following properties:

* **hasUnicode** - If true, fetch a unicode theme, if no unicode theme is available then a non-unicode theme will be used.
* **hasColor** - If true, fetch a color theme, if no color theme is available a non-color theme will be used.
* **platform** (optional) - Defaults to `process.platform`. If no platform match is available then `fallback` is used instead.

If no compatible theme can be found then an error will be thrown with a `code` of `EMISSINGTHEME`.

#### themes.addTheme(themeName, themeObj)
#### themes.addTheme(themeName, [parentTheme], newTheme)

Adds a named theme to the themeset. You can pass in either a theme object, as returned by `themes.newTheme`, or the arguments you'd pass to `themes.newTheme`.

#### themes.getThemeNames()

Return a list of all of the names of the themes in this themeset. Suitable for use in `themes.getTheme(…)`.

#### themes.getTheme(name)

Returns the theme object from this theme set named `name`. If `name` does not exist in this themeset an error will be thrown with a `code` of `EMISSINGTHEME`.

#### themes.setDefault([opts], themeName)

`opts` is an object with the following properties.

* **platform** - Defaults to `'fallback'`. If your theme is platform specific, specify that here with the platform from `process.platform`, eg, `win32`, `darwin`, etc.
* **hasUnicode** - Defaults to `false`. If your theme uses unicode you should set this to true.
* **hasColor** - Defaults to `false`. If your theme uses color you should set this to true.

`themeName` is the name of the theme (as given to `addTheme`) to use for this set of `opts`.

#### themes.newTheme([parentTheme,] newTheme)

Create a new theme object based on `parentTheme`. If no `parentTheme` is provided then a minimal parentTheme that defines functions for rendering the activity indicator (spinner) and progress bar will be defined. (This fallback parent is defined in `gauge/base-theme`.)

newTheme should be a bare object; we'll start by discussing the properties defined by the default themes:

* **preProgressbar** - displayed prior to the progress bar, if the progress bar is displayed.
* **postProgressbar** - displayed after the progress bar, if the progress bar is displayed.
* **progressBarTheme** - The subtheme passed through to the progress bar renderer; it's an object with `complete` and `remaining` properties that are the strings you want repeated for those sections of the progress bar.
* **activityIndicatorTheme** - The theme for the activity indicator (spinner); this can either be a string, in which case each character is a different step, or an array of strings.
* **preSubsection** - Displayed as a separator between the `section` and `subsection` when the latter is printed.

More generally, themes can have any value that would be a valid value when rendering templates. The properties in the theme are used when their name matches a type in the template. Their values can be:

* **strings & numbers** - They'll be included as is
* **function (values, theme, width)** - Should return what you want in your output. *values* is an object with values provided via `gauge.show`, *theme* is the theme specific to this item (see below) or this theme object, and *width* is the number of characters wide your result should be.

There are a couple of special prefixes:

* **pre** - Is shown prior to the property, if it's displayed.
* **post** - Is shown after the property, if it's displayed.

And one special suffix:

* **Theme** - Its value is passed to a function-type item as the theme.

#### themes.addToAllThemes(theme)

This *mixes-in* `theme` into all themes currently defined. It also adds it to the default parent theme for this themeset, so future themes added to this themeset will get the values from `theme` by default.

#### themes.newThemeset()

Copy the current themeset into a new one. This allows you to easily inherit one themeset from another.

### TEMPLATES

A template is an array of objects and strings that, after being evaluated, will be turned into the gauge line. The default template is:

```javascript
[
    {type: 'progressbar', length: 20},
    {type: 'activityIndicator', kerning: 1, length: 1},
    {type: 'section', kerning: 1, default: ''},
    {type: 'subsection', kerning: 1, default: ''}
]
```

The various template elements can either be **plain strings**, in which case they will be included verbatim in the output, or objects with the following properties:

* *type* can be any of the following plus any keys you pass into `gauge.show` plus any keys you have on a custom theme.
  * `section` – What big thing you're working on now.
  * `subsection` – What component of that thing is currently working.
  * `activityIndicator` – Shows a spinner using the `activityIndicatorTheme` from your active theme.
  * `progressbar` – A progress bar representing your current `completed` using the `progressbarTheme` from your active theme.
* *kerning* – Number of spaces that must be between this item and other items, if this item is displayed at all.
* *maxLength* – The maximum length for this element. If its value is longer it will be truncated.
* *minLength* – The minimum length for this element. If its value is shorter it will be padded according to the *align* value.
* *align* – (Default: left) Possible values "left", "right" and "center". Works as you'd expect from word processors.
* *length* – Provides a single value for both *minLength* and *maxLength*. If both *length* and *minLength* or *maxLength* are specified then the latter take precedence.
* *value* – A literal value to use for this template item.
* *default* – A default value to use for this template item if a value wasn't otherwise passed in.

### PLUMBING

This is the super simple, assume nothing, do no magic internals used by gauge to implement its ordinary interface.

```
var Plumbing = require('gauge/plumbing')
var gauge = new Plumbing(theme, template, width)
```

* **theme**: The theme to use.
* **template**: The template to use.
* **width**: How wide your gauge should be

#### `gauge.setTheme(theme)`

Change the active theme.

#### `gauge.setTemplate(template)`

Change the active template.

#### `gauge.setWidth(width)`

Change the width to render at.

#### `gauge.hide()`

Return the string necessary to hide the progress bar.

#### `gauge.hideCursor()`

Return a string to hide the cursor.

#### `gauge.showCursor()`

Return a string to show the cursor.

#### `gauge.show(status)`

Using `status` for values, render the provided template with the theme and return a string that is suitable for printing to update the gauge.

# capability.js - javascript environment capability detection

[![Build Status](https://travis-ci.org/inf3rno/capability.png?branch=master)](https://travis-ci.org/inf3rno/capability)

The capability.js library provides capability detection for different javascript environments.

## Documentation

This documentation is still mostly empty.

### Installation

```bash
npm install capability
```

```bash
bower install capability
```

#### Environment compatibility

The lib requires only basic javascript features, so it will run in every js environment.

#### Requirements

If you want to use the lib in a browser, you'll need a node module loader, e.g. browserify, webpack, etc...

#### Usage

In this documentation I used the lib as follows:

```js
var capability = require("capability");
```

### Capabilities API

#### Defining a capability

You can define a capability by using the `define(name, test)` function.

```js
capability.define("Object.create", function () {
    return Object.create;
});
```

The `name` parameter should contain the identifier of the capability and the `test` parameter should contain a function which can detect the capability. If the capability is supported by the environment, then `test()` should return `true`, otherwise it should return `false`. You don't have to convert the return value into a `Boolean`; the library will do that for you, so you won't have memory leaks because of this.

#### Testing a capability

The `test(name)` function will return a `Boolean` indicating whether the capability is supported by the current environment.

```js
console.log(capability.test("Object.create"));
// true - in recent environments
// false - in pre-ES5 environments without Object.create
```

You can use `capability(name)` instead of `capability.test(name)` if you want shorter code for optional requirements.

#### Checking a capability

The `check(name)` function will throw an Error when the capability is not supported by the current environment.

```js
capability.check("Object.create");
// this will throw an Error in pre-ES5 environments without Object.create
```

#### Checking capability with require and modules

It is possible to check the environment with `require()` by adding a module which calls the `check(name)` function. For the capability definitions in this lib I added such a module for each definition, so you can do for example `require("capability/es5")`. Of course you can do fun stuff if you want, e.g. you can call multiple `check`s from a single `requirements.js` file in your lib, etc., as sketched below.
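A minimal sketch of such a `requirements.js` (the capability names are taken from the Definitions list below; the file name is just a convention):

```js
// requirements.js - fail fast if the environment lacks what this lib needs
var capability = require("capability");

capability.check("Object.create");
capability.check("Object.defineProperty");
capability.check("Array.prototype.forEach");

// elsewhere, e.g. at the top of your main module:
// require("./requirements");
```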
### Definitions Currently the following definitions are supported by the lib: - strict mode - `arguments.callee.caller` - es5 - `Array.prototype.forEach` - `Array.prototype.map` - `Function.prototype.bind` - `Object.create` - `Object.defineProperties` - `Object.defineProperty` - `Object.prototype.hasOwnProperty` - `Error.captureStackTrace` - `Error.prototype.stack` ## License MIT - 2016 Jánszky László Lajos # flatted [![Downloads](https://img.shields.io/npm/dm/flatted.svg)](https://www.npmjs.com/package/flatted) [![Coverage Status](https://coveralls.io/repos/github/WebReflection/flatted/badge.svg?branch=main)](https://coveralls.io/github/WebReflection/flatted?branch=main) [![Build Status](https://travis-ci.com/WebReflection/flatted.svg?branch=main)](https://travis-ci.com/WebReflection/flatted) [![License: ISC](https://img.shields.io/badge/License-ISC-yellow.svg)](https://opensource.org/licenses/ISC) ![WebReflection status](https://offline.report/status/webreflection.svg) ![snow flake](./flatted.jpg) <sup>**Social Media Photo by [Matt Seymour](https://unsplash.com/@mattseymour) on [Unsplash](https://unsplash.com/)**</sup> ## Announcement 📣 There is a standard approach to recursion and more data-types than what JSON allows, and it's part of the [Structured Clone polyfill](https://github.com/ungap/structured-clone/#readme). Beside acting as a polyfill, its `@ungap/structured-clone/json` export provides both `stringify` and `parse`, and it's been tested for being faster than *flatted*, but its produced output is also smaller than *flatted* in general. The *@ungap/structured-clone* module is, in short, a drop in replacement for *flatted*, but it's not compatible with *flatted* specialized syntax. However, if recursion, as well as more data-types, are what you are after, or interesting for your projects/use cases, consider switching to this new module whenever you can 👍 - - - A super light (0.5K) and fast circular JSON parser, directly from the creator of [CircularJSON](https://github.com/WebReflection/circular-json/#circularjson). Now available also for **[PHP](./php/flatted.php)**. ```js npm i flatted ``` Usable via [CDN](https://unpkg.com/flatted) or as regular module. ```js // ESM import {parse, stringify, toJSON, fromJSON} from 'flatted'; // CJS const {parse, stringify, toJSON, fromJSON} = require('flatted'); const a = [{}]; a[0].a = a; a.push(a); stringify(a); // [["1","0"],{"a":"0"}] ``` ## toJSON and fromJSON If you'd like to implicitly survive JSON serialization, these two helpers helps: ```js import {toJSON, fromJSON} from 'flatted'; class RecursiveMap extends Map { static fromJSON(any) { return new this(fromJSON(any)); } toJSON() { return toJSON([...this.entries()]); } } const recursive = new RecursiveMap; const same = {}; same.same = same; recursive.set('same', same); const asString = JSON.stringify(recursive); const asMap = RecursiveMap.fromJSON(JSON.parse(asString)); asMap.get('same') === asMap.get('same').same; // true ``` ## Flatted VS JSON As it is for every other specialized format capable of serializing and deserializing circular data, you should never `JSON.parse(Flatted.stringify(data))`, and you should never `Flatted.parse(JSON.stringify(data))`. The only way this could work is to `Flatted.parse(Flatted.stringify(data))`, as it is also for _CircularJSON_ or any other, otherwise there's no granted data integrity. 
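To make the point above concrete, here is a short sketch (an illustration, not from the upstream docs): only a full *flatted* round-trip restores the circular structure, while feeding *flatted* output to `JSON.parse` only yields the internal index records.

```js
const {parse, stringify} = require('flatted');

const data = {list: []};
data.self = data;       // circular reference
data.list.push(data);   // another one

// Correct: flatted on both ends restores the cycles.
const copy = parse(stringify(data));
console.log(copy.self === copy);    // true
console.log(copy.list[0] === copy); // true

// Wrong: JSON.stringify(data) throws on the circular structure, and
// JSON.parse(stringify(data)) returns flatted's flattened records,
// not an object shaped like the original `data`.
console.log(JSON.parse(stringify(data)));
```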
Also please note this project serializes and deserializes only data compatible with JSON, so that sockets, or anything else with internal classes different from those allowed by the JSON standard, won't be serialized and deserialized as expected.

### New in V1: Exact same JSON API

* Added a [reviver](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#Syntax) parameter to `.parse(string, reviver)` so you can revive your own objects.
* Added a [replacer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#Syntax) and a `space` parameter to `.stringify(object, replacer, space)` for feature parity with the JSON signature.

### Compatibility

All ECMAScript engines compatible with `Map`, `Set`, `Object.keys`, and `Array.prototype.reduce` will work, even if polyfilled.

### How does it work?

While stringifying, all Objects, including Arrays, and strings, are flattened out and replaced with a unique index. `*`

Once parsed, all indexes will be replaced through the flattened collection.

<sup><sub>`*` represented as string to avoid conflicts with numbers</sub></sup>

```js
// logic example
var a = [{one: 1}, {two: '2'}];
a[0].a = a;
// a is the main object, will be at index '0'
// {one: 1} is the second object, index '1'
// {two: '2'} the third, in '2', and it has a string
// which will be found at index '3'

Flatted.stringify(a);
// [["1","2"],{"one":1,"a":"0"},{"two":"3"},"2"]
// a[one,two] {one: 1, a} {two: '2'} '2'
```

write-file-atomic
-----------------

This is an extension for node's `fs.writeFile` that makes its operation atomic and allows you to set ownership (uid/gid of the file).

### var writeFileAtomic = require('write-file-atomic')<br>writeFileAtomic(filename, data, [options], [callback])

* filename **String**
* data **String** | **Buffer**
* options **Object** | **String**
  * chown **Object** default, uid & gid of existing file, if any
    * uid **Number**
    * gid **Number**
  * encoding **String** | **Null** default = 'utf8'
  * fsync **Boolean** default = true
  * mode **Number** default, from existing file, if any
  * tmpfileCreated **Function** called when the tmpfile is created
* callback **Function**

Atomically and asynchronously writes data to a file, replacing the file if it already exists. data can be a string or a buffer.

The file is initially named `filename + "." + murmurhex(__filename, process.pid, ++invocations)`. Note that `require('worker_threads').threadId` is used in addition to `process.pid` if running inside of a worker thread.

If writeFile completes successfully, then, if passed the **chown** option, it will change the ownership of the file. Finally it renames the file back to the filename you specified. If it encounters errors at any of these steps it will attempt to unlink the temporary file and then pass the error back to the caller.

If multiple writes are concurrently issued to the same file, the write operations are put into a queue and serialized in the order they were called, using Promises. Writes to different files are still executed in parallel.

If provided, the **chown** option requires both **uid** and **gid** properties or else you'll get an error. If **chown** is not specified it will default to using the owner of the previous file. To prevent chown from being run you can also pass `false`, in which case the file will be created with the current user's credentials.

If **mode** is not specified, it will default to using the permissions from an existing file, if any.
Expicitly setting this to `false` remove this default, resulting in a file created with the system default permissions. If options is a String, it's assumed to be the **encoding** option. The **encoding** option is ignored if **data** is a buffer. It defaults to 'utf8'. If the **fsync** option is **false**, writeFile will skip the final fsync call. If the **tmpfileCreated** option is specified it will be called with the name of the tmpfile when created. Example: ```javascript writeFileAtomic('message.txt', 'Hello Node', {chown:{uid:100,gid:50}}, function (err) { if (err) throw err; console.log('It\'s saved!'); }); ``` This function also supports async/await: ```javascript (async () => { try { await writeFileAtomic('message.txt', 'Hello Node', {chown:{uid:100,gid:50}}); console.log('It\'s saved!'); } catch (err) { console.error(err); process.exit(1); } })(); ``` ### var writeFileAtomicSync = require('write-file-atomic').sync<br>writeFileAtomicSync(filename, data, [options]) The synchronous version of **writeFileAtomic**. # node-hid - Access USB HID devices from Node.js # [![npm](https://img.shields.io/npm/dm/node-hid.svg?maxAge=2592000)](http://npmjs.com/package/node-hid) [![build macos](https://github.com/node-hid/node-hid/workflows/macos/badge.svg)](https://github.com/node-hid/node-hid/actions?query=workflow%3Amacos) [![build windows](https://github.com/node-hid/node-hid/workflows/windows/badge.svg)](https://github.com/node-hid/node-hid/actions?query=workflow%3Awindows) [![build linux](https://github.com/node-hid/node-hid/workflows/linux/badge.svg)](https://github.com/node-hid/node-hid/actions?query=workflow%3Alinux) * [node-hid - Access USB HID devices from Node.js](#node-hid---access-usb-hid-devices-from-nodejs) * [Platform Support](#platform-support) * [Supported Platforms](#supported-platforms) * [Supported Node versions](#supported-node-versions) * [Supported Electron versions](#supported-electron-versions) * [Installation](#installation) * [Installation Special Cases](#installation-special-cases) * [Examples](#examples) * [Usage](#usage) * [List all HID devices connected](#list-all-hid-devices-connected) * [Cost of HID.devices() and <code>new HID.HID()</code> for detecting device plug/unplug](#cost-of-hiddevices-and-new-hidhid-for-detecting-device-plugunplug) * [Opening a device](#opening-a-device) * [Picking a device from the device list](#picking-a-device-from-the-device-list) * [Reading from a device](#reading-from-a-device) * [Writing to a device](#writing-to-a-device) * [Complete API](#complete-api) * [devices = HID.devices()](#devices--hiddevices) * [HID.setDriverType(type)](#hidsetdrivertypetype) * [device = new HID.HID(path)](#device--new-hidhidpath) * [device = new HID.HID(vid,pid)](#device--new-hidhidvidpid) * [device.on('data', function(data) {} )](#deviceondata-functiondata--) * [device.on('error, function(error) {} )](#deviceonerror-functionerror--) * [device.write(data)](#devicewritedata) * [device.close()](#deviceclose) * [device.pause()](#devicepause) * [device.resume()](#deviceresume) * [device.read(callback)](#devicereadcallback) * [device.readSync()](#devicereadsync) * [device.readTimeout(time_out)](#devicereadtimeouttime_out) * [device.sendFeatureReport(data)](#devicesendfeaturereportdata) * [device.getFeatureReport(report_id, report_length)](#devicegetfeaturereportreport_id-report_length) * [device.setNonBlocking(no_block)](#devicesetnonblockingno_block) * [General notes:](#general-notes) * [Thread safety, Worker threads, Context-aware 
modules](#thread-safety-worker-threads-context-aware-modules) * [Keyboards and Mice](#keyboards-and-mice) * [Mac notes](#mac-notes) * [Windows notes](#windows-notes) * [Xbox 360 Controller on Windows 10](#xbox-360-controller-on-windows-10) * [Linux notes](#linux-notes) * [Selecting driver type](#selecting-driver-type) * [udev device permissions](#udev-device-permissions) * [Compiling from source](#compiling-from-source) * [Linux (kernel 2.6 ) : (install examples shown for Debian/Ubuntu)](#linux-kernel-26--install-examples-shown-for-debianubuntu) * [FreeBSD](#freebsd) * [Mac OS X 10.8 ](#mac-os-x-108) * [Windows 7, 8, 10](#windows-7-8-10) * [Building node-hid from source, for your projects](#building-node-hid-from-source-for-your-projects) * [Build node-hid for <code>node-hid</code> development](#build-node-hid-for-node-hid-development) * [Building node-hid for cross-compiling](#building-node-hid-for-cross-compiling) * [Electron projects using node-hid](#electron-projects-using-node-hid) * [NW.js projects using node-hid](#nwjs-projects-using-node-hid) * [Support](#support) ## Platform Support `node-hid` supports Node.js v6 and upwards. For versions before that, you will need to build from source. The platforms, architectures and node versions `node-hid` supports are the following. In general we try to provide pre-built native library binaries for the most common platforms, Node and Electron versions. We strive to make `node-hid` cross-platform so there's a good chance any combination not listed here will compile and work. ### Supported Platforms ### - Windows x86 (32-bit) (¹) - Windows x64 (64-bit) - Mac OSX 10.9+ - Linux x64 (²) - Linux x86 (¹) - Linux ARM / Raspberry Pi (¹) - Linux MIPSel (¹) - Linux PPC64 (¹) ¹ prebuilt-binaries not provided for these platforms ² prebuilt binary built on Ubuntu 18.04 x64 ### Supported Node versions ### * Node v8 to * Node v14 ### Supported Electron versions ### * Electron v3 to * Electron v11 #### Any newer version of Electron or Node MAY NOT WORK Native modules like `node-hid` require upstream dependencies to be updated to work with newer Node and Electron versions. Unless you need the features in the most recent Electron or Node, use a supported version. ## Installation For most "standard" use cases (node v4.x on mac, linux, windows on a x86 or x64 processor), `node-hid` will install like a standard npm package: ``` npm install node-hid ``` If you install globally, the test program `src/show-devices.js` is installed as `hid-showdevices`. On Linux you can use it to try the difference between `hidraw` and `libusb` driverTypes: ``` $ npm install -g node-hid $ hid-showdevices libusb $ hid-showdevices hidraw ``` ### Installation Special Cases We are using [prebuild](https://github.com/mafintosh/prebuild) to compile and post binaries of the library for most common use cases (Linux, MacOS, Windows on standard processor platforms). If a prebuild is not available, `node-hid` will work, but `npm install node-hid` will compile the binary when you install. For more details on compiler setup, see [Compling from source](#compiling-from-source) below. ## Examples In the `src/` directory, various JavaScript programs can be found that talk to specific devices in some way. 
Some interesting ones: - [`show-devices.js`](./src/show-devices.js) - display all HID devices in the system - [`test-ps3-rumbleled.js`](./src/test-ps3-rumbleled.js) - Read PS3 joystick and control its LED & rumblers - [`test-powermate.js`](./src/test-powermate.js) - Read Griffin PowerMate knob and change its LED - [`test-blink1.js`](./src/test-blink1.js) - Fade colors on blink(1) RGB LED - [`test-bigredbutton.js`](./src/test-bigredbutton.js) - Read Dreamcheeky Big Red Button - [`test-teensyrawhid.js`](./src/test-teensyrawhid.js) - Read/write Teensy running RawHID "Basic" Arduino sketch To try them out, run them with `node src/showdevices.js` from within the node-hid directory. ---- ## Usage ### List all HID devices connected ```js var HID = require('node-hid'); var devices = HID.devices(); ``` `devices` will contain an array of objects, one for each HID device available. Of particular interest are the `vendorId` and `productId`, as they uniquely identify a device, and the `path`, which is needed to open a particular device. Sample output: ```js HID.devices(); { vendorId: 10168, productId: 493, path: 'IOService:/AppleACPIPl...HIDDevice@14210000,0', serialNumber: '20002E8C', manufacturer: 'ThingM', product: 'blink(1) mk2', release: 2, interface: -1, usagePage: 65280, usage: 1 }, { vendorId: 1452, productId: 610, path: 'IOService:/AppleACPIPl...Keyboard@14400000,0', serialNumber: '', manufacturer: 'Apple Inc.', product: 'Apple Internal Keyboard / Trackpad', release: 549, interface: -1, usagePage: 1, usage: 6 }, <and more> ``` #### Cost of `HID.devices()` and `new HID.HID()` for detecting device plug/unplug Both `HID.devices()` and `new HID.HID()` are relatively costly, each causing a USB (and potentially Bluetooth) enumeration. This takes time and OS resources. Doing either can slow down the read/write that you do in parallel with a device, and cause other USB devices to slow down too. This is how USB works. If you are polling `HID.devices()` or doing repeated `new HID.HID(vid,pid)` to detect device plug / unplug, consider instead using [node-usb-detection](https://github.com/MadLittleMods/node-usb-detection). `node-usb-detection` uses OS-specific, non-bus enumeration ways to detect device plug / unplug. ### Opening a device Before a device can be read from or written to, it must be opened. The `path` can be determined by a prior HID.devices() call. Use either the `path` from the list returned by a prior call to `HID.devices()`: ```js var device = new HID.HID(path); ``` or open the first device matching a VID/PID pair: ```js var device = new HID.HID(vid,pid); ``` The `device` variable will contain a handle to the device. If an error occurs opening the device, an exception will be thrown. A `node-hid` device is an `EventEmitter`. While it shares some method names and usage patterns with `Readable` and `Writable` streams, it is not a stream and the semantics vary. For example, `device.write` does not take encoding or callback args and `device.pause` does not do the same thing as `readable.pause`. There is also no `pipe` method. ### Picking a device from the device list If you need to filter down the `HID.devices()` list, you can use standard Javascript array techniques: ```js var devices = HID.devices(); var deviceInfo = devices.find( function(d) { var isTeensy = d.vendorId===0x16C0 && d.productId===0x0486; return isTeensy && d.usagePage===0xFFAB && d.usage===0x200; }); if( deviceInfo ) { var device = new HID.HID( deviceInfo.path ); // ... 
use device } ```

### Reading from a device

To receive FEATURE reports, use `device.getFeatureReport()`. To receive INPUT reports, use `device.on("data",...)`.

A `node-hid` device is an EventEmitter. Reading from a device is performed by registering a "data" event handler:

```js
device.on("data", function(data) {});
```

You can also listen for errors like this:

```js
device.on("error", function(err) {});
```

For FEATURE reports:

```js
var buf = device.getFeatureReport(reportId, reportLength)
```

Notes:
- Reads via `device.on("data")` are asynchronous
- Reads via `device.getFeatureReport()` are synchronous
- To remove an event handler, close the device with `device.close()`
- When there is not yet a data handler, or no data handler exists, data is not read at all -- there is no buffer.

### Writing to a device

To send FEATURE reports, use `device.sendFeatureReport()`. To send OUTPUT reports, use `device.write()`.

All writing is synchronous.

The ReportId is the first byte of the array sent to `device.sendFeatureReport()` or `device.write()`, meaning the array should be one byte bigger than your report. If your device does NOT use numbered reports, set the first byte of the array to 0x00.

```js
device.write([0x00, 0x01, 0x01, 0x05, 0xff, 0xff]);
```

```js
device.sendFeatureReport( [0x01, 'c', 0, 0xff,0x33,0x00, 70,0, 0] );
```

Notes:
- You must send the exact number of bytes for your chosen OUTPUT or FEATURE report.
- Both `device.write()` and `device.sendFeatureReport()` return the number of bytes written + 1.
- For devices using Report Ids, the first byte of the array to `write()` or `sendFeatureReport()` must be the Report Id.

## Complete API

### `devices = HID.devices()`

- Return array listing all connected HID devices

### `HID.setDriverType(type)`

- Linux only
- Sets underlying HID driver type
- `type` can be `"hidraw"` or `"libusb"`, defaults to `"hidraw"`

### `device = new HID.HID(path)`

- Open a HID device at the specified platform-specific path

### `device = new HID.HID(vid,pid)`

- Open first HID device with specific VendorId and ProductId

### `device.on('data', function(data) {} )`

- `data` - Buffer - the data read from the device

### `device.on('error', function(error) {} )`

- `error` - The error Object emitted

### `device.write(data)`

- `data` - the data to be synchronously written to the device, first byte is Report Id or 0x00 if not using numbered reports.
- Returns number of bytes actually written

### `device.close()`

- Closes the device. Subsequent reads will raise an error.

### `device.pause()`

- Pauses reading and the emission of `data` events. This means the underlying device is _silenced_ until resumption -- it is not like pausing a stream, where data continues to accumulate.

### `device.resume()`

- This method will cause the HID device to resume emitting `data` events. If no listeners are registered for the `data` event, data will be lost.
- When a `data` event handler is registered for this HID device, this method will be automatically called.

### `device.read(callback)`

- Low-level function call to initiate an asynchronous read from the device.
- `callback` is of the form `callback(err, data)`

### `device.readSync()`

- Returns an array of numbers (the data read). If an error occurs, an exception will be thrown.

### `device.readTimeout(time_out)`

- `time_out` - timeout in milliseconds
- Returns an array of numbers (the data read). If an error occurs, an exception will be thrown.
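As a rough sketch (not from the upstream docs) of how the read calls above differ in practice — the VID/PID pair below is only a placeholder for whatever device you actually have attached:

```js
var HID = require('node-hid');

// Placeholder VID/PID -- substitute your own device's values.
var device = new HID.HID(0x16c0, 0x0486);

// Low-level, asynchronous one-shot read:
device.read(function (err, data) {
  if (err) throw err;
  console.log('read():', data); // the bytes read from the device
  device.close();
});

// Synchronous alternatives (use one style at a time on a given device):
// var bytes = device.readSync();       // blocks until data arrives
// var timed = device.readTimeout(500); // waits at most 500 ms
```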
### `device.sendFeatureReport(data)`

- `data` - data of HID feature report, with 0th byte being report_id (`[report_id,...]`)
- Returns number of bytes actually written

### `device.getFeatureReport(report_id, report_length)`

- `report_id` - HID feature report id to get
- `report_length` - length of report

### `device.setNonBlocking(no_block)`

- `no_block` - boolean. Set to `true` to enable non-blocking reads
- exactly mirrors `hid_set_nonblocking()` in [`hidapi`](https://github.com/libusb/hidapi)

-----

## General notes:

### Thread safety, Worker threads, Context-aware modules

In general `node-hid` is not thread-safe because the underlying C-library it wraps (`hidapi`) is not thread-safe. However, `node-hid` is now reporting as minimally Context Aware to allow use in Electron v9+. Until `node-hid` (or `hidapi`) is rewritten to be thread-safe, please constrain all accesses to it via a single thread.

### Keyboards and Mice

Most OSes will prevent access to USB HID keyboards or mice, or devices that appear as a keyboard to the OS. This includes many RFID scanners, barcode readers, USB HID scales, and many other devices. This is a security precaution. Otherwise, it would be trivial to build keyloggers. There are non-standard work-arounds for this, but in general you cannot use `node-hid` to access keyboard-like devices.

## Mac notes

See the General notes above about Keyboards.

## Windows notes

See the General notes above about Keyboards.

### Xbox 360 Controller on Windows 10

For reasons similar to mice & keyboards, it appears you can't access this controller on Windows 10.

## Linux notes

See the General notes above about Keyboards.

### udev device permissions

Most Linux distros use `udev` to manage access to physical devices, and USB HID devices are normally owned by the `root` user. To allow non-root access, you must create a udev rule for the device, based on the device's vendorId and productId. This rule is a text file placed in `/etc/udev/rules.d`.

For an example HID device (say, a blink(1) light with vendorId = 0x27b8 and productId = 0x01ed), the rules file to support both `hidraw` and `libusb` would look like:

```
SUBSYSTEM=="input", GROUP="input", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="27b8", ATTRS{idProduct}=="01ed", MODE:="666", GROUP="plugdev"
KERNEL=="hidraw*", ATTRS{idVendor}=="27b8", ATTRS{idProduct}=="01ed", MODE="0666", GROUP="plugdev"
```

Note that the values for idVendor and idProduct must be in hex and lower-case.

Save this file as `/etc/udev/rules.d/51-blink1.rules`, unplug the HID device, and reload the rules with:

```
sudo udevadm control --reload-rules
```

For a complete example, see the [blink1 udev rules](https://github.com/todbot/blink1/blob/master/linux/51-blink1.rules).

### Selecting driver type

By default as of `node-hid@0.7.0`, the [hidraw](https://www.kernel.org/doc/Documentation/hid/hidraw.txt) driver is used to talk to HID devices. Before `node-hid@0.7.0`, the older but less capable [libusb](http://libusb.info/) driver was used. With `hidraw` Linux apps can now see `usage` and `usagePage` attributes of devices.
If you would still like to use the `libusb` driver, then you can do either of the following:

During runtime, you can use `HID.setDriverType('libusb')` immediately after require()-ing `node-hid`:

```js
var HID = require('node-hid');
HID.setDriverType('libusb');
```

If you must have the libusb version and cannot use `setDriverType()`, you can install older node-hid or build from source:

```
npm install node-hid@0.5.7
```

or:

```
npm install node-hid --build-from-source --driver=libusb
```

## Compiling from source

To compile & develop locally or if `prebuild` cannot download a pre-built binary for you, you will need the following compiler tools and libraries:

### Linux (kernel 2.6+) : (install examples shown for Debian/Ubuntu)

* Compilation tools: `apt install build-essential git`
* gcc-4.8+: `apt install gcc-4.8 g++-4.8 && export CXX=g++-4.8`
* libusb-1.0-0 w/headers: `apt install libusb-1.0-0 libusb-1.0-0-dev`
* libudev-dev: `apt install libudev-dev` (Debian/Ubuntu) / `yum install libusbx-devel` (Fedora)

### FreeBSD

* Compilation tools: `pkg install git gcc gmake libiconv node npm`

### Mac OS X 10.8+

* [Xcode](https://itunes.apple.com/us/app/xcode/id497799835?mt=12)

### Windows 7, 8, 10

* Visual C++ compiler and Python 2.7
  * either:
    * `npm install --global windows-build-tools`
    * add `%USERPROFILE%\.windows-build-tools\python27` to `PATH`, like PowerShell: `$env:Path += ";$env:USERPROFILE\.windows-build-tools\python27"`
  * or:
    * [Python 2.7](https://www.python.org/downloads/windows/)
    * [Visual Studio Express 2013 for Desktop](https://www.visualstudio.com/downloads/download-visual-studio-vs#d-2013-express)

### Building `node-hid` from source, for your projects

```
npm install node-hid --build-from-source
```

### Build `node-hid` for `node-hid` development

* check out a copy of this repo
* change into its directory
* update the submodules
* build the node package

For example:

```
git clone https://github.com/node-hid/node-hid.git
cd node-hid                                        # must change into node-hid directory
npm install -g rimraf                              # just so it doesn't get 'clean'ed
npm run prepublishOnly                             # get the needed hidapi submodule
npm install --build-from-source                    # rebuilds the module with C code
npm run showdevices                                # list connected HID devices
node ./src/show-devices.js                         # same as above
```

You may see some warnings from the C compiler as it compiles [hidapi](https://github.com/libusb/hidapi) (the underlying C library `node-hid` uses). This is expected.

For ease of development, there are also the scripts:

```
npm run gypclean      # "node-gyp clean" clean gyp build directory
npm run gypconfigure  # "node-gyp configure" configure makefiles
npm run gypbuild      # "node-gyp build" build native code
```

### Building `node-hid` for cross-compiling

When cross-compiling you need to override `node-hid`'s normal behavior of using Linux `pkg-config` to determine CFLAGS and LDFLAGS for `libusb`. To do this, you can use the `node-gyp` variable `node_hid_no_pkg_config` and then invoke a `node-hid` rebuild with either:

```
node-gyp rebuild --node_hid_no_pkg_config=1
```

or

```
npm run gyprebuild --node_hid_no_pkg_config=1
```

## Electron projects using `node-hid`

In your electron project, add `electron-rebuild` to your `devDependencies`. Then in your package.json `scripts` add:

```
"postinstall": "electron-rebuild"
```

This will cause `npm` to rebuild `node-hid` for the version of Node that is in Electron.
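For instance, a minimal `package.json` for such a project might look roughly like the sketch below; the package name and the version numbers are only placeholders, and the exact versions you pin depend on which Electron release you target:

```
{
  "name": "my-hid-app",
  "main": "main.js",
  "scripts": {
    "postinstall": "electron-rebuild",
    "start": "electron ."
  },
  "dependencies": {
    "node-hid": "^2.0.0"
  },
  "devDependencies": {
    "electron": "^11.0.0",
    "electron-rebuild": "^2.0.0"
  }
}
```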
If you get an error similar to `The module "HID.node" was compiled against a different version of Node.js` then `electron-rebuild` hasn't been run and Electron is trying to use `node-hid` compiled for Node.js and not for Electron. If using `node-hid` with `webpack` or similar bundler, you may need to exclude `node-hid` and other libraries with native code. In webpack, you say which `externals` you have in your `webpack-config.js`: ``` externals: { "node-hid": 'commonjs node-hid' } ``` Examples of `node-hid` in Electron: * [electron-hid-test](https://github.com/todbot/electron-hid-test) - Simple example of using `node-hid`, should track latest Electron release * [electron-hid-toy](https://github.com/todbot/electron-hid-toy) - Simple example of using `node-hid`, showing packaging and signing * [Blink1Control2](https://github.com/todbot/Blink1Control2/) - a complete application, using webpack (e.g. see its [webpack-config.js](https://github.com/todbot/Blink1Control2/blob/master/webpack.config.js)) ## NW.js projects using `node-hid` Without knowing much about NW.js, a quick hacky solution that works is: ``` cd my-nwjs-app npm install node-hid --save npm install -g nw-gyp cd node_modules/node-hid nw-gyp rebuild --target=0.42.3 --arch=x64 // or whatever NW.js version you have cd ../.. nwjs . ``` ## Support Please use the [node-hid github issues page](https://github.com/node-hid/node-hid/issues) for support questions and issues. <img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/img/ajv.svg"> &nbsp; # Ajv JSON schema validator The fastest JSON validator for Node.js and browser. Supports JSON Schema draft-04/06/07/2019-09/2020-12 ([draft-04 support](https://ajv.js.org/json-schema.html#draft-04) requires ajv-draft-04 package) and JSON Type Definition [RFC8927](https://datatracker.ietf.org/doc/rfc8927/). 
[![build](https://github.com/ajv-validator/ajv/workflows/build/badge.svg)](https://github.com/ajv-validator/ajv/actions?query=workflow%3Abuild) [![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv) [![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv) [![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master) [![SimpleX](https://img.shields.io/badge/chat-on%20SimpleX-%2307b4b9)](https://simplex.chat/contact#/?v=1&smp=smp%3A%2F%2Fu2dS9sG8nMNURyZwqASV4yROM28Er0luVTx5X1CsMrU%3D%40smp4.simplex.im%2Fap4lMFzfXF8Hzmh-Vz0WNxp_1jKiOa-h%23MCowBQYDK2VuAyEAcdefddRvDfI8iAuBpztm_J3qFucj8MDZoVs_2EcMTzU%3D) [![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv) [![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin) ## Ajv sponsors [<img src="https://ajv.js.org/img/mozilla.svg" width="45%" alt="Mozilla">](https://www.mozilla.org)<img src="https://ajv.js.org/img/gap.svg" width="9%">[<img src="https://ajv.js.org/img/reserved.svg" width="45%">](https://opencollective.com/ajv) [<img src="https://ajv.js.org/img/microsoft.png" width="31%" alt="Microsoft">](https://opensource.microsoft.com)<img src="https://ajv.js.org/img/gap.svg" width="3%">[<img src="https://ajv.js.org/img/reserved.svg" width="31%">](https://opencollective.com/ajv)<img src="https://ajv.js.org/img/gap.svg" width="3%">[<img src="https://ajv.js.org/img/reserved.svg" width="31%">](https://opencollective.com/ajv) [<img src="https://ajv.js.org/img/retool.svg" width="22.5%" alt="Retool">](https://retool.com/?utm_source=sponsor&utm_campaign=ajv)<img src="https://ajv.js.org/img/gap.svg" width="3%">[<img src="https://ajv.js.org/img/tidelift.svg" width="22.5%" alt="Tidelift">](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=enterprise)<img src="https://ajv.js.org/img/gap.svg" width="3%">[<img src="https://ajv.js.org/img/simplex.svg" width="22.5%" alt="SimpleX">](https://github.com/simplex-chat/simplex-chat)<img src="https://ajv.js.org/img/gap.svg" width="3%">[<img src="https://ajv.js.org/img/reserved.svg" width="22.5%">](https://opencollective.com/ajv) ## Contributing More than 100 people contributed to Ajv, and we would love to have you join the development. We welcome implementing new features that will benefit many users and ideas to improve our documentation. Please review [Contributing guidelines](./CONTRIBUTING.md) and [Code components](https://ajv.js.org/components.html). ## Documentation All documentation is available on the [Ajv website](https://ajv.js.org). 
Some useful site links: - [Getting started](https://ajv.js.org/guide/getting-started.html) - [JSON Schema vs JSON Type Definition](https://ajv.js.org/guide/schema-language.html) - [API reference](https://ajv.js.org/api.html) - [Strict mode](https://ajv.js.org/strict-mode.html) - [Standalone validation code](https://ajv.js.org/standalone.html) - [Security considerations](https://ajv.js.org/security.html) - [Command line interface](https://ajv.js.org/packages/ajv-cli.html) - [Frequently Asked Questions](https://ajv.js.org/faq.html) ## <a name="sponsors"></a>Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin) Since I asked to support Ajv development 40 people and 6 organizations contributed via GitHub and OpenCollective - this support helped receiving the MOSS grant! Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released. Please sponsor Ajv via: - [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it) - [Ajv Open Collective️](https://opencollective.com/ajv) Thank you. #### Open Collective sponsors <a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a> <a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/10/website"><img src="https://opencollective.com/ajv/organization/10/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/11/website"><img src="https://opencollective.com/ajv/organization/11/avatar.svg"></a> ## Performance Ajv generates code to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization. 
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=62,4,1&chs=600x416&chxl=-1:|ajv|@exodus&#x2F;schemasafe|is-my-json-valid|djv|@cfworker&#x2F;json-schema|jsonschema&chd=t:100,69.2,51.5,13.1,5.1,1.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements JSON Schema [draft-06/07/2019-09/2020-12](http://json-schema.org/) standards (draft-04 is supported in v6): - all validation keywords (see [JSON Schema validation keywords](https://ajv.js.org/json-schema.html)) - [OpenAPI](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.3.md) extensions: - NEW: keyword [discriminator](https://ajv.js.org/json-schema.html#discriminator). - keyword [nullable](https://ajv.js.org/json-schema.html#nullable). - full support of remote references (remote schemas have to be added with `addSchema` or compiled to be available) - support of recursive references between schemas - correct string lengths for strings with unicode pairs - JSON Schema [formats](https://ajv.js.org/guide/formats.html) (with [ajv-formats](https://github.com/ajv-validator/ajv-formats) plugin). - [validates schemas against meta-schema](https://ajv.js.org/api.html#api-validateschema) - NEW: supports [JSON Type Definition](https://datatracker.ietf.org/doc/rfc8927/): - all keywords (see [JSON Type Definition schema forms](https://ajv.js.org/json-type-definition.html)) - meta-schema for JTD schemas - "union" keyword and user-defined keywords (can be used inside "metadata" member of the schema) - supports [browsers](https://ajv.js.org/guide/environments.html#browsers) and Node.js 10.x - current - [asynchronous loading](https://ajv.js.org/guide/managing-schemas.html#asynchronous-schema-loading) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](https://ajv.js.org/options.html#allerrors) - [error messages with parameters](https://ajv.js.org/api.html#validation-errors) describing error reasons to allow error message generation - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [removing-additional-properties](https://ajv.js.org/guide/modifying-data.html#removing-additional-properties) - [assigning defaults](https://ajv.js.org/guide/modifying-data.html#assigning-defaults) to missing properties and items - [coercing data](https://ajv.js.org/guide/modifying-data.html#coercing-data-types) to the types specified in `type` keywords - [user-defined keywords](https://ajv.js.org/guide/user-keywords.html) - additional extension keywords with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [\$data reference](https://ajv.js.org/guide/combining-schemas.html#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](https://ajv.js.org/guide/async-validation.html) of user-defined formats and 
keywords ## Install To install version 8: ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://runkit.com/npm/ajv In JavaScript: ```javascript // or ESM/TypeScript import import Ajv from "ajv" // Node.js require: const Ajv = require("ajv") const ajv = new Ajv() // options can be passed, e.g. {allErrors: true} const schema = { type: "object", properties: { foo: {type: "integer"}, bar: {type: "string"} }, required: ["foo"], additionalProperties: false, } const data = { foo: 1, bar: "abc" } const validate = ajv.compile(schema) const valid = validate(data) if (!valid) console.log(validate.errors) ``` Learn how to use Ajv and see more examples in the [Guide: getting started](https://ajv.js.org/guide/getting-started.html) ## Changes history See [https://github.com/ajv-validator/ajv/releases](https://github.com/ajv-validator/ajv/releases) **Please note**: [Changes in version 8.0.0](https://github.com/ajv-validator/ajv/releases/tag/v8.0.0) [Version 7.0.0](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0) [Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0). ## Code of conduct Please review and follow the [Code of conduct](./CODE_OF_CONDUCT.md). Please report any unacceptable behaviour to ajv.validator@gmail.com - it will be reviewed by the project team. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ## Open-source software support Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides a centralised support to open-source software users, in addition to the support provided by software maintainers. ## License [MIT](./LICENSE) file-uri-to-path ================ ### Convert a `file:` URI to a file path [![Build Status](https://travis-ci.org/TooTallNate/file-uri-to-path.svg?branch=master)](https://travis-ci.org/TooTallNate/file-uri-to-path) Accepts a `file:` URI and returns a regular file path suitable for use with the `fs` module functions. Installation ------------ Install with `npm`: ``` bash $ npm install file-uri-to-path ``` Example ------- ``` js var uri2path = require('file-uri-to-path'); uri2path('file://localhost/c|/WINDOWS/clock.avi'); // "c:\\WINDOWS\\clock.avi" uri2path('file:///c|/WINDOWS/clock.avi'); // "c:\\WINDOWS\\clock.avi" uri2path('file://localhost/c:/WINDOWS/clock.avi'); // "c:\\WINDOWS\\clock.avi" uri2path('file://hostname/path/to/the%20file.txt'); // "\\\\hostname\\path\\to\\the file.txt" uri2path('file:///c:/path/to/the%20file.txt'); // "c:\\path\\to\\the file.txt" ``` API --- ### fileUriToPath(String uri) → String License ------- (The MIT License) Copyright (c) 2014 Nathan Rajlich &lt;nathan@tootallnate.net&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Railroad-diagram Generator ========================== This is a small js library for generating railroad diagrams (like what [JSON.org](http://json.org) uses) using SVG. Railroad diagrams are a way of visually representing a grammar in a form that is more readable than using regular expressions or BNF. I think (though I haven't given it a lot of thought yet) that if it's easy to write a context-free grammar for the language, the corresponding railroad diagram will be easy as well. There are several railroad-diagram generators out there, but none of them had the visual appeal I wanted. [Here's an example of how they look!](http://www.xanthir.com/etc/railroad-diagrams/example.html) And [here's an online generator for you to play with and get SVG code from!](http://www.xanthir.com/etc/railroad-diagrams/generator.html) The library now exists in a Python port as well! See the information further down. Details ------- To use the library, just include the js and css files, and then call the Diagram() function. Its arguments are the components of the diagram (Diagram is a special form of Sequence). An alternative to Diagram() is ComplexDiagram() which is used to describe a complex type diagram. Components are either leaves or containers. The leaves: * Terminal(text) or a bare string - represents literal text * NonTerminal(text) - represents an instruction or another production * Comment(text) - a comment * Skip() - an empty line The containers: * Sequence(children) - like simple concatenation in a regex * Choice(index, children) - like | in a regex. The index argument specifies which child is the "normal" choice and should go in the middle * Optional(child, skip) - like ? in a regex. A shorthand for `Choice(1, [Skip(), child])`. If the optional `skip` parameter has the value `"skip"`, it instead puts the Skip() in the straight-line path, for when the "normal" behavior is to omit the item. * OneOrMore(child, repeat) - like + in a regex. The 'repeat' argument is optional, and specifies something that must go between the repetitions. * ZeroOrMore(child, repeat, skip) - like * in a regex. A shorthand for `Optional(OneOrMore(child, repeat))`. The optional `skip` parameter is identical to Optional(). For convenience, each component can be called with or without `new`. If called without `new`, the container components become n-ary; that is, you can say either `new Sequence([A, B])` or just `Sequence(A,B)`. After constructing a Diagram, call `.format(...padding)` on it, specifying 0-4 padding values (just like CSS) for some additional "breathing space" around the diagram (the paddings default to 20px). The result can either be `.toString()`'d for the markup, or `.toSVG()`'d for an `<svg>` element, which can then be immediately inserted to the document. As a convenience, Diagram also has an `.addTo(element)` method, which immediately converts it to SVG and appends it to the referenced element with default paddings. `element` defaults to `document.body`. Options ------- There are a few options you can tweak, at the bottom of the file. 
Just tweak either until the diagram looks like what you want. You can also change the CSS file - feel free to tweak to your heart's content. Note, though, that if you change the text sizes in the CSS, you'll have to go adjust the metrics for the leaf nodes as well. * VERTICAL_SEPARATION - sets the minimum amount of vertical separation between two items. Note that the stroke width isn't counted when computing the separation; this shouldn't be relevant unless you have a very small separation or very large stroke width. * ARC_RADIUS - the radius of the arcs used in the branching containers like Choice. This has a relatively large effect on the size of non-trivial diagrams. Both tight and loose values look good, depending on what you're going for. * DIAGRAM_CLASS - the class set on the root `<svg>` element of each diagram, for use in the CSS stylesheet. * STROKE_ODD_PIXEL_LENGTH - the default stylesheet uses odd pixel lengths for 'stroke'. Due to rasterization artifacts, they look best when the item has been translated half a pixel in both directions. If you change the styling to use a stroke with even pixel lengths, you'll want to set this variable to `false`. * INTERNAL_ALIGNMENT - when some branches of a container are narrower than others, this determines how they're aligned in the extra space. Defaults to "center", but can be set to "left" or "right". Caveats ------- At this early stage, the generator is feature-complete and works as intended, but still has several TODOs: * The font-sizes are hard-coded right now, and the font handling in general is very dumb - I'm just guessing at some metrics that are probably "good enough" rather than measuring things properly. Python Port ----------- In addition to the canonical JS version, the library now exists as a Python library as well. Using it is basically identical. The config variables are globals in the file, and so may be adjusted either manually or via tweaking from inside your program. The main difference from the JS port is how you extract the string from the Diagram. You'll find a `writeSvg(writerFunc)` method on `Diagram`, which takes a callback of one argument and passes it the string form of the diagram. For example, it can be used like `Diagram(...).writeSvg(sys.stdout.write)` to write to stdout. **Note**: the callback will be called multiple times as it builds up the string, not just once with the whole thing. If you need it all at once, consider something like a `StringIO` as an easy way to collect it into a single string. License ------- This document and all associated files in the github project are licensed under [CC0](http://creativecommons.org/publicdomain/zero/1.0/) ![](http://i.creativecommons.org/p/zero/1.0/80x15.png). This means you can reuse, remix, or otherwise appropriate this project for your own use **without restriction**. (The actual legal meaning can be found at the above link.) Don't ask me for permission to use any part of this project, **just use it**. I would appreciate attribution, but that is not required by the license. 
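Tying the Details section above together, here is a small usage sketch (illustrative only, not from the upstream docs; it assumes the library's js and css files are already included in the page):

```js
// A diagram for a comma-separated list of values:  value ("," value)*
var listDiagram = Diagram(
  OneOrMore(NonTerminal('value'), Terminal(','))  // Terminal(',') is the repeat separator
);

// Get the SVG markup as a string, with 10px padding on all sides...
var markup = listDiagram.format(10).toString();

// ...or build another diagram and append it to the page in one step,
// with the default paddings.
Diagram(
  Optional(NonTerminal('sign')),
  OneOrMore(NonTerminal('digit'))
).addTo(document.body);
```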
<p align="center"> <a href="https://assemblyscript.org" target="_blank" rel="noopener"><img width="100" src="https://avatars1.githubusercontent.com/u/28916798?s=200&v=4" alt="AssemblyScript logo"></a> </p> <p align="center"> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3ATest"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Test/master?label=test&logo=github" alt="Test status" /></a> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3APublish"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Publish/master?label=publish&logo=github" alt="Publish status" /></a> <a href="https://www.npmjs.com/package/assemblyscript"><img src="https://img.shields.io/npm/v/assemblyscript.svg?label=compiler&color=007acc&logo=npm" alt="npm compiler version" /></a> <a href="https://www.npmjs.com/package/@assemblyscript/loader"><img src="https://img.shields.io/npm/v/@assemblyscript/loader.svg?label=loader&color=007acc&logo=npm" alt="npm loader version" /></a> <a href="https://discord.gg/assemblyscript"><img src="https://img.shields.io/discord/721472913886281818.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2" alt="Discord online" /></a> </p> <p align="justify"><strong>AssemblyScript</strong> compiles a strict variant of <a href="http://www.typescriptlang.org">TypeScript</a> (basically JavaScript with types) to <a href="http://webassembly.org">WebAssembly</a> using <a href="https://github.com/WebAssembly/binaryen">Binaryen</a>. It generates lean and mean WebAssembly modules while being just an <code>npm install</code> away.</p> <h3 align="center"> <a href="https://assemblyscript.org">About</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/introduction.html">Introduction</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/quick-start.html">Quick&nbsp;start</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/examples.html">Examples</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/development.html">Development&nbsp;instructions</a> </h3> <br> <h2 align="center">Contributors</h2> <p align="center"> <a href="https://assemblyscript.org/#contributors"><img src="https://assemblyscript.org/contributors.svg" alt="Contributor logos" width="720" /></a> </p> <h2 align="center">Thanks to our sponsors!</h2> <p align="justify">Most of the core team members and most contributors do this open source work in their free time. If you use AssemblyScript for a serious task or plan to do so, and you'd like us to invest more time on it, <a href="https://opencollective.com/assemblyscript/donate" target="_blank" rel="noopener">please donate</a> to our <a href="https://opencollective.com/assemblyscript" target="_blank" rel="noopener">OpenCollective</a>. By sponsoring this project, your logo will show up below. Thank you so much for your support!</p> <p align="center"> <a href="https://assemblyscript.org/#sponsors"><img src="https://assemblyscript.org/sponsors.svg" alt="Sponsor logos" width="720" /></a> </p> functional-red-black-tree ========================= A [fully persistent](http://en.wikipedia.org/wiki/Persistent_data_structure) [red-black tree](http://en.wikipedia.org/wiki/Red%E2%80%93black_tree) written 100% in JavaScript. Works both in node.js and in the browser via [browserify](http://browserify.org/). Functional (or fully presistent) data structures allow for non-destructive updates. 
So if you insert an element into the tree, it returns a new tree with the inserted element rather than destructively updating the existing tree in place. Doing this requires using extra memory, and if one were naive it could cost as much as reallocating the entire tree. Instead, this data structure saves some memory by recycling references to previously allocated subtrees. This requires using only O(log(n)) additional memory per update instead of a full O(n) copy. Some advantages of this is that it is possible to apply insertions and removals to the tree while still iterating over previous versions of the tree. Functional and persistent data structures can also be useful in many geometric algorithms like point location within triangulations or ray queries, and can be used to analyze the history of executing various algorithms. This added power though comes at a cost, since it is generally a bit slower to use a functional data structure than an imperative version. However, if your application needs this behavior then you may consider using this module. # Install npm install functional-red-black-tree # Example Here is an example of some basic usage: ```javascript //Load the library var createTree = require("functional-red-black-tree") //Create a tree var t1 = createTree() //Insert some items into the tree var t2 = t1.insert(1, "foo") var t3 = t2.insert(2, "bar") //Remove something var t4 = t3.remove(1) ``` # API ```javascript var createTree = require("functional-red-black-tree") ``` ## Overview - [Tree methods](#tree-methods) - [`var tree = createTree([compare])`](#var-tree-=-createtreecompare) - [`tree.keys`](#treekeys) - [`tree.values`](#treevalues) - [`tree.length`](#treelength) - [`tree.get(key)`](#treegetkey) - [`tree.insert(key, value)`](#treeinsertkey-value) - [`tree.remove(key)`](#treeremovekey) - [`tree.find(key)`](#treefindkey) - [`tree.ge(key)`](#treegekey) - [`tree.gt(key)`](#treegtkey) - [`tree.lt(key)`](#treeltkey) - [`tree.le(key)`](#treelekey) - [`tree.at(position)`](#treeatposition) - [`tree.begin`](#treebegin) - [`tree.end`](#treeend) - [`tree.forEach(visitor(key,value)[, lo[, hi]])`](#treeforEachvisitorkeyvalue-lo-hi) - [`tree.root`](#treeroot) - [Node properties](#node-properties) - [`node.key`](#nodekey) - [`node.value`](#nodevalue) - [`node.left`](#nodeleft) - [`node.right`](#noderight) - [Iterator methods](#iterator-methods) - [`iter.key`](#iterkey) - [`iter.value`](#itervalue) - [`iter.node`](#iternode) - [`iter.tree`](#itertree) - [`iter.index`](#iterindex) - [`iter.valid`](#itervalid) - [`iter.clone()`](#iterclone) - [`iter.remove()`](#iterremove) - [`iter.update(value)`](#iterupdatevalue) - [`iter.next()`](#iternext) - [`iter.prev()`](#iterprev) - [`iter.hasNext`](#iterhasnext) - [`iter.hasPrev`](#iterhasprev) ## Tree methods ### `var tree = createTree([compare])` Creates an empty functional tree * `compare` is an optional comparison function, same semantics as array.sort() **Returns** An empty tree ordered by `compare` ### `tree.keys` A sorted array of all the keys in the tree ### `tree.values` An array array of all the values in the tree ### `tree.length` The number of items in the tree ### `tree.get(key)` Retrieves the value associated to the given key * `key` is the key of the item to look up **Returns** The value of the first node associated to `key` ### `tree.insert(key, value)` Creates a new tree with the new pair inserted. 
* `key` is the key of the item to insert * `value` is the value of the item to insert **Returns** A new tree with `key` and `value` inserted ### `tree.remove(key)` Removes the first item with `key` in the tree * `key` is the key of the item to remove **Returns** A new tree with the given item removed if it exists ### `tree.find(key)` Returns an iterator pointing to the first item in the tree with `key`, otherwise `null`. ### `tree.ge(key)` Find the first item in the tree whose key is `>= key` * `key` is the key to search for **Returns** An iterator at the given element. ### `tree.gt(key)` Finds the first item in the tree whose key is `> key` * `key` is the key to search for **Returns** An iterator at the given element ### `tree.lt(key)` Finds the last item in the tree whose key is `< key` * `key` is the key to search for **Returns** An iterator at the given element ### `tree.le(key)` Finds the last item in the tree whose key is `<= key` * `key` is the key to search for **Returns** An iterator at the given element ### `tree.at(position)` Finds an iterator starting at the given element * `position` is the index at which the iterator gets created **Returns** An iterator starting at position ### `tree.begin` An iterator pointing to the first element in the tree ### `tree.end` An iterator pointing to the last element in the tree ### `tree.forEach(visitor(key,value)[, lo[, hi]])` Walks a visitor function over the nodes of the tree in order. * `visitor(key,value)` is a callback that gets executed on each node. If a truthy value is returned from the visitor, then iteration is stopped. * `lo` is an optional start of the range to visit (inclusive) * `hi` is an optional end of the range to visit (non-inclusive) **Returns** The last value returned by the callback ### `tree.root` Returns the root node of the tree ## Node properties Each node of the tree has the following properties: ### `node.key` The key associated to the node ### `node.value` The value associated to the node ### `node.left` The left subtree of the node ### `node.right` The right subtree of the node ## Iterator methods ### `iter.key` The key of the item referenced by the iterator ### `iter.value` The value of the item referenced by the iterator ### `iter.node` The value of the node at the iterator's current position. `null` is iterator is node valid. ### `iter.tree` The tree associated to the iterator ### `iter.index` Returns the position of this iterator in the sequence. ### `iter.valid` Checks if the iterator is valid ### `iter.clone()` Makes a copy of the iterator ### `iter.remove()` Removes the item at the position of the iterator **Returns** A new binary search tree with `iter`'s item removed ### `iter.update(value)` Updates the value of the node in the tree at this iterator **Returns** A new binary search tree with the corresponding node updated ### `iter.next()` Advances the iterator to the next position ### `iter.prev()` Moves the iterator backward one element ### `iter.hasNext` If true, then the iterator is not at the end of the sequence ### `iter.hasPrev` If true, then the iterator is not at the beginning of the sequence # Credits (c) 2013 Mikola Lysenko. MIT License # levn [![Build Status](https://travis-ci.org/gkz/levn.png)](https://travis-ci.org/gkz/levn) <a name="levn" /> __Light ECMAScript (JavaScript) Value Notation__ Levn is a library which allows you to parse a string into a JavaScript value based on an expected type. It is meant for short amounts of human entered data (eg. config files, command line arguments). 
Levn aims to concisely describe JavaScript values in text, and allow for the extraction and validation of those values. Levn uses [type-check](https://github.com/gkz/type-check) for its type format, and to validate the results. MIT license. Version 0.4.1.

__How is this different than JSON?__ levn is meant to be written by humans only, is (due to the previous point) much more concise, can be validated against supplied types, has regex and date literals, and can easily be extended with custom types. On the other hand, it is probably slower and thus less efficient at transporting large amounts of data, which is fine since this is not its purpose.

    npm install levn

For updates on levn, [follow me on twitter](https://twitter.com/gkzahariev).

## Quick Examples

```js
var parse = require('levn').parse;
parse('Number', '2');      // 2
parse('String', '2');      // '2'
parse('String', 'levn');   // 'levn'
parse('String', 'a b');    // 'a b'
parse('Boolean', 'true');  // true

parse('Date', '#2011-11-11#'); // (Date object)
parse('Date', '2011-11-11');   // (Date object)
parse('RegExp', '/[a-z]/gi');  // /[a-z]/gi
parse('RegExp', 're');         // /re/
parse('Int', '2');             // 2

parse('Number | String', 'str'); // 'str'
parse('Number | String', '2');   // 2

parse('[Number]', '[1,2,3]');                       // [1,2,3]
parse('(String, Boolean)', '(hi, false)');          // ['hi', false]
parse('{a: String, b: Number}', '{a: str, b: 2}');  // {a: 'str', b: 2}

// at the top level, you can omit surrounding delimiters
parse('[Number]', '1,2,3');                      // [1,2,3]
parse('(String, Boolean)', 'hi, false');         // ['hi', false]
parse('{a: String, b: Number}', 'a: str, b: 2'); // {a: 'str', b: 2}

// wildcard - auto choose type
parse('*', '[hi,(null,[42]),{k: true}]'); // ['hi', [null, [42]], {k: true}]
```

## Usage

`require('levn');` returns an object that exposes three properties. `VERSION` is the current version of the library as a string. `parse` and `parsedTypeParse` are functions.

```js
// parse(type, input, options);
parse('[Number]', '1,2,3'); // [1, 2, 3]

// parsedTypeParse(parsedType, input, options);
var parsedType = require('type-check').parseType('[Number]');
parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3]
```

### parse(type, input, options)

`parse` casts the string `input` into a JavaScript value according to the specified `type` in the [type format](https://github.com/gkz/type-check#type-format) (and taking into account the optional `options`) and returns the resulting JavaScript value.

##### arguments
* type - `String` - the type written in the [type format](https://github.com/gkz/type-check#type-format) which to check against
* input - `String` - the value written in the [levn format](#levn-format)
* options - `Maybe Object` - an optional parameter specifying additional [options](#options)

##### returns
`*` - the resulting JavaScript value

##### example
```js
parse('[Number]', '1,2,3'); // [1, 2, 3]
```

### parsedTypeParse(parsedType, input, options)

`parsedTypeParse` casts the string `input` into a JavaScript value according to the specified `type` which has already been parsed (and taking into account the optional `options`) and returns the resulting JavaScript value. You can parse a type using the [type-check](https://github.com/gkz/type-check) library's `parseType` function.
##### arguments
* type - `Object` - the type in the parsed type format which to check against
* input - `String` - the value written in the [levn format](#levn-format)
* options - `Maybe Object` - an optional parameter specifying additional [options](#options)

##### returns
`*` - the resulting JavaScript value

##### example
```js
var parsedType = require('type-check').parseType('[Number]');
parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3]
```

## Levn Format

Levn can use the type information you provide to choose the appropriate value to produce from the input. For the same input, it will choose a different output value depending on the type provided. For example, `parse('Number', '2')` will produce the number `2`, but `parse('String', '2')` will produce the string `"2"`.

If you do not provide type information, and simply use `*`, levn will parse the input according to the unambiguous "explicit" mode, which we will now detail - you can also set the `explicit` option to true manually in the [options](#options).

* `"string"`, `'string'` are parsed as a String, eg. `"a msg"` is `"a msg"`
* `#date#` is parsed as a Date, eg. `#2011-11-11#` is `new Date('2011-11-11')`
* `/regexp/flags` is parsed as a RegExp, eg. `/re/gi` is `/re/gi`
* `undefined`, `null`, `NaN`, `true`, and `false` are all their JavaScript equivalents
* `[element1, element2, etc]` is an Array, and the casting procedure is recursively applied to each element. Eg. `[1,2,3]` is `[1,2,3]`.
* `(element1, element2, etc)` is a tuple, and the casting procedure is recursively applied to each element. Eg. `(1, a)` is `(1, a)` (is `[1, 'a']`).
* `{key1: val1, key2: val2, ...}` is an Object, and the casting procedure is recursively applied to each property. Eg. `{a: 1, b: 2}` is `{a: 1, b: 2}`.
* Any text which does not fall under the above, and which does not contain special characters (`[``]``(``)``{``}``:``,`) is a string, eg. `$12- blah` is `"$12- blah"`.

If you do provide type information, you can make your input more concise as the program already has some information about what it expects. Please see the [type format](https://github.com/gkz/type-check#type-format) section of [type-check](https://github.com/gkz/type-check) for more information about how to specify types. There are some rules about what levn can do with the information:

* If a String is expected, and only a String, all characters of the input (including any special ones) will become part of the output. Eg. `[({})]` is `"[({})]"`, and `"hi"` is `'"hi"'`.
* If a Date is expected, the surrounding `#` can be omitted from date literals. Eg. `2011-11-11` is `new Date('2011-11-11')`.
* If a RegExp is expected, no flags need to be specified, and if the regex does not use any of the special characters, the opening and closing `/` can be omitted - this will have the effect of setting the source of the regex to the input. Eg. `regex` is `/regex/`.
* If an Array is expected, and it is the root node (at the top level), the opening `[` and closing `]` can be omitted. Eg. `1,2,3` is `[1,2,3]`.
* If a tuple is expected, and it is the root node (at the top level), the opening `(` and closing `)` can be omitted. Eg. `1, a` is `(1, a)` (is `[1, 'a']`).
* If an Object is expected, and it is the root node (at the top level), the opening `{` and closing `}` can be omitted. Eg. `a: 1, b: 2` is `{a: 1, b: 2}`.

If you list multiple types (eg.
`Number | String`), it will first attempt to cast to the first type and then validate - if the validation fails it will move on to the next type and so forth, left to right. You must be careful as some types will succeed with any input, such as String. Thus put String at the end of your list (see the sketch at the end of this README). In non-explicit mode, Date and RegExp will succeed with a large variety of input - also be careful with these and list them near the end if not last in your list.

Whitespace between special characters and elements is inconsequential.

## Options

Options is an object. It is an optional parameter to the `parse` and `parsedTypeParse` functions.

### Explicit

A `Boolean`. By default it is `false`.

__Example:__

```js
parse('RegExp', 're', {explicit: false});         // /re/
parse('RegExp', 're', {explicit: true});          // Error: ... does not type check...
parse('RegExp | String', 're', {explicit: true}); // 're'
```

`explicit` sets whether to be in explicit mode or not. Using `*` automatically activates explicit mode. For more information, read the [levn format](#levn-format) section.

### customTypes

An `Object`. Empty `{}` by default.

__Example:__

```js
var options = {
  customTypes: {
    Even: {
      typeOf: 'Number',
      validate: function (x) { return x % 2 === 0; },
      cast: function (x) { return {type: 'Just', value: parseInt(x)}; }
    }
  }
}
parse('Even', '2', options); // 2
parse('Even', '3', options); // Error: Value: "3" does not type check...
```

__Another Example:__

```js
function Person(name, age){
  this.name = name;
  this.age = age;
}
var options = {
  customTypes: {
    Person: {
      typeOf: 'Object',
      validate: function (x) { return x instanceof Person; },
      cast: function (value, options, typesCast) {
        var name, age;
        if ({}.toString.call(value).slice(8, -1) !== 'Object') {
          return {type: 'Nothing'};
        }
        name = typesCast(value.name, [{type: 'String'}], options);
        age = typesCast(value.age, [{type: 'Number'}], options);
        return {type: 'Just', value: new Person(name, age)};
      }
    }
  }
}
parse('Person', '{name: Laura, age: 25}', options); // Person {name: 'Laura', age: 25}
```

`customTypes` is an object whose keys are the name of the types, and whose values are an object with three properties, `typeOf`, `validate`, and `cast`. For more information about `typeOf` and `validate`, please see the [custom types](https://github.com/gkz/type-check#custom-types) section of type-check.

`cast` is a function which receives three arguments: the value in question, options, and the typesCast function. In `cast`, attempt to cast the value into the specified type. If you are successful, return an object in the format `{type: 'Just', value: CAST-VALUE}`; if you know it won't work, return `{type: 'Nothing'}`. You can use the `typesCast` function to cast any child values. Remember to pass `options` to it. In your function you can also check for `options.explicit` and act accordingly.

## Technical About

`levn` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It uses [type-check](https://github.com/gkz/type-check) to both parse types and validate values. It also uses the [prelude.ls](http://preludels.com/) library.
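Rounding out the levn examples, here is a small sketch of the type-ordering rule described in the Levn Format section above. The return values are assumptions consistent with the quick examples earlier in this README (String accepts any input, so it should go last):

```js
var parse = require('levn').parse;

// Number is tried first; 'two' fails Number validation and falls through to String.
parse('Number | String', '2');    // 2
parse('Number | String', 'two');  // 'two'

// With String listed first, Number is never reached, so '2' stays a string.
parse('String | Number', '2');    // '2'
```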
<p align="center"> <a href="http://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # v8flags [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] Get available v8 and Node.js flags. ## Usage ```js const v8flags = require('v8flags'); v8flags(function(err, results) { console.log(results); // [ '--use_strict', // '--es5_readonly', // '--es52_globals', // '--harmony_typeof', // '--harmony_scoping', // '--harmony_modules', // '--harmony_proxies', // '--harmony_collections', // '--harmony', // ... }); ``` ## API ### `v8flags(cb)` Finds the available flags and calls the passed callback with any errors and an array of flag results. ### `v8flags.configfile` The name of the cache file for flags. ### `v8flags.configPath` The filepath location of the `configfile` above. ## License MIT [downloads-image]: http://img.shields.io/npm/dm/v8flags.svg [npm-url]: https://www.npmjs.com/package/v8flags [npm-image]: http://img.shields.io/npm/v/v8flags.svg [travis-url]: https://travis-ci.org/gulpjs/v8flags [travis-image]: http://img.shields.io/travis/gulpjs/v8flags.svg?label=travis-ci [appveyor-url]: https://ci.appveyor.com/project/gulpjs/v8flags [appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/v8flags.svg?label=appveyor [coveralls-url]: https://coveralls.io/r/gulpjs/v8flags [coveralls-image]: http://img.shields.io/coveralls/gulpjs/v8flags/master.svg [gitter-url]: https://gitter.im/gulpjs/gulp [gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg # Console Control Strings A library of cross-platform tested terminal/console command strings for doing things like color and cursor positioning. This is a subset of both ansi and vt100. All control codes included work on both Windows & Unix-like OSes, except where noted. ## Usage ```js var consoleControl = require('console-control-strings') console.log(consoleControl.color('blue','bgRed', 'bold') + 'hi there' + consoleControl.color('reset')) process.stdout.write(consoleControl.goto(75, 10)) ``` ## Why Another? There are tons of libraries similar to this one. I wanted one that was: 1. Very clear about compatibility goals. 2. Could emit, for instance, a start color code without an end one. 3. Returned strings w/o writing to streams. 4. Was not weighed down with other unrelated baggage. ## Functions ### var code = consoleControl.up(_num = 1_) Returns the escape sequence to move _num_ lines up. ### var code = consoleControl.down(_num = 1_) Returns the escape sequence to move _num_ lines down. ### var code = consoleControl.forward(_num = 1_) Returns the escape sequence to move _num_ lines righ. ### var code = consoleControl.back(_num = 1_) Returns the escape sequence to move _num_ lines left. ### var code = consoleControl.nextLine(_num = 1_) Returns the escape sequence to move _num_ lines down and to the beginning of the line. ### var code = consoleControl.previousLine(_num = 1_) Returns the escape sequence to move _num_ lines up and to the beginning of the line. ### var code = consoleControl.eraseData() Returns the escape sequence to erase everything from the current cursor position to the bottom right of the screen. This is line based, so it erases the remainder of the current line and all following lines. 
### var code = consoleControl.eraseLine()

Returns the escape sequence to erase to the end of the current line.

### var code = consoleControl.goto(_x_, _y_)

Returns the escape sequence to move the cursor to the designated position. Note that the origin is _1, 1_ not _0, 0_.

### var code = consoleControl.gotoSOL()

Returns the escape sequence to move the cursor to the beginning of the current line. (That is, it returns a carriage return, `\r`.)

### var code = consoleControl.beep()

Returns the escape sequence to cause the terminal to beep. (That is, it returns unicode character `\x0007`, a Control-G.)

### var code = consoleControl.hideCursor()

Returns the escape sequence to hide the cursor.

### var code = consoleControl.showCursor()

Returns the escape sequence to show the cursor.

### var code = consoleControl.color(_colors = []_)
### var code = consoleControl.color(_color1_, _color2_, _…_, _colorn_)

Returns the escape sequence to set the current terminal display attributes (mostly colors). Arguments can either be a list of attributes or an array of attributes. The difference between passing in an array or list of colors and calling `.color` separately for each one is that in the former case a single escape sequence will be produced, whereas in the latter each change will have its own distinct escape sequence. Each attribute can be one of:

* Reset:
  * **reset** – Reset all attributes to the terminal default.
* Styles:
  * **bold** – Display text as bold. In some terminals this means using a bold font, in others this means changing the color. In some it means both.
  * **italic** – Display text as italic. This is not available in most Windows terminals.
  * **underline** – Underline text. This is not available in most Windows Terminals.
  * **inverse** – Invert the foreground and background colors.
  * **stopBold** – Do not display text as bold.
  * **stopItalic** – Do not display text as italic.
  * **stopUnderline** – Do not underline text.
  * **stopInverse** – Do not invert foreground and background.
* Colors:
  * **white**
  * **black**
  * **blue**
  * **cyan**
  * **green**
  * **magenta**
  * **red**
  * **yellow**
  * **grey** / **brightBlack**
  * **brightRed**
  * **brightGreen**
  * **brightYellow**
  * **brightBlue**
  * **brightMagenta**
  * **brightCyan**
  * **brightWhite**
* Background Colors:
  * **bgWhite**
  * **bgBlack**
  * **bgBlue**
  * **bgCyan**
  * **bgGreen**
  * **bgMagenta**
  * **bgRed**
  * **bgYellow**
  * **bgGrey** / **bgBrightBlack**
  * **bgBrightRed**
  * **bgBrightGreen**
  * **bgBrightYellow**
  * **bgBrightBlue**
  * **bgBrightMagenta**
  * **bgBrightCyan**
  * **bgBrightWhite**

long.js
=======

A Long class for representing a 64 bit two's-complement integer value derived from the [Closure Library](https://github.com/google/closure-library) for stand-alone use and extended with unsigned support.

[![Build Status](https://travis-ci.org/dcodeIO/long.js.svg)](https://travis-ci.org/dcodeIO/long.js)

Background
----------

As of [ECMA-262 5th Edition](http://ecma262-5.com/ELS5_HTML.htm#Section_8.5), "all the positive and negative integers whose magnitude is no greater than 2<sup>53</sup> are representable in the Number type", which is "representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic". The [maximum safe integer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) in JavaScript is 2<sup>53</sup>-1.
Example: 2<sup>64</sup>-1 is 1844674407370955**1615** but in JavaScript it evaluates to 1844674407370955**2000**. Furthermore, bitwise operators in JavaScript "deal only with integers in the range −2<sup>31</sup> through 2<sup>31</sup>−1, inclusive, or in the range 0 through 2<sup>32</sup>−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2<sup>32</sup> integer values." In some use cases, however, it is required to be able to reliably work with and perform bitwise operations on the full 64 bits. This is where long.js comes into play. Usage ----- The class is compatible with CommonJS and AMD loaders and is exposed globally as `Long` if neither is available. ```javascript var Long = require("long"); var longVal = new Long(0xFFFFFFFF, 0x7FFFFFFF); console.log(longVal.toString()); ... ``` API --- ### Constructor * new **Long**(low: `number`, high: `number`, unsigned?: `boolean`)<br /> Constructs a 64 bit two's-complement integer, given its low and high 32 bit values as *signed* integers. See the from* functions below for more convenient ways of constructing Longs. ### Fields * Long#**low**: `number`<br /> The low 32 bits as a signed value. * Long#**high**: `number`<br /> The high 32 bits as a signed value. * Long#**unsigned**: `boolean`<br /> Whether unsigned or not. ### Constants * Long.**ZERO**: `Long`<br /> Signed zero. * Long.**ONE**: `Long`<br /> Signed one. * Long.**NEG_ONE**: `Long`<br /> Signed negative one. * Long.**UZERO**: `Long`<br /> Unsigned zero. * Long.**UONE**: `Long`<br /> Unsigned one. * Long.**MAX_VALUE**: `Long`<br /> Maximum signed value. * Long.**MIN_VALUE**: `Long`<br /> Minimum signed value. * Long.**MAX_UNSIGNED_VALUE**: `Long`<br /> Maximum unsigned value. ### Utility * Long.**isLong**(obj: `*`): `boolean`<br /> Tests if the specified object is a Long. * Long.**fromBits**(lowBits: `number`, highBits: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the 64 bit integer that comes by concatenating the given low and high bits. Each is assumed to use 32 bits. * Long.**fromBytes**(bytes: `number[]`, unsigned?: `boolean`, le?: `boolean`): `Long`<br /> Creates a Long from its byte representation. * Long.**fromBytesLE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its little endian byte representation. * Long.**fromBytesBE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its big endian byte representation. * Long.**fromInt**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given 32 bit integer value. * Long.**fromNumber**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given value, provided that it is a finite number. Otherwise, zero is returned. * Long.**fromString**(str: `string`, unsigned?: `boolean`, radix?: `number`)<br /> Long.**fromString**(str: `string`, radix: `number`)<br /> Returns a Long representation of the given string, written using the specified radix. * Long.**fromValue**(val: `*`, unsigned?: `boolean`): `Long`<br /> Converts the specified value to a Long using the appropriate from* function for its type. ### Methods * Long#**add**(addend: `Long | number | string`): `Long`<br /> Returns the sum of this and the specified Long. * Long#**and**(other: `Long | number | string`): `Long`<br /> Returns the bitwise AND of this Long and the specified. 
* Long#**compare**/**comp**(other: `Long | number | string`): `number`<br /> Compares this Long's value with the specified's. Returns `0` if they are the same, `1` if the this is greater and `-1` if the given one is greater. * Long#**divide**/**div**(divisor: `Long | number | string`): `Long`<br /> Returns this Long divided by the specified. * Long#**equals**/**eq**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value equals the specified's. * Long#**getHighBits**(): `number`<br /> Gets the high 32 bits as a signed integer. * Long#**getHighBitsUnsigned**(): `number`<br /> Gets the high 32 bits as an unsigned integer. * Long#**getLowBits**(): `number`<br /> Gets the low 32 bits as a signed integer. * Long#**getLowBitsUnsigned**(): `number`<br /> Gets the low 32 bits as an unsigned integer. * Long#**getNumBitsAbs**(): `number`<br /> Gets the number of bits needed to represent the absolute value of this Long. * Long#**greaterThan**/**gt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than the specified's. * Long#**greaterThanOrEqual**/**gte**/**ge**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than or equal the specified's. * Long#**isEven**(): `boolean`<br /> Tests if this Long's value is even. * Long#**isNegative**(): `boolean`<br /> Tests if this Long's value is negative. * Long#**isOdd**(): `boolean`<br /> Tests if this Long's value is odd. * Long#**isPositive**(): `boolean`<br /> Tests if this Long's value is positive. * Long#**isZero**/**eqz**(): `boolean`<br /> Tests if this Long's value equals zero. * Long#**lessThan**/**lt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than the specified's. * Long#**lessThanOrEqual**/**lte**/**le**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than or equal the specified's. * Long#**modulo**/**mod**/**rem**(divisor: `Long | number | string`): `Long`<br /> Returns this Long modulo the specified. * Long#**multiply**/**mul**(multiplier: `Long | number | string`): `Long`<br /> Returns the product of this and the specified Long. * Long#**negate**/**neg**(): `Long`<br /> Negates this Long's value. * Long#**not**(): `Long`<br /> Returns the bitwise NOT of this Long. * Long#**notEquals**/**neq**/**ne**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value differs from the specified's. * Long#**or**(other: `Long | number | string`): `Long`<br /> Returns the bitwise OR of this Long and the specified. * Long#**shiftLeft**/**shl**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits shifted to the left by the given amount. * Long#**shiftRight**/**shr**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits arithmetically shifted to the right by the given amount. * Long#**shiftRightUnsigned**/**shru**/**shr_u**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits logically shifted to the right by the given amount. * Long#**subtract**/**sub**(subtrahend: `Long | number | string`): `Long`<br /> Returns the difference of this and the specified Long. * Long#**toBytes**(le?: `boolean`): `number[]`<br /> Converts this Long to its byte representation. * Long#**toBytesLE**(): `number[]`<br /> Converts this Long to its little endian byte representation. * Long#**toBytesBE**(): `number[]`<br /> Converts this Long to its big endian byte representation. 
* Long#**toInt**(): `number`<br /> Converts the Long to a 32 bit integer, assuming it is a 32 bit integer. * Long#**toNumber**(): `number`<br /> Converts the Long to a the nearest floating-point representation of this value (double, 53 bit mantissa). * Long#**toSigned**(): `Long`<br /> Converts this Long to signed. * Long#**toString**(radix?: `number`): `string`<br /> Converts the Long to a string written in the specified radix. * Long#**toUnsigned**(): `Long`<br /> Converts this Long to unsigned. * Long#**xor**(other: `Long | number | string`): `Long`<br /> Returns the bitwise XOR of this Long and the given one. Building -------- To build an UMD bundle to `dist/long.js`, run: ``` $> npm install $> npm run build ``` Running the [tests](./tests): ``` $> npm test ``` # mustache.js - Logic-less {{mustache}} templates with JavaScript > What could be more logical awesome than no logic at all? [![Build Status](https://travis-ci.org/janl/mustache.js.svg?branch=master)](https://travis-ci.org/janl/mustache.js) [mustache.js](http://github.com/janl/mustache.js) is a zero-dependency implementation of the [mustache](http://mustache.github.com/) template system in JavaScript. [Mustache](http://mustache.github.com/) is a logic-less template syntax. It can be used for HTML, config files, source code - anything. It works by expanding tags in a template using values provided in a hash or object. We call it "logic-less" because there are no if statements, else clauses, or for loops. Instead there are only tags. Some tags are replaced with a value, some nothing, and others a series of values. For a language-agnostic overview of mustache's template syntax, see the `mustache(5)` [manpage](http://mustache.github.com/mustache.5.html). ## Where to use mustache.js? You can use mustache.js to render mustache templates anywhere you can use JavaScript. This includes web browsers, server-side environments such as [Node.js](http://nodejs.org/), and [CouchDB](http://couchdb.apache.org/) views. mustache.js ships with support for the [CommonJS](http://www.commonjs.org/) module API, the [Asynchronous Module Definition](https://github.com/amdjs/amdjs-api/wiki/AMD) API (AMD) and [ECMAScript modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules). In addition to being a package to be used programmatically, you can use it as a [command line tool](#command-line-tool). And this will be your templates after you use Mustache: !['stache](https://cloud.githubusercontent.com/assets/288977/8779228/a3cf700e-2f02-11e5-869a-300312fb7a00.gif) ## Install You can get Mustache via [npm](http://npmjs.com). ```bash $ npm install mustache --save ``` ## Usage Below is a quick example how to use mustache.js: ```js var view = { title: "Joe", calc: function () { return 2 + 4; } }; var output = Mustache.render("{{title}} spends {{calc}}", view); ``` In this example, the `Mustache.render` function takes two parameters: 1) the [mustache](http://mustache.github.com/) template and 2) a `view` object that contains the data and code needed to render the template. ## Templates A [mustache](http://mustache.github.com/) template is a string that contains any number of mustache tags. Tags are indicated by the double mustaches that surround them. `{{person}}` is a tag, as is `{{#person}}`. In both examples we refer to `person` as the tag's key. There are several types of tags available in mustache.js, described below. 
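As a quick illustration before the tag types are described below, here is a minimal sketch (the template and view are made up) combining a variable tag with a section tag over an array, rendered with `Mustache.render` as in the usage example above:

```js
var Mustache = require('mustache');

// {{title}} is a variable tag; {{#items}}...{{/items}} is a section over an array,
// and {{.}} refers to the current item on each iteration.
var template = '{{title}}: {{#items}}{{.}} {{/items}}';
var output = Mustache.render(template, { title: 'Fruit', items: ['apple', 'pear'] });
console.log(output); // "Fruit: apple pear "
```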
There are several techniques that can be used to load templates and hand them to mustache.js, here are two of them: #### Include Templates If you need a template for a dynamic part in a static website, you can consider including the template in the static HTML file to avoid loading templates separately. Here's a small example: ```js // file: render.js function renderHello() { var template = document.getElementById('template').innerHTML; var rendered = Mustache.render(template, { name: 'Luke' }); document.getElementById('target').innerHTML = rendered; } ``` ```html <html> <body onload="renderHello()"> <div id="target">Loading...</div> <script id="template" type="x-tmpl-mustache"> Hello {{ name }}! </script> <script src="https://unpkg.com/mustache@latest"></script> <script src="render.js"></script> </body> </html> ``` #### Load External Templates If your templates reside in individual files, you can load them asynchronously and render them when they arrive. Another example using [fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch): ```js function renderHello() { fetch('template.mustache') .then((response) => response.text()) .then((template) => { var rendered = Mustache.render(template, { name: 'Luke' }); document.getElementById('target').innerHTML = rendered; }); } ``` ### Variables The most basic tag type is a simple variable. A `{{name}}` tag renders the value of the `name` key in the current context. If there is no such key, nothing is rendered. All variables are HTML-escaped by default. If you want to render unescaped HTML, use the triple mustache: `{{{name}}}`. You can also use `&` to unescape a variable. If you'd like to change HTML-escaping behavior globally (for example, to template non-HTML formats), you can override Mustache's escape function. For example, to disable all escaping: `Mustache.escape = function(text) {return text;};`. If you want `{{name}}` _not_ to be interpreted as a mustache tag, but rather to appear exactly as `{{name}}` in the output, you must change and then restore the default delimiter. See the [Custom Delimiters](#custom-delimiters) section for more information. View: ```json { "name": "Chris", "company": "<b>GitHub</b>" } ``` Template: ``` * {{name}} * {{age}} * {{company}} * {{{company}}} * {{&company}} {{=<% %>=}} * {{company}} <%={{ }}=%> ``` Output: ```html * Chris * * &lt;b&gt;GitHub&lt;/b&gt; * <b>GitHub</b> * <b>GitHub</b> * {{company}} ``` JavaScript's dot notation may be used to access keys that are properties of objects in a view. View: ```json { "name": { "first": "Michael", "last": "Jackson" }, "age": "RIP" } ``` Template: ```html * {{name.first}} {{name.last}} * {{age}} ``` Output: ```html * Michael Jackson * RIP ``` ### Sections Sections render blocks of text zero or more times, depending on the value of the key in the current context. A section begins with a pound and ends with a slash. That is, `{{#person}}` begins a `person` section, while `{{/person}}` ends it. The text between the two tags is referred to as that section's "block". The behavior of the section is determined by the value of the key. #### False Values or Empty Lists If the `person` key does not exist, or exists and has a value of `null`, `undefined`, `false`, `0`, or `NaN`, or is an empty string or an empty list, the block will not be rendered. View: ```json { "person": false } ``` Template: ```html Shown. {{#person}} Never shown! {{/person}} ``` Output: ```html Shown. 
``` #### Non-Empty Lists If the `person` key exists and is not `null`, `undefined`, or `false`, and is not an empty list the block will be rendered one or more times. When the value is a list, the block is rendered once for each item in the list. The context of the block is set to the current item in the list for each iteration. In this way we can loop over collections. View: ```json { "stooges": [ { "name": "Moe" }, { "name": "Larry" }, { "name": "Curly" } ] } ``` Template: ```html {{#stooges}} <b>{{name}}</b> {{/stooges}} ``` Output: ```html <b>Moe</b> <b>Larry</b> <b>Curly</b> ``` When looping over an array of strings, a `.` can be used to refer to the current item in the list. View: ```json { "musketeers": ["Athos", "Aramis", "Porthos", "D'Artagnan"] } ``` Template: ```html {{#musketeers}} * {{.}} {{/musketeers}} ``` Output: ```html * Athos * Aramis * Porthos * D'Artagnan ``` If the value of a section variable is a function, it will be called in the context of the current item in the list on each iteration. View: ```js { "beatles": [ { "firstName": "John", "lastName": "Lennon" }, { "firstName": "Paul", "lastName": "McCartney" }, { "firstName": "George", "lastName": "Harrison" }, { "firstName": "Ringo", "lastName": "Starr" } ], "name": function () { return this.firstName + " " + this.lastName; } } ``` Template: ```html {{#beatles}} * {{name}} {{/beatles}} ``` Output: ```html * John Lennon * Paul McCartney * George Harrison * Ringo Starr ``` #### Functions If the value of a section key is a function, it is called with the section's literal block of text, un-rendered, as its first argument. The second argument is a special rendering function that uses the current view as its view argument. It is called in the context of the current view object. View: ```js { "name": "Tater", "bold": function () { return function (text, render) { return "<b>" + render(text) + "</b>"; } } } ``` Template: ```html {{#bold}}Hi {{name}}.{{/bold}} ``` Output: ```html <b>Hi Tater.</b> ``` ### Inverted Sections An inverted section opens with `{{^section}}` instead of `{{#section}}`. The block of an inverted section is rendered only if the value of that section's tag is `null`, `undefined`, `false`, *falsy* or an empty list. View: ```json { "repos": [] } ``` Template: ```html {{#repos}}<b>{{name}}</b>{{/repos}} {{^repos}}No repos :({{/repos}} ``` Output: ```html No repos :( ``` ### Comments Comments begin with a bang and are ignored. The following template: ```html <h1>Today{{! ignore me }}.</h1> ``` Will render as follows: ```html <h1>Today.</h1> ``` Comments may contain newlines. ### Partials Partials begin with a greater than sign, like {{> box}}. Partials are rendered at runtime (as opposed to compile time), so recursive partials are possible. Just avoid infinite loops. They also inherit the calling context. Whereas in ERB you may have this: ```html+erb <%= partial :next_more, :start => start, :size => size %> ``` Mustache requires only this: ```html {{> next_more}} ``` Why? Because the `next_more.mustache` file will inherit the `size` and `start` variables from the calling context. In this way you may want to think of partials as includes, imports, template expansion, nested templates, or subtemplates, even though those aren't literally the case here. 
For example, this template and partial: base.mustache: <h2>Names</h2> {{#names}} {{> user}} {{/names}} user.mustache: <strong>{{name}}</strong> Can be thought of as a single, expanded template: ```html <h2>Names</h2> {{#names}} <strong>{{name}}</strong> {{/names}} ``` In mustache.js an object of partials may be passed as the third argument to `Mustache.render`. The object should be keyed by the name of the partial, and its value should be the partial text. ```js Mustache.render(template, view, { user: userTemplate }); ``` ### Custom Delimiters Custom delimiters can be used in place of `{{` and `}}` by setting the new values in JavaScript or in templates. #### Setting in JavaScript The `Mustache.tags` property holds an array consisting of the opening and closing tag values. Set custom values by passing a new array of tags to `render()`, which gets honored over the default values, or by overriding the `Mustache.tags` property itself: ```js var customTags = [ '<%', '%>' ]; ``` ##### Pass Value into Render Method ```js Mustache.render(template, view, {}, customTags); ``` ##### Override Tags Property ```js Mustache.tags = customTags; // Subsequent parse() and render() calls will use customTags ``` #### Setting in Templates Set Delimiter tags start with an equals sign and change the tag delimiters from `{{` and `}}` to custom strings. Consider the following contrived example: ```html+erb * {{ default_tags }} {{=<% %>=}} * <% erb_style_tags %> <%={{ }}=%> * {{ default_tags_again }} ``` Here we have a list with three items. The first item uses the default tag style, the second uses ERB style as defined by the Set Delimiter tag, and the third returns to the default style after yet another Set Delimiter declaration. According to [ctemplates](https://htmlpreview.github.io/?https://raw.githubusercontent.com/OlafvdSpek/ctemplate/master/doc/howto.html), this "is useful for languages like TeX, where double-braces may occur in the text and are awkward to use for markup." Custom delimiters may not contain whitespace or the equals sign. ## Pre-parsing and Caching Templates By default, when mustache.js first parses a template it keeps the full parsed token tree in a cache. The next time it sees that same template it skips the parsing step and renders the template much more quickly. If you'd like, you can do this ahead of time using `mustache.parse`. ```js Mustache.parse(template); // Then, sometime later. Mustache.render(template, view); ``` ## Command line tool mustache.js is shipped with a Node.js based command line tool. It might be installed as a global tool on your computer to render a mustache template of some kind ```bash $ npm install -g mustache $ mustache dataView.json myTemplate.mustache > output.html ``` also supports stdin. ```bash $ cat dataView.json | mustache - myTemplate.mustache > output.html ``` or as a package.json `devDependency` in a build process maybe? ```bash $ npm install mustache --save-dev ``` ```json { "scripts": { "build": "mustache dataView.json myTemplate.mustache > public/output.html" } } ``` ```bash $ npm run build ``` The command line tool is basically a wrapper around `Mustache.render` so you get all the features. 
If your templates use partials you should pass paths to partials using `-p` flag: ```bash $ mustache -p path/to/partial1.mustache -p path/to/partial2.mustache dataView.json myTemplate.mustache ``` ## Plugins for JavaScript Libraries mustache.js may be built specifically for several different client libraries, including the following: - [jQuery](http://jquery.com/) - [MooTools](http://mootools.net/) - [Dojo](http://www.dojotoolkit.org/) - [YUI](http://developer.yahoo.com/yui/) - [qooxdoo](http://qooxdoo.org/) These may be built using [Rake](http://rake.rubyforge.org/) and one of the following commands: ```bash $ rake jquery $ rake mootools $ rake dojo $ rake yui3 $ rake qooxdoo ``` ## TypeScript Since the source code of this package is written in JavaScript, we follow the [TypeScript publishing docs](https://www.typescriptlang.org/docs/handbook/declaration-files/publishing.html) preferred approach by having type definitions available via [@types/mustache](https://www.npmjs.com/package/@types/mustache). ## Testing In order to run the tests you'll need to install [Node.js](http://nodejs.org/). You also need to install the sub module containing [Mustache specifications](http://github.com/mustache/spec) in the project root. ```bash $ git submodule init $ git submodule update ``` Install dependencies. ```bash $ npm install ``` Then run the tests. ```bash $ npm test ``` The test suite consists of both unit and integration tests. If a template isn't rendering correctly for you, you can make a test for it by doing the following: 1. Create a template file named `mytest.mustache` in the `test/_files` directory. Replace `mytest` with the name of your test. 2. Create a corresponding view file named `mytest.js` in the same directory. This file should contain a JavaScript object literal enclosed in parentheses. See any of the other view files for an example. 3. Create a file with the expected output in `mytest.txt` in the same directory. Then, you can run the test with: ```bash $ TEST=mytest npm run test-render ``` ### Browser tests Browser tests are not included in `npm test` as they run for too long, although they are ran automatically on Travis when merged into master. Run browser tests locally in any browser: ```bash $ npm run test-browser-local ``` then point your browser to `http://localhost:8080/__zuul` ## Who uses mustache.js? An updated list of mustache.js users is kept [on the Github wiki](https://github.com/janl/mustache.js/wiki/Beard-Competition). Add yourself or your company if you use mustache.js! ## Contributing mustache.js is a mature project, but it continues to actively invite maintainers. You can help out a high-profile project that is used in a lot of places on the web. No big commitment required, if all you do is review a single [Pull Request](https://github.com/janl/mustache.js/pulls), you are a maintainer. And a hero. 
### Your First Contribution - review a [Pull Request](https://github.com/janl/mustache.js/pulls) - fix an [Issue](https://github.com/janl/mustache.js/issues) - update the [documentation](https://github.com/janl/mustache.js#usage) - make a website - write a tutorial ## Thanks mustache.js wouldn't kick ass if it weren't for these fine souls: * Chris Wanstrath / defunkt * Alexander Lang / langalex * Sebastian Cohnen / tisba * J Chris Anderson / jchris * Tom Robinson / tlrobinson * Aaron Quint / quirkey * Douglas Crockford * Nikita Vasilyev / NV * Elise Wood / glytch * Damien Mathieu / dmathieu * Jakub Kuźma / qoobaa * Will Leinweber / will * dpree * Jason Smith / jhs * Aaron Gibralter / agibralter * Ross Boucher / boucher * Matt Sanford / mzsanford * Ben Cherry / bcherry * Michael Jackson / mjackson * Phillip Johnsen / phillipj * David da Silva Contín / dasilvacontin Compiler frontend for node.js ============================= Usage ----- For an up to date list of available command line options, see: ``` $> asc --help ``` API --- The API accepts the same options as the CLI but also lets you override stdout and stderr and/or provide a callback. Example: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { asc.main([ "myModule.ts", "--binaryFile", "myModule.wasm", "--optimize", "--sourceMap", "--measure" ], { stdout: process.stdout, stderr: process.stderr }, function(err) { if (err) throw err; ... }); }); ``` Available command line options can also be obtained programmatically: ```js const options = require("assemblyscript/cli/asc.json"); ... ``` You can also compile a source string directly, for example in a browser environment: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { const { binary, text, stdout, stderr } = asc.compileString(`...`, { optimize: 2 }); }); ... ``` # rc The non-configurable configuration loader for lazy people. ## Usage The only option is to pass rc the name of your app, and your default configuration. ```javascript var conf = require('rc')(appname, { //defaults go here. port: 2468, //defaults which are objects will be merged, not replaced views: { engine: 'jade' } }); ``` `rc` will return your configuration options merged with the defaults you specify. If you pass in a predefined defaults object, it will be mutated: ```javascript var conf = {}; require('rc')(appname, conf); ``` If `rc` finds any config files for your app, the returned config object will have a `configs` array containing their paths: ```javascript var appCfg = require('rc')(appname, conf); appCfg.configs[0] // /etc/appnamerc appCfg.configs[1] // /home/dominictarr/.config/appname appCfg.config // same as appCfg.configs[appCfg.configs.length - 1] ``` ## Standards Given your application name (`appname`), rc will look in all the obvious places for configuration. * command line arguments, parsed by minimist _(e.g. `--foo baz`, also nested: `--foo.bar=baz`)_ * environment variables prefixed with `${appname}_` * or use "\_\_" to indicate nested properties <br/> _(e.g. `appname_foo__bar__baz` => `foo.bar.baz`)_ * if you passed an option `--config file` then from that file * a local `.${appname}rc` or the first found looking in `./ ../ ../../ ../../../` etc. * `$HOME/.${appname}rc` * `$HOME/.${appname}/config` * `$HOME/.config/${appname}` * `$HOME/.config/${appname}/config` * `/etc/${appname}rc` * `/etc/${appname}/config` * the defaults object you passed in. 
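To illustrate the environment-variable rule in the list above, here is a hypothetical run for an app named `myapp` (the variable names and values are made up; note that values arriving via the environment are strings):

```js
// index.js -- run as:  myapp_port=8080 myapp_foo__bar=baz node index.js
var conf = require('rc')('myapp', { port: 12345 });

// "__" in the env variable name becomes a nested property,
// so the run above logs:  8080 { bar: 'baz' }
console.log(conf.port, conf.foo);
```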
All configuration sources that were found will be flattened into one object, so that sources **earlier** in this list override later ones. ## Configuration File Formats Configuration files (e.g. `.appnamerc`) may be in either [json](http://json.org/example) or [ini](http://en.wikipedia.org/wiki/INI_file) format. **No** file extension (`.json` or `.ini`) should be used. The example configurations below are equivalent: #### Formatted as `ini` ``` ; You can include comments in `ini` format if you want. dependsOn=0.10.0 ; `rc` has built-in support for ini sections, see? [commands] www = ./commands/www console = ./commands/repl ; You can even do nested sections [generators.options] engine = ejs [generators.modules] new = generate-new engine = generate-backend ``` #### Formatted as `json` ```javascript { // You can even comment your JSON, if you want "dependsOn": "0.10.0", "commands": { "www": "./commands/www", "console": "./commands/repl" }, "generators": { "options": { "engine": "ejs" }, "modules": { "new": "generate-new", "backend": "generate-backend" } } } ``` Comments are stripped from JSON config via [strip-json-comments](https://github.com/sindresorhus/strip-json-comments). > Since ini, and env variables do not have a standard for types, your application needs be prepared for strings. To ensure that string representations of booleans and numbers are always converted into their proper types (especially useful if you intend to do strict `===` comparisons), consider using a module such as [parse-strings-in-object](https://github.com/anselanza/parse-strings-in-object) to wrap the config object returned from rc. ## Simple example demonstrating precedence Assume you have an application like this (notice the hard-coded defaults passed to rc): ``` const conf = require('rc')('myapp', { port: 12345, mode: 'test' }); console.log(JSON.stringify(conf, null, 2)); ``` You also have a file `config.json`, with these contents: ``` { "port": 9000, "foo": "from config json", "something": "else" } ``` And a file `.myapprc` in the same folder, with these contents: ``` { "port": "3001", "foo": "bar" } ``` Here is the expected output from various commands: `node .` ``` { "port": "3001", "mode": "test", "foo": "bar", "_": [], "configs": [ "/Users/stephen/repos/conftest/.myapprc" ], "config": "/Users/stephen/repos/conftest/.myapprc" } ``` *Default `mode` from hard-coded object is retained, but port is overridden by `.myapprc` file (automatically found based on appname match), and `foo` is added.* `node . --foo baz` ``` { "port": "3001", "mode": "test", "foo": "baz", "_": [], "configs": [ "/Users/stephen/repos/conftest/.myapprc" ], "config": "/Users/stephen/repos/conftest/.myapprc" } ``` *Same result as above but `foo` is overridden because command-line arguments take precedence over `.myapprc` file.* `node . --foo barbar --config config.json` ``` { "port": 9000, "mode": "test", "foo": "barbar", "something": "else", "_": [], "config": "config.json", "configs": [ "/Users/stephen/repos/conftest/.myapprc", "config.json" ] } ``` *Now the `port` comes from the `config.json` file specified (overriding the value from `.myapprc`), and `foo` value is overriden by command-line despite also being specified in the `config.json` file.* ## Advanced Usage #### Pass in your own `argv` You may pass in your own `argv` as the third argument to `rc`. This is in case you want to [use your own command-line opts parser](https://github.com/dominictarr/rc/pull/12). 
```javascript require('rc')(appname, defaults, customArgvParser); ``` ## Pass in your own parser If you have a special need to use a non-standard parser, you can do so by passing in the parser as the 4th argument. (leave the 3rd as null to get the default args parser) ```javascript require('rc')(appname, defaults, null, parser); ``` This may also be used to force a more strict format, such as strict, valid JSON only. ## Note on Performance `rc` is running `fs.statSync`-- so make sure you don't use it in a hot code path (e.g. a request handler) ## License Multi-licensed under the two-clause BSD License, MIT License, or Apache License, version 2.0 <p align="center"> <a href="https://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # glob-parent [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Azure Pipelines Build Status][azure-pipelines-image]][azure-pipelines-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] Extract the non-magic parent path from a glob string. ## Usage ```js var globParent = require('glob-parent'); globParent('path/to/*.js'); // 'path/to' globParent('/root/path/to/*.js'); // '/root/path/to' globParent('/*.js'); // '/' globParent('*.js'); // '.' globParent('**/*.js'); // '.' globParent('path/{to,from}'); // 'path' globParent('path/!(to|from)'); // 'path' globParent('path/?(to|from)'); // 'path' globParent('path/+(to|from)'); // 'path' globParent('path/*(to|from)'); // 'path' globParent('path/@(to|from)'); // 'path' globParent('path/**/*'); // 'path' // if provided a non-glob path, returns the nearest dir globParent('path/foo/bar.js'); // 'path/foo' globParent('path/foo/'); // 'path/foo' globParent('path/foo'); // 'path' (see issue #3 for details) ``` ## API ### `globParent(maybeGlobString, [options])` Takes a string and returns the part of the path before the glob begins. Be aware of Escaping rules and Limitations below. #### options ```js { // Disables the automatic conversion of slashes for Windows flipBackslashes: true } ``` ## Escaping The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters: - `?` (question mark) unless used as a path segment alone - `*` (asterisk) - `|` (pipe) - `(` (opening parenthesis) - `)` (closing parenthesis) - `{` (opening curly brace) - `}` (closing curly brace) - `[` (opening bracket) - `]` (closing bracket) **Example** ```js globParent('foo/[bar]/') // 'foo' globParent('foo/\\[bar]/') // 'foo/[bar]' ``` ## Limitations ### Braces & Brackets This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with [expand-braces] and/or [expand-brackets]. ### Windows Backslashes are not valid path separators for globs. 
If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators). This cannot be used in conjunction with escape characters. ```js // BAD globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)' // GOOD globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)' ``` If you are using escape characters for a pattern without path parts (i.e. relative to `cwd`), prefix with `./` to avoid confusing glob-parent. ```js // BAD globParent('foo \\[bar]') // 'foo ' globParent('foo \\[bar]*') // 'foo ' // GOOD globParent('./foo \\[bar]') // 'foo [bar]' globParent('./foo \\[bar]*') // '.' ``` ## License ISC [expand-braces]: https://github.com/jonschlinkert/expand-braces [expand-brackets]: https://github.com/jonschlinkert/expand-brackets [downloads-image]: https://img.shields.io/npm/dm/glob-parent.svg [npm-url]: https://www.npmjs.com/package/glob-parent [npm-image]: https://img.shields.io/npm/v/glob-parent.svg [azure-pipelines-url]: https://dev.azure.com/gulpjs/gulp/_build/latest?definitionId=2&branchName=master [azure-pipelines-image]: https://dev.azure.com/gulpjs/gulp/_apis/build/status/glob-parent?branchName=master [travis-url]: https://travis-ci.org/gulpjs/glob-parent [travis-image]: https://img.shields.io/travis/gulpjs/glob-parent.svg?label=travis-ci [appveyor-url]: https://ci.appveyor.com/project/gulpjs/glob-parent [appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/glob-parent.svg?label=appveyor [coveralls-url]: https://coveralls.io/r/gulpjs/glob-parent [coveralls-image]: https://img.shields.io/coveralls/gulpjs/glob-parent/master.svg [gitter-url]: https://gitter.im/gulpjs/gulp [gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg # tr46.js > An implementation of the [Unicode TR46 specification](http://unicode.org/reports/tr46/). ## Installation [Node.js](http://nodejs.org) `>= 6` is required. To install, type this at the command line: ```shell npm install tr46 ``` ## API ### `toASCII(domainName[, options])` Converts a string of Unicode symbols to a case-folded Punycode string of ASCII symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`processingOption`](#processingOption) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) * [`verifyDNSLength`](#verifyDNSLength) ### `toUnicode(domainName[, options])` Converts a case-folded Punycode string of ASCII symbols to a string of Unicode symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) ## Options ### `checkBidi` Type: `Boolean` Default value: `false` When set to `true`, any bi-directional text within the input will be checked for validation. ### `checkHyphens` Type: `Boolean` Default value: `false` When set to `true`, the positions of any hyphen characters within the input will be checked for validation. ### `checkJoiners` Type: `Boolean` Default value: `false` When set to `true`, any word joiner characters within the input will be checked for validation. ### `processingOption` Type: `String` Default value: `"nontransitional"` When set to `"transitional"`, symbols within the input will be validated according to the older IDNA2003 protocol. When set to `"nontransitional"`, the current IDNA2008 protocol will be used. 
### `useSTD3ASCIIRules` Type: `Boolean` Default value: `false` When set to `true`, input will be validated according to [STD3 Rules](http://unicode.org/reports/tr46/#STD3_Rules). ### `verifyDNSLength` Type: `Boolean` Default value: `false` When set to `true`, the length of each DNS label within the input will be checked for validation. # readable-stream ***Node.js core streams for userland*** [![Build Status](https://travis-ci.com/nodejs/readable-stream.svg?branch=master)](https://travis-ci.com/nodejs/readable-stream) [![NPM](https://nodei.co/npm/readable-stream.png?downloads=true&downloadRank=true)](https://nodei.co/npm/readable-stream/) [![NPM](https://nodei.co/npm-dl/readable-stream.png?&months=6&height=3)](https://nodei.co/npm/readable-stream/) [![Sauce Test Status](https://saucelabs.com/browser-matrix/readabe-stream.svg)](https://saucelabs.com/u/readabe-stream) ```bash npm install --save readable-stream ``` This package is a mirror of the streams implementations in Node.js. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v10.19.0/docs/api/stream.html). If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use **readable-stream** *only* and avoid the *"stream"* module in Node-core, for background see [this blogpost](http://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html). As of version 2.0.0 **readable-stream** uses semantic versioning. ## Version 3.x.x v3.x.x of `readable-stream` is a cut from Node 10. This version supports Node 6, 8, and 10, as well as evergreen browsers, IE 11 and latest Safari. The breaking changes introduced by v3 are composed by the combined breaking changes in [Node v9](https://nodejs.org/en/blog/release/v9.0.0/) and [Node v10](https://nodejs.org/en/blog/release/v10.0.0/), as follows: 1. Error codes: https://github.com/nodejs/node/pull/13310, https://github.com/nodejs/node/pull/13291, https://github.com/nodejs/node/pull/16589, https://github.com/nodejs/node/pull/15042, https://github.com/nodejs/node/pull/15665, https://github.com/nodejs/readable-stream/pull/344 2. 'readable' have precedence over flowing https://github.com/nodejs/node/pull/18994 3. make virtual methods errors consistent https://github.com/nodejs/node/pull/18813 4. updated streams error handling https://github.com/nodejs/node/pull/18438 5. writable.end should return this. https://github.com/nodejs/node/pull/18780 6. readable continues to read when push('') https://github.com/nodejs/node/pull/18211 7. add custom inspect to BufferList https://github.com/nodejs/node/pull/17907 8. always defer 'readable' with nextTick https://github.com/nodejs/node/pull/17979 ## Version 2.x.x v2.x.x of `readable-stream` is a cut of the stream module from Node 8 (there have been no semver-major changes from Node 4 to 8). This version supports all Node.js versions from 0.8, as well as evergreen browsers and IE 10 & 11. ### Big Thanks Cross-browser Testing Platform and Open Source <3 Provided by [Sauce Labs][sauce] # Usage You can swap your `require('stream')` with `require('readable-stream')` without any changes, if you are just using one of the main classes and functions. ```js const { Readable, Writable, Transform, Duplex, pipeline, finished } = require('readable-stream') ```` Note that `require('stream')` will return `Stream`, while `require('readable-stream')` will return `Readable`. 
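As a small sketch of the drop-in usage described above (assuming the standard Node.js stream API that readable-stream mirrors), here is a Transform built from the `readable-stream` exports:

```js
const { Transform, pipeline } = require('readable-stream');

// A Transform stream that upper-cases whatever flows through it.
const upperCase = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  }
});

pipeline(process.stdin, upperCase, process.stdout, (err) => {
  if (err) console.error('pipeline failed', err);
});
```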
We discourage using whatever is exported directly, but rather use one of the properties as shown in the example above. # Streams Working Group `readable-stream` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. <a name="members"></a> ## Team Members * **Calvin Metcalf** ([@calvinmetcalf](https://github.com/calvinmetcalf)) &lt;calvin.metcalf@gmail.com&gt; - Release GPG key: F3EF5F62A87FC27A22E643F714CE4FF5015AA242 * **Mathias Buus** ([@mafintosh](https://github.com/mafintosh)) &lt;mathiasbuus@gmail.com&gt; * **Matteo Collina** ([@mcollina](https://github.com/mcollina)) &lt;matteo.collina@gmail.com&gt; - Release GPG key: 3ABC01543F22DD2239285CDD818674489FBC127E * **Irina Shestak** ([@lrlna](https://github.com/lrlna)) &lt;shestak.irina@gmail.com&gt; * **Yoshua Wyuts** ([@yoshuawuyts](https://github.com/yoshuawuyts)) &lt;yoshuawuyts@gmail.com&gt; [sauce]: https://saucelabs.com <img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/images/ajv_logo.png"> # Ajv: Another JSON Schema Validator The fastest JSON Schema validator for Node.js and browser. Supports draft-04/06/07. [![Build Status](https://travis-ci.org/ajv-validator/ajv.svg?branch=master)](https://travis-ci.org/ajv-validator/ajv) [![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv) [![npm (beta)](https://img.shields.io/npm/v/ajv/beta)](https://www.npmjs.com/package/ajv/v/7.0.0-beta.0) [![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv) [![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master) [![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv) [![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin) ## Ajv v7 beta is released [Ajv version 7.0.0-beta.0](https://github.com/ajv-validator/ajv/tree/v7-beta) is released with these changes: - to reduce the mistakes in JSON schemas and unexpected validation results, [strict mode](./docs/strict-mode.md) is added - it prohibits ignored or ambiguous JSON Schema elements. - to make code injection from untrusted schemas impossible, [code generation](./docs/codegen.md) is fully re-written to be safe. - to simplify Ajv extensions, the new keyword API that is used by pre-defined keywords is available to user-defined keywords - it is much easier to define any keywords now, especially with subschemas. - schemas are compiled to ES6 code (ES5 code generation is supported with an option). - to improve reliability and maintainability the code is migrated to TypeScript. **Please note**: - the support for JSON-Schema draft-04 is removed - if you have schemas using "id" attributes you have to replace them with "\$id" (or continue using version 6 that will be supported until 02/28/2021). 
- all formats are separated into the ajv-formats package - they have to be explicitly added if you use them.

See [release notes](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0) for the details.

To install the new version:

```bash
npm install ajv@beta
```

See [Getting started with v7](https://github.com/ajv-validator/ajv/tree/v7-beta#usage) for a code example.

## Mozilla MOSS grant and OpenJS Foundation

[<img src="https://www.poberezkin.com/images/mozilla.png" width="240" height="68">](https://www.mozilla.org/en-US/moss/) &nbsp;&nbsp;&nbsp; [<img src="https://www.poberezkin.com/images/openjs.png" width="220" height="68">](https://openjsf.org/blog/2020/08/14/ajv-joins-openjs-foundation-as-an-incubation-project/)

Ajv has been awarded a grant from Mozilla’s [Open Source Support (MOSS) program](https://www.mozilla.org/en-US/moss/) in the “Foundational Technology” track! It will sponsor the development of Ajv support of [JSON Schema version 2019-09](https://tools.ietf.org/html/draft-handrews-json-schema-02) and of [JSON Type Definition](https://tools.ietf.org/html/draft-ucarion-json-type-definition-04).

Ajv also joined [OpenJS Foundation](https://openjsf.org/) – having this support will help ensure the longevity and stability of Ajv for all its users.

This [blog post](https://www.poberezkin.com/posts/2020-08-14-ajv-json-validator-mozilla-open-source-grant-openjs-foundation.html) has more details.

I am looking for the long-term maintainers of Ajv – working with [ReadySet](https://www.thereadyset.co/), also sponsored by Mozilla, to establish clear guidelines for the role of a "maintainer" and the contribution standards, and to encourage a wider, more inclusive, contribution from the community.

## Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin)

Since I asked for support of Ajv development, 40 people and 6 organizations have contributed via GitHub and OpenCollective - this support helped secure the MOSS grant!

Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released.

Please sponsor Ajv via:
- [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it)
- [Ajv Open Collective](https://opencollective.com/ajv)

Thank you.
#### Open Collective sponsors <a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a> <a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a> ## Using version 6 [JSON Schema draft-07](http://json-schema.org/latest/json-schema-validation.html) is published. [Ajv version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0) that supports draft-07 is released. It may require either migrating your schemas or updating your code (to continue using draft-04 and v5 schemas, draft-06 schemas will be supported without changes). 
__Please note__: To use Ajv with draft-06 schemas you need to explicitly add the meta-schema to the validator instance: ```javascript ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-06.json')); ``` To use Ajv with draft-04 schemas in addition to explicitly adding meta-schema you also need to use option schemaId: ```javascript var ajv = new Ajv({schemaId: 'id'}); // If you want to use both draft-04 and draft-06/07 schemas: // var ajv = new Ajv({schemaId: 'auto'}); ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-04.json')); ``` ## Contents - [Performance](#performance) - [Features](#features) - [Getting started](#getting-started) - [Frequently Asked Questions](https://github.com/ajv-validator/ajv/blob/master/FAQ.md) - [Using in browser](#using-in-browser) - [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp) - [Command line interface](#command-line-interface) - Validation - [Keywords](#validation-keywords) - [Annotation keywords](#annotation-keywords) - [Formats](#formats) - [Combining schemas with $ref](#ref) - [$data reference](#data-reference) - NEW: [$merge and $patch keywords](#merge-and-patch-keywords) - [Defining custom keywords](#defining-custom-keywords) - [Asynchronous schema compilation](#asynchronous-schema-compilation) - [Asynchronous validation](#asynchronous-validation) - [Security considerations](#security-considerations) - [Security contact](#security-contact) - [Untrusted schemas](#untrusted-schemas) - [Circular references in objects](#circular-references-in-javascript-objects) - [Trusted schemas](#security-risks-of-trusted-schemas) - [ReDoS attack](#redos-attack) - Modifying data during validation - [Filtering data](#filtering-data) - [Assigning defaults](#assigning-defaults) - [Coercing data types](#coercing-data-types) - API - [Methods](#api) - [Options](#options) - [Validation errors](#validation-errors) - [Plugins](#plugins) - [Related packages](#related-packages) - [Some packages using Ajv](#some-packages-using-ajv) - [Tests, Contributing, Changes history](#tests) - [Support, Code of conduct, License](#open-source-software-support) ## Performance Ajv generates code using [doT templates](https://github.com/olado/doT) to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization. 
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=32,4,1&chs=600x416&chxl=-1:|djv|ajv|json-schema-validator-generator|jsen|is-my-json-valid|themis|z-schema|jsck|skeemas|json-schema-library|tv4&chd=t:100,98,72.1,66.8,50.1,15.1,6.1,3.8,1.2,0.7,0.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements full JSON Schema [draft-06/07](http://json-schema.org/) and draft-04 standards: - all validation keywords (see [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md)) - full support of remote refs (remote schemas have to be added with `addSchema` or compiled to be available) - support of circular references between schemas - correct string lengths for strings with unicode pairs (can be turned off) - [formats](#formats) defined by JSON Schema draft-07 standard and custom formats (can be turned off) - [validates schemas against meta-schema](#api-validateschema) - supports [browsers](#using-in-browser) and Node.js 0.10-14.x - [asynchronous loading](#asynchronous-schema-compilation) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](#options) - [error messages with parameters](#validation-errors) describing error reasons to allow creating custom error messages - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [filtering data](#filtering-data) from additional properties - [assigning defaults](#assigning-defaults) to missing properties and items - [coercing data](#coercing-data-types) to the types specified in `type` keywords - [custom keywords](#defining-custom-keywords) - draft-06/07 keywords `const`, `contains`, `propertyNames` and `if/then/else` - draft-06 boolean schemas (`true`/`false` as a schema to always pass/fail). - keywords `switch`, `patternRequired`, `formatMaximum` / `formatMinimum` and `formatExclusiveMaximum` / `formatExclusiveMinimum` from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [$data reference](#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](#asynchronous-validation) of custom formats and keywords ## Install ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://tonicdev.com/npm/ajv The fastest validation call: ```javascript // Node.js require: var Ajv = require('ajv'); // or ESM/TypeScript import import Ajv from 'ajv'; var ajv = new Ajv(); // options can be passed, e.g. {allErrors: true} var validate = ajv.compile(schema); var valid = validate(data); if (!valid) console.log(validate.errors); ``` or with less code ```javascript // ... var valid = ajv.validate(schema, data); if (!valid) console.log(ajv.errors); // ... 
```

or

```javascript
// ...
var valid = ajv.addSchema(schema, 'mySchema')
              .validate('mySchema', data);
if (!valid) console.log(ajv.errorsText());
// ...
```

See [API](#api) and [Options](#options) for more details.

Ajv compiles schemas to functions and caches them in all cases (using the schema serialized with [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) or a custom function as a key), so that the next time the same schema is used (not necessarily the same object instance) it won't be compiled again.

The best performance is achieved when using compiled functions returned by the `compile` or `getSchema` methods (there is no additional function call).

__Please note__: every time a validation function or `ajv.validate` is called, the `errors` property is overwritten. You need to copy the `errors` array reference to another variable if you want to use it later (e.g., in the callback). See [Validation errors](#validation-errors).

__Note for TypeScript users__: `ajv` provides its own TypeScript declarations out of the box, so you don't need to install the deprecated `@types/ajv` module.

## Using in browser

You can require Ajv directly from the code you browserify - in this case Ajv will be a part of your bundle.

If you need to use Ajv in several bundles you can create a separate UMD bundle using the `npm run bundle` script (thanks to [siddo420](https://github.com/siddo420)).

Then you need to load Ajv in the browser:

```html
<script src="ajv.min.js"></script>
```

This bundle can be used with different module systems; it creates the global `Ajv` if no module system is found.

The browser bundle is available on [cdnjs](https://cdnjs.com/libraries/ajv).

Ajv is tested with these browsers:

[![Sauce Test Status](https://saucelabs.com/browser-matrix/epoberezkin.svg)](https://saucelabs.com/u/epoberezkin)

__Please note__: some frameworks, e.g. Dojo, may redefine the global `require` in such a way that it is not compatible with the CommonJS module format. In such a case the Ajv bundle has to be loaded before the framework and then you can use the global `Ajv` (see issue [#234](https://github.com/ajv-validator/ajv/issues/234)).

### Ajv and Content Security Policies (CSP)

If you're using Ajv to compile a schema (the typical use) in a browser document that is loaded with a Content Security Policy (CSP), that policy will require a `script-src` directive that includes the value `'unsafe-eval'`. :warning: NOTE, however, that `unsafe-eval` is NOT recommended in a secure CSP[[1]](https://developer.chrome.com/extensions/contentSecurityPolicy#relaxing-eval), as it has the potential to open the document to cross-site scripting (XSS) attacks.

In order to make use of Ajv without easing your CSP, you can [pre-compile a schema using the CLI](https://github.com/ajv-validator/ajv-cli#compile-schemas). This will transpile the schema JSON into a JavaScript file that exports a `validate` function that works similarly to a schema compiled at runtime.

Note that pre-compilation of schemas is performed using [ajv-pack](https://github.com/ajv-validator/ajv-pack) and there are [some limitations to the schema features it can compile](https://github.com/ajv-validator/ajv-pack#limitations). A successfully pre-compiled schema is equivalent to the same schema compiled at runtime.

## Command line interface

The CLI is available as a separate npm package [ajv-cli](https://github.com/ajv-validator/ajv-cli).
It supports: - compiling JSON Schemas to test their validity - BETA: generating standalone module exporting a validation function to be used without Ajv (using [ajv-pack](https://github.com/ajv-validator/ajv-pack)) - migrate schemas to draft-07 (using [json-schema-migrate](https://github.com/epoberezkin/json-schema-migrate)) - validating data file(s) against JSON Schema - testing expected validity of data against JSON Schema - referenced schemas - custom meta-schemas - files in JSON, JSON5, YAML, and JavaScript format - all Ajv options - reporting changes in data after validation in [JSON-patch](https://tools.ietf.org/html/rfc6902) format ## Validation keywords Ajv supports all validation keywords from draft-07 of JSON Schema standard: - [type](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#type) - [for numbers](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-numbers) - maximum, minimum, exclusiveMaximum, exclusiveMinimum, multipleOf - [for strings](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-strings) - maxLength, minLength, pattern, format - [for arrays](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-arrays) - maxItems, minItems, uniqueItems, items, additionalItems, [contains](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#contains) - [for objects](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-objects) - maxProperties, minProperties, required, properties, patternProperties, additionalProperties, dependencies, [propertyNames](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#propertynames) - [for all types](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-all-types) - enum, [const](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#const) - [compound keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#compound-keywords) - not, oneOf, anyOf, allOf, [if/then/else](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#ifthenelse) With [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package Ajv also supports validation keywords from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) for JSON Schema standard: - [patternRequired](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#patternrequired-proposed) - like `required` but with patterns that some property should match. - [formatMaximum, formatMinimum, formatExclusiveMaximum, formatExclusiveMinimum](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#formatmaximum--formatminimum-and-exclusiveformatmaximum--exclusiveformatminimum-proposed) - setting limits for date, time, etc. See [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md) for more details. ## Annotation keywords JSON Schema specification defines several annotation keywords that describe schema itself but do not perform any validation. - `title` and `description`: information about the data represented by that schema - `$comment` (NEW in draft-07): information for developers. With option `$comment` Ajv logs or passes the comment string to the user-supplied function. See [Options](#options). - `default`: a default value of the data instance, see [Assigning defaults](#assigning-defaults). - `examples` (NEW in draft-06): an array of data instances. Ajv does not check the validity of these instances against the schema. 
- `readOnly` and `writeOnly` (NEW in draft-07): marks data-instance as read-only or write-only in relation to the source of the data (database, api, etc.). - `contentEncoding`: [RFC 2045](https://tools.ietf.org/html/rfc2045#section-6.1 ), e.g., "base64". - `contentMediaType`: [RFC 2046](https://tools.ietf.org/html/rfc2046), e.g., "image/png". __Please note__: Ajv does not implement validation of the keywords `examples`, `contentEncoding` and `contentMediaType` but it reserves them. If you want to create a plugin that implements some of them, it should remove these keywords from the instance. ## Formats Ajv implements formats defined by JSON Schema specification and several other formats. It is recommended NOT to use "format" keyword implementations with untrusted data, as they use potentially unsafe regular expressions - see [ReDoS attack](#redos-attack). __Please note__: if you need to use "format" keyword to validate untrusted data, you MUST assess their suitability and safety for your validation scenarios. The following formats are implemented for string validation with "format" keyword: - _date_: full-date according to [RFC3339](http://tools.ietf.org/html/rfc3339#section-5.6). - _time_: time with optional time-zone. - _date-time_: date-time from the same source (time-zone is mandatory). `date`, `time` and `date-time` validate ranges in `full` mode and only regexp in `fast` mode (see [options](#options)). - _uri_: full URI. - _uri-reference_: URI reference, including full and relative URIs. - _uri-template_: URI template according to [RFC6570](https://tools.ietf.org/html/rfc6570) - _url_ (deprecated): [URL record](https://url.spec.whatwg.org/#concept-url). - _email_: email address. - _hostname_: host name according to [RFC1034](http://tools.ietf.org/html/rfc1034#section-3.5). - _ipv4_: IP address v4. - _ipv6_: IP address v6. - _regex_: tests whether a string is a valid regular expression by passing it to RegExp constructor. - _uuid_: Universally Unique IDentifier according to [RFC4122](http://tools.ietf.org/html/rfc4122). - _json-pointer_: JSON-pointer according to [RFC6901](https://tools.ietf.org/html/rfc6901). - _relative-json-pointer_: relative JSON-pointer according to [this draft](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00). __Please note__: JSON Schema draft-07 also defines formats `iri`, `iri-reference`, `idn-hostname` and `idn-email` for URLs, hostnames and emails with international characters. Ajv does not implement these formats. If you create Ajv plugin that implements them please make a PR to mention this plugin here. There are two modes of format validation: `fast` and `full`. This mode affects formats `date`, `time`, `date-time`, `uri`, `uri-reference`, and `email`. See [Options](#options) for details. You can add additional formats and replace any of the formats above using [addFormat](#api-addformat) method. The option `unknownFormats` allows changing the default behaviour when an unknown format is encountered. In this case Ajv can either fail schema compilation (default) or ignore it (default in versions before 5.0.0). You also can allow specific format(s) that will be ignored. See [Options](#options) for details. You can find regular expressions used for format validation and the sources that were used in [formats.js](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js). 
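For illustration, a minimal sketch of both using a built-in format and adding a custom one with `addFormat` (the `identifier` format name and its regular expression are made up for this example; see the [addFormat](#api-addformat) method below):

```javascript
var Ajv = require('ajv');
var ajv = new Ajv({format: 'full'}); // 'full' mode also validates date/time ranges, not only the regexp

// built-in format
var validateEmail = ajv.compile({type: 'string', format: 'email'});
console.log(validateEmail('joe.bloggs@example.com')); // true
console.log(validateEmail('not an email'));           // false

// hypothetical custom format added via addFormat (strings and RegExps are accepted)
ajv.addFormat('identifier', /^[a-z_$][a-z0-9_$]*$/i);
var validateId = ajv.compile({type: 'string', format: 'identifier'});
console.log(validateId('myVar1')); // true
console.log(validateId('1var'));   // false
```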
## <a name="ref"></a>Combining schemas with $ref You can structure your validation logic across multiple schema files and have schemas reference each other using `$ref` keyword. Example: ```javascript var schema = { "$id": "http://example.com/schemas/schema.json", "type": "object", "properties": { "foo": { "$ref": "defs.json#/definitions/int" }, "bar": { "$ref": "defs.json#/definitions/str" } } }; var defsSchema = { "$id": "http://example.com/schemas/defs.json", "definitions": { "int": { "type": "integer" }, "str": { "type": "string" } } }; ``` Now to compile your schema you can either pass all schemas to Ajv instance: ```javascript var ajv = new Ajv({schemas: [schema, defsSchema]}); var validate = ajv.getSchema('http://example.com/schemas/schema.json'); ``` or use `addSchema` method: ```javascript var ajv = new Ajv; var validate = ajv.addSchema(defsSchema) .compile(schema); ``` See [Options](#options) and [addSchema](#api) method. __Please note__: - `$ref` is resolved as the uri-reference using schema $id as the base URI (see the example). - References can be recursive (and mutually recursive) to implement the schemas for different data structures (such as linked lists, trees, graphs, etc.). - You don't have to host your schema files at the URIs that you use as schema $id. These URIs are only used to identify the schemas, and according to JSON Schema specification validators should not expect to be able to download the schemas from these URIs. - The actual location of the schema file in the file system is not used. - You can pass the identifier of the schema as the second parameter of `addSchema` method or as a property name in `schemas` option. This identifier can be used instead of (or in addition to) schema $id. - You cannot have the same $id (or the schema identifier) used for more than one schema - the exception will be thrown. - You can implement dynamic resolution of the referenced schemas using `compileAsync` method. In this way you can store schemas in any system (files, web, database, etc.) and reference them without explicitly adding to Ajv instance. See [Asynchronous schema compilation](#asynchronous-schema-compilation). ## $data reference With `$data` option you can use values from the validated data as the values for the schema keywords. See [proposal](https://github.com/json-schema-org/json-schema-spec/issues/51) for more information about how it works. `$data` reference is supported in the keywords: const, enum, format, maximum/minimum, exclusiveMaximum / exclusiveMinimum, maxLength / minLength, maxItems / minItems, maxProperties / minProperties, formatMaximum / formatMinimum, formatExclusiveMaximum / formatExclusiveMinimum, multipleOf, pattern, required, uniqueItems. The value of "$data" should be a [JSON-pointer](https://tools.ietf.org/html/rfc6901) to the data (the root is always the top level data object, even if the $data reference is inside a referenced subschema) or a [relative JSON-pointer](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00) (it is relative to the current point in data; if the $data reference is inside a referenced subschema it cannot point to the data outside of the root level for this subschema). Examples. 
This schema requires that the value in the property `smaller` is less than or equal to the value in the property `larger`:

```javascript
var ajv = new Ajv({$data: true});

var schema = {
  "properties": {
    "smaller": {
      "type": "number",
      "maximum": { "$data": "1/larger" }
    },
    "larger": { "type": "number" }
  }
};

var validData = {
  smaller: 5,
  larger: 7
};

ajv.validate(schema, validData); // true
```

This schema requires that the properties have the same format as their field names:

```javascript
var schema = {
  "additionalProperties": {
    "type": "string",
    "format": { "$data": "0#" }
  }
};

var validData = {
  'date-time': '1963-06-19T08:30:06.283185Z',
  email: 'joe.bloggs@example.com'
}
```

`$data` reference is resolved safely - it won't throw even if some property is undefined. If `$data` resolves to `undefined` the validation succeeds (with the exclusion of the `const` keyword). If `$data` resolves to an incorrect type (e.g. not "number" for the maximum keyword) the validation fails.

## $merge and $patch keywords

With the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) you can use the keywords `$merge` and `$patch` that allow extending JSON Schemas with patches using formats [JSON Merge Patch (RFC 7396)](https://tools.ietf.org/html/rfc7396) and [JSON Patch (RFC 6902)](https://tools.ietf.org/html/rfc6902).

To add the keywords `$merge` and `$patch` to the Ajv instance use this code:

```javascript
require('ajv-merge-patch')(ajv);
```

Examples.

Using `$merge`:

```json
{
  "$merge": {
    "source": {
      "type": "object",
      "properties": { "p": { "type": "string" } },
      "additionalProperties": false
    },
    "with": {
      "properties": { "q": { "type": "number" } }
    }
  }
}
```

Using `$patch`:

```json
{
  "$patch": {
    "source": {
      "type": "object",
      "properties": { "p": { "type": "string" } },
      "additionalProperties": false
    },
    "with": [
      { "op": "add", "path": "/properties/q", "value": { "type": "number" } }
    ]
  }
}
```

The schemas above are equivalent to this schema:

```json
{
  "type": "object",
  "properties": {
    "p": { "type": "string" },
    "q": { "type": "number" }
  },
  "additionalProperties": false
}
```

The properties `source` and `with` in the keywords `$merge` and `$patch` can use absolute or relative `$ref` to point to other schemas previously added to the Ajv instance or to the fragments of the current schema.

See the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) for more information.

## Defining custom keywords

The advantages of using custom keywords are:

- allow creating validation scenarios that cannot be expressed using JSON Schema
- simplify your schemas
- help bring a bigger part of the validation logic to your schemas
- make your schemas more expressive, less verbose and closer to your application domain
- implement custom data processors that modify your data (the `modifying` option MUST be used in the keyword definition) and/or create side effects while the data is being validated

If a keyword is used only for side-effects and its validation result is pre-defined, use the option `valid: true/false` in the keyword definition to simplify both the generated code (no error handling in case of `valid: true`) and your keyword functions (no need to return any validation result).

The concerns you have to be aware of when extending the JSON Schema standard with custom keywords are the portability and understanding of your schemas. You will have to support these custom keywords on other platforms and to properly document them so that everybody can understand them in your schemas.
You can define custom keywords with [addKeyword](#api-addkeyword) method. Keywords are defined on the `ajv` instance level - new instances will not have previously defined keywords. Ajv allows defining keywords with: - validation function - compilation function - macro function - inline compilation function that should return code (as string) that will be inlined in the currently compiled schema. Example. `range` and `exclusiveRange` keywords using compiled schema: ```javascript ajv.addKeyword('range', { type: 'number', compile: function (sch, parentSchema) { var min = sch[0]; var max = sch[1]; return parentSchema.exclusiveRange === true ? function (data) { return data > min && data < max; } : function (data) { return data >= min && data <= max; } } }); var schema = { "range": [2, 4], "exclusiveRange": true }; var validate = ajv.compile(schema); console.log(validate(2.01)); // true console.log(validate(3.99)); // true console.log(validate(2)); // false console.log(validate(4)); // false ``` Several custom keywords (typeof, instanceof, range and propertyNames) are defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - they can be used for your schemas and as a starting point for your own custom keywords. See [Defining custom keywords](https://github.com/ajv-validator/ajv/blob/master/CUSTOM.md) for more details. ## Asynchronous schema compilation During asynchronous compilation remote references are loaded using supplied function. See `compileAsync` [method](#api-compileAsync) and `loadSchema` [option](#options). Example: ```javascript var ajv = new Ajv({ loadSchema: loadSchema }); ajv.compileAsync(schema).then(function (validate) { var valid = validate(data); // ... }); function loadSchema(uri) { return request.json(uri).then(function (res) { if (res.statusCode >= 400) throw new Error('Loading error: ' + res.statusCode); return res.body; }); } ``` __Please note__: [Option](#options) `missingRefs` should NOT be set to `"ignore"` or `"fail"` for asynchronous compilation to work. ## Asynchronous validation Example in Node.js REPL: https://tonicdev.com/esp/ajv-asynchronous-validation You can define custom formats and keywords that perform validation asynchronously by accessing database or some other service. You should add `async: true` in the keyword or format definition (see [addFormat](#api-addformat), [addKeyword](#api-addkeyword) and [Defining custom keywords](#defining-custom-keywords)). If your schema uses asynchronous formats/keywords or refers to some schema that contains them it should have `"$async": true` keyword so that Ajv can compile it correctly. If asynchronous format/keyword or reference to asynchronous schema is used in the schema without `$async` keyword Ajv will throw an exception during schema compilation. __Please note__: all asynchronous subschemas that are referenced from the current or other schemas should have `"$async": true` keyword as well, otherwise the schema compilation will fail. Validation function for an asynchronous custom format/keyword should return a promise that resolves with `true` or `false` (or rejects with `new Ajv.ValidationError(errors)` if you want to return custom errors from the keyword function). Ajv compiles asynchronous schemas to [es7 async functions](http://tc39.github.io/ecmascript-asyncawait/) that can optionally be transpiled with [nodent](https://github.com/MatAtBread/nodent). Async functions are supported in Node.js 7+ and all modern browsers. 
You can also supply any other transpiler as a function via `processCode` option. See [Options](#options). The compiled validation function has `$async: true` property (if the schema is asynchronous), so you can differentiate these functions if you are using both synchronous and asynchronous schemas. Validation result will be a promise that resolves with validated data or rejects with an exception `Ajv.ValidationError` that contains the array of validation errors in `errors` property. Example: ```javascript var ajv = new Ajv; // require('ajv-async')(ajv); ajv.addKeyword('idExists', { async: true, type: 'number', validate: checkIdExists }); function checkIdExists(schema, data) { return knex(schema.table) .select('id') .where('id', data) .then(function (rows) { return !!rows.length; // true if record is found }); } var schema = { "$async": true, "properties": { "userId": { "type": "integer", "idExists": { "table": "users" } }, "postId": { "type": "integer", "idExists": { "table": "posts" } } } }; var validate = ajv.compile(schema); validate({ userId: 1, postId: 19 }) .then(function (data) { console.log('Data is valid', data); // { userId: 1, postId: 19 } }) .catch(function (err) { if (!(err instanceof Ajv.ValidationError)) throw err; // data is invalid console.log('Validation errors:', err.errors); }); ``` ### Using transpilers with asynchronous validation functions. [ajv-async](https://github.com/ajv-validator/ajv-async) uses [nodent](https://github.com/MatAtBread/nodent) to transpile async functions. To use another transpiler you should separately install it (or load its bundle in the browser). #### Using nodent ```javascript var ajv = new Ajv; require('ajv-async')(ajv); // in the browser if you want to load ajv-async bundle separately you can: // window.ajvAsync(ajv); var validate = ajv.compile(schema); // transpiled es7 async function validate(data).then(successFunc).catch(errorFunc); ``` #### Using other transpilers ```javascript var ajv = new Ajv({ processCode: transpileFunc }); var validate = ajv.compile(schema); // transpiled es7 async function validate(data).then(successFunc).catch(errorFunc); ``` See [Options](#options). ## Security considerations JSON Schema, if properly used, can replace data sanitisation. It doesn't replace other API security considerations. It also introduces additional security aspects to consider. ##### Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ##### Untrusted schemas Ajv treats JSON schemas as trusted as your application code. This security model is based on the most common use case, when the schemas are static and bundled together with the application. If your schemas are received from untrusted sources (or generated from untrusted data) there are several scenarios you need to prevent: - compiling schemas can cause stack overflow (if they are too deep) - compiling schemas can be slow (e.g. [#557](https://github.com/ajv-validator/ajv/issues/557)) - validating certain data can be slow It is difficult to predict all the scenarios, but at the very least it may help to limit the size of untrusted schemas (e.g. limit JSON string length) and also the maximum schema object depth (that can be high for relatively small JSON strings). You also may want to mitigate slow regular expressions in `pattern` and `patternProperties` keywords. 
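For example, a minimal sketch of such pre-checks before compiling an untrusted schema (the limits and the `schemaDepth` helper are illustrative, not part of Ajv):

```javascript
var MAX_SCHEMA_LENGTH = 10000; // illustrative limit on the serialized schema size
var MAX_SCHEMA_DEPTH = 20;     // illustrative limit on schema object nesting

// hypothetical helper: nesting depth of a parsed schema object
function schemaDepth(schema) {
  if (typeof schema !== 'object' || schema === null) return 0;
  var max = 0;
  for (var key in schema) {
    var d = schemaDepth(schema[key]);
    if (d > max) max = d;
  }
  return 1 + max;
}

function compileUntrusted(ajv, schemaJson) {
  if (schemaJson.length > MAX_SCHEMA_LENGTH) throw new Error('schema too large');
  var schema = JSON.parse(schemaJson);
  if (schemaDepth(schema) > MAX_SCHEMA_DEPTH) throw new Error('schema too deep');
  return ajv.compile(schema); // may still throw if the schema is invalid
}
```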
Regardless of the measures you take, using untrusted schemas increases security risks.

##### Circular references in JavaScript objects

Ajv does not support schemas and validated data that have circular references in objects. See [issue #802](https://github.com/ajv-validator/ajv/issues/802).

An attempt to compile such schemas or validate such data would cause stack overflow (or will not complete in case of asynchronous validation). Depending on the parser you use, untrusted data can lead to circular references.

##### Security risks of trusted schemas

Some keywords in JSON Schemas can lead to very slow validation for certain data. These keywords include (but may not be limited to):

- `pattern` and `format` for large strings - in some cases using `maxLength` can help mitigate it, but certain regular expressions can lead to exponential validation time even with relatively short strings (see [ReDoS attack](#redos-attack)).
- `patternProperties` for large property names - use `propertyNames` to mitigate, but some regular expressions can have exponential evaluation time as well.
- `uniqueItems` for large non-scalar arrays - use `maxItems` to mitigate

__Please note__: The suggestions above to prevent slow validation would only work if you do NOT use `allErrors: true` in production code (using it would continue validation after validation errors).

You can validate your JSON schemas against [this meta-schema](https://github.com/ajv-validator/ajv/blob/master/lib/refs/json-schema-secure.json) to check that these recommendations are followed:

```javascript
const isSchemaSecure = ajv.compile(require('ajv/lib/refs/json-schema-secure.json'));

const schema1 = {format: 'email'};
isSchemaSecure(schema1); // false

const schema2 = {format: 'email', maxLength: MAX_LENGTH};
isSchemaSecure(schema2); // true
```

__Please note__: following all these recommendations is not a guarantee that validation of untrusted data is safe - it can still lead to some undesirable results.

##### Content Security Policies (CSP)

See [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp).

## ReDoS attack

Certain regular expressions can lead to exponential evaluation time even with relatively short strings.

Please assess the regular expressions you use in the schemas for their vulnerability to this attack - see [safe-regex](https://github.com/substack/safe-regex), for example.

__Please note__: some formats that Ajv implements use [regular expressions](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js) that can be vulnerable to a ReDoS attack, so if you use Ajv to validate data from untrusted sources __it is strongly recommended__ to consider the following:

- making an assessment of the "format" implementations in Ajv.
- using the `format: 'fast'` option that simplifies some of the regular expressions (although it does not guarantee that they are safe).
- replacing the format implementations provided by Ajv with your own implementations of the "format" keyword that use either different regular expressions or another approach to format validation. Please see the [addFormat](#api-addformat) method.
- disabling format validation by ignoring the "format" keyword with the option `format: false`

Whatever mitigation you choose, please treat all formats provided by Ajv as potentially unsafe and make your own assessment of their suitability for your validation scenarios.

## Filtering data

With [option `removeAdditional`](#options) (added by [andyscott](https://github.com/andyscott)) you can filter data during the validation.
This option modifies original data. Example: ```javascript var ajv = new Ajv({ removeAdditional: true }); var schema = { "additionalProperties": false, "properties": { "foo": { "type": "number" }, "bar": { "additionalProperties": { "type": "number" }, "properties": { "baz": { "type": "string" } } } } } var data = { "foo": 0, "additional1": 1, // will be removed; `additionalProperties` == false "bar": { "baz": "abc", "additional2": 2 // will NOT be removed; `additionalProperties` != false }, } var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": 0, "bar": { "baz": "abc", "additional2": 2 } ``` If `removeAdditional` option in the example above were `"all"` then both `additional1` and `additional2` properties would have been removed. If the option were `"failing"` then property `additional1` would have been removed regardless of its value and property `additional2` would have been removed only if its value were failing the schema in the inner `additionalProperties` (so in the example above it would have stayed because it passes the schema, but any non-number would have been removed). __Please note__: If you use `removeAdditional` option with `additionalProperties` keyword inside `anyOf`/`oneOf` keywords your validation can fail with this schema, for example: ```json { "type": "object", "oneOf": [ { "properties": { "foo": { "type": "string" } }, "required": [ "foo" ], "additionalProperties": false }, { "properties": { "bar": { "type": "integer" } }, "required": [ "bar" ], "additionalProperties": false } ] } ``` The intention of the schema above is to allow objects with either the string property "foo" or the integer property "bar", but not with both and not with any other properties. With the option `removeAdditional: true` the validation will pass for the object `{ "foo": "abc"}` but will fail for the object `{"bar": 1}`. It happens because while the first subschema in `oneOf` is validated, the property `bar` is removed because it is an additional property according to the standard (because it is not included in `properties` keyword in the same schema). While this behaviour is unexpected (issues [#129](https://github.com/ajv-validator/ajv/issues/129), [#134](https://github.com/ajv-validator/ajv/issues/134)), it is correct. To have the expected behaviour (both objects are allowed and additional properties are removed) the schema has to be refactored in this way: ```json { "type": "object", "properties": { "foo": { "type": "string" }, "bar": { "type": "integer" } }, "additionalProperties": false, "oneOf": [ { "required": [ "foo" ] }, { "required": [ "bar" ] } ] } ``` The schema above is also more efficient - it will compile into a faster function. ## Assigning defaults With [option `useDefaults`](#options) Ajv will assign values from `default` keyword in the schemas of `properties` and `items` (when it is the array of schemas) to the missing properties and items. With the option value `"empty"` properties and items equal to `null` or `""` (empty string) will be considered missing and assigned defaults. This option modifies original data. __Please note__: the default value is inserted in the generated validation code as a literal, so the value inserted in the data will be the deep clone of the default in the schema. 
Example 1 (`default` in `properties`):

```javascript
var ajv = new Ajv({ useDefaults: true });

var schema = {
  "type": "object",
  "properties": {
    "foo": { "type": "number" },
    "bar": { "type": "string", "default": "baz" }
  },
  "required": [ "foo", "bar" ]
};

var data = { "foo": 1 };

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // { "foo": 1, "bar": "baz" }
```

Example 2 (`default` in `items`):

```javascript
var schema = {
  "type": "array",
  "items": [
    { "type": "number" },
    { "type": "string", "default": "foo" }
  ]
}

var data = [ 1 ];

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // [ 1, "foo" ]
```

`default` keywords in other cases are ignored:

- not in `properties` or `items` subschemas
- in schemas inside `anyOf`, `oneOf` and `not` (see [#42](https://github.com/ajv-validator/ajv/issues/42))
- in `if` subschema of `switch` keyword
- in schemas generated by custom macro keywords

The [`strictDefaults` option](#options) customizes Ajv's behavior for the defaults that Ajv ignores (`true` raises an error, and `"log"` outputs a warning).

## Coercing data types

When you are validating user input, all your data properties are usually strings. The option `coerceTypes` allows you to have your data types coerced to the types specified in your schema `type` keywords, both to pass the validation and to use the correctly typed data afterwards.

This option modifies original data.

__Please note__: if you pass a scalar value to the validating function its type will be coerced and it will pass the validation, but the value of the variable you pass won't be updated because scalars are passed by value.

Example 1:

```javascript
var ajv = new Ajv({ coerceTypes: true });

var schema = {
  "type": "object",
  "properties": {
    "foo": { "type": "number" },
    "bar": { "type": "boolean" }
  },
  "required": [ "foo", "bar" ]
};

var data = { "foo": "1", "bar": "false" };

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // { "foo": 1, "bar": false }
```

Example 2 (array coercions):

```javascript
var ajv = new Ajv({ coerceTypes: 'array' });

var schema = {
  "properties": {
    "foo": { "type": "array", "items": { "type": "number" } },
    "bar": { "type": "boolean" }
  }
};

var data = { "foo": "1", "bar": ["false"] };

var validate = ajv.compile(schema);

console.log(validate(data)); // true
console.log(data); // { "foo": [1], "bar": false }
```

The coercion rules, as you can see from the example, are different from JavaScript both to validate user input as expected and to have the coercion reversible (to correctly validate cases where different types are defined in subschemas of "anyOf" and other compound keywords).

See [Coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md) for details.

## API

##### new Ajv(Object options) -&gt; Object

Create an Ajv instance.

##### .compile(Object schema) -&gt; Function&lt;Object data&gt;

Generate a validating function and cache the compiled schema for future use.

The validating function returns a boolean value. This function has properties `errors` and `schema`. Errors encountered during the last validation are assigned to the `errors` property (it is assigned `null` if there were no errors). The `schema` property contains the reference to the original schema.

The schema passed to this method will be validated against the meta-schema unless the `validateSchema` option is false. If the schema is invalid, an error will be thrown. See [options](#options).
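A minimal sketch of these properties (assuming an `ajv` instance as in the examples above; the schema is illustrative):

```javascript
var validate = ajv.compile({type: 'number', minimum: 0});

console.log(validate(-1));      // false
console.log(validate.errors);   // array of errors for the last call
console.log(validate(5));       // true
console.log(validate.errors);   // null - there were no errors in the last call
console.log(validate.schema);   // reference to the original schema object
```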
##### <a name="api-compileAsync"></a>.compileAsync(Object schema [, Boolean meta] [, Function callback]) -&gt; Promise Asynchronous version of `compile` method that loads missing remote schemas using asynchronous function in `options.loadSchema`. This function returns a Promise that resolves to a validation function. An optional callback passed to `compileAsync` will be called with 2 parameters: error (or null) and validating function. The returned promise will reject (and the callback will be called with an error) when: - missing schema can't be loaded (`loadSchema` returns a Promise that rejects). - a schema containing a missing reference is loaded, but the reference cannot be resolved. - schema (or some loaded/referenced schema) is invalid. The function compiles schema and loads the first missing schema (or meta-schema) until all missing schemas are loaded. You can asynchronously compile meta-schema by passing `true` as the second parameter. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### .validate(Object schema|String key|String ref, data) -&gt; Boolean Validate data using passed schema (it will be compiled and cached). Instead of the schema you can use the key that was previously passed to `addSchema`, the schema id if it was present in the schema or any previously resolved reference. Validation errors will be available in the `errors` property of Ajv instance (`null` if there were no errors). __Please note__: every time this method is called the errors are overwritten so you need to copy them to another variable if you want to use them later. If the schema is asynchronous (has `$async` keyword on the top level) this method returns a Promise. See [Asynchronous validation](#asynchronous-validation). ##### .addSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Add schema(s) to validator instance. This method does not compile schemas (but it still validates them). Because of that dependencies can be added in any order and circular dependencies are supported. It also prevents unnecessary compilation of schemas that are containers for other schemas but not used as a whole. Array of schemas can be passed (schemas should have ids), the second parameter will be ignored. Key can be passed that can be used to reference the schema and will be used as the schema id if there is no id inside the schema. If the key is not passed, the schema id will be used as the key. Once the schema is added, it (and all the references inside it) can be referenced in other schemas and used to validate data. Although `addSchema` does not compile schemas, explicit compilation is not required - the schema will be compiled when it is used first time. By default the schema is validated against meta-schema before it is added, and if the schema does not pass validation the exception is thrown. This behaviour is controlled by `validateSchema` option. __Please note__: Ajv uses the [method chaining syntax](https://en.wikipedia.org/wiki/Method_chaining) for all methods with the prefix `add*` and `remove*`. This allows you to do nice things like the following. ```javascript var validate = new Ajv().addSchema(schema).addFormat(name, regex).getSchema(uri); ``` ##### .addMetaSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Adds meta schema(s) that can be used to validate other schemas. 
That function should be used instead of `addSchema` because there may be instance options that would compile a meta-schema incorrectly (at the moment it is the `removeAdditional` option).

There is no need to explicitly add the draft-07 meta-schema (http://json-schema.org/draft-07/schema) - it is added by default, unless the option `meta` is set to `false`. You only need to use it if you have a changed meta-schema that you want to use to validate your schemas. See `validateSchema`.

##### <a name="api-validateschema"></a>.validateSchema(Object schema) -&gt; Boolean

Validates schema. This method should be used to validate schemas rather than `validate` due to the inconsistency of `uri` format in the JSON Schema standard.

By default this method is called automatically when the schema is added, so you rarely need to use it directly.

If the schema doesn't have the `$schema` property, it is validated against the draft 6 meta-schema (the option `meta` should not be false).

If the schema has the `$schema` property, then the schema with this id (that should be previously added) is used to validate the passed schema.

Errors will be available at `ajv.errors`.

##### .getSchema(String key) -&gt; Function&lt;Object data&gt;

Retrieve a compiled schema previously added with `addSchema` by the key passed to `addSchema` or by its full reference (id). The returned validating function has a `schema` property with the reference to the original schema.

##### .removeSchema([Object schema|String key|String ref|RegExp pattern]) -&gt; Ajv

Remove added/cached schema. Even if the schema is referenced by other schemas it can be safely removed as dependent schemas have local references.

A schema can be removed using:
- the key passed to `addSchema`
- its full reference (id)
- a RegExp that should match the schema id or key (meta-schemas won't be removed)
- the actual schema object that will be stable-stringified to remove the schema from the cache

If no parameter is passed all schemas but meta-schemas will be removed and the cache will be cleared.

##### <a name="api-addformat"></a>.addFormat(String name, String|RegExp|Function|Object format) -&gt; Ajv

Add a custom format to validate strings or numbers. It can also be used to replace pre-defined formats for the Ajv instance.

Strings are converted to RegExp.

The function should return the validation result as `true` or `false`.

If an object is passed it should have properties `validate`, `compare` and `async`:

- _validate_: a string, RegExp or a function as described above.
- _compare_: an optional comparison function that accepts two strings and compares them according to the format meaning. This function is used with keywords `formatMaximum`/`formatMinimum` (defined in the [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package). It should return `1` if the first value is bigger than the second value, `-1` if it is smaller and `0` if it is equal.
- _async_: an optional `true` value if `validate` is an asynchronous function; in this case it should return a promise that resolves with a value `true` or `false`.
- _type_: an optional type of data that the format applies to. It can be `"string"` (default) or `"number"` (see https://github.com/ajv-validator/ajv/issues/291#issuecomment-259923858). If the type of data is different, the validation will pass.

Custom formats can also be added via the `formats` option.

##### <a name="api-addkeyword"></a>.addKeyword(String keyword, Object definition) -&gt; Ajv

Add a custom validation keyword to the Ajv instance.
Keyword should be different from all standard JSON Schema keywords and different from previously defined keywords. There is no way to redefine keywords or to remove keyword definition from the instance. Keyword must start with a letter, `_` or `$`, and may continue with letters, numbers, `_`, `$`, or `-`. It is recommended to use an application-specific prefix for keywords to avoid current and future name collisions. Example Keywords: - `"xyz-example"`: valid, and uses prefix for the xyz project to avoid name collisions. - `"example"`: valid, but not recommended as it could collide with future versions of JSON Schema etc. - `"3-example"`: invalid as numbers are not allowed to be the first character in a keyword Keyword definition is an object with the following properties: - _type_: optional string or array of strings with data type(s) that the keyword applies to. If not present, the keyword will apply to all types. - _validate_: validating function - _compile_: compiling function - _macro_: macro function - _inline_: compiling function that returns code (as string) - _schema_: an optional `false` value used with "validate" keyword to not pass schema - _metaSchema_: an optional meta-schema for keyword schema - _dependencies_: an optional list of properties that must be present in the parent schema - it will be checked during schema compilation - _modifying_: `true` MUST be passed if keyword modifies data - _statements_: `true` can be passed in case inline keyword generates statements (as opposed to expression) - _valid_: pass `true`/`false` to pre-define validation result, the result returned from validation function will be ignored. This option cannot be used with macro keywords. - _$data_: an optional `true` value to support [$data reference](#data-reference) as the value of custom keyword. The reference will be resolved at validation time. If the keyword has meta-schema it would be extended to allow $data and it will be used to validate the resolved value. Supporting $data reference requires that keyword has validating function (as the only option or in addition to compile, macro or inline function). - _async_: an optional `true` value if the validation function is asynchronous (whether it is compiled or passed in _validate_ property); in this case it should return a promise that resolves with a value `true` or `false`. This option is ignored in case of "macro" and "inline" keywords. - _errors_: an optional boolean or string `"full"` indicating whether keyword returns errors. If this property is not set Ajv will determine if the errors were set in case of failed validation. _compile_, _macro_ and _inline_ are mutually exclusive, only one should be used at a time. _validate_ can be used separately or in addition to them to support $data reference. __Please note__: If the keyword is validating data type that is different from the type(s) in its definition, the validation function will not be called (and expanded macro will not be used), so there is no need to check for data type inside validation function or inside schema returned by macro function (unless you want to enforce a specific type and for some reason do not want to use a separate `type` keyword for that). In the same way as standard keywords work, if the keyword does not apply to the data type being validated, the validation of this keyword will succeed. See [Defining custom keywords](#defining-custom-keywords) for more details. 
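For instance, a minimal sketch of a macro keyword (the `even` keyword name is made up for this example; the macro expands it into standard JSON Schema keywords):

```javascript
ajv.addKeyword('even', {
  type: 'number',
  macro: function (schema) {
    // schema is the keyword value: true requires an even number, false an odd one
    return schema ? {multipleOf: 2} : {not: {multipleOf: 2}};
  }
});

var validate = ajv.compile({type: 'number', even: true});
console.log(validate(4)); // true
console.log(validate(5)); // false
```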
##### .getKeyword(String keyword) -&gt; Object|Boolean Returns custom keyword definition, `true` for pre-defined keywords and `false` if the keyword is unknown. ##### .removeKeyword(String keyword) -&gt; Ajv Removes custom or pre-defined keyword so you can redefine them. While this method can be used to extend pre-defined keywords, it can also be used to completely change their meaning - it may lead to unexpected results. __Please note__: schemas compiled before the keyword is removed will continue to work without changes. To recompile schemas use `removeSchema` method and compile them again. ##### .errorsText([Array&lt;Object&gt; errors [, Object options]]) -&gt; String Returns the text with all errors in a String. Options can have properties `separator` (string used to separate errors, ", " by default) and `dataVar` (the variable name that dataPaths are prefixed with, "data" by default). ## Options Defaults: ```javascript { // validation and reporting options: $data: false, allErrors: false, verbose: false, $comment: false, // NEW in Ajv version 6.0 jsonPointers: false, uniqueItems: true, unicode: true, nullable: false, format: 'fast', formats: {}, unknownFormats: true, schemas: {}, logger: undefined, // referenced schema options: schemaId: '$id', missingRefs: true, extendRefs: 'ignore', // recommended 'fail' loadSchema: undefined, // function(uri: string): Promise {} // options to modify validated data: removeAdditional: false, useDefaults: false, coerceTypes: false, // strict mode options strictDefaults: false, strictKeywords: false, strictNumbers: false, // asynchronous validation options: transpile: undefined, // requires ajv-async package // advanced options: meta: true, validateSchema: true, addUsedSchema: true, inlineRefs: true, passContext: false, loopRequired: Infinity, ownProperties: false, multipleOfPrecision: false, errorDataPath: 'object', // deprecated messages: true, sourceCode: false, processCode: undefined, // function (str: string, schema: object): string {} cache: new Cache, serialize: undefined } ``` ##### Validation and reporting options - _$data_: support [$data references](#data-reference). Draft 6 meta-schema that is added by default will be extended to allow them. If you want to use another meta-schema you need to use $dataMetaSchema method to add support for $data reference. See [API](#api). - _allErrors_: check all rules collecting all errors. Default is to return after the first error. - _verbose_: include the reference to the part of the schema (`schema` and `parentSchema`) and validated data in errors (false by default). - _$comment_ (NEW in Ajv version 6.0): log or pass the value of `$comment` keyword to a function. Option values: - `false` (default): ignore $comment keyword. - `true`: log the keyword value to console. - function: pass the keyword value, its schema path and root schema to the specified function - _jsonPointers_: set `dataPath` property of errors using [JSON Pointers](https://tools.ietf.org/html/rfc6901) instead of JavaScript property access notation. - _uniqueItems_: validate `uniqueItems` keyword (true by default). - _unicode_: calculate correct length of strings with unicode pairs (true by default). Pass `false` to use `.length` of strings that is faster, but gives "incorrect" lengths of strings with unicode pairs - each unicode pair is counted as two characters. - _nullable_: support keyword "nullable" from [Open API 3 specification](https://swagger.io/docs/specification/data-models/data-types/). - _format_: formats validation mode. 
Option values: - `"fast"` (default) - simplified and fast validation (see [Formats](#formats) for details of which formats are available and affected by this option). - `"full"` - more restrictive and slow validation. E.g., 25:00:00 and 2015/14/33 will be invalid time and date in 'full' mode but it will be valid in 'fast' mode. - `false` - ignore all format keywords. - _formats_: an object with custom formats. Keys and values will be passed to `addFormat` method. - _keywords_: an object with custom keywords. Keys and values will be passed to `addKeyword` method. - _unknownFormats_: handling of unknown formats. Option values: - `true` (default) - if an unknown format is encountered the exception is thrown during schema compilation. If `format` keyword value is [$data reference](#data-reference) and it is unknown the validation will fail. - `[String]` - an array of unknown format names that will be ignored. This option can be used to allow usage of third party schemas with format(s) for which you don't have definitions, but still fail if another unknown format is used. If `format` keyword value is [$data reference](#data-reference) and it is not in this array the validation will fail. - `"ignore"` - to log warning during schema compilation and always pass validation (the default behaviour in versions before 5.0.0). This option is not recommended, as it allows to mistype format name and it won't be validated without any error message. This behaviour is required by JSON Schema specification. - _schemas_: an array or object of schemas that will be added to the instance. In case you pass the array the schemas must have IDs in them. When the object is passed the method `addSchema(value, key)` will be called for each schema in this object. - _logger_: sets the logging method. Default is the global `console` object that should have methods `log`, `warn` and `error`. See [Error logging](#error-logging). Option values: - custom logger - it should have methods `log`, `warn` and `error`. If any of these methods is missing an exception will be thrown. - `false` - logging is disabled. ##### Referenced schema options - _schemaId_: this option defines which keywords are used as schema URI. Option value: - `"$id"` (default) - only use `$id` keyword as schema URI (as specified in JSON Schema draft-06/07), ignore `id` keyword (if it is present a warning will be logged). - `"id"` - only use `id` keyword as schema URI (as specified in JSON Schema draft-04), ignore `$id` keyword (if it is present a warning will be logged). - `"auto"` - use both `$id` and `id` keywords as schema URI. If both are present (in the same schema object) and different the exception will be thrown during schema compilation. - _missingRefs_: handling of missing referenced schemas. Option values: - `true` (default) - if the reference cannot be resolved during compilation the exception is thrown. The thrown error has properties `missingRef` (with hash fragment) and `missingSchema` (without it). Both properties are resolved relative to the current base id (usually schema id, unless it was substituted). - `"ignore"` - to log error during compilation and always pass validation. - `"fail"` - to log error and successfully compile schema but fail validation if this rule is checked. - _extendRefs_: validation of other keywords when `$ref` is present in the schema. 
Option values: - `"ignore"` (default) - when `$ref` is used other keywords are ignored (as per [JSON Reference](https://tools.ietf.org/html/draft-pbryan-zyp-json-ref-03#section-3) standard). A warning will be logged during the schema compilation. - `"fail"` (recommended) - if other validation keywords are used together with `$ref` the exception will be thrown when the schema is compiled. This option is recommended to make sure schema has no keywords that are ignored, which can be confusing. - `true` - validate all keywords in the schemas with `$ref` (the default behaviour in versions before 5.0.0). - _loadSchema_: asynchronous function that will be used to load remote schemas when `compileAsync` [method](#api-compileAsync) is used and some reference is missing (option `missingRefs` should NOT be 'fail' or 'ignore'). This function should accept remote schema uri as a parameter and return a Promise that resolves to a schema. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### Options to modify validated data - _removeAdditional_: remove additional properties - see example in [Filtering data](#filtering-data). This option is not used if schema is added with `addMetaSchema` method. Option values: - `false` (default) - not to remove additional properties - `"all"` - all additional properties are removed, regardless of `additionalProperties` keyword in schema (and no validation is made for them). - `true` - only additional properties with `additionalProperties` keyword equal to `false` are removed. - `"failing"` - additional properties that fail schema validation will be removed (where `additionalProperties` keyword is `false` or schema). - _useDefaults_: replace missing or undefined properties and items with the values from corresponding `default` keywords. Default behaviour is to ignore `default` keywords. This option is not used if schema is added with `addMetaSchema` method. See examples in [Assigning defaults](#assigning-defaults). Option values: - `false` (default) - do not use defaults - `true` - insert defaults by value (object literal is used). - `"empty"` - in addition to missing or undefined, use defaults for properties and items that are equal to `null` or `""` (an empty string). - `"shared"` (deprecated) - insert defaults by reference. If the default is an object, it will be shared by all instances of validated data. If you modify the inserted default in the validated data, it will be modified in the schema as well. - _coerceTypes_: change data type of data to match `type` keyword. See the example in [Coercing data types](#coercing-data-types) and [coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md). Option values: - `false` (default) - no type coercion. - `true` - coerce scalar data types. - `"array"` - in addition to coercions between scalar types, coerce scalar data to an array with one element and vice versa (as required by the schema). ##### Strict mode options - _strictDefaults_: report ignored `default` keywords in schemas. Option values: - `false` (default) - ignored defaults are not reported - `true` - if an ignored default is present, throw an error - `"log"` - if an ignored default is present, log warning - _strictKeywords_: report unknown keywords in schemas. 
Option values: - `false` (default) - unknown keywords are not reported - `true` - if an unknown keyword is present, throw an error - `"log"` - if an unknown keyword is present, log warning - _strictNumbers_: validate numbers strictly, failing validation for NaN and Infinity. Option values: - `false` (default) - NaN or Infinity will pass validation for numeric types - `true` - NaN or Infinity will not pass validation for numeric types ##### Asynchronous validation options - _transpile_: Requires [ajv-async](https://github.com/ajv-validator/ajv-async) package. It determines whether Ajv transpiles compiled asynchronous validation function. Option values: - `undefined` (default) - transpile with [nodent](https://github.com/MatAtBread/nodent) if async functions are not supported. - `true` - always transpile with nodent. - `false` - do not transpile; if async functions are not supported an exception will be thrown. ##### Advanced options - _meta_: add [meta-schema](http://json-schema.org/documentation.html) so it can be used by other schemas (true by default). If an object is passed, it will be used as the default meta-schema for schemas that have no `$schema` keyword. This default meta-schema MUST have `$schema` keyword. - _validateSchema_: validate added/compiled schemas against meta-schema (true by default). `$schema` property in the schema can be http://json-schema.org/draft-07/schema or absent (draft-07 meta-schema will be used) or can be a reference to the schema previously added with `addMetaSchema` method. Option values: - `true` (default) - if the validation fails, throw the exception. - `"log"` - if the validation fails, log error. - `false` - skip schema validation. - _addUsedSchema_: by default methods `compile` and `validate` add schemas to the instance if they have `$id` (or `id`) property that doesn't start with "#". If `$id` is present and it is not unique the exception will be thrown. Set this option to `false` to skip adding schemas to the instance and the `$id` uniqueness check when these methods are used. This option does not affect `addSchema` method. - _inlineRefs_: Affects compilation of referenced schemas. Option values: - `true` (default) - the referenced schemas that don't have refs in them are inlined, regardless of their size - that substantially improves performance at the cost of the bigger size of compiled schema functions. - `false` - to not inline referenced schemas (they will be compiled as separate functions). - integer number - to limit the maximum number of keywords of the schema that will be inlined. - _passContext_: pass validation context to custom keyword functions. If this option is `true` and you pass some context to the compiled validation function with `validate.call(context, data)`, the `context` will be available as `this` in your custom keywords. By default `this` is Ajv instance. - _loopRequired_: by default `required` keyword is compiled into a single expression (or a sequence of statements in `allErrors` mode). In case of a very large number of properties in this keyword it may result in a very big validation function. Pass integer to set the number of properties above which `required` keyword will be validated in a loop - smaller validation function size but also worse performance. - _ownProperties_: by default Ajv iterates over all enumerable object properties; when this option is `true` only own enumerable object properties (i.e. found directly on the object rather than on its prototype) are iterated. Contributed by @mbroadst. 
- _multipleOfPrecision_: by default `multipleOf` keyword is validated by comparing the result of division with parseInt() of that result. It works for dividers that are bigger than 1. For small dividers such as 0.01 the result of the division is usually not integer (even when it should be integer, see issue [#84](https://github.com/ajv-validator/ajv/issues/84)). If you need to use fractional dividers set this option to some positive integer N to have `multipleOf` validated using this formula: `Math.abs(Math.round(division) - division) < 1e-N` (it is slower but allows for float arithmetics deviations). - _errorDataPath_ (deprecated): set `dataPath` to point to 'object' (default) or to 'property' when validating keywords `required`, `additionalProperties` and `dependencies`. - _messages_: Include human-readable messages in errors. `true` by default. `false` can be passed when custom messages are used (e.g. with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n)). - _sourceCode_: add `sourceCode` property to validating function (for debugging; this code can be different from the result of toString call). - _processCode_: an optional function to process generated code before it is passed to Function constructor. It can be used to either beautify (the validating function is generated without line-breaks) or to transpile code. Starting from version 5.0.0 this option replaced options: - `beautify` that formatted the generated function using [js-beautify](https://github.com/beautify-web/js-beautify). If you want to beautify the generated code pass a function calling `require('js-beautify').js_beautify` as `processCode: code => js_beautify(code)`. - `transpile` that transpiled asynchronous validation function. You can still use `transpile` option with [ajv-async](https://github.com/ajv-validator/ajv-async) package. See [Asynchronous validation](#asynchronous-validation) for more information. - _cache_: an optional instance of cache to store compiled schemas using stable-stringified schema as a key. For example, set-associative cache [sacjs](https://github.com/epoberezkin/sacjs) can be used. If not passed then a simple hash is used which is good enough for the common use case (a limited number of statically defined schemas). Cache should have methods `put(key, value)`, `get(key)`, `del(key)` and `clear()`. - _serialize_: an optional function to serialize schema to cache key. Pass `false` to use schema itself as a key (e.g., if WeakMap used as a cache). By default [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) is used. ## Validation errors In case of validation failure, Ajv assigns the array of errors to `errors` property of validation function (or to `errors` property of Ajv instance when `validate` or `validateSchema` methods were called). In case of [asynchronous validation](#asynchronous-validation), the returned promise is rejected with exception `Ajv.ValidationError` that has `errors` property. ### Error objects Each error is an object with the following properties: - _keyword_: validation keyword. - _dataPath_: the path to the part of the data that was validated. By default `dataPath` uses JavaScript property access notation (e.g., `".prop[1].subProp"`). When the option `jsonPointers` is true (see [Options](#options)) `dataPath` will be set using JSON pointer standard (e.g., `"/prop/1/subProp"`). - _schemaPath_: the path (JSON-pointer as a URI fragment) to the schema of the keyword that failed validation. 
- _params_: the object with the additional information about the error that can be used to create custom error messages (e.g., using [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package). See below for parameters set by all keywords. - _message_: the standard error message (can be excluded with option `messages` set to false). - _schema_: the schema of the keyword (added with `verbose` option). - _parentSchema_: the schema containing the keyword (added with `verbose` option). - _data_: the data validated by the keyword (added with `verbose` option). __Please note__: `propertyNames` keyword schema validation errors have an additional property `propertyName`, `dataPath` points to the object. After schema validation for each property name, if it is invalid an additional error is added with the property `keyword` equal to `"propertyNames"`. ### Error parameters Properties of `params` object in errors depend on the keyword that failed validation. - `maxItems`, `minItems`, `maxLength`, `minLength`, `maxProperties`, `minProperties` - property `limit` (number, the schema of the keyword). - `additionalItems` - property `limit` (the maximum number of allowed items in case when `items` keyword is an array of schemas and `additionalItems` is false). - `additionalProperties` - property `additionalProperty` (the property not used in `properties` and `patternProperties` keywords). - `dependencies` - properties: - `property` (dependent property), - `missingProperty` (required missing dependency - only the first one is reported currently) - `deps` (required dependencies, comma separated list as a string), - `depsCount` (the number of required dependencies). - `format` - property `format` (the schema of the keyword). - `maximum`, `minimum` - properties: - `limit` (number, the schema of the keyword), - `exclusive` (boolean, the schema of `exclusiveMaximum` or `exclusiveMinimum`), - `comparison` (string, comparison operation to compare the data to the limit, with the data on the left and the limit on the right; can be "<", "<=", ">", ">=") - `multipleOf` - property `multipleOf` (the schema of the keyword) - `pattern` - property `pattern` (the schema of the keyword) - `required` - property `missingProperty` (required property that is missing). - `propertyNames` - property `propertyName` (an invalid property name). - `patternRequired` (in ajv-keywords) - property `missingPattern` (required pattern that did not match any property). - `type` - property `type` (required type(s), a string, can be a comma-separated list) - `uniqueItems` - properties `i` and `j` (indices of duplicate items). - `const` - property `allowedValue` pointing to the value (the schema of the keyword). - `enum` - property `allowedValues` pointing to the array of values (the schema of the keyword). - `$ref` - property `ref` with the referenced schema URI. - `oneOf` - property `passingSchemas` (array of indices of passing schemas, null if no schema passes). - custom keywords (in case keyword definition doesn't create errors) - property `keyword` (the keyword name). ### Error logging Using the `logger` option when initializing Ajv will allow you to define custom logging. Here you can build upon the existing logging. The use of other logging packages is supported as long as the package or its associated wrapper exposes the required methods. If any of the required methods are missing an exception will be thrown.
- **Required Methods**: `log`, `warn`, `error` ```javascript var otherLogger = new OtherLogger(); var ajv = new Ajv({ logger: { log: console.log.bind(console), warn: function warn() { otherLogger.logWarn.apply(otherLogger, arguments); }, error: function error() { otherLogger.logError.apply(otherLogger, arguments); console.error.apply(console, arguments); } } }); ``` ## Plugins Ajv can be extended with plugins that add custom keywords, formats or functions to process generated code. When such a plugin is published as an npm package it is recommended that it follows these conventions: - it exports a function - this function accepts the Ajv instance as the first parameter and returns the same instance to allow chaining - this function can accept an optional configuration as the second parameter If you have published a useful plugin please submit a PR to add it to the next section. ## Related packages - [ajv-async](https://github.com/ajv-validator/ajv-async) - plugin to configure async validation mode - [ajv-bsontype](https://github.com/BoLaMN/ajv-bsontype) - plugin to validate mongodb's bsonType formats - [ajv-cli](https://github.com/jessedc/ajv-cli) - command line interface - [ajv-errors](https://github.com/ajv-validator/ajv-errors) - plugin for custom error messages - [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) - internationalised error messages - [ajv-istanbul](https://github.com/ajv-validator/ajv-istanbul) - plugin to instrument generated validation code to measure test coverage of your schemas - [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) - plugin with custom validation keywords (select, typeof, etc.) - [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) - plugin with keywords $merge and $patch - [ajv-pack](https://github.com/ajv-validator/ajv-pack) - produces a compact module exporting validation functions - [ajv-formats-draft2019](https://github.com/luzlab/ajv-formats-draft2019) - format validators for draft2019 that aren't already included in ajv (i.e. `idn-hostname`, `idn-email`, `iri`, `iri-reference` and `duration`). ## Some packages using Ajv - [webpack](https://github.com/webpack/webpack) - a module bundler.
Its main purpose is to bundle JavaScript files for usage in a browser - [jsonscript-js](https://github.com/JSONScript/jsonscript-js) - the interpreter for [JSONScript](http://www.jsonscript.org) - scripted processing of existing endpoints and services - [osprey-method-handler](https://github.com/mulesoft-labs/osprey-method-handler) - Express middleware for validating requests and responses based on a RAML method object, used in [osprey](https://github.com/mulesoft/osprey) - validating API proxy generated from a RAML definition - [har-validator](https://github.com/ahmadnassri/har-validator) - HTTP Archive (HAR) validator - [jsoneditor](https://github.com/josdejong/jsoneditor) - a web-based tool to view, edit, format, and validate JSON http://jsoneditoronline.org - [JSON Schema Lint](https://github.com/nickcmaynard/jsonschemalint) - a web tool to validate JSON/YAML documents against a single JSON Schema http://jsonschemalint.com - [objection](https://github.com/vincit/objection.js) - SQL-friendly ORM for Node.js - [table](https://github.com/gajus/table) - formats data into a string table - [ripple-lib](https://github.com/ripple/ripple-lib) - a JavaScript API for interacting with [Ripple](https://ripple.com) in Node.js and the browser - [restbase](https://github.com/wikimedia/restbase) - distributed storage with REST API & dispatcher for backend services built to provide a low-latency & high-throughput API for Wikipedia / Wikimedia content - [hippie-swagger](https://github.com/CacheControl/hippie-swagger) - [Hippie](https://github.com/vesln/hippie) wrapper that provides end-to-end API testing with swagger validation - [react-form-controlled](https://github.com/seeden/react-form-controlled) - React controlled form components with validation - [rabbitmq-schema](https://github.com/tjmehta/rabbitmq-schema) - a schema definition module for RabbitMQ graphs and messages - [@query/schema](https://www.npmjs.com/package/@query/schema) - stream filtering with a URI-safe query syntax parsing to JSON Schema - [chai-ajv-json-schema](https://github.com/peon374/chai-ajv-json-schema) - chai plugin to use JSON Schema with expect in mocha tests - [grunt-jsonschema-ajv](https://github.com/SignpostMarv/grunt-jsonschema-ajv) - Grunt plugin for validating files against JSON Schema - [extract-text-webpack-plugin](https://github.com/webpack-contrib/extract-text-webpack-plugin) - extract text from bundle into a file - [electron-builder](https://github.com/electron-userland/electron-builder) - a solution to package and build a ready-for-distribution Electron app - [addons-linter](https://github.com/mozilla/addons-linter) - Mozilla Add-ons Linter - [gh-pages-generator](https://github.com/epoberezkin/gh-pages-generator) - multi-page site generator converting markdown files to GitHub pages - [ESLint](https://github.com/eslint/eslint) - the pluggable linting utility for JavaScript and JSX ## Tests ``` npm install git submodule update --init npm test ``` ## Contributing All validation functions are generated using doT templates in the [dot](https://github.com/ajv-validator/ajv/tree/master/lib/dot) folder. Templates are precompiled so doT is not a run-time dependency. `npm run build` - compiles templates to the [dotjs](https://github.com/ajv-validator/ajv/tree/master/lib/dotjs) folder.
`npm run watch` - automatically compiles templates when files in dot folder change Please see [Contributing guidelines](https://github.com/ajv-validator/ajv/blob/master/CONTRIBUTING.md) ## Changes history See https://github.com/ajv-validator/ajv/releases __Please note__: [Changes in version 7.0.0-beta](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0) [Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0). ## Code of conduct Please review and follow the [Code of conduct](https://github.com/ajv-validator/ajv/blob/master/CODE_OF_CONDUCT.md). Please report any unacceptable behaviour to ajv.validator@gmail.com - it will be reviewed by the project team. ## Open-source software support Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides a centralised support to open-source software users, in addition to the support provided by software maintainers. ## License [MIT](https://github.com/ajv-validator/ajv/blob/master/LICENSE) # is-glob [![NPM version](https://img.shields.io/npm/v/is-glob.svg?style=flat)](https://www.npmjs.com/package/is-glob) [![NPM monthly downloads](https://img.shields.io/npm/dm/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![NPM total downloads](https://img.shields.io/npm/dt/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![Build Status](https://img.shields.io/github/workflow/status/micromatch/is-glob/dev)](https://github.com/micromatch/is-glob/actions) > Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a better user experience. Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-glob ``` You might also be interested in [is-valid-glob](https://github.com/jonschlinkert/is-valid-glob) and [has-glob](https://github.com/jonschlinkert/has-glob). 
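As a rough sketch of the pattern described above - only paying for a real glob engine when the input actually looks like a glob - assuming [node-glob](https://github.com/isaacs/node-glob) is installed alongside `is-glob` (the helper name `resolveFiles` is made up):

```js
var isGlob = require('is-glob');

function resolveFiles(pattern) {
  if (!isGlob(pattern)) {
    // plain path: no need to load or run a glob engine at all
    return [pattern];
  }
  // only now require and run node-glob to expand the pattern
  return require('glob').sync(pattern);
}

console.log(resolveFiles('index.js')); //=> ['index.js']
console.log(resolveFiles('src/*.js')); //=> whatever matches on disk
```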
## Usage ```js var isGlob = require('is-glob'); ``` ### Default behavior **True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js'); isGlob('*.js'); isGlob('**/abc.js'); isGlob('abc/*.js'); isGlob('abc/(aaa|bbb).js'); isGlob('abc/[a-z].js'); isGlob('abc/{a,b}.js'); //=> true ``` Extglobs ```js isGlob('abc/@(a).js'); isGlob('abc/!(a).js'); isGlob('abc/+(a).js'); isGlob('abc/*(a).js'); isGlob('abc/?(a).js'); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('abc/\\@(a).js'); isGlob('abc/\\!(a).js'); isGlob('abc/\\+(a).js'); isGlob('abc/\\*(a).js'); isGlob('abc/\\?(a).js'); isGlob('\\!foo.js'); isGlob('\\*.js'); isGlob('\\*\\*/abc.js'); isGlob('abc/\\*.js'); isGlob('abc/\\(aaa|bbb).js'); isGlob('abc/\\[a-z].js'); isGlob('abc/\\{a,b}.js'); //=> false ``` Patterns that do not have glob patterns return `false`: ```js isGlob('abc.js'); isGlob('abc/def/ghi.js'); isGlob('foo.js'); isGlob('abc/@.js'); isGlob('abc/+.js'); isGlob('abc/?.js'); isGlob(); isGlob(null); //=> false ``` Arrays are also `false` (If you want to check if an array has a glob pattern, use [has-glob](https://github.com/jonschlinkert/has-glob)): ```js isGlob(['**/*.js']); isGlob(['foo.js']); //=> false ``` ### Option strict When `options.strict === false` the behavior is less strict in determining if a pattern is a glob. Meaning that some patterns that would return `false` may return `true`. This is done so that matching libraries like [micromatch](https://github.com/micromatch/micromatch) have a chance at determining if the pattern is a glob or not. **True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js', {strict: false}); isGlob('*.js', {strict: false}); isGlob('**/abc.js', {strict: false}); isGlob('abc/*.js', {strict: false}); isGlob('abc/(aaa|bbb).js', {strict: false}); isGlob('abc/[a-z].js', {strict: false}); isGlob('abc/{a,b}.js', {strict: false}); //=> true ``` Extglobs ```js isGlob('abc/@(a).js', {strict: false}); isGlob('abc/!(a).js', {strict: false}); isGlob('abc/+(a).js', {strict: false}); isGlob('abc/*(a).js', {strict: false}); isGlob('abc/?(a).js', {strict: false}); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('\\!foo.js', {strict: false}); isGlob('\\*.js', {strict: false}); isGlob('\\*\\*/abc.js', {strict: false}); isGlob('abc/\\*.js', {strict: false}); isGlob('abc/\\(aaa|bbb).js', {strict: false}); isGlob('abc/\\[a-z].js', {strict: false}); isGlob('abc/\\{a,b}.js', {strict: false}); //=> false ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. 
Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [assemble](https://www.npmjs.com/package/assemble): Get the rocks out of your socks! Assemble makes you fast at creating web projects… [more](https://github.com/assemble/assemble) | [homepage](https://github.com/assemble/assemble "Get the rocks out of your socks! Assemble makes you fast at creating web projects. Assemble is used by thousands of projects for rapid prototyping, creating themes, scaffolds, boilerplates, e-books, UI components, API documentation, blogs, building websit") * [base](https://www.npmjs.com/package/base): Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks | [homepage](https://github.com/node-base/base "Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks") * [update](https://www.npmjs.com/package/update): Be scalable! Update is a new, open source developer framework and CLI for automating updates… [more](https://github.com/update/update) | [homepage](https://github.com/update/update "Be scalable! Update is a new, open source developer framework and CLI for automating updates of any kind in code projects.") * [verb](https://www.npmjs.com/package/verb): Documentation generator for GitHub projects. Verb is extremely powerful, easy to use, and is used… [more](https://github.com/verbose/verb) | [homepage](https://github.com/verbose/verb "Documentation generator for GitHub projects. Verb is extremely powerful, easy to use, and is used on hundreds of projects of all sizes to generate everything from API docs to readmes.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 47 | [jonschlinkert](https://github.com/jonschlinkert) | | 5 | [doowb](https://github.com/doowb) | | 1 | [phated](https://github.com/phated) | | 1 | [danhper](https://github.com/danhper) | | 1 | [paulmillr](https://github.com/paulmillr) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on March 27, 2019._ ## HIDAPI library for Windows, Linux, FreeBSD and macOS | CI instance | Status | |----------------------|--------| | `macOS master` | [![Build Status](https://travis-ci.org/libusb/hidapi.svg?branch=master)](https://travis-ci.org/libusb/hidapi) | | `Windows master` | [![Build status](https://ci.appveyor.com/api/projects/status/r482aevuigmi86rk/branch/master?svg=true)](https://ci.appveyor.com/project/Youw/hidapi/branch/master) | | `Linux/BSD, last build (branch/PR)` | [![builds.sr.ht status](https://builds.sr.ht/~qbicz/hidapi.svg)](https://builds.sr.ht/~qbicz/hidapi?) | HIDAPI is a multi-platform library which allows an application to interface with USB and Bluetooth HID-Class devices on Windows, Linux, FreeBSD, and macOS. 
HIDAPI can be either built as a shared library (`.so`, `.dll` or `.dylib`) or can be embedded directly into a target application by adding a single source file (per platform) and a single header. HIDAPI library was originally developed by Alan Ott ([signal11](https://github.com/signal11)). It was moved to [libusb/hidapi](https://github.com/libusb/hidapi) on June 4th, 2019, in order to merge important bugfixes and continue development of the library. ## Table of Contents * [About](#about) * [What Does the API Look Like?](#what-does-the-api-look-like) * [License](#license) * [Download](#download) * [Build Instructions](#build-instructions) * [Prerequisites](#prerequisites) * [Linux](#linux) * [FreeBSD](#freebsd) * [Mac](#mac) * [Windows](#windows) * [Building HIDAPI into a shared library on Unix Platforms](#building-hidapi-into-a-shared-library-on-unix-platforms) * [Building the manual way on Unix platforms](#building-the-manual-way-on-unix-platforms) * [Building on Windows](#building-on-windows) * [Cross Compiling](#cross-compiling) * [Prerequisites](#prerequisites-1) * [Building HIDAPI](#building-hidapi) ## About HIDAPI has five back-ends: * Windows (using `hid.dll`) * Linux/hidraw (using the Kernel's hidraw driver) * Linux/libusb (using libusb-1.0) * FreeBSD (using libusb-1.0) * Mac (using IOHidManager) On Linux, either the hidraw or the libusb back-end can be used. There are tradeoffs, and the functionality supported is slightly different. __Linux/hidraw__ (`linux/hid.c`): This back-end uses the hidraw interface in the Linux kernel, and supports both USB and Bluetooth HID devices. It requires kernel version at least 2.6.39 to build. In addition, it will only communicate with devices which have hidraw nodes associated with them. Keyboards, mice, and some other devices which are blacklisted from having hidraw nodes will not work. Fortunately, for nearly all the uses of hidraw, this is not a problem. __Linux/FreeBSD/libusb__ (`libusb/hid.c`): This back-end uses libusb-1.0 to communicate directly to a USB device. This back-end will of course not work with Bluetooth devices. HIDAPI also comes with a Test GUI. The Test GUI is cross-platform and uses Fox Toolkit <http://www.fox-toolkit.org>. It will build on every platform which HIDAPI supports. Since it relies on a 3rd party library, building it is optional but recommended because it is so useful when debugging hardware. ## What Does the API Look Like? The API provides the most commonly used HID functions including sending and receiving of input, output, and feature reports. The sample program, which communicates with a heavily hacked up version of the Microchip USB Generic HID sample looks like this (with error checking removed for simplicity): **Warning: Only run the code you understand, and only when it conforms to the device spec. Writing data at random to your HID devices can break them.** ```c #ifdef WIN32 #include <windows.h> #endif #include <stdio.h> #include <stdlib.h> #include "hidapi.h" #define MAX_STR 255 int main(int argc, char* argv[]) { int res; unsigned char buf[65]; wchar_t wstr[MAX_STR]; hid_device *handle; int i; // Initialize the hidapi library res = hid_init(); // Open the device using the VID, PID, // and optionally the Serial number. 
handle = hid_open(0x4d8, 0x3f, NULL); // Read the Manufacturer String res = hid_get_manufacturer_string(handle, wstr, MAX_STR); wprintf(L"Manufacturer String: %s\n", wstr); // Read the Product String res = hid_get_product_string(handle, wstr, MAX_STR); wprintf(L"Product String: %s\n", wstr); // Read the Serial Number String res = hid_get_serial_number_string(handle, wstr, MAX_STR); wprintf(L"Serial Number String: (%d) %s\n", wstr[0], wstr); // Read Indexed String 1 res = hid_get_indexed_string(handle, 1, wstr, MAX_STR); wprintf(L"Indexed String 1: %s\n", wstr); // Toggle LED (cmd 0x80). The first byte is the report number (0x0). buf[0] = 0x0; buf[1] = 0x80; res = hid_write(handle, buf, 65); // Request state (cmd 0x81). The first byte is the report number (0x0). buf[0] = 0x0; buf[1] = 0x81; res = hid_write(handle, buf, 65); // Read requested state res = hid_read(handle, buf, 65); // Print out the returned buffer. for (i = 0; i < 4; i++) printf("buf[%d]: %d\n", i, buf[i]); // Close the device hid_close(handle); // Finalize the hidapi library res = hid_exit(); return 0; } ``` You can also use [hidtest/test.c](hidtest/test.c) as a starting point for your applications. ## License HIDAPI may be used by one of three licenses as outlined in [LICENSE.txt](LICENSE.txt). ## Download HIDAPI can be downloaded from GitHub ```sh git clone git://github.com/libusb/hidapi.git ``` ## Build Instructions This section is long. Don't be put off by this. It's not long because it's complicated to build HIDAPI; it's quite the opposite. This section is long because of the flexibility of HIDAPI and the large number of ways in which it can be built and used. You will likely pick a single build method. HIDAPI can be built in several different ways. If you elect to build a shared library, you will need to build it from the HIDAPI source distribution. If you choose instead to embed HIDAPI directly into your application, you can skip the building and look at the provided platform Makefiles for guidance. These platform Makefiles are located in `linux/`, `libusb/`, `mac/` and `windows/` and are called `Makefile-manual`. In addition, Visual Studio projects are provided. Even if you're going to embed HIDAPI into your project, it is still beneficial to build the example programs. ### Prerequisites: #### Linux: On Linux, you will need to install development packages for libudev, libusb and optionally Fox-toolkit (for the test GUI). On Debian/Ubuntu systems these can be installed by running: ```sh sudo apt-get install libudev-dev libusb-1.0-0-dev libfox-1.6-dev ``` If you downloaded the source directly from the git repository (using git clone), you'll need Autotools: ```sh sudo apt-get install autotools-dev autoconf automake libtool ``` #### FreeBSD: On FreeBSD you will need to install GNU make, libiconv, and optionally Fox-Toolkit (for the test GUI). This is done by running the following: ```sh pkg_add -r gmake libiconv fox16 ``` If you downloaded the source directly from the git repository (using git clone), you'll need Autotools: ```sh pkg_add -r autotools ``` #### Mac: On Mac, you will need to install Fox-Toolkit if you wish to build the Test GUI. There are two ways to do this, and each has a slight complication. Which method you use depends on your use case. 
If you wish to build the Test GUI just for your own testing on your own computer, then the easiest method is to install Fox-Toolkit using ports: ```sh sudo port install fox ``` If you wish to build the TestGUI app bundle to redistribute to others, you will need to install Fox-toolkit from source. This is because the version of fox that gets installed using ports uses the ports X11 libraries which are not compatible with the Apple X11 libraries. If you install Fox with ports and then try to distribute your built app bundle, it will simply fail to run on other systems. To install Fox-Toolkit manually, download the source package from <http://www.fox-toolkit.org>, extract it, and run the following from within the extracted source: ```sh ./configure && make && make install ``` #### Windows: On Windows, if you want to build the test GUI, you will need to get the `hidapi-externals.zip` package from the download site. This contains pre-built binaries for Fox-toolkit. Extract `hidapi-externals.zip` just outside of hidapi, so that hidapi-externals and hidapi are on the same level, as shown: ``` Parent_Folder | +hidapi +hidapi-externals ``` Again, this step is not required if you do not wish to build the test GUI. ### Building HIDAPI into a shared library on Unix Platforms: On Unix-like systems such as Linux, FreeBSD, macOS, and even Windows using MinGW or Cygwin, the easiest way to build a standard system-installed shared library is to use the GNU Autotools build system. If you checked out the source from the git repository, run the following: ```sh ./bootstrap ./configure make make install # as root, or using sudo ``` If you downloaded a source package (i.e.: if you did not run git clone), you can skip the `./bootstrap` step. `./configure` can take several arguments which control the build. The two most likely to be used are: ```sh --enable-testgui Enable build of the Test GUI. This requires Fox toolkit to be installed. Instructions for installing Fox-Toolkit on each platform are in the Prerequisites section above. --prefix=/usr Specify where you want the output headers and libraries to be installed. The example above will put the headers in /usr/include and the binaries in /usr/lib. The default is to install into /usr/local which is fine on most systems. ``` ### Building the manual way on Unix platforms: Manual Makefiles are provided mostly to give the user an idea what it takes to build a program which embeds HIDAPI directly inside of it. These should really be used as examples only. If you want to build a system-wide shared library, use the Autotools method described above. To build HIDAPI using the manual Makefiles, change to the directory of your platform and run make. For example, on Linux run: ```sh cd linux/ make -f Makefile-manual ``` To build the Test GUI using the manual Makefiles: ```sh cd testgui/ make -f Makefile-manual ``` ### Building on Windows: To build the HIDAPI DLL on Windows using Visual Studio, build the `.sln` file in the `windows/` directory. To build the Test GUI on Windows using Visual Studio, build the `.sln` file in the `testgui/` directory. To build HIDAPI using MinGW or Cygwin using Autotools, use the instructions in the section [Building HIDAPI into a shared library on Unix Platforms](#building-hidapi-into-a-shared-library-on-unix-platforms) above. Note that building the Test GUI with MinGW or Cygwin will require the Windows procedure in the [Prerequisites](#prerequisites-1) section above (i.e.: `hidapi-externals.zip`).
To build HIDAPI using MinGW using the Manual Makefiles, see the section [Building the manual way on Unix platforms](#building-the-manual-way-on-unix-platforms) above. HIDAPI can also be built using the Windows DDK (now also called the Windows Driver Kit or WDK). This method was originally required for the HIDAPI build but not anymore. However, some users still prefer this method. It is not as well supported anymore but should still work. Patches are welcome if it does not. To build using the DDK: 1. Install the Windows Driver Kit (WDK) from Microsoft. 2. From the Start menu, in the Windows Driver Kits folder, select Build Environments, then your operating system, then the x86 Free Build Environment (or one that is appropriate for your system). 3. From the console, change directory to the `windows/ddk_build/` directory, which is part of the HIDAPI distribution. 4. Type build. 5. You can find the output files (DLL and LIB) in a subdirectory created by the build system which is appropriate for your environment. On Windows XP, this directory is `objfre_wxp_x86/i386`. ## Cross Compiling This section talks about cross compiling HIDAPI for Linux using Autotools. This is useful for using HIDAPI on embedded Linux targets. These instructions assume the most raw kind of embedded Linux build, where all prerequisites will need to be built first. This process will of course vary based on your embedded Linux build system if you are using one, such as OpenEmbedded or Buildroot. For the purpose of this section, it will be assumed that the following environment variables are exported. ```sh $ export STAGING=$HOME/out $ export HOST=arm-linux ``` `STAGING` and `HOST` can be modified to suit your setup. ### Prerequisites Note that the build of libudev is the very basic configuration. Build libusb. From the libusb source directory, run: ```sh ./configure --host=$HOST --prefix=$STAGING make make install ``` Build libudev. From the libudev source directory, run: ```sh ./configure --disable-gudev --disable-introspection --disable-hwdb \ --host=$HOST --prefix=$STAGING make make install ``` ### Building HIDAPI Build HIDAPI: ``` PKG_CONFIG_DIR= \ PKG_CONFIG_LIBDIR=$STAGING/lib/pkgconfig:$STAGING/share/pkgconfig \ PKG_CONFIG_SYSROOT_DIR=$STAGING \ ./configure --host=$HOST --prefix=$STAGING ``` # prelude.ls [![Build Status](https://travis-ci.org/gkz/prelude-ls.png?branch=master)](https://travis-ci.org/gkz/prelude-ls) is a functionally oriented utility library. It is powerful and flexible. Almost all of its functions are curried. It is written in, and is the recommended base library for, <a href="http://livescript.net">LiveScript</a>. See **[the prelude.ls site](http://preludels.com)** for examples, a reference, and more. You can install via npm `npm install prelude-ls` ### Development `make test` to test `make build` to build `lib` from `src` `make build-browser` to build browser versions long.js ======= A Long class for representing a 64 bit two's-complement integer value derived from the [Closure Library](https://github.com/google/closure-library) for stand-alone use and extended with unsigned support. 
[![Build Status](https://img.shields.io/github/workflow/status/dcodeIO/long.js/Test/main?label=test&logo=github)](https://github.com/dcodeIO/long.js/actions?query=workflow%3ATest) [![Publish Status](https://img.shields.io/github/workflow/status/dcodeIO/long.js/Publish/main?label=publish&logo=github)](https://github.com/dcodeIO/long.js/actions?query=workflow%3APublish) [![npm](https://img.shields.io/npm/v/long.svg?label=npm&color=007acc&logo=npm)](https://www.npmjs.com/package/long) Background ---------- As of [ECMA-262 5th Edition](http://ecma262-5.com/ELS5_HTML.htm#Section_8.5), "all the positive and negative integers whose magnitude is no greater than 2<sup>53</sup> are representable in the Number type", which is "representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic". The [maximum safe integer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) in JavaScript is 2<sup>53</sup>-1. Example: 2<sup>64</sup>-1 is 1844674407370955**1615** but in JavaScript it evaluates to 1844674407370955**2000**. Furthermore, bitwise operators in JavaScript "deal only with integers in the range −2<sup>31</sup> through 2<sup>31</sup>−1, inclusive, or in the range 0 through 2<sup>32</sup>−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2<sup>32</sup> integer values." In some use cases, however, it is required to be able to reliably work with and perform bitwise operations on the full 64 bits. This is where long.js comes into play. Usage ----- The package exports an ECMAScript module with a UMD fallback. ``` $> npm install long ``` ```js import Long from "long"; var value = new Long(0xFFFFFFFF, 0x7FFFFFFF); console.log(value.toString()); ... ``` Note that mixing ESM and CommonJS is not recommended as it yields different classes, albeit with the same functionality. ### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/dcodeIO/long.js@TAG/index.js` (ESM) * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/long@VERSION/index.js` (ESM)<br /> `https://cdn.jsdelivr.net/npm/long@VERSION/umd/index.js` (UMD) * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/long@VERSION/index.js` (ESM)<br /> `https://unpkg.com/long@VERSION/umd/index.js` (UMD) Replace `TAG` or `VERSION`, respectively, with a [specific version](https://github.com/dcodeIO/long.js/releases) or omit it (not recommended in production) to use main/latest. API --- ### Constructor * new **Long**(low: `number`, high?: `number`, unsigned?: `boolean`)<br /> Constructs a 64 bit two's-complement integer, given its low and high 32 bit values as *signed* integers. See the from* functions below for more convenient ways of constructing Longs. ### Fields * Long#**low**: `number`<br /> The low 32 bits as a signed value. * Long#**high**: `number`<br /> The high 32 bits as a signed value. * Long#**unsigned**: `boolean`<br /> Whether unsigned or not. ### Constants * Long.**ZERO**: `Long`<br /> Signed zero. * Long.**ONE**: `Long`<br /> Signed one. * Long.**NEG_ONE**: `Long`<br /> Signed negative one. * Long.**UZERO**: `Long`<br /> Unsigned zero. * Long.**UONE**: `Long`<br /> Unsigned one. * Long.**MAX_VALUE**: `Long`<br /> Maximum signed value. * Long.**MIN_VALUE**: `Long`<br /> Minimum signed value. * Long.**MAX_UNSIGNED_VALUE**: `Long`<br /> Maximum unsigned value.
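As a small illustration of the Background section and the constants above (assuming the ESM import shown in the usage example; `fromString`, `add` and `equals` are described in the sections below), values beyond 2<sup>53</sup>-1 stay exact when kept as Longs:

```js
import Long from "long";

// 2^64-1 loses precision as a plain JavaScript number,
// but is exact as an unsigned Long:
console.log(Long.MAX_UNSIGNED_VALUE.toString()); // "18446744073709551615"
console.log(Long.fromString("18446744073709551615", true)
  .equals(Long.MAX_UNSIGNED_VALUE)); // true

// Signed arithmetic wraps around in two's complement as expected:
console.log(Long.MAX_VALUE.add(Long.ONE).equals(Long.MIN_VALUE)); // true
```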
### Utility * Long.**isLong**(obj: `*`): `boolean`<br /> Tests if the specified object is a Long. * Long.**fromBits**(lowBits: `number`, highBits: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the 64 bit integer that comes by concatenating the given low and high bits. Each is assumed to use 32 bits. * Long.**fromBytes**(bytes: `number[]`, unsigned?: `boolean`, le?: `boolean`): `Long`<br /> Creates a Long from its byte representation. * Long.**fromBytesLE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its little endian byte representation. * Long.**fromBytesBE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its big endian byte representation. * Long.**fromInt**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given 32 bit integer value. * Long.**fromNumber**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given value, provided that it is a finite number. Otherwise, zero is returned. * Long.**fromString**(str: `string`, unsigned?: `boolean`, radix?: `number`)<br /> Long.**fromString**(str: `string`, radix: `number`)<br /> Returns a Long representation of the given string, written using the specified radix. * Long.**fromValue**(val: `*`, unsigned?: `boolean`): `Long`<br /> Converts the specified value to a Long using the appropriate from* function for its type. ### Methods * Long#**add**(addend: `Long | number | string`): `Long`<br /> Returns the sum of this and the specified Long. * Long#**and**(other: `Long | number | string`): `Long`<br /> Returns the bitwise AND of this Long and the specified. * Long#**compare**/**comp**(other: `Long | number | string`): `number`<br /> Compares this Long's value with the specified's. Returns `0` if they are the same, `1` if the this is greater and `-1` if the given one is greater. * Long#**divide**/**div**(divisor: `Long | number | string`): `Long`<br /> Returns this Long divided by the specified. * Long#**equals**/**eq**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value equals the specified's. * Long#**getHighBits**(): `number`<br /> Gets the high 32 bits as a signed integer. * Long#**getHighBitsUnsigned**(): `number`<br /> Gets the high 32 bits as an unsigned integer. * Long#**getLowBits**(): `number`<br /> Gets the low 32 bits as a signed integer. * Long#**getLowBitsUnsigned**(): `number`<br /> Gets the low 32 bits as an unsigned integer. * Long#**getNumBitsAbs**(): `number`<br /> Gets the number of bits needed to represent the absolute value of this Long. * Long#**greaterThan**/**gt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than the specified's. * Long#**greaterThanOrEqual**/**gte**/**ge**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than or equal the specified's. * Long#**isEven**(): `boolean`<br /> Tests if this Long's value is even. * Long#**isNegative**(): `boolean`<br /> Tests if this Long's value is negative. * Long#**isOdd**(): `boolean`<br /> Tests if this Long's value is odd. * Long#**isPositive**(): `boolean`<br /> Tests if this Long's value is positive or zero. * Long#**isZero**/**eqz**(): `boolean`<br /> Tests if this Long's value equals zero. * Long#**lessThan**/**lt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than the specified's. 
* Long#**lessThanOrEqual**/**lte**/**le**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than or equal to the specified's. * Long#**modulo**/**mod**/**rem**(divisor: `Long | number | string`): `Long`<br /> Returns this Long modulo the specified. * Long#**multiply**/**mul**(multiplier: `Long | number | string`): `Long`<br /> Returns the product of this and the specified Long. * Long#**negate**/**neg**(): `Long`<br /> Negates this Long's value. * Long#**not**(): `Long`<br /> Returns the bitwise NOT of this Long. * Long#**countLeadingZeros**/**clz**(): `number`<br /> Returns the number of leading zeros of this Long. * Long#**countTrailingZeros**/**ctz**(): `number`<br /> Returns the number of trailing zeros of this Long. * Long#**notEquals**/**neq**/**ne**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value differs from the specified's. * Long#**or**(other: `Long | number | string`): `Long`<br /> Returns the bitwise OR of this Long and the specified. * Long#**shiftLeft**/**shl**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits shifted to the left by the given amount. * Long#**shiftRight**/**shr**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits arithmetically shifted to the right by the given amount. * Long#**shiftRightUnsigned**/**shru**/**shr_u**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits logically shifted to the right by the given amount. * Long#**rotateLeft**/**rotl**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits rotated to the left by the given amount. * Long#**rotateRight**/**rotr**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits rotated to the right by the given amount. * Long#**subtract**/**sub**(subtrahend: `Long | number | string`): `Long`<br /> Returns the difference of this and the specified Long. * Long#**toBytes**(le?: `boolean`): `number[]`<br /> Converts this Long to its byte representation. * Long#**toBytesLE**(): `number[]`<br /> Converts this Long to its little endian byte representation. * Long#**toBytesBE**(): `number[]`<br /> Converts this Long to its big endian byte representation. * Long#**toInt**(): `number`<br /> Converts the Long to a 32 bit integer, assuming it is a 32 bit integer. * Long#**toNumber**(): `number`<br /> Converts the Long to the nearest floating-point representation of this value (double, 53 bit mantissa). * Long#**toSigned**(): `Long`<br /> Converts this Long to signed. * Long#**toString**(radix?: `number`): `string`<br /> Converts the Long to a string written in the specified radix. * Long#**toUnsigned**(): `Long`<br /> Converts this Long to unsigned. * Long#**xor**(other: `Long | number | string`): `Long`<br /> Returns the bitwise XOR of this Long and the given one. WebAssembly support ------------------- [WebAssembly](http://webassembly.org) supports 64-bit integer arithmetic out of the box, hence a [tiny WebAssembly module](./wasm.wat) is used to compute operations like multiplication, division and remainder more efficiently (slow operations like division are around twice as fast), falling back to floating point based computations in JavaScript where WebAssembly is not yet supported, e.g., in older versions of node. Building -------- Building the UMD fallback: ``` $> npm run build ``` Running the [tests](./tests): ``` $> npm test ``` # u3 - Utility Functions This library contains utility functions for e3, dataflower and other projects.
## Documentation ### Installation ```bash npm install u3 ``` ```bash bower install u3 ``` #### Usage In this documentation I used the lib as follows: ```js var u3 = require("u3"), cache = u3.cache, eachCombination = u3.eachCombination; ``` ### Function wrappers #### cache The `cache(fn)` function caches the fn results, so by the next calls it will return the result of the first call. You can use different arguments, but they won't affect the return value. ```js var a = cache(function fn(x, y, z){ return x + y + z; }); console.log(a(1, 2, 3)); // 6 console.log(a()); // 6 console.log(a()); // 6 ``` It is possible to cache a value too. ```js var a = cache(1 + 2 + 3); console.log(a()); // 6 console.log(a()); // 6 console.log(a()); // 6 ``` ### Math #### eachCombination The `eachCombination(alternativesByDimension, callback)` calls the `callback(a,b,c,...)` on each combination of the `alternatives[a[],b[],c[],...]`. ```js eachCombination([ [1, 2, 3], ["a", "b"] ], console.log); /* 1, "a" 1, "b" 2, "a" 2, "b" 3, "a" 3, "b" */ ``` You can use any dimension and number of alternatives. In the current example we used 2 dimensions. By the first dimension we used 3 alternatives: `[1, 2, 3]` and by the second dimension we used 2 alternatives: `["a", "b"]`. ## License MIT - 2016 Jánszky László Lajos # bl *(BufferList)* [![Build Status](https://api.travis-ci.com/rvagg/bl.svg?branch=master)](https://travis-ci.com/rvagg/bl/) **A Node.js Buffer list collector, reader and streamer thingy.** [![NPM](https://nodei.co/npm/bl.svg)](https://nodei.co/npm/bl/) **bl** is a storage object for collections of Node Buffers, exposing them with the main Buffer readable API. Also works as a duplex stream so you can collect buffers from a stream that emits them and emit buffers to a stream that consumes them! The original buffers are kept intact and copies are only done as necessary. Any reads that require the use of a single original buffer will return a slice of that buffer only (which references the same memory as the original buffer). Reads that span buffers perform concatenation as required and return the results transparently. ```js const { BufferList } = require('bl') const bl = new BufferList() bl.append(Buffer.from('abcd')) bl.append(Buffer.from('efg')) bl.append('hi') // bl will also accept & convert Strings bl.append(Buffer.from('j')) bl.append(Buffer.from([ 0x3, 0x4 ])) console.log(bl.length) // 12 console.log(bl.slice(0, 10).toString('ascii')) // 'abcdefghij' console.log(bl.slice(3, 10).toString('ascii')) // 'defghij' console.log(bl.slice(3, 6).toString('ascii')) // 'def' console.log(bl.slice(3, 8).toString('ascii')) // 'defgh' console.log(bl.slice(5, 10).toString('ascii')) // 'fghij' console.log(bl.indexOf('def')) // 3 console.log(bl.indexOf('asdf')) // -1 // or just use toString! 
console.log(bl.toString()) // 'abcdefghij\u0003\u0004' console.log(bl.toString('ascii', 3, 8)) // 'defgh' console.log(bl.toString('ascii', 5, 10)) // 'fghij' // other standard Buffer readables console.log(bl.readUInt16BE(10)) // 0x0304 console.log(bl.readUInt16LE(10)) // 0x0403 ``` Give it a callback in the constructor and use it just like **[concat-stream](https://github.com/maxogden/node-concat-stream)**: ```js const { BufferListStream } = require('bl') const fs = require('fs') fs.createReadStream('README.md') .pipe(BufferListStream((err, data) => { // note 'new' isn't strictly required // `data` is a complete Buffer object containing the full data console.log(data.toString()) })) ``` Note that when you use the *callback* method like this, the resulting `data` parameter is a concatenation of all `Buffer` objects in the list. If you want to avoid the overhead of this concatenation (in cases of extreme performance consciousness), then avoid the *callback* method and just listen to `'end'` instead, like a standard Stream. Or to fetch a URL using [hyperquest](https://github.com/substack/hyperquest) (should work with [request](http://github.com/mikeal/request) and even plain Node http too!): ```js const hyperquest = require('hyperquest') const { BufferListStream } = require('bl') const url = 'https://raw.github.com/rvagg/bl/master/README.md' hyperquest(url).pipe(BufferListStream((err, data) => { console.log(data.toString()) })) ``` Or, use it as a readable stream to recompose a list of Buffers to an output source: ```js const { BufferListStream } = require('bl') const fs = require('fs') var bl = new BufferListStream() bl.append(Buffer.from('abcd')) bl.append(Buffer.from('efg')) bl.append(Buffer.from('hi')) bl.append(Buffer.from('j')) bl.pipe(fs.createWriteStream('gibberish.txt')) ``` ## API * <a href="#ctor"><code><b>new BufferList([ buf ])</b></code></a> * <a href="#isBufferList"><code><b>BufferList.isBufferList(obj)</b></code></a> * <a href="#length"><code>bl.<b>length</b></code></a> * <a href="#append"><code>bl.<b>append(buffer)</b></code></a> * <a href="#get"><code>bl.<b>get(index)</b></code></a> * <a href="#indexOf"><code>bl.<b>indexOf(value[, byteOffset][, encoding])</b></code></a> * <a href="#slice"><code>bl.<b>slice([ start[, end ] ])</b></code></a> * <a href="#shallowSlice"><code>bl.<b>shallowSlice([ start[, end ] ])</b></code></a> * <a href="#copy"><code>bl.<b>copy(dest, [ destStart, [ srcStart [, srcEnd ] ] ])</b></code></a> * <a href="#duplicate"><code>bl.<b>duplicate()</b></code></a> * <a href="#consume"><code>bl.<b>consume(bytes)</b></code></a> * <a href="#toString"><code>bl.<b>toString([encoding, [ start, [ end ]]])</b></code></a> * <a href="#readXX"><code>bl.<b>readDoubleBE()</b></code>, <code>bl.<b>readDoubleLE()</b></code>, <code>bl.<b>readFloatBE()</b></code>, <code>bl.<b>readFloatLE()</b></code>, <code>bl.<b>readInt32BE()</b></code>, <code>bl.<b>readInt32LE()</b></code>, <code>bl.<b>readUInt32BE()</b></code>, <code>bl.<b>readUInt32LE()</b></code>, <code>bl.<b>readInt16BE()</b></code>, <code>bl.<b>readInt16LE()</b></code>, <code>bl.<b>readUInt16BE()</b></code>, <code>bl.<b>readUInt16LE()</b></code>, <code>bl.<b>readInt8()</b></code>, <code>bl.<b>readUInt8()</b></code></a> * <a href="#ctorStream"><code><b>new BufferListStream([ callback ])</b></code></a> -------------------------------------------------------- <a name="ctor"></a> ### new BufferList([ Buffer | Buffer array | BufferList | BufferList array | String ]) No arguments are _required_ for the constructor, but you can 
initialise the list by passing in a single `Buffer` object or an array of `Buffer` objects. `new` is not strictly required; if you don't instantiate a new object, it will be done automatically for you, so you can create a new instance simply with: ```js const { BufferList } = require('bl') const bl = BufferList() // equivalent to: const { BufferList } = require('bl') const bl = new BufferList() ``` -------------------------------------------------------- <a name="isBufferList"></a> ### BufferList.isBufferList(obj) Determines if the passed object is a `BufferList`. It will return `true` if the passed object is an instance of `BufferList` **or** `BufferListStream` and `false` otherwise. N.B. this won't return `true` for `BufferList` or `BufferListStream` instances created by versions of this library before this static method was added. -------------------------------------------------------- <a name="length"></a> ### bl.length Get the length of the list in bytes. This is the sum of the lengths of all of the buffers contained in the list, minus any initial offset for a semi-consumed buffer at the beginning. Should accurately represent the total number of bytes that can be read from the list. -------------------------------------------------------- <a name="append"></a> ### bl.append(Buffer | Buffer array | BufferList | BufferList array | String) `append(buffer)` adds an additional buffer or BufferList to the internal list. `this` is returned so it can be chained. -------------------------------------------------------- <a name="get"></a> ### bl.get(index) `get()` will return the byte at the specified index. -------------------------------------------------------- <a name="indexOf"></a> ### bl.indexOf(value[, byteOffset][, encoding]) The `indexOf()` method returns the first index at which a given element can be found in the BufferList, or -1 if it is not present. -------------------------------------------------------- <a name="slice"></a> ### bl.slice([ start, [ end ] ]) `slice()` returns a new `Buffer` object containing the bytes within the range specified. Both `start` and `end` are optional and will default to the beginning and end of the list respectively. If the requested range spans a single internal buffer then a slice of that buffer will be returned which shares the original memory range of that Buffer. If the range spans multiple buffers then copy operations will likely occur to give you a uniform Buffer. -------------------------------------------------------- <a name="shallowSlice"></a> ### bl.shallowSlice([ start, [ end ] ]) `shallowSlice()` returns a new `BufferList` object containing the bytes within the range specified. Both `start` and `end` are optional and will default to the beginning and end of the list respectively. No copies will be performed. All buffers in the result share memory with the original list. -------------------------------------------------------- <a name="copy"></a> ### bl.copy(dest, [ destStart, [ srcStart [, srcEnd ] ] ]) `copy()` copies the content of the list into the `dest` buffer, starting from `destStart` and containing the bytes within the range specified with `srcStart` to `srcEnd`. `destStart`, `srcStart` and `srcEnd` are optional and will default to the beginning of the `dest` buffer, and the beginning and end of the list respectively. -------------------------------------------------------- <a name="duplicate"></a> ### bl.duplicate() `duplicate()` performs a **shallow-copy** of the list.
The internal Buffers remain the same, so if you change the underlying Buffers, the change will be reflected in both the original and the duplicate. This method is needed if you want to call `consume()` or `pipe()` and still keep the original list. Example: ```js var bl = new BufferListStream() bl.append('hello') bl.append(' world') bl.append('\n') bl.duplicate().pipe(process.stdout, { end: false }) console.log(bl.toString()) ``` -------------------------------------------------------- <a name="consume"></a> ### bl.consume(bytes) `consume()` will shift bytes *off the start of the list*. The number of bytes consumed doesn't need to line up with the sizes of the internal Buffers&mdash;initial offsets will be calculated accordingly in order to give you a consistent view of the data. -------------------------------------------------------- <a name="toString"></a> ### bl.toString([encoding, [ start, [ end ]]]) `toString()` will return a string representation of the buffer. The optional `start` and `end` arguments are passed on to `slice()`, while the `encoding` is passed on to `toString()` of the resulting Buffer. See the [Buffer#toString()](http://nodejs.org/docs/latest/api/buffer.html#buffer_buf_tostring_encoding_start_end) documentation for more information. -------------------------------------------------------- <a name="readXX"></a> ### bl.readDoubleBE(), bl.readDoubleLE(), bl.readFloatBE(), bl.readFloatLE(), bl.readInt32BE(), bl.readInt32LE(), bl.readUInt32BE(), bl.readUInt32LE(), bl.readInt16BE(), bl.readInt16LE(), bl.readUInt16BE(), bl.readUInt16LE(), bl.readInt8(), bl.readUInt8() All of the standard byte-reading methods of the `Buffer` interface are implemented and will operate across internal Buffer boundaries transparently. See the <b><code>[Buffer](http://nodejs.org/docs/latest/api/buffer.html)</code></b> documentation for how these work. -------------------------------------------------------- <a name="ctorStream"></a> ### new BufferListStream([ callback | Buffer | Buffer array | BufferList | BufferList array | String ]) **BufferListStream** is a Node **[Duplex Stream](http://nodejs.org/docs/latest/api/stream.html#stream_class_stream_duplex)**, so it can be read from and written to like a standard Node stream. You can also `pipe()` to and from a **BufferListStream** instance. The constructor takes an optional callback; if supplied, the callback will be called with an error argument followed by a reference to the **bl** instance, when `bl.end()` is called (i.e. from a piped stream). This is a convenient method of collecting the entire contents of a stream, particularly when the stream is *chunky*, such as a network stream. Normally, no arguments are required for the constructor, but you can initialise the list by passing in a single `Buffer` object or an array of `Buffer` objects. `new` is not strictly required; if you don't instantiate a new object, it will be done automatically for you, so you can create a new instance simply with: ```js const { BufferListStream } = require('bl') const bl = BufferListStream() // equivalent to: const { BufferListStream } = require('bl') const bl = new BufferListStream() ``` N.B.
For backwards compatibility reasons, `BufferListStream` is the **default** export when you `require('bl')`: ```js const { BufferListStream } = require('bl') // equivalent to: const BufferListStream = require('bl') ``` -------------------------------------------------------- ## Contributors **bl** is brought to you by the following hackers: * [Rod Vagg](https://github.com/rvagg) * [Matteo Collina](https://github.com/mcollina) * [Jarett Cruger](https://github.com/jcrugzz) <a name="license"></a> ## License &amp; copyright Copyright (c) 2013-2019 bl contributors (listed above). bl is licensed under the MIT license. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE.md file for more details. # axios // adapters The modules under `adapters/` are modules that handle dispatching a request and settling a returned `Promise` once a response is received. ## Example ```js var settle = require('./../core/settle'); module.exports = function myAdapter(config) { // At this point: // - config has been merged with defaults // - request transformers have already run // - request interceptors have already run // Make the request using config provided // Upon response settle the Promise return new Promise(function(resolve, reject) { var response = { data: responseData, status: request.status, statusText: request.statusText, headers: responseHeaders, config: config, request: request }; settle(resolve, reject, response); // From here: // - response transformers will run // - response interceptors will run }); } ``` # Visitor utilities for AssemblyScript Compiler transformers ## Example ### List Fields The transformer: ```ts import { ClassDeclaration, FieldDeclaration, MethodDeclaration, } from "../../as"; import { ClassDecorator, registerDecorator } from "../decorator"; import { toString } from "../utils"; class ListMembers extends ClassDecorator { visitFieldDeclaration(node: FieldDeclaration): void { if (!node.name) console.log(toString(node) + "\n"); const name = toString(node.name); const _type = toString(node.type!); this.stdout.write(name + ": " + _type + "\n"); } visitMethodDeclaration(node: MethodDeclaration): void { const name = toString(node.name); if (name == "constructor") { return; } const sig = toString(node.signature); this.stdout.write(name + ": " + sig + "\n"); } visitClassDeclaration(node: ClassDeclaration): void { this.visit(node.members); } get name(): string { return "list"; } } export = registerDecorator(new ListMembers()); ``` assembly/foo.ts: ```ts @list class Foo { a: u8; b: bool; i: i32; } ``` And then compile with `--transform` flag: ``` asc assembly/foo.ts --transform ./dist/examples/list --noEmit ``` Which prints the following to the console: ``` a: u8 b: bool i: i32 ``` # Can I cache this? [![Build Status](https://travis-ci.org/kornelski/http-cache-semantics.svg?branch=master)](https://travis-ci.org/kornelski/http-cache-semantics) `CachePolicy` tells when responses can be reused from a cache, taking into account [HTTP RFC 7234](http://httpwg.org/specs/rfc7234.html) rules for user agents and shared caches. It also implements [RFC 5861](https://tools.ietf.org/html/rfc5861), implementing `stale-if-error` and `stale-while-revalidate`. It's aware of many tricky details such as the `Vary` header, proxy revalidation, and authenticated responses. ## Usage Cacheability of an HTTP response depends on how it was requested, so both `request` and `response` are required to create the policy. 
```js const policy = new CachePolicy(request, response, options); if (!policy.storable()) { // throw the response away, it's not usable at all return; } // Cache the data AND the policy object in your cache // (this is pseudocode, roll your own cache (lru-cache package works)) letsPretendThisIsSomeCache.set( request.url, { policy, response }, policy.timeToLive() ); ``` ```js // And later, when you receive a new request: const { policy, response } = letsPretendThisIsSomeCache.get(newRequest.url); // It's not enough that it exists in the cache, it has to match the new request, too: if (policy && policy.satisfiesWithoutRevalidation(newRequest)) { // OK, the previous response can be used to respond to the `newRequest`. // Response headers have to be updated, e.g. to add Age and remove uncacheable headers. response.headers = policy.responseHeaders(); return response; } ``` It may be surprising, but it's not enough for an HTTP response to be [fresh](#yo-fresh) to satisfy a request. It may need to match request headers specified in `Vary`. Even a matching fresh response may still not be usable if the new request restricted cacheability, etc. The key method is `satisfiesWithoutRevalidation(newRequest)`, which checks whether the `newRequest` is compatible with the original request and whether all caching conditions are met. ### Constructor options Request and response must have a `headers` property with all header names in lower case. `url`, `status` and `method` are optional (defaults are any URL, status `200`, and `GET` method). ```js const request = { url: '/', method: 'GET', headers: { accept: '*/*', }, }; const response = { status: 200, headers: { 'cache-control': 'public, max-age=7234', }, }; const options = { shared: true, cacheHeuristic: 0.1, immutableMinTimeToLive: 24 * 3600 * 1000, // 24h ignoreCargoCult: false, }; ``` If `options.shared` is `true` (default), then the response is evaluated from a perspective of a shared cache (i.e. `private` is not cacheable and `s-maxage` is respected). If `options.shared` is `false`, then the response is evaluated from a perspective of a single-user cache (i.e. `private` is cacheable and `s-maxage` is ignored). `shared: true` is recommended for HTTP clients. `options.cacheHeuristic` is a fraction of response's age that is used as a fallback cache duration. The default is 0.1 (10%), e.g. if a file hasn't been modified for 100 days, it'll be cached for 100\*0.1 = 10 days. `options.immutableMinTimeToLive` is a number of milliseconds to assume as the default time to cache responses with `Cache-Control: immutable`. Note that [per RFC](http://httpwg.org/http-extensions/immutable.html) these can become stale, so `max-age` still overrides the default. If `options.ignoreCargoCult` is true, common anti-cache directives will be completely ignored if the non-standard `pre-check` and `post-check` directives are present. These two useless directives are most commonly found in bad StackOverflow answers and PHP's "session limiter" defaults. ### `storable()` Returns `true` if the response can be stored in a cache. If it's `false` then you MUST NOT store either the request or the response. ### `satisfiesWithoutRevalidation(newRequest)` This is the most important method. Use this method to check whether the cached response is still fresh in the context of the new request. If it returns `true`, then the given `request` matches the original response this cache policy has been created with, and the response can be reused without contacting the server. 
Note that the old response can't be returned without being updated, see `responseHeaders()`. If it returns `false`, then the response may not be matching at all (e.g. it's for a different URL or method), or may need to be refreshed first (see `revalidationHeaders()`). ### `responseHeaders()` Returns an updated, filtered set of response headers to return to clients receiving the cached response. This function is necessary, because proxies MUST always remove hop-by-hop headers (such as `TE` and `Connection`) and update the response's `Age` to avoid doubling cache time. ```js cachedResponse.headers = cachePolicy.responseHeaders(cachedResponse); ``` ### `timeToLive()` Returns the approximate time in _milliseconds_ until the response becomes stale (i.e. not fresh). After that time (when `timeToLive() <= 0`) the response might not be usable without revalidation. However, there are exceptions, e.g. a client can explicitly allow stale responses, so always check with `satisfiesWithoutRevalidation()`. `stale-if-error` and `stale-while-revalidate` extend the time to live of the cache; the cached response can still be used while stale. ### `toObject()`/`fromObject(json)` Chances are you'll want to store the `CachePolicy` object along with the cached response. `obj = policy.toObject()` gives a plain JSON-serializable object. `policy = CachePolicy.fromObject(obj)` creates an instance from it. ### Refreshing stale cache (revalidation) When a cached response has expired, it can be made fresh again by making a request to the origin server. The server may respond with status 304 (Not Modified) without sending the response body again, saving bandwidth. The following methods help perform the update efficiently and correctly. #### `revalidationHeaders(newRequest)` Returns an updated, filtered set of request headers to send to the origin server to check if the cached response can be reused. These headers allow the origin server to return status 304 indicating the response is still fresh. All headers unrelated to caching are passed through as-is. Use this method when updating the cache from the origin server. ```js updateRequest.headers = cachePolicy.revalidationHeaders(updateRequest); ``` #### `revalidatedPolicy(revalidationRequest, revalidationResponse)` Use this method to update the cache after receiving a new response from the origin server. It returns an object with two keys: - `policy` — A new `CachePolicy` with HTTP headers updated from `revalidationResponse`. You can always replace the old cached `CachePolicy` with the new one. - `modified` — Boolean indicating whether the response body has changed. - If `false`, then a valid 304 Not Modified response has been received, and you can reuse the old cached response body. This is also affected by `stale-if-error`. - If `true`, you should use the new response's body (if present), or make another request to the origin server without any conditional headers (i.e. don't use `revalidationHeaders()` this time) to get the new resource. ```js // When serving requests from cache: const { policy: oldPolicy, response: oldResponse } = letsPretendThisIsSomeCache.get( newRequest.url ); if (!oldPolicy.satisfiesWithoutRevalidation(newRequest)) { // Change the request to ask the origin server if the cached response can be used newRequest.headers = oldPolicy.revalidationHeaders(newRequest); // Send request to the origin server.
// The server may respond with status 304 const newResponse = await makeRequest(newRequest); // Create updated policy and combined response from the old and new data const { policy, modified } = oldPolicy.revalidatedPolicy( newRequest, newResponse ); const response = modified ? newResponse : oldResponse; // Update the cache with the newer/fresher response letsPretendThisIsSomeCache.set( newRequest.url, { policy, response }, policy.timeToLive() ); // And proceed returning cached response as usual response.headers = policy.responseHeaders(); return response; } ``` # Yo, FRESH ![satisfiesWithoutRevalidation](fresh.jpg) ## Used by - [ImageOptim API](https://imageoptim.com/api), [make-fetch-happen](https://github.com/zkat/make-fetch-happen), [cacheable-request](https://www.npmjs.com/package/cacheable-request) ([got](https://www.npmjs.com/package/got)), [npm/registry-fetch](https://github.com/npm/registry-fetch), [etc.](https://github.com/kornelski/http-cache-semantics/network/dependents) ## Implemented - `Cache-Control` response header with all the quirks. - `Expires` with check for bad clocks. - `Pragma` response header. - `Age` response header. - `Vary` response header. - Default cacheability of statuses and methods. - Requests for stale data. - Filtering of hop-by-hop headers. - Basic revalidation request - `stale-if-error` ## Unimplemented - Merging of range requests, `If-Range` (but correctly supports them as non-cacheable) - Revalidation of multiple representations ### Trusting server `Date` Per the RFC, the cache should take into account the time between the server-supplied `Date` and the time it received the response. The RFC-mandated behavior creates two problems: * Servers with an incorrectly set timezone may add several hours to cache age (or more, if the clock is completely wrong). * Even reasonably correct clocks may be off by a couple of seconds, breaking the `max-age=1` trick (which is useful for reverse proxies on high-traffic servers). Previous versions of this library had an option to ignore the server date if it was "too inaccurate". To support the `max-age=1` trick the library also has to ignore dates that are pretty accurate. There's no point in having an option to trust dates that are only a bit inaccurate, so this library won't trust any server dates. `max-age` will be interpreted from the time the response has been received, not from when it has been sent. This will affect only [RFC 1149 networks](https://tools.ietf.org/html/rfc1149). A JSON with color names and their values. Based on http://dev.w3.org/csswg/css-color/#named-colors. [![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/) ```js var colors = require('color-name'); colors.red //[255,0,0] ``` <a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a> Overview [![Build Status](https://travis-ci.org/lydell/js-tokens.svg?branch=master)](https://travis-ci.org/lydell/js-tokens) ======== A regex that tokenizes JavaScript. ```js var jsTokens = require("js-tokens").default var jsString = "var foo=opts.foo;\n..." jsString.match(jsTokens) // ["var", " ", "foo", "=", "opts", ".", "foo", ";", "\n", ...] ``` Installation ============ `npm install js-tokens` ```js import jsTokens from "js-tokens" // or: var jsTokens = require("js-tokens").default ``` Usage ===== ### `jsTokens` ### A regex with the `g` flag that matches JavaScript tokens. The regex _always_ matches, even invalid JavaScript and the empty string.
The next match is always directly after the previous. ### `var token = matchToToken(match)` ### ```js import {matchToToken} from "js-tokens" // or: var matchToToken = require("js-tokens").matchToToken ``` Takes a `match` returned by `jsTokens.exec(string)`, and returns a `{type: String, value: String}` object. The following types are available: - string - comment - regex - number - name - punctuator - whitespace - invalid Multi-line comments and strings also have a `closed` property indicating if the token was closed or not (see below). Comments and strings both come in several flavors. To distinguish them, check if the token starts with `//`, `/*`, `'`, `"` or `` ` ``. Names are ECMAScript IdentifierNames, that is, including both identifiers and keywords. You may use [is-keyword-js] to tell them apart. Whitespace includes both line terminators and other whitespace. [is-keyword-js]: https://github.com/crissdev/is-keyword-js ECMAScript support ================== The intention is to always support the latest ECMAScript version whose feature set has been finalized. If adding support for a newer version requires changes, a new version with a major verion bump will be released. Currently, ECMAScript 2018 is supported. Invalid code handling ===================== Unterminated strings are still matched as strings. JavaScript strings cannot contain (unescaped) newlines, so unterminated strings simply end at the end of the line. Unterminated template strings can contain unescaped newlines, though, so they go on to the end of input. Unterminated multi-line comments are also still matched as comments. They simply go on to the end of the input. Unterminated regex literals are likely matched as division and whatever is inside the regex. Invalid ASCII characters have their own capturing group. Invalid non-ASCII characters are treated as names, to simplify the matching of names (except unicode spaces which are treated as whitespace). Note: See also the [ES2018](#es2018) section. Regex literals may contain invalid regex syntax. They are still matched as regex literals. They may also contain repeated regex flags, to keep the regex simple. Strings may contain invalid escape sequences. Limitations =========== Tokenizing JavaScript using regexes—in fact, _one single regex_—won’t be perfect. But that’s not the point either. You may compare jsTokens with [esprima] by using `esprima-compare.js`. See `npm run esprima-compare`! [esprima]: http://esprima.org/ ### Template string interpolation ### Template strings are matched as single tokens, from the starting `` ` `` to the ending `` ` ``, including interpolations (whose tokens are not matched individually). Matching template string interpolations requires recursive balancing of `{` and `}`—something that JavaScript regexes cannot do. Only one level of nesting is supported. ### Division and regex literals collision ### Consider this example: ```js var g = 9.82 var number = bar / 2/g var regex = / 2/g ``` A human can easily understand that in the `number` line we’re dealing with division, and in the `regex` line we’re dealing with a regex literal. How come? Because humans can look at the whole code to put the `/` characters in context. A JavaScript regex cannot. It only sees forwards. (Well, ES2018 regexes can also look backwards. See the [ES2018](#es2018) section). 
When the `jsTokens` regex scans through the above, it will see the following at the end of both the `number` and `regex` rows: ```js / 2/g ``` It is then impossible to know if that is a regex literal, or part of an expression dealing with division. Here is a similar case: ```js foo /= 2/g foo(/= 2/g) ``` The first line divides the `foo` variable by `2/g`. The second line calls the `foo` function with the regex literal `/= 2/g`. Again, since `jsTokens` only sees forwards, it cannot tell the two cases apart. There are some cases where we _can_ tell division and regex literals apart, though. First off, we have the simple cases where there’s only one slash in the line: ```js var foo = 2/g foo /= 2 ``` Regex literals cannot contain newlines, so the above cases are correctly identified as division. Things are only problematic when there is more than one non-comment slash in a single line. Secondly, not every character is a valid regex flag. ```js var number = bar / 2/e ``` The above example is also correctly identified as division, because `e` is not a valid regex flag. I initially wanted to future-proof by allowing `[a-zA-Z]*` (any letter) as flags, but it is not worth it since it increases the number of ambiguous cases. So only the standard `g`, `m`, `i`, `y` and `u` flags are allowed. This means that the above example will be identified as division as long as you don’t rename the `e` variable to some permutation of `gmiyus` 1 to 6 characters long. Lastly, we can look _forward_ for information. - If the token following what looks like a regex literal is not valid after a regex literal, but is valid in a division expression, then the regex literal is treated as division instead. For example, a flagless regex cannot be followed by a string, number or name, but all of those three can be the denominator of a division. - Generally, if what looks like a regex literal is followed by an operator, the regex literal is treated as division instead. This is because regexes are seldom used with operators (such as `+`, `*`, `&&` and `==`), but division could likely be part of such an expression. Please consult the regex source and the test cases for precise information on when regex or division is matched (should you need to know). In short, you could sum it up as: If the end of a statement looks like a regex literal (even if it isn’t), it will be treated as one. Otherwise it should work as expected (if you write sane code). ### ES2018 ### ES2018 added some nice regex improvements to the language. - [Unicode property escapes] should allow telling names and invalid non-ASCII characters apart without blowing up the regex size. - [Lookbehind assertions] should allow telling division and regex literals apart in more cases. - [Named capture groups] might simplify some things. These things would be nice to do, but are not critical. They probably have to wait until the oldest maintained Node.js LTS release supports those features. [Unicode property escapes]: http://2ality.com/2017/07/regexp-unicode-property-escapes.html [Lookbehind assertions]: http://2ality.com/2017/05/regexp-lookbehind-assertions.html [Named capture groups]: http://2ality.com/2017/05/regexp-named-capture-groups.html License ======= [MIT](LICENSE).
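To tie the API described above together, here is a small usage sketch (not taken from the js-tokens docs): it runs the `jsTokens` regex in an `exec` loop over a made-up snippet and classifies each match with `matchToToken`. The zero-length-match guard is purely defensive.

```js
var jsTokens = require("js-tokens").default
var matchToToken = require("js-tokens").matchToToken

var code = "var greeting = 'hi' // salutation"
var match

while ((match = jsTokens.exec(code)) !== null) {
  // Defensive guard: never loop forever on a zero-length match.
  if (match[0] === "") { jsTokens.lastIndex++; continue }
  var token = matchToToken(match)
  // Prints e.g. "name: var", "punctuator: =", "string: 'hi'", "comment: // salutation"
  console.log(token.type + ": " + token.value)
}
```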
# pbkdf2 [![NPM Package](https://img.shields.io/npm/v/pbkdf2.svg?style=flat-square)](https://www.npmjs.org/package/pbkdf2) [![Build Status](https://img.shields.io/travis/crypto-browserify/pbkdf2.svg?branch=master&style=flat-square)](https://travis-ci.org/crypto-browserify/pbkdf2) [![Dependency status](https://img.shields.io/david/crypto-browserify/pbkdf2.svg?style=flat-square)](https://david-dm.org/crypto-browserify/pbkdf2#info=dependencies) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) This library provides the functionality of PBKDF2 with the ability to use any supported hashing algorithm returned from `crypto.getHashes()`. ## Usage ```js var pbkdf2 = require('pbkdf2') var derivedKey = pbkdf2.pbkdf2Sync('password', 'salt', 1, 32, 'sha512') ... ``` For more information on the API, please see the relevant [Node documentation](https://nodejs.org/api/crypto.html#crypto_crypto_pbkdf2_password_salt_iterations_keylen_digest_callback). For high performance, use the `async` variant (`pbkdf2.pbkdf2`), not `pbkdf2.pbkdf2Sync`; this variant has the opportunity to use `window.crypto.subtle` when browserified. ## Credits This module is a derivative of [cryptocoinjs/pbkdf2-sha256](https://github.com/cryptocoinjs/pbkdf2-sha256/), so thanks to [JP Richardson](https://github.com/jprichardson/) for laying the ground work. Thank you to [FangDun Cai](https://github.com/fundon) for donating the package name on npm; if you're looking for his previous module it is located at [fundon/pbkdf2](https://github.com/fundon/pbkdf2). # to-regex-range [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/to-regex-range.svg?style=flat)](https://www.npmjs.com/package/to-regex-range) [![NPM monthly downloads](https://img.shields.io/npm/dm/to-regex-range.svg?style=flat)](https://npmjs.org/package/to-regex-range) [![NPM total downloads](https://img.shields.io/npm/dt/to-regex-range.svg?style=flat)](https://npmjs.org/package/to-regex-range) [![Linux Build Status](https://img.shields.io/travis/micromatch/to-regex-range.svg?style=flat&label=Travis)](https://travis-ci.org/micromatch/to-regex-range) > Pass two numbers, get a regex-compatible source string for matching ranges. Validated against more than 2.78 million test assertions. Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save to-regex-range ``` <details> <summary><strong>What does this do?</strong></summary> <br> This library generates the `source` string to be passed to `new RegExp()` for matching a range of numbers. **Example** ```js const toRegexRange = require('to-regex-range'); const regex = new RegExp(toRegexRange('15', '95')); ``` A string is returned so that you can do whatever you need with it before passing it to `new RegExp()` (like adding `^` or `$` boundaries, defining flags, or combining it with another string). <br> </details> <details> <summary><strong>Why use this library?</strong></summary> <br> ### Convenience Creating regular expressions for matching numbers gets deceptively complicated pretty fast.
For example, let's say you need a validation regex for matching part of a user-id, postal code, social security number, tax id, etc: * regex for matching `1` => `/1/` (easy enough) * regex for matching `1` through `5` => `/[1-5]/` (not bad...) * regex for matching `1` or `5` => `/(1|5)/` (still easy...) * regex for matching `1` through `50` => `/([1-9]|[1-4][0-9]|50)/` (uh-oh...) * regex for matching `1` through `55` => `/([1-9]|[1-4][0-9]|5[0-5])/` (no prob, I can do this...) * regex for matching `1` through `555` => `/([1-9]|[1-9][0-9]|[1-4][0-9]{2}|5[0-4][0-9]|55[0-5])/` (maybe not...) * regex for matching `0001` through `5555` => `/(0{3}[1-9]|0{2}[1-9][0-9]|0[1-9][0-9]{2}|[1-4][0-9]{3}|5[0-4][0-9]{2}|55[0-4][0-9]|555[0-5])/` (okay, I get the point!) The numbers are contrived, but they're also really basic. In the real world you might need to generate a regex on-the-fly for validation. **Learn more** If you're interested in learning more about [character classes](http://www.regular-expressions.info/charclass.html) and other regex features, I personally have always found [regular-expressions.info](http://www.regular-expressions.info/charclass.html) to be pretty useful. ### Heavily tested As of April 07, 2019, this library runs [>1m test assertions](./test/test.js) against generated regex-ranges to provide brute-force verification that results are correct. Tests run in ~280ms on my MacBook Pro, 2.5 GHz Intel Core i7. ### Optimized Generated regular expressions are optimized: * duplicate sequences and character classes are reduced using quantifiers * smart enough to use `?` conditionals when number(s) or range(s) can be positive or negative * uses fragment caching to avoid processing the same exact string more than once <br> </details> ## Usage Add this library to your javascript application with the following line of code: ```js const toRegexRange = require('to-regex-range'); ``` The main export is a function that takes two integers: the `min` value and `max` value (formatted as strings or numbers). ```js const source = toRegexRange('15', '95'); //=> 1[5-9]|[2-8][0-9]|9[0-5] const regex = new RegExp(`^${source}$`); console.log(regex.test('14')); //=> false console.log(regex.test('50')); //=> true console.log(regex.test('94')); //=> true console.log(regex.test('96')); //=> false ``` ## Options ### options.capture **Type**: `boolean` **Default**: `undefined` Wrap the returned value in parentheses when there is more than one regex condition. Useful when you're dynamically generating ranges. ```js console.log(toRegexRange('-10', '10')); //=> -[1-9]|-?10|[0-9] console.log(toRegexRange('-10', '10', { capture: true })); //=> (-[1-9]|-?10|[0-9]) ``` ### options.shorthand **Type**: `boolean` **Default**: `undefined` Use the regex shorthand for `[0-9]`: ```js console.log(toRegexRange('0', '999999')); //=> [0-9]|[1-9][0-9]{1,5} console.log(toRegexRange('0', '999999', { shorthand: true })); //=> \d|[1-9]\d{1,5} ``` ### options.relaxZeros **Type**: `boolean` **Default**: `true` This option relaxes matching for leading zeros when ranges are zero-padded.
```js const source = toRegexRange('-0010', '0010'); const regex = new RegExp(`^${source}$`); console.log(regex.test('-10')); //=> true console.log(regex.test('-010')); //=> true console.log(regex.test('-0010')); //=> true console.log(regex.test('10')); //=> true console.log(regex.test('010')); //=> true console.log(regex.test('0010')); //=> true ``` When `relaxZeros` is false, matching is strict: ```js const source = toRegexRange('-0010', '0010', { relaxZeros: false }); const regex = new RegExp(`^${source}$`); console.log(regex.test('-10')); //=> false console.log(regex.test('-010')); //=> false console.log(regex.test('-0010')); //=> true console.log(regex.test('10')); //=> false console.log(regex.test('010')); //=> false console.log(regex.test('0010')); //=> true ``` ## Examples | **Range** | **Result** | **Compile time** | | --- | --- | --- | | `toRegexRange(-10, 10)` | `-[1-9]\|-?10\|[0-9]` | _132μs_ | | `toRegexRange(-100, -10)` | `-1[0-9]\|-[2-9][0-9]\|-100` | _50μs_ | | `toRegexRange(-100, 100)` | `-[1-9]\|-?[1-9][0-9]\|-?100\|[0-9]` | _42μs_ | | `toRegexRange(001, 100)` | `0{0,2}[1-9]\|0?[1-9][0-9]\|100` | _109μs_ | | `toRegexRange(001, 555)` | `0{0,2}[1-9]\|0?[1-9][0-9]\|[1-4][0-9]{2}\|5[0-4][0-9]\|55[0-5]` | _51μs_ | | `toRegexRange(0010, 1000)` | `0{0,2}1[0-9]\|0{0,2}[2-9][0-9]\|0?[1-9][0-9]{2}\|1000` | _31μs_ | | `toRegexRange(1, 50)` | `[1-9]\|[1-4][0-9]\|50` | _24μs_ | | `toRegexRange(1, 55)` | `[1-9]\|[1-4][0-9]\|5[0-5]` | _23μs_ | | `toRegexRange(1, 555)` | `[1-9]\|[1-9][0-9]\|[1-4][0-9]{2}\|5[0-4][0-9]\|55[0-5]` | _30μs_ | | `toRegexRange(1, 5555)` | `[1-9]\|[1-9][0-9]{1,2}\|[1-4][0-9]{3}\|5[0-4][0-9]{2}\|55[0-4][0-9]\|555[0-5]` | _43μs_ | | `toRegexRange(111, 555)` | `11[1-9]\|1[2-9][0-9]\|[2-4][0-9]{2}\|5[0-4][0-9]\|55[0-5]` | _38μs_ | | `toRegexRange(29, 51)` | `29\|[34][0-9]\|5[01]` | _24μs_ | | `toRegexRange(31, 877)` | `3[1-9]\|[4-9][0-9]\|[1-7][0-9]{2}\|8[0-6][0-9]\|87[0-7]` | _32μs_ | | `toRegexRange(5, 5)` | `5` | _8μs_ | | `toRegexRange(5, 6)` | `5\|6` | _11μs_ | | `toRegexRange(1, 2)` | `1\|2` | _6μs_ | | `toRegexRange(1, 5)` | `[1-5]` | _15μs_ | | `toRegexRange(1, 10)` | `[1-9]\|10` | _22μs_ | | `toRegexRange(1, 100)` | `[1-9]\|[1-9][0-9]\|100` | _25μs_ | | `toRegexRange(1, 1000)` | `[1-9]\|[1-9][0-9]{1,2}\|1000` | _31μs_ | | `toRegexRange(1, 10000)` | `[1-9]\|[1-9][0-9]{1,3}\|10000` | _34μs_ | | `toRegexRange(1, 100000)` | `[1-9]\|[1-9][0-9]{1,4}\|100000` | _36μs_ | | `toRegexRange(1, 1000000)` | `[1-9]\|[1-9][0-9]{1,5}\|1000000` | _42μs_ | | `toRegexRange(1, 10000000)` | `[1-9]\|[1-9][0-9]{1,6}\|10000000` | _42μs_ | ## Heads up! **Order of arguments** When the `min` is larger than the `max`, values will be flipped to create a valid range: ```js toRegexRange('51', '29'); ``` Is effectively flipped to: ```js toRegexRange('29', '51'); //=> 29|[3-4][0-9]|5[0-1] ``` **Steps / increments** This library does not support steps (increments). A pr to add support would be welcome. ## History ### v2.0.0 - 2017-04-21 **New features** Adds support for zero-padding! ### v1.0.0 **Optimizations** Repeating ranges are now grouped using quantifiers. rocessing time is roughly the same, but the generated regex is much smaller, which should result in faster matching. ## Attribution Inspired by the python library [range-regex](https://github.com/dimka665/range-regex). ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). 
</details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [expand-range](https://www.npmjs.com/package/expand-range): Fast, bash-like range expansion. Expand a range of numbers or letters, uppercase or lowercase. Used… [more](https://github.com/jonschlinkert/expand-range) | [homepage](https://github.com/jonschlinkert/expand-range "Fast, bash-like range expansion. Expand a range of numbers or letters, uppercase or lowercase. Used by micromatch.") * [fill-range](https://www.npmjs.com/package/fill-range): Fill in a range of numbers or letters, optionally passing an increment or `step` to… [more](https://github.com/jonschlinkert/fill-range) | [homepage](https://github.com/jonschlinkert/fill-range "Fill in a range of numbers or letters, optionally passing an increment or `step` to use, or create a regex-compatible range with `options.toRegex`") * [micromatch](https://www.npmjs.com/package/micromatch): Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | [homepage](https://github.com/micromatch/micromatch "Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch.") * [repeat-element](https://www.npmjs.com/package/repeat-element): Create an array by repeating the given value n times. | [homepage](https://github.com/jonschlinkert/repeat-element "Create an array by repeating the given value n times.") * [repeat-string](https://www.npmjs.com/package/repeat-string): Repeat the given string n times. Fastest implementation for repeating a string. | [homepage](https://github.com/jonschlinkert/repeat-string "Repeat the given string n times. Fastest implementation for repeating a string.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 63 | [jonschlinkert](https://github.com/jonschlinkert) | | 3 | [doowb](https://github.com/doowb) | | 2 | [realityking](https://github.com/realityking) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) Please consider supporting me on Patreon, or [start your own Patreon page](https://patreon.com/invite/bxpbvm)! <a href="https://www.patreon.com/jonschlinkert"> <img src="https://c5.patreon.com/external/logo/become_a_patron_button@2x.png" height="50"> </a> ### License Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). 
*** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on April 07, 2019._ # <img src="docs_app/assets/Rx_Logo_S.png" alt="RxJS Logo" width="86" height="86"> RxJS: Reactive Extensions For JavaScript [![CircleCI](https://circleci.com/gh/ReactiveX/rxjs/tree/6.x.svg?style=svg)](https://circleci.com/gh/ReactiveX/rxjs/tree/6.x) [![npm version](https://badge.fury.io/js/%40reactivex%2Frxjs.svg)](http://badge.fury.io/js/%40reactivex%2Frxjs) [![Join the chat at https://gitter.im/Reactive-Extensions/RxJS](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/Reactive-Extensions/RxJS?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) # RxJS 6 Stable ### MIGRATION AND RELEASE INFORMATION: Find out how to update to v6, **automatically update your TypeScript code**, and more! - [Current home is MIGRATION.md](./docs_app/content/guide/v6/migration.md) ### FOR V 5.X PLEASE GO TO [THE 5.0 BRANCH](https://github.com/ReactiveX/rxjs/tree/5.x) Reactive Extensions Library for JavaScript. This is a rewrite of [Reactive-Extensions/RxJS](https://github.com/Reactive-Extensions/RxJS) and is the latest production-ready version of RxJS. This rewrite is meant to have better performance, better modularity, better debuggable call stacks, while staying mostly backwards compatible, with some breaking changes that reduce the API surface. [Apache 2.0 License](LICENSE.txt) - [Code of Conduct](CODE_OF_CONDUCT.md) - [Contribution Guidelines](CONTRIBUTING.md) - [Maintainer Guidelines](doc_app/content/maintainer-guidelines.md) - [API Documentation](https://rxjs.dev/) ## Versions In This Repository - [master](https://github.com/ReactiveX/rxjs/commits/master) - This is all of the current, unreleased work, which is against v6 of RxJS right now - [stable](https://github.com/ReactiveX/rxjs/commits/stable) - This is the branch for the latest version you'd get if you do `npm install rxjs` ## Important By contributing or commenting on issues in this repository, whether you've read them or not, you're agreeing to the [Contributor Code of Conduct](CODE_OF_CONDUCT.md). Much like traffic laws, ignorance doesn't grant you immunity. ## Installation and Usage ### ES6 via npm ```sh npm install rxjs ``` It's recommended to pull in the Observable creation methods you need directly from `'rxjs'` as shown below with `range`. And you can pull in any operator you need from one spot, under `'rxjs/operators'`. ```ts import { range } from "rxjs"; import { map, filter } from "rxjs/operators"; range(1, 200) .pipe( filter(x => x % 2 === 1), map(x => x + x) ) .subscribe(x => console.log(x)); ``` Here, we're using the built-in `pipe` method on Observables to combine operators. See [pipeable operators](https://github.com/ReactiveX/rxjs/blob/master/doc/pipeable-operators.md) for more information. 
### CommonJS via npm To install this library for CommonJS (CJS) usage, use the following command: ```sh npm install rxjs ``` (Note: destructuring available in Node 8+) ```js const { range } = require('rxjs'); const { map, filter } = require('rxjs/operators'); range(1, 200).pipe( filter(x => x % 2 === 1), map(x => x + x) ).subscribe(x => console.log(x)); ``` ### CDN For CDN, you can use [unpkg](https://unpkg.com/): https://unpkg.com/rxjs/bundles/rxjs.umd.min.js The global namespace for rxjs is `rxjs`: ```js const { range } = rxjs; const { map, filter } = rxjs.operators; range(1, 200) .pipe( filter(x => x % 2 === 1), map(x => x + x) ) .subscribe(x => console.log(x)); ``` ## Goals - Smaller overall bundle sizes - Provide better performance than preceding versions of RxJS - Model/follow the [Observable Spec Proposal](https://github.com/zenparsing/es-observable) - Provide a more modular file structure in a variety of formats - Provide more debuggable call stacks than preceding versions of RxJS ## Building/Testing - `npm run build_all` - builds everything - `npm test` - runs tests - `npm run test_no_cache` - run tests with `ts-node` set to false ## Performance Tests Run `npm run build_perf` or `npm run perf` to run the performance tests with `protractor`. Run `npm run perf_micro [operator]` to run a micro performance test benchmarking a single operator. ## Adding documentation We appreciate all contributions to the documentation of any type. All of the information needed to get the docs app up and running locally, as well as how to contribute, can be found in the [documentation directory](./docs_app). ## Generating PNG marble diagrams The script `npm run tests2png` requires some native packages installed locally: `imagemagick`, `graphicsmagick`, and `ghostscript`. For Mac OS X with [Homebrew](http://brew.sh/): - `brew install imagemagick` - `brew install graphicsmagick` - `brew install ghostscript` - You may need to install the Ghostscript fonts manually: - Download the tarball from the [gs-fonts project](https://sourceforge.net/projects/gs-fonts) - `mkdir -p /usr/local/share/ghostscript && tar zxvf /path/to/ghostscript-fonts.tar.gz -C /usr/local/share/ghostscript` For Debian Linux: - `sudo add-apt-repository ppa:dhor/myway` - `apt-get install imagemagick` - `apt-get install graphicsmagick` - `apt-get install ghostscript` For Windows and other Operating Systems, check the download instructions here: - http://imagemagick.org - http://www.graphicsmagick.org - http://www.ghostscript.com/ Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor). This package exports the standard `inherits` from the node.js `util` module in a node environment, but also provides an alternative browser-friendly implementation through the [browser field](https://gist.github.com/shtylman/4339901). The alternative implementation is a literal copy of the standard one, located in a standalone module to avoid requiring `util`. It also has a shim for old browsers with no `Object.create` support. While making sure you are using the standard `inherits` implementation in a node.js environment, it allows bundlers such as [browserify](https://github.com/substack/node-browserify) to avoid including the full `util` package in your client code if all you need is the `inherits` function. This is worthwhile because the browser shim for the `util` package is large and `inherits` is often the only function you need from it.
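For illustration, a minimal constructor-inheritance sketch (the `Animal`/`Dog` names are invented for this example); the call works exactly like Node's `util.inherits`, with `super_` pointing at the superclass:

```js
var inherits = require('inherits')

function Animal (name) {
  this.name = name
}
Animal.prototype.speak = function () {
  return this.name + ' makes a sound'
}

function Dog (name) {
  // Dog.super_ is set by inherits() and points at Animal.
  Dog.super_.call(this, name)
}
// Call inherits() before adding methods to Dog.prototype.
inherits(Dog, Animal)
Dog.prototype.speak = function () {
  return this.name + ' barks'
}

console.log(new Dog('Rex').speak())           // 'Rex barks'
console.log(new Dog('Rex') instanceof Animal) // true
```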
It's recommended to use this package instead of `require('util').inherits` for any code that has chances to be used not only in node.js but in browser too. ## usage ```js var inherits = require('inherits'); // then use exactly as the standard one ``` ## note on version ~1.0 Version ~1.0 had completely different motivation and is not compatible neither with 2.0 nor with standard node.js `inherits`. If you are using version ~1.0 and planning to switch to ~2.0, be careful: * new version uses `super_` instead of `super` for referencing superclass * new version overwrites current prototype while old one preserves any existing fields on it JS-YAML - YAML 1.2 parser / writer for JavaScript ================================================= [![Build Status](https://travis-ci.org/nodeca/js-yaml.svg?branch=master)](https://travis-ci.org/nodeca/js-yaml) [![NPM version](https://img.shields.io/npm/v/js-yaml.svg)](https://www.npmjs.org/package/js-yaml) __[Online Demo](http://nodeca.github.com/js-yaml/)__ This is an implementation of [YAML](http://yaml.org/), a human-friendly data serialization language. Started as [PyYAML](http://pyyaml.org/) port, it was completely rewritten from scratch. Now it's very fast, and supports 1.2 spec. Installation ------------ ### YAML module for node.js ``` npm install js-yaml ``` ### CLI executable If you want to inspect your YAML files from CLI, install js-yaml globally: ``` npm install -g js-yaml ``` #### Usage ``` usage: js-yaml [-h] [-v] [-c] [-t] file Positional arguments: file File with YAML document(s) Optional arguments: -h, --help Show this help message and exit. -v, --version Show program's version number and exit. -c, --compact Display errors in compact mode -t, --trace Show stack trace on error ``` ### Bundled YAML library for browsers ``` html <!-- esprima required only for !!js/function --> <script src="esprima.js"></script> <script src="js-yaml.min.js"></script> <script type="text/javascript"> var doc = jsyaml.load('greeting: hello\nname: world'); </script> ``` Browser support was done mostly for the online demo. If you find any errors - feel free to send pull requests with fixes. Also note, that IE and other old browsers needs [es5-shims](https://github.com/kriskowal/es5-shim) to operate. Notes: 1. We have no resources to support browserified version. Don't expect it to be well tested. Don't expect fast fixes if something goes wrong there. 2. `!!js/function` in browser bundle will not work by default. If you really need it - load `esprima` parser first (via amd or directly). 3. `!!bin` in browser will return `Array`, because browsers do not support node.js `Buffer` and adding Buffer shims is completely useless on practice. API --- Here we cover the most 'useful' methods. If you need advanced details (creating your own tags), see [wiki](https://github.com/nodeca/js-yaml/wiki) and [examples](https://github.com/nodeca/js-yaml/tree/master/examples) for more info. ``` javascript const yaml = require('js-yaml'); const fs = require('fs'); // Get document, or throw exception on error try { const doc = yaml.safeLoad(fs.readFileSync('/home/ixti/example.yml', 'utf8')); console.log(doc); } catch (e) { console.log(e); } ``` ### safeLoad (string [ , options ]) **Recommended loading way.** Parses `string` as single YAML document. Returns either a plain object, a string or `undefined`, or throws `YAMLException` on error. By default, does not support regexps, functions and undefined. This method is safe for untrusted data. 
options: - `filename` _(default: null)_ - string to be used as a file path in error/warning messages. - `onWarning` _(default: null)_ - function to call on warning messages. Loader will call this function with an instance of `YAMLException` for each warning. - `schema` _(default: `DEFAULT_SAFE_SCHEMA`)_ - specifies a schema to use. - `FAILSAFE_SCHEMA` - only strings, arrays and plain objects: http://www.yaml.org/spec/1.2/spec.html#id2802346 - `JSON_SCHEMA` - all JSON-supported types: http://www.yaml.org/spec/1.2/spec.html#id2803231 - `CORE_SCHEMA` - same as `JSON_SCHEMA`: http://www.yaml.org/spec/1.2/spec.html#id2804923 - `DEFAULT_SAFE_SCHEMA` - all supported YAML types, without unsafe ones (`!!js/undefined`, `!!js/regexp` and `!!js/function`): http://yaml.org/type/ - `DEFAULT_FULL_SCHEMA` - all supported YAML types. - `json` _(default: false)_ - compatibility with JSON.parse behaviour. If true, then duplicate keys in a mapping will override values rather than throwing an error. NOTE: This function **does not** understand multi-document sources, it throws exception on those. NOTE: JS-YAML **does not** support schema-specific tag resolution restrictions. So, the JSON schema is not as strictly defined in the YAML specification. It allows numbers in any notation, use `Null` and `NULL` as `null`, etc. The core schema also has no such restrictions. It allows binary notation for integers. ### load (string [ , options ]) **Use with care with untrusted sources**. The same as `safeLoad()` but uses `DEFAULT_FULL_SCHEMA` by default - adds some JavaScript-specific types: `!!js/function`, `!!js/regexp` and `!!js/undefined`. For untrusted sources, you must additionally validate object structure to avoid injections: ``` javascript const untrusted_code = '"toString": !<tag:yaml.org,2002:js/function> "function (){very_evil_thing();}"'; // I'm just converting that string, what could possibly go wrong? require('js-yaml').load(untrusted_code) + '' ``` ### safeLoadAll (string [, iterator] [, options ]) Same as `safeLoad()`, but understands multi-document sources. Applies `iterator` to each document if specified, or returns array of documents. ``` javascript const yaml = require('js-yaml'); yaml.safeLoadAll(data, function (doc) { console.log(doc); }); ``` ### loadAll (string [, iterator] [ , options ]) Same as `safeLoadAll()` but uses `DEFAULT_FULL_SCHEMA` by default. ### safeDump (object [ , options ]) Serializes `object` as a YAML document. Uses `DEFAULT_SAFE_SCHEMA`, so it will throw an exception if you try to dump regexps or functions. However, you can disable exceptions by setting the `skipInvalid` option to `true`. options: - `indent` _(default: 2)_ - indentation width to use (in spaces). - `noArrayIndent` _(default: false)_ - when true, will not add an indentation level to array elements - `skipInvalid` _(default: false)_ - do not throw on invalid types (like function in the safe schema) and skip pairs and single values with such types. - `flowLevel` (default: -1) - specifies level of nesting, when to switch from block to flow style for collections. -1 means block style everwhere - `styles` - "tag" => "style" map. Each tag may have own set of styles. - `schema` _(default: `DEFAULT_SAFE_SCHEMA`)_ specifies a schema to use. - `sortKeys` _(default: `false`)_ - if `true`, sort keys when dumping YAML. If a function, use the function to sort the keys. - `lineWidth` _(default: `80`)_ - set max line width. 
- `noRefs` _(default: `false`)_ - if `true`, don't convert duplicate objects into references
- `noCompatMode` _(default: `false`)_ - if `true`, don't try to be compatible with older yaml versions. Currently: don't quote "yes", "no" and so on, as required for YAML 1.1
- `condenseFlow` _(default: `false`)_ - if `true`, flow sequences will be condensed, omitting the space between `a, b` (e.g. `'[a,b]'`), and omitting the space between `key: value` while quoting the key (e.g. `'{"a":b}'`). Can be useful when using YAML for pretty URL query params, as spaces are %-encoded.

The following table shows the styles available (e.g. "canonical", "binary"...) for each tag (e.g. !!null, !!int ...). YAML output is shown on the right side after `=>` (default setting) or `->`:

``` none
!!null
  "canonical"   -> "~"
  "lowercase"   => "null"
  "uppercase"   -> "NULL"
  "camelcase"   -> "Null"

!!int
  "binary"      -> "0b1", "0b101010", "0b1110001111010"
  "octal"       -> "01", "052", "016172"
  "decimal"     => "1", "42", "7290"
  "hexadecimal" -> "0x1", "0x2A", "0x1C7A"

!!bool
  "lowercase"   => "true", "false"
  "uppercase"   -> "TRUE", "FALSE"
  "camelcase"   -> "True", "False"

!!float
  "lowercase"   => ".nan", '.inf'
  "uppercase"   -> ".NAN", '.INF'
  "camelcase"   -> ".NaN", '.Inf'
```

Example:

``` javascript
safeDump (object, {
  'styles': {
    '!!null': 'canonical' // dump null as ~
  },
  'sortKeys': true        // sort object keys
});
```

### dump (object [ , options ])

Same as `safeDump()` but without limits (uses `DEFAULT_FULL_SCHEMA` by default).

Supported YAML types
--------------------

The list of standard YAML tags and corresponding JavaScript types. See also [YAML tag discussion](http://pyyaml.org/wiki/YAMLTagDiscussion) and [YAML types repository](http://yaml.org/type/).

```
!!null ''                   # null
!!bool 'yes'                # bool
!!int '3...'                # number
!!float '3.14...'           # number
!!binary '...base64...'     # buffer
!!timestamp 'YYYY-...'      # date
!!omap [ ... ]              # array of key-value pairs
!!pairs [ ... ]             # array of array pairs
!!set { ... }               # array of objects with given keys and null values
!!str '...'                 # string
!!seq [ ... ]               # array
!!map { ... }               # object
```

**JavaScript-specific tags**

```
!!js/regexp /pattern/gim            # RegExp
!!js/undefined ''                   # Undefined
!!js/function 'function () {...}'   # Function
```

Caveats
-------

Note that JS-YAML lets you use arrays or objects as keys. JavaScript itself does not allow objects or arrays as keys, so they are stringified (by calling their `toString()` method) at the moment they are added.

``` yaml
---
? [ foo, bar ]
: - baz
? { foo: bar }
: - baz
  - baz
```

``` javascript
{ "foo,bar": ["baz"], "[object Object]": ["baz", "baz"] }
```

Also, reading of properties on implicit block mapping keys is not supported yet. So the following YAML document cannot be loaded.

``` yaml
&anchor foo:
  foo: bar
  *anchor: duplicate key
  baz: bat
  *anchor: duplicate key
```

js-yaml for enterprise
----------------------

Available as part of the Tidelift Subscription.

The maintainers of js-yaml and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use.
[Learn more.](https://tidelift.com/subscription/pkg/npm-js-yaml?utm_source=npm-js-yaml&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) # cliui ![ci](https://github.com/yargs/cliui/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/cliui) easily create complex multi-column command-line-interfaces. ## Example ```js const ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` ## Deno/ESM Support As of `v7` `cliui` supports [Deno](https://github.com/denoland/deno) and [ESM](https://nodejs.org/api/esm.html#esm_ecmascript_modules): ```typescript import cliui from "https://deno.land/x/cliui/deno.ts"; const ui = cliui({}) ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div({ text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. 
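A short sketch combining `span()` and `resetOutput()` from the method list above (the text and widths are illustrative only):

```js
const ui = require('cliui')({ width: 40, wrap: true })

// span() appends the next row without starting a new line
ui.span({ text: 'Commands:', width: 12 })
ui.div({ text: 'build, test, deploy', align: 'right' })
console.log(ui.toString())

// resetOutput() clears rows but keeps the width/wrap settings
ui.resetOutput()
ui.div({ text: 'fresh layout', border: true })
console.log(ui.toString())
```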
# signal-exit [![Build Status](https://travis-ci.org/tapjs/signal-exit.png)](https://travis-ci.org/tapjs/signal-exit) [![Coverage](https://coveralls.io/repos/tapjs/signal-exit/badge.svg?branch=master)](https://coveralls.io/r/tapjs/signal-exit?branch=master) [![NPM version](https://img.shields.io/npm/v/signal-exit.svg)](https://www.npmjs.com/package/signal-exit) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) When you want to fire an event no matter how a process exits: * reaching the end of execution. * explicitly having `process.exit(code)` called. * having `process.kill(pid, sig)` called. * receiving a fatal signal from outside the process Use `signal-exit`. ```js var onExit = require('signal-exit') onExit(function (code, signal) { console.log('process exited!') }) ``` ## API `var remove = onExit(function (code, signal) {}, options)` The return value of the function is a function that will remove the handler. Note that the function *only* fires for signals if the signal would cause the process to exit. That is, there are no other listeners, and it is a fatal signal. ## Options * `alwaysLast`: Run this handler after any other signal or exit handlers. This causes `process.emit` to be monkeypatched. # ansi-align > align-text with ANSI support for CLIs [![Build Status](https://travis-ci.org/nexdrew/ansi-align.svg?branch=master)](https://travis-ci.org/nexdrew/ansi-align) [![Coverage Status](https://coveralls.io/repos/github/nexdrew/ansi-align/badge.svg?branch=master)](https://coveralls.io/github/nexdrew/ansi-align?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) [![Greenkeeper badge](https://badges.greenkeeper.io/nexdrew/ansi-align.svg)](https://greenkeeper.io/) Easily center- or right- align a block of text, carefully ignoring ANSI escape codes. E.g. turn this: <img width="281" alt="ansi text block no alignment :(" src="https://cloud.githubusercontent.com/assets/1929625/14937509/7c3076dc-0ed7-11e6-8c16-4f6a4ccc8346.png"> Into this: <img width="278" alt="ansi text block center aligned!" src="https://cloud.githubusercontent.com/assets/1929625/14937510/7c3ca0b0-0ed7-11e6-8f0a-541ca39b6e0a.png"> ## Install ```sh npm install --save ansi-align ``` ```js var ansiAlign = require('ansi-align') ``` ## API ### `ansiAlign(text, [opts])` Align the given text per the line with the greatest [`string-width`](https://github.com/sindresorhus/string-width), returning a new string (or array). #### Arguments - `text`: required, string or array The text to align. If a string is given, it will be split using either the `opts.split` value or `'\n'` by default. If an array is given, a different array of modified strings will be returned. - `opts`: optional, object Options to change behavior, see below. #### Options - `opts.align`: string, default `'center'` The alignment mode. Use `'center'` for center-alignment, `'right'` for right-alignment, or `'left'` for left-alignment. Note that the given `text` is assumed to be left-aligned already, so specifying `align: 'left'` just returns the `text` as is (no-op). - `opts.split`: string or RegExp, default `'\n'` The separator to use when splitting the text. Only used if text is given as a string. - `opts.pad`: string, default `' '` The value used to left-pad (prepend to) lines of lesser width. 
Will be repeated as necessary to adjust alignment to the line with the greatest width. ### `ansiAlign.center(text)` Alias for `ansiAlign(text, { align: 'center' })`. ### `ansiAlign.right(text)` Alias for `ansiAlign(text, { align: 'right' })`. ### `ansiAlign.left(text)` Alias for `ansiAlign(text, { align: 'left' })`, which is a no-op. ## Similar Packages - [`center-align`](https://github.com/jonschlinkert/center-align): Very close to this package, except it doesn't support ANSI codes. - [`left-pad`](https://github.com/camwest/left-pad): Great for left-padding but does not support center alignment or ANSI codes. - Pretty much anything by the [chalk](https://github.com/chalk) team ## License ISC © Contributors ## Follow Redirects Drop-in replacement for Nodes `http` and `https` that automatically follows redirects. [![npm version](https://img.shields.io/npm/v/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) [![Build Status](https://travis-ci.org/follow-redirects/follow-redirects.svg?branch=master)](https://travis-ci.org/follow-redirects/follow-redirects) [![Coverage Status](https://coveralls.io/repos/follow-redirects/follow-redirects/badge.svg?branch=master)](https://coveralls.io/r/follow-redirects/follow-redirects?branch=master) [![Dependency Status](https://david-dm.org/follow-redirects/follow-redirects.svg)](https://david-dm.org/follow-redirects/follow-redirects) [![npm downloads](https://img.shields.io/npm/dm/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) `follow-redirects` provides [request](https://nodejs.org/api/http.html#http_http_request_options_callback) and [get](https://nodejs.org/api/http.html#http_http_get_options_callback) methods that behave identically to those found on the native [http](https://nodejs.org/api/http.html#http_http_request_options_callback) and [https](https://nodejs.org/api/https.html#https_https_request_options_callback) modules, with the exception that they will seamlessly follow redirects. ```javascript var http = require('follow-redirects').http; var https = require('follow-redirects').https; http.get('http://bit.ly/900913', function (response) { response.on('data', function (chunk) { console.log(chunk); }); }).on('error', function (err) { console.error(err); }); ``` You can inspect the final redirected URL through the `responseUrl` property on the `response`. If no redirection happened, `responseUrl` is the original request URL. ```javascript https.request({ host: 'bitly.com', path: '/UHfDGO', }, function (response) { console.log(response.responseUrl); // 'http://duckduckgo.com/robots.txt' }); ``` ## Options ### Global options Global options are set directly on the `follow-redirects` module: ```javascript var followRedirects = require('follow-redirects'); followRedirects.maxRedirects = 10; followRedirects.maxBodyLength = 20 * 1024 * 1024; // 20 MB ``` The following global options are supported: - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. 
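As a hedged illustration of the behaviour described above (the URL is a placeholder), exceeding the global `maxRedirects` cap surfaces as an `'error'` event on the request:

```javascript
var followRedirects = require('follow-redirects');
var https = followRedirects.https;

followRedirects.maxRedirects = 5; // global cap; exceeding it emits an error

https.get('https://example.com/deeply-redirected', function (response) {
  console.log('Landed on:', response.responseUrl);
}).on('error', function (err) {
  // fired when the redirect chain exceeds maxRedirects (among other failures)
  console.error(err.message);
});
```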
### Per-request options Per-request options are set by passing an `options` object: ```javascript var url = require('url'); var followRedirects = require('follow-redirects'); var options = url.parse('http://bit.ly/900913'); options.maxRedirects = 10; http.request(options); ``` In addition to the [standard HTTP](https://nodejs.org/api/http.html#http_http_request_options_callback) and [HTTPS options](https://nodejs.org/api/https.html#https_https_request_options_callback), the following per-request options are supported: - `followRedirects` (default: `true`) – whether redirects should be followed. - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. - `agents` (default: `undefined`) – sets the `agent` option per protocol, since HTTP and HTTPS use different agents. Example value: `{ http: new http.Agent(), https: new https.Agent() }` - `trackRedirects` (default: `false`) – whether to store the redirected response details into the `redirects` array on the response object. ### Advanced usage By default, `follow-redirects` will use the Node.js default implementations of [`http`](https://nodejs.org/api/http.html) and [`https`](https://nodejs.org/api/https.html). To enable features such as caching and/or intermediate request tracking, you might instead want to wrap `follow-redirects` around custom protocol implementations: ```javascript var followRedirects = require('follow-redirects').wrap({ http: require('your-custom-http'), https: require('your-custom-https'), }); ``` Such custom protocols only need an implementation of the `request` method. ## Browserify Usage Due to the way `XMLHttpRequest` works, the `browserify` versions of `http` and `https` already follow redirects. If you are *only* targeting the browser, then this library has little value for you. If you want to write cross platform code for node and the browser, `follow-redirects` provides a great solution for making the native node modules behave the same as they do in browserified builds in the browser. To avoid bundling unnecessary code you should tell browserify to swap out `follow-redirects` with the standard modules when bundling. To make this easier, you need to change how you require the modules: ```javascript var http = require('follow-redirects/http'); var https = require('follow-redirects/https'); ``` You can then replace `follow-redirects` in your browserify configuration like so: ```javascript "browser": { "follow-redirects/http" : "http", "follow-redirects/https" : "https" } ``` The `browserify-http` module has not kept pace with node development, and no long behaves identically to the native module when running in the browser. If you are experiencing problems, you may want to check out [browserify-http-2](https://www.npmjs.com/package/http-browserify-2). It is more actively maintained and attempts to address a few of the shortcomings of `browserify-http`. In that case, your browserify config should look something like this: ```javascript "browser": { "follow-redirects/http" : "browserify-http-2/http", "follow-redirects/https" : "browserify-http-2/https" } ``` ## Contributing Pull Requests are always welcome. Please [file an issue](https://github.com/follow-redirects/follow-redirects/issues) detailing your proposal before you invest your valuable time. Additional features and bug fixes should be accompanied by tests. 
You can run the test suite locally with a simple `npm test` command.

## Debug Logging

`follow-redirects` uses the excellent [debug](https://www.npmjs.com/package/debug) for logging. To turn on logging, set the environment variable `DEBUG=follow-redirects` for debug output from just this module. When running the test suite, it is sometimes advantageous to set `DEBUG=*` to see output from the express server as well.

## Authors

- Olivier Lalonde (olalonde@gmail.com)
- James Talmage (james@talmage.io)
- [Ruben Verborgh](https://ruben.verborgh.org/)

## License

[MIT License](https://github.com/follow-redirects/follow-redirects/blob/master/LICENSE)

# string_decoder

***Node-core v8.9.4 string_decoder for userland***

[![NPM](https://nodei.co/npm/string_decoder.png?downloads=true&downloadRank=true)](https://nodei.co/npm/string_decoder/)
[![NPM](https://nodei.co/npm-dl/string_decoder.png?&months=6&height=3)](https://nodei.co/npm/string_decoder/)

```bash
npm install --save string_decoder
```

***Node-core string_decoder for userland***

This package is a mirror of the string_decoder implementation in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.9.4/docs/api/).

As of version 1.0.0 **string_decoder** uses semantic versioning.

## Previous versions

Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10.

## Update

The *build/* directory contains a build script that will scrape the source from the [nodejs/node](https://github.com/nodejs/node) repo given a specific Node version.

## Streams Working Group

`string_decoder` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js.

The responsibilities of the Streams Working Group include:

* Addressing stream issues on the Node.js issue tracker.
* Authoring and editing stream documentation within the Node.js project.
* Reviewing changes to stream subclasses within the Node.js project.
* Redirecting changes to streams from the Node.js project to this project.
* Assisting in the implementation of stream providers within Node.js.
* Recommending versions of `readable-stream` to be included in Node.js.
* Messaging about the future of streams to give the community advance notice of changes.

See [readable-stream](https://github.com/nodejs/readable-stream) for more details.

# near-ledger-js

A JavaScript library for communication with [Ledger](https://www.ledger.com/) Hardware Wallet.

# Example usage

```javascript
import { createClient, getSupportedTransport } from "near-ledger-js";

const transport = await getSupportedTransport();
transport.setScrambleKey("NEAR");
transport.on('disconnect', () => {...});
```

In an onClick handler:

```javascript
const client = await createClient(transport);
// If no error thrown, ledger is available. NOTE: U2F transport will still get here even if device is not present
```

To see debug logging for `getSupportedTransport()`, import `setDebugLogging()` and call `setDebugLogging(true)` before using the package.

# How to run demo project

1. `yarn` to install dependencies
2. `yarn start` to start local server with Parcel
3. Open https://localhost:1234 in your browser
4. Open browser console
5. Try examples shown on the page

# License

This repository is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See [LICENSE](LICENSE) and [LICENSE-APACHE](LICENSE-APACHE) for details.
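Tying the near-ledger-js snippets above together, a hypothetical end-to-end flow; only `getSupportedTransport`, `setScrambleKey`, `createClient`, and `setDebugLogging` come from this README, the rest is placeholder wiring:

```javascript
import { createClient, getSupportedTransport, setDebugLogging } from "near-ledger-js";

setDebugLogging(true); // enable transport debug logs before anything else

async function connectLedger() {
  const transport = await getSupportedTransport();
  transport.setScrambleKey("NEAR");
  transport.on('disconnect', () => console.log('Ledger disconnected'));

  // Call this from a user gesture (e.g. an onClick handler); with the U2F
  // transport this may succeed even if no device is present.
  const client = await createClient(transport);
  return client;
}
```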
# yargs-parser [![Build Status](https://travis-ci.org/yargs/yargs-parser.svg)](https://travis-ci.org/yargs/yargs-parser) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. <img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js var argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```sh node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js var argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```sh { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js var parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## API ### require('yargs-parser')(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. 
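A small sketch exercising several of the `opts` hints listed above (the argument values and printed output are illustrative):

```js
var parse = require('yargs-parser')

var argv = parse(['-u', 'alice', '--no-color', '-p', '8080', '-vvv'], {
  alias:   { user: ['u'], port: ['p'] },
  string:  ['user'],     // keep the value a string even if it looks numeric
  number:  ['port'],
  count:   ['v'],        // -vvv => { v: 3 }
  default: { color: true }
})

console.log(argv)
// e.g. { _: [], user: 'alice', u: 'alice', color: false, port: 8080, p: 8080, v: 3 }
```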
### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```sh node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```sh node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```sh node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```sh node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```sh node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```sh node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```sh node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```sh node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```sh node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```sh node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. 
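As with the flags above and below, several configuration settings can be combined in a single `configuration` object; a small illustrative sketch:

```js
var parser = require('yargs-parser')

var parsed = parser(['--foo-bar', '--port=8080'], {
  configuration: {
    'camel-case-expansion': false,  // keep only 'foo-bar', no fooBar alias
    'parse-numbers': false          // keep '8080' as a string
  }
})

console.log(parsed)
// e.g. { _: [], 'foo-bar': true, port: '8080' }
```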
### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```sh node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```sh node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```sh node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```sh node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```sh node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```sh node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```sh node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```sh node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```sh node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```sh node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. 
_If disabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC ![](cow.png) Moo! ==== Moo is a highly-optimised tokenizer/lexer generator. Use it to tokenize your strings, before parsing 'em with a parser like [nearley](https://github.com/hardmath123/nearley) or whatever else you're into. * [Fast](#is-it-fast) * [Convenient](#usage) * uses [Regular Expressions](#on-regular-expressions) * tracks [Line Numbers](#line-numbers) * handles [Keywords](#keywords) * supports [States](#states) * custom [Errors](#errors) * is even [Iterable](#iteration) * has no dependencies * 4KB minified + gzipped * Moo! Is it fast? ----------- Yup! Flying-cows-and-singed-steak fast. Moo is the fastest JS tokenizer around. It's **~2–10x** faster than most other tokenizers; it's a **couple orders of magnitude** faster than some of the slower ones. Define your tokens **using regular expressions**. Moo will compile 'em down to a **single RegExp for performance**. It uses the new ES6 **sticky flag** where possible to make things faster; otherwise it falls back to an almost-as-efficient workaround. (For more than you ever wanted to know about this, read [adventures in the land of substrings and RegExps](http://mrale.ph/blog/2016/11/23/making-less-dart-faster.html).) You _might_ be able to go faster still by writing your lexer by hand rather than using RegExps, but that's icky. Oh, and it [avoids parsing RegExps by itself](https://hackernoon.com/the-madness-of-parsing-real-world-javascript-regexps-d9ee336df983#.2l8qu3l76). Because that would be horrible. Usage ----- First, you need to do the needful: `$ npm install moo`, or whatever will ship this code to your computer. Alternatively, grab the `moo.js` file by itself and slap it into your web page via a `<script>` tag; moo is completely standalone. Then you can start roasting your very own lexer/tokenizer: ```js const moo = require('moo') let lexer = moo.compile({ WS: /[ \t]+/, comment: /\/\/.*?$/, number: /0|[1-9][0-9]*/, string: /"(?:\\["\\]|[^\n"\\])*"/, lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], NL: { match: /\n/, lineBreaks: true }, }) ``` And now throw some text at it: ```js lexer.reset('while (10) cows\nmoo') lexer.next() // -> { type: 'keyword', value: 'while' } lexer.next() // -> { type: 'WS', value: ' ' } lexer.next() // -> { type: 'lparen', value: '(' } lexer.next() // -> { type: 'number', value: '10' } // ... ``` When you reach the end of Moo's internal buffer, next() will return `undefined`. You can always `reset()` it and feed it more data when that happens. On Regular Expressions ---------------------- RegExps are nifty for making tokenizers, but they can be a bit of a pain. Here are some things to be aware of: * You often want to use **non-greedy quantifiers**: e.g. `*?` instead of `*`. Otherwise your tokens will be longer than you expect: ```js let lexer = moo.compile({ string: /".*"/, // greedy quantifier * // ... 
}) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo" "bar' } ``` Better: ```js let lexer = moo.compile({ string: /".*?"/, // non-greedy quantifier *? // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo' } lexer.next() // -> { type: 'space', value: ' ' } lexer.next() // -> { type: 'string', value: 'bar' } ``` * The **order of your rules** matters. Earlier ones will take precedence. ```js moo.compile({ identifier: /[a-z0-9]+/, number: /[0-9]+/, }).reset('42').next() // -> { type: 'identifier', value: '42' } moo.compile({ number: /[0-9]+/, identifier: /[a-z0-9]+/, }).reset('42').next() // -> { type: 'number', value: '42' } ``` * Moo uses **multiline RegExps**. This has a few quirks: for example, the **dot `/./` doesn't include newlines**. Use `[^]` instead if you want to match newlines too. * Since an excluding character ranges like `/[^ ]/` (which matches anything but a space) _will_ include newlines, you have to be careful not to include them by accident! In particular, the whitespace metacharacter `\s` includes newlines. Line Numbers ------------ Moo tracks detailed information about the input for you. It will track line numbers, as long as you **apply the `lineBreaks: true` option to any rules which might contain newlines**. Moo will try to warn you if you forget to do this. Note that this is `false` by default, for performance reasons: counting the number of lines in a matched token has a small cost. For optimal performance, only match newlines inside a dedicated token: ```js newline: {match: '\n', lineBreaks: true}, ``` ### Token Info ### Token objects (returned from `next()`) have the following attributes: * **`type`**: the name of the group, as passed to compile. * **`text`**: the string that was matched. * **`value`**: the string that was matched, transformed by your `value` function (if any). * **`offset`**: the number of bytes from the start of the buffer where the match starts. * **`lineBreaks`**: the number of line breaks found in the match. (Always zero if this rule has `lineBreaks: false`.) * **`line`**: the line number of the beginning of the match, starting from 1. * **`col`**: the column where the match begins, starting from 1. ### Value vs. Text ### The `value` is the same as the `text`, unless you provide a [value transform](#transform). ```js const moo = require('moo') const lexer = moo.compile({ ws: /[ \t]+/, string: {match: /"(?:\\["\\]|[^\n"\\])*"/, value: s => s.slice(1, -1)}, }) lexer.reset('"test"') lexer.next() /* { value: 'test', text: '"test"', ... } */ ``` ### Reset ### Calling `reset()` on your lexer will empty its internal buffer, and set the line, column, and offset counts back to their initial value. If you don't want this, you can `save()` the state, and later pass it as the second argument to `reset()` to explicitly control the internal state of the lexer. ```js    lexer.reset('some line\n') let info = lexer.save() // -> { line: 10 } lexer.next() // -> { line: 10 } lexer.next() // -> { line: 11 } // ... lexer.reset('a different line\n', info) lexer.next() // -> { line: 10 } ``` Keywords -------- Moo makes it convenient to define literals. ```js moo.compile({ lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], }) ``` It'll automatically compile them into regular expressions, escaping them where necessary. **Keywords** should be written using the `keywords` transform. 
```js moo.compile({ IDEN: {match: /[a-zA-Z]+/, type: moo.keywords({ KW: ['while', 'if', 'else', 'moo', 'cows'], })}, SPACE: {match: /\s+/, lineBreaks: true}, }) ``` ### Why? ### You need to do this to ensure the **longest match** principle applies, even in edge cases. Imagine trying to parse the input `className` with the following rules: ```js keyword: ['class'], identifier: /[a-zA-Z]+/, ``` You'll get _two_ tokens — `['class', 'Name']` -- which is _not_ what you want! If you swap the order of the rules, you'll fix this example; but now you'll lex `class` wrong (as an `identifier`). The keywords helper checks matches against the list of keywords; if any of them match, it uses the type `'keyword'` instead of `'identifier'` (for this example). ### Keyword Types ### Keywords can also have **individual types**. ```js let lexer = moo.compile({ name: {match: /[a-zA-Z]+/, type: moo.keywords({ 'kw-class': 'class', 'kw-def': 'def', 'kw-if': 'if', })}, // ... }) lexer.reset('def foo') lexer.next() // -> { type: 'kw-def', value: 'def' } lexer.next() // space lexer.next() // -> { type: 'name', value: 'foo' } ``` You can use [itt](https://github.com/nathan/itt)'s iterator adapters to make constructing keyword objects easier: ```js itt(['class', 'def', 'if']) .map(k => ['kw-' + k, k]) .toObject() ``` States ------ Moo allows you to define multiple lexer **states**. Each state defines its own separate set of token rules. Your lexer will start off in the first state given to `moo.states({})`. Rules can be annotated with `next`, `push`, and `pop`, to change the current state after that token is matched. A "stack" of past states is kept, which is used by `push` and `pop`. * **`next: 'bar'`** moves to the state named `bar`. (The stack is not changed.) * **`push: 'bar'`** moves to the state named `bar`, and pushes the old state onto the stack. * **`pop: 1`** removes one state from the top of the stack, and moves to that state. (Only `1` is supported.) Only rules from the current state can be matched. You need to copy your rule into all the states you want it to be matched in. For example, to tokenize JS-style string interpolation such as `a${{c: d}}e`, you might use: ```js let lexer = moo.states({ main: { strstart: {match: '`', push: 'lit'}, ident: /\w+/, lbrace: {match: '{', push: 'main'}, rbrace: {match: '}', pop: true}, colon: ':', space: {match: /\s+/, lineBreaks: true}, }, lit: { interp: {match: '${', push: 'main'}, escape: /\\./, strend: {match: '`', pop: true}, const: {match: /(?:[^$`]|\$(?!\{))+/, lineBreaks: true}, }, }) // <= `a${{c: d}}e` // => strstart const interp lbrace ident colon space ident rbrace rbrace const strend ``` The `rbrace` rule is annotated with `pop`, so it moves from the `main` state into either `lit` or `main`, depending on the stack. Errors ------ If none of your rules match, Moo will throw an Error; since it doesn't know what else to do. If you prefer, you can have moo return an error token instead of throwing an exception. The error token will contain the whole of the rest of the buffer. ```js moo.compile({ // ... myError: moo.error, }) moo.reset('invalid') moo.next() // -> { type: 'myError', value: 'invalid', text: 'invalid', offset: 0, lineBreaks: 0, line: 1, col: 1 } moo.next() // -> undefined ``` You can have a token type that both matches tokens _and_ contains error values. ```js moo.compile({ // ... myError: {match: /[\$?`]/, error: true}, }) ``` ### Formatting errors ### If you want to throw an error from your parser, you might find `formatError` helpful. 
Call it with the offending token: ```js throw new Error(lexer.formatError(token, "invalid syntax")) ``` It returns a string with a pretty error message. ``` Error: invalid syntax at line 2 col 15: totally valid `syntax` ^ ``` Iteration --------- Iterators: we got 'em. ```js for (let here of lexer) { // here = { type: 'number', value: '123', ... } } ``` Create an array of tokens. ```js let tokens = Array.from(lexer); ``` Use [itt](https://github.com/nathan/itt)'s iteration tools with Moo. ```js for (let [here, next] = itt(lexer).lookahead()) { // pass a number if you need more tokens // enjoy! } ``` Transform --------- Moo doesn't allow capturing groups, but you can supply a transform function, `value()`, which will be called on the value before storing it in the Token object. ```js moo.compile({ STRING: [ {match: /"""[^]*?"""/, lineBreaks: true, value: x => x.slice(3, -3)}, {match: /"(?:\\["\\rn]|[^"\\])*?"/, lineBreaks: true, value: x => x.slice(1, -1)}, {match: /'(?:\\['\\rn]|[^'\\])*?'/, lineBreaks: true, value: x => x.slice(1, -1)}, ], // ... }) ``` Contributing ------------ Do check the [FAQ](https://github.com/tjvr/moo/issues?q=label%3Aquestion). Before submitting an issue, [remember...](https://github.com/tjvr/moo/blob/master/.github/CONTRIBUTING.md) argparse ======== [![Build Status](https://secure.travis-ci.org/nodeca/argparse.svg?branch=master)](http://travis-ci.org/nodeca/argparse) [![NPM version](https://img.shields.io/npm/v/argparse.svg)](https://www.npmjs.org/package/argparse) CLI arguments parser for node.js. Javascript port of python's [argparse](http://docs.python.org/dev/library/argparse.html) module (original version 3.2). That's a full port, except some very rare options, recorded in issue tracker. **NB. Difference with original.** - Method names changed to camelCase. See [generated docs](http://nodeca.github.com/argparse/). - Use `defaultValue` instead of `default`. - Use `argparse.Const.REMAINDER` instead of `argparse.REMAINDER`, and similarly for constant values `OPTIONAL`, `ZERO_OR_MORE`, and `ONE_OR_MORE` (aliases for `nargs` values `'?'`, `'*'`, `'+'`, respectively), and `SUPPRESS`. Example ======= test.js file: ```javascript #!/usr/bin/env node 'use strict'; var ArgumentParser = require('../lib/argparse').ArgumentParser; var parser = new ArgumentParser({ version: '0.0.1', addHelp:true, description: 'Argparse example' }); parser.addArgument( [ '-f', '--foo' ], { help: 'foo bar' } ); parser.addArgument( [ '-b', '--bar' ], { help: 'bar foo' } ); parser.addArgument( '--baz', { help: 'baz bar' } ); var args = parser.parseArgs(); console.dir(args); ``` Display help: ``` $ ./test.js -h usage: example.js [-h] [-v] [-f FOO] [-b BAR] [--baz BAZ] Argparse example Optional arguments: -h, --help Show this help message and exit. -v, --version Show program's version number and exit. -f FOO, --foo FOO foo bar -b BAR, --bar BAR bar foo --baz BAZ baz bar ``` Parse arguments: ``` $ ./test.js -f=3 --bar=4 --baz 5 { foo: '3', bar: '4', baz: '5' } ``` More [examples](https://github.com/nodeca/argparse/tree/master/examples). ArgumentParser objects ====================== ``` new ArgumentParser({parameters hash}); ``` Creates a new ArgumentParser object. **Supported params:** - ```description``` - Text to display before the argument help. - ```epilog``` - Text to display after the argument help. - ```addHelp``` - Add a -h/–help option to the parser. (default: true) - ```argumentDefault``` - Set the global default value for arguments. 
(default: null) - ```parents``` - A list of ArgumentParser objects whose arguments should also be included. - ```prefixChars``` - The set of characters that prefix optional arguments. (default: ‘-‘) - ```formatterClass``` - A class for customizing the help output. - ```prog``` - The name of the program (default: `path.basename(process.argv[1])`) - ```usage``` - The string describing the program usage (default: generated) - ```conflictHandler``` - Usually unnecessary, defines strategy for resolving conflicting optionals. **Not supported yet** - ```fromfilePrefixChars``` - The set of characters that prefix files from which additional arguments should be read. Details in [original ArgumentParser guide](http://docs.python.org/dev/library/argparse.html#argumentparser-objects) addArgument() method ==================== ``` ArgumentParser.addArgument(name or flag or [name] or [flags...], {options}) ``` Defines how a single command-line argument should be parsed. - ```name or flag or [name] or [flags...]``` - Either a positional name (e.g., `'foo'`), a single option (e.g., `'-f'` or `'--foo'`), an array of a single positional name (e.g., `['foo']`), or an array of options (e.g., `['-f', '--foo']`). Options: - ```action``` - The basic type of action to be taken when this argument is encountered at the command line. - ```nargs```- The number of command-line arguments that should be consumed. - ```constant``` - A constant value required by some action and nargs selections. - ```defaultValue``` - The value produced if the argument is absent from the command line. - ```type``` - The type to which the command-line argument should be converted. - ```choices``` - A container of the allowable values for the argument. - ```required``` - Whether or not the command-line option may be omitted (optionals only). - ```help``` - A brief description of what the argument does. - ```metavar``` - A name for the argument in usage messages. - ```dest``` - The name of the attribute to be added to the object returned by parseArgs(). Details in [original add_argument guide](http://docs.python.org/dev/library/argparse.html#the-add-argument-method) Action (some details) ================ ArgumentParser objects associate command-line arguments with actions. These actions can do just about anything with the command-line arguments associated with them, though most actions simply add an attribute to the object returned by parseArgs(). The action keyword argument specifies how the command-line arguments should be handled. The supported actions are: - ```store``` - Just stores the argument’s value. This is the default action. - ```storeConst``` - Stores value, specified by the const keyword argument. (Note that the const keyword argument defaults to the rather unhelpful None.) The 'storeConst' action is most commonly used with optional arguments, that specify some sort of flag. - ```storeTrue``` and ```storeFalse``` - Stores values True and False respectively. These are special cases of 'storeConst'. - ```append``` - Stores a list, and appends each argument value to the list. This is useful to allow an option to be specified multiple times. - ```appendConst``` - Stores a list, and appends value, specified by the const keyword argument to the list. (Note, that the const keyword argument defaults is None.) The 'appendConst' action is typically used when multiple arguments need to store constants to the same list. - ```count``` - Counts the number of times a keyword argument occurs. For example, used for increasing verbosity levels. 
- ```help``` - Prints a complete help message for all the options in the current parser and then exits. By default a help action is automatically added to the parser. See ArgumentParser for details of how the output is created. - ```version``` - Prints version information and exit. Expects a `version=` keyword argument in the addArgument() call. Details in [original action guide](http://docs.python.org/dev/library/argparse.html#action) Sub-commands ============ ArgumentParser.addSubparsers() Many programs split their functionality into a number of sub-commands, for example, the svn program can invoke sub-commands like `svn checkout`, `svn update`, and `svn commit`. Splitting up functionality this way can be a particularly good idea when a program performs several different functions which require different kinds of command-line arguments. `ArgumentParser` supports creation of such sub-commands with `addSubparsers()` method. The `addSubparsers()` method is normally called with no arguments and returns an special action object. This object has a single method `addParser()`, which takes a command name and any `ArgumentParser` constructor arguments, and returns an `ArgumentParser` object that can be modified as usual. Example: sub_commands.js ```javascript #!/usr/bin/env node 'use strict'; var ArgumentParser = require('../lib/argparse').ArgumentParser; var parser = new ArgumentParser({ version: '0.0.1', addHelp:true, description: 'Argparse examples: sub-commands', }); var subparsers = parser.addSubparsers({ title:'subcommands', dest:"subcommand_name" }); var bar = subparsers.addParser('c1', {addHelp:true}); bar.addArgument( [ '-f', '--foo' ], { action: 'store', help: 'foo3 bar3' } ); var bar = subparsers.addParser( 'c2', {aliases:['co'], addHelp:true} ); bar.addArgument( [ '-b', '--bar' ], { action: 'store', type: 'int', help: 'foo3 bar3' } ); var args = parser.parseArgs(); console.dir(args); ``` Details in [original sub-commands guide](http://docs.python.org/dev/library/argparse.html#sub-commands) Contributors ============ - [Eugene Shkuropat](https://github.com/shkuropat) - [Paul Jacobson](https://github.com/hpaulj) [others](https://github.com/nodeca/argparse/graphs/contributors) License ======= Copyright (c) 2012 [Vitaly Puzrin](https://github.com/puzrin). Released under the MIT license. See [LICENSE](https://github.com/nodeca/argparse/blob/master/LICENSE) for details. # yallist Yet Another Linked List There are many doubly-linked list implementations like it, but this one is mine. For when an array would be too big, and a Map can't be iterated in reverse order. 
[![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist) ## basic usage ```javascript var yallist = require('yallist') var myList = yallist.create([1, 2, 3]) myList.push('foo') myList.unshift('bar') // of course pop() and shift() are there, too console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo'] myList.forEach(function (k) { // walk the list head to tail }) myList.forEachReverse(function (k, index, list) { // walk the list tail to head }) var myDoubledList = myList.map(function (k) { return k + k }) // now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo'] // mapReverse is also a thing var myDoubledListReverse = myList.mapReverse(function (k) { return k + k }) // ['foofoo', 6, 4, 2, 'barbar'] var reduced = myList.reduce(function (set, entry) { set += entry return set }, 'start') console.log(reduced) // 'startfoo123bar' ``` ## api The whole API is considered "public". Functions with the same name as an Array method work more or less the same way. There's reverse versions of most things because that's the point. ### Yallist Default export, the class that holds and manages a list. Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list. The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots. ### Yallist.create(..) Alias for Yallist function. Some people like factories. #### yallist.head The first node in the list #### yallist.tail The last node in the list #### yallist.length The number of nodes in the list. (Change this at your peril. It is not magic like Array length.) #### yallist.toArray() Convert the list to an array. #### yallist.forEach(fn, [thisp]) Call a function on each item in the list. #### yallist.forEachReverse(fn, [thisp]) Call a function on each item in the list, in reverse order. #### yallist.get(n) Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array. #### yallist.getReverse(n) Get the data at position `n`, counting from the tail. #### yallist.map(fn, thisp) Create a new Yallist with the result of calling the function on each item. #### yallist.mapReverse(fn, thisp) Same as `map`, but in reverse. #### yallist.pop() Get the data from the list tail, and remove the tail from the list. #### yallist.push(item, ...) Insert one or more items to the tail of the list. #### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. #### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. 
#### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) # minizlib A fast zlib stream built on [minipass](http://npm.im/minipass) and Node.js's zlib binding. This module was created to serve the needs of [node-tar](http://npm.im/tar) and [minipass-fetch](http://npm.im/minipass-fetch). Brotli is supported in versions of node with a Brotli binding. ## How does this differ from the streams in `require('zlib')`? First, there are no convenience methods to compress or decompress a buffer. If you want those, use the built-in `zlib` module. This is only streams. That being said, Minipass streams to make it fairly easy to use as one-liners: `new zlib.Deflate().end(data).read()` will return the deflate compressed result. This module compresses and decompresses the data as fast as you feed it in. It is synchronous, and runs on the main process thread. Zlib and Brotli operations can be high CPU, but they're very fast, and doing it this way means much less bookkeeping and artificial deferral. Node's built in zlib streams are built on top of `stream.Transform`. They do the maximally safe thing with respect to consistent asynchrony, buffering, and backpressure. See [Minipass](http://npm.im/minipass) for more on the differences between Node.js core streams and Minipass streams, and the convenience methods provided by that class. ## Classes - Deflate - Inflate - Gzip - Gunzip - DeflateRaw - InflateRaw - Unzip - BrotliCompress (Node v10 and higher) - BrotliDecompress (Node v10 and higher) ## USAGE ```js const zlib = require('minizlib') const input = sourceOfCompressedData() const decode = new zlib.BrotliDecompress() const output = whereToWriteTheDecodedData() input.pipe(decode).pipe(output) ``` ## REPRODUCIBLE BUILDS To create reproducible gzip compressed files across different operating systems, set `portable: true` in the options. This causes minizlib to set the `OS` indicator in byte 9 of the extended gzip header to `0xFF` for 'unknown'. 
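A short sketch following the synchronous one-liner pattern described above, plus the `portable` option (the input string is a placeholder):

```js
const zlib = require('minizlib')

// Synchronous one-liner: compress a buffer and read the result immediately.
const compressed = new zlib.Gzip({ portable: true })  // portable: OS header byte set to 0xFF
  .end(Buffer.from('hello, reproducible world'))
  .read()

// Round-trip it back through Gunzip.
const restored = new zlib.Gunzip().end(compressed).read()
console.log(restored.toString()) // 'hello, reproducible world'
```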
<h1 align="center">Enquirer</h1> <p align="center"> <a href="https://npmjs.org/package/enquirer"> <img src="https://img.shields.io/npm/v/enquirer.svg" alt="version"> </a> <a href="https://travis-ci.org/enquirer/enquirer"> <img src="https://img.shields.io/travis/enquirer/enquirer.svg" alt="travis"> </a> <a href="https://npmjs.org/package/enquirer"> <img src="https://img.shields.io/npm/dm/enquirer.svg" alt="downloads"> </a> </p> <br> <br> <p align="center"> <b>Stylish CLI prompts that are user-friendly, intuitive and easy to create.</b><br> <sub>>_ Prompts should be more like conversations than inquisitions▌</sub> </p> <br> <p align="center"> <sub>(Example shows Enquirer's <a href="#survey-prompt">Survey Prompt</a>)</a></sub> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/survey-prompt.gif" alt="Enquirer Survey Prompt" width="750"><br> <sub>The terminal in all examples is <a href="https://hyper.is/">Hyper</a>, theme is <a href="https://github.com/jonschlinkert/hyper-monokai-extended">hyper-monokai-extended</a>.</sub><br><br> <a href="#built-in-prompts"><strong>See more prompt examples</strong></a> </p> <br> <br> Created by [jonschlinkert](https://github.com/jonschlinkert) and [doowb](https://github.com/doowb), Enquirer is fast, easy to use, and lightweight enough for small projects, while also being powerful and customizable enough for the most advanced use cases. * **Fast** - [Loads in ~4ms](#-performance) (that's about _3-4 times faster than a [single frame of a HD movie](http://www.endmemo.com/sconvert/framespersecondframespermillisecond.php) at 60fps_) * **Lightweight** - Only one dependency, the excellent [ansi-colors](https://github.com/doowb/ansi-colors) by [Brian Woodward](https://github.com/doowb). * **Easy to implement** - Uses promises and async/await and sensible defaults to make prompts easy to create and implement. * **Easy to use** - Thrill your users with a better experience! Navigating around input and choices is a breeze. You can even create [quizzes](examples/fun/countdown.js), or [record](examples/fun/record.js) and [playback](examples/fun/play.js) key bindings to aid with tutorials and videos. * **Intuitive** - Keypress combos are available to simplify usage. * **Flexible** - All prompts can be used standalone or chained together. * **Stylish** - Easily override semantic styles and symbols for any part of the prompt. * **Extensible** - Easily create and use custom prompts by extending Enquirer's built-in [prompts](#-prompts). * **Pluggable** - Add advanced features to Enquirer using plugins. * **Validation** - Optionally validate user input with any prompt. * **Well tested** - All prompts are well-tested, and tests are easy to create without having to use brittle, hacky solutions to spy on prompts or "inject" values. * **Examples** - There are numerous [examples](examples) available to help you get started. If you like Enquirer, please consider starring or tweeting about this project to show your support. Thanks! <br> <p align="center"> <b>>_ Ready to start making prompts your users will love? ▌</b><br> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/heartbeat.gif" alt="Enquirer Select Prompt with heartbeat example" width="750"> </p> <br> <br> ## ❯ Getting started Get started with Enquirer, the most powerful and easy-to-use Node.js library for creating interactive CLI prompts. 
* [Install](#-install) * [Usage](#-usage) * [Enquirer](#-enquirer) * [Prompts](#-prompts) - [Built-in Prompts](#-prompts) - [Custom Prompts](#-custom-prompts) * [Key Bindings](#-key-bindings) * [Options](#-options) * [Release History](#-release-history) * [Performance](#-performance) * [About](#-about) <br> ## ❯ Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install enquirer --save ``` Install with [yarn](https://yarnpkg.com/en/): ```sh $ yarn add enquirer ``` <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/npm-install.gif" alt="Install Enquirer with NPM" width="750"> </p> _(Requires Node.js 8.6 or higher. Please let us know if you need support for an earlier version by creating an [issue](../../issues/new).)_ <br> ## ❯ Usage ### Single prompt The easiest way to get started with enquirer is to pass a [question object](#prompt-options) to the `prompt` method. ```js const { prompt } = require('enquirer'); const response = await prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); // { username: 'jonschlinkert' } ``` _(Examples with `await` need to be run inside an `async` function)_ ### Multiple prompts Pass an array of ["question" objects](#prompt-options) to run a series of prompts. ```js const response = await prompt([ { type: 'input', name: 'name', message: 'What is your name?' }, { type: 'input', name: 'username', message: 'What is your username?' } ]); console.log(response); // { name: 'Edward Chan', username: 'edwardmchan' } ``` ### Different ways to run enquirer #### 1. By importing the specific `built-in prompt` ```js const { Confirm } = require('enquirer'); const prompt = new Confirm({ name: 'question', message: 'Did you like enquirer?' }); prompt.run() .then(answer => console.log('Answer:', answer)); ``` #### 2. By passing the options to `prompt` ```js const { prompt } = require('enquirer'); prompt({ type: 'confirm', name: 'question', message: 'Did you like enquirer?' }) .then(answer => console.log('Answer:', answer)); ``` **Jump to**: [Getting Started](#-getting-started) · [Prompts](#-prompts) · [Options](#-options) · [Key Bindings](#-key-bindings) <br> ## ❯ Enquirer **Enquirer is a prompt runner** Add Enquirer to your JavaScript project with following line of code. ```js const Enquirer = require('enquirer'); ``` The main export of this library is the `Enquirer` class, which has methods and features designed to simplify running prompts. ```js const { prompt } = require('enquirer'); const question = [ { type: 'input', name: 'username', message: 'What is your username?' }, { type: 'password', name: 'password', message: 'What is your password?' } ]; let answers = await prompt(question); console.log(answers); ``` **Prompts control how values are rendered and returned** Each individual prompt is a class with special features and functionality for rendering the types of values you want to show users in the terminal, and subsequently returning the types of values you need to use in your application. **How can I customize prompts?** Below in this guide you will find information about creating [custom prompts](#-custom-prompts). For now, we'll focus on how to customize an existing prompt. All of the individual [prompt classes](#built-in-prompts) in this library are exposed as static properties on Enquirer. This allows them to be used directly without using `enquirer.prompt()`. Use this approach if you need to modify a prompt instance, or listen for events on the prompt. 
**Example** ```js const { Input } = require('enquirer'); const prompt = new Input({ name: 'username', message: 'What is your username?' }); prompt.run() .then(answer => console.log('Username:', answer)) .catch(console.error); ``` ### [Enquirer](index.js#L20) Create an instance of `Enquirer`. **Params** * `options` **{Object}**: (optional) Options to use with all prompts. * `answers` **{Object}**: (optional) Answers object to initialize with. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); ``` ### [register()](index.js#L42) Register a custom prompt type. **Params** * `type` **{String}** * `fn` **{Function|Prompt}**: `Prompt` class, or a function that returns a `Prompt` class. * `returns` **{Object}**: Returns the Enquirer instance **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); enquirer.register('customType', require('./custom-prompt')); ``` ### [prompt()](index.js#L78) Prompt function that takes a "question" object or array of question objects, and returns an object with responses from the user. **Params** * `questions` **{Array|Object}**: Options objects for one or more prompts to run. * `returns` **{Promise}**: Promise that returns an "answers" object with the user's responses. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); const response = await enquirer.prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); ``` ### [use()](index.js#L160) Use an enquirer plugin. **Params** * `plugin` **{Function}**: Plugin function that takes an instance of Enquirer. * `returns` **{Object}**: Returns the Enquirer instance. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); const plugin = enquirer => { // do stuff to enquire instance }; enquirer.use(plugin); ``` ### [Enquirer#prompt](index.js#L210) Prompt function that takes a "question" object or array of question objects, and returns an object with responses from the user. **Params** * `questions` **{Array|Object}**: Options objects for one or more prompts to run. * `returns` **{Promise}**: Promise that returns an "answers" object with the user's responses. **Example** ```js const { prompt } = require('enquirer'); const response = await prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); ``` <br> ## ❯ Prompts This section is about Enquirer's prompts: what they look like, how they work, how to run them, available options, and how to customize the prompts or create your own prompt concept. **Getting started with Enquirer's prompts** * [Prompt](#prompt) - The base `Prompt` class used by other prompts - [Prompt Options](#prompt-options) * [Built-in prompts](#built-in-prompts) * [Prompt Types](#prompt-types) - The base `Prompt` class used by other prompts * [Custom prompts](#%E2%9D%AF-custom-prompts) - Enquirer 2.0 introduced the concept of prompt "types", with the goal of making custom prompts easier than ever to create and use. ### Prompt The base `Prompt` class is used to create all other prompts. ```js const { Prompt } = require('enquirer'); class MyCustomPrompt extends Prompt {} ``` See the documentation for [creating custom prompts](#-custom-prompts) to learn more about how this works. 
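As noted earlier, prompt classes can also be used directly when you want to listen for events on a prompt instance rather than just await its result. The sketch below assumes the instance behaves as an event emitter; the `'submit'` and `'cancel'` event names are assumptions for illustration, not taken from the API reference above:

```js
const { Input } = require('enquirer');

const prompt = new Input({
  name: 'username',
  message: 'What is your username?'
});

// Observe the prompt while it runs (event names assumed for illustration).
prompt.on('submit', value => console.log('submitted:', value));
prompt.on('cancel', () => console.log('prompt was cancelled'));

prompt.run().catch(console.error);
```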
#### Prompt Options Each prompt takes an options object (aka "question" object), that implements the following interface: ```js { // required type: string | function, name: string | function, message: string | function | async function, // optional skip: boolean | function | async function, initial: string | function | async function, format: function | async function, result: function | async function, validate: function | async function, } ``` Each property of the options object is described below: | **Property** | **Required?** | **Type** | **Description** | | ------------ | ------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `type` | yes | `string\|function` | Enquirer uses this value to determine the type of prompt to run, but it's optional when prompts are run directly. | | `name` | yes | `string\|function` | Used as the key for the answer on the returned values (answers) object. | | `message` | yes | `string\|function` | The message to display when the prompt is rendered in the terminal. | | `skip` | no | `boolean\|function` | If `true` it will not ask that prompt. | | `initial` | no | `string\|function` | The default value to return if the user does not supply a value. | | `format` | no | `function` | Function to format user input in the terminal. | | `result` | no | `function` | Function to format the final submitted value before it's returned. | | `validate` | no | `function` | Function to validate the submitted value before it's returned. This function may return a boolean or a string. If a string is returned it will be used as the validation error message. | **Example usage** ```js const { prompt } = require('enquirer'); const question = { type: 'input', name: 'username', message: 'What is your username?' }; prompt(question) .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` <br> ### Built-in prompts * [AutoComplete Prompt](#autocomplete-prompt) * [BasicAuth Prompt](#basicauth-prompt) * [Confirm Prompt](#confirm-prompt) * [Form Prompt](#form-prompt) * [Input Prompt](#input-prompt) * [Invisible Prompt](#invisible-prompt) * [List Prompt](#list-prompt) * [MultiSelect Prompt](#multiselect-prompt) * [Numeral Prompt](#numeral-prompt) * [Password Prompt](#password-prompt) * [Quiz Prompt](#quiz-prompt) * [Survey Prompt](#survey-prompt) * [Scale Prompt](#scale-prompt) * [Select Prompt](#select-prompt) * [Sort Prompt](#sort-prompt) * [Snippet Prompt](#snippet-prompt) * [Toggle Prompt](#toggle-prompt) ### AutoComplete Prompt Prompt that auto-completes as the user types, and returns the selected value as a string. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/autocomplete-prompt.gif" alt="Enquirer AutoComplete Prompt" width="750"> </p> **Example Usage** ```js const { AutoComplete } = require('enquirer'); const prompt = new AutoComplete({ name: 'flavor', message: 'Pick your favorite flavor', limit: 10, initial: 2, choices: [ 'Almond', 'Apple', 'Banana', 'Blackberry', 'Blueberry', 'Cherry', 'Chocolate', 'Cinnamon', 'Coconut', 'Cranberry', 'Grape', 'Nougat', 'Orange', 'Pear', 'Pineapple', 'Raspberry', 'Strawberry', 'Vanilla', 'Watermelon', 'Wintergreen' ] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **AutoComplete Options** | Option | Type | Default | Description | | ----------- | ---------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | | `highlight` | `function` | `dim` version of primary style | The color to use when "highlighting" characters in the list that match user input. | | `multiple` | `boolean` | `false` | Allow multiple choices to be selected. | | `suggest` | `function` | Greedy match, returns true if choice message contains input string. | Function that filters choices. Takes user input and a choices array, and returns a list of matching choices. | | `initial` | `number` | 0 | Preselected item in the list of choices. | | `footer` | `function` | None | Function that displays [footer text](https://github.com/enquirer/enquirer/blob/6c2819518a1e2ed284242a99a685655fbaabfa28/examples/autocomplete/option-footer.js#L10) | **Related prompts** * [Select](#select-prompt) * [MultiSelect](#multiselect-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### BasicAuth Prompt Prompt that asks for username and password to authenticate the user. The default implementation of `authenticate` function in `BasicAuth` prompt is to compare the username and password with the values supplied while running the prompt. The implementer is expected to override the `authenticate` function with a custom logic such as making an API request to a server to authenticate the username and password entered and expect a token back. <p align="center"> <img src="https://user-images.githubusercontent.com/13731210/61570485-7ffd9c00-aaaa-11e9-857a-d47dc7008284.gif" alt="Enquirer BasicAuth Prompt" width="750"> </p> **Example Usage** ```js const { BasicAuth } = require('enquirer'); const prompt = new BasicAuth({ name: 'password', message: 'Please enter your password', username: 'rajat-sr', password: '123', showPassword: true }); prompt .run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Confirm Prompt Prompt that returns `true` or `false`. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/confirm-prompt.gif" alt="Enquirer Confirm Prompt" width="750"> </p> **Example Usage** ```js const { Confirm } = require('enquirer'); const prompt = new Confirm({ name: 'question', message: 'Want to answer?' 
}); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Numeral](#numeral-prompt) * [Password](#password-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Form Prompt Prompt that allows the user to enter and submit multiple values on a single terminal screen. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/form-prompt.gif" alt="Enquirer Form Prompt" width="750"> </p> **Example Usage** ```js const { Form } = require('enquirer'); const prompt = new Form({ name: 'user', message: 'Please provide the following information:', choices: [ { name: 'firstname', message: 'First Name', initial: 'Jon' }, { name: 'lastname', message: 'Last Name', initial: 'Schlinkert' }, { name: 'username', message: 'GitHub username', initial: 'jonschlinkert' } ] }); prompt.run() .then(value => console.log('Answer:', value)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Input Prompt Prompt that takes user input and returns a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/input-prompt.gif" alt="Enquirer Input Prompt" width="750"> </p> **Example Usage** ```js const { Input } = require('enquirer'); const prompt = new Input({ message: 'What is your username?', initial: 'jonschlinkert' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.log); ``` You can use [data-store](https://github.com/jonschlinkert/data-store) to store [input history](https://github.com/enquirer/enquirer/blob/master/examples/input/option-history.js) that the user can cycle through (see [source](https://github.com/enquirer/enquirer/blob/8407dc3579123df5e6e20215078e33bb605b0c37/lib/prompts/input.js)). **Related prompts** * [Confirm](#confirm-prompt) * [Numeral](#numeral-prompt) * [Password](#password-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Invisible Prompt Prompt that takes user input, hides it from the terminal, and returns a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/invisible-prompt.gif" alt="Enquirer Invisible Prompt" width="750"> </p> **Example Usage** ```js const { Invisible } = require('enquirer'); const prompt = new Invisible({ name: 'secret', message: 'What is your secret?' }); prompt.run() .then(answer => console.log('Answer:', { secret: answer })) .catch(console.error); ``` **Related prompts** * [Password](#password-prompt) * [Input](#input-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### List Prompt Prompt that returns a list of values, created by splitting the user input. The default split character is `,` with optional trailing whitespace. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/list-prompt.gif" alt="Enquirer List Prompt" width="750"> </p> **Example Usage** ```js const { List } = require('enquirer'); const prompt = new List({ name: 'keywords', message: 'Type comma-separated keywords' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Sort](#sort-prompt) * [Select](#select-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### MultiSelect Prompt Prompt that allows the user to select multiple items from a list of options. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/multiselect-prompt.gif" alt="Enquirer MultiSelect Prompt" width="750"> </p> **Example Usage** ```js const { MultiSelect } = require('enquirer'); const prompt = new MultiSelect({ name: 'value', message: 'Pick your favorite colors', limit: 7, choices: [ { name: 'aqua', value: '#00ffff' }, { name: 'black', value: '#000000' }, { name: 'blue', value: '#0000ff' }, { name: 'fuchsia', value: '#ff00ff' }, { name: 'gray', value: '#808080' }, { name: 'green', value: '#008000' }, { name: 'lime', value: '#00ff00' }, { name: 'maroon', value: '#800000' }, { name: 'navy', value: '#000080' }, { name: 'olive', value: '#808000' }, { name: 'purple', value: '#800080' }, { name: 'red', value: '#ff0000' }, { name: 'silver', value: '#c0c0c0' }, { name: 'teal', value: '#008080' }, { name: 'white', value: '#ffffff' }, { name: 'yellow', value: '#ffff00' } ] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); // Answer: ['aqua', 'blue', 'fuchsia'] ``` **Example key-value pairs** Optionally, pass a `result` function and use the `.map` method to return an object of key-value pairs of the selected names and values: [example](./examples/multiselect/option-result.js) ```js const { MultiSelect } = require('enquirer'); const prompt = new MultiSelect({ name: 'value', message: 'Pick your favorite colors', limit: 7, choices: [ { name: 'aqua', value: '#00ffff' }, { name: 'black', value: '#000000' }, { name: 'blue', value: '#0000ff' }, { name: 'fuchsia', value: '#ff00ff' }, { name: 'gray', value: '#808080' }, { name: 'green', value: '#008000' }, { name: 'lime', value: '#00ff00' }, { name: 'maroon', value: '#800000' }, { name: 'navy', value: '#000080' }, { name: 'olive', value: '#808000' }, { name: 'purple', value: '#800080' }, { name: 'red', value: '#ff0000' }, { name: 'silver', value: '#c0c0c0' }, { name: 'teal', value: '#008080' }, { name: 'white', value: '#ffffff' }, { name: 'yellow', value: '#ffff00' } ], result(names) { return this.map(names); } }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); // Answer: { aqua: '#00ffff', blue: '#0000ff', fuchsia: '#ff00ff' } ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Numeral Prompt Prompt that takes a number as input. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/numeral-prompt.gif" alt="Enquirer Numeral Prompt" width="750"> </p> **Example Usage** ```js const { NumberPrompt } = require('enquirer'); const prompt = new NumberPrompt({ name: 'number', message: 'Please enter a number' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Confirm](#confirm-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Password Prompt Prompt that takes user input and masks it in the terminal. Also see the [invisible prompt](#invisible-prompt) <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/password-prompt.gif" alt="Enquirer Password Prompt" width="750"> </p> **Example Usage** ```js const { Password } = require('enquirer'); const prompt = new Password({ name: 'password', message: 'What is your password?' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Invisible](#invisible-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Quiz Prompt Prompt that allows the user to play multiple-choice quiz questions. <p align="center"> <img src="https://user-images.githubusercontent.com/13731210/61567561-891d4780-aa6f-11e9-9b09-3d504abd24ed.gif" alt="Enquirer Quiz Prompt" width="750"> </p> **Example Usage** ```js const { Quiz } = require('enquirer'); const prompt = new Quiz({ name: 'countries', message: 'How many countries are there in the world?', choices: ['165', '175', '185', '195', '205'], correctChoice: 3 }); prompt .run() .then(answer => { if (answer.correct) { console.log('Correct!'); } else { console.log(`Wrong! Correct answer is ${answer.correctAnswer}`); } }) .catch(console.error); ``` **Quiz Options** | Option | Type | Required | Description | | ----------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------------------ | | `choices` | `array` | Yes | The list of possible answers to the quiz question. | | `correctChoice`| `number` | Yes | Index of the correct choice from the `choices` array. | **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Survey Prompt Prompt that allows the user to provide feedback for a list of questions. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/survey-prompt.gif" alt="Enquirer Survey Prompt" width="750"> </p> **Example Usage** ```js const { Survey } = require('enquirer'); const prompt = new Survey({ name: 'experience', message: 'Please rate your experience', scale: [ { name: '1', message: 'Strongly Disagree' }, { name: '2', message: 'Disagree' }, { name: '3', message: 'Neutral' }, { name: '4', message: 'Agree' }, { name: '5', message: 'Strongly Agree' } ], margin: [0, 0, 2, 1], choices: [ { name: 'interface', message: 'The website has a friendly interface.' }, { name: 'navigation', message: 'The website is easy to navigate.' }, { name: 'images', message: 'The website usually has good images.' }, { name: 'upload', message: 'The website makes it easy to upload images.' }, { name: 'colors', message: 'The website has a pleasing color palette.' 
} ] }); prompt.run() .then(value => console.log('ANSWERS:', value)) .catch(console.error); ``` **Related prompts** * [Scale](#scale-prompt) * [Snippet](#snippet-prompt) * [Select](#select-prompt) *** ### Scale Prompt A more compact version of the [Survey prompt](#survey-prompt), the Scale prompt allows the user to quickly provide feedback using a [Likert Scale](https://en.wikipedia.org/wiki/Likert_scale). <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/scale-prompt.gif" alt="Enquirer Scale Prompt" width="750"> </p> **Example Usage** ```js const { Scale } = require('enquirer'); const prompt = new Scale({ name: 'experience', message: 'Please rate your experience', scale: [ { name: '1', message: 'Strongly Disagree' }, { name: '2', message: 'Disagree' }, { name: '3', message: 'Neutral' }, { name: '4', message: 'Agree' }, { name: '5', message: 'Strongly Agree' } ], margin: [0, 0, 2, 1], choices: [ { name: 'interface', message: 'The website has a friendly interface.', initial: 2 }, { name: 'navigation', message: 'The website is easy to navigate.', initial: 2 }, { name: 'images', message: 'The website usually has good images.', initial: 2 }, { name: 'upload', message: 'The website makes it easy to upload images.', initial: 2 }, { name: 'colors', message: 'The website has a pleasing color palette.', initial: 2 } ] }); prompt.run() .then(value => console.log('ANSWERS:', value)) .catch(console.error); ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Select Prompt Prompt that allows the user to select from a list of options. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/select-prompt.gif" alt="Enquirer Select Prompt" width="750"> </p> **Example Usage** ```js const { Select } = require('enquirer'); const prompt = new Select({ name: 'color', message: 'Pick a flavor', choices: ['apple', 'grape', 'watermelon', 'cherry', 'orange'] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [MultiSelect](#multiselect-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Sort Prompt Prompt that allows the user to sort items in a list. **Example** In this [example](https://github.com/enquirer/enquirer/raw/master/examples/sort/prompt.js), custom styling is applied to the returned values to make it easier to see what's happening. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/sort-prompt.gif" alt="Enquirer Sort Prompt" width="750"> </p> **Example Usage** ```js const colors = require('ansi-colors'); const { Sort } = require('enquirer'); const prompt = new Sort({ name: 'colors', message: 'Sort the colors in order of preference', hint: 'Top is best, bottom is worst', numbered: true, choices: ['red', 'white', 'green', 'cyan', 'yellow'].map(n => ({ name: n, message: colors[n](n) })) }); prompt.run() .then(function(answer = []) { console.log(answer); console.log('Your preferred order of colors is:'); console.log(answer.map(key => colors[key](key)).join('\n')); }) .catch(console.error); ``` **Related prompts** * [List](#list-prompt) * [Select](#select-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Snippet Prompt Prompt that allows the user to replace placeholders in a snippet of code or text. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/snippet-prompt.gif" alt="Prompts" width="750"> </p> **Example Usage** ```js const semver = require('semver'); const { Snippet } = require('enquirer'); const prompt = new Snippet({ name: 'username', message: 'Fill out the fields in package.json', required: true, fields: [ { name: 'author_name', message: 'Author Name' }, { name: 'version', validate(value, state, item, index) { if (item && item.name === 'version' && !semver.valid(value)) { return prompt.styles.danger('version should be a valid semver value'); } return true; } } ], template: `{ "name": "\${name}", "description": "\${description}", "version": "\${version}", "homepage": "https://github.com/\${username}/\${name}", "author": "\${author_name} (https://github.com/\${username})", "repository": "\${username}/\${name}", "license": "\${license:ISC}" } ` }); prompt.run() .then(answer => console.log('Answer:', answer.result)) .catch(console.error); ``` **Related prompts** * [Survey](#survey-prompt) * [AutoComplete](#autocomplete-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Toggle Prompt Prompt that allows the user to toggle between two values then returns `true` or `false`. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/toggle-prompt.gif" alt="Enquirer Toggle Prompt" width="750"> </p> **Example Usage** ```js const { Toggle } = require('enquirer'); const prompt = new Toggle({ message: 'Want to answer?', enabled: 'Yep', disabled: 'Nope' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Confirm](#confirm-prompt) * [Input](#input-prompt) * [Sort](#sort-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Prompt Types There are 5 (soon to be 6!) type classes: * [ArrayPrompt](#arrayprompt) - [Options](#options) - [Properties](#properties) - [Methods](#methods) - [Choices](#choices) - [Defining choices](#defining-choices) - [Choice properties](#choice-properties) - [Related prompts](#related-prompts) * [AuthPrompt](#authprompt) * [BooleanPrompt](#booleanprompt) * DatePrompt (Coming Soon!) * [NumberPrompt](#numberprompt) * [StringPrompt](#stringprompt) Each type is a low-level class that may be used as a starting point for creating higher level prompts. Continue reading to learn how. ### ArrayPrompt The `ArrayPrompt` class is used for creating prompts that display a list of choices in the terminal. 
For example, Enquirer uses this class as the basis for the [Select](#select-prompt) and [Survey](#survey-prompt) prompts.

#### Options

In addition to the [options](#options) available to all prompts, Array prompts also support the following options.

| **Option**  | **Required?** | **Type**         | **Description**                                                                                                          |
| ----------- | ------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------ |
| `autofocus` | `no`          | `string\|number` | The index or name of the choice that should have focus when the prompt loads. Only one choice may have focus at a time.  |
| `stdin`     | `no`          | `stream`         | The input stream to use for emitting keypress events. Defaults to `process.stdin`.                                        |
| `stdout`    | `no`          | `stream`         | The output stream to use for writing the prompt to the terminal. Defaults to `process.stdout`.                            |

#### Properties

Array prompts have the following instance properties and getters.

| **Property name** | **Type** | **Description** |
| ----------------- | -------- | --------------- |
| `choices`  | `array`  | Array of choices that have been normalized from the choices passed on the prompt options. |
| `cursor`   | `number` | Position of the cursor relative to the _user input (string)_. |
| `enabled`  | `array`  | Returns an array of enabled choices. |
| `focused`  | `object` | The currently selected choice in the visible list of choices, equivalent to `prompt.choices[prompt.index]`. This is similar to the concept of focus in HTML and CSS. Focused choices are always visible (on-screen). When a list of choices is longer than the list of visible choices, and an off-screen choice is _focused_, the list will scroll to the focused choice and re-render. |
| `index`    | `number` | Position of the pointer in the _visible list (array) of choices_. |
| `limit`    | `number` | The number of choices to display on-screen. |
| `selected` | `array`  | Either a list of enabled choices (when `options.multiple` is true) or the currently focused choice. |
| `visible`  | `string` | |

#### Methods

| **Method**    | **Description** |
| ------------- | --------------- |
| `pointer()`   | Returns the visual symbol used to identify the choice that currently has focus. The `❯` symbol is often used for this. The pointer is not always visible, as with the `autocomplete` prompt. |
| `indicator()` | Returns the visual symbol that indicates whether or not a choice is checked/enabled. |
| `focus()`     | Sets focus on a choice, if it can be focused. |

#### Choices

Array prompts support the `choices` option, which is the array of choices users will be able to select from when rendered in the terminal.
**Type**: `string|object` **Example** ```js const { prompt } = require('enquirer'); const questions = [{ type: 'select', name: 'color', message: 'Favorite color?', initial: 1, choices: [ { name: 'red', message: 'Red', value: '#ff0000' }, //<= choice object { name: 'green', message: 'Green', value: '#00ff00' }, //<= choice object { name: 'blue', message: 'Blue', value: '#0000ff' } //<= choice object ] }]; let answers = await prompt(questions); console.log('Answer:', answers.color); ``` #### Defining choices Whether defined as a string or object, choices are normalized to the following interface: ```js { name: string; message: string | undefined; value: string | undefined; hint: string | undefined; disabled: boolean | string | undefined; } ``` **Example** ```js const question = { name: 'fruit', message: 'Favorite fruit?', choices: ['Apple', 'Orange', 'Raspberry'] }; ``` Normalizes to the following when the prompt is run: ```js const question = { name: 'fruit', message: 'Favorite fruit?', choices: [ { name: 'Apple', message: 'Apple', value: 'Apple' }, { name: 'Orange', message: 'Orange', value: 'Orange' }, { name: 'Raspberry', message: 'Raspberry', value: 'Raspberry' } ] }; ``` #### Choice properties The following properties are supported on `choice` objects. | **Option** | **Type** | **Description** | | ----------- | ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `name` | `string` | The unique key to identify a choice | | `message` | `string` | The message to display in the terminal. `name` is used when this is undefined. | | `value` | `string` | Value to associate with the choice. Useful for creating key-value pairs from user choices. `name` is used when this is undefined. | | `choices` | `array` | Array of "child" choices. | | `hint` | `string` | Help message to display next to a choice. | | `role` | `string` | Determines how the choice will be displayed. Currently the only role supported is `separator`. Additional roles may be added in the future (like `heading`, etc). Please create a [feature request] | | `enabled` | `boolean` | Enabled a choice by default. This is only supported when `options.multiple` is true or on prompts that support multiple choices, like [MultiSelect](#-multiselect). | | `disabled` | `boolean\|string` | Disable a choice so that it cannot be selected. This value may either be `true`, `false`, or a message to display. | | `indicator` | `string\|function` | Custom indicator to render for a choice (like a check or radio button). | #### Related prompts * [AutoComplete](#autocomplete-prompt) * [Form](#form-prompt) * [MultiSelect](#multiselect-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) *** ### AuthPrompt The `AuthPrompt` is used to create prompts to log in user using any authentication method. For example, Enquirer uses this class as the basis for the [BasicAuth Prompt](#basicauth-prompt). You can also find prompt examples in `examples/auth/` folder that utilizes `AuthPrompt` to create OAuth based authentication prompt or a prompt that authenticates using time-based OTP, among others. `AuthPrompt` has a factory function that creates an instance of `AuthPrompt` class and it expects an `authenticate` function, as an argument, which overrides the `authenticate` function of the `AuthPrompt` class. 
#### Methods | **Method** | **Description** | | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `authenticate()` | Contain all the authentication logic. This function should be overridden to implement custom authentication logic. The default `authenticate` function throws an error if no other function is provided. | #### Choices Auth prompt supports the `choices` option, which is the similar to the choices used in [Form Prompt](#form-prompt). **Example** ```js const { AuthPrompt } = require('enquirer'); function authenticate(value, state) { if (value.username === this.options.username && value.password === this.options.password) { return true; } return false; } const CustomAuthPrompt = AuthPrompt.create(authenticate); const prompt = new CustomAuthPrompt({ name: 'password', message: 'Please enter your password', username: 'rajat-sr', password: '1234567', choices: [ { name: 'username', message: 'username' }, { name: 'password', message: 'password' } ] }); prompt .run() .then(answer => console.log('Authenticated?', answer)) .catch(console.error); ``` #### Related prompts * [BasicAuth Prompt](#basicauth-prompt) *** ### BooleanPrompt The `BooleanPrompt` class is used for creating prompts that display and return a boolean value. ```js const { BooleanPrompt } = require('enquirer'); const prompt = new BooleanPrompt({ header: '========================', message: 'Do you love enquirer?', footer: '========================', }); prompt.run() .then(answer => console.log('Selected:', answer)) .catch(console.error); ``` **Returns**: `boolean` *** ### NumberPrompt The `NumberPrompt` class is used for creating prompts that display and return a numerical value. ```js const { NumberPrompt } = require('enquirer'); const prompt = new NumberPrompt({ header: '************************', message: 'Input the Numbers:', footer: '************************', }); prompt.run() .then(answer => console.log('Numbers are:', answer)) .catch(console.error); ``` **Returns**: `string|number` (number, or number formatted as a string) *** ### StringPrompt The `StringPrompt` class is used for creating prompts that display and return a string value. ```js const { StringPrompt } = require('enquirer'); const prompt = new StringPrompt({ header: '************************', message: 'Input the String:', footer: '************************' }); prompt.run() .then(answer => console.log('String is:', answer)) .catch(console.error); ``` **Returns**: `string` <br> ## ❯ Custom prompts With Enquirer 2.0, custom prompts are easier than ever to create and use. **How do I create a custom prompt?** Custom prompts are created by extending either: * Enquirer's `Prompt` class * one of the built-in [prompts](#-prompts), or * low-level [types](#-types). <!-- Example: HaiKarate Custom Prompt --> ```js const { Prompt } = require('enquirer'); class HaiKarate extends Prompt { constructor(options = {}) { super(options); this.value = options.initial || 0; this.cursorHide(); } up() { this.value++; this.render(); } down() { this.value--; this.render(); } render() { this.clear(); // clear previously rendered prompt from the terminal this.write(`${this.state.message}: ${this.value}`); } } // Use the prompt by creating an instance of your custom prompt class. 
const prompt = new HaiKarate({ message: 'How many sprays do you want?', initial: 10 }); prompt.run() .then(answer => console.log('Sprays:', answer)) .catch(console.error); ``` If you want to be able to specify your prompt by `type` so that it may be used alongside other prompts, you will need to first create an instance of `Enquirer`. ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); ``` Then use the `.register()` method to add your custom prompt. ```js enquirer.register('haikarate', HaiKarate); ``` Now you can do the following when defining "questions". ```js let spritzer = require('cologne-drone'); let answers = await enquirer.prompt([ { type: 'haikarate', name: 'cologne', message: 'How many sprays do you need?', initial: 10, async onSubmit(name, value) { await spritzer.activate(value); //<= activate drone return value; } } ]); ``` <br> ## ❯ Key Bindings ### All prompts These key combinations may be used with all prompts. | **command** | **description** | | -------------------------------- | -------------------------------------- | | <kbd>ctrl</kbd> + <kbd>c</kbd> | Cancel the prompt. | | <kbd>ctrl</kbd> + <kbd>g</kbd> | Reset the prompt to its initial state. | <br> ### Move cursor These combinations may be used on prompts that support user input (eg. [input prompt](#input-prompt), [password prompt](#password-prompt), and [invisible prompt](#invisible-prompt)). | **command** | **description** | | ------------------------------ | ---------------------------------------- | | <kbd>left</kbd> | Move the cursor back one character. | | <kbd>right</kbd> | Move the cursor forward one character. | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move cursor to the start of the line | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move cursor to the end of the line | | <kbd>ctrl</kbd> + <kbd>b</kbd> | Move cursor back one character | | <kbd>ctrl</kbd> + <kbd>f</kbd> | Move cursor forward one character | | <kbd>ctrl</kbd> + <kbd>x</kbd> | Toggle between first and cursor position | <br> ### Edit Input These key combinations may be used on prompts that support user input (eg. [input prompt](#input-prompt), [password prompt](#password-prompt), and [invisible prompt](#invisible-prompt)). | **command** | **description** | | ------------------------------ | ---------------------------------------- | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move cursor to the start of the line | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move cursor to the end of the line | | <kbd>ctrl</kbd> + <kbd>b</kbd> | Move cursor back one character | | <kbd>ctrl</kbd> + <kbd>f</kbd> | Move cursor forward one character | | <kbd>ctrl</kbd> + <kbd>x</kbd> | Toggle between first and cursor position | <br> | **command (Mac)** | **command (Windows)** | **description** | | ----------------------------------- | -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | | <kbd>delete</kbd> | <kbd>backspace</kbd> | Delete one character to the left. | | <kbd>fn</kbd> + <kbd>delete</kbd> | <kbd>delete</kbd> | Delete one character to the right. | | <kbd>option</kbd> + <kbd>up</kbd> | <kbd>alt</kbd> + <kbd>up</kbd> | Scroll to the previous item in history ([Input prompt](#input-prompt) only, when [history is enabled](examples/input/option-history.js)). 
| | <kbd>option</kbd> + <kbd>down</kbd> | <kbd>alt</kbd> + <kbd>down</kbd> | Scroll to the next item in history ([Input prompt](#input-prompt) only, when [history is enabled](examples/input/option-history.js)). | ### Select choices These key combinations may be used on prompts that support _multiple_ choices, such as the [multiselect prompt](#multiselect-prompt), or the [select prompt](#select-prompt) when the `multiple` options is true. | **command** | **description** | | ----------------- | -------------------------------------------------------------------------------------------------------------------- | | <kbd>space</kbd> | Toggle the currently selected choice when `options.multiple` is true. | | <kbd>number</kbd> | Move the pointer to the choice at the given index. Also toggles the selected choice when `options.multiple` is true. | | <kbd>a</kbd> | Toggle all choices to be enabled or disabled. | | <kbd>i</kbd> | Invert the current selection of choices. | | <kbd>g</kbd> | Toggle the current choice group. | <br> ### Hide/show choices | **command** | **description** | | ------------------------------- | ---------------------------------------------- | | <kbd>fn</kbd> + <kbd>up</kbd> | Decrease the number of visible choices by one. | | <kbd>fn</kbd> + <kbd>down</kbd> | Increase the number of visible choices by one. | <br> ### Move/lock Pointer | **command** | **description** | | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------- | | <kbd>number</kbd> | Move the pointer to the choice at the given index. Also toggles the selected choice when `options.multiple` is true. | | <kbd>up</kbd> | Move the pointer up. | | <kbd>down</kbd> | Move the pointer down. | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move the pointer to the first _visible_ choice. | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move the pointer to the last _visible_ choice. | | <kbd>shift</kbd> + <kbd>up</kbd> | Scroll up one choice without changing pointer position (locks the pointer while scrolling). | | <kbd>shift</kbd> + <kbd>down</kbd> | Scroll down one choice without changing pointer position (locks the pointer while scrolling). | <br> | **command (Mac)** | **command (Windows)** | **description** | | -------------------------------- | --------------------- | ---------------------------------------------------------- | | <kbd>fn</kbd> + <kbd>left</kbd> | <kbd>home</kbd> | Move the pointer to the first choice in the choices array. | | <kbd>fn</kbd> + <kbd>right</kbd> | <kbd>end</kbd> | Move the pointer to the last choice in the choices array. | <br> ## ❯ Release History Please see [CHANGELOG.md](CHANGELOG.md). ## ❯ Performance ### System specs MacBook Pro, Intel Core i7, 2.5 GHz, 16 GB. ### Load time Time it takes for the module to load the first time (average of 3 runs): ``` enquirer: 4.013ms inquirer: 286.717ms ``` <br> ## ❯ About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Todo We're currently working on documentation for the following items. Please star and watch the repository for updates! 
* [ ] Customizing symbols * [ ] Customizing styles (palette) * [ ] Customizing rendered input * [ ] Customizing returned values * [ ] Customizing key bindings * [ ] Question validation * [ ] Choice validation * [ ] Skipping questions * [ ] Async choices * [ ] Async timers: loaders, spinners and other animations * [ ] Links to examples </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` ```sh $ yarn && yarn test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> #### Contributors | **Commits** | **Contributor** | | --- | --- | | 283 | [jonschlinkert](https://github.com/jonschlinkert) | | 82 | [doowb](https://github.com/doowb) | | 32 | [rajat-sr](https://github.com/rajat-sr) | | 20 | [318097](https://github.com/318097) | | 15 | [g-plane](https://github.com/g-plane) | | 12 | [pixelass](https://github.com/pixelass) | | 5 | [adityavyas611](https://github.com/adityavyas611) | | 5 | [satotake](https://github.com/satotake) | | 3 | [tunnckoCore](https://github.com/tunnckoCore) | | 3 | [Ovyerus](https://github.com/Ovyerus) | | 3 | [sw-yx](https://github.com/sw-yx) | | 2 | [DanielRuf](https://github.com/DanielRuf) | | 2 | [GabeL7r](https://github.com/GabeL7r) | | 1 | [AlCalzone](https://github.com/AlCalzone) | | 1 | [hipstersmoothie](https://github.com/hipstersmoothie) | | 1 | [danieldelcore](https://github.com/danieldelcore) | | 1 | [ImgBotApp](https://github.com/ImgBotApp) | | 1 | [jsonkao](https://github.com/jsonkao) | | 1 | [knpwrs](https://github.com/knpwrs) | | 1 | [yeskunall](https://github.com/yeskunall) | | 1 | [mischah](https://github.com/mischah) | | 1 | [renarsvilnis](https://github.com/renarsvilnis) | | 1 | [sbugert](https://github.com/sbugert) | | 1 | [stephencweiss](https://github.com/stephencweiss) | | 1 | [skellock](https://github.com/skellock) | | 1 | [whxaxes](https://github.com/whxaxes) | #### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) #### Credit Thanks to [derhuerst](https://github.com/derhuerst), creator of prompt libraries such as [prompt-skeleton](https://github.com/derhuerst/prompt-skeleton), which influenced some of the concepts we used in our prompts. #### License Copyright © 2018-present, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). # expand-template > Expand placeholders in a template string. 
[![npm](https://img.shields.io/npm/v/expand-template.svg)](https://www.npmjs.com/package/expand-template) ![Node version](https://img.shields.io/node/v/expand-template.svg) [![Build Status](https://travis-ci.org/ralphtheninja/expand-template.svg?branch=master)](https://travis-ci.org/ralphtheninja/expand-template) [![JavaScript Style Guide](https://img.shields.io/badge/code_style-standard-brightgreen.svg)](https://standardjs.com)

## Install

```
$ npm i expand-template -S
```

## Usage

Default functionality expands templates using `{}` as separators for string placeholders.

```js
var expand = require('expand-template')()
var template = '{foo}/{foo}/{bar}/{bar}'
console.log(expand(template, { foo: 'BAR', bar: 'FOO' }))
// -> BAR/BAR/FOO/FOO
```

Custom separators:

```js
var expand = require('expand-template')({ sep: '[]' })
var template = '[foo]/[foo]/[bar]/[bar]'
console.log(expand(template, { foo: 'BAR', bar: 'FOO' }))
// -> BAR/BAR/FOO/FOO
```

## License

All code, unless stated otherwise, is dual-licensed under [`WTFPL`](http://www.wtfpl.net/txt/copying/) and [`MIT`](https://opensource.org/licenses/MIT).

# AssemblyScript Loader

A convenient loader for [AssemblyScript](https://assemblyscript.org) modules. Demangles module exports to a friendly object structure compatible with TypeScript definitions and provides useful utilities to read/write data from/to memory.

[Documentation](https://assemblyscript.org/loader.html)

# near-sdk-as

A collection of packages used to develop NEAR smart contracts in AssemblyScript, including:

- [`runtime library`](https://github.com/near/near-sdk-as/tree/master/sdk-core) - the AssemblyScript NEAR runtime library
- [`bindgen`](https://github.com/near/near-sdk-as/tree/master/bindgen) - an AssemblyScript transformer that adds the bindings needed to (de)serialize inputs and outputs
- [`near-mock-vm`](https://github.com/near/near-sdk-as/tree/master/near-mock-vm) - the core of the NEAR VM compiled to WebAssembly, used for running unit tests
- [`@as-pect/cli`](https://github.com/jtenner/as-pect) - an AssemblyScript testing framework similar to jest

## To Install

```sh
yarn add -D near-sdk-as
```

## Project Setup

To set up an AssemblyScript project to compile with the SDK, add the following `asconfig.json` file to the project root:

```json
{
  "extends": "near-sdk-as/asconfig.json"
}
```

If your main file is `assembly/index.ts`, the project can then be built with [`asbuild`](https://github.com/willemneal/asbuild):

```sh
yarn asb
```

will create a release build and place it in `./build/release/<name-in-package.json>.wasm`

```sh
yarn asb --target debug
```

will create a debug build and place it in `./build/debug/..`

## Testing

### Unit Testing

See the [sdk's as-pect tests for an example](./sdk/assembly/__tests__) of creating unit tests. Test files must end in `.spec.ts` and live in an `assembly/__tests__` directory.

## License

`near-sdk-as` is distributed under the terms of both the MIT license and the Apache License (Version 2.0).

See [LICENSE-MIT](LICENSE-MIT) and [LICENSE-APACHE](LICENSE-APACHE) for details.

# node-tar

[![Build Status](https://travis-ci.org/npm/node-tar.svg?branch=master)](https://travis-ci.org/npm/node-tar)

[Fast](./benchmarks) and full-featured Tar for Node.js

The API is designed to mimic the behavior of `tar(1)` on unix systems. If you are familiar with how tar works, most of this will hopefully be straightforward for you.
If not, then hopefully this module can teach you useful unix skills that may come in handy someday :) ## Background A "tar file" or "tarball" is an archive of file system entries (directories, files, links, etc.) The name comes from "tape archive". If you run `man tar` on almost any Unix command line, you'll learn quite a bit about what it can do, and its history. Tar has 5 main top-level commands: * `c` Create an archive * `r` Replace entries within an archive * `u` Update entries within an archive (ie, replace if they're newer) * `t` List out the contents of an archive * `x` Extract an archive to disk The other flags and options modify how this top level function works. ## High-Level API These 5 functions are the high-level API. All of them have a single-character name (for unix nerds familiar with `tar(1)`) as well as a long name (for everyone else). All the high-level functions take the following arguments, all three of which are optional and may be omitted. 1. `options` - An optional object specifying various options 2. `paths` - An array of paths to add or extract 3. `callback` - Called when the command is completed, if async. (If sync or no file specified, providing a callback throws a `TypeError`.) If the command is sync (ie, if `options.sync=true`), then the callback is not allowed, since the action will be completed immediately. If a `file` argument is specified, and the command is async, then a `Promise` is returned. In this case, if async, a callback may be provided which is called when the command is completed. If a `file` option is not specified, then a stream is returned. For `create`, this is a readable stream of the generated archive. For `list` and `extract` this is a writable stream that an archive should be written into. If a file is not specified, then a callback is not allowed, because you're already getting a stream to work with. `replace` and `update` only work on existing archives, and so require a `file` argument. Sync commands without a file argument return a stream that acts on its input immediately in the same tick. For readable streams, this means that all of the data is immediately available by calling `stream.read()`. For writable streams, it will be acted upon as soon as it is provided, but this can be at any time. ### Warnings and Errors Tar emits warnings and errors for recoverable and unrecoverable situations, respectively. In many cases, a warning only affects a single entry in an archive, or is simply informing you that it's modifying an entry to comply with the settings provided. Unrecoverable warnings will always raise an error (ie, emit `'error'` on streaming actions, throw for non-streaming sync actions, reject the returned Promise for non-streaming async operations, or call a provided callback with an `Error` as the first argument). Recoverable errors will raise an error only if `strict: true` is set in the options. Respond to (recoverable) warnings by listening to the `warn` event. Handlers receive 3 arguments: - `code` String. One of the error codes below. This may not match `data.code`, which preserves the original error code from fs and zlib. - `message` String. More details about the error. - `data` Metadata about the error. An `Error` object for errors raised by fs and zlib. All fields are attached to errors raisd by tar. Typically contains the following fields, as relevant: - `tarCode` The tar error code. - `code` Either the tar error code, or the error code set by the underlying system. - `file` The archive file being read or written. 
- `cwd` Working directory for creation and extraction operations.
- `entry` The entry object (if it could be created) for `TAR_ENTRY_INFO`, `TAR_ENTRY_INVALID`, and `TAR_ENTRY_ERROR` warnings.
- `header` The header object (if it could be created, and the entry could not be created) for `TAR_ENTRY_INFO` and `TAR_ENTRY_INVALID` warnings.
- `recoverable` Boolean. If `false`, then the warning will emit an `error`, even in non-strict mode.

#### Error Codes

* `TAR_ENTRY_INFO` An informative error indicating that an entry is being modified, but otherwise processed normally. For example, removing `/` or `C:\` from absolute paths if `preservePaths` is not set.

* `TAR_ENTRY_INVALID` An indication that a given entry is not a valid tar archive entry, and will be skipped. This occurs when:
  - a checksum fails,
  - a `linkpath` is missing for a link type, or
  - a `linkpath` is provided for a non-link type.

  If every entry in a parsed archive raises a `TAR_ENTRY_INVALID` error, then the archive is presumed to be unrecoverably broken, and `TAR_BAD_ARCHIVE` will be raised.

* `TAR_ENTRY_ERROR` The entry appears to be a valid tar archive entry, but encountered an error which prevented it from being unpacked. This occurs when:
  - an unrecoverable fs error happens during unpacking,
  - an entry has `..` in the path and `preservePaths` is not set, or
  - an entry is extracting through a symbolic link, when `preservePaths` is not set.

* `TAR_ENTRY_UNSUPPORTED` An indication that a given entry is a valid archive entry, but of a type that is unsupported, and so will be skipped in archive creation or extraction.

* `TAR_ABORT` When parsing gzip-encoded archives, the parser will abort the parse process and raise a warning for any zlib errors encountered. Aborts are considered unrecoverable for both parsing and unpacking.

* `TAR_BAD_ARCHIVE` The archive file is totally hosed. This can happen for a number of reasons, and always occurs at the end of a parse or extract:
  - An entry body was truncated before seeing the full number of bytes.
  - The archive contained only invalid entries, indicating that it is likely not an archive, or at least, not an archive this library can parse.

  `TAR_BAD_ARCHIVE` is considered informative for parse operations, but unrecoverable for extraction. Note that, if encountered at the end of an extraction, tar WILL still have extracted as much as it could from the archive, so there may be some garbage files to clean up.

Errors that occur deeper in the system (ie, either the filesystem or zlib) will have their error codes left intact, and a `tarCode` matching one of the above will be added to the warning metadata or the raised error object.

Errors generated by tar will have one of the above codes set as the `error.code` field as well, but since errors originating in zlib or fs will have their original codes, it's better to read `error.tarCode` if you wish to see how tar is handling the issue.

### Examples

The API mimics the `tar(1)` command line functionality, with aliases for more human-readable option and function names. The goal is that if you know how to use `tar(1)` in Unix, then you know how to use `require('tar')` in JavaScript.

To replicate `tar czf my-tarball.tgz files and folders`, you'd do:

```js
tar.c(
  {
    gzip: <true|gzip options>,
    file: 'my-tarball.tgz'
  },
  ['some', 'files', 'and', 'folders']
).then(_ => {
  .. tarball has been created ..
}) ``` To replicate `tar cz files and folders > my-tarball.tgz`, you'd do: ```js tar.c( // or tar.create { gzip: <true|gzip options> }, ['some', 'files', 'and', 'folders'] ).pipe(fs.createWriteStream('my-tarball.tgz')) ``` To replicate `tar xf my-tarball.tgz` you'd do: ```js tar.x( // or tar.extract( { file: 'my-tarball.tgz' } ).then(_=> { .. tarball has been dumped in cwd .. }) ``` To replicate `cat my-tarball.tgz | tar x -C some-dir --strip=1`: ```js fs.createReadStream('my-tarball.tgz').pipe( tar.x({ strip: 1, C: 'some-dir' // alias for cwd:'some-dir', also ok }) ) ``` To replicate `tar tf my-tarball.tgz`, do this: ```js tar.t({ file: 'my-tarball.tgz', onentry: entry => { .. do whatever with it .. } }) ``` To replicate `cat my-tarball.tgz | tar t` do: ```js fs.createReadStream('my-tarball.tgz') .pipe(tar.t()) .on('entry', entry => { .. do whatever with it .. }) ``` To do anything synchronous, add `sync: true` to the options. Note that sync functions don't take a callback and don't return a promise. When the function returns, it's already done. Sync methods without a file argument return a sync stream, which flushes immediately. But, of course, it still won't be done until you `.end()` it. To filter entries, add `filter: <function>` to the options. Tar-creating methods call the filter with `filter(path, stat)`. Tar-reading methods (including extraction) call the filter with `filter(path, entry)`. The filter is called in the `this`-context of the `Pack` or `Unpack` stream object. The arguments list to `tar t` and `tar x` specify a list of filenames to extract or list, so they're equivalent to a filter that tests if the file is in the list. For those who _aren't_ fans of tar's single-character command names: ``` tar.c === tar.create tar.r === tar.replace (appends to archive, file is required) tar.u === tar.update (appends if newer, file is required) tar.x === tar.extract tar.t === tar.list ``` Keep reading for all the command descriptions and options, as well as the low-level API that they are built on. ### tar.c(options, fileList, callback) [alias: tar.create] Create a tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Write the tarball archive to the specified filename. If this is specified, then the callback will be fired when the file has been written, and a promise will be returned that resolves when the file is written. If a filename is not specified, then a Readable Stream will be returned which will emit the file data. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. If this is set, and a file is not provided, then the resulting stream will already have the data ready to `read` or `emit('data')` as soon as you request it. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. 
- `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `mode` The mode to set on the created file archive - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. ### tar.x(options, fileList, callback) [alias: tar.extract] Extract a tarball archive. The `fileList` is an array of paths to extract from the tarball. If no paths are provided, then all the entries are extracted. If the archive is gzipped, then tar will detect this and unzip it. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. Most extraction errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then the extraction will fail completely. The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. [Alias: `C`] - `file` The archive file to extract. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Create files and directories synchronously. - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. [Alias: `keep-newer`, `keep-newer-files`] - `keep` Do not overwrite existing files. 
In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. [Alias: `k`, `keep-existing`] - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. [Alias: `P`] - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. [Alias: `U`] - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. [Alias: `strip-components`, `stripComponents`] - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. [Alias: `p`] - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. [Alias: `m`, `no-mtime`] - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync extractions. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### tar.t(options, fileList, callback) [alias: tar.list] List the contents of a tarball archive. The `fileList` is an array of paths to list from the tarball. 
If no paths are provided, then all the entries are listed. If the archive is gzipped, then tar will detect this and unzip it. Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. However, they don't emit `'data'` or `'end'` events. (If you want to get actual readable entries, use the `tar.Parse` class instead.) The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. [Alias: `C`] - `file` The archive file to list. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Read the specified file synchronously. (This has no effect when a file option isn't specified, because entries are emitted as fast as they are parsed from the stream anyway.) - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. This is important for when both `file` and `sync` are set, because it will be called synchronously. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noResume` By default, `entry` streams are resumed immediately after the call to `onentry`. Set `noResume: true` to suppress this behavior. Note that by opting into this, the stream will never complete until the entry data is consumed. ### tar.u(options, fileList, callback) [alias: tar.update] Add files to an archive if they are newer than the entry already in the tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. 
Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ### tar.r(options, fileList, callback) [alias: tar.replace] Add files to an existing archive. Because later entries override earlier entries, this effectively replaces any existing entries. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ## Low-Level API ### class tar.Pack A readable tar stream. Has all the standard readable stream interface stuff. `'data'` and `'end'` events, `read()` method, `pause()` and `resume()`, etc. 
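As a rough usage sketch (the paths below are hypothetical, and the destination can be any writable stream), using the `add()` and `end()` methods described below:

```js
const tar = require('tar')
const fs = require('fs')

// Pack a couple of hypothetical paths from the current directory into out.tar
const pack = new tar.Pack({ cwd: process.cwd() })
pack.pipe(fs.createWriteStream('out.tar'))
pack.add('some-file.txt')
pack.add('some-directory')
pack.end()
```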
#### constructor(options) The following options are supported: - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. #### add(path) Adds an entry to the archive. Returns the Pack stream. #### write(path) Adds an entry to the archive. Returns true if flushed. #### end() Finishes the archive. ### class tar.Pack.Sync Synchronous version of `tar.Pack`. ### class tar.Unpack A writable stream that unpacks a tar archive onto the file system. All the normal writable stream stuff is supported. `write()` and `end()` methods, `'drain'` events, etc. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. `'close'` is emitted when it's done writing stuff to the file system. Most unpack errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then an error will be emitted. #### constructor(options) - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. - `keep` Do not overwrite existing files. 
In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. - `win32` True if on a windows platform. Causes behavior where filenames containing `<|>?` chars are converted to windows-compatible values while being unpacked. - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `strict` Treat warnings as crash-worthy errors. Default false. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") ### class tar.Unpack.Sync Synchronous version of `tar.Unpack`. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync unpack streams. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### class tar.Parse A writable stream that parses a tar archive stream. 
All the standard writable stream stuff is supported. If the archive is gzipped, then tar will detect this and unzip it. Emits `'entry'` events with `tar.ReadEntry` objects, which are themselves readable streams that you can pipe wherever. Each `entry` will not emit until the one before it is flushed through, so make sure to either consume the data (with `on('data', ...)` or `.pipe(...)`) or throw it away with `.resume()` to keep the stream flowing. #### constructor(options) Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. The following options are supported: - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") #### abort(error) Stop all parsing activities. This is called when there are zlib errors. It also emits an unrecoverable warning with the error provided. ### class tar.ReadEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being read out of a tar archive. It has the following fields: - `extended` The extended metadata object provided to the constructor. - `globalExtended` The global extended metadata object provided to the constructor. - `remain` The number of bytes remaining to be written into the stream. - `blockRemain` The number of 512-byte blocks remaining to be written into the stream. - `ignore` Whether this entry should be ignored. - `meta` True if this represents metadata about the next entry, false if it represents a filesystem object. - All the fields from the header, extended header, and global extended header are added to the ReadEntry object. So it has `path`, `type`, `size`, `mode`, and so on. #### constructor(header, extended, globalExtended) Create a new ReadEntry object with the specified header, extended header, and global extended header values. ### class tar.WriteEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being written from the file system into a tar archive. Emits data for the Header, and for the Pax Extended Header if one is required, as well as any body data. Creating a WriteEntry for a directory does not also create WriteEntry objects for all of the directory contents. It has the following fields: - `path` The path field that will be written to the archive. By default, this is also the path from the cwd to the file system object. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `myuid` If supported, the uid of the user running the current process. - `myuser` The `env.USER` string if set, or `''`. Set as the entry `uname` field if the file's `uid` matches `this.myuid`. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls to `lstat`. - `preservePaths` Allow absolute paths.
By default, `/` is stripped from absolute paths. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly. - `strict` Treat warnings as crash-worthy errors. Default false. - `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/` and filenames containing the windows-compatible forms of `<|>?:` characters are converted to actual `<|>?:` characters in the archive. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. #### constructor(path, options) `path` is the path of the entry as it is written in the archive. The following options are supported: - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls to `lstat`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly. - `strict` Treat warnings as crash-worthy errors. Default false. - `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `umask` Set to restrict the modes on the entries in the archive, somewhat like how umask works on file creation. Defaults to `process.umask()` on unix systems, or `0o22` on Windows. #### warn(message, data) If strict, emit an error with the provided message. Otherwise, emit a `'warn'` event with the provided message and data. ### class tar.WriteEntry.Sync Synchronous version of tar.WriteEntry. ### class tar.WriteEntry.Tar A version of tar.WriteEntry that gets its data from a tar.ReadEntry instead of from the filesystem. #### constructor(readEntry, options) `readEntry` is the entry being read out of another archive. The following options are supported: - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `strict` Treat warnings as crash-worthy errors. Default false. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. ### class tar.Header A class for reading and writing header blocks. It has the following fields: - `nullBlock` True if decoding a block which is entirely composed of `0x00` null bytes. (Useful because tar files are terminated by at least 2 null blocks.) - `cksumValid` True if the checksum in the header is valid, false otherwise. - `needPax` True if the values, as encoded, will require a Pax extended header. - `path` The path of the entry. - `mode` The 4 lowest-order octal digits of the file mode. That is, read/write/execute permissions for world, group, and owner, and the setuid, setgid, and sticky bits. - `uid` Numeric user id of the file owner - `gid` Numeric group id of the file owner - `size` Size of the file in bytes - `mtime` Modified time of the file - `cksum` The checksum of the header. This is generated by adding all the bytes of the header block, treating the checksum field itself as all ascii space characters (that is, `0x20`). - `type` The human-readable name of the type of entry this represents, or the alphanumeric key if unknown. - `typeKey` The alphanumeric key for the type of entry this header represents. - `linkpath` The target of Link and SymbolicLink entries. - `uname` Human-readable user name of the file owner - `gname` Human-readable group name of the file owner - `devmaj` The major portion of the device number. Always `0` for files, directories, and links. - `devmin` The minor portion of the device number. Always `0` for files, directories, and links. - `atime` File access time. - `ctime` File change time. #### constructor(data, [offset=0]) `data` is optional. It is either a Buffer that should be interpreted as a tar Header starting at the specified offset and continuing for 512 bytes, or a data object of keys and values to set on the header object, and eventually encode as a tar Header. #### decode(block, offset) Decode the provided buffer starting at the specified offset. Buffer length must be greater than 512 bytes. #### set(data) Set the fields in the data object. #### encode(buffer, offset) Encode the header fields into the buffer at the specified offset. Returns `this.needPax` to indicate whether a Pax Extended Header is required to properly encode the specified data. ### class tar.Pax An object representing a set of key-value pairs in an Pax extended header entry. It has the following fields. Where the same name is used, they have the same semantics as the tar.Header field of the same name. - `global` True if this represents a global extended header, or false if it is for a single entry. - `atime` - `charset` - `comment` - `ctime` - `gid` - `gname` - `linkpath` - `mtime` - `path` - `size` - `uid` - `uname` - `dev` - `ino` - `nlink` #### constructor(object, global) Set the fields set in the object. `global` is a boolean that defaults to false. #### encode() Return a Buffer containing the header and body for the Pax extended header entry, or `null` if there is nothing to encode. #### encodeBody() Return a string representing the body of the pax extended header entry. 
#### encodeField(fieldName) Return a string representing the key/value encoding for the specified fieldName, or `''` if the field is unset. ### tar.Pax.parse(string, extended, global) Return a new Pax object created by parsing the contents of the string provided. If the `extended` object is set, then also add the fields from that object. (This is necessary because multiple metadata entries can occur in sequence.) ### tar.types A translation table for the `type` field in tar headers. #### tar.types.name.get(code) Get the human-readable name for a given alphanumeric code. #### tar.types.code.get(name) Get the alphanumeric code for a given human-readable name. # Web IDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [Web IDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). The goal is that you should be able to write code like ```js "use strict"; const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a Web IDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different Web IDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the Web IDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the Web IDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). Each method also accepts a second, optional, parameter for miscellaneous options. For conversion methods that throw errors, a string option `{ context }` may be provided to provide more information in the error message. (For example, `conversions["float"](NaN, { context: "Argument 1 of Interface's operation" })` will throw an error with message `"Argument 1 of Interface's operation is not a finite floating-point value."`) Specific conversions may also accept other options, the details of which can be found below. 
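For instance, a small sketch of the options parameter described above (the operation name in the `context` string is made up for illustration):

```js
"use strict";
const conversions = require("webidl-conversions");

console.log(conversions["unsigned long"](3.7)); // => 3

try {
  conversions["float"](NaN, { context: "Argument 1 of doStuff" });
} catch (err) {
  // => "Argument 1 of doStuff is not a finite floating-point value."
  console.log(err.message);
}
```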
## Conversions implemented Conversions for all of the basic types from the Web IDL specification are implemented: - [`any`](https://heycam.github.io/webidl/#es-any) - [`void`](https://heycam.github.io/webidl/#es-void) - [`boolean`](https://heycam.github.io/webidl/#es-boolean) - [Integer types](https://heycam.github.io/webidl/#es-integer-types), which can additionally be provided the boolean options `{ clamp, enforceRange }` as a second parameter - [`float`](https://heycam.github.io/webidl/#es-float), [`unrestricted float`](https://heycam.github.io/webidl/#es-unrestricted-float) - [`double`](https://heycam.github.io/webidl/#es-double), [`unrestricted double`](https://heycam.github.io/webidl/#es-unrestricted-double) - [`DOMString`](https://heycam.github.io/webidl/#es-DOMString), which can additionally be provided the boolean option `{ treatNullAsEmptyString }` as a second parameter - [`ByteString`](https://heycam.github.io/webidl/#es-ByteString), [`USVString`](https://heycam.github.io/webidl/#es-USVString) - [`object`](https://heycam.github.io/webidl/#es-object) - [`Error`](https://heycam.github.io/webidl/#es-Error) - [Buffer source types](https://heycam.github.io/webidl/#es-buffer-source-types) Additionally, for convenience, the following derived type definitions are implemented: - [`ArrayBufferView`](https://heycam.github.io/webidl/#ArrayBufferView) - [`BufferSource`](https://heycam.github.io/webidl/#BufferSource) - [`DOMTimeStamp`](https://heycam.github.io/webidl/#DOMTimeStamp) - [`Function`](https://heycam.github.io/webidl/#Function) - [`VoidFunction`](https://heycam.github.io/webidl/#VoidFunction) (although it will not censor the return type) Derived types, such as nullable types, promise types, sequences, records, etc. are not handled by this library. You may wish to investigate the [webidl2js](https://github.com/jsdom/webidl2js) project. ### A note on the `long long` types The `long long` and `unsigned long long` Web IDL types can hold values that cannot be stored in JavaScript numbers, so the conversion is imperfect. For example, converting the JavaScript number `18446744073709552000` to a Web IDL `long long` is supposed to produce the Web IDL value `-18446744073709551232`. Since we are representing our Web IDL values in JavaScript, we can't represent `-18446744073709551232`, so instead the best we could do is `-18446744073709552000` as the output. This library actually doesn't even get that far. Producing those results would require doing accurate modular arithmetic on 64-bit intermediate values, but JavaScript does not make this easy. We could pull in a big-integer library as a dependency, but in lieu of that, we for now have decided to just produce inaccurate results if you pass in numbers that are not strictly between `Number.MIN_SAFE_INTEGER` and `Number.MAX_SAFE_INTEGER`. ## Background What's actually going on here, conceptually, is pretty weird. Let's try to explain. Web IDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on Web IDL values, i.e. instances of Web IDL types. For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a Web IDL value of [Web IDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules.
Separately from its type system, Web IDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given Web IDL operation, how does that get converted into a Web IDL value? For example, a JavaScript `true` passed in the position of a Web IDL `boolean` argument becomes a Web IDL `true`. But, a JavaScript `true` passed in the position of a [Web IDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a Web IDL `1`. And so on. Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the Web IDL algorithms, they don't actually use Web IDL values, since those aren't "real" outside of specs. Instead, implementations apply the Web IDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting Web IDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is the payoff of Web IDL, in a nutshell. And getting to that payoff is the goal of _this_ project, but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given Web IDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ Web IDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ Web IDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a Web IDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't use this Seriously, why would you ever use this? You really shouldn't. Web IDL is … strange, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from Web IDL. In general, your JavaScript should not be trying to become more like Web IDL; if anything, we should fix Web IDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in Web IDL. Its main consumer is the [jsdom](https://github.com/tmpvar/jsdom) project.
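If you do fall into that category, here is one more hedged sketch, showing the `{ clamp, enforceRange }` options listed under "Conversions implemented" above with the 8-bit `octet` type (the expected values follow the Web IDL conversion rules):

```js
"use strict";
const conversions = require("webidl-conversions");

console.log(conversions["octet"](300));                  // => 44  (wraps modulo 256)
console.log(conversions["octet"](300, { clamp: true })); // => 255 (clamped to the valid range)

try {
  conversions["octet"](300, { enforceRange: true });     // out-of-range values throw
} catch (err) {
  console.log(err instanceof TypeError);                 // => true
}
```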
# homedir-polyfill [![NPM version](https://img.shields.io/npm/v/homedir-polyfill.svg?style=flat)](https://www.npmjs.com/package/homedir-polyfill) [![NPM monthly downloads](https://img.shields.io/npm/dm/homedir-polyfill.svg?style=flat)](https://npmjs.org/package/homedir-polyfill) [![NPM total downloads](https://img.shields.io/npm/dt/homedir-polyfill.svg?style=flat)](https://npmjs.org/package/homedir-polyfill) [![Linux Build Status](https://img.shields.io/travis/doowb/homedir-polyfill.svg?style=flat&label=Travis)](https://travis-ci.org/doowb/homedir-polyfill) [![Windows Build Status](https://img.shields.io/appveyor/ci/doowb/homedir-polyfill.svg?style=flat&label=AppVeyor)](https://ci.appveyor.com/project/doowb/homedir-polyfill) > Node.js os.homedir polyfill for older versions of node.js. Please consider following this project's author, [Brian Woodward](https://github.com/doowb), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save homedir-polyfill ``` ## Usage ```js var homedir = require('homedir-polyfill'); console.log(homedir()); //=> /Users/doowb ``` ## Reasoning This library is a polyfill for the [node.js os.homedir](https://nodejs.org/api/os.html#os_os_homedir) method found in modern versions of node.js. This implementation tries to follow the implementation found in `libuv` by finding the current user using the `process.geteuid()` method and the `/etc/passwd` file. This should usually work in a linux environment, but will also fall back to looking at user specific environment variables to build the user's home directory if necessary. Since `/etc/passwd` is not available on windows platforms, this implementation will use environment variables to find the home directory. In modern versions of node.js, [os.homedir](https://nodejs.org/api/os.html#os_os_homedir) is used. ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). Please read the [contributing guide](contributing.md) for advice on opening issues, pull requests, and coding standards. </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: [parse-passwd](https://www.npmjs.com/package/parse-passwd): Parse a passwd file into a list of users.
| [homepage](https://github.com/doowb/parse-passwd "Parse a passwd file into a list of users.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 19 | [doowb](https://github.com/doowb) | | 2 | [martinheidegger](https://github.com/martinheidegger) | ### Author **Brian Woodward** * [GitHub Profile](https://github.com/doowb) * [Twitter Profile](https://twitter.com/doowb) * [LinkedIn Profile](https://linkedin.com/in/woodwardbrian) ### License Copyright © 2016 - 2019, [Brian Woodward](https://github.com/doowb). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on February 21, 2019._ # typedarray-to-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/typedarray-to-buffer/master.svg [travis-url]: https://travis-ci.org/feross/typedarray-to-buffer [npm-image]: https://img.shields.io/npm/v/typedarray-to-buffer.svg [npm-url]: https://npmjs.org/package/typedarray-to-buffer [downloads-image]: https://img.shields.io/npm/dm/typedarray-to-buffer.svg [downloads-url]: https://npmjs.org/package/typedarray-to-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Convert a typed array to a [Buffer](https://github.com/feross/buffer) without a copy. [![saucelabs][saucelabs-image]][saucelabs-url] [saucelabs-image]: https://saucelabs.com/browser-matrix/typedarray-to-buffer.svg [saucelabs-url]: https://saucelabs.com/u/typedarray-to-buffer Say you're using the ['buffer'](https://github.com/feross/buffer) module on npm, or [browserify](http://browserify.org/) and you're working with lots of binary data. Unfortunately, sometimes the browser or someone else's API gives you a typed array like `Uint8Array` to work with and you need to convert it to a `Buffer`. What do you do? Of course: `Buffer.from(uint8array)` But, alas, every time you do `Buffer.from(uint8array)` **the entire array gets copied**. The `Buffer` constructor does a copy; this is defined by the [node docs](http://nodejs.org/api/buffer.html) and the 'buffer' module matches the node API exactly. So, how can we avoid this expensive copy in [performance critical applications](https://github.com/feross/buffer/issues/22)? ***Simply use this module, of course!*** If you have an `ArrayBuffer`, you don't need this module, because `Buffer.from(arrayBuffer)` [is already efficient](https://nodejs.org/api/buffer.html#buffer_class_method_buffer_from_arraybuffer_byteoffset_length). ## install ```bash npm install typedarray-to-buffer ``` ## usage To convert a typed array to a `Buffer` **without a copy**, do this: ```js var toBuffer = require('typedarray-to-buffer') var arr = new Uint8Array([1, 2, 3]) arr = toBuffer(arr) // arr is a buffer now! arr.toString() // '\u0001\u0002\u0003' arr.readUInt16BE(0) // 258 ``` ## how it works If the browser supports typed arrays, then `toBuffer` will **augment the typed array** you pass in with the `Buffer` methods and return it. See [how does Buffer work?](https://github.com/feross/buffer#how-does-it-work) for more about how augmentation works. This module uses the typed array's underlying `ArrayBuffer` to back the new `Buffer`. This respects the "view" on the `ArrayBuffer`, i.e. `byteOffset` and `byteLength`. 
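For instance, a small hedged sketch of how a view with a non-zero `byteOffset` is preserved:

```js
var toBuffer = require('typedarray-to-buffer')

var ab = new ArrayBuffer(8)
var view = new Uint8Array(ab, 4, 2) // byteOffset 4, byteLength 2
view[0] = 42

var buf = toBuffer(view)
console.log(buf.length) // 2 (only the two viewed bytes, not the full 8-byte ArrayBuffer)
console.log(buf[0])     // 42
```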
In other words, if you do `toBuffer(new Uint32Array([1, 2, 3]))`, then the new `Buffer` will contain `[1, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0]`, **not** `[1, 2, 3]`. And it still doesn't require a copy. If the browser doesn't support typed arrays, then `toBuffer` will create a new `Buffer` object, copy the data into it, and return it. There's no simple performance optimization we can do for old browsers. Oh well. If this module is used in node, then it will just call `Buffer.from`. This is just for the convenience of modules that work in both node and the browser. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org). # lodash.merge v4.6.2 The [Lodash](https://lodash.com/) method `_.merge` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.merge ``` In Node.js: ```js var merge = require('lodash.merge'); ``` See the [documentation](https://lodash.com/docs#merge) or [package source](https://github.com/lodash/lodash/blob/4.6.2-npm-packages/lodash.merge) for more details. # bip39-light A lightweight fork of [bitcoinjs/bip39](https://github.com/bitcoinjs/bip39). Only the English wordlist, with some dependencies removed. JavaScript implementation of [Bitcoin BIP39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki): Mnemonic code for generating deterministic keys. ## Reminder for developers ***Please remember to allow recovery from mnemonic phrases that have invalid checksums (or that you don't have the wordlist)*** When a checksum is invalid, warn the user that the phrase is not something generated by your app, and ask if they would like to use it anyway. This way, your app only needs to hold the wordlists for your supported languages, but you can recover phrases made by other apps in other languages. However, there should be other checks in place, such as checking to make sure the user is inputting 12 words or more separated by a space. ie.
`phrase.trim().split(/\s+/g).length >= 12` ## Examples ``` js // Generate a random mnemonic (uses crypto.randomBytes under the hood), defaults to 128-bits of entropy var mnemonic = bip39.generateMnemonic() // => 'seed sock milk update focus rotate barely fade car face mechanic mercy' bip39.mnemonicToSeedHex('basket actual') // => '5cf2d4a8b0355e90295bdfc565a022a409af063d5365bb57bf74d9528f494bfa4400f53d8349b80fdae44082d7f9541e1dba2b003bcfec9d0d53781ca676651f' bip39.mnemonicToSeed('basket actual') // => <Buffer 5c f2 d4 a8 b0 35 5e 90 29 5b df c5 65 a0 22 a4 09 af 06 3d 53 65 bb 57 bf 74 d9 52 8f 49 4b fa 44 00 f5 3d 83 49 b8 0f da e4 40 82 d7 f9 54 1e 1d ba 2b ...> bip39.validateMnemonic(mnemonic) // => true bip39.validateMnemonic('basket actual') // => false ``` ``` js var bip39 = require('bip39-light') // defaults to BIP39 English word list // uses HEX strings for entropy var mnemonic = bip39.entropyToMnemonic('00000000000000000000000000000000') // => abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about // reversible bip39.mnemonicToEntropy(mnemonic) // => '00000000000000000000000000000000' ``` # braces [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/braces.svg?style=flat)](https://www.npmjs.com/package/braces) [![NPM monthly downloads](https://img.shields.io/npm/dm/braces.svg?style=flat)](https://npmjs.org/package/braces) [![NPM total downloads](https://img.shields.io/npm/dt/braces.svg?style=flat)](https://npmjs.org/package/braces) [![Linux Build Status](https://img.shields.io/travis/micromatch/braces.svg?style=flat&label=Travis)](https://travis-ci.org/micromatch/braces) > Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support for the Bash 4.3 braces specification, without sacrificing speed. Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save braces ``` ## v3.0.0 Released!! See the [changelog](CHANGELOG.md) for details. ## Why use braces? Brace patterns make globs more powerful by adding the ability to match specific ranges and sequences of characters. * **Accurate** - complete support for the [Bash 4.3 Brace Expansion](www.gnu.org/software/bash/) specification (passes all of the Bash braces tests) * **[fast and performant](#benchmarks)** - Starts fast, runs fast and [scales well](#performance) as patterns increase in complexity. * **Organized code base** - The parser and compiler are easy to maintain and update when edge cases crop up. * **Well-tested** - Thousands of test assertions, and passes all of the Bash, minimatch, and [brace-expansion](https://github.com/juliangruber/brace-expansion) unit tests (as of the date this was written). * **Safer** - You shouldn't have to worry about users defining aggressive or malicious brace patterns that can break your application. Braces takes measures to prevent malicious regex that can be used for DDoS attacks (see [catastrophic backtracking](https://www.regular-expressions.info/catastrophic.html)). 
* [Supports lists](#lists) - (aka "sets") `a/{b,c}/d` => `['a/b/d', 'a/c/d']` * [Supports sequences](#sequences) - (aka "ranges") `{01..03}` => `['01', '02', '03']` * [Supports steps](#steps) - (aka "increments") `{2..10..2}` => `['2', '4', '6', '8', '10']` * [Supports escaping](#escaping) - To prevent evaluation of special characters. ## Usage The main export is a function that takes one or more brace `patterns` and `options`. ```js const braces = require('braces'); // braces(patterns[, options]); console.log(braces(['{01..05}', '{a..e}'])); //=> ['(0[1-5])', '([a-e])'] console.log(braces(['{01..05}', '{a..e}'], { expand: true })); //=> ['01', '02', '03', '04', '05', 'a', 'b', 'c', 'd', 'e'] ``` ### Brace Expansion vs. Compilation By default, brace patterns are compiled into strings that are optimized for creating regular expressions and matching. **Compiled** ```js console.log(braces('a/{x,y,z}/b')); //=> ['a/(x|y|z)/b'] console.log(braces(['a/{01..20}/b', 'a/{1..5}/b'])); //=> [ 'a/(0[1-9]|1[0-9]|20)/b', 'a/([1-5])/b' ] ``` **Expanded** Enable brace expansion by setting the `expand` option to true, or by using [braces.expand()](#expand) (returns an array similar to what you'd expect from Bash, or `echo {1..5}`, or [minimatch](https://github.com/isaacs/minimatch)): ```js console.log(braces('a/{x,y,z}/b', { expand: true })); //=> ['a/x/b', 'a/y/b', 'a/z/b'] console.log(braces.expand('{01..10}')); //=> ['01','02','03','04','05','06','07','08','09','10'] ``` ### Lists Expand lists (like Bash "sets"): ```js console.log(braces('a/{foo,bar,baz}/*.js')); //=> ['a/(foo|bar|baz)/*.js'] console.log(braces.expand('a/{foo,bar,baz}/*.js')); //=> ['a/foo/*.js', 'a/bar/*.js', 'a/baz/*.js'] ``` ### Sequences Expand ranges of characters (like Bash "sequences"): ```js console.log(braces.expand('{1..3}')); // ['1', '2', '3'] console.log(braces.expand('a/{1..3}/b')); // ['a/1/b', 'a/2/b', 'a/3/b'] console.log(braces('{a..c}', { expand: true })); // ['a', 'b', 'c'] console.log(braces('foo/{a..c}', { expand: true })); // ['foo/a', 'foo/b', 'foo/c'] // supports zero-padded ranges console.log(braces('a/{01..03}/b')); //=> ['a/(0[1-3])/b'] console.log(braces('a/{001..300}/b')); //=> ['a/(0{2}[1-9]|0[1-9][0-9]|[12][0-9]{2}|300)/b'] ``` See [fill-range](https://github.com/jonschlinkert/fill-range) for all available range-expansion options. ### Steppped ranges Steps, or increments, may be used with ranges: ```js console.log(braces.expand('{2..10..2}')); //=> ['2', '4', '6', '8', '10'] console.log(braces('{2..10..2}')); //=> ['(2|4|6|8|10)'] ``` When the [.optimize](#optimize) method is used, or [options.optimize](#optionsoptimize) is set to true, sequences are passed to [to-regex-range](https://github.com/jonschlinkert/to-regex-range) for expansion. ### Nesting Brace patterns may be nested. The results of each expanded string are not sorted, and left to right order is preserved. 
**"Expanded" braces** ```js console.log(braces.expand('a{b,c,/{x,y}}/e')); //=> ['ab/e', 'ac/e', 'a/x/e', 'a/y/e'] console.log(braces.expand('a/{x,{1..5},y}/c')); //=> ['a/x/c', 'a/1/c', 'a/2/c', 'a/3/c', 'a/4/c', 'a/5/c', 'a/y/c'] ``` **"Optimized" braces** ```js console.log(braces('a{b,c,/{x,y}}/e')); //=> ['a(b|c|/(x|y))/e'] console.log(braces('a/{x,{1..5},y}/c')); //=> ['a/(x|([1-5])|y)/c'] ``` ### Escaping **Escaping braces** A brace pattern will not be expanded or evaluted if _either the opening or closing brace is escaped_: ```js console.log(braces.expand('a\\{d,c,b}e')); //=> ['a{d,c,b}e'] console.log(braces.expand('a{d,c,b\\}e')); //=> ['a{d,c,b}e'] ``` **Escaping commas** Commas inside braces may also be escaped: ```js console.log(braces.expand('a{b\\,c}d')); //=> ['a{b,c}d'] console.log(braces.expand('a{d\\,c,b}e')); //=> ['ad,ce', 'abe'] ``` **Single items** Following bash conventions, a brace pattern is also not expanded when it contains a single character: ```js console.log(braces.expand('a{b}c')); //=> ['a{b}c'] ``` ## Options ### options.maxLength **Type**: `Number` **Default**: `65,536` **Description**: Limit the length of the input string. Useful when the input string is generated or your application allows users to pass a string, et cetera. ```js console.log(braces('a/{b,c}/d', { maxLength: 3 })); //=> throws an error ``` ### options.expand **Type**: `Boolean` **Default**: `undefined` **Description**: Generate an "expanded" brace pattern (alternatively you can use the `braces.expand()` method, which does the same thing). ```js console.log(braces('a/{b,c}/d', { expand: true })); //=> [ 'a/b/d', 'a/c/d' ] ``` ### options.nodupes **Type**: `Boolean` **Default**: `undefined` **Description**: Remove duplicates from the returned array. ### options.rangeLimit **Type**: `Number` **Default**: `1000` **Description**: To prevent malicious patterns from being passed by users, an error is thrown when `braces.expand()` is used or `options.expand` is true and the generated range will exceed the `rangeLimit`. You can customize `options.rangeLimit` or set it to `Inifinity` to disable this altogether. **Examples** ```js // pattern exceeds the "rangeLimit", so it's optimized automatically console.log(braces.expand('{1..1000}')); //=> ['([1-9]|[1-9][0-9]{1,2}|1000)'] // pattern does not exceed "rangeLimit", so it's NOT optimized console.log(braces.expand('{1..100}')); //=> ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100'] ``` ### options.transform **Type**: `Function` **Default**: `undefined` **Description**: Customize range expansion. **Example: Transforming non-numeric values** ```js const alpha = braces.expand('x/{a..e}/y', { transform(value, index) { // When non-numeric values are passed, "value" is a character code. 
return 'foo/' + String.fromCharCode(value) + '-' + index; } }); console.log(alpha); //=> [ 'x/foo/a-0/y', 'x/foo/b-1/y', 'x/foo/c-2/y', 'x/foo/d-3/y', 'x/foo/e-4/y' ] ``` **Example: Transforming numeric values** ```js const numeric = braces.expand('{1..5}', { transform(value) { // when numeric values are passed, "value" is a number return 'foo/' + value * 2; } }); console.log(numeric); //=> [ 'foo/2', 'foo/4', 'foo/6', 'foo/8', 'foo/10' ] ``` ### options.quantifiers **Type**: `Boolean` **Default**: `undefined` **Description**: In regular expressions, quanitifiers can be used to specify how many times a token can be repeated. For example, `a{1,3}` will match the letter `a` one to three times. Unfortunately, regex quantifiers happen to share the same syntax as [Bash lists](#lists) The `quantifiers` option tells braces to detect when [regex quantifiers](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp#quantifiers) are defined in the given pattern, and not to try to expand them as lists. **Examples** ```js const braces = require('braces'); console.log(braces('a/b{1,3}/{x,y,z}')); //=> [ 'a/b(1|3)/(x|y|z)' ] console.log(braces('a/b{1,3}/{x,y,z}', {quantifiers: true})); //=> [ 'a/b{1,3}/(x|y|z)' ] console.log(braces('a/b{1,3}/{x,y,z}', {quantifiers: true, expand: true})); //=> [ 'a/b{1,3}/x', 'a/b{1,3}/y', 'a/b{1,3}/z' ] ``` ### options.unescape **Type**: `Boolean` **Default**: `undefined` **Description**: Strip backslashes that were used for escaping from the result. ## What is "brace expansion"? Brace expansion is a type of parameter expansion that was made popular by unix shells for generating lists of strings, as well as regex-like matching when used alongside wildcards (globs). In addition to "expansion", braces are also used for matching. In other words: * [brace expansion](#brace-expansion) is for generating new lists * [brace matching](#brace-matching) is for filtering existing lists <details> <summary><strong>More about brace expansion</strong> (click to expand)</summary> There are two main types of brace expansion: 1. **lists**: which are defined using comma-separated values inside curly braces: `{a,b,c}` 2. **sequences**: which are defined using a starting value and an ending value, separated by two dots: `a{1..3}b`. Optionally, a third argument may be passed to define a "step" or increment to use: `a{1..100..10}b`. These are also sometimes referred to as "ranges". Here are some example brace patterns to illustrate how they work: **Sets** ``` {a,b,c} => a b c {a,b,c}{1,2} => a1 a2 b1 b2 c1 c2 ``` **Sequences** ``` {1..9} => 1 2 3 4 5 6 7 8 9 {4..-4} => 4 3 2 1 0 -1 -2 -3 -4 {1..20..3} => 1 4 7 10 13 16 19 {a..j} => a b c d e f g h i j {j..a} => j i h g f e d c b a {a..z..3} => a d g j m p s v y ``` **Combination** Sets and sequences can be mixed together or used along with any other strings. ``` {a,b,c}{1..3} => a1 a2 a3 b1 b2 b3 c1 c2 c3 foo/{a,b,c}/bar => foo/a/bar foo/b/bar foo/c/bar ``` The fact that braces can be "expanded" from relatively simple patterns makes them ideal for quickly generating test fixtures, file paths, and similar use cases. ## Brace matching In addition to _expansion_, brace patterns are also useful for performing regular-expression-like matching. 
For example, the pattern `foo/{1..3}/bar` would match any of the following strings:

```
foo/1/bar
foo/2/bar
foo/3/bar
```

But not:

```
baz/1/qux
baz/2/qux
baz/3/qux
```

Braces can also be combined with [glob patterns](https://github.com/jonschlinkert/micromatch) to perform more advanced wildcard matching. For example, the pattern `*/{1..3}/*` would match any of the following strings:

```
foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux
```

## Brace matching pitfalls

Although brace patterns offer a user-friendly way of matching ranges or sets of strings, there are also some major disadvantages and potential risks you should be aware of.

### tldr

**"brace bombs"**

* brace expansion can eat up a huge amount of processing resources
* as brace patterns increase _linearly in size_, the system resources required to expand the pattern increase exponentially
* users can accidentally (or intentionally) exhaust your system's resources resulting in the equivalent of a DoS attack (bonus: no programming knowledge is required!)

For a more detailed explanation with examples, see the [geometric complexity](#geometric-complexity) section.

### The solution

Jump to the [performance section](#performance) to see how Braces solves this problem in comparison to other libraries.

### Geometric complexity

At minimum, brace patterns with sets limited to two elements have quadratic or `O(n^2)` complexity. But the complexity of the algorithm increases exponentially as the number of sets, _and elements per set_, increases, which is `O(n^c)`.

For example, the following sets demonstrate quadratic (`O(n^2)`) complexity:

```
{1,2}{3,4} => (2X2) => 13 14 23 24
{1,2}{3,4}{5,6} => (2X2X2) => 135 136 145 146 235 236 245 246
```

But add an element to a set, and we get an n-fold Cartesian product with `O(n^c)` complexity:

```
{1,2,3}{4,5,6}{7,8,9} => (3X3X3) => 147 148 149 157 158 159 167 168 169 247 248 249 257 258 259 267 268 269 347 348 349 357 358 359 367 368 369
```

Now, imagine how this complexity grows given that each element is an n-tuple:

```
{1..100}{1..100} => (100X100) => 10,000 elements (38.4 kB)
{1..100}{1..100}{1..100} => (100X100X100) => 1,000,000 elements (5.76 MB)
```

Although these examples are clearly contrived, they demonstrate how brace patterns can quickly grow out of control.

**More information**

Interested in learning more about brace expansion?

* [linuxjournal/bash-brace-expansion](http://www.linuxjournal.com/content/bash-brace-expansion)
* [rosettacode/Brace_expansion](https://rosettacode.org/wiki/Brace_expansion)
* [cartesian product](https://en.wikipedia.org/wiki/Cartesian_product)

</details>

## Performance

Braces is not only screaming fast, it's also more accurate than other brace expansion libraries.

### Better algorithms

Fortunately there is a solution to the ["brace bomb" problem](#brace-matching-pitfalls): _don't expand brace patterns into an array when they're used for matching_. Instead, convert the pattern into an optimized regular expression. This is easier said than done, and braces is the only library that does this currently.

**The proof is in the numbers**

Minimatch gets exponentially slower as patterns increase in complexity, braces does not. The following results were generated using `braces()` and `minimatch.braceExpand()`, respectively.
| **Pattern** | **braces** | **[minimatch][]** | | --- | --- | --- | | `{1..9007199254740991}`[^1] | `298 B` (5ms 459μs)| N/A (freezes) | | `{1..1000000000000000}` | `41 B` (1ms 15μs) | N/A (freezes) | | `{1..100000000000000}` | `40 B` (890μs) | N/A (freezes) | | `{1..10000000000000}` | `39 B` (2ms 49μs) | N/A (freezes) | | `{1..1000000000000}` | `38 B` (608μs) | N/A (freezes) | | `{1..100000000000}` | `37 B` (397μs) | N/A (freezes) | | `{1..10000000000}` | `35 B` (983μs) | N/A (freezes) | | `{1..1000000000}` | `34 B` (798μs) | N/A (freezes) | | `{1..100000000}` | `33 B` (733μs) | N/A (freezes) | | `{1..10000000}` | `32 B` (5ms 632μs) | `78.89 MB` (16s 388ms 569μs) | | `{1..1000000}` | `31 B` (1ms 381μs) | `6.89 MB` (1s 496ms 887μs) | | `{1..100000}` | `30 B` (950μs) | `588.89 kB` (146ms 921μs) | | `{1..10000}` | `29 B` (1ms 114μs) | `48.89 kB` (14ms 187μs) | | `{1..1000}` | `28 B` (760μs) | `3.89 kB` (1ms 453μs) | | `{1..100}` | `22 B` (345μs) | `291 B` (196μs) | | `{1..10}` | `10 B` (533μs) | `20 B` (37μs) | | `{1..3}` | `7 B` (190μs) | `5 B` (27μs) | ### Faster algorithms When you need expansion, braces is still much faster. _(the following results were generated using `braces.expand()` and `minimatch.braceExpand()`, respectively)_ | **Pattern** | **braces** | **[minimatch][]** | | --- | --- | --- | | `{1..10000000}` | `78.89 MB` (2s 698ms 642μs) | `78.89 MB` (18s 601ms 974μs) | | `{1..1000000}` | `6.89 MB` (458ms 576μs) | `6.89 MB` (1s 491ms 621μs) | | `{1..100000}` | `588.89 kB` (20ms 728μs) | `588.89 kB` (156ms 919μs) | | `{1..10000}` | `48.89 kB` (2ms 202μs) | `48.89 kB` (13ms 641μs) | | `{1..1000}` | `3.89 kB` (1ms 796μs) | `3.89 kB` (1ms 958μs) | | `{1..100}` | `291 B` (424μs) | `291 B` (211μs) | | `{1..10}` | `20 B` (487μs) | `20 B` (72μs) | | `{1..3}` | `5 B` (166μs) | `5 B` (27μs) | If you'd like to run these comparisons yourself, see [test/support/generate.js](test/support/generate.js). ## Benchmarks ### Running benchmarks Install dev dependencies: ```bash npm i -d && npm benchmark ``` ### Latest results Braces is more accurate, without sacrificing performance. ```bash # range (expanded) braces x 29,040 ops/sec ±3.69% (91 runs sampled)) minimatch x 4,735 ops/sec ±1.28% (90 runs sampled) # range (optimized for regex) braces x 382,878 ops/sec ±0.56% (94 runs sampled) minimatch x 1,040 ops/sec ±0.44% (93 runs sampled) # nested ranges (expanded) braces x 19,744 ops/sec ±2.27% (92 runs sampled)) minimatch x 4,579 ops/sec ±0.50% (93 runs sampled) # nested ranges (optimized for regex) braces x 246,019 ops/sec ±2.02% (93 runs sampled) minimatch x 1,028 ops/sec ±0.39% (94 runs sampled) # set (expanded) braces x 138,641 ops/sec ±0.53% (95 runs sampled) minimatch x 219,582 ops/sec ±0.98% (94 runs sampled) # set (optimized for regex) braces x 388,408 ops/sec ±0.41% (95 runs sampled) minimatch x 44,724 ops/sec ±0.91% (89 runs sampled) # nested sets (expanded) braces x 84,966 ops/sec ±0.48% (94 runs sampled) minimatch x 140,720 ops/sec ±0.37% (95 runs sampled) # nested sets (optimized for regex) braces x 263,340 ops/sec ±2.06% (92 runs sampled) minimatch x 28,714 ops/sec ±0.40% (90 runs sampled) ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. 
You can install dependencies and run tests with the following command:

```sh
$ npm install && npm test
```

</details>

<details>
<summary><strong>Building docs</strong></summary>

_(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_

To generate the readme, run the following command:

```sh
$ npm install -g verbose/verb#dev verb-generate-readme && verb
```

</details>

### Contributors

| **Commits** | **Contributor** |
| --- | --- |
| 197 | [jonschlinkert](https://github.com/jonschlinkert) |
| 4 | [doowb](https://github.com/doowb) |
| 1 | [es128](https://github.com/es128) |
| 1 | [eush77](https://github.com/eush77) |
| 1 | [hemanth](https://github.com/hemanth) |
| 1 | [wtgtybhertgeghgtwtg](https://github.com/wtgtybhertgeghgtwtg) |

### Author

**Jon Schlinkert**

* [GitHub Profile](https://github.com/jonschlinkert)
* [Twitter Profile](https://twitter.com/jonschlinkert)
* [LinkedIn Profile](https://linkedin.com/in/jonschlinkert)

### License

Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE).

***

_This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on April 08, 2019._

<table><thead>
  <tr>
    <th>Linux</th>
    <th>OS X</th>
    <th>Windows</th>
    <th>Coverage</th>
    <th>Downloads</th>
  </tr>
</thead><tbody><tr>
  <td colspan="2" align="center">
    <a href="https://travis-ci.org/kaelzhang/node-ignore">
    <img src="https://travis-ci.org/kaelzhang/node-ignore.svg?branch=master" alt="Build Status" /></a>
  </td>
  <td align="center">
    <a href="https://ci.appveyor.com/project/kaelzhang/node-ignore">
    <img src="https://ci.appveyor.com/api/projects/status/github/kaelzhang/node-ignore?branch=master&svg=true" alt="Windows Build Status" /></a>
  </td>
  <td align="center">
    <a href="https://codecov.io/gh/kaelzhang/node-ignore">
    <img src="https://codecov.io/gh/kaelzhang/node-ignore/branch/master/graph/badge.svg" alt="Coverage Status" /></a>
  </td>
  <td align="center">
    <a href="https://www.npmjs.org/package/ignore">
    <img src="http://img.shields.io/npm/dm/ignore.svg" alt="npm module downloads per month" /></a>
  </td>
</tr></tbody></table>

# ignore

`ignore` is a manager, filter and parser implemented in pure JavaScript according to the .gitignore [spec](http://git-scm.com/docs/gitignore).

Note that [`minimatch`](https://www.npmjs.org/package/minimatch) does not follow the gitignore rules. To filter filenames according to a .gitignore file, I recommend this module.

##### Tested on

- Linux + Node: `0.8` - `7.x`
- Windows + Node: `0.10` - `7.x`; node < `0.10` is not tested due to the lack of AppVeyor support.

In practice, `ignore` does not depend on any particular version of node.

Since `4.0.0`, ignore no longer supports `node < 6` by default; to use it in node < 6, `require('ignore/legacy')`. For details, see the [CHANGELOG](https://github.com/kaelzhang/node-ignore/blob/master/CHANGELOG.md).

## Table Of Main Contents

- [Usage](#usage)
- [`Pathname` Conventions](#pathname-conventions)
- [Guide for 2.x -> 3.x](#upgrade-2x---3x)
- [Guide for 3.x -> 4.x](#upgrade-3x---4x)
- See Also:
  - [`glob-gitignore`](https://www.npmjs.com/package/glob-gitignore) matches files using patterns and filters them according to gitignore rules.
## Usage

```js
import ignore from 'ignore'
const ig = ignore().add(['.abc/*', '!.abc/d/'])
```

### Filter the given paths

```js
const paths = [
  '.abc/a.js',    // filtered out
  '.abc/d/e.js'   // included
]

ig.filter(paths)        // ['.abc/d/e.js']
ig.ignores('.abc/a.js') // true
```

### As the filter function

```js
paths.filter(ig.createFilter()); // ['.abc/d/e.js']
```

### Win32 paths will be handled

```js
ig.filter(['.abc\\a.js', '.abc\\d\\e.js'])
// if the code above runs on windows, the result will be
// ['.abc\\d\\e.js']
```

## Why another ignore?

- `ignore` is a standalone module, and is much simpler so that it can easily work with other programs, unlike [isaacs](https://npmjs.org/~isaacs)'s [fstream-ignore](https://npmjs.org/package/fstream-ignore) which must work with the modules of the fstream family.

- `ignore` only contains utility methods to filter paths according to the specified ignore rules, so
  - `ignore` never tries to find ignore rules by traversing directories or fetching them from git configurations.
  - `ignore` doesn't care about sub-modules of git projects.

- Exactly according to the [gitignore man page](http://git-scm.com/docs/gitignore), it fixes some known matching issues of fstream-ignore, such as:
  - '`/*.js`' should only match '`a.js`', but not '`abc/a.js`'.
  - '`**/foo`' should match '`foo`' anywhere.
  - Prevent re-including a file if a parent directory of that file is excluded.
  - Handle trailing whitespaces:
    - `'a '`(one space) should not match `'a  '`(two spaces).
    - `'a \ '` matches `'a  '`
  - All test cases are verified with the result of `git check-ignore`.

# Methods

## .add(pattern: string | Ignore): this
## .add(patterns: Array<string | Ignore>): this

- **pattern** `String | Ignore` An ignore pattern string, or the `Ignore` instance
- **patterns** `Array<String | Ignore>` Array of ignore patterns.

Adds a rule or several rules to the current manager.

Returns `this`

Notice that a line starting with `'#'` (hash) is treated as a comment. Put a backslash (`'\'`) in front of the first hash for patterns that begin with a hash, if you want to ignore a file with a hash at the beginning of the filename.

```js
ignore().add('#abc').ignores('#abc')  // false
ignore().add('\#abc').ignores('#abc') // true
```

`pattern` could either be a single ignore pattern or a string of multiple ignore patterns, which means we could just `ignore().add()` the content of an ignore file:

```js
ignore()
  .add(fs.readFileSync(filenameOfGitignore).toString())
  .filter(filenames)
```

`pattern` could also be an `ignore` instance, so that we could easily inherit the rules of another `Ignore` instance.

## <strike>.addIgnoreFile(path)</strike>

REMOVED in `3.x` for now.

To upgrade `ignore@2.x` up to `3.x`, use

```js
import fs from 'fs'

if (fs.existsSync(filename)) {
  ignore().add(fs.readFileSync(filename).toString())
}
```

instead.

## .filter(paths: Array<Pathname>): Array<Pathname>

```ts
type Pathname = string
```

Filters the given array of pathnames, and returns the filtered array.

- **paths** `Array.<Pathname>` The array of `pathname`s to be filtered.

### `Pathname` Conventions:

#### 1. `Pathname` should be a `path.relative()`d pathname

`Pathname` should be a string that has been `path.join()`ed, or the return value of `path.relative()` to the current directory.

```js
// WRONG
ig.ignores('./abc')

// WRONG, for it will never happen.
// If the gitignore rule locates at the root directory,
// `'/abc'` should be changed to `'abc'`.
// ```
// path.relative('/', '/abc') -> 'abc'
// ```
ig.ignores('/abc')

// Right
ig.ignores('abc')

// Right
ig.ignores(path.join('./abc')) // path.join('./abc') -> 'abc'
```

In other words, each `Pathname` here should be a relative path to the directory of the gitignore rules.

Suppose the dir structure is:

```
/path/to/your/repo
|-- a
|   |-- a.js
|-- .b
|-- .c
    |-- .DS_store
```

Then the `paths` might be like this:

```js
[
  'a/a.js',
  '.b',
  '.c/.DS_store'
]
```

Usually, you can use [`glob`](http://npmjs.org/package/glob) with `option.mark = true` to fetch the structure of the current directory:

```js
import glob from 'glob'

glob('**', {
  // Adds a / character to directory matches.
  mark: true
}, (err, files) => {
  if (err) {
    return console.error(err)
  }

  let filtered = ignore().add(patterns).filter(files)
  console.log(filtered)
})
```

#### 2. filenames and dirnames

`node-ignore` performs NO `fs.stat` calls during path matching, so for the example below:

```js
ig.add('config/')

// `ig` does NOT know if 'config' is a normal file, directory or something
ig.ignores('config')  // And it returns `false`

ig.ignores('config/') // returns `true`
```

This is especially important to understand for people who develop libraries based on `node-ignore`.

## .ignores(pathname: Pathname): boolean

> new in 3.2.0

Returns `Boolean` whether `pathname` should be ignored.

```js
ig.ignores('.abc/a.js') // true
```

## .createFilter()

Creates a filter function which can filter an array of paths with `Array.prototype.filter`.

Returns `function(path)` the filter function.

## `options.ignorecase` since 4.0.0

Similar to the `core.ignorecase` option of [git-config](https://git-scm.com/docs/git-config), `node-ignore` is case-insensitive if `options.ignorecase` is set to `true` (the default value), and case-sensitive otherwise.

```js
const ig = ignore({
  ignorecase: false
})

ig.add('*.png')

ig.ignores('*.PNG') // false
```

****

# Upgrade Guide

## Upgrade 2.x -> 3.x

- All `options` of 2.x are unnecessary and removed, so just remove them.
- The `ignore()` instance is no longer an [`EventEmitter`](nodejs.org/api/events.html), and all events are unnecessary and removed.
- `.addIgnoreFile()` is removed, see the [.addIgnoreFile](#addignorefilepath) section for details.

## Upgrade 3.x -> 4.x

Since `4.0.0`, `ignore` no longer supports node < 6. To use `ignore` in node < 6:

```js
var ignore = require('ignore/legacy')
```

****

# Collaborators

- [@whitecolor](https://github.com/whitecolor) *Alex*
- [@SamyPesse](https://github.com/SamyPesse) *Samy Pessé*
- [@azproduction](https://github.com/azproduction) *Mikhail Davydov*
- [@TrySound](https://github.com/TrySound) *Bogdan Chadkin*
- [@JanMattner](https://github.com/JanMattner) *Jan Mattner*
- [@ntwb](https://github.com/ntwb) *Stephen Edgar*
- [@kasperisager](https://github.com/kasperisager) *Kasper Isager*
- [@sandersn](https://github.com/sandersn) *Nathan Shively-Sanders*

# Glob

Match files using the patterns the shell uses, like stars and stuff.

[![Build Status](https://travis-ci.org/isaacs/node-glob.svg?branch=master)](https://travis-ci.org/isaacs/node-glob/) [![Build Status](https://ci.appveyor.com/api/projects/status/kd7f3yftf7unxlsx?svg=true)](https://ci.appveyor.com/project/isaacs/node-glob) [![Coverage Status](https://coveralls.io/repos/isaacs/node-glob/badge.svg?branch=master&service=github)](https://coveralls.io/github/isaacs/node-glob?branch=master)

This is a glob implementation in JavaScript. It uses the `minimatch` library to do its matching.
![a fun cartoon logo made of glob characters](logo/glob.png) ## Usage Install with npm ``` npm i glob ``` ```javascript var glob = require("glob") // options is optional glob("**/*.js", options, function (er, files) { // files is an array of filenames. // If the `nonull` option is set, and nothing // was found, then files is ["**/*.js"] // er is an error object or null. }) ``` ## Glob Primer "Globs" are the patterns you type when you do stuff like `ls *.js` on the command line, or put `build/*` in a `.gitignore` file. Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with `{` and end with `}`, with any number of comma-delimited sections within. Braced sections may contain slash characters, so `a{/b/c,bcd}` would expand into `a/b/c` and `abcd`. The following characters have special magic meaning when used in a path portion: * `*` Matches 0 or more characters in a single path portion * `?` Matches 1 character * `[...]` Matches a range of characters, similar to a RegExp range. If the first character of the range is `!` or `^` then it matches any character not in the range. * `!(pattern|pattern|pattern)` Matches anything that does not match any of the patterns provided. * `?(pattern|pattern|pattern)` Matches zero or one occurrence of the patterns provided. * `+(pattern|pattern|pattern)` Matches one or more occurrences of the patterns provided. * `*(a|b|c)` Matches zero or more occurrences of the patterns provided * `@(pattern|pat*|pat?erN)` Matches exactly one of the patterns provided * `**` If a "globstar" is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories. ### Dots If a file or directory path portion has a `.` as the first character, then it will not match any glob pattern unless that pattern's corresponding path part also has a `.` as its first character. For example, the pattern `a/.*/c` would match the file at `a/.b/c`. However the pattern `a/*/c` would not, because `*` does not start with a dot character. You can make glob treat dots as normal characters by setting `dot:true` in the options. ### Basename Matching If you set `matchBase:true` in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, `*.js` would match `test/simple/basic.js`. ### Empty Sets If no matching files are found, then an empty array is returned. This differs from the shell, where the pattern itself is returned. For example: $ echo a*s*d*f a*s*d*f To get the bash-style behavior, set the `nonull:true` in the options. ### See Also: * `man sh` * `man bash` (Search for "Pattern Matching") * `man 3 fnmatch` * `man 5 gitignore` * [minimatch documentation](https://github.com/isaacs/minimatch) ## glob.hasMagic(pattern, [options]) Returns `true` if there are any special characters in the pattern, and `false` otherwise. Note that the options affect the results. If `noext:true` is set in the options object, then `+(a|b)` will not be considered a magic pattern. If the pattern has a brace expansion, like `a/{b/c,x/y}` then that is considered magical, unless `nobrace:true` is set in the options. ## glob(pattern, [options], cb) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * `cb` `{Function}` * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Perform an asynchronous glob search. 
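To make the asynchronous form above concrete, here is a minimal sketch; the `src/**/*.js` pattern and the `build/**` ignore rule are placeholder values, and the `nodir`/`ignore` options are described in the options list below:

```javascript
var glob = require("glob")

// Find all .js files under src/, excluding anything under build/.
// `nodir: true` keeps directories out of the results; `ignore` patterns
// are always matched in dot:true mode.
glob("src/**/*.js", { nodir: true, ignore: ["build/**"] }, function (er, files) {
  if (er) throw er
  // `files` is an array of matching filenames (sorted unless `nosort` is set)
  console.log(files)
})
```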
## glob.sync(pattern, [options]) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * return: `{Array<String>}` filenames found matching the pattern Perform a synchronous glob search. ## Class: glob.Glob Create a Glob object by instantiating the `glob.Glob` class. ```javascript var Glob = require("glob").Glob var mg = new Glob(pattern, options, cb) ``` It's an EventEmitter, and starts walking the filesystem to find matches immediately. ### new glob.Glob(pattern, [options], [cb]) * `pattern` `{String}` pattern to search for * `options` `{Object}` * `cb` `{Function}` Called when an error occurs, or matches are found * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Note that if the `sync` flag is set in the options, then matches will be immediately available on the `g.found` member. ### Properties * `minimatch` The minimatch object that the glob uses. * `options` The options object passed in. * `aborted` Boolean which is set to true when calling `abort()`. There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls. * `cache` Convenience object. Each field has the following possible values: * `false` - Path does not exist * `true` - Path exists * `'FILE'` - Path exists, and is not a directory * `'DIR'` - Path exists, and is a directory * `[file, entries, ...]` - Path exists, is a directory, and the array value is the results of `fs.readdir` * `statCache` Cache of `fs.stat` results, to prevent statting the same path multiple times. * `symlinks` A record of which paths are symbolic links, which is relevant in resolving `**` patterns. * `realpathCache` An optional object which is passed to `fs.realpath` to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used. ### Events * `end` When the matching is finished, this is emitted with all the matches found. If the `nonull` option is set, and no match was found, then the `matches` list contains the original pattern. The matches are sorted, unless the `nosort` flag is set. * `match` Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath. * `error` Emitted when an unexpected error is encountered, or whenever any fs error occurs if `options.strict` is set. * `abort` When `abort()` is called, this event is raised. ### Methods * `pause` Temporarily stop the search * `resume` Resume the search * `abort` Stop the search forever ### Options All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications. All options are false by default, unless otherwise noted. All options are added to the Glob object, as well. If you are running many `glob` operations, you can pass a Glob object as the `options` argument to a subsequent operation to shortcut some `stat` and `readdir` calls. At the very least, you may pass in shared `symlinks`, `statCache`, `realpathCache`, and `cache` options, so that parallel glob operations will be sped up by sharing information about the filesystem. * `cwd` The current working directory in which to search. Defaults to `process.cwd()`. * `root` The place where patterns starting with `/` will be mounted onto. Defaults to `path.resolve(options.cwd, "/")` (`/` on Unix systems, and `C:\` or some such on Windows.) 
* `dot` Include `.dot` files in normal matches and `globstar` matches. Note that an explicit dot in a portion of the pattern will always match dot files. * `nomount` By default, a pattern starting with a forward-slash will be "mounted" onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior. * `mark` Add a `/` character to directory matches. Note that this requires additional stat calls. * `nosort` Don't sort the results. * `stat` Set to true to stat *all* results. This reduces performance somewhat, and is completely unnecessary, unless `readdir` is presumed to be an untrustworthy indicator of file existence. * `silent` When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the `silent` option to true to suppress these warnings. * `strict` When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the `strict` option to raise an error in these cases. * `cache` See `cache` property above. Pass in a previously generated cache object to save some fs calls. * `statCache` A cache of results of filesystem information, to prevent unnecessary stat calls. While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See "Race Conditions" below.) * `symlinks` A cache of known symbolic links. You may pass in a previously generated `symlinks` object to save `lstat` calls when resolving `**` matches. * `sync` DEPRECATED: use `glob.sync(pattern, opts)` instead. * `nounique` In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior. * `nonull` Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3). * `debug` Set to enable debug logging in minimatch and glob. * `nobrace` Do not expand `{a,b}` and `{1..3}` brace sets. * `noglobstar` Do not match `**` against multiple filenames. (Ie, treat it as a normal `*` instead.) * `noext` Do not match `+(a|b)` "extglob" patterns. * `nocase` Perform a case-insensitive match. Note: on case-insensitive filesystems, non-magic patterns will match by default, since `stat` and `readdir` will not raise errors. * `matchBase` Perform a basename-only match if the pattern does not contain any slash characters. That is, `*.js` would be treated as equivalent to `**/*.js`, matching all js files in all directories. * `nodir` Do not match directories, only files. (Note: to match *only* directories, simply put a `/` at the end of the pattern.) * `ignore` Add a pattern or an array of glob patterns to exclude matches. Note: `ignore` patterns are *always* in `dot:true` mode, regardless of any other settings. * `follow` Follow symlinked directories when expanding `**` patterns. Note that this can result in a lot of duplicate references in the presence of cyclic links. * `realpath` Set to true to call `fs.realpath` on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink) * `absolute` Set to true to always receive absolute paths for matched files. Unlike `realpath`, this also affects the values returned in the `match` event. 
* `fs` File-system object with Node's `fs` API. By default, the built-in `fs` module will be used. Set to a volume provided by a library like `memfs` to avoid using the "real" file-system. ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.3, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. Note that symlinked directories are not crawled as part of a `**`, though their contents may match against subsequent portions of the pattern. This prevents infinite loops and duplicates and the like. If an escaped pattern has no matches, and the `nonull` flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, `glob.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. ### Comments and Negation Previously, this module let you mark a pattern as a "comment" if it started with a `#` character, or a "negated" pattern if it started with a `!` character. These options were deprecated in version 5, and removed in version 6. To specify things that should not match, use the `ignore` option. ## Windows **Please only use forward-slashes in glob expressions.** Though windows uses either `/` or `\` as its path separator, only `/` characters are used by this glob implementation. You must use forward-slashes **only** in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators. Results from absolute patterns such as `/foo/*` are mounted onto the root setting using `path.join`. On windows, this will by default result in `/foo/*` matching `C:\foo\bar.txt`. ## Race Conditions Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such. As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result. As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls. Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem. ## Glob Logo Glob's logo was created by [Tanya Brassie](http://tanyabrassie.com/). Logo files can be found [here](https://github.com/isaacs/node-glob/tree/master/logo). The logo is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). 
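As a practical note on the cache-sharing options described above, the sketch below reuses one search's filesystem caches for a follow-up search by passing the first `Glob` object as the options of the second call. The patterns and the `first` variable are placeholder values, and the race-condition caveat above still applies if the filesystem may change between calls:

```javascript
var glob = require("glob")
var Glob = require("glob").Glob

// First search: keep the Glob instance so its caches can be reused.
var first = new Glob("lib/**/*.js", { nodir: true }, function (er, jsFiles) {
  if (er) throw er
  // Second search: passing the previous Glob object as the options argument
  // shares its cache, statCache, symlinks and realpathCache, shortcutting
  // some stat and readdir calls.
  glob("lib/**/*.json", first, function (er, jsonFiles) {
    if (er) throw er
    console.log(jsFiles.length, jsonFiles.length)
  })
})
```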
## Contributing Any change to behavior (including bugfixes) must come with a test. Patches that fail tests or reduce performance will be rejected. ``` # to run tests npm test # to re-generate test fixtures npm run test-regen # to benchmark against bash/zsh npm run bench # to profile javascript npm run prof ``` ![](oh-my-glob.gif) # axios // core The modules found in `core/` should be modules that are specific to the domain logic of axios. These modules would most likely not make sense to be consumed outside of the axios module, as their logic is too specific. Some examples of core modules are: - Dispatching requests - Managing interceptors - Handling config # require-main-filename [![Build Status](https://travis-ci.org/yargs/require-main-filename.png)](https://travis-ci.org/yargs/require-main-filename) [![Coverage Status](https://coveralls.io/repos/yargs/require-main-filename/badge.svg?branch=master)](https://coveralls.io/r/yargs/require-main-filename?branch=master) [![NPM version](https://img.shields.io/npm/v/require-main-filename.svg)](https://www.npmjs.com/package/require-main-filename) `require.main.filename` is great for figuring out the entry point for the current application. This can be combined with a module like [pkg-conf](https://www.npmjs.com/package/pkg-conf) to, _as if by magic_, load top-level configuration. Unfortunately, `require.main.filename` sometimes fails when an application is executed with an alternative process manager, e.g., [iisnode](https://github.com/tjanczuk/iisnode). `require-main-filename` is a shim that addresses this problem. ## Usage ```js var main = require('require-main-filename')() // use main as an alternative to require.main.filename. ``` ## License ISC # ESLint Scope ESLint Scope is the [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) scope analyzer used in ESLint. It is a fork of [escope](http://github.com/estools/escope). ## Usage Install: ``` npm i eslint-scope --save ``` Example: ```js var eslintScope = require('eslint-scope'); var espree = require('espree'); var estraverse = require('estraverse'); var ast = espree.parse(code); var scopeManager = eslintScope.analyze(ast); var currentScope = scopeManager.acquire(ast); // global scope estraverse.traverse(ast, { enter: function(node, parent) { // do stuff if (/Function/.test(node.type)) { currentScope = scopeManager.acquire(node); // get current function scope } }, leave: function(node, parent) { if (/Function/.test(node.type)) { currentScope = currentScope.upper; // set to parent scope } // do stuff } }); ``` ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/eslint-scope/issues). ## Build Commands * `npm test` - run all linting and tests * `npm run lint` - run all linting ## License ESLint Scope is licensed under a permissive BSD 2-clause license. 
# Borsh JS [![Project license](https://img.shields.io/badge/license-Apache2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Project license](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT) [![Discord](https://img.shields.io/discord/490367152054992913?label=discord)](https://discord.gg/Vyp7ETM) [![Travis status](https://travis-ci.com/near/borsh.svg?branch=master)](https://travis-ci.com/near/borsh-js) [![NPM version](https://img.shields.io/npm/v/borsh.svg?style=flat-square)](https://npmjs.com/borsh) [![Size on NPM](https://img.shields.io/bundlephobia/minzip/borsh.svg?style=flat-square)](https://npmjs.com/borsh) **Borsh JS** is an implementation of the [Borsh] binary serialization format for JavaScript and TypeScript projects. Borsh stands for _Binary Object Representation Serializer for Hashing_. It is meant to be used in security-critical projects as it prioritizes consistency, safety, speed, and comes with a strict specification. ## Examples ### Serializing an object ```javascript const value = new Test({ x: 255, y: 20, z: '123', q: [1, 2, 3] }); const schema = new Map([[Test, { kind: 'struct', fields: [['x', 'u8'], ['y', 'u64'], ['z', 'string'], ['q', [3]]] }]]); const buffer = borsh.serialize(schema, value); ``` ### Deserializing an object ```javascript const newValue = borsh.deserialize(schema, Test, buffer); ``` ## Type Mappings | Borsh | TypeScript | |-----------------------|----------------| | `u8` integer | `number` | | `u16` integer | `number` | | `u32` integer | `number` | | `u64` integer | `BN` | | `u128` integer | `BN` | | `u256` integer | `BN` | | `u512` integer | `BN` | | `f32` float | N/A | | `f64` float | N/A | | fixed-size byte array | `Uint8Array` | | UTF-8 string | `string` | | option | `null` or type | | map | N/A | | set | N/A | | structs | `any` | ## Contributing Install dependencies: ```bash yarn install ``` Continuously build with: ```bash yarn dev ``` Run tests: ```bash yarn test ``` Run linter ```bash yarn lint ``` ## Publish Prepare `dist` version by running: ```bash yarn build ``` When publishing to npm use [np](https://github.com/sindresorhus/np). # License This repository is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See [LICENSE-MIT](LICENSE-MIT.txt) and [LICENSE-APACHE](LICENSE-APACHE) for details. [Borsh]: https://borsh.io # <img src="docs_app/assets/Rx_Logo_S.png" alt="RxJS Logo" width="86" height="86"> RxJS: Reactive Extensions For JavaScript [![CircleCI](https://circleci.com/gh/ReactiveX/rxjs/tree/6.x.svg?style=svg)](https://circleci.com/gh/ReactiveX/rxjs/tree/6.x) [![npm version](https://badge.fury.io/js/%40reactivex%2Frxjs.svg)](http://badge.fury.io/js/%40reactivex%2Frxjs) [![Join the chat at https://gitter.im/Reactive-Extensions/RxJS](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/Reactive-Extensions/RxJS?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) # RxJS 6 Stable ### MIGRATION AND RELEASE INFORMATION: Find out how to update to v6, **automatically update your TypeScript code**, and more! - [Current home is MIGRATION.md](./docs_app/content/guide/v6/migration.md) ### FOR V 5.X PLEASE GO TO [THE 5.0 BRANCH](https://github.com/ReactiveX/rxjs/tree/5.x) Reactive Extensions Library for JavaScript. This is a rewrite of [Reactive-Extensions/RxJS](https://github.com/Reactive-Extensions/RxJS) and is the latest production-ready version of RxJS. 
This rewrite is meant to have better performance, better modularity, better debuggable call stacks, while staying mostly backwards compatible, with some breaking changes that reduce the API surface. [Apache 2.0 License](LICENSE.txt) - [Code of Conduct](CODE_OF_CONDUCT.md) - [Contribution Guidelines](CONTRIBUTING.md) - [Maintainer Guidelines](doc_app/content/maintainer-guidelines.md) - [API Documentation](https://rxjs.dev/) ## Versions In This Repository - [master](https://github.com/ReactiveX/rxjs/commits/master) - This is all of the current, unreleased work, which is against v6 of RxJS right now - [stable](https://github.com/ReactiveX/rxjs/commits/stable) - This is the branch for the latest version you'd get if you do `npm install rxjs` ## Important By contributing or commenting on issues in this repository, whether you've read them or not, you're agreeing to the [Contributor Code of Conduct](CODE_OF_CONDUCT.md). Much like traffic laws, ignorance doesn't grant you immunity. ## Installation and Usage ### ES6 via npm ```sh npm install rxjs ``` It's recommended to pull in the Observable creation methods you need directly from `'rxjs'` as shown below with `range`. And you can pull in any operator you need from one spot, under `'rxjs/operators'`. ```ts import { range } from "rxjs"; import { map, filter } from "rxjs/operators"; range(1, 200) .pipe( filter(x => x % 2 === 1), map(x => x + x) ) .subscribe(x => console.log(x)); ``` Here, we're using the built-in `pipe` method on Observables to combine operators. See [pipeable operators](https://github.com/ReactiveX/rxjs/blob/master/doc/pipeable-operators.md) for more information. ### CommonJS via npm To install this library for CommonJS (CJS) usage, use the following command: ```sh npm install rxjs ``` (Note: destructuring available in Node 8+) ```js const { range } = require('rxjs'); const { map, filter } = require('rxjs/operators'); range(1, 200).pipe( filter(x => x % 2 === 1), map(x => x + x) ).subscribe(x => console.log(x)); ``` ### CDN For CDN, you can use [unpkg](https://unpkg.com/): https://unpkg.com/rxjs/bundles/rxjs.umd.min.js The global namespace for rxjs is `rxjs`: ```js const { range } = rxjs; const { map, filter } = rxjs.operators; range(1, 200) .pipe( filter(x => x % 2 === 1), map(x => x + x) ) .subscribe(x => console.log(x)); ``` ## Goals - Smaller overall bundles sizes - Provide better performance than preceding versions of RxJS - To model/follow the [Observable Spec Proposal](https://github.com/zenparsing/es-observable) to the observable - Provide more modular file structure in a variety of formats - Provide more debuggable call stacks than preceding versions of RxJS ## Building/Testing - `npm run build_all` - builds everything - `npm test` - runs tests - `npm run test_no_cache` - run test with `ts-node` set to false ## Performance Tests Run `npm run build_perf` or `npm run perf` to run the performance tests with `protractor`. Run `npm run perf_micro [operator]` to run micro performance test benchmarking operator. ## Adding documentation We appreciate all contributions to the documentation of any type. All of the information needed to get the docs app up and running locally as well as how to contribute can be found in the [documentation directory](./docs_app). ## Generating PNG marble diagrams The script `npm run tests2png` requires some native packages installed locally: `imagemagick`, `graphicsmagick`, and `ghostscript`. 
For Mac OS X with [Homebrew](http://brew.sh/): - `brew install imagemagick` - `brew install graphicsmagick` - `brew install ghostscript` - You may need to install the Ghostscript fonts manually: - Download the tarball from the [gs-fonts project](https://sourceforge.net/projects/gs-fonts) - `mkdir -p /usr/local/share/ghostscript && tar zxvf /path/to/ghostscript-fonts.tar.gz -C /usr/local/share/ghostscript` For Debian Linux: - `sudo add-apt-repository ppa:dhor/myway` - `apt-get install imagemagick` - `apt-get install graphicsmagick` - `apt-get install ghostscript` For Windows and other Operating Systems, check the download instructions here: - http://imagemagick.org - http://www.graphicsmagick.org - http://www.ghostscript.com/ aproba ====== A ridiculously light-weight function argument validator ``` var validate = require("aproba") function myfunc(a, b, c) { // `a` must be a string, `b` a number, `c` a function validate('SNF', arguments) // [a,b,c] is also valid } myfunc('test', 23, function () {}) // ok myfunc(123, 23, function () {}) // type error myfunc('test', 23) // missing arg error myfunc('test', 23, function () {}, true) // too many args error ``` Valid types are: | type | description | :--: | :---------- | * | matches any type | A | `Array.isArray` OR an `arguments` object | S | typeof == string | N | typeof == number | F | typeof == function | O | typeof == object and not type A and not type E | B | typeof == boolean | E | `instanceof Error` OR `null` **(special: see below)** | Z | == `null` Validation failures throw one of three exception types, distinguished by a `code` property of `EMISSINGARG`, `EINVALIDTYPE` or `ETOOMANYARGS`. If you pass in an invalid type then it will throw with a code of `EUNKNOWNTYPE`. If an **error** argument is found and is not null then the remaining arguments are optional. That is, if you say `ESO` then that's like using a non-magical `E` in: `E|ESO|ZSO`. ### But I have optional arguments?! You can provide more than one signature by separating them with pipes `|`. If any signature matches the arguments then they'll be considered valid. So for example, say you wanted to write a signature for `fs.createWriteStream`. The docs for it describe it thusly: ``` fs.createWriteStream(path[, options]) ``` This would be a signature of `SO|S`. That is, a string and and object, or just a string. Now, if you read the full `fs` docs, you'll see that actually path can ALSO be a buffer. And options can be a string, that is: ``` path <String> | <Buffer> options <String> | <Object> ``` To reproduce this you have to fully enumerate all of the possible combinations and that implies a signature of `SO|SS|OO|OS|S|O`. The awkwardness is a feature: It reminds you of the complexity you're adding to your API when you do this sort of thing. ### Browser support This has no dependencies and should work in browsers, though you'll have noisier stack traces. ### Why this exists I wanted a very simple argument validator. It needed to do two things: 1. Be more concise and easier to use than assertions 2. Not encourage an infinite bikeshed of DSLs This is why types are specified by a single character and there's no such thing as an optional argument. This is not intended to validate user data. This is specifically about asserting the interface of your functions. If you need greater validation, I encourage you to write them by hand or look elsewhere. # is-ci Returns `true` if the current environment is a Continuous Integration server. 
Please [open an issue](https://github.com/watson/is-ci/issues) if your CI server isn't properly detected :)

[![npm](https://img.shields.io/npm/v/is-ci.svg)](https://www.npmjs.com/package/is-ci) [![Build status](https://travis-ci.org/watson/is-ci.svg?branch=master)](https://travis-ci.org/watson/is-ci) [![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg?style=flat)](https://github.com/feross/standard)

## Installation

```bash
npm install is-ci --save
```

## Programmatic Usage

```js
const isCI = require('is-ci')

if (isCI) {
  console.log('The code is running on a CI server')
}
```

## CLI Usage

For CLI usage you need to have the `is-ci` executable in your `PATH`. There are a few ways to do that:

- Either install the module globally using `npm install is-ci -g`
- Or add the module as a dependency to your app in which case it can be used inside your package.json scripts as is
- Or provide the full path to the executable, e.g. `./node_modules/.bin/is-ci`

```bash
is-ci && echo "This is a CI server"
```

## Supported CI tools

Refer to the [ci-info](https://github.com/watson/ci-info#supported-ci-tools) docs for all supported CIs.

## License

[MIT](LICENSE)

# once

Only call a function once.

## usage

```javascript
var once = require('once')

function load (file, cb) {
  cb = once(cb)
  loader.load('file')
  loader.once('load', cb)
  loader.once('error', cb)
}
```

Or add to the Function.prototype in a responsible way:

```javascript
// only has to be done once
require('once').proto()

function load (file, cb) {
  cb = cb.once()
  loader.load('file')
  loader.once('load', cb)
  loader.once('error', cb)
}
```

Ironically, the prototype feature makes this module twice as complicated as necessary.

To check whether your function has been called, use `fn.called`. Once the function is called for the first time the return value of the original function is saved in `fn.value` and subsequent calls will continue to return this value.

```javascript
var once = require('once')

function load (cb) {
  cb = once(cb)
  var stream = createStream()
  stream.once('data', cb)
  stream.once('end', function () {
    if (!cb.called) cb(new Error('not found'))
  })
}
```

## `once.strict(func)`

Throw an error if the function is called twice.

Some functions are expected to be called only once. Using `once` for them would potentially hide logical errors.

In the example below, the `greet` function has to call the callback only once:

```javascript
function greet (name, cb) {
  // return is missing from the if statement
  // when no name is passed, the callback is called twice
  if (!name) cb('Hello anonymous')
  cb('Hello ' + name)
}

function log (msg) {
  console.log(msg)
}

// this will print 'Hello anonymous' but the logical error will be missed
greet(null, once(log))

// once.strict will print 'Hello anonymous' and throw an error when the callback is called a second time
greet(null, once.strict(log))
```

util-deprecate
==============

### The Node.js `util.deprecate()` function with browser support

In Node.js, this module simply re-exports the `util.deprecate()` function.

In the web browser (i.e. via browserify), a browser-specific implementation of the `util.deprecate()` function is used.

## API

A `deprecate()` function is the only thing exposed by this module.
``` javascript // setup: exports.foo = deprecate(foo, 'foo() is deprecated, use bar() instead'); // users see: foo(); // foo() is deprecated, use bar() instead foo(); foo(); ``` ## License (The MIT License) Copyright (c) 2014 Nathan Rajlich <nathan@tootallnate.net> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. [![SWUbanner](https://raw.githubusercontent.com/vshymanskyy/StandWithUkraine/main/banner2-direct.svg)](https://github.com/vshymanskyy/StandWithUkraine/blob/main/docs/README.md) <p align="center"> <a href="https://assemblyscript.org" target="_blank" rel="noopener"><img width="100" src="https://avatars1.githubusercontent.com/u/28916798?s=200&v=4" alt="AssemblyScript logo"></a> </p> <p align="center"> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3ATest"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Test/main?label=test&logo=github" alt="Test status" /></a> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3APublish"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Publish/main?label=publish&logo=github" alt="Publish status" /></a> <a href="https://www.npmjs.com/package/assemblyscript"><img src="https://img.shields.io/npm/v/assemblyscript.svg?label=compiler&color=007acc&logo=npm" alt="npm compiler version" /></a> <a href="https://www.npmjs.com/package/@assemblyscript/loader"><img src="https://img.shields.io/npm/v/@assemblyscript/loader.svg?label=loader&color=007acc&logo=npm" alt="npm loader version" /></a> <a href="https://discord.gg/assemblyscript"><img src="https://img.shields.io/discord/721472913886281818.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2" alt="Discord online" /></a> </p> <p align="justify"><strong>AssemblyScript</strong> compiles a variant of <a href="http://www.typescriptlang.org">TypeScript</a> (basically JavaScript with types) to <a href="http://webassembly.org">WebAssembly</a> using <a href="https://github.com/WebAssembly/binaryen">Binaryen</a>. 
It generates lean and mean WebAssembly modules while being just an <code>npm install</code> away.</p> <h3 align="center"> <a href="https://assemblyscript.org">About</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/introduction.html">Introduction</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/quick-start.html">Quick&nbsp;start</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/examples.html">Examples</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/development.html">Development&nbsp;instructions</a> </h3> <br> <h2 align="center">Contributors</h2> <p align="center"> <a href="https://assemblyscript.org/#contributors"><img src="https://assemblyscript.org/contributors.svg" alt="Contributor logos" width="720" /></a> </p> <h2 align="center">Thanks to our sponsors!</h2> <p align="justify">Most of the maintainers and contributors do this open source work in their free time. If you use AssemblyScript for a serious task or plan to do so, and you'd like us to invest more time on it, <a href="https://opencollective.com/assemblyscript/donate" target="_blank" rel="noopener">please donate</a> to our <a href="https://opencollective.com/assemblyscript" target="_blank" rel="noopener">OpenCollective</a>. By sponsoring this project, your logo will show up below. Thank you so much for your support!</p> <p align="center"> <a href="https://assemblyscript.org/#sponsors"><img src="https://assemblyscript.org/sponsors.svg" alt="Sponsor logos" width="720" /></a> </p> ## Development instructions A development environment can be set up by cloning the repository: ```sh git clone https://github.com/AssemblyScript/assemblyscript.git cd assemblyscript npm install npm link ``` The link step is optional and makes the development instance available globally. The full process is documented as part of the repository: * [Compiler instructions](./src) * [Runtime instructions](./std/assembly/rt) * [Test instructions](./tests) # axios [![npm version](https://img.shields.io/npm/v/axios.svg?style=flat-square)](https://www.npmjs.org/package/axios) [![build status](https://img.shields.io/travis/axios/axios/master.svg?style=flat-square)](https://travis-ci.org/axios/axios) [![code coverage](https://img.shields.io/coveralls/mzabriskie/axios.svg?style=flat-square)](https://coveralls.io/r/mzabriskie/axios) [![install size](https://packagephobia.now.sh/badge?p=axios)](https://packagephobia.now.sh/result?p=axios) [![npm downloads](https://img.shields.io/npm/dm/axios.svg?style=flat-square)](http://npm-stat.com/charts.html?package=axios) [![gitter chat](https://img.shields.io/gitter/room/mzabriskie/axios.svg?style=flat-square)](https://gitter.im/mzabriskie/axios) [![code helpers](https://www.codetriage.com/axios/axios/badges/users.svg)](https://www.codetriage.com/axios/axios) Promise based HTTP client for the browser and node.js ## Features - Make [XMLHttpRequests](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) from the browser - Make [http](http://nodejs.org/api/http.html) requests from node.js - Supports the [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) API - Intercept request and response - Transform request and response data - Cancel requests - Automatic transforms for JSON data - Client side support for protecting against [XSRF](http://en.wikipedia.org/wiki/Cross-site_request_forgery) ## Browser Support ![Chrome](https://raw.github.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | 
![Firefox](https://raw.github.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![Safari](https://raw.github.com/alrra/browser-logos/master/src/safari/safari_48x48.png) | ![Opera](https://raw.github.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Edge](https://raw.github.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![IE](https://raw.github.com/alrra/browser-logos/master/src/archive/internet-explorer_9-11/internet-explorer_9-11_48x48.png) | --- | --- | --- | --- | --- | --- | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | 11 ✔ | [![Browser Matrix](https://saucelabs.com/open_sauce/build_matrix/axios.svg)](https://saucelabs.com/u/axios) ## Installing Using npm: ```bash $ npm install axios ``` Using bower: ```bash $ bower install axios ``` Using yarn: ```bash $ yarn add axios ``` Using cdn: ```html <script src="https://unpkg.com/axios/dist/axios.min.js"></script> ``` ## Example ### note: CommonJS usage In order to gain the TypeScript typings (for intellisense / autocomplete) while using CommonJS imports with `require()` use the following approach: ```js const axios = require('axios').default; // axios.<method> will now provide autocomplete and parameter typings ``` Performing a `GET` request ```js const axios = require('axios'); // Make a request for a user with a given ID axios.get('/user?ID=12345') .then(function (response) { // handle success console.log(response); }) .catch(function (error) { // handle error console.log(error); }) .finally(function () { // always executed }); // Optionally the request above could also be done as axios.get('/user', { params: { ID: 12345 } }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }) .finally(function () { // always executed }); // Want to use async/await? Add the `async` keyword to your outer function/method. async function getUser() { try { const response = await axios.get('/user?ID=12345'); console.log(response); } catch (error) { console.error(error); } } ``` > **NOTE:** `async/await` is part of ECMAScript 2017 and is not supported in Internet > Explorer and older browsers, so use with caution. Performing a `POST` request ```js axios.post('/user', { firstName: 'Fred', lastName: 'Flintstone' }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }); ``` Performing multiple concurrent requests ```js function getUserAccount() { return axios.get('/user/12345'); } function getUserPermissions() { return axios.get('/user/12345/permissions'); } axios.all([getUserAccount(), getUserPermissions()]) .then(axios.spread(function (acct, perms) { // Both requests are now complete })); ``` ## axios API Requests can be made by passing the relevant config to `axios`. ##### axios(config) ```js // Send a POST request axios({ method: 'post', url: '/user/12345', data: { firstName: 'Fred', lastName: 'Flintstone' } }); ``` ```js // GET request for remote image axios({ method: 'get', url: 'http://bit.ly/2mTM3nY', responseType: 'stream' }) .then(function (response) { response.data.pipe(fs.createWriteStream('ada_lovelace.jpg')) }); ``` ##### axios(url[, config]) ```js // Send a GET request (default method) axios('/user/12345'); ``` ### Request method aliases For convenience aliases have been provided for all supported request methods. 
##### axios.request(config) ##### axios.get(url[, config]) ##### axios.delete(url[, config]) ##### axios.head(url[, config]) ##### axios.options(url[, config]) ##### axios.post(url[, data[, config]]) ##### axios.put(url[, data[, config]]) ##### axios.patch(url[, data[, config]]) ###### NOTE When using the alias methods `url`, `method`, and `data` properties don't need to be specified in config. ### Concurrency Helper functions for dealing with concurrent requests. ##### axios.all(iterable) ##### axios.spread(callback) ### Creating an instance You can create a new instance of axios with a custom config. ##### axios.create([config]) ```js const instance = axios.create({ baseURL: 'https://some-domain.com/api/', timeout: 1000, headers: {'X-Custom-Header': 'foobar'} }); ``` ### Instance methods The available instance methods are listed below. The specified config will be merged with the instance config. ##### axios#request(config) ##### axios#get(url[, config]) ##### axios#delete(url[, config]) ##### axios#head(url[, config]) ##### axios#options(url[, config]) ##### axios#post(url[, data[, config]]) ##### axios#put(url[, data[, config]]) ##### axios#patch(url[, data[, config]]) ##### axios#getUri([config]) ## Request Config These are the available config options for making requests. Only the `url` is required. Requests will default to `GET` if `method` is not specified. ```js { // `url` is the server URL that will be used for the request url: '/user', // `method` is the request method to be used when making the request method: 'get', // default // `baseURL` will be prepended to `url` unless `url` is absolute. // It can be convenient to set `baseURL` for an instance of axios to pass relative URLs // to methods of that instance. baseURL: 'https://some-domain.com/api/', // `transformRequest` allows changes to the request data before it is sent to the server // This is only applicable for request methods 'PUT', 'POST', 'PATCH' and 'DELETE' // The last function in the array must return a string or an instance of Buffer, ArrayBuffer, // FormData or Stream // You may modify the headers object. transformRequest: [function (data, headers) { // Do whatever you want to transform the data return data; }], // `transformResponse` allows changes to the response data to be made before // it is passed to then/catch transformResponse: [function (data) { // Do whatever you want to transform the data return data; }], // `headers` are custom headers to be sent headers: {'X-Requested-With': 'XMLHttpRequest'}, // `params` are the URL parameters to be sent with the request // Must be a plain object or a URLSearchParams object params: { ID: 12345 }, // `paramsSerializer` is an optional function in charge of serializing `params` // (e.g. https://www.npmjs.com/package/qs, http://api.jquery.com/jquery.param/) paramsSerializer: function (params) { return Qs.stringify(params, {arrayFormat: 'brackets'}) }, // `data` is the data to be sent as the request body // Only applicable for request methods 'PUT', 'POST', and 'PATCH' // When no `transformRequest` is set, must be of one of the following types: // - string, plain object, ArrayBuffer, ArrayBufferView, URLSearchParams // - Browser only: FormData, File, Blob // - Node only: Stream, Buffer data: { firstName: 'Fred' }, // syntax alternative to send data into the body // method post // only the value is sent, not the key data: 'Country=Brasil&City=Belo Horizonte', // `timeout` specifies the number of milliseconds before the request times out. 
// If the request takes longer than `timeout`, the request will be aborted. timeout: 1000, // default is `0` (no timeout) // `withCredentials` indicates whether or not cross-site Access-Control requests // should be made using credentials withCredentials: false, // default // `adapter` allows custom handling of requests which makes testing easier. // Return a promise and supply a valid response (see lib/adapters/README.md). adapter: function (config) { /* ... */ }, // `auth` indicates that HTTP Basic auth should be used, and supplies credentials. // This will set an `Authorization` header, overwriting any existing // `Authorization` custom headers you have set using `headers`. // Please note that only HTTP Basic auth is configurable through this parameter. // For Bearer tokens and such, use `Authorization` custom headers instead. auth: { username: 'janedoe', password: 's00pers3cret' }, // `responseType` indicates the type of data that the server will respond with // options are: 'arraybuffer', 'document', 'json', 'text', 'stream' // browser only: 'blob' responseType: 'json', // default // `responseEncoding` indicates encoding to use for decoding responses // Note: Ignored for `responseType` of 'stream' or client-side requests responseEncoding: 'utf8', // default // `xsrfCookieName` is the name of the cookie to use as a value for xsrf token xsrfCookieName: 'XSRF-TOKEN', // default // `xsrfHeaderName` is the name of the http header that carries the xsrf token value xsrfHeaderName: 'X-XSRF-TOKEN', // default // `onUploadProgress` allows handling of progress events for uploads onUploadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `onDownloadProgress` allows handling of progress events for downloads onDownloadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `maxContentLength` defines the max size of the http response content in bytes allowed maxContentLength: 2000, // `validateStatus` defines whether to resolve or reject the promise for a given // HTTP response status code. If `validateStatus` returns `true` (or is set to `null` // or `undefined`), the promise will be resolved; otherwise, the promise will be // rejected. validateStatus: function (status) { return status >= 200 && status < 300; // default }, // `maxRedirects` defines the maximum number of redirects to follow in node.js. // If set to 0, no redirects will be followed. maxRedirects: 5, // default // `socketPath` defines a UNIX Socket to be used in node.js. // e.g. '/var/run/docker.sock' to send requests to the docker daemon. // Only either `socketPath` or `proxy` can be specified. // If both are specified, `socketPath` is used. socketPath: null, // default // `httpAgent` and `httpsAgent` define a custom agent to be used when performing http // and https requests, respectively, in node.js. This allows options to be added like // `keepAlive` that are not enabled by default. httpAgent: new http.Agent({ keepAlive: true }), httpsAgent: new https.Agent({ keepAlive: true }), // 'proxy' defines the hostname and port of the proxy server. // You can also define your proxy using the conventional `http_proxy` and // `https_proxy` environment variables. If you are using environment variables // for your proxy configuration, you can also define a `no_proxy` environment // variable as a comma-separated list of domains that should not be proxied. // Use `false` to disable proxies, ignoring environment variables. 
// `auth` indicates that HTTP Basic auth should be used to connect to the proxy, and // supplies credentials. // This will set an `Proxy-Authorization` header, overwriting any existing // `Proxy-Authorization` custom headers you have set using `headers`. proxy: { host: '127.0.0.1', port: 9000, auth: { username: 'mikeymike', password: 'rapunz3l' } }, // `cancelToken` specifies a cancel token that can be used to cancel the request // (see Cancellation section below for details) cancelToken: new CancelToken(function (cancel) { }) } ``` ## Response Schema The response for a request contains the following information. ```js { // `data` is the response that was provided by the server data: {}, // `status` is the HTTP status code from the server response status: 200, // `statusText` is the HTTP status message from the server response statusText: 'OK', // `headers` the headers that the server responded with // All header names are lower cased headers: {}, // `config` is the config that was provided to `axios` for the request config: {}, // `request` is the request that generated this response // It is the last ClientRequest instance in node.js (in redirects) // and an XMLHttpRequest instance in the browser request: {} } ``` When using `then`, you will receive the response as follows: ```js axios.get('/user/12345') .then(function (response) { console.log(response.data); console.log(response.status); console.log(response.statusText); console.log(response.headers); console.log(response.config); }); ``` When using `catch`, or passing a [rejection callback](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/then) as second parameter of `then`, the response will be available through the `error` object as explained in the [Handling Errors](#handling-errors) section. ## Config Defaults You can specify config defaults that will be applied to every request. ### Global axios defaults ```js axios.defaults.baseURL = 'https://api.example.com'; axios.defaults.headers.common['Authorization'] = AUTH_TOKEN; axios.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded'; ``` ### Custom instance defaults ```js // Set config defaults when creating the instance const instance = axios.create({ baseURL: 'https://api.example.com' }); // Alter defaults after instance has been created instance.defaults.headers.common['Authorization'] = AUTH_TOKEN; ``` ### Config order of precedence Config will be merged with an order of precedence. The order is library defaults found in [lib/defaults.js](https://github.com/axios/axios/blob/master/lib/defaults.js#L28), then `defaults` property of the instance, and finally `config` argument for the request. The latter will take precedence over the former. Here's an example. ```js // Create an instance using the config defaults provided by the library // At this point the timeout config value is `0` as is the default for the library const instance = axios.create(); // Override timeout default for the library // Now all requests using this instance will wait 2.5 seconds before timing out instance.defaults.timeout = 2500; // Override timeout for this request as it's known to take a long time instance.get('/longRequest', { timeout: 5000 }); ``` ## Interceptors You can intercept requests or responses before they are handled by `then` or `catch`. 
```js // Add a request interceptor axios.interceptors.request.use(function (config) { // Do something before request is sent return config; }, function (error) { // Do something with request error return Promise.reject(error); }); // Add a response interceptor axios.interceptors.response.use(function (response) { // Any status code that lie within the range of 2xx cause this function to trigger // Do something with response data return response; }, function (error) { // Any status codes that falls outside the range of 2xx cause this function to trigger // Do something with response error return Promise.reject(error); }); ``` If you need to remove an interceptor later you can. ```js const myInterceptor = axios.interceptors.request.use(function () {/*...*/}); axios.interceptors.request.eject(myInterceptor); ``` You can add interceptors to a custom instance of axios. ```js const instance = axios.create(); instance.interceptors.request.use(function () {/*...*/}); ``` ## Handling Errors ```js axios.get('/user/12345') .catch(function (error) { if (error.response) { // The request was made and the server responded with a status code // that falls out of the range of 2xx console.log(error.response.data); console.log(error.response.status); console.log(error.response.headers); } else if (error.request) { // The request was made but no response was received // `error.request` is an instance of XMLHttpRequest in the browser and an instance of // http.ClientRequest in node.js console.log(error.request); } else { // Something happened in setting up the request that triggered an Error console.log('Error', error.message); } console.log(error.config); }); ``` Using the `validateStatus` config option, you can define HTTP code(s) that should throw an error. ```js axios.get('/user/12345', { validateStatus: function (status) { return status < 500; // Reject only if the status code is greater than or equal to 500 } }) ``` Using `toJSON` you get an object with more information about the HTTP error. ```js axios.get('/user/12345') .catch(function (error) { console.log(error.toJSON()); }); ``` ## Cancellation You can cancel a request using a *cancel token*. > The axios cancel token API is based on the withdrawn [cancelable promises proposal](https://github.com/tc39/proposal-cancelable-promises). You can create a cancel token using the `CancelToken.source` factory as shown below: ```js const CancelToken = axios.CancelToken; const source = CancelToken.source(); axios.get('/user/12345', { cancelToken: source.token }).catch(function (thrown) { if (axios.isCancel(thrown)) { console.log('Request canceled', thrown.message); } else { // handle error } }); axios.post('/user/12345', { name: 'new name' }, { cancelToken: source.token }) // cancel the request (the message parameter is optional) source.cancel('Operation canceled by the user.'); ``` You can also create a cancel token by passing an executor function to the `CancelToken` constructor: ```js const CancelToken = axios.CancelToken; let cancel; axios.get('/user/12345', { cancelToken: new CancelToken(function executor(c) { // An executor function receives a cancel function as a parameter cancel = c; }) }); // cancel the request cancel(); ``` > Note: you can cancel several requests with the same cancel token. ## Using application/x-www-form-urlencoded format By default, axios serializes JavaScript objects to `JSON`. To send data in the `application/x-www-form-urlencoded` format instead, you can use one of the following options. 
### Browser In a browser, you can use the [`URLSearchParams`](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams) API as follows: ```js const params = new URLSearchParams(); params.append('param1', 'value1'); params.append('param2', 'value2'); axios.post('/foo', params); ``` > Note that `URLSearchParams` is not supported by all browsers (see [caniuse.com](http://www.caniuse.com/#feat=urlsearchparams)), but there is a [polyfill](https://github.com/WebReflection/url-search-params) available (make sure to polyfill the global environment). Alternatively, you can encode data using the [`qs`](https://github.com/ljharb/qs) library: ```js const qs = require('qs'); axios.post('/foo', qs.stringify({ 'bar': 123 })); ``` Or in another way (ES6): ```js import qs from 'qs'; const data = { 'bar': 123 }; const options = { method: 'POST', headers: { 'content-type': 'application/x-www-form-urlencoded' }, data: qs.stringify(data), url, }; axios(options); ``` ### Node.js In node.js, you can use the [`querystring`](https://nodejs.org/api/querystring.html) module as follows: ```js const querystring = require('querystring'); axios.post('http://something.com/', querystring.stringify({ foo: 'bar' })); ``` You can also use the [`qs`](https://github.com/ljharb/qs) library. ###### NOTE The `qs` library is preferable if you need to stringify nested objects, as the `querystring` method has known issues with that use case (https://github.com/nodejs/node-v0.x-archive/issues/1665). ## Semver Until axios reaches a `1.0` release, breaking changes will be released with a new minor version. For example, `0.5.1` and `0.5.4` will have the same API, but `0.6.0` will have breaking changes. ## Promises axios depends on a native ES6 Promise implementation to be [supported](http://caniuse.com/promises). If your environment doesn't support ES6 Promises, you can [polyfill](https://github.com/jakearchibald/es6-promise). ## TypeScript axios includes [TypeScript](http://typescriptlang.org) definitions. ```typescript import axios from 'axios'; axios.get('/user?ID=12345'); ``` ## Resources * [Changelog](https://github.com/axios/axios/blob/master/CHANGELOG.md) * [Upgrade Guide](https://github.com/axios/axios/blob/master/UPGRADE_GUIDE.md) * [Ecosystem](https://github.com/axios/axios/blob/master/ECOSYSTEM.md) * [Contributing Guide](https://github.com/axios/axios/blob/master/CONTRIBUTING.md) * [Code of Conduct](https://github.com/axios/axios/blob/master/CODE_OF_CONDUCT.md) ## Credits axios is heavily inspired by the [$http service](https://docs.angularjs.org/api/ng/service/$http) provided in [Angular](https://angularjs.org/). Ultimately axios is an effort to provide a standalone `$http`-like service for use outside of Angular. ## License [MIT](LICENSE) # Ozone - Javascript Class Framework [![Build Status](https://travis-ci.org/inf3rno/o3.png?branch=master)](https://travis-ci.org/inf3rno/o3) The Ozone class framework contains enhanced class support to ease the development of object-oriented javascript applications in an ES5 environment. Another alternative for better class support is to use ES6 classes and compilers like Babel, Traceur or TypeScript until native ES6 support arrives. ## Documentation ### Installation ```bash npm install o3 ``` ```bash bower install o3 ``` #### Environment compatibility The framework passed its tests on - node v4.2 and v5.x - chrome 51.0 - firefox 47.0 and 48.0 - internet explorer 11.0 - phantomjs 2.1 using npm scripts under Win7 x64.
I wasn't able to test the framework with Opera since the Karma launcher is buggy, so I decided not to support Opera. I used [Yadda](https://github.com/acuminous/yadda) to write BDD tests. I used [Karma](https://github.com/karma-runner/karma) with [Browserify](https://github.com/substack/node-browserify) to test the framework in browsers. On pre-ES5 environments there will be bugs in the Class module due to pre-ES5 enumeration and the lack of some ES5 methods, so pre-ES5 environments are not supported. #### Requirements An ES5-capable environment is required with - `Object.create` - ES5 compatible property enumeration: `Object.defineProperty`, `Object.getOwnPropertyDescriptor`, `Object.prototype.hasOwnProperty`, etc. - `Array.prototype.forEach` #### Usage In this documentation I used the framework as follows: ```js var o3 = require("o3"), Class = o3.Class; ``` ### Inheritance #### Inheriting from native classes (from the Error class in these examples) You can extend native classes by calling the Class() function. ```js var UserError = Class(Error, { prototype: { message: "blah", constructor: function UserError() { Error.captureStackTrace(this, this.constructor); } } }); ``` An alternative is to call Class.extend() with the Ancestor as the context. The Class() function uses this in the background. ```js var UserError = Class.extend.call(Error, { prototype: { message: "blah", constructor: function UserError() { Error.captureStackTrace(this, this.constructor); } } }); ``` #### Inheriting from custom classes You can use Class.extend() with any other class, not just with native classes. ```js var Ancestor = Class(Object, { prototype: { a: 1, b: 2 } }); var Descendant = Class.extend.call(Ancestor, { prototype: { c: 3 } }); ``` Or you can simply add it as a static method, so you don't have to pass the context every time you want to use it. The only drawback is that this static method will be inherited as well. ```js var Ancestor = Class(Object, { extend: Class.extend, prototype: { a: 1, b: 2 } }); var Descendant = Ancestor.extend({ prototype: { c: 3 } }); ``` #### Inheriting from the Class class You can inherit the extend() method and other utility methods from the Class class. Probably this is the simplest solution if you need the Class API and you don't need to inherit from special native classes like Error. ```js var Ancestor = Class.extend({ prototype: { a: 1, b: 2 } }); var Descendant = Ancestor.extend({ prototype: { c: 3 } }); ``` #### Inheritance with clone and merge The static extend() method uses the clone() and merge() utility methods to inherit from the ancestor and add properties from the config. ```js var MyClass = Class.clone.call(Object, function MyClass(){ // ... }); Class.merge.call(MyClass, { prototype: { x: 1, y: 2 } }); ``` Or with utility methods. ```js var MyClass = Class.clone(function MyClass() { // ... }).merge({ prototype: { x: 1, y: 2 } }); ``` #### Inheritance with clone and absorb You can fill in missing properties with the usage of absorb(). ```js var MyClass = Class(SomeAncestor, {...}); Class.absorb.call(MyClass, Class); MyClass.merge({...}); ``` For example, if you don't have the Class methods and your class already has an ancestor, then you can use absorb() to add the Class methods. #### Abstract classes Abstract classes with instantiation verification won't be implemented in this lib; however, we provide an `abstractMethod`, which you can assign to the not-yet-implemented parts of your abstract class. ```js var AbstractA = Class({ prototype: { doA: function (){ // ... var b = this.getB(); // ...
// do something with b // ... }, getB: abstractMethod } }); var AB1 = Class(AbstractA, { prototype: { getB: function (){ return new B1(); } } }); var ab1 = new AB1(); ``` I strongly support the composition over inheritance principle and I think you should use dependency injection instead of abstract classes. ```js var A = Class({ prototype: { init: function (b){ this.b = b; }, doA: function (){ // ... // do something with this.b // ... } } }); var b = new B1(); var ab1 = new A(b); ``` ### Constructors #### Using a custom constructor You can pass your custom constructor as a config option by creating the class. ```js var MyClass = Class(Object, { prototype: { constructor: function () { // ... } } }); ``` #### Using a custom factory to create the constructor Or you can pass a static factory method to create your custom constructor. ```js var MyClass = Class(Object, { factory: function () { return function () { // ... } } }); ``` #### Using an inherited factory to create the constructor By inheritance the constructors of the descendant classes will be automatically created as well. ```js var Ancestor = Class(Object, { factory: function () { return function () { // ... } } }); var Descendant = Class(Ancestor, {}); ``` #### Using the default factory to create the constructor You don't need to pass anything if you need a noop function as constructor. The Class.factory() will create a noop constructor by default. ```js var MyClass = Class(Object, {}); ``` In fact you don't need to pass any arguments to the Class function if you need an empty class inheriting from the Object native class. ```js var MyClass = Class(); ``` The default factory calls the build() and init() methods if they are given. ```js var MyClass = Class({ prototype: { build: function (options) { console.log("build", options); }, init: function (options) { console.log("init", options); } } }); var my = new MyClass({a: 1, b: 2}); // build {a: 1, b: 2} // init {a: 1, b: 2} var my2 = my.clone({c: 3}); // build {c: 3} var MyClass2 = MyClass.extend({}, [{d: 4}]); // build {d: 4} ``` ### Instantiation #### Creating new instance with the new operator Ofc. you can create a new instance in the javascript way. ```js var MyClass = Class(); var my = new MyClass(); ``` #### Creating a new instance with the static newInstance method If you want to pass an array of arguments then you can do it the following way. ```js var MyClass = Class.extend({ prototype: { constructor: function () { for (var i in arguments) console.log(arguments[i]); } } }); var my = MyClass.newInstance.apply(MyClass, ["a", "b", "c"]); // a // b // c ``` #### Creating new instance with clone You can create a new instance by cloning the prototype of the class. ```js var MyClass = Class(); var my = Class.prototype.clone.call(MyClass.prototype); ``` Or you can inherit the utility methods to make this easier. ```js var MyClass = Class.extend(); var my = MyClass.prototype.clone(); ``` Just be aware that by default cloning calls only the `build()` method, so the `init()` method won't be called by the new instance. #### Cloning instances You can clone an existing instance with the clone method. ```js var MyClass = Class.extend(); var my = MyClass.prototype.clone(); var my2 = my.clone(); ``` Be aware that this is prototypal inheritance with Object.create(), so the inherited properties won't be enumerable. The clone() method calls the build() method on the new instance if it is given. 
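To make the two caveats above concrete, here is a minimal sketch, assuming the `clone()`/`build()` behaviour described in this section (the class and the `a` property are illustrative, not part of the library):

```js
var o3 = require("o3"),
    Class = o3.Class;

var MyClass = Class.extend({
    prototype: {
        a: 1,
        build: function () { console.log("build called"); }
    }
});

var my = MyClass.prototype.clone();    // build called
var my2 = my.clone();                  // build called (init() is not called by clone())

console.log(my2.a);                    // 1 - reachable through the prototype chain
console.log(Object.keys(my2));         // [] - 'a' is inherited, not an own enumerable property
console.log(my2.hasOwnProperty("a"));  // false
```

If every instance needs its own copy of a property, set it inside `build()` (or in a constructor), as in the examples above.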
#### Using clone in the constructor You can use the same behavior both by cloning and by creating a new instance using the constructor ```js var MyClass = Class.extend({ lastIndex: 0, prototype: { index: undefined, constructor: function MyClass() { return MyClass.prototype.clone(); }, clone: function () { var instance = Class.prototype.clone.call(this); instance.index = ++MyClass.lastIndex; return instance; } } }); var my1 = new MyClass(); var my2 = MyClass.prototype.clone(); var my3 = my1.clone(); var my4 = my2.clone(); ``` Be aware that this way the constructor will drop the instance created with the `new` operator. Be aware that the clone() method is used by inheritance, so creating the prototype of a descendant class will use the clone() method as well. ```js var Descendant = MyClass.clone(function Descendant() { return Descendant.prototype.clone(); }); var my5 = Descendant.prototype; var my6 = new Descendant(); // ... ``` #### Using absorb(), merge() or inheritance to set the defaults values on properties You can use absorb() to set default values after configuration. ```js var MyClass = Class.extend({ prototype: { constructor: function (config) { var theDefaults = { // ... }; this.merge(config); this.absorb(theDefaults); } } }); ``` You can use merge() to set default values before configuration. ```js var MyClass = Class.extend({ prototype: { constructor: function (config) { var theDefaults = { // ... }; this.merge(theDefaults); this.merge(config); } } }); ``` You can use inheritance to set default values on class level. ```js var MyClass = Class.extend({ prototype: { aProperty: defaultValue, // ... constructor: function (config) { this.merge(config); } } }); ``` ## License MIT - 2015 Jánszky László Lajos https-proxy-agent ================ ### An HTTP(s) proxy `http.Agent` implementation for HTTPS [![Build Status](https://github.com/TooTallNate/node-https-proxy-agent/workflows/Node%20CI/badge.svg)](https://github.com/TooTallNate/node-https-proxy-agent/actions?workflow=Node+CI) This module provides an `http.Agent` implementation that connects to a specified HTTP or HTTPS proxy server, and can be used with the built-in `https` module. Specifically, this `Agent` implementation connects to an intermediary "proxy" server and issues the [CONNECT HTTP method][CONNECT], which tells the proxy to open a direct TCP connection to the destination server. Since this agent implements the CONNECT HTTP method, it also works with other protocols that use this method when connecting over proxies (i.e. WebSockets). See the "Examples" section below for more. 
Installation ------------ Install with `npm`: ``` bash $ npm install https-proxy-agent ``` Examples -------- #### `https` module example ``` js var url = require('url'); var https = require('https'); var HttpsProxyAgent = require('https-proxy-agent'); // HTTP/HTTPS proxy to connect to var proxy = process.env.http_proxy || 'http://168.63.76.32:3128'; console.log('using proxy server %j', proxy); // HTTPS endpoint for the proxy to connect to var endpoint = process.argv[2] || 'https://graph.facebook.com/tootallnate'; console.log('attempting to GET %j', endpoint); var options = url.parse(endpoint); // create an instance of the `HttpsProxyAgent` class with the proxy server information var agent = new HttpsProxyAgent(proxy); options.agent = agent; https.get(options, function (res) { console.log('"response" event!', res.headers); res.pipe(process.stdout); }); ``` #### `ws` WebSocket connection example ``` js var url = require('url'); var WebSocket = require('ws'); var HttpsProxyAgent = require('https-proxy-agent'); // HTTP/HTTPS proxy to connect to var proxy = process.env.http_proxy || 'http://168.63.76.32:3128'; console.log('using proxy server %j', proxy); // WebSocket endpoint for the proxy to connect to var endpoint = process.argv[2] || 'ws://echo.websocket.org'; var parsed = url.parse(endpoint); console.log('attempting to connect to WebSocket %j', endpoint); // create an instance of the `HttpsProxyAgent` class with the proxy server information var options = url.parse(proxy); var agent = new HttpsProxyAgent(options); // finally, initiate the WebSocket connection var socket = new WebSocket(endpoint, { agent: agent }); socket.on('open', function () { console.log('"open" event!'); socket.send('hello world'); }); socket.on('message', function (data, flags) { console.log('"message" event! %j %j', data, flags); socket.close(); }); ``` API --- ### new HttpsProxyAgent(Object options) The `HttpsProxyAgent` class implements an `http.Agent` subclass that connects to the specified "HTTP(s) proxy server" in order to proxy HTTPS and/or WebSocket requests. This is achieved by using the [HTTP `CONNECT` method][CONNECT]. The `options` argument may either be a string URI of the proxy server to use, or an "options" object with more specific properties: * `host` - String - Proxy host to connect to (may use `hostname` as well). Required. * `port` - Number - Proxy port to connect to. Required. * `protocol` - String - If `https:`, then use TLS to connect to the proxy. * `headers` - Object - Additional HTTP headers to be sent on the HTTP CONNECT method. * Any other options given are passed to the `net.connect()`/`tls.connect()` functions. License ------- (The MIT License) Copyright (c) 2013 Nathan Rajlich &lt;nathan@tootallnate.net&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. [CONNECT]: http://en.wikipedia.org/wiki/HTTP_tunnel#HTTP_CONNECT_Tunneling # micromatch [![NPM version](https://img.shields.io/npm/v/micromatch.svg?style=flat)](https://www.npmjs.com/package/micromatch) [![NPM monthly downloads](https://img.shields.io/npm/dm/micromatch.svg?style=flat)](https://npmjs.org/package/micromatch) [![NPM total downloads](https://img.shields.io/npm/dt/micromatch.svg?style=flat)](https://npmjs.org/package/micromatch) [![Tests](https://github.com/micromatch/micromatch/actions/workflows/test.yml/badge.svg)](https://github.com/micromatch/micromatch/actions/workflows/test.yml) > Glob matching for javascript/node.js. A replacement and faster alternative to minimatch and multimatch. Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Table of Contents <details> <summary><strong>Details</strong></summary> - [Install](#install) - [Quickstart](#quickstart) - [Why use micromatch?](#why-use-micromatch) * [Matching features](#matching-features) - [Switching to micromatch](#switching-to-micromatch) * [From minimatch](#from-minimatch) * [From multimatch](#from-multimatch) - [API](#api) - [Options](#options) - [Options Examples](#options-examples) * [options.basename](#optionsbasename) * [options.bash](#optionsbash) * [options.expandRange](#optionsexpandrange) * [options.format](#optionsformat) * [options.ignore](#optionsignore) * [options.matchBase](#optionsmatchbase) * [options.noextglob](#optionsnoextglob) * [options.nonegate](#optionsnonegate) * [options.noglobstar](#optionsnoglobstar) * [options.nonull](#optionsnonull) * [options.nullglob](#optionsnullglob) * [options.onIgnore](#optionsonignore) * [options.onMatch](#optionsonmatch) * [options.onResult](#optionsonresult) * [options.posixSlashes](#optionsposixslashes) * [options.unescape](#optionsunescape) - [Extended globbing](#extended-globbing) * [Extglobs](#extglobs) * [Braces](#braces) * [Regex character classes](#regex-character-classes) * [Regex groups](#regex-groups) * [POSIX bracket expressions](#posix-bracket-expressions) - [Notes](#notes) * [Bash 4.3 parity](#bash-43-parity) * [Backslashes](#backslashes) - [Benchmarks](#benchmarks) * [Running benchmarks](#running-benchmarks) * [Latest results](#latest-results) - [Contributing](#contributing) - [About](#about) </details> ## Install Install with [npm](https://www.npmjs.com/) (requires [Node.js](https://nodejs.org/en/) >=8.6): ```sh $ npm install --save micromatch ``` ## Quickstart ```js const micromatch = require('micromatch'); // micromatch(list, patterns[, options]); ``` The [main export](#micromatch) takes a list of strings and one or more glob patterns: ```js console.log(micromatch(['foo', 'bar', 'baz', 'qux'], ['f*', 'b*'])) //=> ['foo', 'bar', 'baz'] console.log(micromatch(['foo', 'bar', 'baz', 'qux'], ['*', '!b*'])) //=> ['foo', 'qux'] ``` Use [.isMatch()](#ismatch) to for boolean matching: ```js console.log(micromatch.isMatch('foo', 'f*')) //=> true console.log(micromatch.isMatch('foo', ['b*', 'f*'])) //=> true ``` [Switching](#switching-to-micromatch) from minimatch and multimatch is easy! <br> ## Why use micromatch? 
> micromatch is a [replacement](#switching-to-micromatch) for minimatch and multimatch * Supports all of the same matching features as [minimatch](https://github.com/isaacs/minimatch) and [multimatch](https://github.com/sindresorhus/multimatch) * More complete support for the Bash 4.3 specification than minimatch and multimatch. Micromatch passes _all of the spec tests_ from bash, including some that bash still fails. * **Fast & Performant** - Loads in about 5ms and performs [fast matches](#benchmarks). * **Glob matching** - Using wildcards (`*` and `?`), globstars (`**`) for nested directories * **[Advanced globbing](#extended-globbing)** - Supports [extglobs](#extglobs), [braces](#braces-1), and [POSIX brackets](#posix-bracket-expressions), and support for escaping special characters with `\` or quotes. * **Accurate** - Covers more scenarios [than minimatch](https://github.com/yarnpkg/yarn/pull/3339) * **Well tested** - More than 5,000 [test assertions](./test) * **Windows support** - More reliable windows support than minimatch and multimatch. * **[Safe](https://github.com/micromatch/braces#braces-is-safe)** - Micromatch is not subject to DoS with brace patterns like minimatch and multimatch. ### Matching features * Support for multiple glob patterns (no need for wrappers like multimatch) * Wildcards (`**`, `*.js`) * Negation (`'!a/*.js'`, `'*!(b).js'`) * [extglobs](#extglobs) (`+(x|y)`, `!(a|b)`) * [POSIX character classes](#posix-bracket-expressions) (`[[:alpha:][:digit:]]`) * [brace expansion](https://github.com/micromatch/braces) (`foo/{1..5}.md`, `bar/{a,b,c}.js`) * regex character classes (`foo-[1-5].js`) * regex logical "or" (`foo/(abc|xyz).js`) You can mix and match these features to create whatever patterns you need! ## Switching to micromatch _(There is one notable difference between micromatch and minimatch in regards to how backslashes are handled. See [the notes about backslashes](#backslashes) for more information.)_ ### From minimatch Use [micromatch.isMatch()](#ismatch) instead of `minimatch()`: ```js console.log(micromatch.isMatch('foo', 'b*')); //=> false ``` Use [micromatch.match()](#match) instead of `minimatch.match()`: ```js console.log(micromatch.match(['foo', 'bar'], 'b*')); //=> 'bar' ``` ### From multimatch Same signature: ```js console.log(micromatch(['foo', 'bar', 'baz'], ['f*', '*z'])); //=> ['foo', 'baz'] ``` ## API **Params** * `list` **{String|Array<string>}**: List of strings to match. * `patterns` **{String|Array<string>}**: One or more glob patterns to use for matching. * `options` **{Object}**: See available [options](#options) * `returns` **{Array}**: Returns an array of matches **Example** ```js const mm = require('micromatch'); // mm(list, patterns[, options]); console.log(mm(['a.js', 'a.txt'], ['*.js'])); //=> [ 'a.js' ] ``` ### [.matcher](index.js#L104) Returns a matcher function from the given glob `pattern` and `options`. The returned function takes a string to match as its only argument and returns true if the string is a match. **Params** * `pattern` **{String}**: Glob pattern * `options` **{Object}** * `returns` **{Function}**: Returns a matcher function. **Example** ```js const mm = require('micromatch'); // mm.matcher(pattern[, options]); const isMatch = mm.matcher('*.!(*a)'); console.log(isMatch('a.a')); //=> false console.log(isMatch('a.b')); //=> true ``` ### [.isMatch](index.js#L123) Returns true if **any** of the given glob `patterns` match the specified `string`. **Params** * `str` **{String}**: The string to test. 
* `patterns` **{String|Array}**: One or more glob patterns to use for matching. * `[options]` **{Object}**: See available [options](#options). * `returns` **{Boolean}**: Returns true if any patterns match `str` **Example** ```js const mm = require('micromatch'); // mm.isMatch(string, patterns[, options]); console.log(mm.isMatch('a.a', ['b.*', '*.a'])); //=> true console.log(mm.isMatch('a.a', 'b.*')); //=> false ``` ### [.not](index.js#L148) Returns a list of strings that _**do not match any**_ of the given `patterns`. **Params** * `list` **{Array}**: Array of strings to match. * `patterns` **{String|Array}**: One or more glob pattern to use for matching. * `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Array}**: Returns an array of strings that **do not match** the given patterns. **Example** ```js const mm = require('micromatch'); // mm.not(list, patterns[, options]); console.log(mm.not(['a.a', 'b.b', 'c.c'], '*.a')); //=> ['b.b', 'c.c'] ``` ### [.contains](index.js#L188) Returns true if the given `string` contains the given pattern. Similar to [.isMatch](#isMatch) but the pattern can match any part of the string. **Params** * `str` **{String}**: The string to match. * `patterns` **{String|Array}**: Glob pattern to use for matching. * `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Boolean}**: Returns true if any of the patterns matches any part of `str`. **Example** ```js var mm = require('micromatch'); // mm.contains(string, pattern[, options]); console.log(mm.contains('aa/bb/cc', '*b')); //=> true console.log(mm.contains('aa/bb/cc', '*d')); //=> false ``` ### [.matchKeys](index.js#L230) Filter the keys of the given object with the given `glob` pattern and `options`. Does not attempt to match nested keys. If you need this feature, use [glob-object](https://github.com/jonschlinkert/glob-object) instead. **Params** * `object` **{Object}**: The object with keys to filter. * `patterns` **{String|Array}**: One or more glob patterns to use for matching. * `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Object}**: Returns an object with only keys that match the given patterns. **Example** ```js const mm = require('micromatch'); // mm.matchKeys(object, patterns[, options]); const obj = { aa: 'a', ab: 'b', ac: 'c' }; console.log(mm.matchKeys(obj, '*b')); //=> { ab: 'b' } ``` ### [.some](index.js#L259) Returns true if some of the strings in the given `list` match any of the given glob `patterns`. **Params** * `list` **{String|Array}**: The string or array of strings to test. Returns as soon as the first match is found. * `patterns` **{String|Array}**: One or more glob patterns to use for matching. * `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Boolean}**: Returns true if any `patterns` matches any of the strings in `list` **Example** ```js const mm = require('micromatch'); // mm.some(list, patterns[, options]); console.log(mm.some(['foo.js', 'bar.js'], ['*.js', '!foo.js'])); // true console.log(mm.some(['foo.js'], ['*.js', '!foo.js'])); // false ``` ### [.every](index.js#L295) Returns true if every string in the given `list` matches any of the given glob `patterns`. **Params** * `list` **{String|Array}**: The string or array of strings to test. * `patterns` **{String|Array}**: One or more glob patterns to use for matching. 
* `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Boolean}**: Returns true if all `patterns` matches all of the strings in `list` **Example** ```js const mm = require('micromatch'); // mm.every(list, patterns[, options]); console.log(mm.every('foo.js', ['foo.js'])); // true console.log(mm.every(['foo.js', 'bar.js'], ['*.js'])); // true console.log(mm.every(['foo.js', 'bar.js'], ['*.js', '!foo.js'])); // false console.log(mm.every(['foo.js'], ['*.js', '!foo.js'])); // false ``` ### [.all](index.js#L334) Returns true if **all** of the given `patterns` match the specified string. **Params** * `str` **{String|Array}**: The string to test. * `patterns` **{String|Array}**: One or more glob patterns to use for matching. * `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Boolean}**: Returns true if any patterns match `str` **Example** ```js const mm = require('micromatch'); // mm.all(string, patterns[, options]); console.log(mm.all('foo.js', ['foo.js'])); // true console.log(mm.all('foo.js', ['*.js', '!foo.js'])); // false console.log(mm.all('foo.js', ['*.js', 'foo.js'])); // true console.log(mm.all('foo.js', ['*.js', 'f*', '*o*', '*o.js'])); // true ``` ### [.capture](index.js#L361) Returns an array of matches captured by `pattern` in `string, or`null` if the pattern did not match. **Params** * `glob` **{String}**: Glob pattern to use for matching. * `input` **{String}**: String to match * `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Array|null}**: Returns an array of captures if the input matches the glob pattern, otherwise `null`. **Example** ```js const mm = require('micromatch'); // mm.capture(pattern, string[, options]); console.log(mm.capture('test/*.js', 'test/foo.js')); //=> ['foo'] console.log(mm.capture('test/*.js', 'foo/bar.css')); //=> null ``` ### [.makeRe](index.js#L387) Create a regular expression from the given glob `pattern`. **Params** * `pattern` **{String}**: A glob pattern to convert to regex. * `options` **{Object}** * `returns` **{RegExp}**: Returns a regex created from the given pattern. **Example** ```js const mm = require('micromatch'); // mm.makeRe(pattern[, options]); console.log(mm.makeRe('*.js')); //=> /^(?:(\.[\\\/])?(?!\.)(?=.)[^\/]*?\.js)$/ ``` ### [.scan](index.js#L403) Scan a glob pattern to separate the pattern into segments. Used by the [split](#split) method. **Params** * `pattern` **{String}** * `options` **{Object}** * `returns` **{Object}**: Returns an object with **Example** ```js const mm = require('micromatch'); const state = mm.scan(pattern[, options]); ``` ### [.parse](index.js#L419) Parse a glob pattern to create the source string for a regular expression. **Params** * `glob` **{String}** * `options` **{Object}** * `returns` **{Object}**: Returns an object with useful properties and output to be used as regex source string. **Example** ```js const mm = require('micromatch'); const state = mm.parse(pattern[, options]); ``` ### [.braces](index.js#L446) Process the given brace `pattern`. **Params** * `pattern` **{String}**: String with brace pattern to process. * `options` **{Object}**: Any [options](#options) to change how expansion is performed. See the [braces](https://github.com/micromatch/braces) library for all available options. 
* `returns` **{Array}** **Example** ```js const { braces } = require('micromatch'); console.log(braces('foo/{a,b,c}/bar')); //=> [ 'foo/(a|b|c)/bar' ] console.log(braces('foo/{a,b,c}/bar', { expand: true })); //=> [ 'foo/a/bar', 'foo/b/bar', 'foo/c/bar' ] ``` ## Options | **Option** | **Type** | **Default value** | **Description** | | --- | --- | --- | --- | | `basename` | `boolean` | `false` | If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. | | `bash` | `boolean` | `false` | Follow bash matching rules more strictly - disallows backslashes as escape characters, and treats single stars as globstars (`**`). | | `capture` | `boolean` | `undefined` | Return regex matches in supporting methods. | | `contains` | `boolean` | `undefined` | Allows glob to match any part of the given string(s). | | `cwd` | `string` | `process.cwd()` | Current working directory. Used by `picomatch.split()` | | `debug` | `boolean` | `undefined` | Debug regular expressions when an error is thrown. | | `dot` | `boolean` | `false` | Match dotfiles. Otherwise dotfiles are ignored unless a `.` is explicitly defined in the pattern. | | `expandRange` | `function` | `undefined` | Custom function for expanding ranges in brace patterns, such as `{a..z}`. The function receives the range values as two arguments, and it must return a string to be used in the generated regex. It's recommended that returned strings be wrapped in parentheses. This option is overridden by the `expandBrace` option. | | `failglob` | `boolean` | `false` | Similar to the `failglob` behavior in Bash, throws an error when no matches are found. Based on the bash option of the same name. | | `fastpaths` | `boolean` | `true` | To speed up processing, full parsing is skipped for a handful common glob patterns. Disable this behavior by setting this option to `false`. | | `flags` | `boolean` | `undefined` | Regex flags to use in the generated regex. If defined, the `nocase` option will be overridden. | | [format](#optionsformat) | `function` | `undefined` | Custom function for formatting the returned string. This is useful for removing leading slashes, converting Windows paths to Posix paths, etc. | | `ignore` | `array\|string` | `undefined` | One or more glob patterns for excluding strings that should not be matched from the result. | | `keepQuotes` | `boolean` | `false` | Retain quotes in the generated regex, since quotes may also be used as an alternative to backslashes. | | `literalBrackets` | `boolean` | `undefined` | When `true`, brackets in the glob pattern will be escaped so that only literal brackets will be matched. | | `lookbehinds` | `boolean` | `true` | Support regex positive and negative lookbehinds. Note that you must be using Node 8.1.10 or higher to enable regex lookbehinds. | | `matchBase` | `boolean` | `false` | Alias for `basename` | | `maxLength` | `boolean` | `65536` | Limit the max length of the input string. An error is thrown if the input string is longer than this value. | | `nobrace` | `boolean` | `false` | Disable brace matching, so that `{a,b}` and `{1..3}` would be treated as literal characters. | | `nobracket` | `boolean` | `undefined` | Disable matching with regex brackets. | | `nocase` | `boolean` | `false` | Perform case-insensitive matching. Equivalent to the regex `i` flag. Note that this option is ignored when the `flags` option is defined. 
| | `nodupes` | `boolean` | `true` | Deprecated, use `nounique` instead. This option will be removed in a future major release. By default duplicates are removed. Disable uniquification by setting this option to false. | | `noext` | `boolean` | `false` | Alias for `noextglob` | | `noextglob` | `boolean` | `false` | Disable support for matching with [extglobs](#extglobs) (like `+(a\|b)`) | | `noglobstar` | `boolean` | `false` | Disable support for matching nested directories with globstars (`**`) | | `nonegate` | `boolean` | `false` | Disable support for negating with leading `!` | | `noquantifiers` | `boolean` | `false` | Disable support for regex quantifiers (like `a{1,2}`) and treat them as brace patterns to be expanded. | | [onIgnore](#optionsonIgnore) | `function` | `undefined` | Function to be called on ignored items. | | [onMatch](#optionsonMatch) | `function` | `undefined` | Function to be called on matched items. | | [onResult](#optionsonResult) | `function` | `undefined` | Function to be called on all items, regardless of whether or not they are matched or ignored. | | `posix` | `boolean` | `false` | Support [POSIX character classes](#posix-bracket-expressions) ("posix brackets"). | | `posixSlashes` | `boolean` | `undefined` | Convert all slashes in file paths to forward slashes. This does not convert slashes in the glob pattern itself | | `prepend` | `string` | `undefined` | String to prepend to the generated regex used for matching. | | `regex` | `boolean` | `false` | Use regular expression rules for `+` (instead of matching literal `+`), and for stars that follow closing parentheses or brackets (as in `)*` and `]*`). | | `strictBrackets` | `boolean` | `undefined` | Throw an error if brackets, braces, or parens are imbalanced. | | `strictSlashes` | `boolean` | `undefined` | When true, picomatch won't match trailing slashes with single stars. | | `unescape` | `boolean` | `undefined` | Remove preceding backslashes from escaped glob characters before creating the regular expression to perform matches. | | `unixify` | `boolean` | `undefined` | Alias for `posixSlashes`, for backwards compatitibility. | ## Options Examples ### options.basename Allow glob patterns without slashes to match a file path based on its basename. Same behavior as [minimatch](https://github.com/isaacs/minimatch) option `matchBase`. **Type**: `Boolean` **Default**: `false` **Example** ```js micromatch(['a/b.js', 'a/c.md'], '*.js'); //=> [] micromatch(['a/b.js', 'a/c.md'], '*.js', { basename: true }); //=> ['a/b.js'] ``` ### options.bash Enabled by default, this option enforces bash-like behavior with stars immediately following a bracket expression. Bash bracket expressions are similar to regex character classes, but unlike regex, a star following a bracket expression **does not repeat the bracketed characters**. Instead, the star is treated the same as any other star. **Type**: `Boolean` **Default**: `true` **Example** ```js const files = ['abc', 'ajz']; console.log(micromatch(files, '[a-c]*')); //=> ['abc', 'ajz'] console.log(micromatch(files, '[a-c]*', { bash: false })); ``` ### options.expandRange **Type**: `function` **Default**: `undefined` Custom function for expanding ranges in brace patterns. The [fill-range](https://github.com/jonschlinkert/fill-range) library is ideal for this purpose, or you can use custom code to do whatever you need. **Example** The following example shows how to create a glob that matches a numeric folder name between `01` and `25`, with leading zeros. 
```js const fill = require('fill-range'); const regex = micromatch.makeRe('foo/{01..25}/bar', { expandRange(a, b) { return `(${fill(a, b, { toRegex: true })})`; } }); console.log(regex) //=> /^(?:foo\/((?:0[1-9]|1[0-9]|2[0-5]))\/bar)$/ console.log(regex.test('foo/00/bar')) // false console.log(regex.test('foo/01/bar')) // true console.log(regex.test('foo/10/bar')) // true console.log(regex.test('foo/22/bar')) // true console.log(regex.test('foo/25/bar')) // true console.log(regex.test('foo/26/bar')) // false ``` ### options.format **Type**: `function` **Default**: `undefined` Custom function for formatting strings before they're matched. **Example** ```js // strip leading './' from strings const format = str => str.replace(/^\.\//, ''); const isMatch = picomatch('foo/*.js', { format }); console.log(isMatch('./foo/bar.js')) //=> true ``` ### options.ignore String or array of glob patterns to match files to ignore. **Type**: `String|Array` **Default**: `undefined` ```js const isMatch = micromatch.matcher('*', { ignore: 'f*' }); console.log(isMatch('foo')) //=> false console.log(isMatch('bar')) //=> true console.log(isMatch('baz')) //=> true ``` ### options.matchBase Alias for [options.basename](#options-basename). ### options.noextglob Disable extglob support, so that [extglobs](#extglobs) are regarded as literal characters. **Type**: `Boolean` **Default**: `undefined` **Examples** ```js console.log(micromatch(['a/z', 'a/b', 'a/!(z)'], 'a/!(z)')); //=> ['a/b', 'a/!(z)'] console.log(micromatch(['a/z', 'a/b', 'a/!(z)'], 'a/!(z)', { noextglob: true })); //=> ['a/!(z)'] (matches only as literal characters) ``` ### options.nonegate Disallow negation (`!`) patterns, and treat leading `!` as a literal character to match. **Type**: `Boolean` **Default**: `undefined` ### options.noglobstar Disable matching with globstars (`**`). **Type**: `Boolean` **Default**: `undefined` ```js micromatch(['a/b', 'a/b/c', 'a/b/c/d'], 'a/**'); //=> ['a/b', 'a/b/c', 'a/b/c/d'] micromatch(['a/b', 'a/b/c', 'a/b/c/d'], 'a/**', {noglobstar: true}); //=> ['a/b'] ``` ### options.nonull Alias for [options.nullglob](#options-nullglob). ### options.nullglob If `true`, when no matches are found the actual (arrayified) glob pattern is returned instead of an empty array. Same behavior as [minimatch](https://github.com/isaacs/minimatch) option `nonull`. **Type**: `Boolean` **Default**: `undefined` ### options.onIgnore ```js const onIgnore = ({ glob, regex, input, output }) => { console.log({ glob, regex, input, output }); // { glob: '*', regex: /^(?:(?!\.)(?=.)[^\/]*?\/?)$/, input: 'foo', output: 'foo' } }; const isMatch = micromatch.matcher('*', { onIgnore, ignore: 'f*' }); isMatch('foo'); isMatch('bar'); isMatch('baz'); ``` ### options.onMatch ```js const onMatch = ({ glob, regex, input, output }) => { console.log({ input, output }); // { input: 'some\\path', output: 'some/path' } // { input: 'some\\path', output: 'some/path' } // { input: 'some\\path', output: 'some/path' } }; const isMatch = micromatch.matcher('**', { onMatch, posixSlashes: true }); isMatch('some\\path'); isMatch('some\\path'); isMatch('some\\path'); ``` ### options.onResult ```js const onResult = ({ glob, regex, input, output }) => { console.log({ glob, regex, input, output }); }; const isMatch = micromatch('*', { onResult, ignore: 'f*' }); isMatch('foo'); isMatch('bar'); isMatch('baz'); ``` ### options.posixSlashes Convert path separators on returned files to posix/unix-style forward slashes. Aliased as `unixify` for backwards compatibility. 
**Type**: `Boolean`

**Default**: `true` on windows, `false` everywhere else.

**Example**

```js
console.log(micromatch.match(['a\\b\\c'], 'a/**'));
//=> ['a/b/c']

console.log(micromatch.match(['a\\b\\c'], 'a/**', { posixSlashes: false }));
//=> ['a\\b\\c']
```

### options.unescape

Remove backslashes from escaped glob characters before creating the regular expression to perform matches.

**Type**: `Boolean`

**Default**: `undefined`

**Example**

In this example we want to match a literal `*`:

```js
console.log(micromatch.match(['abc', 'a\\*c'], 'a\\*c'));
//=> ['a\\*c']

console.log(micromatch.match(['abc', 'a\\*c'], 'a\\*c', { unescape: true }));
//=> ['a*c']
```

<br>
<br>

## Extended globbing

Micromatch supports the following extended globbing features.

### Extglobs

Extended globbing, as described by the bash man page:

| **pattern** | **regex equivalent** | **description** |
| --- | --- | --- |
| `?(pattern)` | `(pattern)?` | Matches zero or one occurrence of the given patterns |
| `*(pattern)` | `(pattern)*` | Matches zero or more occurrences of the given patterns |
| `+(pattern)` | `(pattern)+` | Matches one or more occurrences of the given patterns |
| `@(pattern)` | `(pattern)` <sup>*</sup> | Matches one of the given patterns |
| `!(pattern)` | N/A (equivalent regex is much more complicated) | Matches anything except one of the given patterns |

<sup><strong>*</strong></sup> Note that `@` isn't a regex character.

### Braces

Brace patterns can be used to match specific ranges or sets of characters.

**Example**

The pattern `{f,b}*/{1..3}/{b,q}*` would match any of the following strings:

```
foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux
```

Visit [braces](https://github.com/micromatch/braces) to see the full range of features and options related to brace expansion, or to create brace matching or expansion related issues.

### Regex character classes

Given the list: `['a.js', 'b.js', 'c.js', 'd.js', 'E.js']`:

* `[ac].js`: matches both `a` and `c`, returning `['a.js', 'c.js']`
* `[b-d].js`: matches from `b` to `d`, returning `['b.js', 'c.js', 'd.js']`
* `[A-Z].js`: matches an uppercase letter, returning `['E.js']`

Learn about [regex character classes](http://www.regular-expressions.info/charclass.html).

### Regex groups

Given `['a.js', 'b.js', 'c.js', 'd.js', 'E.js']`:

* `(a|c).js`: would match either `a` or `c`, returning `['a.js', 'c.js']`
* `(b|d).js`: would match either `b` or `d`, returning `['b.js', 'd.js']`
* `(b|[A-Z]).js`: would match either `b` or an uppercase letter, returning `['b.js', 'E.js']`

As with regex, parens can be nested, so patterns like `((a|b)|c)/b` will work, although brace expansion might be friendlier to use, depending on preference.

### POSIX bracket expressions

POSIX brackets are intended to be more user-friendly than regex character classes. This, of course, is in the eye of the beholder.

**Example**

```js
console.log(micromatch.isMatch('a1', '[[:alpha:][:digit:]]')) //=> true
console.log(micromatch.isMatch('a1', '[[:alpha:][:alpha:]]')) //=> false
```

***

## Notes

### Bash 4.3 parity

Whenever possible, matching behavior is based on the behavior of Bash 4.3, which is mostly consistent with minimatch. However, it's surprising how many edge cases and rabbit holes there are with glob matching, and since there is no real glob specification, and micromatch is more accurate than both Bash and minimatch, there are cases where best-guesses were made for behavior. In a few cases where Bash had no answers, we used wildmatch (used by git) as a fallback.
### Backslashes There is an important, notable difference between minimatch and micromatch _in regards to how backslashes are handled_ in glob patterns. * Micromatch exclusively and explicitly reserves backslashes for escaping characters in a glob pattern, even on windows, which is consistent with bash behavior. _More importantly, unescaping globs can result in unsafe regular expressions_. * Minimatch converts all backslashes to forward slashes, which means you can't use backslashes to escape any characters in your glob patterns. We made this decision for micromatch for a couple of reasons: * Consistency with bash conventions. * Glob patterns are not filepaths. They are a type of [regular language](https://en.wikipedia.org/wiki/Regular_language) that is converted to a JavaScript regular expression. Thus, when forward slashes are defined in a glob pattern, the resulting regular expression will match windows or POSIX path separators just fine. **A note about joining paths to globs** Note that when you pass something like `path.join('foo', '*')` to micromatch, you are creating a filepath and expecting it to still work as a glob pattern. This causes problems on windows, since the `path.sep` is `\\`. In other words, since `\\` is reserved as an escape character in globs, on windows `path.join('foo', '*')` would result in `foo\\*`, which tells micromatch to match `*` as a literal character. This is the same behavior as bash. To solve this, you might be inspired to do something like `'foo\\*'.replace(/\\/g, '/')`, but this causes another, potentially much more serious, problem. ## Benchmarks ### Running benchmarks Install dependencies for running benchmarks: ```sh $ cd bench && npm install ``` Run the benchmarks: ```sh $ npm run bench ``` ### Latest results As of March 24, 2022 (longer bars are better): ```sh # .makeRe star micromatch x 2,232,802 ops/sec ±2.34% (89 runs sampled)) minimatch x 781,018 ops/sec ±6.74% (92 runs sampled)) # .makeRe star; dot=true micromatch x 1,863,453 ops/sec ±0.74% (93 runs sampled) minimatch x 723,105 ops/sec ±0.75% (93 runs sampled) # .makeRe globstar micromatch x 1,624,179 ops/sec ±2.22% (91 runs sampled) minimatch x 1,117,230 ops/sec ±2.78% (86 runs sampled)) # .makeRe globstars micromatch x 1,658,642 ops/sec ±0.86% (92 runs sampled) minimatch x 741,224 ops/sec ±1.24% (89 runs sampled)) # .makeRe with leading star micromatch x 1,525,014 ops/sec ±1.63% (90 runs sampled) minimatch x 561,074 ops/sec ±3.07% (89 runs sampled) # .makeRe - braces micromatch x 172,478 ops/sec ±2.37% (78 runs sampled) minimatch x 96,087 ops/sec ±2.34% (88 runs sampled))) # .makeRe braces - range (expanded) micromatch x 26,973 ops/sec ±0.84% (89 runs sampled) minimatch x 3,023 ops/sec ±0.99% (90 runs sampled)) # .makeRe braces - range (compiled) micromatch x 152,892 ops/sec ±1.67% (83 runs sampled) minimatch x 992 ops/sec ±3.50% (89 runs sampled)d)) # .makeRe braces - nested ranges (expanded) micromatch x 15,816 ops/sec ±13.05% (80 runs sampled) minimatch x 2,953 ops/sec ±1.64% (91 runs sampled) # .makeRe braces - nested ranges (compiled) micromatch x 110,881 ops/sec ±1.85% (82 runs sampled) minimatch x 1,008 ops/sec ±1.51% (91 runs sampled) # .makeRe braces - set (compiled) micromatch x 134,930 ops/sec ±3.54% (63 runs sampled)) minimatch x 43,242 ops/sec ±0.60% (93 runs sampled) # .makeRe braces - nested sets (compiled) micromatch x 94,455 ops/sec ±1.74% (69 runs sampled)) minimatch x 27,720 ops/sec ±1.84% (93 runs sampled)) ``` ## Contributing All contributions are welcome! 
Please read [the contributing guide](.github/contributing.md) to get started. **Bug reports** Please create an issue if you encounter a bug or matching behavior that doesn't seem correct. If you find a matching-related issue, please: * [research existing issues first](../../issues) (open and closed) * visit the [GNU Bash documentation](https://www.gnu.org/software/bash/manual/) to see how Bash deals with the pattern * visit the [minimatch](https://github.com/isaacs/minimatch) documentation to cross-check expected behavior in node.js * if all else fails, since there is no real specification for globs we will probably need to discuss expected behavior and decide how to resolve it. which means any detail you can provide to help with this discussion would be greatly appreciated. **Platform issues** It's important to us that micromatch work consistently on all platforms. If you encounter any platform-specific matching or path related issues, please let us know (pull requests are also greatly appreciated). ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). Please read the [contributing guide](.github/contributing.md) for advice on opening issues, pull requests, and coding standards. </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [braces](https://www.npmjs.com/package/braces): Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support… [more](https://github.com/micromatch/braces) | [homepage](https://github.com/micromatch/braces "Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support for the Bash 4.3 braces specification, without sacrificing speed.") * [expand-brackets](https://www.npmjs.com/package/expand-brackets): Expand POSIX bracket expressions (character classes) in glob patterns. | [homepage](https://github.com/micromatch/expand-brackets "Expand POSIX bracket expressions (character classes) in glob patterns.") * [extglob](https://www.npmjs.com/package/extglob): Extended glob support for JavaScript. Adds (almost) the expressive power of regular expressions to glob… [more](https://github.com/micromatch/extglob) | [homepage](https://github.com/micromatch/extglob "Extended glob support for JavaScript. 
Adds (almost) the expressive power of regular expressions to glob patterns.") * [fill-range](https://www.npmjs.com/package/fill-range): Fill in a range of numbers or letters, optionally passing an increment or `step` to… [more](https://github.com/jonschlinkert/fill-range) | [homepage](https://github.com/jonschlinkert/fill-range "Fill in a range of numbers or letters, optionally passing an increment or `step` to use, or create a regex-compatible range with `options.toRegex`") * [nanomatch](https://www.npmjs.com/package/nanomatch): Fast, minimal glob matcher for node.js. Similar to micromatch, minimatch and multimatch, but complete Bash… [more](https://github.com/micromatch/nanomatch) | [homepage](https://github.com/micromatch/nanomatch "Fast, minimal glob matcher for node.js. Similar to micromatch, minimatch and multimatch, but complete Bash 4.3 wildcard support only (no support for exglobs, posix brackets or braces)") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 512 | [jonschlinkert](https://github.com/jonschlinkert) | | 12 | [es128](https://github.com/es128) | | 9 | [danez](https://github.com/danez) | | 8 | [doowb](https://github.com/doowb) | | 6 | [paulmillr](https://github.com/paulmillr) | | 5 | [mrmlnc](https://github.com/mrmlnc) | | 3 | [DrPizza](https://github.com/DrPizza) | | 2 | [TrySound](https://github.com/TrySound) | | 2 | [mceIdo](https://github.com/mceIdo) | | 2 | [Glazy](https://github.com/Glazy) | | 2 | [MartinKolarik](https://github.com/MartinKolarik) | | 2 | [antonyk](https://github.com/antonyk) | | 2 | [Tvrqvoise](https://github.com/Tvrqvoise) | | 1 | [amilajack](https://github.com/amilajack) | | 1 | [Cslove](https://github.com/Cslove) | | 1 | [devongovett](https://github.com/devongovett) | | 1 | [DianeLooney](https://github.com/DianeLooney) | | 1 | [UltCombo](https://github.com/UltCombo) | | 1 | [frangio](https://github.com/frangio) | | 1 | [joyceerhl](https://github.com/joyceerhl) | | 1 | [juszczykjakub](https://github.com/juszczykjakub) | | 1 | [muescha](https://github.com/muescha) | | 1 | [sebdeckers](https://github.com/sebdeckers) | | 1 | [tomByrer](https://github.com/tomByrer) | | 1 | [fidian](https://github.com/fidian) | | 1 | [curbengh](https://github.com/curbengh) | | 1 | [simlu](https://github.com/simlu) | | 1 | [wtgtybhertgeghgtwtg](https://github.com/wtgtybhertgeghgtwtg) | | 1 | [yvele](https://github.com/yvele) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2022, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). 
*** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on March 24, 2022._ # readable-stream ***Node-core v8.11.1 streams for userland*** [![Build Status](https://travis-ci.org/nodejs/readable-stream.svg?branch=master)](https://travis-ci.org/nodejs/readable-stream) [![NPM](https://nodei.co/npm/readable-stream.png?downloads=true&downloadRank=true)](https://nodei.co/npm/readable-stream/) [![NPM](https://nodei.co/npm-dl/readable-stream.png?&months=6&height=3)](https://nodei.co/npm/readable-stream/) [![Sauce Test Status](https://saucelabs.com/browser-matrix/readable-stream.svg)](https://saucelabs.com/u/readable-stream) ```bash npm install --save readable-stream ``` ***Node-core streams for userland*** This package is a mirror of the Streams2 and Streams3 implementations in Node-core. Full documentation may be found on the [Node.js website](https://nodejs.org/dist/v8.11.1/docs/api/stream.html). If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use **readable-stream** *only* and avoid the *"stream"* module in Node-core, for background see [this blogpost](http://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html). As of version 2.0.0 **readable-stream** uses semantic versioning. # Streams Working Group `readable-stream` is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include: * Addressing stream issues on the Node.js issue tracker. * Authoring and editing stream documentation within the Node.js project. * Reviewing changes to stream subclasses within the Node.js project. * Redirecting changes to streams from the Node.js project to this project. * Assisting in the implementation of stream providers within Node.js. * Recommending versions of `readable-stream` to be included in Node.js. * Messaging about the future of streams to give the community advance notice of changes. <a name="members"></a> ## Team Members * **Chris Dickinson** ([@chrisdickinson](https://github.com/chrisdickinson)) &lt;christopher.s.dickinson@gmail.com&gt; - Release GPG key: 9554F04D7259F04124DE6B476D5A82AC7E37093B * **Calvin Metcalf** ([@calvinmetcalf](https://github.com/calvinmetcalf)) &lt;calvin.metcalf@gmail.com&gt; - Release GPG key: F3EF5F62A87FC27A22E643F714CE4FF5015AA242 * **Rod Vagg** ([@rvagg](https://github.com/rvagg)) &lt;rod@vagg.org&gt; - Release GPG key: DD8F2338BAE7501E3DD5AC78C273792F7D83545D * **Sam Newman** ([@sonewman](https://github.com/sonewman)) &lt;newmansam@outlook.com&gt; * **Mathias Buus** ([@mafintosh](https://github.com/mafintosh)) &lt;mathiasbuus@gmail.com&gt; * **Domenic Denicola** ([@domenic](https://github.com/domenic)) &lt;d@domenic.me&gt; * **Matteo Collina** ([@mcollina](https://github.com/mcollina)) &lt;matteo.collina@gmail.com&gt; - Release GPG key: 3ABC01543F22DD2239285CDD818674489FBC127E * **Irina Shestak** ([@lrlna](https://github.com/lrlna)) &lt;shestak.irina@gmail.com&gt; # node-gyp-build > Build tool and bindings loader for [`node-gyp`][node-gyp] that supports prebuilds. ``` npm install node-gyp-build ``` [![Test](https://github.com/prebuild/node-gyp-build/actions/workflows/test.yml/badge.svg)](https://github.com/prebuild/node-gyp-build/actions/workflows/test.yml) Use together with [`prebuildify`][prebuildify] to easily support prebuilds for your native modules. 
## Usage > **Note.** Prebuild names have changed in [`prebuildify@3`][prebuildify] and `node-gyp-build@4`. Please see the documentation below. `node-gyp-build` works similar to [`node-gyp build`][node-gyp] except that it will check if a build or prebuild is present before rebuilding your project. It's main intended use is as an npm install script and bindings loader for native modules that bundle prebuilds using [`prebuildify`][prebuildify]. First add `node-gyp-build` as an install script to your native project ``` js { ... "scripts": { "install": "node-gyp-build" } } ``` Then in your `index.js`, instead of using the [`bindings`](https://www.npmjs.com/package/bindings) module use `node-gyp-build` to load your binding. ``` js var binding = require('node-gyp-build')(__dirname) ``` If you do these two things and bundle prebuilds with [`prebuildify`][prebuildify] your native module will work for most platforms without having to compile on install time AND will work in both node and electron without the need to recompile between usage. Users can override `node-gyp-build` and force compiling by doing `npm install --build-from-source`. Prebuilds will be attempted loaded from `MODULE_PATH/prebuilds/...` and then next `EXEC_PATH/prebuilds/...` (the latter allowing use with `zeit/pkg`) ## Supported prebuild names If so desired you can bundle more specific flavors, for example `musl` builds to support Alpine, or targeting a numbered ARM architecture version. These prebuilds can be bundled in addition to generic prebuilds; `node-gyp-build` will try to find the most specific flavor first. Prebuild filenames are composed of _tags_. The runtime tag takes precedence, as does an `abi` tag over `napi`. For more details on tags, please see [`prebuildify`][prebuildify]. Values for the `libc` and `armv` tags are auto-detected but can be overridden through the `LIBC` and `ARM_VERSION` environment variables, respectively. ## License MIT [prebuildify]: https://github.com/prebuild/prebuildify [node-gyp]: https://www.npmjs.com/package/node-gyp # Optionator <a name="optionator" /> Optionator is a JavaScript/Node.js option parsing and help generation library used by [eslint](http://eslint.org), [Grasp](http://graspjs.com), [LiveScript](http://livescript.net), [esmangle](https://github.com/estools/esmangle), [escodegen](https://github.com/estools/escodegen), and [many more](https://www.npmjs.com/browse/depended/optionator). For an online demo, check out the [Grasp online demo](http://www.graspjs.com/#demo). [About](#about) &middot; [Usage](#usage) &middot; [Settings Format](#settings-format) &middot; [Argument Format](#argument-format) ## Why? The problem with other option parsers, such as `yargs` or `minimist`, is they just accept all input, valid or not. With Optionator, if you mistype an option, it will give you an error (with a suggestion for what you meant). If you give the wrong type of argument for an option, it will give you an error rather than supplying the wrong input to your application. $ cmd --halp Invalid option '--halp' - perhaps you meant '--help'? $ cmd --count str Invalid value for option 'count' - expected type Int, received value: str. Other helpful features include reformatting the help text based on the size of the console, so that it fits even if the console is narrow, and accepting not just an array (eg. process.argv), but a string or object as well, making things like testing much easier. 
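To make this concrete, here is a minimal sketch (the option set is made up for illustration, and it assumes `parse` throws a descriptive `Error` for unrecognized options, matching the CLI output shown above):

```js
// a small, hypothetical option set
const optionator = require('optionator')({
  options: [
    { option: 'help', alias: 'h', type: 'Boolean', description: 'displays help' },
    { option: 'count', alias: 'c', type: 'Int', description: 'number of things' }
  ]
});

try {
  // string input is accepted in addition to process.argv-style arrays,
  // which keeps tests like this one short
  optionator.parse('--halp');
} catch (err) {
  console.error(err.message);
  // e.g. "Invalid option '--halp' - perhaps you meant '--help'?"
}
```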
## About Optionator uses [type-check](https://github.com/gkz/type-check) and [levn](https://github.com/gkz/levn) behind the scenes to cast and verify input according the specified types. MIT license. Version 0.9.1 npm install optionator For updates on Optionator, [follow me on twitter](https://twitter.com/gkzahariev). Optionator is a Node.js module, but can be used in the browser as well if packed with webpack/browserify. ## Usage `require('optionator');` returns a function. It has one property, `VERSION`, the current version of the library as a string. This function is called with an object specifying your options and other information, see the [settings format section](#settings-format). This in turn returns an object with three properties, `parse`, `parseArgv`, `generateHelp`, and `generateHelpForOption`, which are all functions. ```js var optionator = require('optionator')({ prepend: 'Usage: cmd [options]', append: 'Version 1.0.0', options: [{ option: 'help', alias: 'h', type: 'Boolean', description: 'displays help' }, { option: 'count', alias: 'c', type: 'Int', description: 'number of things', example: 'cmd --count 2' }] }); var options = optionator.parseArgv(process.argv); if (options.help) { console.log(optionator.generateHelp()); } ... ``` ### parse(input, parseOptions) `parse` processes the `input` according to your settings, and returns an object with the results. ##### arguments * input - `[String] | Object | String` - the input you wish to parse * parseOptions - `{slice: Int}` - all options optional - `slice` specifies how much to slice away from the beginning if the input is an array or string - by default `0` for string, `2` for array (works with `process.argv`) ##### returns `Object` - the parsed options, each key is a camelCase version of the option name (specified in dash-case), and each value is the processed value for that option. Positional values are in an array under the `_` key. ##### example ```js parse(['node', 't.js', '--count', '2', 'positional']); // {count: 2, _: ['positional']} parse('--count 2 positional'); // {count: 2, _: ['positional']} parse({count: 2, _:['positional']}); // {count: 2, _: ['positional']} ``` ### parseArgv(input) `parseArgv` works exactly like `parse`, but only for array input and it slices off the first two elements. ##### arguments * input - `[String]` - the input you wish to parse ##### returns See "returns" section in "parse" ##### example ```js parseArgv(process.argv); ``` ### generateHelp(helpOptions) `generateHelp` produces help text based on your settings. ##### arguments * helpOptions - `{showHidden: Boolean, interpolate: Object}` - all options optional - `showHidden` specifies whether to show options with `hidden: true` specified, by default it is `false` - `interpolate` specify data to be interpolated in `prepend` and `append` text, `{{key}}` is the format - eg. `generateHelp({interpolate:{version: '0.4.2'}})`, will change this `append` text: `Version {{version}}` to `Version 0.4.2` ##### returns `String` - the generated help text ##### example ```js generateHelp(); /* "Usage: cmd [options] positional -h, --help displays help -c, --count Int number of things Version 1.0.0 "*/ ``` ### generateHelpForOption(optionName) `generateHelpForOption` produces expanded help text for the specified with `optionName` option. If an `example` was specified for the option, it will be displayed, and if a `longDescription` was specified, it will display that instead of the `description`. 
##### arguments * optionName - `String` - the name of the option to display ##### returns `String` - the generated help text for the option ##### example ```js generateHelpForOption('count'); /* "-c, --count Int description: number of things example: cmd --count 2 "*/ ``` ## Settings Format When your `require('optionator')`, you get a function that takes in a settings object. This object has the type: { prepend: String, append: String, options: [{heading: String} | { option: String, alias: [String] | String, type: String, enum: [String], default: String, restPositional: Boolean, required: Boolean, overrideRequired: Boolean, dependsOn: [String] | String, concatRepeatedArrays: Boolean | (Boolean, Object), mergeRepeatedObjects: Boolean, description: String, longDescription: String, example: [String] | String }], helpStyle: { aliasSeparator: String, typeSeparator: String, descriptionSeparator: String, initialIndent: Int, secondaryIndent: Int, maxPadFactor: Number }, mutuallyExclusive: [[String | [String]]], concatRepeatedArrays: Boolean | (Boolean, Object), // deprecated, set in defaults object mergeRepeatedObjects: Boolean, // deprecated, set in defaults object positionalAnywhere: Boolean, typeAliases: Object, defaults: Object } All of the properties are optional (the `Maybe` has been excluded for brevities sake), except for having either `heading: String` or `option: String` in each object in the `options` array. ### Top Level Properties * `prepend` is an optional string to be placed before the options in the help text * `append` is an optional string to be placed after the options in the help text * `options` is a required array specifying your options and headings, the options and headings will be displayed in the order specified * `helpStyle` is an optional object which enables you to change the default appearance of some aspects of the help text * `mutuallyExclusive` is an optional array of arrays of either strings or arrays of strings. The top level array is a list of rules, each rule is a list of elements - each element can be either a string (the name of an option), or a list of strings (a group of option names) - there will be an error if more than one element is present * `concatRepeatedArrays` see description under the "Option Properties" heading - use at the top level is deprecated, if you want to set this for all options, use the `defaults` property * `mergeRepeatedObjects` see description under the "Option Properties" heading - use at the top level is deprecated, if you want to set this for all options, use the `defaults` property * `positionalAnywhere` is an optional boolean (defaults to `true`) - when `true` it allows positional arguments anywhere, when `false`, all arguments after the first positional one are taken to be positional as well, even if they look like a flag. For example, with `positionalAnywhere: false`, the arguments `--flag --boom 12 --crack` would have two positional arguments: `12` and `--crack` * `typeAliases` is an optional object, it allows you to set aliases for types, eg. `{Path: 'String'}` would allow you to use the type `Path` as an alias for the type `String` * `defaults` is an optional object following the option properties format, which specifies default values for all options. A default will be overridden if manually set. 
For example, you can do `default: { type: "String" }` to set the default type of all options to `String`, and then override that default in an individual option by setting the `type` property #### Heading Properties * `heading` a required string, the name of the heading #### Option Properties * `option` the required name of the option - use dash-case, without the leading dashes * `alias` is an optional string or array of strings which specify any aliases for the option * `type` is a required string in the [type check](https://github.com/gkz/type-check) [format](https://github.com/gkz/type-check#type-format), this will be used to cast the inputted value and validate it * `enum` is an optional array of strings, each string will be parsed by [levn](https://github.com/gkz/levn) - the argument value must be one of the resulting values - each potential value must validate against the specified `type` * `default` is a optional string, which will be parsed by [levn](https://github.com/gkz/levn) and used as the default value if none is set - the value must validate against the specified `type` * `restPositional` is an optional boolean - if set to `true`, everything after the option will be taken to be a positional argument, even if it looks like a named argument * `required` is an optional boolean - if set to `true`, the option parsing will fail if the option is not defined * `overrideRequired` is a optional boolean - if set to `true` and the option is used, and there is another option which is required but not set, it will override the need for the required option and there will be no error - this is useful if you have required options and want to use `--help` or `--version` flags * `concatRepeatedArrays` is an optional boolean or tuple with boolean and options object (defaults to `false`) - when set to `true` and an option contains an array value and is repeated, the subsequent values for the flag will be appended rather than overwriting the original value - eg. option `g` of type `[String]`: `-g a -g b -g c,d` will result in `['a','b','c','d']` You can supply an options object by giving the following value: `[true, options]`. The one currently supported option is `oneValuePerFlag`, this only allows one array value per flag. This is useful if your potential values contain a comma. * `mergeRepeatedObjects` is an optional boolean (defaults to `false`) - when set to `true` and an option contains an object value and is repeated, the subsequent values for the flag will be merged rather than overwriting the original value - eg. 
option `g` of type `Object`: `-g a:1 -g b:2 -g c:3,d:4` will result in `{a: 1, b: 2, c: 3, d: 4}` * `dependsOn` is an optional string or array of strings - if simply a string (the name of another option), it will make sure that that other option is set, if an array of strings, depending on whether `'and'` or `'or'` is first, it will either check whether all (`['and', 'option-a', 'option-b']`), or at least one (`['or', 'option-a', 'option-b']`) other options are set * `description` is an optional string, which will be displayed next to the option in the help text * `longDescription` is an optional string, it will be displayed instead of the `description` when `generateHelpForOption` is used * `example` is an optional string or array of strings with example(s) for the option - these will be displayed when `generateHelpForOption` is used #### Help Style Properties * `aliasSeparator` is an optional string, separates multiple names from each other - default: ' ,' * `typeSeparator` is an optional string, separates the type from the names - default: ' ' * `descriptionSeparator` is an optional string , separates the description from the padded name and type - default: ' ' * `initialIndent` is an optional int - the amount of indent for options - default: 2 * `secondaryIndent` is an optional int - the amount of indent if wrapped fully (in addition to the initial indent) - default: 4 * `maxPadFactor` is an optional number - affects the default level of padding for the names/type, it is multiplied by the average of the length of the names/type - default: 1.5 ## Argument Format At the highest level there are two types of arguments: named, and positional. Name arguments of any length are prefixed with `--` (eg. `--go`), and those of one character may be prefixed with either `--` or `-` (eg. `-g`). There are two types of named arguments: boolean flags (eg. `--problemo`, `-p`) which take no value and result in a `true` if they are present, the falsey `undefined` if they are not present, or `false` if present and explicitly prefixed with `no` (eg. `--no-problemo`). Named arguments with values (eg. `--tseries 800`, `-t 800`) are the other type. If the option has a type `Boolean` it will automatically be made into a boolean flag. Any other type results in a named argument that takes a value. For more information about how to properly set types to get the value you want, take a look at the [type check](https://github.com/gkz/type-check) and [levn](https://github.com/gkz/levn) pages. You can group single character arguments that use a single `-`, however all except the last must be boolean flags (which take no value). The last may be a boolean flag, or an argument which takes a value - eg. `-ba 2` is equivalent to `-b -a 2`. Positional arguments are all those values which do not fall under the above - they can be anywhere, not just at the end. For example, in `cmd -b one -a 2 two` where `b` is a boolean flag, and `a` has the type `Number`, there are two positional arguments, `one` and `two`. Everything after an `--` is positional, even if it looks like a named argument. You may optionally use `=` to separate option names from values, for example: `--count=2`. If you specify the option `NUM`, then any argument using a single `-` followed by a number will be valid and will set the value of `NUM`. Eg. `-2` will be parsed into `NUM: 2`. If duplicate named arguments are present, the last one will be taken. 
## Technical About `optionator` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It uses [levn](https://github.com/gkz/levn) to cast arguments to their specified type, and uses [type-check](https://github.com/gkz/type-check) to validate values. It also uses the [prelude.ls](http://preludels.com/) library. An ini format parser and serializer for node. Sections are treated as nested objects. Items before the first heading are saved on the object directly. ## Usage Consider an ini-file `config.ini` that looks like this: ; this comment is being ignored scope = global [database] user = dbuser password = dbpassword database = use_this_database [paths.default] datadir = /var/lib/data array[] = first value array[] = second value array[] = third value You can read, manipulate and write the ini-file like so: var fs = require('fs') , ini = require('ini') var config = ini.parse(fs.readFileSync('./config.ini', 'utf-8')) config.scope = 'local' config.database.database = 'use_another_database' config.paths.default.tmpdir = '/tmp' delete config.paths.default.datadir config.paths.default.array.push('fourth value') fs.writeFileSync('./config_modified.ini', ini.stringify(config, { section: 'section' })) This will result in a file called `config_modified.ini` being written to the filesystem with the following content: [section] scope=local [section.database] user=dbuser password=dbpassword database=use_another_database [section.paths.default] tmpdir=/tmp array[]=first value array[]=second value array[]=third value array[]=fourth value ## API ### decode(inistring) Decode the ini-style formatted `inistring` into a nested object. ### parse(inistring) Alias for `decode(inistring)` ### encode(object, [options]) Encode the object `object` into an ini-style formatted string. If the optional parameter `section` is given, then all top-level properties of the object are put into this section and the `section`-string is prepended to all sub-sections, see the usage example above. The `options` object may contain the following: * `section` A string which will be the first `section` in the encoded ini data. Defaults to none. * `whitespace` Boolean to specify whether to put whitespace around the `=` character. By default, whitespace is omitted, to be friendly to some persnickety old parsers that don't tolerate it well. But some find that it's more human-readable and pretty with the whitespace. For backwards compatibility reasons, if a `string` options is passed in, then it is assumed to be the `section` value. ### stringify(object, [options]) Alias for `encode(object, [options])` ### safe(val) Escapes the string `val` such that it is safe to be used as a key or value in an ini-file. Basically escapes quotes. For example ini.safe('"unsafe string"') would result in "\"unsafe string\"" ### unsafe(val) Unescapes the string `val` # toidentifier [![NPM Version][npm-image]][npm-url] [![NPM Downloads][downloads-image]][downloads-url] [![Build Status][github-actions-ci-image]][github-actions-ci-url] [![Test Coverage][codecov-image]][codecov-url] > Convert a string of words to a JavaScript identifier ## Install This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). 
Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): ```bash $ npm install toidentifier ``` ## Example ```js var toIdentifier = require('toidentifier') console.log(toIdentifier('Bad Request')) // => "BadRequest" ``` ## API This CommonJS module exports a single default function: `toIdentifier`. ### toIdentifier(string) Given a string as the argument, it will be transformed according to the following rules and the new string will be returned: 1. Split into words separated by space characters (`0x20`). 2. Upper case the first character of each word. 3. Join the words together with no separator. 4. Remove all non-word (`[0-9a-z_]`) characters. ## License [MIT](LICENSE) [codecov-image]: https://img.shields.io/codecov/c/github/component/toidentifier.svg [codecov-url]: https://codecov.io/gh/component/toidentifier [downloads-image]: https://img.shields.io/npm/dm/toidentifier.svg [downloads-url]: https://npmjs.org/package/toidentifier [github-actions-ci-image]: https://img.shields.io/github/workflow/status/component/toidentifier/ci/master?label=ci [github-actions-ci-url]: https://github.com/component/toidentifier?query=workflow%3Aci [npm-image]: https://img.shields.io/npm/v/toidentifier.svg [npm-url]: https://npmjs.org/package/toidentifier ## [npm]: https://www.npmjs.com/ [yarn]: https://yarnpkg.com/ # cliui [![Build Status](https://travis-ci.org/yargs/cliui.svg)](https://travis-ci.org/yargs/cliui) [![Coverage Status](https://coveralls.io/repos/yargs/cliui/badge.svg?branch=)](https://coveralls.io/r/yargs/cliui?branch=) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) easily create complex multi-column command-line-interfaces. ## Example ```js var ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 2, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. 
* **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. # which-module > Find the module object for something that was require()d [![Build Status](https://travis-ci.org/nexdrew/which-module.svg?branch=master)](https://travis-ci.org/nexdrew/which-module) [![Coverage Status](https://coveralls.io/repos/github/nexdrew/which-module/badge.svg?branch=master)](https://coveralls.io/github/nexdrew/which-module?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) Find the `module` object in `require.cache` for something that was `require()`d or `import`ed - essentially a reverse `require()` lookup. Useful for libs that want to e.g. lookup a filename for a module or submodule that it did not `require()` itself. ## Install and Usage ``` npm install --save which-module ``` ```js const whichModule = require('which-module') console.log(whichModule(require('something'))) // Module { // id: '/path/to/project/node_modules/something/index.js', // exports: [Function], // parent: ..., // filename: '/path/to/project/node_modules/something/index.js', // loaded: true, // children: [], // paths: [ '/path/to/project/node_modules/something/node_modules', // '/path/to/project/node_modules', // '/path/to/node_modules', // '/path/node_modules', // '/node_modules' ] } ``` ## API ### `whichModule(exported)` Return the [`module` object](https://nodejs.org/api/modules.html#modules_the_module_object), if any, that represents the given argument in the `require.cache`. `exported` can be anything that was previously `require()`d or `import`ed as a module, submodule, or dependency - which means `exported` is identical to the `module.exports` returned by this method. If `exported` did not come from the `exports` of a `module` in `require.cache`, then this method returns `null`. ## License ISC © Contributors # is-typedarray [![locked](http://badges.github.io/stability-badges/dist/locked.svg)](http://github.com/badges/stability-badges) Detect whether or not an object is a [Typed Array](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Typed_arrays). ## Usage [![NPM](https://nodei.co/npm/is-typedarray.png)](https://nodei.co/npm/is-typedarray/) ### isTypedArray(array) Returns `true` when array is a Typed Array, and `false` when it is not. ## License MIT. See [LICENSE.md](http://github.com/hughsk/is-typedarray/blob/master/LICENSE.md) for details. # type-check [![Build Status](https://travis-ci.org/gkz/type-check.png?branch=master)](https://travis-ci.org/gkz/type-check) <a name="type-check" /> `type-check` is a library which allows you to check the types of JavaScript values at runtime with a Haskell like type syntax. It is great for checking external input, for testing, or even for adding a bit of safety to your internal code. It is a major component of [levn](https://github.com/gkz/levn). MIT license. Version 0.4.0. Check out the [demo](http://gkz.github.io/type-check/). For updates on `type-check`, [follow me on twitter](https://twitter.com/gkzahariev). 
npm install type-check ## Quick Examples ```js // Basic types: var typeCheck = require('type-check').typeCheck; typeCheck('Number', 1); // true typeCheck('Number', 'str'); // false typeCheck('Error', new Error); // true typeCheck('Undefined', undefined); // true // Comment typeCheck('count::Number', 1); // true // One type OR another type: typeCheck('Number | String', 2); // true typeCheck('Number | String', 'str'); // true // Wildcard, matches all types: typeCheck('*', 2) // true // Array, all elements of a single type: typeCheck('[Number]', [1, 2, 3]); // true typeCheck('[Number]', [1, 'str', 3]); // false // Tuples, or fixed length arrays with elements of different types: typeCheck('(String, Number)', ['str', 2]); // true typeCheck('(String, Number)', ['str']); // false typeCheck('(String, Number)', ['str', 2, 5]); // false // Object properties: typeCheck('{x: Number, y: Boolean}', {x: 2, y: false}); // true typeCheck('{x: Number, y: Boolean}', {x: 2}); // false typeCheck('{x: Number, y: Maybe Boolean}', {x: 2}); // true typeCheck('{x: Number, y: Boolean}', {x: 2, y: false, z: 3}); // false typeCheck('{x: Number, y: Boolean, ...}', {x: 2, y: false, z: 3}); // true // A particular type AND object properties: typeCheck('RegExp{source: String, ...}', /re/i); // true typeCheck('RegExp{source: String, ...}', {source: 're'}); // false // Custom types: var opt = {customTypes: {Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; }}}}; typeCheck('Even', 2, opt); // true // Nested: var type = '{a: (String, [Number], {y: Array, ...}), b: Error{message: String, ...}}' typeCheck(type, {a: ['hi', [1, 2, 3], {y: [1, 'ms']}], b: new Error('oh no')}); // true ``` Check out the [type syntax format](#syntax) and [guide](#guide). ## Usage `require('type-check');` returns an object that exposes four properties. `VERSION` is the current version of the library as a string. `typeCheck`, `parseType`, and `parsedTypeCheck` are functions. ```js // typeCheck(type, input, options); typeCheck('Number', 2); // true // parseType(type); var parsedType = parseType('Number'); // object // parsedTypeCheck(parsedType, input, options); parsedTypeCheck(parsedType, 2); // true ``` ### typeCheck(type, input, options) `typeCheck` checks a JavaScript value `input` against `type` written in the [type format](#type-format) (and taking account the optional `options`) and returns whether the `input` matches the `type`. ##### arguments * type - `String` - the type written in the [type format](#type-format) which to check against * input - `*` - any JavaScript value, which is to be checked against the type * options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types) ##### returns `Boolean` - whether the input matches the type ##### example ```js typeCheck('Number', 2); // true ``` ### parseType(type) `parseType` parses string `type` written in the [type format](#type-format) into an object representing the parsed type. ##### arguments * type - `String` - the type written in the [type format](#type-format) which to parse ##### returns `Object` - an object in the parsed type format representing the parsed type ##### example ```js parseType('Number'); // [{type: 'Number'}] ``` ### parsedTypeCheck(parsedType, input, options) `parsedTypeCheck` checks a JavaScript value `input` against parsed `type` in the parsed type format (and taking account the optional `options`) and returns whether the `input` matches the `type`. 
Use this in conjunction with `parseType` if you are going to use a type more than once. ##### arguments * type - `Object` - the type in the parsed type format which to check against * input - `*` - any JavaScript value, which is to be checked against the type * options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types) ##### returns `Boolean` - whether the input matches the type ##### example ```js parsedTypeCheck([{type: 'Number'}], 2); // true var parsedType = parseType('String'); parsedTypeCheck(parsedType, 'str'); // true ``` <a name="type-format" /> ## Type Format ### Syntax White space is ignored. The root node is a __Types__. * __Identifier__ = `[\$\w]+` - a group of any lower or upper case letters, numbers, underscores, or dollar signs - eg. `String` * __Type__ = an `Identifier`, an `Identifier` followed by a `Structure`, just a `Structure`, or a wildcard `*` - eg. `String`, `Object{x: Number}`, `{x: Number}`, `Array{0: String, 1: Boolean, length: Number}`, `*` * __Types__ = optionally a comment (an `Identifier` followed by a `::`), optionally the identifier `Maybe`, one or more `Type`, separated by `|` - eg. `Number`, `String | Date`, `Maybe Number`, `Maybe Boolean | String` * __Structure__ = `Fields`, or a `Tuple`, or an `Array` - eg. `{x: Number}`, `(String, Number)`, `[Date]` * __Fields__ = a `{`, followed one or more `Field` separated by a comma `,` (trailing comma `,` is permitted), optionally an `...` (always preceded by a comma `,`), followed by a `}` - eg. `{x: Number, y: String}`, `{k: Function, ...}` * __Field__ = an `Identifier`, followed by a colon `:`, followed by `Types` - eg. `x: Date | String`, `y: Boolean` * __Tuple__ = a `(`, followed by one or more `Types` separated by a comma `,` (trailing comma `,` is permitted), followed by a `)` - eg `(Date)`, `(Number, Date)` * __Array__ = a `[` followed by exactly one `Types` followed by a `]` - eg. `[Boolean]`, `[Boolean | Null]` ### Guide `type-check` uses `Object.toString` to find out the basic type of a value. Specifically, ```js {}.toString.call(VALUE).slice(8, -1) {}.toString.call(true).slice(8, -1) // 'Boolean' ``` A basic type, eg. `Number`, uses this check. This is much more versatile than using `typeof` - for example, with `document`, `typeof` produces `'object'` which isn't that useful, and our technique produces `'HTMLDocument'`. You may check for multiple types by separating types with a `|`. The checker proceeds from left to right, and passes if the value is any of the types - eg. `String | Boolean` first checks if the value is a string, and then if it is a boolean. If it is none of those, then it returns false. Adding a `Maybe` in front of a list of multiple types is the same as also checking for `Null` and `Undefined` - eg. `Maybe String` is equivalent to `Undefined | Null | String`. You may add a comment to remind you of what the type is for by following an identifier with a `::` before a type (or multiple types). The comment is simply thrown out. The wildcard `*` matches all types. There are three types of structures for checking the contents of a value: 'fields', 'tuple', and 'array'. If used by itself, a 'fields' structure will pass with any type of object as long as it is an instance of `Object` and the properties pass - this allows for duck typing - eg. `{x: Boolean}`. To check if the properties pass, and the value is of a certain type, you can specify the type - eg. `Error{message: String}`. 
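A brief sketch of the duck-typing behavior just described, consistent with the Quick Examples earlier (the values are made up):

```js
var typeCheck = require('type-check').typeCheck;

// a bare 'fields' structure duck-types any object with a boolean `x`
typeCheck('{x: Boolean}', { x: true });           // true
typeCheck('{x: Boolean}', { x: true, extra: 1 }); // false (extra property, no `...`)

// prefix an identifier to also require the value's basic type
typeCheck('Error{message: String, ...}', new Error('oh no'));   // true
typeCheck('Error{message: String, ...}', { message: 'oh no' }); // false (not an Error)
```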
If you want to make a field optional, you can simply use `Maybe` - eg. `{x: Boolean, y: Maybe String}` will still pass if `y` is undefined (or null). If you don't care if the value has properties beyond what you have specified, you can use the 'etc' operator `...` - eg. `{x: Boolean, ...}` will match an object with an `x` property that is a boolean, and with zero or more other properties. For an array, you must specify one or more types (separated by `|`) - it will pass for something of any length as long as each element passes the types provided - eg. `[Number]`, `[Number | String]`. A tuple checks for a fixed number of elements, each of a potentially different type. Each element is separated by a comma - eg. `(String, Number)`. An array and tuple structure check that the value is of type `Array` by default, but if another type is specified, they will check for that instead - eg. `Int32Array[Number]`. You can use the wildcard `*` to search for any type at all. Check out the [type precedence](https://github.com/zaboco/type-precedence) library for type-check. ## Options Options is an object. It is an optional parameter to the `typeCheck` and `parsedTypeCheck` functions. The only current option is `customTypes`. <a name="custom-types" /> ### Custom Types __Example:__ ```js var options = { customTypes: { Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; } } } }; typeCheck('Even', 2, options); // true typeCheck('Even', 3, options); // false ``` `customTypes` allows you to set up custom types for validation. The value of this is an object. The keys of the object are the types you will be matching. Each value of the object will be an object having a `typeOf` property - a string, and `validate` property - a function. The `typeOf` property is the type the value should be (optional - if not set only `validate` will be used), and `validate` is a function which should return true if the value is of that type. `validate` receives one parameter, which is the value that we are checking. ## Technical About `type-check` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It also uses the [prelude.ls](http://preludels.com/) library. # graceful-fs graceful-fs functions as a drop-in replacement for the fs module, making various improvements. The improvements are meant to normalize behavior across different platforms and environments, and to make filesystem access more resilient to errors. ## Improvements over [fs module](https://nodejs.org/api/fs.html) * Queues up `open` and `readdir` calls, and retries them once something closes if there is an EMFILE error from too many file descriptors. * fixes `lchmod` for Node versions prior to 0.6.2. * implements `fs.lutimes` if possible. Otherwise it becomes a noop. * ignores `EINVAL` and `EPERM` errors in `chown`, `fchown` or `lchown` if the user isn't root. * makes `lchmod` and `lchown` become noops, if not available. * retries reading a file if `read` results in EAGAIN error. On Windows, it retries renaming a file for up to one second if `EACCESS` or `EPERM` error occurs, likely because antivirus software has locked the directory. ## USAGE ```javascript // use just like fs var fs = require('graceful-fs') // now go and do stuff with it... fs.readFile('some-file-or-whatever', (err, data) => { // Do stuff here. }) ``` ## Sync methods This module cannot intercept or handle `EMFILE` or `ENFILE` errors from sync methods. 
If you use sync methods which open file descriptors then you are responsible for dealing with any errors. This is a known limitation, not a bug. ## Global Patching If you want to patch the global fs module (or any other fs-like module) you can do this: ```javascript // Make sure to read the caveat below. var realFs = require('fs') var gracefulFs = require('graceful-fs') gracefulFs.gracefulify(realFs) ``` This should only ever be done at the top-level application layer, in order to delay on EMFILE errors from any fs-using dependencies. You should **not** do this in a library, because it can cause unexpected delays in other parts of the program. ## Changes This module is fairly stable at this point, and used by a lot of things. That being said, because it implements a subtle behavior change in a core part of the node API, even modest changes can be extremely breaking, and the versioning is thus biased towards bumping the major when in doubt. The main change between major versions has been switching between providing a fully-patched `fs` module vs monkey-patching the node core builtin, and the approach by which a non-monkey-patched `fs` was created. The goal is to trade `EMFILE` errors for slower fs operations. So, if you try to open a zillion files, rather than crashing, `open` operations will be queued up and wait for something else to `close`. There are advantages to each approach. Monkey-patching the fs means that no `EMFILE` errors can possibly occur anywhere in your application, because everything is using the same core `fs` module, which is patched. However, it can also obviously cause undesirable side-effects, especially if the module is loaded multiple times. Implementing a separate-but-identical patched `fs` module is more surgical (and doesn't run the risk of patching multiple times), but also imposes the challenge of keeping in sync with the core module. The current approach loads the `fs` module, and then creates a lookalike object that has all the same methods, except a few that are patched. It is safe to use in all versions of Node from 0.8 through 7.0. ### v4 * Do not monkey-patch the fs module. This module may now be used as a drop-in dep, and users can opt into monkey-patching the fs builtin if their app requires it. ### v3 * Monkey-patch fs, because the eval approach no longer works on recent node. * fixed possible type-error throw if rename fails on windows * verify that we *never* get EMFILE errors * Ignore ENOSYS from chmod/chown * clarify that graceful-fs must be used as a drop-in ### v2.1.0 * Use eval rather than monkey-patching fs. * readdir: Always sort the results * win32: requeue a file if error has an OK status ### v2.0 * A return to monkey patching * wrap process.cwd ### v1.1 * wrap readFile * Wrap fs.writeFile. 
* readdir protection
* Don't clobber the fs builtin
* Handle fs.read EAGAIN errors by trying again
* Expose the curOpen counter
* No-op lchown/lchmod if not implemented
* fs.rename patch only for win32
* Patch fs.rename to handle AV software on Windows
* Close #4 Chown should not fail on einval or eperm if non-root
* Fix isaacs/fstream#1 Only wrap fs one time
* Fix #3 Start at 1024 max files, then back off on EMFILE
* lutimes that doesn't blow up on Linux
* A full-on rewrite using a queue instead of just swallowing the EMFILE error
* Wrap Read/Write streams as well

### 1.0

* Update engines for node 0.6
* Be lstat-graceful on Windows
* first

# binary-install

Install .tar.gz binary applications via npm

## Usage

This library provides a single class `Binary` that takes a download URL and some optional arguments. You **must** provide either `name` or `installDirectory` when creating your `Binary`.

| option           | description                                    |
| ---------------- | ---------------------------------------------- |
| name             | The name of your binary                        |
| installDirectory | A path to the directory to install the binary  |

If an `installDirectory` is not provided, the binary will be installed at your OS-specific config directory. On macOS it defaults to `~/Library/Preferences/${name}-nodejs`.

After your `Binary` has been created, you can run `.install()` to install the binary, and `.run()` to run it.

### Example

This is meant to be used as a library - create your `Binary` with your desired options, then call `.install()` in the `postinstall` of your `package.json`, `.run()` in the `bin` section of your `package.json`, and `.uninstall()` in the `preuninstall` section of your `package.json`. See [this example project](/example) to see how to create an npm package that installs and runs a binary using the GitHub releases API.

# ci-info

Get details about the current Continuous Integration environment.
Please [open an issue](https://github.com/watson/ci-info/issues/new?template=ci-server-not-detected.md) if your CI server isn't properly detected :) [![npm](https://img.shields.io/npm/v/ci-info.svg)](https://www.npmjs.com/package/ci-info) [![Tests](https://github.com/watson/ci-info/workflows/Tests/badge.svg)](https://github.com/watson/ci-info/actions) [![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg?style=flat)](https://github.com/feross/standard) ## Installation ```bash npm install ci-info --save ``` ## Usage ```js var ci = require('ci-info') if (ci.isCI) { console.log('The name of the CI server is:', ci.name) } else { console.log('This program is not running on a CI server') } ``` ## Supported CI tools Officially supported CI servers: | Name | Constant | isPR | | ------------------------------------------------------------------------------- | -------------------- | ---- | | [AWS CodeBuild](https://aws.amazon.com/codebuild/) | `ci.CODEBUILD` | 🚫 | | [AppVeyor](http://www.appveyor.com) | `ci.APPVEYOR` | ✅ | | [Azure Pipelines](https://azure.microsoft.com/en-us/services/devops/pipelines/) | `ci.AZURE_PIPELINES` | ✅ | | [Appcircle](https://appcircle.io/) | `ci.APPCIRCLE` | 🚫 | | [Bamboo](https://www.atlassian.com/software/bamboo) by Atlassian | `ci.BAMBOO` | 🚫 | | [Bitbucket Pipelines](https://bitbucket.org/product/features/pipelines) | `ci.BITBUCKET` | ✅ | | [Bitrise](https://www.bitrise.io/) | `ci.BITRISE` | ✅ | | [Buddy](https://buddy.works/) | `ci.BUDDY` | ✅ | | [Buildkite](https://buildkite.com) | `ci.BUILDKITE` | ✅ | | [CircleCI](http://circleci.com) | `ci.CIRCLE` | ✅ | | [Cirrus CI](https://cirrus-ci.org) | `ci.CIRRUS` | ✅ | | [Codefresh](https://codefresh.io/) | `ci.CODEFRESH` | ✅ | | [Codeship](https://codeship.com) | `ci.CODESHIP` | 🚫 | | [Drone](https://drone.io) | `ci.DRONE` | ✅ | | [dsari](https://github.com/rfinnie/dsari) | `ci.DSARI` | 🚫 | | [Expo Application Services](https://expo.dev/eas) | `ci.EAS_BUILD` | 🚫 | | [GitHub Actions](https://github.com/features/actions/) | `ci.GITHUB_ACTIONS` | ✅ | | [GitLab CI](https://about.gitlab.com/gitlab-ci/) | `ci.GITLAB` | ✅ | | [GoCD](https://www.go.cd/) | `ci.GOCD` | 🚫 | | [Hudson](http://hudson-ci.org) | `ci.HUDSON` | 🚫 | | [Jenkins CI](https://jenkins-ci.org) | `ci.JENKINS` | ✅ | | [LayerCI](https://layerci.com/) | `ci.LAYERCI` | ✅ | | [Magnum CI](https://magnum-ci.com) | `ci.MAGNUM` | 🚫 | | [Netlify CI](https://www.netlify.com/) | `ci.NETLIFY` | ✅ | | [Nevercode](http://nevercode.io/) | `ci.NEVERCODE` | ✅ | | [Render](https://render.com/) | `ci.RENDER` | ✅ | | [Sail CI](https://sail.ci/) | `ci.SAIL` | ✅ | | [Screwdriver](https://screwdriver.cd/) | `ci.SCREWDRIVER` | ✅ | | [Semaphore](https://semaphoreci.com) | `ci.SEMAPHORE` | ✅ | | [Shippable](https://www.shippable.com/) | `ci.SHIPPABLE` | ✅ | | [Solano CI](https://www.solanolabs.com/) | `ci.SOLANO` | ✅ | | [Strider CD](https://strider-cd.github.io/) | `ci.STRIDER` | 🚫 | | [TaskCluster](http://docs.taskcluster.net) | `ci.TASKCLUSTER` | 🚫 | | [TeamCity](https://www.jetbrains.com/teamcity/) by JetBrains | `ci.TEAMCITY` | 🚫 | | [Travis CI](http://travis-ci.org) | `ci.TRAVIS` | ✅ | | [Vercel](https://vercel.com/) | `ci.VERCEL` | 🚫 | | [Visual Studio App Center](https://appcenter.ms/) | `ci.APPCENTER` | 🚫 | ## API ### `ci.name` Returns a string containing name of the CI server the code is running on. If CI server is not detected, it returns `null`. Don't depend on the value of this string not to change for a specific vendor. 
If you find your self writing `ci.name === 'Travis CI'`, you most likely want to use `ci.TRAVIS` instead. ### `ci.isCI` Returns a boolean. Will be `true` if the code is running on a CI server, otherwise `false`. Some CI servers not listed here might still trigger the `ci.isCI` boolean to be set to `true` if they use certain vendor neutral environment variables. In those cases `ci.name` will be `null` and no vendor specific boolean will be set to `true`. ### `ci.isPR` Returns a boolean if PR detection is supported for the current CI server. Will be `true` if a PR is being tested, otherwise `false`. If PR detection is not supported for the current CI server, the value will be `null`. ### `ci.<VENDOR-CONSTANT>` A vendor specific boolean constant is exposed for each support CI vendor. A constant will be `true` if the code is determined to run on the given CI server, otherwise `false`. Examples of vendor constants are `ci.TRAVIS` or `ci.APPVEYOR`. For a complete list, see the support table above. Deprecated vendor constants that will be removed in the next major release: - `ci.TDDIUM` (Solano CI) This have been renamed `ci.SOLANO` ## License [MIT](LICENSE) # y18n [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js const __ = require('y18n')().__; console.log(__('my awesome string %s', 'foo')); ``` output: `my awesome string foo` _using tagged template literals_ ```js const __ = require('y18n')().__; const str = 'foo'; console.log(__`my awesome string ${str}`); ``` output: `my awesome string foo` _pluralization support:_ ```js const __n = require('y18n')().__n; console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')); ``` output: `2 fishes foo` ## Deno Example As of `v5` `y18n` supports [Deno](https://github.com/denoland/deno): ```typescript import y18n from "https://deno.land/x/y18n/deno.ts"; const __ = y18n({ locale: 'pirate', directory: './test/locales' }).__ console.info(__`Hi, ${'Ben'} ${'Coe'}!`) ``` You will need to run with `--allow-read` to load alternative locales. ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. 
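Tying together the configuration options and the JSON language files described above, here is a sketch (not from the y18n docs) of what a hypothetical `./locales/pirate.json` and the corresponding setup might look like:

```js
// ./locales/pirate.json - a hypothetical translation file; the keys are the
// strings passed to __/__n and the values are the translations for this locale:
// {
//   "Hello": "Ahoy",
//   "my awesome string %s": "me awesome string %s"
// }

const y18n = require('y18n')({
  locale: 'pirate',        // which JSON file to load (pirate.json)
  directory: './locales',  // where the JSON language files live
  updateFiles: false       // don't write newly observed strings back to disk
});

console.log(y18n.__('Hello'));                          // Ahoy
console.log(y18n.__('my awesome string %s', 'matey'));  // me awesome string matey
```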
### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## License ISC [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard # brace-expansion [Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html), as known from sh/bash, in JavaScript. [![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion) [![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion) [![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/) [![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion) ## Example ```js var expand = require('brace-expansion'); expand('file-{a,b,c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('-v{,,}') // => ['-v', '-v', '-v'] expand('file{0..2}.jpg') // => ['file0.jpg', 'file1.jpg', 'file2.jpg'] expand('file-{a..c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('file{2..0}.jpg') // => ['file2.jpg', 'file1.jpg', 'file0.jpg'] expand('file{0..4..2}.jpg') // => ['file0.jpg', 'file2.jpg', 'file4.jpg'] expand('file-{a..e..2}.jpg') // => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg'] expand('file{00..10..5}.jpg') // => ['file00.jpg', 'file05.jpg', 'file10.jpg'] expand('{{A..C},{a..c}}') // => ['A', 'B', 'C', 'a', 'b', 'c'] expand('ppp{,config,oe{,conf}}') // => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf'] ``` ## API ```js var expand = require('brace-expansion'); ``` ### var expanded = expand(str) Return an array of all possible and valid expansions of `str`. If none are found, `[str]` is returned. Valid expansions are: ```js /^(.*,)+(.+)?$/ // {a,b,...} ``` A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` A numeric sequence from `x` to `y` inclusive, with optional increment. If `x` or `y` start with a leading `0`, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` An alphabetic sequence from `x` to `y` inclusive, with optional increment. `x` and `y` must be exactly one character, and if given, `incr` must be a number. For compatibility reasons, the string `${` is not eligible for brace expansion. ## Installation With [npm](https://npmjs.org) do: ```bash npm install brace-expansion ``` ## Contributors - [Julian Gruber](https://github.com/juliangruber) - [Isaac Z. Schlueter](https://github.com/isaacs) ## Sponsors This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)! Do you want to support modules like this to improve their quality, stability and weigh in on new features? 
Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)! ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. semver(1) -- The semantic versioner for npm =========================================== ## Install ```bash npm install semver ```` ## Usage As a node module: ```js const semver = require('semver') semver.valid('1.2.3') // '1.2.3' semver.valid('a.b.c') // null semver.clean(' =v1.2.3 ') // '1.2.3' semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true semver.gt('1.2.3', '9.8.7') // false semver.lt('1.2.3', '9.8.7') // true semver.minVersion('>=1.0.0') // '1.0.0' semver.valid(semver.coerce('v2')) // '2.0.0' semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7' ``` As a command-line utility: ``` $ semver -h A JavaScript implementation of the https://semver.org/ specification Copyright Isaac Z. Schlueter Usage: semver [options] <version> [<version> [...]] Prints valid versions sorted by SemVer precedence Options: -r --range <range> Print versions that match the specified range. -i --increment [<level>] Increment a version by the specified level. Level can be one of: major, minor, patch, premajor, preminor, prepatch, or prerelease. Default level is 'patch'. Only one version may be specified. --preid <identifier> Identifier to be used to prefix premajor, preminor, prepatch or prerelease version increments. -l --loose Interpret versions and ranges loosely -p --include-prerelease Always include prerelease versions in range matching -c --coerce Coerce a string into SemVer if possible (does not imply --loose) --rtl Coerce version strings right to left --ltr Coerce version strings left to right (default) Program exits successfully if any valid version satisfies all supplied ranges, and prints all satisfying versions. If no satisfying versions are found, then exits failure. Versions are printed in ascending order, so supplying multiple versions to the utility will just sort them. ``` ## Versions A "version" is described by the `v2.0.0` specification found at <https://semver.org/>. A leading `"="` or `"v"` character is stripped off and ignored. ## Ranges A `version range` is a set of `comparators` which specify versions that satisfy the range. A `comparator` is composed of an `operator` and a `version`. 
The set of primitive `operators` is: * `<` Less than * `<=` Less than or equal to * `>` Greater than * `>=` Greater than or equal to * `=` Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included. For example, the comparator `>=1.2.7` would match the versions `1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` or `1.1.0`. Comparators can be joined by whitespace to form a `comparator set`, which is satisfied by the **intersection** of all of the comparators it includes. A range is composed of one or more comparator sets, joined by `||`. A version matches a range if and only if every comparator in at least one of the `||`-separated comparator sets is satisfied by the version. For example, the range `>=1.2.7 <1.3.0` would match the versions `1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, or `1.1.0`. The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, `1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. ### Prerelease Tags If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then it will only be allowed to satisfy comparator sets if at least one comparator with the same `[major, minor, patch]` tuple also has a prerelease tag. For example, the range `>1.2.3-alpha.3` would be allowed to match the version `1.2.3-alpha.7`, but it would *not* be satisfied by `3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater than" `1.2.3-alpha.3` according to the SemVer sort rules. The version range only accepts prerelease tags on the `1.2.3` version. The version `3.4.5` *would* satisfy the range, because it does not have a prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author's design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics. Second, a user who has opted into using a prerelease version has clearly indicated the intent to use *that specific* set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the *next* set of prerelease versions. Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the `includePrerelease` flag on the options object to any [functions](https://github.com/npm/node-semver#functions) that do range matching. #### Prerelease Identifiers The method `.inc` takes an additional `identifier` string argument that will append the value of the string as a prerelease identifier: ```javascript semver.inc('1.2.3', 'prerelease', 'beta') // '1.2.4-beta.0' ``` command-line example: ```bash $ semver 1.2.3 -i prerelease --preid beta 1.2.4-beta.0 ``` Which then can be used to increment further: ```bash $ semver 1.2.4-beta.0 -i prerelease 1.2.4-beta.1 ``` ### Advanced Range Syntax Advanced range syntax desugars to primitive comparators in deterministic ways. Advanced ranges may be combined in the same way as primitive comparators using white space or `||`. #### Hyphen Ranges `X.Y.Z - A.B.C` Specifies an inclusive set. 
* `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes. * `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts. * `1.2.3 - 2.3` := `>=1.2.3 <2.4.0` * `1.2.3 - 2` := `>=1.2.3 <3.0.0` #### X-Ranges `1.2.x` `1.X` `1.2.*` `*` Any of `X`, `x`, or `*` may be used to "stand in" for one of the numeric values in the `[major, minor, patch]` tuple. * `*` := `>=0.0.0` (Any version satisfies) * `1.x` := `>=1.0.0 <2.0.0` (Matching major version) * `1.2.x` := `>=1.2.0 <1.3.0` (Matching major and minor versions) A partial version range is treated as an X-Range, so the special character is in fact optional. * `""` (empty string) := `*` := `>=0.0.0` * `1` := `1.x.x` := `>=1.0.0 <2.0.0` * `1.2` := `1.2.x` := `>=1.2.0 <1.3.0` #### Tilde Ranges `~1.2.3` `~1.2` `~1` Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not. * `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0` * `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0` (Same as `1.2.x`) * `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0` (Same as `1.x`) * `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0` * `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0` (Same as `0.2.x`) * `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0` (Same as `0.x`) * `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. #### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` Allows changes that do not modify the left-most non-zero element in the `[major, minor, patch]` tuple. In other words, this allows patch and minor updates for versions `1.0.0` and above, patch updates for versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. Many authors treat a `0.x` version as if the `x` were the major "breaking-change" indicator. Caret ranges are ideal when an author may make breaking changes between `0.2.4` and `0.3.0` releases, which is a common practice. However, it presumes that there will *not* be breaking changes between `0.2.4` and `0.2.5`. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices. * `^1.2.3` := `>=1.2.3 <2.0.0` * `^0.2.3` := `>=0.2.3 <0.3.0` * `^0.0.3` := `>=0.0.3 <0.0.4` * `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. * `^0.0.3-beta` := `>=0.0.3-beta <0.0.4` Note that prereleases in the `0.0.3` version *only* will be allowed, if they are greater than or equal to `beta`. So, `0.0.3-pr.2` would be allowed. When parsing caret ranges, a missing `patch` value desugars to the number `0`, but will allow flexibility within that value, even if the major and minor versions are both `0`. 
* `^1.2.x` := `>=1.2.0 <2.0.0` * `^0.0.x` := `>=0.0.0 <0.1.0` * `^0.0` := `>=0.0.0 <0.1.0` A missing `minor` and `patch` values will desugar to zero, but also allow flexibility within those values, even if the major version is zero. * `^1.x` := `>=1.0.0 <2.0.0` * `^0.x` := `>=0.0.0 <1.0.0` ### Range Grammar Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors: ```bnf range-set ::= range ( logical-or range ) * logical-or ::= ( ' ' ) * '||' ( ' ' ) * range ::= hyphen | simple ( ' ' simple ) * | '' hyphen ::= partial ' - ' partial simple ::= primitive | partial | tilde | caret primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )? xr ::= 'x' | 'X' | '*' | nr nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) * tilde ::= '~' partial caret ::= '^' partial qualifier ::= ( '-' pre )? ( '+' build )? pre ::= parts build ::= parts parts ::= part ( '.' part ) * part ::= nr | [-0-9A-Za-z]+ ``` ## Functions All methods and classes take a final `options` object argument. All options in this object are `false` by default. The options supported are: - `loose` Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the `options` argument is a boolean value instead of an object, it is interpreted to be the `loose` param. - `includePrerelease` Set to suppress the [default behavior](https://github.com/npm/node-semver#prerelease-tags) of excluding prerelease tagged versions from ranges unless they are explicitly opted into. Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse. * `valid(v)`: Return the parsed version, or null if it's not valid. * `inc(v, release)`: Return the version incremented by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if it's not valid * `premajor` in one call will bump the version up to the next major version and down to a prerelease of that major version. `preminor`, and `prepatch` work the same way. * If called from a non-prerelease version, the `prerelease` will work the same as `prepatch`. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it. * `prerelease(v)`: Returns an array of prerelease components, or null if none exist. Example: `prerelease('1.2.3-alpha.1') -> ['alpha', 1]` * `major(v)`: Return the major version number. * `minor(v)`: Return the minor version number. * `patch(v)`: Return the patch version number. * `intersects(r1, r2, loose)`: Return true if the two supplied ranges or comparators intersect. * `parse(v)`: Attempt to parse a string as a semantic version, returning either a `SemVer` object or `null`. ### Comparison * `gt(v1, v2)`: `v1 > v2` * `gte(v1, v2)`: `v1 >= v2` * `lt(v1, v2)`: `v1 < v2` * `lte(v1, v2)`: `v1 <= v2` * `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent, even if they're not the exact same string. You already know how to compare strings. * `neq(v1, v2)`: `v1 != v2` The opposite of `eq`. * `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call the corresponding function above. `"==="` and `"!=="` do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided. * `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if `v2` is greater. 
Sorts in ascending order if passed to `Array.sort()`.
* `rcompare(v1, v2)`: The reverse of compare. Sorts an array of versions in descending order when passed to `Array.sort()`.
* `compareBuild(v1, v2)`: The same as `compare` but considers `build` when two versions are equal. Sorts in ascending order if passed to `Array.sort()`.
* `diff(v1, v2)`: Returns the difference between two versions by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if the versions are the same.

### Comparators

* `intersects(comparator)`: Return true if the comparators intersect

### Ranges

* `validRange(range)`: Return the valid range or null if it's not valid
* `satisfies(version, range)`: Return true if the version satisfies the range.
* `maxSatisfying(versions, range)`: Return the highest version in the list that satisfies the range, or `null` if none of them do.
* `minSatisfying(versions, range)`: Return the lowest version in the list that satisfies the range, or `null` if none of them do.
* `minVersion(range)`: Return the lowest version that can possibly match the given range.
* `gtr(version, range)`: Return `true` if version is greater than all the versions possible in the range.
* `ltr(version, range)`: Return `true` if version is less than all the versions possible in the range.
* `outside(version, range, hilo)`: Return true if the version is outside the bounds of the range in either the high or low direction. The `hilo` argument must be either the string `'>'` or `'<'`. (This is the function called by `gtr` and `ltr`.)
* `intersects(range)`: Return true if any of the range's comparators intersect

Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, *or* satisfy a range! For example, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9` until `2.0.0`, so the version `1.2.10` would not be greater than the range (because `2.0.1` satisfies, which is higher), nor less than the range (since `1.2.8` satisfies, which is lower), and it also does not satisfy the range.

If you want to know if a version satisfies or does not satisfy a range, use the `satisfies(version, range)` function.

### Coercion

* `coerce(version, options)`: Coerces a string to semver if possible

This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., `1`, `1.2`, `1.2.3`) up to the max permitted length (256 characters). Longer versions are simply truncated (`4.6.3.9.2-alpha2` becomes `4.6.3`). All surrounding text is simply ignored (`v3.4 replaces v3.3.1` becomes `3.4.0`). Only text which lacks digits will fail coercion (`version one` is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (`10000000000000000.4.7.4` becomes `4.7.4`). The maximum value for any semver component is `Integer.MAX_SAFE_INTEGER || (2**53 - 1)`; higher value components are invalid (`9999999999999999.4.7.4` is likely invalid).

If the `options.rtl` flag is set, then `coerce` will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, `1.2.3.4` will return `2.3.4` in rtl mode, not `4.0.0`. `1.2.3/4` will return `4.0.0`, because the `4` is not a part of any other overlapping SemVer tuple.
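As a quick illustration of the range and coercion helpers described above (a sketch, not part of the original docs), using the non-contiguous example range from this section:

```js
const semver = require('semver')

// Range helpers: 1.2.10 falls in the "hole" of this non-contiguous range,
// so it neither satisfies it, nor is greater or less than it.
const range = '1.2 <1.2.9 || >2.0.0'
semver.validRange(range)             // a normalized range string (null if invalid)
semver.satisfies('1.2.10', range)    // false
semver.gtr('1.2.10', range)          // false - 2.0.1 in the range is higher
semver.ltr('1.2.10', range)          // false - 1.2.8 in the range is lower

// Coercion
semver.coerce('v3.4 replaces v3.3.1').version   // '3.4.0' - surrounding text ignored
semver.coerce('1.2.3.4', { rtl: true }).version // '2.3.4' - right-to-left coercion
semver.coerce('version one')                    // null - no digits to coerce
```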
### Clean * `clean(version)`: Clean a string to be a valid semver if possible This will return a cleaned and trimmed semver version. If the provided version is not valid a null will be returned. This does not work for ranges. ex. * `s.clean(' = v 2.1.5foo')`: `null` * `s.clean(' = v 2.1.5foo', { loose: true })`: `'2.1.5-foo'` * `s.clean(' = v 2.1.5-foo')`: `null` * `s.clean(' = v 2.1.5-foo', { loose: true })`: `'2.1.5-foo'` * `s.clean('=v2.1.5')`: `'2.1.5'` * `s.clean(' =v2.1.5')`: `2.1.5` * `s.clean(' 2.1.5 ')`: `'2.1.5'` * `s.clean('~1.0.0')`: `null` # buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/buffer/master.svg [travis-url]: https://travis-ci.org/feross/buffer [npm-image]: https://img.shields.io/npm/v/buffer.svg [npm-url]: https://npmjs.org/package/buffer [downloads-image]: https://img.shields.io/npm/dm/buffer.svg [downloads-url]: https://npmjs.org/package/buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### The buffer module from [node.js](https://nodejs.org/), for the browser. [![saucelabs][saucelabs-image]][saucelabs-url] [saucelabs-image]: https://saucelabs.com/browser-matrix/buffer.svg [saucelabs-url]: https://saucelabs.com/u/buffer With [browserify](http://browserify.org), simply `require('buffer')` or use the `Buffer` global and you will get this module. The goal is to provide an API that is 100% identical to [node's Buffer API](https://nodejs.org/api/buffer.html). Read the [official docs](https://nodejs.org/api/buffer.html) for the full list of properties, instance methods, and class methods that are supported. ## features - Manipulate binary data like a boss, in all browsers! - Super fast. Backed by Typed Arrays (`Uint8Array`/`ArrayBuffer`, not `Object`) - Extremely small bundle size (**6.75KB minified + gzipped**, 51.9KB with comments) - Excellent browser support (Chrome, Firefox, Edge, Safari 9+, IE 11, iOS 9+, Android, etc.) - Preserves Node API exactly, with one minor difference (see below) - Square-bracket `buf[4]` notation works! - Does not modify any browser prototypes or put anything on `window` - Comprehensive test suite (including all buffer tests from node.js core) ## install To use this module directly (without browserify), install it: ```bash npm install buffer ``` This module was previously called **native-buffer-browserify**, but please use **buffer** from now on. If you do not use a bundler, you can use the [standalone script](https://bundle.run/buffer). ## usage The module's API is identical to node's `Buffer` API. Read the [official docs](https://nodejs.org/api/buffer.html) for the full list of properties, instance methods, and class methods that are supported. As mentioned above, `require('buffer')` or use the `Buffer` global with [browserify](http://browserify.org) and this module will automatically be included in your bundle. Almost any npm module will work in the browser, even if it assumes that the node `Buffer` API will be available. To depend on this module explicitly (without browserify), require it like this: ```js var Buffer = require('buffer/').Buffer // note: the trailing slash is important! 
``` To require this module explicitly, use `require('buffer/')` which tells the node.js module lookup algorithm (also used by browserify) to use the **npm module** named `buffer` instead of the **node.js core** module named `buffer`! ## how does it work? The Buffer constructor returns instances of `Uint8Array` that have their prototype changed to `Buffer.prototype`. Furthermore, `Buffer` is a subclass of `Uint8Array`, so the returned instances will have all the node `Buffer` methods and the `Uint8Array` methods. Square bracket notation works as expected -- it returns a single octet. The `Uint8Array` prototype remains unmodified. ## tracking the latest node api This module tracks the Buffer API in the latest (unstable) version of node.js. The Buffer API is considered **stable** in the [node stability index](https://nodejs.org/docs/latest/api/documentation.html#documentation_stability_index), so it is unlikely that there will ever be breaking changes. Nonetheless, when/if the Buffer API changes in node, this module's API will change accordingly. ## related packages - [`buffer-reverse`](https://www.npmjs.com/package/buffer-reverse) - Reverse a buffer - [`buffer-xor`](https://www.npmjs.com/package/buffer-xor) - Bitwise xor a buffer - [`is-buffer`](https://www.npmjs.com/package/is-buffer) - Determine if an object is a Buffer without including the whole `Buffer` package ## conversion packages ### convert typed array to buffer Use [`typedarray-to-buffer`](https://www.npmjs.com/package/typedarray-to-buffer) to convert any kind of typed array to a `Buffer`. Does not perform a copy, so it's super fast. ### convert buffer to typed array `Buffer` is a subclass of `Uint8Array` (which is a typed array). So there is no need to explicitly convert to typed array. Just use the buffer as a `Uint8Array`. ### convert blob to buffer Use [`blob-to-buffer`](https://www.npmjs.com/package/blob-to-buffer) to convert a `Blob` to a `Buffer`. ### convert buffer to blob To convert a `Buffer` to a `Blob`, use the `Blob` constructor: ```js var blob = new Blob([ buffer ]) ``` Optionally, specify a mimetype: ```js var blob = new Blob([ buffer ], { type: 'text/html' }) ``` ### convert arraybuffer to buffer To convert an `ArrayBuffer` to a `Buffer`, use the `Buffer.from` function. Does not perform a copy, so it's super fast. ```js var buffer = Buffer.from(arrayBuffer) ``` ### convert buffer to arraybuffer To convert a `Buffer` to an `ArrayBuffer`, use the `.buffer` property (which is present on all `Uint8Array` objects): ```js var arrayBuffer = buffer.buffer.slice( buffer.byteOffset, buffer.byteOffset + buffer.byteLength ) ``` Alternatively, use the [`to-arraybuffer`](https://www.npmjs.com/package/to-arraybuffer) module. ## performance See perf tests in `/perf`. `BrowserBuffer` is the browser `buffer` module (this repo). `Uint8Array` is included as a sanity check (since `BrowserBuffer` uses `Uint8Array` under the hood, `Uint8Array` will always be at least a bit faster). Finally, `NodeBuffer` is the node.js buffer module, which is included to compare against. NOTE: Performance has improved since these benchmarks were taken. PR welcome to update the README. 
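For context on where the numbers in the tables below come from, a micro-benchmark along these lines could produce comparable ops/sec figures. This is a hypothetical sketch assuming a Benchmark.js-style harness; the project's actual perf code lives in `/perf`:

```js
// Hypothetical bracket-notation micro-benchmark, assuming the `benchmark` package.
const Benchmark = require('benchmark')
const BrowserBuffer = require('buffer/').Buffer // this module (note the trailing slash)

const suite = new Benchmark.Suite()
const browserBuf = BrowserBuffer.alloc(8192)
const uint8 = new Uint8Array(8192)

suite
  .add('BrowserBuffer#bracket-notation', function () {
    browserBuf[4] = 42 // single-octet write via square brackets
  })
  .add('Uint8Array#bracket-notation', function () {
    uint8[4] = 42
  })
  .on('cycle', function (event) {
    console.log(String(event.target)) // e.g. "... x 11,457,464 ops/sec ±0.86%"
  })
  .on('complete', function () {
    console.log('Fastest is ' + this.filter('fastest').map('name'))
  })
  .run()
```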
### Chrome 38 | Method | Operations | Accuracy | Sampled | Fastest | |:-------|:-----------|:---------|:--------|:-------:| | BrowserBuffer#bracket-notation | 11,457,464 ops/sec | ±0.86% | 66 | ✓ | | Uint8Array#bracket-notation | 10,824,332 ops/sec | ±0.74% | 65 | | | | | | | | BrowserBuffer#concat | 450,532 ops/sec | ±0.76% | 68 | | | Uint8Array#concat | 1,368,911 ops/sec | ±1.50% | 62 | ✓ | | | | | | | BrowserBuffer#copy(16000) | 903,001 ops/sec | ±0.96% | 67 | | | Uint8Array#copy(16000) | 1,422,441 ops/sec | ±1.04% | 66 | ✓ | | | | | | | BrowserBuffer#copy(16) | 11,431,358 ops/sec | ±0.46% | 69 | | | Uint8Array#copy(16) | 13,944,163 ops/sec | ±1.12% | 68 | ✓ | | | | | | | BrowserBuffer#new(16000) | 106,329 ops/sec | ±6.70% | 44 | | | Uint8Array#new(16000) | 131,001 ops/sec | ±2.85% | 31 | ✓ | | | | | | | BrowserBuffer#new(16) | 1,554,491 ops/sec | ±1.60% | 65 | | | Uint8Array#new(16) | 6,623,930 ops/sec | ±1.66% | 65 | ✓ | | | | | | | BrowserBuffer#readDoubleBE | 112,830 ops/sec | ±0.51% | 69 | ✓ | | DataView#getFloat64 | 93,500 ops/sec | ±0.57% | 68 | | | | | | | | BrowserBuffer#readFloatBE | 146,678 ops/sec | ±0.95% | 68 | ✓ | | DataView#getFloat32 | 99,311 ops/sec | ±0.41% | 67 | | | | | | | | BrowserBuffer#readUInt32LE | 843,214 ops/sec | ±0.70% | 69 | ✓ | | DataView#getUint32 | 103,024 ops/sec | ±0.64% | 67 | | | | | | | | BrowserBuffer#slice | 1,013,941 ops/sec | ±0.75% | 67 | | | Uint8Array#subarray | 1,903,928 ops/sec | ±0.53% | 67 | ✓ | | | | | | | BrowserBuffer#writeFloatBE | 61,387 ops/sec | ±0.90% | 67 | | | DataView#setFloat32 | 141,249 ops/sec | ±0.40% | 66 | ✓ | ### Firefox 33 | Method | Operations | Accuracy | Sampled | Fastest | |:-------|:-----------|:---------|:--------|:-------:| | BrowserBuffer#bracket-notation | 20,800,421 ops/sec | ±1.84% | 60 | | | Uint8Array#bracket-notation | 20,826,235 ops/sec | ±2.02% | 61 | ✓ | | | | | | | BrowserBuffer#concat | 153,076 ops/sec | ±2.32% | 61 | | | Uint8Array#concat | 1,255,674 ops/sec | ±8.65% | 52 | ✓ | | | | | | | BrowserBuffer#copy(16000) | 1,105,312 ops/sec | ±1.16% | 63 | | | Uint8Array#copy(16000) | 1,615,911 ops/sec | ±0.55% | 66 | ✓ | | | | | | | BrowserBuffer#copy(16) | 16,357,599 ops/sec | ±0.73% | 68 | | | Uint8Array#copy(16) | 31,436,281 ops/sec | ±1.05% | 68 | ✓ | | | | | | | BrowserBuffer#new(16000) | 52,995 ops/sec | ±6.01% | 35 | | | Uint8Array#new(16000) | 87,686 ops/sec | ±5.68% | 45 | ✓ | | | | | | | BrowserBuffer#new(16) | 252,031 ops/sec | ±1.61% | 66 | | | Uint8Array#new(16) | 8,477,026 ops/sec | ±0.49% | 68 | ✓ | | | | | | | BrowserBuffer#readDoubleBE | 99,871 ops/sec | ±0.41% | 69 | | | DataView#getFloat64 | 285,663 ops/sec | ±0.70% | 68 | ✓ | | | | | | | BrowserBuffer#readFloatBE | 115,540 ops/sec | ±0.42% | 69 | | | DataView#getFloat32 | 288,722 ops/sec | ±0.82% | 68 | ✓ | | | | | | | BrowserBuffer#readUInt32LE | 633,926 ops/sec | ±1.08% | 67 | ✓ | | DataView#getUint32 | 294,808 ops/sec | ±0.79% | 64 | | | | | | | | BrowserBuffer#slice | 349,425 ops/sec | ±0.46% | 69 | | | Uint8Array#subarray | 5,965,819 ops/sec | ±0.60% | 65 | ✓ | | | | | | | BrowserBuffer#writeFloatBE | 59,980 ops/sec | ±0.41% | 67 | | | DataView#setFloat32 | 317,634 ops/sec | ±0.63% | 68 | ✓ | ### Safari 8 | Method | Operations | Accuracy | Sampled | Fastest | |:-------|:-----------|:---------|:--------|:-------:| | BrowserBuffer#bracket-notation | 10,279,729 ops/sec | ±2.25% | 56 | ✓ | | Uint8Array#bracket-notation | 10,030,767 ops/sec | ±2.23% | 59 | | | | | | | | BrowserBuffer#concat | 144,138 ops/sec | ±1.38% | 65 | | | 
Uint8Array#concat | 4,950,764 ops/sec | ±1.70% | 63 | ✓ | | | | | | | BrowserBuffer#copy(16000) | 1,058,548 ops/sec | ±1.51% | 64 | | | Uint8Array#copy(16000) | 1,409,666 ops/sec | ±1.17% | 65 | ✓ | | | | | | | BrowserBuffer#copy(16) | 6,282,529 ops/sec | ±1.88% | 58 | | | Uint8Array#copy(16) | 11,907,128 ops/sec | ±2.87% | 58 | ✓ | | | | | | | BrowserBuffer#new(16000) | 101,663 ops/sec | ±3.89% | 57 | | | Uint8Array#new(16000) | 22,050,818 ops/sec | ±6.51% | 46 | ✓ | | | | | | | BrowserBuffer#new(16) | 176,072 ops/sec | ±2.13% | 64 | | | Uint8Array#new(16) | 24,385,731 ops/sec | ±5.01% | 51 | ✓ | | | | | | | BrowserBuffer#readDoubleBE | 41,341 ops/sec | ±1.06% | 67 | | | DataView#getFloat64 | 322,280 ops/sec | ±0.84% | 68 | ✓ | | | | | | | BrowserBuffer#readFloatBE | 46,141 ops/sec | ±1.06% | 65 | | | DataView#getFloat32 | 337,025 ops/sec | ±0.43% | 69 | ✓ | | | | | | | BrowserBuffer#readUInt32LE | 151,551 ops/sec | ±1.02% | 66 | | | DataView#getUint32 | 308,278 ops/sec | ±0.94% | 67 | ✓ | | | | | | | BrowserBuffer#slice | 197,365 ops/sec | ±0.95% | 66 | | | Uint8Array#subarray | 9,558,024 ops/sec | ±3.08% | 58 | ✓ | | | | | | | BrowserBuffer#writeFloatBE | 17,518 ops/sec | ±1.03% | 63 | | | DataView#setFloat32 | 319,751 ops/sec | ±0.48% | 68 | ✓ | ### Node 0.11.14 | Method | Operations | Accuracy | Sampled | Fastest | |:-------|:-----------|:---------|:--------|:-------:| | BrowserBuffer#bracket-notation | 10,489,828 ops/sec | ±3.25% | 90 | | | Uint8Array#bracket-notation | 10,534,884 ops/sec | ±0.81% | 92 | ✓ | | NodeBuffer#bracket-notation | 10,389,910 ops/sec | ±0.97% | 87 | | | | | | | | BrowserBuffer#concat | 487,830 ops/sec | ±2.58% | 88 | | | Uint8Array#concat | 1,814,327 ops/sec | ±1.28% | 88 | ✓ | | NodeBuffer#concat | 1,636,523 ops/sec | ±1.88% | 73 | | | | | | | | BrowserBuffer#copy(16000) | 1,073,665 ops/sec | ±0.77% | 90 | | | Uint8Array#copy(16000) | 1,348,517 ops/sec | ±0.84% | 89 | ✓ | | NodeBuffer#copy(16000) | 1,289,533 ops/sec | ±0.82% | 93 | | | | | | | | BrowserBuffer#copy(16) | 12,782,706 ops/sec | ±0.74% | 85 | | | Uint8Array#copy(16) | 14,180,427 ops/sec | ±0.93% | 92 | ✓ | | NodeBuffer#copy(16) | 11,083,134 ops/sec | ±1.06% | 89 | | | | | | | | BrowserBuffer#new(16000) | 141,678 ops/sec | ±3.30% | 67 | | | Uint8Array#new(16000) | 161,491 ops/sec | ±2.96% | 60 | | | NodeBuffer#new(16000) | 292,699 ops/sec | ±3.20% | 55 | ✓ | | | | | | | BrowserBuffer#new(16) | 1,655,466 ops/sec | ±2.41% | 82 | | | Uint8Array#new(16) | 14,399,926 ops/sec | ±0.91% | 94 | ✓ | | NodeBuffer#new(16) | 3,894,696 ops/sec | ±0.88% | 92 | | | | | | | | BrowserBuffer#readDoubleBE | 109,582 ops/sec | ±0.75% | 93 | ✓ | | DataView#getFloat64 | 91,235 ops/sec | ±0.81% | 90 | | | NodeBuffer#readDoubleBE | 88,593 ops/sec | ±0.96% | 81 | | | | | | | | BrowserBuffer#readFloatBE | 139,854 ops/sec | ±1.03% | 85 | ✓ | | DataView#getFloat32 | 98,744 ops/sec | ±0.80% | 89 | | | NodeBuffer#readFloatBE | 92,769 ops/sec | ±0.94% | 93 | | | | | | | | BrowserBuffer#readUInt32LE | 710,861 ops/sec | ±0.82% | 92 | | | DataView#getUint32 | 117,893 ops/sec | ±0.84% | 91 | | | NodeBuffer#readUInt32LE | 851,412 ops/sec | ±0.72% | 93 | ✓ | | | | | | | BrowserBuffer#slice | 1,673,877 ops/sec | ±0.73% | 94 | | | Uint8Array#subarray | 6,919,243 ops/sec | ±0.67% | 90 | ✓ | | NodeBuffer#slice | 4,617,604 ops/sec | ±0.79% | 93 | | | | | | | | BrowserBuffer#writeFloatBE | 66,011 ops/sec | ±0.75% | 93 | | | DataView#setFloat32 | 127,760 ops/sec | ±0.72% | 93 | ✓ | | NodeBuffer#writeFloatBE | 103,352 ops/sec | ±0.83% | 93 | | 
### iojs 1.8.1 | Method | Operations | Accuracy | Sampled | Fastest | |:-------|:-----------|:---------|:--------|:-------:| | BrowserBuffer#bracket-notation | 10,990,488 ops/sec | ±1.11% | 91 | | | Uint8Array#bracket-notation | 11,268,757 ops/sec | ±0.65% | 97 | | | NodeBuffer#bracket-notation | 11,353,260 ops/sec | ±0.83% | 94 | ✓ | | | | | | | BrowserBuffer#concat | 378,954 ops/sec | ±0.74% | 94 | | | Uint8Array#concat | 1,358,288 ops/sec | ±0.97% | 87 | | | NodeBuffer#concat | 1,934,050 ops/sec | ±1.11% | 78 | ✓ | | | | | | | BrowserBuffer#copy(16000) | 894,538 ops/sec | ±0.56% | 84 | | | Uint8Array#copy(16000) | 1,442,656 ops/sec | ±0.71% | 96 | | | NodeBuffer#copy(16000) | 1,457,898 ops/sec | ±0.53% | 92 | ✓ | | | | | | | BrowserBuffer#copy(16) | 12,870,457 ops/sec | ±0.67% | 95 | | | Uint8Array#copy(16) | 16,643,989 ops/sec | ±0.61% | 93 | ✓ | | NodeBuffer#copy(16) | 14,885,848 ops/sec | ±0.74% | 94 | | | | | | | | BrowserBuffer#new(16000) | 109,264 ops/sec | ±4.21% | 63 | | | Uint8Array#new(16000) | 138,916 ops/sec | ±1.87% | 61 | | | NodeBuffer#new(16000) | 281,449 ops/sec | ±3.58% | 51 | ✓ | | | | | | | BrowserBuffer#new(16) | 1,362,935 ops/sec | ±0.56% | 99 | | | Uint8Array#new(16) | 6,193,090 ops/sec | ±0.64% | 95 | ✓ | | NodeBuffer#new(16) | 4,745,425 ops/sec | ±1.56% | 90 | | | | | | | | BrowserBuffer#readDoubleBE | 118,127 ops/sec | ±0.59% | 93 | ✓ | | DataView#getFloat64 | 107,332 ops/sec | ±0.65% | 91 | | | NodeBuffer#readDoubleBE | 116,274 ops/sec | ±0.94% | 95 | | | | | | | | BrowserBuffer#readFloatBE | 150,326 ops/sec | ±0.58% | 95 | ✓ | | DataView#getFloat32 | 110,541 ops/sec | ±0.57% | 98 | | | NodeBuffer#readFloatBE | 121,599 ops/sec | ±0.60% | 87 | | | | | | | | BrowserBuffer#readUInt32LE | 814,147 ops/sec | ±0.62% | 93 | | | DataView#getUint32 | 137,592 ops/sec | ±0.64% | 90 | | | NodeBuffer#readUInt32LE | 931,650 ops/sec | ±0.71% | 96 | ✓ | | | | | | | BrowserBuffer#slice | 878,590 ops/sec | ±0.68% | 93 | | | Uint8Array#subarray | 2,843,308 ops/sec | ±1.02% | 90 | | | NodeBuffer#slice | 4,998,316 ops/sec | ±0.68% | 90 | ✓ | | | | | | | BrowserBuffer#writeFloatBE | 65,927 ops/sec | ±0.74% | 93 | | | DataView#setFloat32 | 139,823 ops/sec | ±0.97% | 89 | ✓ | | NodeBuffer#writeFloatBE | 135,763 ops/sec | ±0.65% | 96 | | | | | | | ## Testing the project First, install the project: npm install Then, to run tests in Node.js, run: npm run test-node To test locally in a browser, you can run: npm run test-browser-es5-local # For ES5 browsers that don't support ES6 npm run test-browser-es6-local # For ES6 compliant browsers This will print out a URL that you can then open in a browser to run the tests, using [airtap](https://www.npmjs.com/package/airtap). To run automated browser tests using Saucelabs, ensure that your `SAUCE_USERNAME` and `SAUCE_ACCESS_KEY` environment variables are set, then run: npm test This is what's run in Travis, to check against various browsers. The list of browsers is kept in the `bin/airtap-es5.yml` and `bin/airtap-es6.yml` files. ## JavaScript Standard Style This module uses [JavaScript Standard Style](https://github.com/feross/standard). [![JavaScript Style Guide](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) To test that the code conforms to the style, `npm install` and run: ./node_modules/.bin/standard ## credit This was originally forked from [buffer-browserify](https://github.com/toots/buffer-browserify). 
## Security Policies and Procedures The `buffer` team and community take all security bugs in `buffer` seriously. Please see our [security policies and procedures](https://github.com/feross/security) document to learn how to report issues. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org), and other contributors. Originally forked from an MIT-licensed module by Romain Beauxis. # is-number [![NPM version](https://img.shields.io/npm/v/is-number.svg?style=flat)](https://www.npmjs.com/package/is-number) [![NPM monthly downloads](https://img.shields.io/npm/dm/is-number.svg?style=flat)](https://npmjs.org/package/is-number) [![NPM total downloads](https://img.shields.io/npm/dt/is-number.svg?style=flat)](https://npmjs.org/package/is-number) [![Linux Build Status](https://img.shields.io/travis/jonschlinkert/is-number.svg?style=flat&label=Travis)](https://travis-ci.org/jonschlinkert/is-number) > Returns true if the value is a finite number. Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-number ``` ## Why is this needed? In JavaScript, it's not always as straightforward as it should be to reliably check if a value is a number. It's common for devs to use `+`, `-`, or `Number()` to cast a string value to a number (for example, when values are returned from user input, regex matches, parsers, etc). But there are many non-intuitive edge cases that yield unexpected results: ```js console.log(+[]); //=> 0 console.log(+''); //=> 0 console.log(+' '); //=> 0 console.log(typeof NaN); //=> 'number' ``` This library offers a performant way to smooth out edge cases like these. ## Usage ```js const isNumber = require('is-number'); ``` See the [tests](./test.js) for more examples. ### true ```js isNumber(5e3); // true isNumber(0xff); // true isNumber(-1.1); // true isNumber(0); // true isNumber(1); // true isNumber(1.1); // true isNumber(10); // true isNumber(10.10); // true isNumber(100); // true isNumber('-1.1'); // true isNumber('0'); // true isNumber('012'); // true isNumber('0xff'); // true isNumber('1'); // true isNumber('1.1'); // true isNumber('10'); // true isNumber('10.10'); // true isNumber('100'); // true isNumber('5e3'); // true isNumber(parseInt('012')); // true isNumber(parseFloat('012')); // true ``` ### False Everything else is false, as you would expect: ```js isNumber(Infinity); // false isNumber(NaN); // false isNumber(null); // false isNumber(undefined); // false isNumber(''); // false isNumber(' '); // false isNumber('foo'); // false isNumber([1]); // false isNumber([]); // false isNumber(function () {}); // false isNumber({}); // false ``` ## Release history ### 7.0.0 * Refactor. Now uses `.isFinite` if it exists. * Performance is about the same as v6.0 when the value is a string or number. But it's now 3x-4x faster when the value is not a string or number. ### 6.0.0 * Optimizations, thanks to @benaadams. ### 5.0.0 **Breaking changes** * removed support for `instanceof Number` and `instanceof String` ## Benchmarks As with all benchmarks, take these with a grain of salt. See the [benchmarks](./benchmark/index.js) for more detail. 
``` # all v7.0 x 413,222 ops/sec ±2.02% (86 runs sampled) v6.0 x 111,061 ops/sec ±1.29% (85 runs sampled) parseFloat x 317,596 ops/sec ±1.36% (86 runs sampled) fastest is 'v7.0' # string v7.0 x 3,054,496 ops/sec ±1.05% (89 runs sampled) v6.0 x 2,957,781 ops/sec ±0.98% (88 runs sampled) parseFloat x 3,071,060 ops/sec ±1.13% (88 runs sampled) fastest is 'parseFloat,v7.0' # number v7.0 x 3,146,895 ops/sec ±0.89% (89 runs sampled) v6.0 x 3,214,038 ops/sec ±1.07% (89 runs sampled) parseFloat x 3,077,588 ops/sec ±1.07% (87 runs sampled) fastest is 'v6.0' ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [is-plain-object](https://www.npmjs.com/package/is-plain-object): Returns true if an object was created by the `Object` constructor. | [homepage](https://github.com/jonschlinkert/is-plain-object "Returns true if an object was created by the `Object` constructor.") * [is-primitive](https://www.npmjs.com/package/is-primitive): Returns `true` if the value is a primitive. | [homepage](https://github.com/jonschlinkert/is-primitive "Returns `true` if the value is a primitive. ") * [isobject](https://www.npmjs.com/package/isobject): Returns true if the value is an object and not an array or null. | [homepage](https://github.com/jonschlinkert/isobject "Returns true if the value is an object and not an array or null.") * [kind-of](https://www.npmjs.com/package/kind-of): Get the native type of a value. | [homepage](https://github.com/jonschlinkert/kind-of "Get the native type of a value.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 49 | [jonschlinkert](https://github.com/jonschlinkert) | | 5 | [charlike-old](https://github.com/charlike-old) | | 1 | [benaadams](https://github.com/benaadams) | | 1 | [realityking](https://github.com/realityking) | ### Author **Jon Schlinkert** * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) ### License Copyright © 2018, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). 
*** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.6.0, on June 15, 2018._ semver(1) -- The semantic versioner for npm =========================================== ## Install ```bash npm install --save semver ```` ## Usage As a node module: ```js const semver = require('semver') semver.valid('1.2.3') // '1.2.3' semver.valid('a.b.c') // null semver.clean(' =v1.2.3 ') // '1.2.3' semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true semver.gt('1.2.3', '9.8.7') // false semver.lt('1.2.3', '9.8.7') // true semver.minVersion('>=1.0.0') // '1.0.0' semver.valid(semver.coerce('v2')) // '2.0.0' semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7' ``` As a command-line utility: ``` $ semver -h A JavaScript implementation of the https://semver.org/ specification Copyright Isaac Z. Schlueter Usage: semver [options] <version> [<version> [...]] Prints valid versions sorted by SemVer precedence Options: -r --range <range> Print versions that match the specified range. -i --increment [<level>] Increment a version by the specified level. Level can be one of: major, minor, patch, premajor, preminor, prepatch, or prerelease. Default level is 'patch'. Only one version may be specified. --preid <identifier> Identifier to be used to prefix premajor, preminor, prepatch or prerelease version increments. -l --loose Interpret versions and ranges loosely -p --include-prerelease Always include prerelease versions in range matching -c --coerce Coerce a string into SemVer if possible (does not imply --loose) Program exits successfully if any valid version satisfies all supplied ranges, and prints all satisfying versions. If no satisfying versions are found, then exits failure. Versions are printed in ascending order, so supplying multiple versions to the utility will just sort them. ``` ## Versions A "version" is described by the `v2.0.0` specification found at <https://semver.org/>. A leading `"="` or `"v"` character is stripped off and ignored. ## Ranges A `version range` is a set of `comparators` which specify versions that satisfy the range. A `comparator` is composed of an `operator` and a `version`. The set of primitive `operators` is: * `<` Less than * `<=` Less than or equal to * `>` Greater than * `>=` Greater than or equal to * `=` Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included. For example, the comparator `>=1.2.7` would match the versions `1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` or `1.1.0`. Comparators can be joined by whitespace to form a `comparator set`, which is satisfied by the **intersection** of all of the comparators it includes. A range is composed of one or more comparator sets, joined by `||`. A version matches a range if and only if every comparator in at least one of the `||`-separated comparator sets is satisfied by the version. For example, the range `>=1.2.7 <1.3.0` would match the versions `1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, or `1.1.0`. The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, `1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. ### Prerelease Tags If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then it will only be allowed to satisfy comparator sets if at least one comparator with the same `[major, minor, patch]` tuple also has a prerelease tag. 
For example, the range `>1.2.3-alpha.3` would be allowed to match the version `1.2.3-alpha.7`, but it would *not* be satisfied by `3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater than" `1.2.3-alpha.3` according to the SemVer sort rules. The version range only accepts prerelease tags on the `1.2.3` version. The version `3.4.5` *would* satisfy the range, because it does not have a prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author's design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics. Second, a user who has opted into using a prerelease version has clearly indicated the intent to use *that specific* set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the *next* set of prerelease versions. Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the `includePrerelease` flag on the options object to any [functions](https://github.com/npm/node-semver#functions) that do range matching. #### Prerelease Identifiers The method `.inc` takes an additional `identifier` string argument that will append the value of the string as a prerelease identifier: ```javascript semver.inc('1.2.3', 'prerelease', 'beta') // '1.2.4-beta.0' ``` command-line example: ```bash $ semver 1.2.3 -i prerelease --preid beta 1.2.4-beta.0 ``` Which then can be used to increment further: ```bash $ semver 1.2.4-beta.0 -i prerelease 1.2.4-beta.1 ``` ### Advanced Range Syntax Advanced range syntax desugars to primitive comparators in deterministic ways. Advanced ranges may be combined in the same way as primitive comparators using white space or `||`. #### Hyphen Ranges `X.Y.Z - A.B.C` Specifies an inclusive set. * `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes. * `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts. * `1.2.3 - 2.3` := `>=1.2.3 <2.4.0` * `1.2.3 - 2` := `>=1.2.3 <3.0.0` #### X-Ranges `1.2.x` `1.X` `1.2.*` `*` Any of `X`, `x`, or `*` may be used to "stand in" for one of the numeric values in the `[major, minor, patch]` tuple. * `*` := `>=0.0.0` (Any version satisfies) * `1.x` := `>=1.0.0 <2.0.0` (Matching major version) * `1.2.x` := `>=1.2.0 <1.3.0` (Matching major and minor versions) A partial version range is treated as an X-Range, so the special character is in fact optional. * `""` (empty string) := `*` := `>=0.0.0` * `1` := `1.x.x` := `>=1.0.0 <2.0.0` * `1.2` := `1.2.x` := `>=1.2.0 <1.3.0` #### Tilde Ranges `~1.2.3` `~1.2` `~1` Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not. 
* `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0` * `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0` (Same as `1.2.x`) * `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0` (Same as `1.x`) * `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0` * `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0` (Same as `0.2.x`) * `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0` (Same as `0.x`) * `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. #### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` Allows changes that do not modify the left-most non-zero digit in the `[major, minor, patch]` tuple. In other words, this allows patch and minor updates for versions `1.0.0` and above, patch updates for versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. Many authors treat a `0.x` version as if the `x` were the major "breaking-change" indicator. Caret ranges are ideal when an author may make breaking changes between `0.2.4` and `0.3.0` releases, which is a common practice. However, it presumes that there will *not* be breaking changes between `0.2.4` and `0.2.5`. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices. * `^1.2.3` := `>=1.2.3 <2.0.0` * `^0.2.3` := `>=0.2.3 <0.3.0` * `^0.0.3` := `>=0.0.3 <0.0.4` * `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. * `^0.0.3-beta` := `>=0.0.3-beta <0.0.4` Note that prereleases in the `0.0.3` version *only* will be allowed, if they are greater than or equal to `beta`. So, `0.0.3-pr.2` would be allowed. When parsing caret ranges, a missing `patch` value desugars to the number `0`, but will allow flexibility within that value, even if the major and minor versions are both `0`. * `^1.2.x` := `>=1.2.0 <2.0.0` * `^0.0.x` := `>=0.0.0 <0.1.0` * `^0.0` := `>=0.0.0 <0.1.0` A missing `minor` and `patch` values will desugar to zero, but also allow flexibility within those values, even if the major version is zero. * `^1.x` := `>=1.0.0 <2.0.0` * `^0.x` := `>=0.0.0 <1.0.0` ### Range Grammar Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors: ```bnf range-set ::= range ( logical-or range ) * logical-or ::= ( ' ' ) * '||' ( ' ' ) * range ::= hyphen | simple ( ' ' simple ) * | '' hyphen ::= partial ' - ' partial simple ::= primitive | partial | tilde | caret primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )? xr ::= 'x' | 'X' | '*' | nr nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) * tilde ::= '~' partial caret ::= '^' partial qualifier ::= ( '-' pre )? ( '+' build )? pre ::= parts build ::= parts parts ::= part ( '.' part ) * part ::= nr | [-0-9A-Za-z]+ ``` ## Functions All methods and classes take a final `options` object argument. All options in this object are `false` by default. The options supported are: - `loose` Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) 
For backwards compatibility reasons, if the `options` argument is a boolean value instead of an object, it is interpreted to be the `loose` param. - `includePrerelease` Set to suppress the [default behavior](https://github.com/npm/node-semver#prerelease-tags) of excluding prerelease tagged versions from ranges unless they are explicitly opted into. Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse. * `valid(v)`: Return the parsed version, or null if it's not valid. * `inc(v, release)`: Return the version incremented by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if it's not valid * `premajor` in one call will bump the version up to the next major version and down to a prerelease of that major version. `preminor`, and `prepatch` work the same way. * If called from a non-prerelease version, the `prerelease` will work the same as `prepatch`. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it. * `prerelease(v)`: Returns an array of prerelease components, or null if none exist. Example: `prerelease('1.2.3-alpha.1') -> ['alpha', 1]` * `major(v)`: Return the major version number. * `minor(v)`: Return the minor version number. * `patch(v)`: Return the patch version number. * `intersects(r1, r2, loose)`: Return true if the two supplied ranges or comparators intersect. * `parse(v)`: Attempt to parse a string as a semantic version, returning either a `SemVer` object or `null`. ### Comparison * `gt(v1, v2)`: `v1 > v2` * `gte(v1, v2)`: `v1 >= v2` * `lt(v1, v2)`: `v1 < v2` * `lte(v1, v2)`: `v1 <= v2` * `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent, even if they're not the exact same string. You already know how to compare strings. * `neq(v1, v2)`: `v1 != v2` The opposite of `eq`. * `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call the corresponding function above. `"==="` and `"!=="` do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided. * `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if `v2` is greater. Sorts in ascending order if passed to `Array.sort()`. * `rcompare(v1, v2)`: The reverse of compare. Sorts an array of versions in descending order when passed to `Array.sort()`. * `diff(v1, v2)`: Returns difference between two versions by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if the versions are the same. ### Comparators * `intersects(comparator)`: Return true if the comparators intersect ### Ranges * `validRange(range)`: Return the valid range or null if it's not valid * `satisfies(version, range)`: Return true if the version satisfies the range. * `maxSatisfying(versions, range)`: Return the highest version in the list that satisfies the range, or `null` if none of them do. * `minSatisfying(versions, range)`: Return the lowest version in the list that satisfies the range, or `null` if none of them do. * `minVersion(range)`: Return the lowest version that can possibly match the given range. * `gtr(version, range)`: Return `true` if version is greater than all the versions possible in the range. * `ltr(version, range)`: Return `true` if version is less than all the versions possible in the range. 
* `outside(version, range, hilo)`: Return true if the version is outside the bounds of the range in either the high or low direction. The `hilo` argument must be either the string `'>'` or `'<'`. (This is the function called by `gtr` and `ltr`.) * `intersects(range)`: Return true if any of the ranges comparators intersect Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, *or* satisfy a range! For example, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9` until `2.0.0`, so the version `1.2.10` would not be greater than the range (because `2.0.1` satisfies, which is higher), nor less than the range (since `1.2.8` satisfies, which is lower), and it also does not satisfy the range. If you want to know if a version satisfies or does not satisfy a range, use the `satisfies(version, range)` function. ### Coercion * `coerce(version)`: Coerces a string to semver if possible This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., `1`, `1.2`, `1.2.3`) up to the max permitted length (256 characters). Longer versions are simply truncated (`4.6.3.9.2-alpha2` becomes `4.6.3`). All surrounding text is simply ignored (`v3.4 replaces v3.3.1` becomes `3.4.0`). Only text which lacks digits will fail coercion (`version one` is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (`10000000000000000.4.7.4` becomes `4.7.4`). The maximum value for any semver component is `Number.MAX_SAFE_INTEGER || (2**53 - 1)`; higher value components are invalid (`9999999999999999.4.7.4` is likely invalid). # AssemblyScript Loader A convenient loader for [AssemblyScript](https://assemblyscript.org) modules. Demangles module exports to a friendly object structure compatible with TypeScript definitions and provides useful utility to read/write data from/to memory. [Documentation](https://assemblyscript.org/loader.html) <a name="table"></a> # Table > Produces a string that represents array data in a text table. [![Github action status](https://github.com/gajus/table/actions/workflows/main.yml/badge.svg)](https://github.com/gajus/table/actions) [![Coveralls](https://img.shields.io/coveralls/gajus/table.svg?style=flat-square)](https://coveralls.io/github/gajus/table) [![NPM version](http://img.shields.io/npm/v/table.svg?style=flat-square)](https://www.npmjs.org/package/table) [![Canonical Code Style](https://img.shields.io/badge/code%20style-canonical-blue.svg?style=flat-square)](https://github.com/gajus/canonical) [![Twitter Follow](https://img.shields.io/twitter/follow/kuizinas.svg?style=social&label=Follow)](https://twitter.com/kuizinas) * [Table](#table) * [Features](#table-features) * [Install](#table-install) * [Usage](#table-usage) * [API](#table-api) * [table](#table-api-table-1) * [createStream](#table-api-createstream) * [getBorderCharacters](#table-api-getbordercharacters) ![Demo of table displaying a list of missions to the Moon.](./.README/demo.png) <a name="table-features"></a> ## Features * Works with strings containing [fullwidth](https://en.wikipedia.org/wiki/Halfwidth_and_fullwidth_forms) characters. * Works with strings containing [ANSI escape codes](https://en.wikipedia.org/wiki/ANSI_escape_code). * Configurable border characters. * Configurable content alignment per column. 
* Configurable content padding per column. * Configurable column width. * Text wrapping. <a name="table-install"></a> ## Install ```bash npm install table ``` [![Buy Me A Coffee](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/gajus) [![Become a Patron](https://c5.patreon.com/external/logo/become_a_patron_button.png)](https://www.patreon.com/gajus) <a name="table-usage"></a> ## Usage ```js import { table } from 'table'; // Using commonjs? // const { table } = require('table'); const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; console.log(table(data)); ``` ``` ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════╧════╝ ``` <a name="table-api"></a> ## API <a name="table-api-table-1"></a> ### table Returns the string in the table format **Parameters:** - **_data_:** The data to display - Type: `any[][]` - Required: `true` - **_config_:** Table configuration - Type: `object` - Required: `false` <a name="table-api-table-1-config-border"></a> ##### config.border Type: `{ [type: string]: string }`\ Default: `honeywell` [template](#getbordercharacters) Custom borders. The keys are any of: - `topLeft`, `topRight`, `topBody`,`topJoin` - `bottomLeft`, `bottomRight`, `bottomBody`, `bottomJoin` - `joinLeft`, `joinRight`, `joinBody`, `joinJoin` - `bodyLeft`, `bodyRight`, `bodyJoin` - `headerJoin` ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { border: { topBody: `─`, topJoin: `┬`, topLeft: `┌`, topRight: `┐`, bottomBody: `─`, bottomJoin: `┴`, bottomLeft: `└`, bottomRight: `┘`, bodyLeft: `│`, bodyRight: `│`, bodyJoin: `│`, joinBody: `─`, joinLeft: `├`, joinRight: `┤`, joinJoin: `┼` } }; console.log(table(data, config)); ``` ``` ┌────┬────┬────┐ │ 0A │ 0B │ 0C │ ├────┼────┼────┤ │ 1A │ 1B │ 1C │ ├────┼────┼────┤ │ 2A │ 2B │ 2C │ └────┴────┴────┘ ``` <a name="table-api-table-1-config-drawverticalline"></a> ##### config.drawVerticalLine Type: `(lineIndex: number, columnCount: number) => boolean`\ Default: `() => true` It is used to tell whether to draw a vertical line. This callback is called for each vertical border of the table. If the table has `n` columns, then the `index` parameter is alternatively received all numbers in range `[0, n]` inclusively. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ['3A', '3B', '3C'], ['4A', '4B', '4C'] ]; const config = { drawVerticalLine: (lineIndex, columnCount) => { return lineIndex === 0 || lineIndex === columnCount; } }; console.log(table(data, config)); ``` ``` ╔════════════╗ ║ 0A 0B 0C ║ ╟────────────╢ ║ 1A 1B 1C ║ ╟────────────╢ ║ 2A 2B 2C ║ ╟────────────╢ ║ 3A 3B 3C ║ ╟────────────╢ ║ 4A 4B 4C ║ ╚════════════╝ ``` <a name="table-api-table-1-config-drawhorizontalline"></a> ##### config.drawHorizontalLine Type: `(lineIndex: number, rowCount: number) => boolean`\ Default: `() => true` It is used to tell whether to draw a horizontal line. This callback is called for each horizontal border of the table. If the table has `n` rows, then the `index` parameter is alternatively received all numbers in range `[0, n]` inclusively. If the table has `n` rows and contains the header, then the range will be `[0, n+1]` inclusively. 
```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ['3A', '3B', '3C'], ['4A', '4B', '4C'] ]; const config = { drawHorizontalLine: (lineIndex, rowCount) => { return lineIndex === 0 || lineIndex === 1 || lineIndex === rowCount - 1 || lineIndex === rowCount; } }; console.log(table(data, config)); ``` ``` ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ║ 2A │ 2B │ 2C ║ ║ 3A │ 3B │ 3C ║ ╟────┼────┼────╢ ║ 4A │ 4B │ 4C ║ ╚════╧════╧════╝ ``` <a name="table-api-table-1-config-singleline"></a> ##### config.singleLine Type: `boolean`\ Default: `false` If `true`, horizontal lines inside the table are not drawn. This option also overrides the `config.drawHorizontalLine` if specified. ```js const data = [ ['-rw-r--r--', '1', 'pandorym', 'staff', '1529', 'May 23 11:25', 'LICENSE'], ['-rw-r--r--', '1', 'pandorym', 'staff', '16327', 'May 23 11:58', 'README.md'], ['drwxr-xr-x', '76', 'pandorym', 'staff', '2432', 'May 23 12:02', 'dist'], ['drwxr-xr-x', '634', 'pandorym', 'staff', '20288', 'May 23 11:54', 'node_modules'], ['-rw-r--r--', '1,', 'pandorym', 'staff', '525688', 'May 23 11:52', 'package-lock.json'], ['-rw-r--r--@', '1', 'pandorym', 'staff', '2440', 'May 23 11:25', 'package.json'], ['drwxr-xr-x', '27', 'pandorym', 'staff', '864', 'May 23 11:25', 'src'], ['drwxr-xr-x', '20', 'pandorym', 'staff', '640', 'May 23 11:25', 'test'], ]; const config = { singleLine: true }; console.log(table(data, config)); ``` ``` ╔═════════════╤═════╤══════════╤═══════╤════════╤══════════════╤═══════════════════╗ ║ -rw-r--r-- │ 1 │ pandorym │ staff │ 1529 │ May 23 11:25 │ LICENSE ║ ║ -rw-r--r-- │ 1 │ pandorym │ staff │ 16327 │ May 23 11:58 │ README.md ║ ║ drwxr-xr-x │ 76 │ pandorym │ staff │ 2432 │ May 23 12:02 │ dist ║ ║ drwxr-xr-x │ 634 │ pandorym │ staff │ 20288 │ May 23 11:54 │ node_modules ║ ║ -rw-r--r-- │ 1, │ pandorym │ staff │ 525688 │ May 23 11:52 │ package-lock.json ║ ║ -rw-r--r--@ │ 1 │ pandorym │ staff │ 2440 │ May 23 11:25 │ package.json ║ ║ drwxr-xr-x │ 27 │ pandorym │ staff │ 864 │ May 23 11:25 │ src ║ ║ drwxr-xr-x │ 20 │ pandorym │ staff │ 640 │ May 23 11:25 │ test ║ ╚═════════════╧═════╧══════════╧═══════╧════════╧══════════════╧═══════════════════╝ ``` <a name="table-api-table-1-config-columns"></a> ##### config.columns Type: `Column[] | { [columnIndex: number]: Column }` Column specific configurations. <a name="table-api-table-1-config-columns-config-columns-width"></a> ###### config.columns[*].width Type: `number`\ Default: the maximum cell widths of the column Column width (excluding the paddings). 
```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { columns: { 1: { width: 10 } } }; console.log(table(data, config)); ``` ``` ╔════╤════════════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────────────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────────────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════════════╧════╝ ``` <a name="table-api-table-1-config-columns-config-columns-alignment"></a> ###### config.columns[*].alignment Type: `'center' | 'justify' | 'left' | 'right'`\ Default: `'left'` Cell content horizontal alignment ```js const data = [ ['0A', '0B', '0C', '0D 0E 0F'], ['1A', '1B', '1C', '1D 1E 1F'], ['2A', '2B', '2C', '2D 2E 2F'], ]; const config = { columnDefault: { width: 10, }, columns: [ { alignment: 'left' }, { alignment: 'center' }, { alignment: 'right' }, { alignment: 'justify' } ], }; console.log(table(data, config)); ``` ``` ╔════════════╤════════════╤════════════╤════════════╗ ║ 0A │ 0B │ 0C │ 0D 0E 0F ║ ╟────────────┼────────────┼────────────┼────────────╢ ║ 1A │ 1B │ 1C │ 1D 1E 1F ║ ╟────────────┼────────────┼────────────┼────────────╢ ║ 2A │ 2B │ 2C │ 2D 2E 2F ║ ╚════════════╧════════════╧════════════╧════════════╝ ``` <a name="table-api-table-1-config-columns-config-columns-verticalalignment"></a> ###### config.columns[*].verticalAlignment Type: `'top' | 'middle' | 'bottom'`\ Default: `'top'` Cell content vertical alignment ```js const data = [ ['A', 'B', 'C', 'DEF'], ]; const config = { columnDefault: { width: 1, }, columns: [ { verticalAlignment: 'top' }, { verticalAlignment: 'middle' }, { verticalAlignment: 'bottom' }, ], }; console.log(table(data, config)); ``` ``` ╔═══╤═══╤═══╤═══╗ ║ A │ │ │ D ║ ║ │ B │ │ E ║ ║ │ │ C │ F ║ ╚═══╧═══╧═══╧═══╝ ``` <a name="table-api-table-1-config-columns-config-columns-paddingleft"></a> ###### config.columns[*].paddingLeft Type: `number`\ Default: `1` The number of whitespaces used to pad the content on the left. <a name="table-api-table-1-config-columns-config-columns-paddingright"></a> ###### config.columns[*].paddingRight Type: `number`\ Default: `1` The number of whitespaces used to pad the content on the right. The `paddingLeft` and `paddingRight` options do not count on the column width. So the column has `width = 5`, `paddingLeft = 2` and `paddingRight = 2` will have the total width is `9`. ```js const data = [ ['0A', 'AABBCC', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { columns: [ { paddingLeft: 3 }, { width: 2, paddingRight: 3 } ] }; console.log(table(data, config)); ``` ``` ╔══════╤══════╤════╗ ║ 0A │ AA │ 0C ║ ║ │ BB │ ║ ║ │ CC │ ║ ╟──────┼──────┼────╢ ║ 1A │ 1B │ 1C ║ ╟──────┼──────┼────╢ ║ 2A │ 2B │ 2C ║ ╚══════╧══════╧════╝ ``` <a name="table-api-table-1-config-columns-config-columns-truncate"></a> ###### config.columns[*].truncate Type: `number`\ Default: `Infinity` The number of characters is which the content will be truncated. To handle a content that overflows the container width, `table` package implements [text wrapping](#config.columns[*].wrapWord). However, sometimes you may want to truncate content that is too long to be displayed in the table. ```js const data = [ ['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.'] ]; const config = { columns: [ { width: 20, truncate: 100 } ] }; console.log(table(data, config)); ``` ``` ╔══════════════════════╗ ║ Lorem ipsum dolor si ║ ║ t amet, consectetur ║ ║ adipiscing elit. 
Pha ║ ║ sellus pulvinar nibh ║ ║ sed mauris convall… ║ ╚══════════════════════╝ ``` <a name="table-api-table-1-config-columns-config-columns-wrapword"></a> ###### config.columns[*].wrapWord Type: `boolean`\ Default: `false` The `table` package implements auto text wrapping, i.e., text that has the width greater than the container width will be separated into multiple lines at the nearest space or one of the special characters: `\|/_.,;-`. When `wrapWord` is `false`: ```js const data = [ ['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.'] ]; const config = { columns: [ { width: 20 } ] }; console.log(table(data, config)); ``` ``` ╔══════════════════════╗ ║ Lorem ipsum dolor si ║ ║ t amet, consectetur ║ ║ adipiscing elit. Pha ║ ║ sellus pulvinar nibh ║ ║ sed mauris convallis ║ ║ dapibus. Nunc venena ║ ║ tis tempus nulla sit ║ ║ amet viverra. ║ ╚══════════════════════╝ ``` When `wrapWord` is `true`: ``` ╔══════════════════════╗ ║ Lorem ipsum dolor ║ ║ sit amet, ║ ║ consectetur ║ ║ adipiscing elit. ║ ║ Phasellus pulvinar ║ ║ nibh sed mauris ║ ║ convallis dapibus. ║ ║ Nunc venenatis ║ ║ tempus nulla sit ║ ║ amet viverra. ║ ╚══════════════════════╝ ``` <a name="table-api-table-1-config-columndefault"></a> ##### config.columnDefault Type: `Column`\ Default: `{}` The default configuration for all columns. Column-specific settings will overwrite the default values. <a name="table-api-table-1-config-header"></a> ##### config.header Type: `object` Header configuration. *Deprecated in favor of the new spanning cells API.* The header configuration inherits the most of the column's, except: - `content` **{string}**: the header content. - `width:` calculate based on the content width automatically. - `alignment:` `center` be default. - `verticalAlignment:` is not supported. - `config.border.topJoin` will be `config.border.topBody` for prettier. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ]; const config = { columnDefault: { width: 10, }, header: { alignment: 'center', content: 'THE HEADER\nThis is the table about something', }, } console.log(table(data, config)); ``` ``` ╔══════════════════════════════════════╗ ║ THE HEADER ║ ║ This is the table about something ║ ╟────────────┬────────────┬────────────╢ ║ 0A │ 0B │ 0C ║ ╟────────────┼────────────┼────────────╢ ║ 1A │ 1B │ 1C ║ ╟────────────┼────────────┼────────────╢ ║ 2A │ 2B │ 2C ║ ╚════════════╧════════════╧════════════╝ ``` <a name="table-api-table-1-config-spanningcells"></a> ##### config.spanningCells Type: `SpanningCellConfig[]` Spanning cells configuration. The configuration should be straightforward: just specify an array of minimal cell configurations including the position of top-left cell and the number of columns and/or rows will be expanded from it. The content of overlap cells will be ignored to make the `data` shape be consistent. By default, the configuration of column that the top-left cell belongs to will be applied to the whole spanning cell, except: * The `width` will be summed up of all spanning columns. * The `paddingRight` will be received from the right-most column intentionally. Advances customized column-like styles can be configurable to each spanning cell to overwrite the default behavior. 
```js const data = [ ['Test Coverage Report', '', '', '', '', ''], ['Module', 'Component', 'Test Cases', 'Failures', 'Durations', 'Success Rate'], ['Services', 'User', '50', '30', '3m 7s', '60.0%'], ['', 'Payment', '100', '80', '7m 15s', '80.0%'], ['Subtotal', '', '150', '110', '10m 22s', '73.3%'], ['Controllers', 'User', '24', '18', '1m 30s', '75.0%'], ['', 'Payment', '30', '24', '50s', '80.0%'], ['Subtotal', '', '54', '42', '2m 20s', '77.8%'], ['Total', '', '204', '152', '12m 42s', '74.5%'], ]; const config = { columns: [ { alignment: 'center', width: 12 }, { alignment: 'center', width: 10 }, { alignment: 'right' }, { alignment: 'right' }, { alignment: 'right' }, { alignment: 'right' } ], spanningCells: [ { col: 0, row: 0, colSpan: 6 }, { col: 0, row: 2, rowSpan: 2, verticalAlignment: 'middle'}, { col: 0, row: 4, colSpan: 2, alignment: 'right'}, { col: 0, row: 5, rowSpan: 2, verticalAlignment: 'middle'}, { col: 0, row: 7, colSpan: 2, alignment: 'right' }, { col: 0, row: 8, colSpan: 2, alignment: 'right' } ], }; console.log(table(data, config)); ``` ``` ╔══════════════════════════════════════════════════════════════════════════════╗ ║ Test Coverage Report ║ ╟──────────────┬────────────┬────────────┬──────────┬───────────┬──────────────╢ ║ Module │ Component │ Test Cases │ Failures │ Durations │ Success Rate ║ ╟──────────────┼────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ │ User │ 50 │ 30 │ 3m 7s │ 60.0% ║ ║ Services ├────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ │ Payment │ 100 │ 80 │ 7m 15s │ 80.0% ║ ╟──────────────┴────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ Subtotal │ 150 │ 110 │ 10m 22s │ 73.3% ║ ╟──────────────┬────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ │ User │ 24 │ 18 │ 1m 30s │ 75.0% ║ ║ Controllers ├────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ │ Payment │ 30 │ 24 │ 50s │ 80.0% ║ ╟──────────────┴────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ Subtotal │ 54 │ 42 │ 2m 20s │ 77.8% ║ ╟───────────────────────────┼────────────┼──────────┼───────────┼──────────────╢ ║ Total │ 204 │ 152 │ 12m 42s │ 74.5% ║ ╚═══════════════════════════╧════════════╧══════════╧═══════════╧══════════════╝ ``` <a name="table-api-createstream"></a> ### createStream `table` package exports `createStream` function used to draw a table and append rows. **Parameter:** - _**config:**_ the same as `table`'s, except `config.columnDefault.width` and `config.columnCount` must be provided. ```js import { createStream } from 'table'; const config = { columnDefault: { width: 50 }, columnCount: 1 }; const stream = createStream(config); setInterval(() => { stream.write([new Date()]); }, 500); ``` ![Streaming current date.](./.README/api/stream/streaming.gif) `table` package uses ANSI escape codes to overwrite the output of the last line when a new row is printed. The underlying implementation is explained in this [Stack Overflow answer](http://stackoverflow.com/a/32938658/368691). Streaming supports all of the configuration properties and functionality of a static table (such as auto text wrapping, alignment and padding), e.g. 
```js import { createStream } from 'table'; import _ from 'lodash'; const config = { columnDefault: { width: 50 }, columnCount: 3, columns: [ { width: 10, alignment: 'right' }, { alignment: 'center' }, { width: 10 } ] }; const stream = createStream(config); let i = 0; setInterval(() => { let random; random = _.sample('abcdefghijklmnopqrstuvwxyz', _.random(1, 30)).join(''); stream.write([i++, new Date(), random]); }, 500); ``` ![Streaming random data.](./.README/api/stream/streaming-random.gif) <a name="table-api-getbordercharacters"></a> ### getBorderCharacters **Parameter:** - **_template_** - Type: `'honeywell' | 'norc' | 'ramac' | 'void'` - Required: `true` You can load one of the predefined border templates using `getBorderCharacters` function. ```js import { table, getBorderCharacters } from 'table'; const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { border: getBorderCharacters(`name of the template`) }; console.log(table(data, config)); ``` ``` # honeywell ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════╧════╝ # norc ┌────┬────┬────┐ │ 0A │ 0B │ 0C │ ├────┼────┼────┤ │ 1A │ 1B │ 1C │ ├────┼────┼────┤ │ 2A │ 2B │ 2C │ └────┴────┴────┘ # ramac (ASCII; for use in terminals that do not support Unicode characters) +----+----+----+ | 0A | 0B | 0C | |----|----|----| | 1A | 1B | 1C | |----|----|----| | 2A | 2B | 2C | +----+----+----+ # void (no borders; see "borderless table" section of the documentation) 0A 0B 0C 1A 1B 1C 2A 2B 2C ``` Raise [an issue](https://github.com/gajus/table/issues) if you'd like to contribute a new border template. <a name="table-api-getbordercharacters-borderless-table"></a> #### Borderless Table Simply using `void` border character template creates a table with a lot of unnecessary spacing. To create a more pleasant to the eye table, reset the padding and remove the joining rows, e.g. ```js const output = table(data, { border: getBorderCharacters('void'), columnDefault: { paddingLeft: 0, paddingRight: 1 }, drawHorizontalLine: () => false } ); console.log(output); ``` ``` 0A 0B 0C 1A 1B 1C 2A 2B 2C ``` # line-column [![Build Status](https://travis-ci.org/io-monad/line-column.svg?branch=master)](https://travis-ci.org/io-monad/line-column) [![Coverage Status](https://coveralls.io/repos/github/io-monad/line-column/badge.svg?branch=master)](https://coveralls.io/github/io-monad/line-column?branch=master) [![npm version](https://badge.fury.io/js/line-column.svg)](https://badge.fury.io/js/line-column) Node module to convert efficiently index to/from line-column in a string. ## Install npm install line-column ## Usage ### lineColumn(str, options = {}) Returns a `LineColumnFinder` instance for given string `str`. #### Options | Key | Description | Default | | ------- | ----------- | ------- | | `origin` | The origin value of line number and column number | `1` | ### lineColumn(str, index) This is just a shorthand for `lineColumn(str).fromIndex(index)`. ### LineColumnFinder#fromIndex(index) Find line and column from index in the string. Parameters: - `index` - `number` Index in the string. (0-origin) Returns: - `{ line: x, col: y }` Found line number and column number. - `null` if the given index is out of range. ### LineColumnFinder#toIndex(line, column) Find index from line and column in the string. Parameters: - `line` - `number` Line number in the string. - `column` - `number` Column number in the string. 
or

- `{ line: x, col: y }` - `Object` line and column numbers in the string.<br>A key name `column` can be used instead of `col`.

or

- `[ line, col ]` - `Array` line and column numbers in the string.

Returns:

- `number` Found index in the string.
- `-1` if the given line or column is out of range.

## Example

```js
var lineColumn = require("line-column");

var testString = [
  "ABCDEFG\n",        // line:0, index:0
  "HIJKLMNOPQRSTU\n", // line:1, index:8
  "VWXYZ\n",          // line:2, index:23
  "日本語の文字\n",     // line:3, index:29
  "English words"     // line:4, index:36
].join("");           // length:49

lineColumn(testString).fromIndex(3)   // { line: 1, col: 4 }
lineColumn(testString).fromIndex(33)  // { line: 4, col: 5 }
lineColumn(testString).toIndex(1, 4)  // 3
lineColumn(testString).toIndex(4, 5)  // 33

// Shorthand of .fromIndex (compatible with find-line-column)
lineColumn(testString, 33)  // { line: 4, col: 5 }

// Object or Array is also acceptable
lineColumn(testString).toIndex({ line: 4, col: 5 })     // 33
lineColumn(testString).toIndex({ line: 4, column: 5 })  // 33
lineColumn(testString).toIndex([4, 5])                  // 33

// You can cache the finder for the same string. It is far more efficient. (See benchmark)
var finder = lineColumn(testString);
finder.fromIndex(33)  // { line: 4, col: 5 }
finder.toIndex(4, 5)  // 33

// For 0-origin line and column numbers
var zeroOrigin = lineColumn(testString, { origin: 0 });
zeroOrigin.fromIndex(33)  // { line: 3, col: 4 }
zeroOrigin.toIndex(3, 4)  // 33
```

## Testing

    npm test

## Benchmark

The popular package [find-line-column](https://www.npmjs.com/package/find-line-column) provides the same "index to line-column" feature. Here is some benchmarking on `line-column` vs `find-line-column`.

You can run this benchmark by `npm run benchmark`. See [benchmark/](benchmark/) for the source code.

```
long text + line-column (not cached) x 72,989 ops/sec ±0.83% (89 runs sampled)
long text + line-column (cached) x 13,074,242 ops/sec ±0.32% (89 runs sampled)
long text + find-line-column x 33,887 ops/sec ±0.54% (84 runs sampled)
short text + line-column (not cached) x 1,636,766 ops/sec ±0.77% (82 runs sampled)
short text + line-column (cached) x 21,699,686 ops/sec ±1.04% (82 runs sampled)
short text + find-line-column x 382,145 ops/sec ±1.04% (85 runs sampled)
```

As you might have noticed, even the non-cached version of `line-column` is 2x - 4x faster than `find-line-column`, and the cached version of `line-column` is a remarkable 50x - 380x faster.

## Contributing

1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -am 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5. Submit a pull request :D

## License

MIT (See LICENSE)

is2
===

is2 is a type-checking module for JavaScript to test values. is2 does not throw exceptions, and every function only returns true or false. Use is2 to validate types in your node.js code.

## Installation

To install is2, type:

    $ npm install is2

## Usage

    const is = require('is2');

    console.log(`1===1 is: ${is.equal(true, 1===1)}`);
    console.log(`10 is a positive number: ${is.positiveNumber(10)}`);
    console.log(`11 is an odd number: ${is.oddNumber(11)}`);

## API

Each function returns true or false. The names after the '-' are aliases, which provide brevity.
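For example (a small illustration; each alias refers to the same predicate, as listed below):

```js
const is = require('is2');

is.number(42);           // true
is.num(42);              // true  (alias of is.number)
is.positiveNumber(10);   // true
is.pos(10);              // true  (alias of is.positiveNumber)
is.emptyString('hello'); // false
```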
Environment: * is.browser() * is.defined(val) - is.def * is.nodejs() - is.node() * is.undefined(val) - is.udef, is.undef Types: * is.array(val) - is.ary, is.arry * is.arrayLike(val) - is.arryLike, is.aryLike, is.arrLike * is.arguments(val) - is.args * is.boolean(val) - is.bool * is.buffer(val) - is.buf, is.buff * is.date(val) * is.error(val) - is.err * is.false(val) * is.function(val) - is.funct, is.fun * is.mongoId - is.objectId, is.objId * is.null(val) * is.nullOrUndefined(val) - is.nullOrUndef * is.number(val) - is.num * is.object(val) - is.obj * is.regExp(val) - is.regexp, is.re * is.string(val) - is.str * is.true(val) * is.uuid(val) Relationships: * is.equal(val, other) - is.eq, is.objEquals * is.hosted(val, host) * is.instanceOf(val, constructor) - is.instOf, is.instanceof * is.matching(val1, val2 [, val3, ...]) - is.match : true if the first arument is strictly equal to any of the subsequent args. * is.objectInstanceof(obj, objType) - is.instOf, is.instanceOf, is.objInstOf, is.objectInstanceOf * is.type(val, type) - is.a * is.enumerator(val, array) - is.enum, is.inArray Object State: * is.empty(val) * is.emptyArguments(val) - is.emptyArgs, is.noArgs * is.emptyArray(val) - is.emptyArry, is.emptyAry, is.emptyArray * is.emptyArrayLike(val) - is.emptyArrLike * is.emptyString(val) - is.emptyStr * is.nonEmptyArray(val) - is.nonEmptyArry, is.nonEmptyAry * is.nonEmptyObject(val) - is.nonEmptyObj * is.emptyObject(val) - is.emptyObj * is.nonEmptyString(val) - is.nonEmptyStr Numeric Types within Number: * is.even(val) - is.evenNum, is.evenNumber * is.decimal(val) - is.decNum, is.dec * is.integer(val) - is.int * is.notANumber(val) - is.nan, is.notANum * is.odd(val) - is.oddNum, is.oddNumber Numeric Type and State: * is.positiveNumber(val) - is.pos, is.positive, is.posNum, is.positiveNum * is.negativeNumber(val) - is.neg, is.negNum, is.negativeNum, is.negativeNumber * is.negativeInteger(val) - is.negativeInt, is.negInt * is.positiveInteger(val) - is.posInt, is.positiveInt Numeric Relationship: * is.divisibleBy(val, other) - is.divisBy, is.divBy * is.greaterOrEqualTo(val, other) - is.ge, is.greaterOrEqual * is.greaterThan(val, other) - is.gt * is.lessThanOrEqualTo(val, other) - is.lessThanOrEq, is.lessThanOrEqual, is.le * is.lessThan(val, other) - is.lt * is.maximum(val, array) - is.max * is.minimum(val, array) - is.min * is.withIn(val, start, finish) - is.within * is.prettyClose(val, comp, precision) - is.closish, is.near Networking: * is.dnsAddress(val) - is.dnsAddr, is.dns * is.emailAddress(val) - is.email, is.emailAddr * is.ipv4Address(val) - is.ipv4, is.ipv4Addr * is.ipv6Address(val) - is.ipv6, is.ipv6Addr * is.ipAddress(val) - is.ip, is.ipAddr * is.hostAddress(val) - is.host = is.hostIp = is.hostAddr * is.port(val) * is.systemPort(val) - is.sysPort * is.url(val) - is.uri * is.userPort(val) Credit Cards: * is.creditCardNumber(str) - is.creditCard, is.creditCardNum * is.americanExpressCardNumber(str) - is.amexCardNum, is.amexCard * is.chinaUnionPayCardNumber(str) - is.chinaUnionPayCard, is.chinaUnion * is.dankortCardNumber(str) - is.dankortCard, is.dankort * is.dinersClubCarteBlancheCardNumber(str) - is.dinersClubCarteBlancheCard, is.dinersClubCB * is.dinersClubInternationalCardNumber(str) - is.dinersClubInternationalCard, is.dinersClubInt * is.dinersClubUSACanadaCardNumber(str) - is.dinersClubUSACanCard, is.dinersClub * is.discoverCardNumber(str) - is.discoverCard, is.discover * is.instaPaymentCardNumber(str) - is.instaPayment * is.jcbCardNumber(str) - is.jcbCard, is.jcb * 
is.laserCardNumber(str) - is.laserCard, is.laser
* is.maestroCardNumber(str) - is.maestroCard, is.maestro
* is.masterCardCardNumber(str) - is.masterCardCard, is.masterCard
* is.visaCardNumber(str) - is.visaCard, is.visa
* is.visaElectronCardNumber(str) - is.visaElectronCard, is.visaElectron

Personal information:

* is.streetAddress(str) - is.street, is.address
* is.zipCode(str) - is.zip
* is.phoneNumber(str) - is.phone

## License

The MIT License (MIT)

Copyright (c) 2013,2014 Edmond Meinfelder
Copyright (c) 2011 Enrico Marino

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

# flat-cache

> A stupidly simple key/value storage using files to persist the data

[![NPM Version](http://img.shields.io/npm/v/flat-cache.svg?style=flat)](https://npmjs.org/package/flat-cache) [![Build Status](https://api.travis-ci.org/royriojas/flat-cache.svg?branch=master)](https://travis-ci.org/royriojas/flat-cache)

## install

```bash
npm i --save flat-cache
```

## Usage

```js
var flatCache = require('flat-cache')

// loads the cache; if one does not exist for the given
// id, a new one will be prepared to be created
var cache = flatCache.load('cacheId');

// sets a key on the cache
cache.setKey('key', { foo: 'var' });

// get a key from the cache
cache.getKey('key') // { foo: 'var' }

// fetch the entire persisted object
cache.all() // { 'key': { foo: 'var' } }

// remove a key
cache.removeKey('key'); // removes a key from the cache

// save it to disk
cache.save(); // very important: if you don't save, no changes will be persisted.
// cache.save( true /* noPrune */) // can be used to prevent the removal of non-visited keys

// loads the cache from a given directory; if one does
// not exist for the given id, a new one will be prepared to be created
var cache = flatCache.load('cacheId', path.resolve('./path/to/folder'));

// The following methods are useful to clear the cache

// delete a given cache
flatCache.clearCacheById('cacheId') // removes the cacheId document if one exists.

// delete all caches
flatCache.clearAll(); // remove the cache directory
```

## Motivation for this module

I needed a super simple and dumb **in-memory cache** with optional disk persistence in order to make a script that beautifies files with `esformatter` run only on the files that were changed since the last run. To make that possible we need to store the `fileSize` and `modificationTime` of the files. So a simple `key/value` storage was needed and Bam! this module was born.
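As a sketch of that use case (the file names and the `hasChanged` helper are hypothetical; only `load`, `getKey`, `setKey` and `save` come from the flat-cache API shown above), skipping unchanged files might look like this:

```js
var fs = require('fs');
var flatCache = require('flat-cache');

var cache = flatCache.load('files-meta');

// true when the file's size or modification time changed since the last run
function hasChanged(filePath) {
  var stat = fs.statSync(filePath);
  var meta = { size: stat.size, mtime: stat.mtime.getTime() };
  var prev = cache.getKey(filePath);
  var changed = !prev || prev.size !== meta.size || prev.mtime !== meta.mtime;
  cache.setKey(filePath, meta);
  return changed;
}

['a.js', 'b.js'].filter(hasChanged).forEach(function (file) {
  console.log('needs formatting:', file); // run the expensive work only on these
});

cache.save(true /* noPrune */); // persist the metadata for the next run
```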
## Important notes

- If no directory is specified when the `load` method is called, a folder named `.cache` will be created inside the module directory when `cache.save` is called. If you're committing your `node_modules` to any vcs, you might want to ignore the default `.cache` folder, or specify a custom directory.
- The values set on the keys of the cache should be `stringify-able` ones, meaning no circular references.
- All changes to the cache state are kept in memory.
- I could have used a timer or `Object.observe` to deliver the changes to disk, but I wanted to keep this module intentionally dumb and simple.
- Non-visited keys are removed when `cache.save()` is called. If this is not desired, you can pass `true` to the save call like: `cache.save( true /* noPrune */ )`.

## License

MIT

## Changelog

[changelog](./changelog.md)

# universal-url

[![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Dependency Monitor][greenkeeper-image]][greenkeeper-url]

> WHATWG [`URL`](https://developer.mozilla.org/en/docs/Web/API/URL) for Node & Browser.

* For Node.js versions `>= 8`, the native implementation will be used.
* For Node.js versions `< 8`, a [shim](https://npmjs.com/whatwg-url) will be used.
* For web browsers without a native implementation, the same shim will be used.

## Installation

[Node.js](http://nodejs.org/) `>= 6` is required. To install, type this at the command line:

```shell
npm install universal-url
```

## Usage

```js
const {URL, URLSearchParams} = require('universal-url');

const url = new URL('http://domain/');
const params = new URLSearchParams('?param=value');
```

Global shim:

```js
require('universal-url').shim();

const url = new URL('http://domain/');
const params = new URLSearchParams('?param=value');
```

## Browserify/etc

The bundled file size of this library can be large for a web browser. If this is a problem, try using [universal-url-lite](https://npmjs.com/universal-url-lite) in your build as an alias for this module.

[npm-image]: https://img.shields.io/npm/v/universal-url.svg
[npm-url]: https://npmjs.org/package/universal-url
[travis-image]: https://img.shields.io/travis/stevenvachon/universal-url.svg
[travis-url]: https://travis-ci.org/stevenvachon/universal-url
[greenkeeper-image]: https://badges.greenkeeper.io/stevenvachon/universal-url.svg
[greenkeeper-url]: https://greenkeeper.io/

<h1 align="center">
  <img width="250" src="https://rawgit.com/lukechilds/keyv/master/media/logo.svg" alt="keyv">
  <br>
  <br>
</h1>

> Simple key-value storage with support for multiple backends

[![Build Status](https://travis-ci.org/lukechilds/keyv.svg?branch=master)](https://travis-ci.org/lukechilds/keyv) [![Coverage Status](https://coveralls.io/repos/github/lukechilds/keyv/badge.svg?branch=master)](https://coveralls.io/github/lukechilds/keyv?branch=master) [![npm](https://img.shields.io/npm/dm/keyv.svg)](https://www.npmjs.com/package/keyv) [![npm](https://img.shields.io/npm/v/keyv.svg)](https://www.npmjs.com/package/keyv)

Keyv provides a consistent interface for key-value storage across multiple backends via storage adapters. It supports TTL based expiry, making it suitable as a cache or a persistent key-value store.
## Features There are a few existing modules similar to Keyv, however Keyv is different because it: - Isn't bloated - Has a simple Promise based API - Suitable as a TTL based cache or persistent key-value store - [Easily embeddable](#add-cache-support-to-your-module) inside another module - Works with any storage that implements the [`Map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) API - Handles all JSON types plus `Buffer` - Supports namespaces - Wide range of [**efficient, well tested**](#official-storage-adapters) storage adapters - Connection errors are passed through (db failures won't kill your app) - Supports the current active LTS version of Node.js or higher ## Usage Install Keyv. ``` npm install --save keyv ``` By default everything is stored in memory, you can optionally also install a storage adapter. ``` npm install --save @keyv/redis npm install --save @keyv/mongo npm install --save @keyv/sqlite npm install --save @keyv/postgres npm install --save @keyv/mysql ``` Create a new Keyv instance, passing your connection string if applicable. Keyv will automatically load the correct storage adapter. ```js const Keyv = require('keyv'); // One of the following const keyv = new Keyv(); const keyv = new Keyv('redis://user:pass@localhost:6379'); const keyv = new Keyv('mongodb://user:pass@localhost:27017/dbname'); const keyv = new Keyv('sqlite://path/to/database.sqlite'); const keyv = new Keyv('postgresql://user:pass@localhost:5432/dbname'); const keyv = new Keyv('mysql://user:pass@localhost:3306/dbname'); // Handle DB connection errors keyv.on('error', err => console.log('Connection Error', err)); await keyv.set('foo', 'expires in 1 second', 1000); // true await keyv.set('foo', 'never expires'); // true await keyv.get('foo'); // 'never expires' await keyv.delete('foo'); // true await keyv.clear(); // undefined ``` ### Namespaces You can namespace your Keyv instance to avoid key collisions and allow you to clear only a certain namespace while using the same database. ```js const users = new Keyv('redis://user:pass@localhost:6379', { namespace: 'users' }); const cache = new Keyv('redis://user:pass@localhost:6379', { namespace: 'cache' }); await users.set('foo', 'users'); // true await cache.set('foo', 'cache'); // true await users.get('foo'); // 'users' await cache.get('foo'); // 'cache' await users.clear(); // undefined await users.get('foo'); // undefined await cache.get('foo'); // 'cache' ``` ### Custom Serializers Keyv uses [`json-buffer`](https://github.com/dominictarr/json-buffer) for data serialization to ensure consistency across different backends. You can optionally provide your own serialization functions to support extra data types or to serialize to something other than JSON. ```js const keyv = new Keyv({ serialize: JSON.stringify, deserialize: JSON.parse }); ``` **Warning:** Using custom serializers means you lose any guarantee of data consistency. You should do extensive testing with your serialisation functions and chosen storage engine. ## Official Storage Adapters The official storage adapters are covered by [over 150 integration tests](https://travis-ci.org/lukechilds/keyv/jobs/260418145) to guarantee consistent behaviour. They are lightweight, efficient wrappers over the DB clients making use of indexes and native TTLs where available. 
Database | Adapter | Native TTL | Status ---|---|---|--- Redis | [@keyv/redis](https://github.com/lukechilds/keyv-redis) | Yes | [![Build Status](https://travis-ci.org/lukechilds/keyv-redis.svg?branch=master)](https://travis-ci.org/lukechilds/keyv-redis) [![Coverage Status](https://coveralls.io/repos/github/lukechilds/keyv-redis/badge.svg?branch=master)](https://coveralls.io/github/lukechilds/keyv-redis?branch=master) MongoDB | [@keyv/mongo](https://github.com/lukechilds/keyv-mongo) | Yes | [![Build Status](https://travis-ci.org/lukechilds/keyv-mongo.svg?branch=master)](https://travis-ci.org/lukechilds/keyv-mongo) [![Coverage Status](https://coveralls.io/repos/github/lukechilds/keyv-mongo/badge.svg?branch=master)](https://coveralls.io/github/lukechilds/keyv-mongo?branch=master) SQLite | [@keyv/sqlite](https://github.com/lukechilds/keyv-sqlite) | No | [![Build Status](https://travis-ci.org/lukechilds/keyv-sqlite.svg?branch=master)](https://travis-ci.org/lukechilds/keyv-sqlite) [![Coverage Status](https://coveralls.io/repos/github/lukechilds/keyv-sqlite/badge.svg?branch=master)](https://coveralls.io/github/lukechilds/keyv-sqlite?branch=master) PostgreSQL | [@keyv/postgres](https://github.com/lukechilds/keyv-postgres) | No | [![Build Status](https://travis-ci.org/lukechilds/keyv-postgres.svg?branch=master)](https://travis-ci.org/lukechildskeyv-postgreskeyv) [![Coverage Status](https://coveralls.io/repos/github/lukechilds/keyv-postgres/badge.svg?branch=master)](https://coveralls.io/github/lukechilds/keyv-postgres?branch=master) MySQL | [@keyv/mysql](https://github.com/lukechilds/keyv-mysql) | No | [![Build Status](https://travis-ci.org/lukechilds/keyv-mysql.svg?branch=master)](https://travis-ci.org/lukechilds/keyv-mysql) [![Coverage Status](https://coveralls.io/repos/github/lukechilds/keyv-mysql/badge.svg?branch=master)](https://coveralls.io/github/lukechilds/keyv-mysql?branch=master) ## Third-party Storage Adapters You can also use third-party storage adapters or build your own. Keyv will wrap these storage adapters in TTL functionality and handle complex types internally. ```js const Keyv = require('keyv'); const myAdapter = require('./my-storage-adapter'); const keyv = new Keyv({ store: myAdapter }); ``` Any store that follows the [`Map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) api will work. ```js new Keyv({ store: new Map() }); ``` For example, [`quick-lru`](https://github.com/sindresorhus/quick-lru) is a completely unrelated module that implements the Map API. ```js const Keyv = require('keyv'); const QuickLRU = require('quick-lru'); const lru = new QuickLRU({ maxSize: 1000 }); const keyv = new Keyv({ store: lru }); ``` The following are third-party storage adapters compatible with Keyv: - [quick-lru](https://github.com/sindresorhus/quick-lru) - Simple "Least Recently Used" (LRU) cache - [keyv-file](https://github.com/zaaack/keyv-file) - File system storage adapter for Keyv - [keyv-dynamodb](https://www.npmjs.com/package/keyv-dynamodb) - DynamoDB storage adapter for Keyv ## Add Cache Support to your Module Keyv is designed to be easily embedded into other modules to add cache support. The recommended pattern is to expose a `cache` option in your modules options which is passed through to Keyv. Caching will work in memory by default and users have the option to also install a Keyv storage adapter and pass in a connection string, or any other storage that implements the `Map` API. 
You should also set a namespace for your module so you can safely call `.clear()` without clearing unrelated app data.

Inside your module:

```js
class AwesomeModule {
  constructor(opts) {
    this.cache = new Keyv({
      uri: typeof opts.cache === 'string' && opts.cache,
      store: typeof opts.cache !== 'string' && opts.cache,
      namespace: 'awesome-module'
    });
  }
}
```

Now it can be consumed like this:

```js
const AwesomeModule = require('awesome-module');

// Caches stuff in memory by default
const awesomeModule = new AwesomeModule();

// After npm install --save keyv-redis
const awesomeModule = new AwesomeModule({ cache: 'redis://localhost' });

// Some third-party module that implements the Map API
const awesomeModule = new AwesomeModule({ cache: some3rdPartyStore });
```

## API

### new Keyv([uri], [options])

Returns a new Keyv instance.

The Keyv instance is also an `EventEmitter` that will emit an `'error'` event if the storage adapter connection fails.

### uri

Type: `String`<br>
Default: `undefined`

The connection string URI.

Merged into the options object as `options.uri`.

### options

Type: `Object`

The options object is also passed through to the storage adapter. Check your storage adapter docs for any extra options.

#### options.namespace

Type: `String`<br>
Default: `'keyv'`

Namespace for the current instance.

#### options.ttl

Type: `Number`<br>
Default: `undefined`

Default TTL. Can be overridden by specifying a TTL on `.set()`.

#### options.serialize

Type: `Function`<br>
Default: `JSONB.stringify`

A custom serialization function.

#### options.deserialize

Type: `Function`<br>
Default: `JSONB.parse`

A custom deserialization function.

#### options.store

Type: `Storage adapter instance`<br>
Default: `new Map()`

The storage adapter instance to be used by Keyv.

#### options.adapter

Type: `String`<br>
Default: `undefined`

Specify an adapter to use. e.g. `'redis'` or `'mongodb'`.

### Instance

Keys must always be strings. Values can be of any type.

#### .set(key, value, [ttl])

Set a value.

By default keys are persistent. You can set an expiry TTL in milliseconds.

Returns `true`.

#### .get(key)

Returns the value.

#### .delete(key)

Deletes an entry.

Returns `true` if the key existed, `false` if not.

#### .clear()

Delete all entries in the current namespace.

Returns `undefined`.

## License

MIT © Luke Childs

has-unicode
===========

Try to guess if your terminal supports unicode

```javascript
var hasUnicode = require("has-unicode")

if (hasUnicode()) {
  // the terminal probably has unicode support
}
```
```javascript
var hasUnicode = require("has-unicode").tryHarder
hasUnicode(function(unicodeSupported) {
  if (unicodeSupported) {
    // the terminal probably has unicode support
  }
})
```

## Detecting Unicode

What we actually detect is UTF-8 support, as that's what Node itself supports. If you have a UTF-16 locale then you won't be detected as unicode capable.

### Windows

Since at least Windows 7, `cmd` and `powershell` have been unicode capable, but unfortunately even then it's not guaranteed. In many localizations it still uses legacy code pages and there's no facility short of running programs or linking C++ that will let us detect this. As such, we report any Windows installation as NOT unicode capable, and recommend that you encourage your users to override this via config.

### Unix Like Operating Systems

We look at the environment variables `LC_ALL`, `LC_CTYPE`, and `LANG` in that order. For `LC_ALL` and `LANG`, it looks for `.UTF-8` in the value. For `LC_CTYPE` it looks to see if the value is `UTF-8`.
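A rough sketch of that check (an approximation of the behaviour described above, not the module's actual source):

```javascript
// Approximate the POSIX detection described above: take the first of
// LC_ALL, LC_CTYPE and LANG that is set and test it for UTF-8.
function guessUnicodeSupport(env) {
  env = env || process.env
  var locale = env.LC_ALL || env.LC_CTYPE || env.LANG || ''
  return /UTF-?8$/i.test(locale)
}

guessUnicodeSupport({ LANG: 'en_US.UTF-8' }) // true
guessUnicodeSupport({ LC_ALL: 'C' })         // false
```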
This is sufficient for most POSIX systems. While locale data can be put in `/etc/locale.conf` as well, AFAIK it's always copied into the environment. # mkdirp-classic Just a non-deprecated mirror of [mkdirp 0.5.2](https://github.com/substack/node-mkdirp/tree/0.5.1) for use in modules where we depend on the non promise interface. ``` npm install mkdirp-classic ``` ## Usage ``` js // See the above link ``` ## License MIT semver(1) -- The semantic versioner for npm =========================================== ## Install ```bash npm install semver ```` ## Usage As a node module: ```js const semver = require('semver') semver.valid('1.2.3') // '1.2.3' semver.valid('a.b.c') // null semver.clean(' =v1.2.3 ') // '1.2.3' semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true semver.gt('1.2.3', '9.8.7') // false semver.lt('1.2.3', '9.8.7') // true semver.minVersion('>=1.0.0') // '1.0.0' semver.valid(semver.coerce('v2')) // '2.0.0' semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7' ``` As a command-line utility: ``` $ semver -h A JavaScript implementation of the https://semver.org/ specification Copyright Isaac Z. Schlueter Usage: semver [options] <version> [<version> [...]] Prints valid versions sorted by SemVer precedence Options: -r --range <range> Print versions that match the specified range. -i --increment [<level>] Increment a version by the specified level. Level can be one of: major, minor, patch, premajor, preminor, prepatch, or prerelease. Default level is 'patch'. Only one version may be specified. --preid <identifier> Identifier to be used to prefix premajor, preminor, prepatch or prerelease version increments. -l --loose Interpret versions and ranges loosely -p --include-prerelease Always include prerelease versions in range matching -c --coerce Coerce a string into SemVer if possible (does not imply --loose) --rtl Coerce version strings right to left --ltr Coerce version strings left to right (default) Program exits successfully if any valid version satisfies all supplied ranges, and prints all satisfying versions. If no satisfying versions are found, then exits failure. Versions are printed in ascending order, so supplying multiple versions to the utility will just sort them. ``` ## Versions A "version" is described by the `v2.0.0` specification found at <https://semver.org/>. A leading `"="` or `"v"` character is stripped off and ignored. ## Ranges A `version range` is a set of `comparators` which specify versions that satisfy the range. A `comparator` is composed of an `operator` and a `version`. The set of primitive `operators` is: * `<` Less than * `<=` Less than or equal to * `>` Greater than * `>=` Greater than or equal to * `=` Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included. For example, the comparator `>=1.2.7` would match the versions `1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` or `1.1.0`. Comparators can be joined by whitespace to form a `comparator set`, which is satisfied by the **intersection** of all of the comparators it includes. A range is composed of one or more comparator sets, joined by `||`. A version matches a range if and only if every comparator in at least one of the `||`-separated comparator sets is satisfied by the version. For example, the range `>=1.2.7 <1.3.0` would match the versions `1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, or `1.1.0`. 
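As a concrete illustration of the comparator-set rules above, here is a small sketch using the `satisfies` function documented later in this README (the ranges are the ones from the surrounding examples):

```js
const semver = require('semver')

// A comparator set is an intersection: every comparator must match.
semver.satisfies('1.2.8', '>=1.2.7 <1.3.0') // true
semver.satisfies('1.3.0', '>=1.2.7 <1.3.0') // false

// '||' joins comparator sets: matching any one set is enough.
semver.satisfies('1.2.7', '1.2.7 || >=1.2.9 <2.0.0') // true
semver.satisfies('1.2.8', '1.2.7 || >=1.2.9 <2.0.0') // false
```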
The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, `1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. ### Prerelease Tags If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then it will only be allowed to satisfy comparator sets if at least one comparator with the same `[major, minor, patch]` tuple also has a prerelease tag. For example, the range `>1.2.3-alpha.3` would be allowed to match the version `1.2.3-alpha.7`, but it would *not* be satisfied by `3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater than" `1.2.3-alpha.3` according to the SemVer sort rules. The version range only accepts prerelease tags on the `1.2.3` version. The version `3.4.5` *would* satisfy the range, because it does not have a prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author's design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics. Second, a user who has opted into using a prerelease version has clearly indicated the intent to use *that specific* set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the *next* set of prerelease versions. Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the `includePrerelease` flag on the options object to any [functions](https://github.com/npm/node-semver#functions) that do range matching. #### Prerelease Identifiers The method `.inc` takes an additional `identifier` string argument that will append the value of the string as a prerelease identifier: ```javascript semver.inc('1.2.3', 'prerelease', 'beta') // '1.2.4-beta.0' ``` command-line example: ```bash $ semver 1.2.3 -i prerelease --preid beta 1.2.4-beta.0 ``` Which then can be used to increment further: ```bash $ semver 1.2.4-beta.0 -i prerelease 1.2.4-beta.1 ``` ### Advanced Range Syntax Advanced range syntax desugars to primitive comparators in deterministic ways. Advanced ranges may be combined in the same way as primitive comparators using white space or `||`. #### Hyphen Ranges `X.Y.Z - A.B.C` Specifies an inclusive set. * `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes. * `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts. * `1.2.3 - 2.3` := `>=1.2.3 <2.4.0` * `1.2.3 - 2` := `>=1.2.3 <3.0.0` #### X-Ranges `1.2.x` `1.X` `1.2.*` `*` Any of `X`, `x`, or `*` may be used to "stand in" for one of the numeric values in the `[major, minor, patch]` tuple. * `*` := `>=0.0.0` (Any version satisfies) * `1.x` := `>=1.0.0 <2.0.0` (Matching major version) * `1.2.x` := `>=1.2.0 <1.3.0` (Matching major and minor versions) A partial version range is treated as an X-Range, so the special character is in fact optional. 
* `""` (empty string) := `*` := `>=0.0.0` * `1` := `1.x.x` := `>=1.0.0 <2.0.0` * `1.2` := `1.2.x` := `>=1.2.0 <1.3.0` #### Tilde Ranges `~1.2.3` `~1.2` `~1` Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not. * `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0` * `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0` (Same as `1.2.x`) * `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0` (Same as `1.x`) * `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0` * `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0` (Same as `0.2.x`) * `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0` (Same as `0.x`) * `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. #### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` Allows changes that do not modify the left-most non-zero element in the `[major, minor, patch]` tuple. In other words, this allows patch and minor updates for versions `1.0.0` and above, patch updates for versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. Many authors treat a `0.x` version as if the `x` were the major "breaking-change" indicator. Caret ranges are ideal when an author may make breaking changes between `0.2.4` and `0.3.0` releases, which is a common practice. However, it presumes that there will *not* be breaking changes between `0.2.4` and `0.2.5`. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices. * `^1.2.3` := `>=1.2.3 <2.0.0` * `^0.2.3` := `>=0.2.3 <0.3.0` * `^0.0.3` := `>=0.0.3 <0.0.4` * `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. * `^0.0.3-beta` := `>=0.0.3-beta <0.0.4` Note that prereleases in the `0.0.3` version *only* will be allowed, if they are greater than or equal to `beta`. So, `0.0.3-pr.2` would be allowed. When parsing caret ranges, a missing `patch` value desugars to the number `0`, but will allow flexibility within that value, even if the major and minor versions are both `0`. * `^1.2.x` := `>=1.2.0 <2.0.0` * `^0.0.x` := `>=0.0.0 <0.1.0` * `^0.0` := `>=0.0.0 <0.1.0` A missing `minor` and `patch` values will desugar to zero, but also allow flexibility within those values, even if the major version is zero. * `^1.x` := `>=1.0.0 <2.0.0` * `^0.x` := `>=0.0.0 <1.0.0` ### Range Grammar Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors: ```bnf range-set ::= range ( logical-or range ) * logical-or ::= ( ' ' ) * '||' ( ' ' ) * range ::= hyphen | simple ( ' ' simple ) * | '' hyphen ::= partial ' - ' partial simple ::= primitive | partial | tilde | caret primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )? xr ::= 'x' | 'X' | '*' | nr nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) * tilde ::= '~' partial caret ::= '^' partial qualifier ::= ( '-' pre )? ( '+' build )? pre ::= parts build ::= parts parts ::= part ( '.' part ) * part ::= nr | [-0-9A-Za-z]+ ``` ## Functions All methods and classes take a final `options` object argument. 
All options in this object are `false` by default. The options supported are:

- `loose` Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the `options` argument is a boolean value instead of an object, it is interpreted to be the `loose` param.
- `includePrerelease` Set to suppress the [default behavior](https://github.com/npm/node-semver#prerelease-tags) of excluding prerelease tagged versions from ranges unless they are explicitly opted into.

Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse.

* `valid(v)`: Return the parsed version, or null if it's not valid.
* `inc(v, release)`: Return the version incremented by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if it's not valid.
  * `premajor` in one call will bump the version up to the next major version and down to a prerelease of that major version. `preminor` and `prepatch` work the same way.
  * If called from a non-prerelease version, `prerelease` will work the same as `prepatch`. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it.
* `prerelease(v)`: Returns an array of prerelease components, or null if none exist. Example: `prerelease('1.2.3-alpha.1') -> ['alpha', 1]`
* `major(v)`: Return the major version number.
* `minor(v)`: Return the minor version number.
* `patch(v)`: Return the patch version number.
* `intersects(r1, r2, loose)`: Return true if the two supplied ranges or comparators intersect.
* `parse(v)`: Attempt to parse a string as a semantic version, returning either a `SemVer` object or `null`.

### Comparison

* `gt(v1, v2)`: `v1 > v2`
* `gte(v1, v2)`: `v1 >= v2`
* `lt(v1, v2)`: `v1 < v2`
* `lte(v1, v2)`: `v1 <= v2`
* `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent, even if they're not the exact same string. You already know how to compare strings.
* `neq(v1, v2)`: `v1 != v2` The opposite of `eq`.
* `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call the corresponding function above. `"==="` and `"!=="` do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.
* `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if `v2` is greater. Sorts in ascending order if passed to `Array.sort()`.
* `rcompare(v1, v2)`: The reverse of `compare`. Sorts an array of versions in descending order when passed to `Array.sort()`.
* `compareBuild(v1, v2)`: The same as `compare` but considers `build` when two versions are equal. Sorts in ascending order if passed to `Array.sort()`.
* `diff(v1, v2)`: Returns the difference between two versions by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if the versions are the same.

### Comparators

* `intersects(comparator)`: Return true if the comparators intersect.

### Ranges

* `validRange(range)`: Return the valid range or null if it's not valid.
* `satisfies(version, range)`: Return true if the version satisfies the range.
* `maxSatisfying(versions, range)`: Return the highest version in the list that satisfies the range, or `null` if none of them do.
* `minSatisfying(versions, range)`: Return the lowest version in the list that satisfies the range, or `null` if none of them do. * `minVersion(range)`: Return the lowest version that can possibly match the given range. * `gtr(version, range)`: Return `true` if version is greater than all the versions possible in the range. * `ltr(version, range)`: Return `true` if version is less than all the versions possible in the range. * `outside(version, range, hilo)`: Return true if the version is outside the bounds of the range in either the high or low direction. The `hilo` argument must be either the string `'>'` or `'<'`. (This is the function called by `gtr` and `ltr`.) * `intersects(range)`: Return true if any of the ranges comparators intersect Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, *or* satisfy a range! For example, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9` until `2.0.0`, so the version `1.2.10` would not be greater than the range (because `2.0.1` satisfies, which is higher), nor less than the range (since `1.2.8` satisfies, which is lower), and it also does not satisfy the range. If you want to know if a version satisfies or does not satisfy a range, use the `satisfies(version, range)` function. ### Coercion * `coerce(version, options)`: Coerces a string to semver if possible This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., `1`, `1.2`, `1.2.3`) up to the max permitted length (256 characters). Longer versions are simply truncated (`4.6.3.9.2-alpha2` becomes `4.6.3`). All surrounding text is simply ignored (`v3.4 replaces v3.3.1` becomes `3.4.0`). Only text which lacks digits will fail coercion (`version one` is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (`10000000000000000.4.7.4` becomes `4.7.4`). The maximum value for any semver component is `Integer.MAX_SAFE_INTEGER || (2**53 - 1)`; higher value components are invalid (`9999999999999999.4.7.4` is likely invalid). If the `options.rtl` flag is set, then `coerce` will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, `1.2.3.4` will return `2.3.4` in rtl mode, not `4.0.0`. `1.2.3/4` will return `4.0.0`, because the `4` is not a part of any other overlapping SemVer tuple. ### Clean * `clean(version)`: Clean a string to be a valid semver if possible This will return a cleaned and trimmed semver version. If the provided version is not valid a null will be returned. This does not work for ranges. ex. * `s.clean(' = v 2.1.5foo')`: `null` * `s.clean(' = v 2.1.5foo', { loose: true })`: `'2.1.5-foo'` * `s.clean(' = v 2.1.5-foo')`: `null` * `s.clean(' = v 2.1.5-foo', { loose: true })`: `'2.1.5-foo'` * `s.clean('=v2.1.5')`: `'2.1.5'` * `s.clean(' =v2.1.5')`: `2.1.5` * `s.clean(' 2.1.5 ')`: `'2.1.5'` * `s.clean('~1.0.0')`: `null` # detect-libc Node.js module to detect the C standard library (libc) implementation family and version in use on a given Linux system. 
Provides a value suitable for use with the `LIBC` option of [prebuild](https://www.npmjs.com/package/prebuild), [prebuild-ci](https://www.npmjs.com/package/prebuild-ci) and [prebuild-install](https://www.npmjs.com/package/prebuild-install), therefore allowing build and provision of pre-compiled binaries for musl-based Linux e.g. Alpine as well as glibc-based. Currently supports libc detection of `glibc` and `musl`. ## Install ```sh npm install detect-libc ``` ## Usage ### API ```js const { GLIBC, MUSL, family, version, isNonGlibcLinux } = require('detect-libc'); ``` * `GLIBC` is a String containing the value "glibc" for comparison with `family`. * `MUSL` is a String containing the value "musl" for comparison with `family`. * `family` is a String representing the system libc family. * `version` is a String representing the system libc version number. * `isNonGlibcLinux` is a Boolean representing whether the system is a non-glibc Linux, e.g. Alpine. ### detect-libc command line tool When run on a Linux system with a non-glibc libc, the child command will be run with the `LIBC` environment variable set to the relevant value. On all other platforms will run the child command as-is. The command line feature requires `spawnSync` provided by Node v0.12+. ```sh detect-libc child-command ``` ## Integrating with prebuild ```json "scripts": { "install": "detect-libc prebuild-install || node-gyp rebuild", "test": "mocha && detect-libc prebuild-ci" }, "dependencies": { "detect-libc": "^1.0.2", "prebuild-install": "^2.2.0" }, "devDependencies": { "prebuild": "^6.2.1", "prebuild-ci": "^2.2.3" } ``` ## Licence Copyright 2017 Lovell Fuller Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0.html) Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ## assemblyscript-temporal An implementation of temporal within AssemblyScript, with an initial focus on non-timezone-aware classes and functionality. ### Why? AssemblyScript has minimal `Date` support, however, the JS Date API itself is terrible and people tend not to use it that often. As a result libraries like moment / luxon have become staple replacements. However, there is now a [relatively mature TC39 proposal](https://github.com/tc39/proposal-temporal) that adds greatly improved date support to JS. The goal of this project is to implement Temporal for AssemblyScript. ### Usage This library currently supports the following types: #### `PlainDateTime` A `PlainDateTime` represents a calendar date and wall-clock time that does not carry time zone information, e.g. December 7th, 1995 at 3:00 PM (in the Gregorian calendar). For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaindatetime.html), this implementation follows the specification as closely as possible. You can create a `PlainDateTime` from individual components, a string or an object literal: ```javascript datetime = new PlainDateTime(1976, 11, 18, 15, 23, 30, 123, 456, 789); datetime.year; // 2019; datetime.month; // 11; // ... 
datetime.nanosecond; // 789;

datetime = PlainDateTime.from("1976-11-18T12:34:56");
datetime.toString(); // "1976-11-18T12:34:56"

datetime = PlainDateTime.from({ year: 1966, month: 3, day: 3 });
datetime.toString(); // "1966-03-03T00:00:00"
```

There are various ways you can manipulate a date:

```javascript
// use 'with' to copy a date but with various property values overridden
datetime = new PlainDateTime(1976, 11, 18, 15, 23, 30, 123, 456, 789);
datetime.with({ year: 2019 }).toString(); // "2019-11-18T15:23:30.123456789"

// use 'add' or 'subtract' to add / subtract a duration
datetime = PlainDateTime.from("2020-01-12T15:00");
datetime.add({ months: 1 }).toString(); // "2020-02-12T15:00:00"

// add / subtract support Duration objects or object literals
datetime.add(new Duration(1)).toString(); // "2021-01-12T15:00:00"
```

You can compare dates and check for equality:

```javascript
dt1 = PlainDateTime.from("1976-11-18");
dt2 = PlainDateTime.from("2019-10-29");

PlainDateTime.compare(dt1, dt1); // 0
PlainDateTime.compare(dt1, dt2); // -1
dt1.equals(dt1); // true
```

Currently `PlainDateTime` only supports the ISO 8601 (Gregorian) calendar.

#### `PlainDate`

A `PlainDate` object represents a calendar date that is not associated with a particular time or time zone, e.g. August 24th, 2006. For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaindate.html); this implementation follows the specification as closely as possible.

The `PlainDate` API is almost identical to `PlainDateTime`, so see above for API usage examples.

#### `PlainTime`

A `PlainTime` object represents a wall-clock time that is not associated with a particular date or time zone, e.g. 7:39 PM. For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaintime.html); this implementation follows the specification as closely as possible.

The `PlainTime` API is almost identical to `PlainDateTime`, so see above for API usage examples.

#### `PlainMonthDay`

A date without a year component. This is useful to express things like "Bastille Day is on the 14th of July". For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plainmonthday.html); this implementation follows the specification as closely as possible.

```javascript
const monthDay = PlainMonthDay.from({ month: 7, day: 14 }); // => 07-14
const date = monthDay.toPlainDate({ year: 2030 }); // => 2030-07-14
date.dayOfWeek; // => 7
```

The `PlainMonthDay` API is almost identical to `PlainDateTime`, so see above for more API usage examples.

#### `PlainYearMonth`

A date without a day component. This is useful to express things like "the October 2020 meeting". For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plainyearmonth.html); this implementation follows the specification as closely as possible.

The `PlainYearMonth` API is almost identical to `PlainDateTime`, so see above for API usage examples.

#### `now`

The `now` object has several methods which give information about the current time and date.

```javascript
dateTime = now.plainDateTimeISO();
dateTime.toString(); // 2021-04-01T12:05:47.357
```

## Contributing

This project is open source, MIT licensed and your contributions are very much welcomed. There is a [brief document that outlines implementation progress and priorities](./development.md).
[![npm version][npm-image]][npm-url] [![build status][travis-image]][travis-url] # u2f-api U2F API for browsers ## API ### Support U2F has for a long time been supported in Chrome, although not with the standard `window.u2f` methods, but through a built-in extension. Nowadays, browsers seem to use `window.u2f` to expose the functionality. Supported browsers are: * Chrome, using Chrome-specific hacks * Opera, using Chrome-specific hacks Firefox, Safari and other browsers still lack U2F support. Since 0.1.0, this library supports the standard `window.u2f` methods. The library should be complemented with server-side functionality, e.g. using the [`u2f`](https://www.npmjs.com/package/u2f) package. ### Basics `u2f-api` exports two main functions and an error "enum". The main functions are `register()` and `sign()`, although since U2F isn't widely supported, the functions `isSupported()` as well as `ensureSupport()` helps you build applications which can use U2F only when the client supports it. The `register()` and `sign()` functions return *cancellable promises*, i.e. promises you can cancel manually. This helps you to ensure your code doesn't continue in success flow and by mistake accept a registration or authentification request. The returned promise has a function `cancel()` which will immediately reject the promise. #### Check or ensure support ```ts import { isSupported } from 'u2f-api' isSupported(): Promise< Boolean > // Doesn't throw/reject ``` ```ts import { ensureSupport } from 'u2f-api' ensureSupport(): Promise< void > // Throws/rejects if not supported ``` #### Register ```ts import { register } from 'u2f-api' register( registerRequests: RegisterRequest[], signRequests: SignRequest[], // optional timeout: number // optional ): Promise< RegisterResponse > ``` The `registerRequests` can be either a RegisterRequest or an array of such. The optional `signRequests` must be, unless ignored, an array of SignRequests. The optional `timeout` is in seconds, and will default to an implementation specific value, e.g. 30. #### Sign ```ts import { sign } from 'u2f-api' sign( signRequests: SignRequest[], timeout: number // optional ): Promise< SignResponse > ``` The values and interpretation of the arguments are the same as with `register( )`. #### Errors `register()` and `sign()` can return rejected promises. The rejection error is an `Error` object with a `metaData` property containing `code` and `type`. The `code` is a numerical value describing the type of the error, and `type` is the name of the error, as defined by the `ErrorCodes` enum in the "FIDO U2F Javascript API" specification. They are: ```js OK = 0 // u2f-api will never throw errors with this code OTHER_ERROR = 1 BAD_REQUEST = 2 CONFIGURATION_UNSUPPORTED = 3 DEVICE_INELIGIBLE = 4 TIMEOUT = 5 CANCELLED = -1 // Added by this library ``` ## Usage ### Loading the library The library is promisified and will use the built-in native promises of the browser, unless another promise library is injected. The following are valid ways to load the library: ```js var u2fApi = require( 'u2f-api' ); // Will use the native Promise // ... or var u2fApi = require( 'u2f-api' )( require( 'bluebird' ) ); // Will use bluebird for promises ``` ### Registering a passkey With `registerRequestsFromServer` somehow received from the server, the client code becomes: ```js u2fApi.register( registerRequestsFromServer ) .then( sendRegisterResponseToServer ) .catch( ... 
); ``` ### Signing a passkey With `signRequestsFromServer` also received from the server somehow: ```js u2fApi.sign( signRequestsFromServer ) .then( sendSignResponseToServer ) .catch( ... ); ``` ### Example with checks for client support ```js u2fApi.isSupported( ) .then( function( supported ) { if ( supported ) { return u2fApi.sign( signRequestsFromServer ) .then( sendSignResponseToServer ); } else { ... // Other authentication method } } ) .catch( ... ); ``` ### Canceling As mentioned in the API section above, the returned promises from `register()` and `sign()` have a method `cancel()` which will cancel the request. This is nothing more than a helper function. ## Example implementation U2F is a *challenge-response protocol*. The server sends a `challenge` to the client, which responds with a `response`. This library is intended to be used in the client (the browser). There is another package intended for server-side: https://www.npmjs.com/package/u2f ## Common problems If you get `BAD_REQUEST`, the most common situations are that you either don't use `https` (which you must), or that the AppID doesn't match the server URI. In fact, the AppID must be exactly the base URI to your server (such as `https://your-server.com`), including the port if it isn't 443. For more information, please see https://developers.yubico.com/U2F/Libraries/Client_error_codes.html and https://developers.yubico.com/U2F/App_ID.html [npm-image]: https://img.shields.io/npm/v/u2f-api.svg [npm-url]: https://npmjs.org/package/u2f-api [travis-image]: https://img.shields.io/travis/grantila/u2f-api.svg [travis-url]: https://travis-ci.org/grantila/u2f-api <p align="center"> <img width="250" src="https://raw.githubusercontent.com/yargs/yargs/master/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> ![ci](https://github.com/yargs/yargs/workflows/ci/badge.svg) [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments: ``` mocha [spec..] Run tests with Mocha Commands mocha inspect [spec..] Run tests with Mocha [default] mocha init <path> create a client-side Mocha setup at <path> Rules & Behavior --allow-uncaught Allow uncaught errors to propagate [boolean] --async-only, -A Require all tests to use a callback (async) or return a Promise [boolean] ``` * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage ### Simple Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') const argv = yargs(hideBin(process.argv)).argv if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! 
``` ### Complex Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') yargs(hideBin(process.argv)) .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## Supported Platforms ### TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ### Deno As of `v16`, `yargs` supports [Deno](https://github.com/denoland/deno): ```typescript import yargs from 'https://deno.land/x/yargs/deno.ts' import { Arguments } from 'https://deno.land/x/yargs/deno-types.ts' yargs(Deno.args) .command('download <files...>', 'download a list of files', (yargs: any) => { return yargs.positional('files', { describe: 'a list of files to do something with' }) }, (argv: Arguments) => { console.info(argv) }) .strictCommands() .demandCommand(1) .argv ``` ### ESM As of `v16`,`yargs` supports ESM imports: ```js import yargs from 'yargs' import { hideBin } from 'yargs/helpers' yargs(hideBin(process.argv)) .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => { console.info(argv) }) .demandCommand(1) .argv ``` ### Usage in Browser See examples of using yargs in the browser in [docs](/docs/browser.md). ## Community Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). ## Documentation ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Bundling yargs](/docs/bundling.md) * [Contributing](/contributing.md) ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). 
[npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # `react-is` This package allows you to test arbitrary values and see if they're a particular React element type. ## Installation ```sh # Yarn yarn add react-is # NPM npm install react-is ``` ## Usage ### Determining if a Component is Valid ```js import React from "react"; import * as ReactIs from "react-is"; class ClassComponent extends React.Component { render() { return React.createElement("div"); } } const FunctionComponent = () => React.createElement("div"); const ForwardRefComponent = React.forwardRef((props, ref) => React.createElement(Component, { forwardedRef: ref, ...props }) ); const Context = React.createContext(false); ReactIs.isValidElementType("div"); // true ReactIs.isValidElementType(ClassComponent); // true ReactIs.isValidElementType(FunctionComponent); // true ReactIs.isValidElementType(ForwardRefComponent); // true ReactIs.isValidElementType(Context.Provider); // true ReactIs.isValidElementType(Context.Consumer); // true ReactIs.isValidElementType(React.createFactory("div")); // true ``` ### Determining an Element's Type #### Context ```js import React from "react"; import * as ReactIs from 'react-is'; const ThemeContext = React.createContext("blue"); ReactIs.isContextConsumer(<ThemeContext.Consumer />); // true ReactIs.isContextProvider(<ThemeContext.Provider />); // true ReactIs.typeOf(<ThemeContext.Provider />) === ReactIs.ContextProvider; // true ReactIs.typeOf(<ThemeContext.Consumer />) === ReactIs.ContextConsumer; // true ``` #### Element ```js import React from "react"; import * as ReactIs from 'react-is'; ReactIs.isElement(<div />); // true ReactIs.typeOf(<div />) === ReactIs.Element; // true ``` #### Fragment ```js import React from "react"; import * as ReactIs from 'react-is'; ReactIs.isFragment(<></>); // true ReactIs.typeOf(<></>) === ReactIs.Fragment; // true ``` #### Portal ```js import React from "react"; import ReactDOM from "react-dom"; import * as ReactIs from 'react-is'; const div = document.createElement("div"); const portal = ReactDOM.createPortal(<div />, div); ReactIs.isPortal(portal); // true ReactIs.typeOf(portal) === ReactIs.Portal; // true ``` #### StrictMode ```js import React from "react"; import * as ReactIs from 'react-is'; ReactIs.isStrictMode(<React.StrictMode />); // true ReactIs.typeOf(<React.StrictMode />) === ReactIs.StrictMode; // true ``` # which Like the unix `which` utility. Finds the first instance of a specified executable in the PATH environment variable. Does not cache the results, so `hash -r` is not needed when the PATH changes. ## USAGE ```javascript var which = require('which') // async usage which('node', function (er, resolvedPath) { // er is returned if no "node" is found on the PATH // if it is found, then the absolute path to the exec is returned }) // or promise which('node').then(resolvedPath => { ... 
}).catch(er => { ... not found ... }) // sync usage // throws if not found var resolved = which.sync('node') // if nothrow option is used, returns null if not found resolved = which.sync('node', {nothrow: true}) // Pass options to override the PATH and PATHEXT environment vars. which('node', { path: someOtherPath }, function (er, resolved) { if (er) throw er console.log('found at %j', resolved) }) ``` ## CLI USAGE Same as the BSD `which(1)` binary. ``` usage: which [-as] program ... ``` ## OPTIONS You may pass an options object as the second argument. - `path`: Use instead of the `PATH` environment variable. - `pathExt`: Use instead of the `PATHEXT` environment variable. - `all`: Return all matches, instead of just the first one. Note that this means the function returns an array of strings instead of a single string. # safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. 
```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. 
A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). 
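To recap the pooling difference described above under `Buffer.allocUnsafe(size)`, here is a small sketch (illustrative only, not part of the original docs; the uninitialized bytes differ on every run):

```js
var Buffer = require('safe-buffer').Buffer

// Buffer.alloc never uses the shared internal pool; memory is zero-filled, then filled with 'a'.
var a = Buffer.alloc(8, 'a')

// Buffer.allocUnsafe may be sliced from the pre-allocated pool (when size <= Buffer.poolSize >> 1);
// its contents are unpredictable until fill() runs.
var b = Buffer.allocUnsafe(8).fill('a')

console.log(a.equals(b)) // true: same final contents, different allocation path
```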
## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). 
The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. 
If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) # [nearley](http://nearley.js.org) ↗️ [![JS.ORG](https://img.shields.io/badge/js.org-nearley-ffb400.svg?style=flat-square)](http://js.org) [![npm version](https://badge.fury.io/js/nearley.svg)](https://badge.fury.io/js/nearley) nearley is a simple, fast and powerful parsing toolkit. It consists of: 1. [A powerful, modular DSL for describing languages](https://nearley.js.org/docs/grammar) 2. [An efficient, lightweight Earley parser](https://nearley.js.org/docs/parser) 3. 
[Loads of tools, editor plug-ins, and other goodies!](https://nearley.js.org/docs/tooling) nearley is a **streaming** parser with support for catching **errors** gracefully and providing _all_ parsings for **ambiguous** grammars. It is compatible with a variety of **lexers** (we recommend [moo](http://github.com/tjvr/moo)). It comes with tools for creating **tests**, **railroad diagrams** and **fuzzers** from your grammars, and has support for a variety of editors and platforms. It works in both node and the browser. Unlike most other parser generators, nearley can handle *any* grammar you can define in BNF (and more!). In particular, while most existing JS parsers such as PEGjs and Jison choke on certain grammars (e.g. [left recursive ones](http://en.wikipedia.org/wiki/Left_recursion)), nearley handles them easily and efficiently by using the [Earley parsing algorithm](https://en.wikipedia.org/wiki/Earley_parser). nearley is used by a wide variety of projects: - [artificial intelligence](https://github.com/ChalmersGU-AI-course/shrdlite-course-project) and - [computational linguistics](https://wiki.eecs.yorku.ca/course_archive/2014-15/W/6339/useful_handouts) classes at universities; - [file format parsers](https://github.com/raymond-h/node-dmi); - [data-driven markup languages](https://github.com/idyll-lang/idyll-compiler); - [compilers for real-world programming languages](https://github.com/sizigi/lp5562); - and nearley itself! The nearley compiler is bootstrapped. nearley is an npm [staff pick](https://www.npmjs.com/package/npm-collection-staff-picks). ## Documentation Please visit our website https://nearley.js.org to get started! You will find a tutorial, detailed reference documents, and links to several real-world examples to get inspired. ## Contributing Please read [this document](.github/CONTRIBUTING.md) *before* working on nearley. If you are interested in contributing but unsure where to start, take a look at the issues labeled "up for grabs" on the issue tracker, or message a maintainer (@kach or @tjvr on Github). nearley is MIT licensed. A big thanks to Nathan Dinsmore for teaching me how to Earley, Aria Stewart for helping structure nearley into a mature module, and Robin Windels for bootstrapping the grammar. Additionally, Jacob Edelman wrote an experimental JavaScript parser with nearley and contributed ideas for EBNF support. Joshua T. Corbin refactored the compiler to be much, much prettier. Bojidar Marinov implemented postprocessors-in-other-languages. Shachar Itzhaky fixed a subtle bug with nullables. ## Citing nearley If you are citing nearley in academic work, please use the following BibTeX entry. ```bibtex @misc{nearley, author = "Kartik Chandra and Tim Radvan", title = "{nearley}: a parsing toolkit for {JavaScript}", year = {2014}, doi = {10.5281/zenodo.3897993}, url = {https://github.com/kach/nearley} } ``` # URI.js URI.js is an [RFC 3986](http://www.ietf.org/rfc/rfc3986.txt) compliant, scheme extendable URI parsing/validating/resolving library for all JavaScript environments (browsers, Node.js, etc). It is also compliant with the IRI ([RFC 3987](http://www.ietf.org/rfc/rfc3987.txt)), IDNA ([RFC 5890](http://www.ietf.org/rfc/rfc5890.txt)), IPv6 Address ([RFC 5952](http://www.ietf.org/rfc/rfc5952.txt)), IPv6 Zone Identifier ([RFC 6874](http://www.ietf.org/rfc/rfc6874.txt)) specifications. URI.js has an extensive test suite, and works in all (Node.js, web) environments. It weighs in at 6.4kb (gzipped, 17kb deflated). 
## API ### Parsing URI.parse("uri://user:pass@example.com:123/one/two.three?q1=a1&q2=a2#body"); //returns: //{ // scheme : "uri", // userinfo : "user:pass", // host : "example.com", // port : 123, // path : "/one/two.three", // query : "q1=a1&q2=a2", // fragment : "body" //} ### Serializing URI.serialize({scheme : "http", host : "example.com", fragment : "footer"}) === "http://example.com/#footer" ### Resolving URI.resolve("uri://a/b/c/d?q", "../../g") === "uri://a/g" ### Normalizing URI.normalize("HTTP://ABC.com:80/%7Esmith/home.html") === "http://abc.com/~smith/home.html" ### Comparison URI.equal("example://a/b/c/%7Bfoo%7D", "eXAMPLE://a/./b/../b/%63/%7bfoo%7d") === true ### IP Support //IPv4 normalization URI.normalize("//192.068.001.000") === "//192.68.1.0" //IPv6 normalization URI.normalize("//[2001:0:0DB8::0:0001]") === "//[2001:0:db8::1]" //IPv6 zone identifier support URI.parse("//[2001:db8::7%25en1]"); //returns: //{ // host : "2001:db8::7%en1" //} ### IRI Support //convert IRI to URI URI.serialize(URI.parse("http://examplé.org/rosé")) === "http://xn--exampl-gva.org/ros%C3%A9" //convert URI to IRI URI.serialize(URI.parse("http://xn--exampl-gva.org/ros%C3%A9"), {iri:true}) === "http://examplé.org/rosé" ### Options All of the above functions can accept an additional options argument that is an object that can contain one or more of the following properties: * `scheme` (string) Indicates the scheme that the URI should be treated as, overriding the URI's normal scheme parsing behavior. * `reference` (string) If set to `"suffix"`, it indicates that the URI is in the suffix format, and the validator will use the option's `scheme` property to determine the URI's scheme. * `tolerant` (boolean, false) If set to `true`, the parser will relax URI resolving rules. * `absolutePath` (boolean, false) If set to `true`, the serializer will not resolve a relative `path` component. * `iri` (boolean, false) If set to `true`, the serializer will unescape non-ASCII characters as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `unicodeSupport` (boolean, false) If set to `true`, the parser will unescape non-ASCII characters in the parsed output as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `domainHost` (boolean, false) If set to `true`, the library will treat the `host` component as a domain name, and convert IDNs (International Domain Names) as per [RFC 5891](http://www.ietf.org/rfc/rfc5891.txt). ## Scheme Extendable URI.js supports inserting custom [scheme](http://en.wikipedia.org/wiki/URI_scheme) dependent processing rules. 
Currently, URI.js has built-in support for the following schemes: * http \[[RFC 2616](http://www.ietf.org/rfc/rfc2616.txt)\] * https \[[RFC 2818](http://www.ietf.org/rfc/rfc2818.txt)\] * ws \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\] * wss \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\] * mailto \[[RFC 6068](http://www.ietf.org/rfc/rfc6068.txt)\] * urn \[[RFC 2141](http://www.ietf.org/rfc/rfc2141.txt)\] * urn:uuid \[[RFC 4122](http://www.ietf.org/rfc/rfc4122.txt)\] ### HTTP/HTTPS Support URI.equal("HTTP://ABC.COM:80", "http://abc.com/") === true URI.equal("https://abc.com", "HTTPS://ABC.COM:443/") === true ### WS/WSS Support URI.parse("wss://example.com/foo?bar=baz"); //returns: //{ // scheme : "wss", // host: "example.com", // resourceName: "/foo?bar=baz", // secure: true, //} URI.equal("WS://ABC.COM:80/chat#one", "ws://abc.com/chat") === true ### Mailto Support URI.parse("mailto:alpha@example.com,bravo@example.com?subject=SUBSCRIBE&body=Sign%20me%20up!"); //returns: //{ // scheme : "mailto", // to : ["alpha@example.com", "bravo@example.com"], // subject : "SUBSCRIBE", // body : "Sign me up!" //} URI.serialize({ scheme : "mailto", to : ["alpha@example.com"], subject : "REMOVE", body : "Please remove me", headers : { cc : "charlie@example.com" } }) === "mailto:alpha@example.com?cc=charlie@example.com&subject=REMOVE&body=Please%20remove%20me" ### URN Support URI.parse("urn:example:foo"); //returns: //{ // scheme : "urn", // nid : "example", // nss : "foo", //} #### URN UUID Support URI.parse("urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6"); //returns: //{ // scheme : "urn", // nid : "uuid", // uuid : "f81d4fae-7dec-11d0-a765-00a0c91e6bf6", //} ## Usage To load in a browser, use the following tag: <script type="text/javascript" src="uri-js/dist/es5/uri.all.min.js"></script> To load in a CommonJS/Module environment, first install with npm/yarn by running on the command line: npm install uri-js # OR yarn add uri-js Then, in your code, load it using: const URI = require("uri-js"); If you are writing your code in ES6+ (ESNEXT) or TypeScript, you would load it using: import * as URI from "uri-js"; Or you can load just what you need using named exports: import { parse, serialize, resolve, resolveComponents, normalize, equal, removeDotSegments, pctEncChar, pctDecChars, escapeComponent, unescapeComponent } from "uri-js"; ## Breaking changes ### Breaking changes from 3.x URN parsing has been completely changed to better align with the specification. Scheme is now always `urn`, but has two new properties: `nid` which contains the Namespace Identifier, and `nss` which contains the Namespace Specific String. The `nss` property will be removed by higher order scheme handlers, such as the UUID URN scheme handler. The UUID of a URN can now be found in the `uuid` property. ### Breaking changes from 2.x URI validation has been removed as it was slow, exposed a vulnerability, and was generally not useful. ### Breaking changes from 1.x The `errors` array on parsed components is now an `error` string.
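To tie the pieces above together, here is a brief sketch (assuming the package is installed as `uri-js`, as shown in the Usage section) exercising the documented parse/serialize/resolve calls and the 3.x URN behavior:

```js
const URI = require("uri-js");

// Generic resolving and serializing, as documented above.
URI.resolve("uri://a/b/c/d?q", "../../g");                                // "uri://a/g"
URI.serialize({ scheme: "https", host: "example.com", path: "/docs" });  // "https://example.com/docs"

// 3.x URN handling: scheme is always "urn", with nid/nss (or uuid) properties.
const urn = URI.parse("urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6");
// urn.scheme === "urn", urn.nid === "uuid",
// urn.uuid === "f81d4fae-7dec-11d0-a765-00a0c91e6bf6"
```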
# duplexer3 [![Build Status](https://travis-ci.org/floatdrop/duplexer3.svg?branch=master)](https://travis-ci.org/floatdrop/duplexer3) [![Coverage Status](https://coveralls.io/repos/floatdrop/duplexer3/badge.svg?branch=master&service=github)](https://coveralls.io/github/floatdrop/duplexer3?branch=master) Like [duplexer2](https://github.com/deoxxa/duplexer2) but using Streams3 without readable-stream dependency ```javascript var stream = require("stream"); var duplexer3 = require("duplexer3"); var writable = new stream.Writable({objectMode: true}), readable = new stream.Readable({objectMode: true}); writable._write = function _write(input, encoding, done) { if (readable.push(input)) { return done(); } else { readable.once("drain", done); } }; readable._read = function _read(n) { // no-op }; // simulate the readable thing closing after a bit writable.once("finish", function() { setTimeout(function() { readable.push(null); }, 500); }); var duplex = duplexer3(writable, readable); duplex.on("data", function(e) { console.log("got data", JSON.stringify(e)); }); duplex.on("finish", function() { console.log("got finish event"); }); duplex.on("end", function() { console.log("got end event"); }); duplex.write("oh, hi there", function() { console.log("finished writing"); }); duplex.end(function() { console.log("finished ending"); }); ``` ``` got data "oh, hi there" finished writing got finish event finished ending got end event ``` ## Overview This is a reimplementation of [duplexer](https://www.npmjs.com/package/duplexer) using the Streams3 API which is standard in Node as of v4. Everything largely works the same. ## Installation [Available via `npm`](https://docs.npmjs.com/cli/install): ``` $ npm i duplexer3 ``` ## API ### duplexer3 Creates a new `DuplexWrapper` object, which is the actual class that implements most of the fun stuff. All that fun stuff is hidden. DON'T LOOK. ```javascript duplexer3([options], writable, readable) ``` ```javascript const duplex = duplexer3(new stream.Writable(), new stream.Readable()); ``` Arguments * __options__ - an object specifying the regular `stream.Duplex` options, as well as the properties described below. * __writable__ - a writable stream * __readable__ - a readable stream Options * __bubbleErrors__ - a boolean value that specifies whether to bubble errors from the underlying readable/writable streams. Default is `true`. ## License 3-clause BSD. [A copy](./LICENSE) is included with the source. ## Contact * GitHub ([deoxxa](http://github.com/deoxxa)) * Twitter ([@deoxxa](http://twitter.com/deoxxa)) * Email ([deoxxa@fknsrs.biz](mailto:deoxxa@fknsrs.biz)) ### esutils [![Build Status](https://secure.travis-ci.org/estools/esutils.svg)](http://travis-ci.org/estools/esutils) esutils ([esutils](http://github.com/estools/esutils)) is utility box for ECMAScript language tools. ### API ### ast #### ast.isExpression(node) Returns true if `node` is an Expression as defined in ECMA262 edition 5.1 section [11](https://es5.github.io/#x11). #### ast.isStatement(node) Returns true if `node` is a Statement as defined in ECMA262 edition 5.1 section [12](https://es5.github.io/#x12). #### ast.isIterationStatement(node) Returns true if `node` is an IterationStatement as defined in ECMA262 edition 5.1 section [12.6](https://es5.github.io/#x12.6). #### ast.isSourceElement(node) Returns true if `node` is a SourceElement as defined in ECMA262 edition 5.1 section [14](https://es5.github.io/#x14). #### ast.trailingStatement(node) Returns `Statement?` if `node` has trailing `Statement`. 
```js if (cond) consequent; ``` When taking this `IfStatement`, returns `consequent;` statement. #### ast.isProblematicIfStatement(node) Returns true if `node` is a problematic IfStatement. If `node` is a problematic `IfStatement`, `node` cannot be represented as an one on one JavaScript code. ```js { type: 'IfStatement', consequent: { type: 'WithStatement', body: { type: 'IfStatement', consequent: {type: 'EmptyStatement'} } }, alternate: {type: 'EmptyStatement'} } ``` The above node cannot be represented as a JavaScript code, since the top level `else` alternate belongs to an inner `IfStatement`. ### code #### code.isDecimalDigit(code) Return true if provided code is decimal digit. #### code.isHexDigit(code) Return true if provided code is hexadecimal digit. #### code.isOctalDigit(code) Return true if provided code is octal digit. #### code.isWhiteSpace(code) Return true if provided code is white space. White space characters are formally defined in ECMA262. #### code.isLineTerminator(code) Return true if provided code is line terminator. Line terminator characters are formally defined in ECMA262. #### code.isIdentifierStart(code) Return true if provided code can be the first character of ECMA262 Identifier. They are formally defined in ECMA262. #### code.isIdentifierPart(code) Return true if provided code can be the trailing character of ECMA262 Identifier. They are formally defined in ECMA262. ### keyword #### keyword.isKeywordES5(id, strict) Returns `true` if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 sections [7.6.1.1](http://es5.github.io/#x7.6.1.1) and [7.6.1.2](http://es5.github.io/#x7.6.1.2), respectively. If the `strict` flag is truthy, this function additionally checks whether `id` is a Keyword or Future Reserved Word under strict mode. #### keyword.isKeywordES6(id, strict) Returns `true` if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 sections [11.6.2.1](http://ecma-international.org/ecma-262/6.0/#sec-keywords) and [11.6.2.2](http://ecma-international.org/ecma-262/6.0/#sec-future-reserved-words), respectively. If the `strict` flag is truthy, this function additionally checks whether `id` is a Keyword or Future Reserved Word under strict mode. #### keyword.isReservedWordES5(id, strict) Returns `true` if provided identifier string is a Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 section [7.6.1](http://es5.github.io/#x7.6.1). If the `strict` flag is truthy, this function additionally checks whether `id` is a Reserved Word under strict mode. #### keyword.isReservedWordES6(id, strict) Returns `true` if provided identifier string is a Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 section [11.6.2](http://ecma-international.org/ecma-262/6.0/#sec-reserved-words). If the `strict` flag is truthy, this function additionally checks whether `id` is a Reserved Word under strict mode. #### keyword.isRestrictedWord(id) Returns `true` if provided identifier string is one of `eval` or `arguments`. They are restricted in strict mode code throughout ECMA262 edition 5.1 and in ECMA262 edition 6 section [12.1.1](http://ecma-international.org/ecma-262/6.0/#sec-identifiers-static-semantics-early-errors). #### keyword.isIdentifierNameES5(id) Return true if provided identifier string is an IdentifierName as specified in ECMA262 edition 5.1 section [7.6](https://es5.github.io/#x7.6). 
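For example, a minimal sketch of the keyword checks above (assuming the package is loaded as `esutils`; the results noted in the comments follow from the ES5.1 definitions cited above):

```js
var esutils = require('esutils');

esutils.keyword.isKeywordES5('if', false);      // true  - always a keyword
esutils.keyword.isKeywordES5('let', false);     // false - not reserved in non-strict ES5
esutils.keyword.isKeywordES5('let', true);      // true  - future reserved word in strict mode
esutils.keyword.isIdentifierNameES5('foo');     // true
esutils.keyword.isIdentifierNameES5('foo bar'); // false - spaces are not allowed
```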
#### keyword.isIdentifierNameES6(id) Return true if provided identifier string is an IdentifierName as specified in ECMA262 edition 6 section [11.6](http://ecma-international.org/ecma-262/6.0/#sec-names-and-keywords). #### keyword.isIdentifierES5(id, strict) Return true if provided identifier string is an Identifier as specified in ECMA262 edition 5.1 section [7.6](https://es5.github.io/#x7.6). If the `strict` flag is truthy, this function additionally checks whether `id` is an Identifier under strict mode. #### keyword.isIdentifierES6(id, strict) Return true if provided identifier string is an Identifier as specified in ECMA262 edition 6 section [12.1](http://ecma-international.org/ecma-262/6.0/#sec-identifiers). If the `strict` flag is truthy, this function additionally checks whether `id` is an Identifier under strict mode. ### License Copyright (C) 2013 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. **WARNING:** This module is **NOT RFC3548** compliant, it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. ## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base. 
Base | Alphabet ------------- | ------------- 2 | `01` 8 | `01234567` 11 | `0123456789a` 16 | `0123456789abcdef` 32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ` 32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32) 36 | `0123456789abcdefghijklmnopqrstuvwxyz` 58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz` 62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ` 64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/` 67 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~` ## How it works It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. Then for every leading zero in the input (not significant as a number) it will encode as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base. For example, a base58 alphabet packs roughly 5.858 bits per character. This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character. While unusual, this does mean that no padding is required and it works for bases like 43. ## LICENSE [MIT](LICENSE) A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets. tunnel-agent ============ HTTP proxy tunneling agent. Formerly part of mikeal/request, now a standalone module. # ncp - Asynchronous recursive file & directory copying [![Build Status](https://secure.travis-ci.org/AvianFlu/ncp.png)](http://travis-ci.org/AvianFlu/ncp) Think `cp -r`, but pure node, and asynchronous. `ncp` can be used both as a CLI tool and programmatically. ## Command Line usage Usage is simple: `ncp [source] [dest] [--limit=concurrency limit] [--filter=filter] --stopOnErr` The 'filter' is a Regular Expression - matched files will be copied. The 'concurrency limit' is an integer that represents how many pending file system requests `ncp` has at a time. 'stoponerr' is a boolean flag that will tell `ncp` to stop immediately if any errors arise, rather than attempting to continue while logging errors. The default behavior is to complete as many copies as possible, logging errors along the way. If there are no errors, `ncp` will output `done.` when complete. If there are errors, the error messages will be logged to `stdout` and to `./ncp-debug.log`, and the copy operation will attempt to continue. ## Programmatic usage Programmatic usage of `ncp` is just as simple. The only argument to the completion callback is a possible error. ```javascript var ncp = require('ncp').ncp; ncp.limit = 16; ncp(source, destination, function (err) { if (err) { return console.error(err); } console.log('done!'); }); ``` You can also call ncp like `ncp(source, destination, options, callback)`. `options` should be a dictionary. Currently, such options are available: * `options.filter` - a `RegExp` instance, against which each file name is tested to determine whether to copy it or not, or a function taking single parameter: copied file name, returning `true` or `false`, determining whether to copy file or not. * `options.transform` - a function: `function (read, write) { read.pipe(write) }` used to apply streaming transforms while copying. * `options.clobber` - boolean=true. 
if set to false, `ncp` will not overwrite destination files that already exist. * `options.dereference` - boolean=false. If set to true, `ncp` will follow symbolic links. For example, a symlink in the source tree pointing to a regular file will become a regular file in the destination tree. Broken symlinks will result in errors. * `options.stopOnErr` - boolean=false. If set to true, `ncp` will behave like `cp -r`, and stop on the first error it encounters. By default, `ncp` continues copying, logging all errors and returning an array. * `options.errs` - stream. If `options.stopOnErr` is `false`, a stream can be provided, and errors will be written to this stream. Please open an issue if any bugs arise. As always, I accept (working) pull requests, and refunds are available at `/dev/null`. <p align="center"> <a href="https://assemblyscript.org" target="_blank" rel="noopener"><img width="100" src="https://avatars1.githubusercontent.com/u/28916798?s=200&v=4" alt="AssemblyScript logo"></a> </p> <p align="center"> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3ATest"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Test/master?label=test&logo=github" alt="Test status" /></a> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3APublish"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Publish/master?label=publish&logo=github" alt="Publish status" /></a> <a href="https://www.npmjs.com/package/assemblyscript"><img src="https://img.shields.io/npm/v/assemblyscript.svg?label=compiler&color=007acc&logo=npm" alt="npm compiler version" /></a> <a href="https://www.npmjs.com/package/@assemblyscript/loader"><img src="https://img.shields.io/npm/v/@assemblyscript/loader.svg?label=loader&color=007acc&logo=npm" alt="npm loader version" /></a> <a href="https://discord.gg/assemblyscript"><img src="https://img.shields.io/discord/721472913886281818.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2" alt="Discord online" /></a> </p> <p align="justify"><strong>AssemblyScript</strong> compiles a strict variant of <a href="http://www.typescriptlang.org">TypeScript</a> (basically JavaScript with types) to <a href="http://webassembly.org">WebAssembly</a> using <a href="https://github.com/WebAssembly/binaryen">Binaryen</a>. It generates lean and mean WebAssembly modules while being just an <code>npm install</code> away.</p> <h3 align="center"> <a href="https://assemblyscript.org">About</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/introduction.html">Introduction</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/quick-start.html">Quick&nbsp;start</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/examples.html">Examples</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/development.html">Development&nbsp;instructions</a> </h3> <br> <h2 align="center">Contributors</h2> <p align="center"> <a href="https://assemblyscript.org/#contributors"><img src="https://assemblyscript.org/contributors.svg" alt="Contributor logos" width="720" /></a> </p> <h2 align="center">Thanks to our sponsors!</h2> <p align="justify">Most of the core team members and most contributors do this open source work in their free time. 
If you use AssemblyScript for a serious task or plan to do so, and you'd like us to invest more time on it, <a href="https://opencollective.com/assemblyscript/donate" target="_blank" rel="noopener">please donate</a> to our <a href="https://opencollective.com/assemblyscript" target="_blank" rel="noopener">OpenCollective</a>. By sponsoring this project, your logo will show up below. Thank you so much for your support!</p> <p align="center"> <a href="https://assemblyscript.org/#sponsors"><img src="https://assemblyscript.org/sponsors.svg" alt="Sponsor logos" width="720" /></a> </p> The AssemblyScript Runtime ========================== The runtime provides the functionality necessary to dynamically allocate and deallocate memory of objects, arrays and buffers, as well as collect garbage that is no longer used. The current implementation is either a Two-Color Mark & Sweep (TCMS) garbage collector that must be called manually when the execution stack is unwound or an Incremental Tri-Color Mark & Sweep (ITCMS) garbage collector that is fully automated with a shadow stack, implemented on top of a Two-Level Segregate Fit (TLSF) memory manager. It's not designed to be the fastest of its kind, but intentionally focuses on simplicity and ease of integration until we can replace it with the real deal, i.e. Wasm GC. Interface --------- ### Garbage collector / `--exportRuntime` * **__new**(size: `usize`, id: `u32` = 0): `usize`<br /> Dynamically allocates a GC object of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. GC-allocated objects cannot be used with `__realloc` and `__free`. * **__pin**(ptr: `usize`): `usize`<br /> Pins the object pointed to by `ptr` externally so it and its directly reachable members and indirectly reachable objects do not become garbage collected. * **__unpin**(ptr: `usize`): `void`<br /> Unpins the object pointed to by `ptr` externally so it can become garbage collected. * **__collect**(): `void`<br /> Performs a full garbage collection. ### Internals * **__alloc**(size: `usize`): `usize`<br /> Dynamically allocates a chunk of memory of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. * **__realloc**(ptr: `usize`, size: `usize`): `usize`<br /> Dynamically changes the size of a chunk of memory, possibly moving it to a new address. * **__free**(ptr: `usize`): `void`<br /> Frees a dynamically allocated chunk of memory by its address. * **__renew**(ptr: `usize`, size: `usize`): `usize`<br /> Like `__realloc`, but for `__new`ed GC objects. * **__link**(parentPtr: `usize`, childPtr: `usize`, expectMultiple: `bool`): `void`<br /> Introduces a link from a parent object to a child object, i.e. upon `parent.field = child`. * **__visit**(ptr: `usize`, cookie: `u32`): `void`<br /> Concrete visitor implementation called during traversal. Cookie can be used to indicate one of multiple operations. * **__visit_globals**(cookie: `u32`): `void`<br /> Calls `__visit` on each global that is of a managed type. * **__visit_members**(ptr: `usize`, cookie: `u32`): `void`<br /> Calls `__visit` on each member of the object pointed to by `ptr`. * **__typeinfo**(id: `u32`): `RTTIFlags`<br /> Obtains the runtime type information for objects with the specified runtime id. Runtime type information is a set of flags indicating whether a type is managed, an array or similar, and what the relevant alignments when creating an instance externally are etc. 
* **__instanceof**(ptr: `usize`, classId: `u32`): `bool`<br /> Tests if the object pointed to by `ptr` is an instance of the specified class id. ITCMS / `--runtime incremental` ----- The Incremental Tri-Color Mark & Sweep garbage collector maintains a separate shadow stack of managed values in the background to achieve full automation. Maintaining another stack introduces some overhead compared to the simpler Two-Color Mark & Sweep garbage collector, but makes it independent of whether the execution stack is unwound or not when it is invoked, so the garbage collector can run interleaved with the program. There are several constants one can experiment with to tweak ITCMS's automation: * `--use ASC_GC_GRANULARITY=1024`<br /> How often to interrupt. The default of 1024 means "interrupt each 1024 bytes allocated". * `--use ASC_GC_STEPFACTOR=200`<br /> How long to interrupt. The default of 200% means "run at double the speed of allocations". * `--use ASC_GC_IDLEFACTOR=200`<br /> How long to idle. The default of 200% means "wait for memory to double before kicking in again". * `--use ASC_GC_MARKCOST=1`<br /> How costly it is to mark one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. * `--use ASC_GC_SWEEPCOST=10`<br /> How costly it is to sweep one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. TCMS / `--runtime minimal` ---- If automation and low pause times aren't strictly necessary, using the Two-Color Mark & Sweep garbage collector instead by invoking collection manually at appropriate times when the execution stack is unwound may be more performant, as it is simpler and has less overhead. The execution stack is typically unwound when invoking the collector externally, at a place that is not indirectly called from Wasm. STUB / `--runtime stub` ---- The stub is a maximally minimal runtime substitute, consisting of a simple and fast bump allocator with no means of freeing up memory again, except when freeing the respective most recently allocated object on top of the bump. Useful where memory is not a concern, and/or where it is sufficient to destroy the whole module including any potential garbage after execution. See also: [Garbage collection](https://www.assemblyscript.org/garbage-collection.html) # Regular Expression Tokenizer Tokenizes strings that represent regular expressions. [![Build Status](https://secure.travis-ci.org/fent/ret.js.svg)](http://travis-ci.org/fent/ret.js) [![Dependency Status](https://david-dm.org/fent/ret.js.svg)](https://david-dm.org/fent/ret.js) [![codecov](https://codecov.io/gh/fent/ret.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/ret.js) # Usage ```js var ret = require('ret'); var tokens = ret(/foo|bar/.source); ``` `tokens` will contain the following object: ```js { "type": ret.types.ROOT, "options": [ [ { "type": ret.types.CHAR, "value": 102 }, { "type": ret.types.CHAR, "value": 111 }, { "type": ret.types.CHAR, "value": 111 } ], [ { "type": ret.types.CHAR, "value": 98 }, { "type": ret.types.CHAR, "value": 97 }, { "type": ret.types.CHAR, "value": 114 } ] ] } ``` # Token Types `ret.types` is a collection of the various token types exported by ret. ### ROOT Only used in the root of the regexp. This is needed due to the possibility of the root containing a pipe `|` character. In that case, the token will have an `options` key that will be an array of arrays of tokens. If not, it will contain a `stack` key that is an array of tokens.
```js { "type": ret.types.ROOT, "stack": [token1, token2...], } ``` ```js { "type": ret.types.ROOT, "options": [ [token1, token2...], [othertoken1, othertoken2...] ... ], } ``` ### GROUP Groups contain tokens that are inside of parentheses. If the group begins with `?` followed by another character, it's a special type of group. A ':' tells the group not to be remembered when `exec` is used. '=' means the previous token matches only if followed by this group, and '!' means the previous token matches only if NOT followed. Like root, it can contain an `options` key instead of `stack` if there is a pipe. ```js { "type": ret.types.GROUP, "remember": true, "followedBy": false, "notFollowedBy": false, "stack": [token1, token2...], } ``` ```js { "type": ret.types.GROUP, "remember": true, "followedBy": false, "notFollowedBy": false, "options": [ [token1, token2...], [othertoken1, othertoken2...] ... ], } ``` ### POSITION `\b`, `\B`, `^`, and `$` specify positions in the regexp. ```js { "type": ret.types.POSITION, "value": "^", } ``` ### SET Contains a key `set` specifying what tokens are allowed and a key `not` specifying if the set should be negated. A set can contain other sets, ranges, and characters. ```js { "type": ret.types.SET, "set": [token1, token2...], "not": false, } ``` ### RANGE Used in set tokens to specify a character range. `from` and `to` are character codes. ```js { "type": ret.types.RANGE, "from": 97, "to": 122, } ``` ### REPETITION ```js { "type": ret.types.REPETITION, "min": 0, "max": Infinity, "value": token, } ``` ### REFERENCE References a group token. `value` is 1-9. ```js { "type": ret.types.REFERENCE, "value": 1, } ``` ### CHAR Represents a single character token. `value` is the character code. This might seem a bit cluttered compared to concatenating characters together. But since repetition tokens only repeat the last token and not the last clause like the pipe, it's simpler to do it this way. ```js { "type": ret.types.CHAR, "value": 123, } ``` ## Errors ret.js will throw errors if given a string with an invalid regular expression. All possible errors are: * Invalid group. When a group with an immediate `?` character is followed by an invalid character. It can only be followed by `!`, `=`, or `:`. Example: `/(?_abc)/` * Nothing to repeat. Thrown when a repetition token is used as the first token in the current clause, i.e. right at the beginning of the regexp or group, or right after a pipe. Example: `/foo|?bar/`, `/{1,3}foo|bar/`, `/foo(+bar)/` * Unmatched ). A group was not opened, but was closed. Example: `/hello)2u/` * Unterminated group. A group was not closed. Example: `/(1(23)4/` * Unterminated character class. A custom character set was not closed. Example: `/[abc/` # Install npm install ret # Tests Tests are written with [vows](http://vowsjs.org/) ```bash npm test ``` # License MIT # Acorn A tiny, fast JavaScript parser written in JavaScript. ## Community Acorn is open source software released under an [MIT license](https://github.com/acornjs/acorn/blob/master/acorn/LICENSE). You are welcome to [report bugs](https://github.com/acornjs/acorn/issues) or create pull requests on [github](https://github.com/acornjs/acorn). For questions and discussion, please use the [Tern discussion forum](https://discuss.ternjs.net).
## Installation The easiest way to install acorn is from [`npm`](https://www.npmjs.com/): ```sh npm install acorn ``` Alternately, you can download the source and build acorn yourself: ```sh git clone https://github.com/acornjs/acorn.git cd acorn npm install ``` ## Interface **parse**`(input, options)` is the main interface to the library. The `input` parameter is a string, `options` can be undefined or an object setting some of the options listed below. The return value will be an abstract syntax tree object as specified by the [ESTree spec](https://github.com/estree/estree). ```javascript let acorn = require("acorn"); console.log(acorn.parse("1 + 1")); ``` When encountering a syntax error, the parser will raise a `SyntaxError` object with a meaningful message. The error object will have a `pos` property that indicates the string offset at which the error occurred, and a `loc` object that contains a `{line, column}` object referring to that same position. Options can be provided by passing a second argument, which should be an object containing any of these fields: - **ecmaVersion**: Indicates the ECMAScript version to parse. Must be either 3, 5, 6 (2015), 7 (2016), 8 (2017), 9 (2018), 10 (2019) or 11 (2020, partial support). This influences support for strict mode, the set of reserved words, and support for new syntax features. Default is 10. **NOTE**: Only 'stage 4' (finalized) ECMAScript features are being implemented by Acorn. Other proposed new features can be implemented through plugins. - **sourceType**: Indicate the mode the code should be parsed in. Can be either `"script"` or `"module"`. This influences global strict mode and parsing of `import` and `export` declarations. **NOTE**: If set to `"module"`, then static `import` / `export` syntax will be valid, even if `ecmaVersion` is less than 6. - **onInsertedSemicolon**: If given a callback, that callback will be called whenever a missing semicolon is inserted by the parser. The callback will be given the character offset of the point where the semicolon is inserted as argument, and if `locations` is on, also a `{line, column}` object representing this position. - **onTrailingComma**: Like `onInsertedSemicolon`, but for trailing commas. - **allowReserved**: If `false`, using a reserved word will generate an error. Defaults to `true` for `ecmaVersion` 3, `false` for higher versions. When given the value `"never"`, reserved words and keywords can also not be used as property names (as in Internet Explorer's old parser). - **allowReturnOutsideFunction**: By default, a return statement at the top level raises an error. Set this to `true` to accept such code. - **allowImportExportEverywhere**: By default, `import` and `export` declarations can only appear at a program's top level. Setting this option to `true` allows them anywhere where a statement is allowed. - **allowAwaitOutsideFunction**: By default, `await` expressions can only appear inside `async` functions. Setting this option to `true` allows to have top-level `await` expressions. They are still not allowed in non-`async` functions, though. - **allowHashBang**: When this is enabled (off by default), if the code starts with the characters `#!` (as in a shellscript), the first line will be treated as a comment. - **locations**: When `true`, each node has a `loc` object attached with `start` and `end` subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form. Default is `false`. 
- **onToken**: If a function is passed for this option, each found token will be passed in same format as tokens returned from `tokenizer().getToken()`. If array is passed, each found token is pushed to it. Note that you are not allowed to call the parser from the callback—that will corrupt its internal state. - **onComment**: If a function is passed for this option, whenever a comment is encountered the function will be called with the following parameters: - `block`: `true` if the comment is a block comment, false if it is a line comment. - `text`: The content of the comment. - `start`: Character offset of the start of the comment. - `end`: Character offset of the end of the comment. When the `locations` options is on, the `{line, column}` locations of the comment’s start and end are passed as two additional parameters. If array is passed for this option, each found comment is pushed to it as object in Esprima format: ```javascript { "type": "Line" | "Block", "value": "comment text", "start": Number, "end": Number, // If `locations` option is on: "loc": { "start": {line: Number, column: Number} "end": {line: Number, column: Number} }, // If `ranges` option is on: "range": [Number, Number] } ``` Note that you are not allowed to call the parser from the callback—that will corrupt its internal state. - **ranges**: Nodes have their start and end characters offsets recorded in `start` and `end` properties (directly on the node, rather than the `loc` object, which holds line/column data. To also add a [semi-standardized](https://bugzilla.mozilla.org/show_bug.cgi?id=745678) `range` property holding a `[start, end]` array with the same numbers, set the `ranges` option to `true`. - **program**: It is possible to parse multiple files into a single AST by passing the tree produced by parsing the first file as the `program` option in subsequent parses. This will add the toplevel forms of the parsed file to the "Program" (top) node of an existing parse tree. - **sourceFile**: When the `locations` option is `true`, you can pass this option to add a `source` attribute in every node’s `loc` object. Note that the contents of this option are not examined or processed in any way; you are free to use whatever format you choose. - **directSourceFile**: Like `sourceFile`, but a `sourceFile` property will be added (regardless of the `location` option) directly to the nodes, rather than the `loc` object. - **preserveParens**: If this option is `true`, parenthesized expressions are represented by (non-standard) `ParenthesizedExpression` nodes that have a single `expression` property containing the expression inside parentheses. **parseExpressionAt**`(input, offset, options)` will parse a single expression in a string, and return its AST. It will not complain if there is more of the string left after the expression. **tokenizer**`(input, options)` returns an object with a `getToken` method that can be called repeatedly to get the next token, a `{start, end, type, value}` object (with added `loc` property when the `locations` option is enabled and `range` property when the `ranges` option is enabled). When the token's type is `tokTypes.eof`, you should stop calling the method, since it will keep returning that same token forever. 
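For instance, a minimal sketch of that manual loop (the source text and the `ecmaVersion` value are just placeholders):

```javascript
let acorn = require("acorn");

let t = acorn.tokenizer("let x = 1 + 2;", { ecmaVersion: 2020 });
let token = t.getToken();
while (token.type !== acorn.tokTypes.eof) {
  // Each token is a {start, end, type, value} object, as described above.
  console.log(token.type.label, token.start, token.end);
  token = t.getToken();
}
```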
In ES6 environment, returned result can be used as any other protocol-compliant iterable: ```javascript for (let token of acorn.tokenizer(str)) { // iterate over the tokens } // transform code to array of tokens: var tokens = [...acorn.tokenizer(str)]; ``` **tokTypes** holds an object mapping names to the token type objects that end up in the `type` properties of tokens. **getLineInfo**`(input, offset)` can be used to get a `{line, column}` object for a given program string and offset. ### The `Parser` class Instances of the **`Parser`** class contain all the state and logic that drives a parse. It has static methods `parse`, `parseExpressionAt`, and `tokenizer` that match the top-level functions by the same name. When extending the parser with plugins, you need to call these methods on the extended version of the class. To extend a parser with plugins, you can use its static `extend` method. ```javascript var acorn = require("acorn"); var jsx = require("acorn-jsx"); var JSXParser = acorn.Parser.extend(jsx()); JSXParser.parse("foo(<bar/>)"); ``` The `extend` method takes any number of plugin values, and returns a new `Parser` class that includes the extra parser logic provided by the plugins. ## Command line interface The `bin/acorn` utility can be used to parse a file from the command line. It accepts as arguments its input file and the following options: - `--ecma3|--ecma5|--ecma6|--ecma7|--ecma8|--ecma9|--ecma10`: Sets the ECMAScript version to parse. Default is version 9. - `--module`: Sets the parsing mode to `"module"`. Is set to `"script"` otherwise. - `--locations`: Attaches a "loc" object to each node with "start" and "end" subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form. - `--allow-hash-bang`: If the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment. - `--compact`: No whitespace is used in the AST output. - `--silent`: Do not output the AST, just return the exit status. - `--help`: Print the usage information and quit. The utility spits out the syntax tree as JSON data. ## Existing plugins - [`acorn-jsx`](https://github.com/RReverser/acorn-jsx): Parse [Facebook JSX syntax extensions](https://github.com/facebook/jsx) Plugins for ECMAScript proposals: - [`acorn-stage3`](https://github.com/acornjs/acorn-stage3): Parse most stage 3 proposals, bundling: - [`acorn-class-fields`](https://github.com/acornjs/acorn-class-fields): Parse [class fields proposal](https://github.com/tc39/proposal-class-fields) - [`acorn-import-meta`](https://github.com/acornjs/acorn-import-meta): Parse [import.meta proposal](https://github.com/tc39/proposal-import-meta) - [`acorn-private-methods`](https://github.com/acornjs/acorn-private-methods): parse [private methods, getters and setters proposal](https://github.com/tc39/proposal-private-methods)n Shims used when bundling asc for browser usage. ESQuery is a library for querying the AST output by Esprima for patterns of syntax using a CSS style selector system. 
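A small usage sketch (assuming the `esquery` npm package and an ESTree-compatible AST, here produced with Esprima's `parseScript`; the selector is explained in the list that follows):

```js
const esquery = require('esquery');
const esprima = require('esprima');

const ast = esprima.parseScript('for (var i = 0; i < 3; i++) { foo(i); }');

// Find Identifier children of call expressions nested inside a for statement.
const matches = esquery(ast, 'ForStatement CallExpression > Identifier');
// matches is an array of AST nodes: the callee `foo` and the argument `i`.
```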
Check out the demo: [demo](https://estools.github.io/esquery/) The following selectors are supported: * AST node type: `ForStatement` * [wildcard](http://dev.w3.org/csswg/selectors4/#universal-selector): `*` * [attribute existence](http://dev.w3.org/csswg/selectors4/#attribute-selectors): `[attr]` * [attribute value](http://dev.w3.org/csswg/selectors4/#attribute-selectors): `[attr="foo"]` or `[attr=123]` * attribute regex: `[attr=/foo.*/]` or (with flags) `[attr=/foo.*/is]` * attribute conditions: `[attr!="foo"]`, `[attr>2]`, `[attr<3]`, `[attr>=2]`, or `[attr<=3]` * nested attribute: `[attr.level2="foo"]` * field: `FunctionDeclaration > Identifier.id` * [First](http://dev.w3.org/csswg/selectors4/#the-first-child-pseudo) or [last](http://dev.w3.org/csswg/selectors4/#the-last-child-pseudo) child: `:first-child` or `:last-child` * [nth-child](http://dev.w3.org/csswg/selectors4/#the-nth-child-pseudo) (no ax+b support): `:nth-child(2)` * [nth-last-child](http://dev.w3.org/csswg/selectors4/#the-nth-last-child-pseudo) (no ax+b support): `:nth-last-child(1)` * [descendant](http://dev.w3.org/csswg/selectors4/#descendant-combinators): `ancestor descendant` * [child](http://dev.w3.org/csswg/selectors4/#child-combinators): `parent > child` * [following sibling](http://dev.w3.org/csswg/selectors4/#general-sibling-combinators): `node ~ sibling` * [adjacent sibling](http://dev.w3.org/csswg/selectors4/#adjacent-sibling-combinators): `node + adjacent` * [negation](http://dev.w3.org/csswg/selectors4/#negation-pseudo): `:not(ForStatement)` * [has](https://drafts.csswg.org/selectors-4/#has-pseudo): `:has(ForStatement)` * [matches-any](http://dev.w3.org/csswg/selectors4/#matches): `:matches([attr] > :first-child, :last-child)` * [subject indicator](http://dev.w3.org/csswg/selectors4/#subject): `!IfStatement > [name="foo"]` * class of AST node: `:statement`, `:expression`, `:declaration`, `:function`, or `:pattern` [![Build Status](https://travis-ci.org/estools/esquery.png?branch=master)](https://travis-ci.org/estools/esquery) # wrappy Callback wrapping utility ## USAGE ```javascript var wrappy = require("wrappy") // var wrapper = wrappy(wrapperFunction) // make sure a cb is called only once // See also: http://npm.im/once for this specific use case var once = wrappy(function (cb) { var called = false return function () { if (called) return called = true return cb.apply(this, arguments) } }) function printBoo () { console.log('boo') } // has some rando property printBoo.iAmBooPrinter = true var onlyPrintOnce = once(printBoo) onlyPrintOnce() // prints 'boo' onlyPrintOnce() // does nothing // random property is retained! assert.equal(onlyPrintOnce.iAmBooPrinter, true) ``` # fast-deep-equal The fastest deep equal with ES6 Map, Set and Typed arrays support. [![Build Status](https://travis-ci.org/epoberezkin/fast-deep-equal.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-deep-equal) [![npm](https://img.shields.io/npm/v/fast-deep-equal.svg)](https://www.npmjs.com/package/fast-deep-equal) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-deep-equal/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-deep-equal?branch=master) ## Install ```bash npm install fast-deep-equal ``` ## Features - ES5 compatible - works in node.js (8+) and browsers (IE9+) - checks equality of Date and RegExp objects by value. 
ES6 equal (`require('fast-deep-equal/es6')`) also supports: - Maps - Sets - Typed arrays ## Usage ```javascript var equal = require('fast-deep-equal'); console.log(equal({foo: 'bar'}, {foo: 'bar'})); // true ``` To support ES6 Maps, Sets and Typed arrays equality use: ```javascript var equal = require('fast-deep-equal/es6'); console.log(equal(Int16Array([1, 2]), Int16Array([1, 2]))); // true ``` To use with React (avoiding the traversal of React elements' _owner property that contains circular references and is not needed when comparing the elements - borrowed from [react-fast-compare](https://github.com/FormidableLabs/react-fast-compare)): ```javascript var equal = require('fast-deep-equal/react'); var equal = require('fast-deep-equal/es6/react'); ``` ## Performance benchmark Node.js v12.6.0: ``` fast-deep-equal x 261,950 ops/sec ±0.52% (89 runs sampled) fast-deep-equal/es6 x 212,991 ops/sec ±0.34% (92 runs sampled) fast-equals x 230,957 ops/sec ±0.83% (85 runs sampled) nano-equal x 187,995 ops/sec ±0.53% (88 runs sampled) shallow-equal-fuzzy x 138,302 ops/sec ±0.49% (90 runs sampled) underscore.isEqual x 74,423 ops/sec ±0.38% (89 runs sampled) lodash.isEqual x 36,637 ops/sec ±0.72% (90 runs sampled) deep-equal x 2,310 ops/sec ±0.37% (90 runs sampled) deep-eql x 35,312 ops/sec ±0.67% (91 runs sampled) ramda.equals x 12,054 ops/sec ±0.40% (91 runs sampled) util.isDeepStrictEqual x 46,440 ops/sec ±0.43% (90 runs sampled) assert.deepStrictEqual x 456 ops/sec ±0.71% (88 runs sampled) The fastest is fast-deep-equal ``` To run benchmark (requires node.js 6+): ```bash npm run benchmark ``` __Please note__: this benchmark runs against the available test cases. To choose the most performant library for your application, it is recommended to benchmark against your data and to NOT expect this benchmark to reflect the performance difference in your application. ## Enterprise support fast-deep-equal package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-deep-equal?utm_source=npm-fast-deep-equal&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. ## License [MIT](https://github.com/epoberezkin/fast-deep-equal/blob/master/LICENSE) ### Esrecurse [![Build Status](https://travis-ci.org/estools/esrecurse.svg?branch=master)](https://travis-ci.org/estools/esrecurse) Esrecurse ([esrecurse](https://github.com/estools/esrecurse)) is [ECMAScript](https://www.ecma-international.org/publications/standards/Ecma-262.htm) recursive traversing functionality. ### Example Usage The following code will output all variables declared at the root of a file. ```javascript esrecurse.visit(ast, { XXXStatement: function (node) { this.visit(node.left); // do something... this.visit(node.right); } }); ``` We can use `Visitor` instance. ```javascript var visitor = new esrecurse.Visitor({ XXXStatement: function (node) { this.visit(node.left); // do something... this.visit(node.right); } }); visitor.visit(ast); ``` We can inherit `Visitor` instance easily. 
```javascript class Derived extends esrecurse.Visitor { constructor() { super(null); } XXXStatement(node) { } } ``` ```javascript function DerivedVisitor() { esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */); } util.inherits(DerivedVisitor, esrecurse.Visitor); DerivedVisitor.prototype.XXXStatement = function (node) { this.visit(node.left); // do something... this.visit(node.right); }; ``` And you can invoke default visiting operation inside custom visit operation. ```javascript function DerivedVisitor() { esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */); } util.inherits(DerivedVisitor, esrecurse.Visitor); DerivedVisitor.prototype.XXXStatement = function (node) { // do something... this.visitChildren(node); }; ``` The `childVisitorKeys` option does customize the behaviour of `this.visitChildren(node)`. We can use user-defined node types. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { // Extending the existing traversing rules. childVisitorKeys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } } ); ``` We can use the `fallback` option as well. If the `fallback` option is `"iteration"`, `esrecurse` would visit all enumerable properties of unknown nodes. Please note circular references cause the stack overflow. AST might have circular references in additional properties for some purpose (e.g. `node.parent`). ```javascript esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { fallback: 'iteration' } ); ``` If the `fallback` option is a function, `esrecurse` calls this function to determine the enumerable properties of unknown nodes. Please note circular references cause the stack overflow. AST might have circular references in additional properties for some purpose (e.g. `node.parent`). ```javascript esrecurse.visit( ast, { Literal: function (node) { // do something... } }, { fallback: function (node) { return Object.keys(node).filter(function(key) { return key !== 'argument' }); } } ); ``` ### License Copyright (C) 2014 [Yusuke Suzuki](https://github.com/Constellation) (twitter: [@Constellation](https://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. semver(1) -- The semantic versioner for npm =========================================== ## Install ```bash npm install semver ```` ## Usage As a node module: ```js const semver = require('semver') semver.valid('1.2.3') // '1.2.3' semver.valid('a.b.c') // null semver.clean(' =v1.2.3 ') // '1.2.3' semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true semver.gt('1.2.3', '9.8.7') // false semver.lt('1.2.3', '9.8.7') // true semver.minVersion('>=1.0.0') // '1.0.0' semver.valid(semver.coerce('v2')) // '2.0.0' semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7' ``` You can also just load the module for the function that you care about, if you'd like to minimize your footprint. ```js // load the whole API at once in a single object const semver = require('semver') // or just load the bits you need // all of them listed here, just pick and choose what you want // classes const SemVer = require('semver/classes/semver') const Comparator = require('semver/classes/comparator') const Range = require('semver/classes/range') // functions for working with versions const semverParse = require('semver/functions/parse') const semverValid = require('semver/functions/valid') const semverClean = require('semver/functions/clean') const semverInc = require('semver/functions/inc') const semverDiff = require('semver/functions/diff') const semverMajor = require('semver/functions/major') const semverMinor = require('semver/functions/minor') const semverPatch = require('semver/functions/patch') const semverPrerelease = require('semver/functions/prerelease') const semverCompare = require('semver/functions/compare') const semverRcompare = require('semver/functions/rcompare') const semverCompareLoose = require('semver/functions/compare-loose') const semverCompareBuild = require('semver/functions/compare-build') const semverSort = require('semver/functions/sort') const semverRsort = require('semver/functions/rsort') // low-level comparators between versions const semverGt = require('semver/functions/gt') const semverLt = require('semver/functions/lt') const semverEq = require('semver/functions/eq') const semverNeq = require('semver/functions/neq') const semverGte = require('semver/functions/gte') const semverLte = require('semver/functions/lte') const semverCmp = require('semver/functions/cmp') const semverCoerce = require('semver/functions/coerce') // working with ranges const semverSatisfies = require('semver/functions/satisfies') const semverMaxSatisfying = require('semver/ranges/max-satisfying') const semverMinSatisfying = require('semver/ranges/min-satisfying') const semverToComparators = require('semver/ranges/to-comparators') const semverMinVersion = require('semver/ranges/min-version') const semverValidRange = require('semver/ranges/valid') const semverOutside = require('semver/ranges/outside') const semverGtr = require('semver/ranges/gtr') const semverLtr = require('semver/ranges/ltr') const semverIntersects = require('semver/ranges/intersects') const simplifyRange = require('semver/ranges/simplify') 
const rangeSubset = require('semver/ranges/subset') ``` As a command-line utility: ``` $ semver -h A JavaScript implementation of the https://semver.org/ specification Copyright Isaac Z. Schlueter Usage: semver [options] <version> [<version> [...]] Prints valid versions sorted by SemVer precedence Options: -r --range <range> Print versions that match the specified range. -i --increment [<level>] Increment a version by the specified level. Level can be one of: major, minor, patch, premajor, preminor, prepatch, or prerelease. Default level is 'patch'. Only one version may be specified. --preid <identifier> Identifier to be used to prefix premajor, preminor, prepatch or prerelease version increments. -l --loose Interpret versions and ranges loosely -p --include-prerelease Always include prerelease versions in range matching -c --coerce Coerce a string into SemVer if possible (does not imply --loose) --rtl Coerce version strings right to left --ltr Coerce version strings left to right (default) Program exits successfully if any valid version satisfies all supplied ranges, and prints all satisfying versions. If no satisfying versions are found, then exits failure. Versions are printed in ascending order, so supplying multiple versions to the utility will just sort them. ``` ## Versions A "version" is described by the `v2.0.0` specification found at <https://semver.org/>. A leading `"="` or `"v"` character is stripped off and ignored. ## Ranges A `version range` is a set of `comparators` which specify versions that satisfy the range. A `comparator` is composed of an `operator` and a `version`. The set of primitive `operators` is: * `<` Less than * `<=` Less than or equal to * `>` Greater than * `>=` Greater than or equal to * `=` Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included. For example, the comparator `>=1.2.7` would match the versions `1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` or `1.1.0`. Comparators can be joined by whitespace to form a `comparator set`, which is satisfied by the **intersection** of all of the comparators it includes. A range is composed of one or more comparator sets, joined by `||`. A version matches a range if and only if every comparator in at least one of the `||`-separated comparator sets is satisfied by the version. For example, the range `>=1.2.7 <1.3.0` would match the versions `1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, or `1.1.0`. The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, `1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. ### Prerelease Tags If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then it will only be allowed to satisfy comparator sets if at least one comparator with the same `[major, minor, patch]` tuple also has a prerelease tag. For example, the range `>1.2.3-alpha.3` would be allowed to match the version `1.2.3-alpha.7`, but it would *not* be satisfied by `3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater than" `1.2.3-alpha.3` according to the SemVer sort rules. The version range only accepts prerelease tags on the `1.2.3` version. The version `3.4.5` *would* satisfy the range, because it does not have a prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. The purpose for this behavior is twofold. 
First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author's design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics. Second, a user who has opted into using a prerelease version has clearly indicated the intent to use *that specific* set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the *next* set of prerelease versions. Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the `includePrerelease` flag on the options object to any [functions](https://github.com/npm/node-semver#functions) that do range matching. #### Prerelease Identifiers The method `.inc` takes an additional `identifier` string argument that will append the value of the string as a prerelease identifier: ```javascript semver.inc('1.2.3', 'prerelease', 'beta') // '1.2.4-beta.0' ``` command-line example: ```bash $ semver 1.2.3 -i prerelease --preid beta 1.2.4-beta.0 ``` Which then can be used to increment further: ```bash $ semver 1.2.4-beta.0 -i prerelease 1.2.4-beta.1 ``` ### Advanced Range Syntax Advanced range syntax desugars to primitive comparators in deterministic ways. Advanced ranges may be combined in the same way as primitive comparators using white space or `||`. #### Hyphen Ranges `X.Y.Z - A.B.C` Specifies an inclusive set. * `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes. * `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts. * `1.2.3 - 2.3` := `>=1.2.3 <2.4.0-0` * `1.2.3 - 2` := `>=1.2.3 <3.0.0-0` #### X-Ranges `1.2.x` `1.X` `1.2.*` `*` Any of `X`, `x`, or `*` may be used to "stand in" for one of the numeric values in the `[major, minor, patch]` tuple. * `*` := `>=0.0.0` (Any non-prerelease version satisfies, unless `includePrerelease` is specified, in which case any version at all satisfies) * `1.x` := `>=1.0.0 <2.0.0-0` (Matching major version) * `1.2.x` := `>=1.2.0 <1.3.0-0` (Matching major and minor versions) A partial version range is treated as an X-Range, so the special character is in fact optional. * `""` (empty string) := `*` := `>=0.0.0` * `1` := `1.x.x` := `>=1.0.0 <2.0.0-0` * `1.2` := `1.2.x` := `>=1.2.0 <1.3.0-0` #### Tilde Ranges `~1.2.3` `~1.2` `~1` Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not. * `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0-0` * `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0-0` (Same as `1.2.x`) * `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0-0` (Same as `1.x`) * `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0-0` * `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0-0` (Same as `0.2.x`) * `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0-0` (Same as `0.x`) * `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0-0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. 
So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. #### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` Allows changes that do not modify the left-most non-zero element in the `[major, minor, patch]` tuple. In other words, this allows patch and minor updates for versions `1.0.0` and above, patch updates for versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. Many authors treat a `0.x` version as if the `x` were the major "breaking-change" indicator. Caret ranges are ideal when an author may make breaking changes between `0.2.4` and `0.3.0` releases, which is a common practice. However, it presumes that there will *not* be breaking changes between `0.2.4` and `0.2.5`. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices. * `^1.2.3` := `>=1.2.3 <2.0.0-0` * `^0.2.3` := `>=0.2.3 <0.3.0-0` * `^0.0.3` := `>=0.0.3 <0.0.4-0` * `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0-0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. * `^0.0.3-beta` := `>=0.0.3-beta <0.0.4-0` Note that prereleases in the `0.0.3` version *only* will be allowed, if they are greater than or equal to `beta`. So, `0.0.3-pr.2` would be allowed. When parsing caret ranges, a missing `patch` value desugars to the number `0`, but will allow flexibility within that value, even if the major and minor versions are both `0`. * `^1.2.x` := `>=1.2.0 <2.0.0-0` * `^0.0.x` := `>=0.0.0 <0.1.0-0` * `^0.0` := `>=0.0.0 <0.1.0-0` A missing `minor` and `patch` values will desugar to zero, but also allow flexibility within those values, even if the major version is zero. * `^1.x` := `>=1.0.0 <2.0.0-0` * `^0.x` := `>=0.0.0 <1.0.0-0` ### Range Grammar Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors: ```bnf range-set ::= range ( logical-or range ) * logical-or ::= ( ' ' ) * '||' ( ' ' ) * range ::= hyphen | simple ( ' ' simple ) * | '' hyphen ::= partial ' - ' partial simple ::= primitive | partial | tilde | caret primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )? xr ::= 'x' | 'X' | '*' | nr nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) * tilde ::= '~' partial caret ::= '^' partial qualifier ::= ( '-' pre )? ( '+' build )? pre ::= parts build ::= parts parts ::= part ( '.' part ) * part ::= nr | [-0-9A-Za-z]+ ``` ## Functions All methods and classes take a final `options` object argument. All options in this object are `false` by default. The options supported are: - `loose` Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the `options` argument is a boolean value instead of an object, it is interpreted to be the `loose` param. - `includePrerelease` Set to suppress the [default behavior](https://github.com/npm/node-semver#prerelease-tags) of excluding prerelease tagged versions from ranges unless they are explicitly opted into. Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse. * `valid(v)`: Return the parsed version, or null if it's not valid. 
* `inc(v, release)`: Return the version incremented by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if it's not valid
  * `premajor` in one call will bump the version up to the next major version and down to a prerelease of that major version. `preminor` and `prepatch` work the same way.
  * If called from a non-prerelease version, `prerelease` will work the same as `prepatch`. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it.
* `prerelease(v)`: Returns an array of prerelease components, or null if none exist. Example: `prerelease('1.2.3-alpha.1') -> ['alpha', 1]`
* `major(v)`: Return the major version number.
* `minor(v)`: Return the minor version number.
* `patch(v)`: Return the patch version number.
* `intersects(r1, r2, loose)`: Return true if the two supplied ranges or comparators intersect.
* `parse(v)`: Attempt to parse a string as a semantic version, returning either a `SemVer` object or `null`.

### Comparison

* `gt(v1, v2)`: `v1 > v2`
* `gte(v1, v2)`: `v1 >= v2`
* `lt(v1, v2)`: `v1 < v2`
* `lte(v1, v2)`: `v1 <= v2`
* `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent, even if they're not the exact same string. You already know how to compare strings.
* `neq(v1, v2)`: `v1 != v2` The opposite of `eq`.
* `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call the corresponding function above. `"==="` and `"!=="` do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.
* `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if `v2` is greater. Sorts in ascending order if passed to `Array.sort()`.
* `rcompare(v1, v2)`: The reverse of `compare`. Sorts an array of versions in descending order when passed to `Array.sort()`.
* `compareBuild(v1, v2)`: The same as `compare` but considers `build` when two versions are equal. Sorts in ascending order if passed to `Array.sort()`.
* `diff(v1, v2)`: Returns the difference between two versions by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if the versions are the same.

### Comparators

* `intersects(comparator)`: Return true if the comparators intersect

### Ranges

* `validRange(range)`: Return the valid range or null if it's not valid
* `satisfies(version, range)`: Return true if the version satisfies the range.
* `maxSatisfying(versions, range)`: Return the highest version in the list that satisfies the range, or `null` if none of them do.
* `minSatisfying(versions, range)`: Return the lowest version in the list that satisfies the range, or `null` if none of them do.
* `minVersion(range)`: Return the lowest version that can possibly match the given range.
* `gtr(version, range)`: Return `true` if version is greater than all the versions possible in the range.
* `ltr(version, range)`: Return `true` if version is less than all the versions possible in the range.
* `outside(version, range, hilo)`: Return true if the version is outside the bounds of the range in either the high or low direction. The `hilo` argument must be either the string `'>'` or `'<'`. (This is the function called by `gtr` and `ltr`.)
* `intersects(range)`: Return true if any of the ranges comparators intersect * `simplifyRange(versions, range)`: Return a "simplified" range that matches the same items in `versions` list as the range specified. Note that it does *not* guarantee that it would match the same versions in all cases, only for the set of versions provided. This is useful when generating ranges by joining together multiple versions with `||` programmatically, to provide the user with something a bit more ergonomic. If the provided range is shorter in string-length than the generated range, then that is returned. * `subset(subRange, superRange)`: Return `true` if the `subRange` range is entirely contained by the `superRange` range. Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, *or* satisfy a range! For example, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9` until `2.0.0`, so the version `1.2.10` would not be greater than the range (because `2.0.1` satisfies, which is higher), nor less than the range (since `1.2.8` satisfies, which is lower), and it also does not satisfy the range. If you want to know if a version satisfies or does not satisfy a range, use the `satisfies(version, range)` function. ### Coercion * `coerce(version, options)`: Coerces a string to semver if possible This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., `1`, `1.2`, `1.2.3`) up to the max permitted length (256 characters). Longer versions are simply truncated (`4.6.3.9.2-alpha2` becomes `4.6.3`). All surrounding text is simply ignored (`v3.4 replaces v3.3.1` becomes `3.4.0`). Only text which lacks digits will fail coercion (`version one` is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (`10000000000000000.4.7.4` becomes `4.7.4`). The maximum value for any semver component is `Number.MAX_SAFE_INTEGER || (2**53 - 1)`; higher value components are invalid (`9999999999999999.4.7.4` is likely invalid). If the `options.rtl` flag is set, then `coerce` will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, `1.2.3.4` will return `2.3.4` in rtl mode, not `4.0.0`. `1.2.3/4` will return `4.0.0`, because the `4` is not a part of any other overlapping SemVer tuple. ### Clean * `clean(version)`: Clean a string to be a valid semver if possible This will return a cleaned and trimmed semver version. If the provided version is not valid a null will be returned. This does not work for ranges. ex. * `s.clean(' = v 2.1.5foo')`: `null` * `s.clean(' = v 2.1.5foo', { loose: true })`: `'2.1.5-foo'` * `s.clean(' = v 2.1.5-foo')`: `null` * `s.clean(' = v 2.1.5-foo', { loose: true })`: `'2.1.5-foo'` * `s.clean('=v2.1.5')`: `'2.1.5'` * `s.clean(' =v2.1.5')`: `2.1.5` * `s.clean(' 2.1.5 ')`: `'2.1.5'` * `s.clean('~1.0.0')`: `null` ## Exported Modules <!-- TODO: Make sure that all of these items are documented (classes aren't, eg), and then pull the module name into the documentation for that specific thing. --> You may pull in just the part of this semver utility that you need, if you are sensitive to packing and tree-shaking concerns. The main `require('semver')` export uses getter functions to lazily load the parts of the API that are used. 
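For example, here is a quick sketch (using paths from the list that follows) of loading only a few functions instead of the whole API; the sample values mirror the usage examples above:

```js
// pull in individual functions rather than the lazy-loading top-level export
const satisfies = require('semver/functions/satisfies')
const clean = require('semver/functions/clean')
const coerce = require('semver/functions/coerce')
const valid = require('semver/functions/valid')

satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true
clean('  =v1.2.3   ') // '1.2.3'
valid(coerce('42.6.7.9.3-alpha')) // '42.6.7'
```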
The following modules are available: * `require('semver')` * `require('semver/classes')` * `require('semver/classes/comparator')` * `require('semver/classes/range')` * `require('semver/classes/semver')` * `require('semver/functions/clean')` * `require('semver/functions/cmp')` * `require('semver/functions/coerce')` * `require('semver/functions/compare')` * `require('semver/functions/compare-build')` * `require('semver/functions/compare-loose')` * `require('semver/functions/diff')` * `require('semver/functions/eq')` * `require('semver/functions/gt')` * `require('semver/functions/gte')` * `require('semver/functions/inc')` * `require('semver/functions/lt')` * `require('semver/functions/lte')` * `require('semver/functions/major')` * `require('semver/functions/minor')` * `require('semver/functions/neq')` * `require('semver/functions/parse')` * `require('semver/functions/patch')` * `require('semver/functions/prerelease')` * `require('semver/functions/rcompare')` * `require('semver/functions/rsort')` * `require('semver/functions/satisfies')` * `require('semver/functions/sort')` * `require('semver/functions/valid')` * `require('semver/ranges/gtr')` * `require('semver/ranges/intersects')` * `require('semver/ranges/ltr')` * `require('semver/ranges/max-satisfying')` * `require('semver/ranges/min-satisfying')` * `require('semver/ranges/min-version')` * `require('semver/ranges/outside')` * `require('semver/ranges/to-comparators')` * `require('semver/ranges/valid')` # tar-stream tar-stream is a streaming tar parser and generator and nothing else. It is streams2 and operates purely using streams which means you can easily extract/parse tarballs without ever hitting the file system. Note that you still need to gunzip your data if you have a `.tar.gz`. We recommend using [gunzip-maybe](https://github.com/mafintosh/gunzip-maybe) in conjunction with this. ``` npm install tar-stream ``` [![build status](https://secure.travis-ci.org/mafintosh/tar-stream.png)](http://travis-ci.org/mafintosh/tar-stream) [![License](https://img.shields.io/badge/license-MIT-blue.svg)](http://opensource.org/licenses/MIT) ## Usage tar-stream exposes two streams, [pack](https://github.com/mafintosh/tar-stream#packing) which creates tarballs and [extract](https://github.com/mafintosh/tar-stream#extracting) which extracts tarballs. To [modify an existing tarball](https://github.com/mafintosh/tar-stream#modifying-existing-tarballs) use both. It implementes USTAR with additional support for pax extended headers. It should be compatible with all popular tar distributions out there (gnutar, bsdtar etc) ## Related If you want to pack/unpack directories on the file system check out [tar-fs](https://github.com/mafintosh/tar-fs) which provides file system bindings to this module. ## Packing To create a pack stream use `tar.pack()` and call `pack.entry(header, [callback])` to add tar entries. ``` js var tar = require('tar-stream') var pack = tar.pack() // pack is a streams2 stream // add a file called my-test.txt with the content "Hello World!" 
pack.entry({ name: 'my-test.txt' }, 'Hello World!')

// add a file called my-stream-test.txt from a stream
var entry = pack.entry({ name: 'my-stream-test.txt', size: 11 }, function(err) {
  // the stream was added
  // no more entries
  pack.finalize()
})
entry.write('hello')
entry.write(' ')
entry.write('world')
entry.end()

// pipe the pack stream somewhere
pack.pipe(process.stdout)
```

## Extracting

To extract a stream use `tar.extract()` and listen for `extract.on('entry', (header, stream, next) )`

``` js
var extract = tar.extract()

extract.on('entry', function(header, stream, next) {
  // header is the tar header
  // stream is the content body (might be an empty stream)
  // call next when you are done with this entry

  stream.on('end', function() {
    next() // ready for next entry
  })

  stream.resume() // just auto drain the stream
})

extract.on('finish', function() {
  // all entries read
})

pack.pipe(extract)
```

The tar archive is streamed sequentially, meaning you **must** drain each entry's stream as you get them or else the main extract stream will receive backpressure and stop reading.

## Headers

The header object used in `entry` should contain the following properties. Most of these values can be found by stat'ing a file.

``` js
{
  name: 'path/to/this/entry.txt',
  size: 1314,        // entry size. defaults to 0
  mode: 0o644,       // entry mode. defaults to 0o755 for dirs and 0o644 otherwise
  mtime: new Date(), // last modified date for entry. defaults to now.
  type: 'file',      // type of entry. defaults to file. can be:
                     // file | link | symlink | directory | block-device
                     // character-device | fifo | contiguous-file
  linkname: 'path',  // linked file name
  uid: 0,            // uid of entry owner. defaults to 0
  gid: 0,            // gid of entry owner. defaults to 0
  uname: 'maf',      // uname of entry owner. defaults to null
  gname: 'staff',    // gname of entry owner. defaults to null
  devmajor: 0,       // device major version. defaults to 0
  devminor: 0        // device minor version. defaults to 0
}
```

## Modifying existing tarballs

Using tar-stream it is easy to rewrite paths / change modes etc in an existing tarball.

``` js
var extract = tar.extract()
var pack = tar.pack()
var path = require('path')

extract.on('entry', function(header, stream, callback) {
  // let's prefix all names with 'tmp'
  header.name = path.join('tmp', header.name)
  // write the new entry to the pack stream
  stream.pipe(pack.entry(header, callback))
})

extract.on('finish', function() {
  // all entries done - let's finalize it
  pack.finalize()
})

// pipe the old tarball to the extractor
oldTarballStream.pipe(extract)

// pipe the new tarball to another stream
pack.pipe(newTarballStream)
```

## Saving tarball to fs

``` js
var fs = require('fs')
var tar = require('tar-stream')

var pack = tar.pack() // pack is a streams2 stream
var path = 'YourTarBall.tar'
var yourTarball = fs.createWriteStream(path)

// add a file called YourFile.txt with the content "Hello World!"
pack.entry({name: 'YourFile.txt'}, 'Hello World!', function (err) { if (err) throw err pack.finalize() }) // pipe the pack stream to your file pack.pipe(yourTarball) yourTarball.on('close', function () { console.log(path + ' has been written') fs.stat(path, function(err, stats) { if (err) throw err console.log(stats) console.log('Got file info successfully!') }) }) ``` ## Performance [See tar-fs for a performance comparison with node-tar](https://github.com/mafintosh/tar-fs/blob/master/README.md#performance) # License MIT # WebIDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [WebIDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). The goal is that you should be able to write code like ```js const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a WebIDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different WebIDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the WebIDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the WebIDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). ## Status All of the numeric types are implemented (float being implemented as double) and some others are as well - check the source for all of them. This list will grow over time in service of the [HTML as Custom Elements](https://github.com/dglazkov/html-as-custom-elements) project, but in the meantime, pull requests welcome! I'm not sure yet what the strategy will be for modifiers, e.g. [`[Clamp]`](http://heycam.github.io/webidl/#Clamp). Maybe something like `conversions["unsigned long"](x, { clamp: true })`? We'll see. We might also want to extend the API to give better error messages, e.g. "Argument 1 of HTMLMediaElement.fastSeek is not a finite floating-point value" instead of "Argument is not a finite floating-point value." This would require passing in more information to the conversion functions than we currently do. ## Background What's actually going on here, conceptually, is pretty weird. Let's try to explain. WebIDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on WebIDL values, i.e. instances of WebIDL types. For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a WebIDL value of [WebIDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules. Separately from its type system, WebIDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. 
This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given WebIDL operation, how does that get converted into a WebIDL value? For example, a JavaScript `true` passed in the position of a WebIDL `boolean` argument becomes a WebIDL `true`. But, a JavaScript `true` passed in the position of a [WebIDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a WebIDL `1`. And so on. Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the WebIDL algorithms, they don't actually use WebIDL values, since those aren't "real" outside of specs. Instead, implementations apply the WebIDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting WebIDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is payoff of WebIDL, in a nutshell. And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given WebIDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ WebIDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ WebIDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a WebIDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't Use This Seriously, why would you ever use this? You really shouldn't. WebIDL is … not great, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from WebIDL. In general, your JavaScript should not be trying to become more like WebIDL; if anything, we should fix WebIDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in WebIDL. Standard library ================ Standard library components for use with `tsc` (portable) and `asc` (assembly). Base configurations (.json) and definition files (.d.ts) are relevant to `tsc` only and not used by `asc`. randombytes === [![Version](http://img.shields.io/npm/v/randombytes.svg)](https://www.npmjs.org/package/randombytes) [![Build Status](https://travis-ci.org/crypto-browserify/randombytes.svg?branch=master)](https://travis-ci.org/crypto-browserify/randombytes) randombytes from node that works in the browser. In node you just get crypto.randomBytes, but in the browser it uses .crypto/msCrypto.getRandomValues ```js var randomBytes = require('randombytes'); randomBytes(16);//get 16 random bytes randomBytes(16, function (err, resp) { // resp is 16 random bytes }); ``` # lodash.clonedeep v4.5.0 The [lodash](https://lodash.com/) method `_.cloneDeep` exported as a [Node.js](https://nodejs.org/) module. 
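As a quick, minimal sketch (not taken from the lodash docs): a deep clone copies nested objects and arrays, so mutating the copy does not affect the original:

```js
var cloneDeep = require('lodash.clonedeep');

var original = { user: { name: 'Ada' }, tags: ['x', 'y'] };
var copy = cloneDeep(original);

copy.user.name = 'Grace';
copy.tags.push('z');

console.log(original.user.name); // 'Ada'  (nested object was copied, not shared)
console.log(original.tags);      // ['x', 'y']
```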
## Installation

Using npm:

```bash
$ {sudo -H} npm i -g npm
$ npm i --save lodash.clonedeep
```

In Node.js:

```js
var cloneDeep = require('lodash.clonedeep');
```

See the [documentation](https://lodash.com/docs#cloneDeep) or [package source](https://github.com/lodash/lodash/blob/4.5.0-npm-packages/lodash.clonedeep) for more details.

# isobject

[![NPM version](https://img.shields.io/npm/v/isobject.svg?style=flat)](https://www.npmjs.com/package/isobject) [![NPM downloads](https://img.shields.io/npm/dm/isobject.svg?style=flat)](https://npmjs.org/package/isobject) [![Build Status](https://img.shields.io/travis/jonschlinkert/isobject.svg?style=flat)](https://travis-ci.org/jonschlinkert/isobject)

Returns true if the value is an object and not an array or null.

## Install

Install with [npm](https://www.npmjs.com/):

```sh
$ npm install isobject --save
```

Install with [bower](http://bower.io/):

```sh
$ bower install isobject
```

Use [is-plain-object](https://github.com/jonschlinkert/is-plain-object) if you want only objects that are created by the `Object` constructor.

## Usage

```js
var isObject = require('isobject');
```

**True**

All of the following return `true`:

```js
isObject({});
isObject(Object.create({}));
isObject(Object.create(Object.prototype));
isObject(Object.create(null));
isObject(new Foo);
isObject(/foo/);
```

**False**

All of the following return `false`:

```js
isObject();
isObject(function () {});
isObject(1);
isObject([]);
isObject(undefined);
isObject(null);
```

## Related projects

You might also be interested in these projects:

* [merge-deep](https://www.npmjs.com/package/merge-deep): Recursively merge values in a javascript object. | [homepage](https://github.com/jonschlinkert/merge-deep)
* [extend-shallow](https://www.npmjs.com/package/extend-shallow): Extend an object with the properties of additional objects. node.js/javascript util. | [homepage](https://github.com/jonschlinkert/extend-shallow)
* [is-plain-object](https://www.npmjs.com/package/is-plain-object): Returns true if an object was created by the `Object` constructor. | [homepage](https://github.com/jonschlinkert/is-plain-object)
* [kind-of](https://www.npmjs.com/package/kind-of): Get the native type of a value. | [homepage](https://github.com/jonschlinkert/kind-of)

## Contributing

Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](https://github.com/jonschlinkert/isobject/issues/new).

## Building docs

Generate readme and API documentation with [verb](https://github.com/verbose/verb):

```sh
$ npm install verb && npm run docs
```

Or, if [verb](https://github.com/verbose/verb) is installed globally:

```sh
$ verb
```

## Running tests

Install dev dependencies:

```sh
$ npm install -d && npm test
```

## Author

**Jon Schlinkert**

* [github/jonschlinkert](https://github.com/jonschlinkert)
* [twitter/jonschlinkert](http://twitter.com/jonschlinkert)

## License

Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT license](https://github.com/jonschlinkert/isobject/blob/master/LICENSE).

***

_This file was generated by [verb](https://github.com/verbose/verb), v0.9.0, on April 25, 2016._

# color-convert

[![Build Status](https://travis-ci.org/Qix-/color-convert.svg?branch=master)](https://travis-ci.org/Qix-/color-convert)

Color-convert is a color conversion library for JavaScript and node.
It converts all ways between `rgb`, `hsl`, `hsv`, `hwb`, `cmyk`, `ansi`, `ansi16`, `hex` strings, and CSS `keyword`s (will round to closest): ```js var convert = require('color-convert'); convert.rgb.hsl(140, 200, 100); // [96, 48, 59] convert.keyword.rgb('blue'); // [0, 0, 255] var rgbChannels = convert.rgb.channels; // 3 var cmykChannels = convert.cmyk.channels; // 4 var ansiChannels = convert.ansi16.channels; // 1 ``` # Install ```console $ npm install color-convert ``` # API Simply get the property of the _from_ and _to_ conversion that you're looking for. All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on `.raw` to the function. All 'from' functions have a hidden property called `.channels` that indicates the number of channels the function expects (not including alpha). ```js var convert = require('color-convert'); // Hex to LAB convert.hex.lab('DEADBF'); // [ 76, 21, -2 ] convert.hex.lab.raw('DEADBF'); // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ] // RGB to CMYK convert.rgb.cmyk(167, 255, 4); // [ 35, 0, 98, 0 ] convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ] ``` ### Arrays All functions that accept multiple arguments also support passing an array. Note that this does **not** apply to functions that convert from a color that only requires one value (e.g. `keyword`, `ansi256`, `hex`, etc.) ```js var convert = require('color-convert'); convert.rgb.hex(123, 45, 67); // '7B2D43' convert.rgb.hex([123, 45, 67]); // '7B2D43' ``` ## Routing Conversions that don't have an _explicitly_ defined conversion (in [conversions.js](conversions.js)), but can be converted by means of sub-conversions (e.g. XYZ -> **RGB** -> CMYK), are automatically routed together. This allows just about any color model supported by `color-convert` to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> **LAB** -> **XYZ** -> **RGB** -> Hex). Keep in mind that extensive conversions _may_ result in a loss of precision, and exist only to be complete. For a list of "direct" (single-step) conversions, see [conversions.js](conversions.js). # Contribute If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request. # License Copyright &copy; 2011-2016, Heather Arthur and Josh Junon. Licensed under the [MIT License](LICENSE). ### Estraverse [![Build Status](https://secure.travis-ci.org/estools/estraverse.svg)](http://travis-ci.org/estools/estraverse) Estraverse ([estraverse](http://github.com/estools/estraverse)) is [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) traversal functions from [esmangle project](http://github.com/estools/esmangle). ### Documentation You can find usage docs at [wiki page](https://github.com/estools/estraverse/wiki/Usage). ### Example Usage The following code will output all variables declared at the root of a file. ```javascript estraverse.traverse(ast, { enter: function (node, parent) { if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration') return estraverse.VisitorOption.Skip; }, leave: function (node, parent) { if (node.type == 'VariableDeclarator') console.log(node.id.name); } }); ``` We can use `this.skip`, `this.remove` and `this.break` functions instead of using Skip, Remove and Break. 
```javascript estraverse.traverse(ast, { enter: function (node) { this.break(); } }); ``` And estraverse provides `estraverse.replace` function. When returning node from `enter`/`leave`, current node is replaced with it. ```javascript result = estraverse.replace(tree, { enter: function (node) { // Replace it with replaced. if (node.type === 'Literal') return replaced; } }); ``` By passing `visitor.keys` mapping, we can extend estraverse traversing functionality. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Extending the existing traversing rules. keys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } }); ``` By passing `visitor.fallback` option, we can control the behavior when encountering unknown nodes. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Iterating the child **nodes** of unknown nodes. fallback: 'iteration' }); ``` When `visitor.fallback` is a function, we can determine which keys to visit on each node. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Skip the `argument` property of each node fallback: function(node) { return Object.keys(node).filter(function(key) { return key !== 'argument'; }); } }); ``` ### License Copyright (C) 2012-2016 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. 
### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. 
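To make the pooling threshold concrete, here is a small sketch (assuming the default `Buffer.poolSize` described above):

```js
var Buffer = require('safe-buffer').Buffer

console.log(Buffer.poolSize)            // 8192 by default
var pooled = Buffer.allocUnsafe(64)     // <= Buffer.poolSize >> 1, may be sliced from the pool
var unpooled = Buffer.allocUnsafe(8192) // larger than half the pool, gets its own allocation
```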
Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. 
This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. 
These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. 
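As an illustration (a sketch, not code from this package), the vulnerable `toHex` example from earlier can be rewritten on top of these explicit APIs so that a `Number` payload results in an error instead of leaked memory:

```js
var Buffer = require('safe-buffer').Buffer

// Convert UTF-8 strings to hex
function toHex (str) {
  // Buffer.from throws a TypeError for numbers instead of allocating
  // uninitialized memory; the explicit check gives a clearer error message
  if (typeof str !== 'string') throw new TypeError('str must be a string')
  return Buffer.from(str, 'utf8').toString('hex')
}
```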
Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) NOTE: The default branch has been renamed! master is now named main If you have a local clone, you can update it by running: ```shell git branch -m master main git fetch origin git branch -u origin/main main ``` # **node-addon-api module** This module contains **header-only C++ wrapper classes** which simplify the use of the C based [Node-API](https://nodejs.org/dist/latest/docs/api/n-api.html) provided by Node.js when using C++. It provides a C++ object model and exception handling semantics with low overhead. There are three options for implementing addons: Node-API, nan, or direct use of internal V8, libuv and Node.js libraries. Unless there is a need for direct access to functionality which is not exposed by Node-API as outlined in [C/C++ addons](https://nodejs.org/dist/latest/docs/api/addons.html) in Node.js core, use Node-API. Refer to [C/C++ addons with Node-API](https://nodejs.org/dist/latest/docs/api/n-api.html) for more information on Node-API. Node-API is an ABI stable C interface provided by Node.js for building native addons. It is independent from the underlying JavaScript runtime (e.g. V8 or ChakraCore) and is maintained as part of Node.js itself. It is intended to insulate native addons from changes in the underlying JavaScript engine and allow modules compiled for one version to run on later versions of Node.js without recompilation. The `node-addon-api` module, which is not part of Node.js, preserves the benefits of the Node-API as it consists only of inline code that depends only on the stable API provided by Node-API. 
As such, modules built against one version of Node.js using node-addon-api should run without having to be rebuilt with newer versions of Node.js.

It is important to remember that *other* Node.js interfaces such as `libuv` (included in a project via `#include <uv.h>`) are not ABI-stable across Node.js major versions. Thus, an addon must use Node-API and/or `node-addon-api` exclusively and build against a version of Node.js that includes an implementation of Node-API (meaning an active LTS version of Node.js) in order to benefit from ABI stability across Node.js major versions. Node.js provides an [ABI stability guide][] containing a detailed explanation of ABI stability in general, and the Node-API ABI stability guarantee in particular.

As new APIs are added to Node-API, node-addon-api must be updated to provide wrappers for those new APIs. For this reason, node-addon-api provides methods that allow callers to obtain the underlying Node-API handles, so that direct calls to Node-API can be combined with the objects and methods provided by node-addon-api, for example in order to use an API for which node-addon-api does not yet provide a wrapper.

APIs exposed by node-addon-api are generally used to create and manipulate JavaScript values. Concepts and operations generally map to ideas specified in the **ECMA262 Language Specification**.

The [Node-API Resource](https://nodejs.github.io/node-addon-examples/) offers an excellent orientation and tips for developers just getting started with Node-API and node-addon-api.

- **[Setup](#setup)**
- **[API Documentation](#api)**
- **[Examples](#examples)**
- **[Tests](#tests)**
- **[More resources and info about native Addons](#resources)**
- **[Badges](#badges)**
- **[Code of Conduct](CODE_OF_CONDUCT.md)**
- **[Contributors](#contributors)**
- **[License](#license)**

## **Current version: 3.2.1**

(See [CHANGELOG.md](CHANGELOG.md) for complete Changelog)

[![NPM](https://nodei.co/npm/node-addon-api.png?downloads=true&downloadRank=true)](https://nodei.co/npm/node-addon-api/) [![NPM](https://nodei.co/npm-dl/node-addon-api.png?months=6&height=1)](https://nodei.co/npm/node-addon-api/)

<a name="setup"></a>

node-addon-api is based on [Node-API](https://nodejs.org/api/n-api.html) and supports using different Node-API versions. This allows addons built with it to run with Node.js versions which support the targeted Node-API version. **However**, the node-addon-api support model is to support only the active LTS Node.js versions. This means that every year there will be a new major version which drops support for the Node.js LTS version that has gone out of service.

The oldest Node.js version supported by the current version of node-addon-api is Node.js 10.x.

## Setup

- [Installation and usage](doc/setup.md)
- [node-gyp](doc/node-gyp.md)
- [cmake-js](doc/cmake-js.md)
- [Conversion tool](doc/conversion-tool.md)
- [Checker tool](doc/checker-tool.md)
- [Generator](doc/generator.md)
- [Prebuild tools](doc/prebuild_tools.md)

<a name="api"></a>

### **API Documentation**

The following is the documentation for node-addon-api.
- [Full Class Hierarchy](doc/hierarchy.md) - [Addon Structure](doc/addon.md) - Data Types: - [Env](doc/env.md) - [CallbackInfo](doc/callbackinfo.md) - [Reference](doc/reference.md) - [Value](doc/value.md) - [Name](doc/name.md) - [Symbol](doc/symbol.md) - [String](doc/string.md) - [Number](doc/number.md) - [Date](doc/date.md) - [BigInt](doc/bigint.md) - [Boolean](doc/boolean.md) - [External](doc/external.md) - [Object](doc/object.md) - [Array](doc/array.md) - [ObjectReference](doc/object_reference.md) - [PropertyDescriptor](doc/property_descriptor.md) - [Function](doc/function.md) - [FunctionReference](doc/function_reference.md) - [ObjectWrap](doc/object_wrap.md) - [ClassPropertyDescriptor](doc/class_property_descriptor.md) - [Buffer](doc/buffer.md) - [ArrayBuffer](doc/array_buffer.md) - [TypedArray](doc/typed_array.md) - [TypedArrayOf](doc/typed_array_of.md) - [DataView](doc/dataview.md) - [Error Handling](doc/error_handling.md) - [Error](doc/error.md) - [TypeError](doc/type_error.md) - [RangeError](doc/range_error.md) - [Object Lifetime Management](doc/object_lifetime_management.md) - [HandleScope](doc/handle_scope.md) - [EscapableHandleScope](doc/escapable_handle_scope.md) - [Memory Management](doc/memory_management.md) - [Async Operations](doc/async_operations.md) - [AsyncWorker](doc/async_worker.md) - [AsyncContext](doc/async_context.md) - [AsyncWorker Variants](doc/async_worker_variants.md) - [Thread-safe Functions](doc/threadsafe.md) - [ThreadSafeFunction](doc/threadsafe_function.md) - [TypedThreadSafeFunction](doc/typed_threadsafe_function.md) - [Promises](doc/promises.md) - [Version management](doc/version_management.md) <a name="examples"></a> ### **Examples** Are you new to **node-addon-api**? Take a look at our **[examples](https://github.com/nodejs/node-addon-examples)** - **[Hello World](https://github.com/nodejs/node-addon-examples/tree/HEAD/1_hello_world/node-addon-api)** - **[Pass arguments to a function](https://github.com/nodejs/node-addon-examples/tree/HEAD/2_function_arguments/node-addon-api)** - **[Callbacks](https://github.com/nodejs/node-addon-examples/tree/HEAD/3_callbacks/node-addon-api)** - **[Object factory](https://github.com/nodejs/node-addon-examples/tree/HEAD/4_object_factory/node-addon-api)** - **[Function factory](https://github.com/nodejs/node-addon-examples/tree/HEAD/5_function_factory/node-addon-api)** - **[Wrapping C++ Object](https://github.com/nodejs/node-addon-examples/tree/HEAD/6_object_wrap/node-addon-api)** - **[Factory of wrapped object](https://github.com/nodejs/node-addon-examples/tree/HEAD/7_factory_wrap/node-addon-api)** - **[Passing wrapped object around](https://github.com/nodejs/node-addon-examples/tree/HEAD/8_passing_wrapped/node-addon-api)** <a name="tests"></a> ### **Tests** To run the **node-addon-api** tests do: ``` npm install npm test ``` To avoid testing the deprecated portions of the API run ``` npm install npm test --disable-deprecated ``` To run the tests targeting a specific version of Node-API run ``` npm install export NAPI_VERSION=X npm test --NAPI_VERSION=X ``` where X is the version of Node-API you want to target. 
### **Debug**

To run the **node-addon-api** tests with the `--debug` option:

```
npm run-script dev
```

If you want a faster build, you can use the following option:

```
npm run-script dev:incremental
```

Take a look and get inspired by our **[test suite](https://github.com/nodejs/node-addon-api/tree/HEAD/test)**

### **Benchmarks**

You can run the available benchmarks using the following command:

```
npm run-script benchmark
```

See [benchmark/README.md](benchmark/README.md) for more details about running and adding benchmarks.

<a name="resources"></a>

### **More resources and info about native Addons**

- **[C++ Addons](https://nodejs.org/dist/latest/docs/api/addons.html)**
- **[Node-API](https://nodejs.org/dist/latest/docs/api/n-api.html)**
- **[Node-API - Next Generation Node API for Native Modules](https://youtu.be/-Oniup60Afs)**
- **[How We Migrated Realm JavaScript From NAN to Node-API](https://developer.mongodb.com/article/realm-javascript-nan-to-n-api)**

As node-addon-api's core mission is to expose the plain C Node-API as C++ wrappers, tools that build on n-api/node-addon-api to provide more convenient patterns for developing Node.js add-ons can be published to NPM as standalone packages. It is also recommended to tag such packages with `node-addon-api` to give them more visibility in the community.

Quick links to NPM searches: [keywords:node-addon-api](https://www.npmjs.com/search?q=keywords%3Anode-addon-api).

<a name="other-bindings"></a>

### **Other bindings**

- **[napi-rs](https://napi.rs)** - (`Rust`)

<a name="badges"></a>

### **Badges**

The use of badges is recommended to indicate the minimum version of Node-API required for the module. This helps to determine which Node.js major versions are supported. Addon maintainers can consult the [Node-API support matrix][] to determine which Node.js versions provide a given Node-API version. The following badges are available:

![Node-API v1 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v1%20Badge.svg)
![Node-API v2 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v2%20Badge.svg)
![Node-API v3 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v3%20Badge.svg)
![Node-API v4 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v4%20Badge.svg)
![Node-API v5 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v5%20Badge.svg)
![Node-API v6 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v6%20Badge.svg)
![Node-API v7 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v7%20Badge.svg)
![Node-API v8 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v8%20Badge.svg)
![Node-API Experimental Version Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20Experimental%20Version%20Badge.svg)

## **Contributing**

We love contributions from the community to **node-addon-api**! See [CONTRIBUTING.md](CONTRIBUTING.md) for more details on our philosophy around extending this module.
<a name="contributors"></a>

## Team members

### Active

| Name                | GitHub Link                                            |
| ------------------- | ------------------------------------------------------ |
| Anna Henningsen     | [addaleax](https://github.com/addaleax)                |
| Chengzhong Wu       | [legendecas](https://github.com/legendecas)            |
| Gabriel Schulhof    | [gabrielschulhof](https://github.com/gabrielschulhof)  |
| Jim Schlight        | [jschlight](https://github.com/jschlight)              |
| Michael Dawson      | [mhdawson](https://github.com/mhdawson)                |
| Kevin Eady          | [KevinEady](https://github.com/KevinEady)              |
| Nicola Del Gobbo    | [NickNaso](https://github.com/NickNaso)                |

### Emeritus

| Name                | GitHub Link                                            |
| ------------------- | ------------------------------------------------------ |
| Arunesh Chandra     | [aruneshchandra](https://github.com/aruneshchandra)    |
| Benjamin Byholm     | [kkoopa](https://github.com/kkoopa)                    |
| Jason Ginchereau    | [jasongin](https://github.com/jasongin)                |
| Hitesh Kanwathirtha | [digitalinfinity](https://github.com/digitalinfinity)  |
| Sampson Gao         | [sampsongao](https://github.com/sampsongao)            |
| Taylor Woll         | [boingoing](https://github.com/boingoing)              |

<a name="license"></a>

Licensed under [MIT](./LICENSE.md)

[ABI stability guide]: https://nodejs.org/en/docs/guides/abi-stability/
[Node-API support matrix]: https://nodejs.org/dist/latest/docs/api/n-api.html#n_api_n_api_version_matrix

## Timezone support

In order to provide support for timezones, without relying on the JavaScript host or any other time-zone aware environment, this library makes use of the IANA Timezone Database directly: https://www.iana.org/time-zones

The database files are parsed by the scripts in this folder, which emit AssemblyScript code that is used to process the various rules at runtime.

node-bindings
=============

### Helper module for loading your native module's `.node` file

This is a helper module for authors of Node.js native addon modules. It is basically the "swiss army knife" of `require()`ing your native module's `.node` file.

Throughout the course of Node's native addon history, addons have ended up being compiled in a variety of different places, depending on which build tool and which version of node was used. To make matters worse, the `gyp` build tool can now produce either a __Release__ or __Debug__ build, each being built into different locations.

This module checks _all_ the possible locations that a native addon would be built at, and returns the first one that loads successfully.

Installation
------------

Install with `npm`:

``` bash
$ npm install --save bindings
```

Or add it to the `"dependencies"` section of your `package.json` file.

Example
-------

`require()`ing the proper bindings file for the current node version, platform and architecture is as simple as:

``` js
var bindings = require('bindings')('binding.node')

// Use your bindings defined in your C files
bindings.your_c_function()
```

Nice Error Output
-----------------

When the `.node` file could not be loaded, `node-bindings` throws an Error with a nice error message telling you exactly what was tried. You can also check the `err.tries` Array property.

```
Error: Could not load the bindings file.
Tried:
 → /Users/nrajlich/ref/build/binding.node
 → /Users/nrajlich/ref/build/Debug/binding.node
 → /Users/nrajlich/ref/build/Release/binding.node
 → /Users/nrajlich/ref/out/Debug/binding.node
 → /Users/nrajlich/ref/Debug/binding.node
 → /Users/nrajlich/ref/out/Release/binding.node
 → /Users/nrajlich/ref/Release/binding.node
 → /Users/nrajlich/ref/build/default/binding.node
 → /Users/nrajlich/ref/compiled/0.8.2/darwin/x64/binding.node
    at bindings (/Users/nrajlich/ref/node_modules/bindings/bindings.js:84:13)
    at Object.<anonymous> (/Users/nrajlich/ref/lib/ref.js:5:47)
    at Module._compile (module.js:449:26)
    at Object.Module._extensions..js (module.js:467:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    ...
```

The search for the `.node` file originates from the first directory in which a `package.json` file is found.

License
-------

(The MIT License)

Copyright (c) 2012 Nathan Rajlich &lt;nathan@tootallnate.net&gt;

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

# json-schema-traverse

Traverse JSON Schema passing each schema object to callback

[![Build Status](https://travis-ci.org/epoberezkin/json-schema-traverse.svg?branch=master)](https://travis-ci.org/epoberezkin/json-schema-traverse) [![npm version](https://badge.fury.io/js/json-schema-traverse.svg)](https://www.npmjs.com/package/json-schema-traverse) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master)

## Install

```
npm install json-schema-traverse
```

## Usage

```javascript
const traverse = require('json-schema-traverse');
const schema = {
  properties: {
    foo: {type: 'string'},
    bar: {type: 'integer'}
  }
};

traverse(schema, {cb});
// cb is called 3 times with:
// 1. root schema
// 2. {type: 'string'}
// 3. {type: 'integer'}

// Or:

traverse(schema, {cb: {pre, post}});
// pre is called 3 times with:
// 1. root schema
// 2. {type: 'string'}
// 3. {type: 'integer'}
//
// post is called 3 times with:
// 1. {type: 'string'}
// 2. {type: 'integer'}
// 3. root schema
```

Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved, they are passed as is. Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed.
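As a small illustrative sketch of the `{pre, post}` form (assuming the callback parameter order documented below), the two hooks can be used to trace when each subschema is entered and left:

```javascript
const traverse = require('json-schema-traverse');

const schema = {
  properties: {
    foo: {type: 'string'},
    bar: {type: 'integer'}
  }
};

// Only the first two documented parameters are used here
function pre(subSchema, jsonPtr) {
  console.log('enter', jsonPtr || '<root>');
}

function post(subSchema, jsonPtr) {
  console.log('leave', jsonPtr || '<root>');
}

traverse(schema, {cb: {pre, post}});
// enter <root>
// enter /properties/foo
// leave /properties/foo
// enter /properties/bar
// leave /properties/bar
// leave <root>
```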
Callback is passed these parameters: - _schema_: the current schema object - _JSON pointer_: from the root schema to the current schema object - _root schema_: the schema passed to `traverse` object - _parent JSON pointer_: from the root schema to the parent schema object (see below) - _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.) - _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema - _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'` ## Traverse objects in all unknown keywords ```javascript const traverse = require('json-schema-traverse'); const schema = { mySchema: { minimum: 1, maximum: 2 } }; traverse(schema, {allKeys: true, cb}); // cb is called 2 times with: // 1. root schema // 2. mySchema ``` Without option `allKeys: true` callback will be called only with root schema. ## License [MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE) [![Build Status](https://api.travis-ci.org/adaltas/node-csv-stringify.svg)](https://travis-ci.org/#!/adaltas/node-csv-stringify) [![NPM](https://img.shields.io/npm/dm/csv-stringify)](https://www.npmjs.com/package/csv-stringify) [![NPM](https://img.shields.io/npm/v/csv-stringify)](https://www.npmjs.com/package/csv-stringify) This package is a stringifier converting records into a CSV text and implementing the Node.js [`stream.Transform` API](https://nodejs.org/api/stream.html). It also provides the easier synchronous and callback-based APIs for conveniency. It is both extremely easy to use and powerful. It was first released in 2010 and is tested against big data sets by a large community. ## Documentation * [Project homepage](http://csv.js.org/stringify/) * [API](http://csv.js.org/stringify/api/) * [Options](http://csv.js.org/stringify/options/) * [Examples](http://csv.js.org/stringify/examples/) ## Main features * Follow the Node.js streaming API * Simplicity with the optional callback API * Support for custom formatters, delimiters, quotes, escape characters and header * Support big datasets * Complete test coverage and samples for inspiration * Only 1 external dependency * to be used conjointly with `csv-generate`, `csv-parse` and `stream-transform` * MIT License ## Usage The module is built on the Node.js Stream API. For the sake of simplicity, a simple callback API is also provided. To give you a quick look, here's an example of the callback API: ```javascript const stringify = require('csv-stringify') const assert = require('assert') // import stringify from 'csv-stringify' // import assert from 'assert/strict' const input = [ [ '1', '2', '3', '4' ], [ 'a', 'b', 'c', 'd' ] ] stringify(input, function(err, output) { const expected = '1,2,3,4\na,b,c,d\n' assert.strictEqual(output, expected, `output.should.eql ${expected}`) console.log("Passed.", output) }) ``` ## Development Tests are executed with mocha. To install it, run `npm install` followed by `npm test`. It will install mocha and its dependencies in your project "node_modules" directory and run the test suite. The tests run against the CoffeeScript source files. To generate the JavaScript files, run `npm run build`. The test suite is run online with [Travis](https://travis-ci.org/#!/adaltas/node-csv-stringify). 
See the [Travis definition file](https://github.com/adaltas/node-csv-stringify/blob/master/.travis.yml) to view the tested Node.js version. ## Contributors * David Worms: <https://github.com/wdavidw> [csv_home]: https://github.com/adaltas/node-csv [stream_transform]: http://nodejs.org/api/stream.html#stream_class_stream_transform [examples]: http://csv.js.org/stringify/examples/ [csv]: https://github.com/adaltas/node-csv # emoji-regex [![Build status](https://travis-ci.org/mathiasbynens/emoji-regex.svg?branch=master)](https://travis-ci.org/mathiasbynens/emoji-regex) _emoji-regex_ offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard. This repository contains a script that generates this regular expression based on [the data from Unicode v12](https://github.com/mathiasbynens/unicode-12.0.0). Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard. ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install emoji-regex ``` In [Node.js](https://nodejs.org/): ```js const emojiRegex = require('emoji-regex'); // Note: because the regular expression has the global flag set, this module // exports a function that returns the regex rather than exporting the regular // expression itself, to make it impossible to (accidentally) mutate the // original regular expression. const text = ` \u{231A}: ⌚ default emoji presentation character (Emoji_Presentation) \u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji \u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base) \u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier `; const regex = emojiRegex(); let match; while (match = regex.exec(text)) { const emoji = match[0]; console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`); } ``` Console output: ``` Matched sequence ⌚ — code points: 1 Matched sequence ⌚ — code points: 1 Matched sequence ↔️ — code points: 2 Matched sequence ↔️ — code points: 2 Matched sequence 👩 — code points: 1 Matched sequence 👩 — code points: 1 Matched sequence 👩🏿 — code points: 2 Matched sequence 👩🏿 — code points: 2 ``` To match emoji in their textual representation as well (i.e. emoji that are not `Emoji_Presentation` symbols and that aren’t forced to render as emoji by a variation selector), `require` the other regex: ```js const emojiRegex = require('emoji-regex/text.js'); ``` Additionally, in environments which support ES2015 Unicode escapes, you may `require` ES2015-style versions of the regexes: ```js const emojiRegex = require('emoji-regex/es2015/index.js'); const emojiRegexText = require('emoji-regex/es2015/text.js'); ``` ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License _emoji-regex_ is available under the [MIT](https://mths.be/mit) license. # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Current Status whatwg-url is currently up to date with the URL spec up to commit [a62223](https://github.com/whatwg/url/commit/a622235308342c9adc7fc2fd1659ff059f7d5e2a). 
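As a quick orientation before the API reference below, here is a minimal sketch of the main `URL` export in use (the example URL and the printed values are illustrative):

```js
const { URL } = require('whatwg-url');

const url = new URL('/path?x=1#frag', 'https://example.com:8080');
console.log(url.href);     // 'https://example.com:8080/path?x=1#frag'
console.log(url.host);     // 'example.com:8080'
console.log(url.pathname); // '/path'
```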
## API ### The `URL` Constructor The main API is the [`URL`](https://url.spec.whatwg.org/#url) export, which follows the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use this. ### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. - [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/browsers.html#serialization-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - 
[`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - [`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by the string `"failure"`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ the string `"failure"`. node-fetch ========== [![npm version][npm-image]][npm-url] [![build status][travis-image]][travis-url] [![coverage status][codecov-image]][codecov-url] [![install size][install-size-image]][install-size-url] [![Discord][discord-image]][discord-url] A light-weight module that brings `window.fetch` to Node.js (We are looking for [v2 maintainers and collaborators](https://github.com/bitinn/node-fetch/issues/567)) [![Backers][opencollective-image]][opencollective-url] <!-- TOC --> - [Motivation](#motivation) - [Features](#features) - [Difference from client-side fetch](#difference-from-client-side-fetch) - [Installation](#installation) - [Loading and configuring the module](#loading-and-configuring-the-module) - [Common Usage](#common-usage) - [Plain text or HTML](#plain-text-or-html) - [JSON](#json) - [Simple Post](#simple-post) - [Post with JSON](#post-with-json) - [Post with form parameters](#post-with-form-parameters) - [Handling exceptions](#handling-exceptions) - [Handling client and server errors](#handling-client-and-server-errors) - [Advanced Usage](#advanced-usage) - [Streams](#streams) - [Buffer](#buffer) - [Accessing Headers and other Meta data](#accessing-headers-and-other-meta-data) - [Extract Set-Cookie Header](#extract-set-cookie-header) - [Post data using a file stream](#post-data-using-a-file-stream) - [Post with form-data (detect multipart)](#post-with-form-data-detect-multipart) - [Request cancellation with AbortSignal](#request-cancellation-with-abortsignal) - [API](#api) - [fetch(url[, options])](#fetchurl-options) - [Options](#options) - [Class: Request](#class-request) - [Class: Response](#class-response) - [Class: Headers](#class-headers) - [Interface: Body](#interface-body) - [Class: FetchError](#class-fetcherror) - [License](#license) - [Acknowledgement](#acknowledgement) <!-- /TOC --> ## Motivation Instead of implementing `XMLHttpRequest` in Node.js to run browser-specific [Fetch polyfill](https://github.com/github/fetch), why not go from native `http` to `fetch` API directly? Hence, `node-fetch`, minimal code for a `window.fetch` compatible API on Node.js runtime. See Matt Andrews' [isomorphic-fetch](https://github.com/matthew-andrews/isomorphic-fetch) or Leonardo Quixada's [cross-fetch](https://github.com/lquixada/cross-fetch) for isomorphic usage (exports `node-fetch` for server-side, `whatwg-fetch` for client-side). 
## Features - Stay consistent with `window.fetch` API. - Make conscious trade-off when following [WHATWG fetch spec][whatwg-fetch] and [stream spec](https://streams.spec.whatwg.org/) implementation details, document known differences. - Use native promise but allow substituting it with [insert your favorite promise library]. - Use native Node streams for body on both request and response. - Decode content encoding (gzip/deflate) properly and convert string output (such as `res.text()` and `res.json()`) to UTF-8 automatically. - Useful extensions such as timeout, redirect limit, response size limit, [explicit errors](ERROR-HANDLING.md) for troubleshooting. ## Difference from client-side fetch - See [Known Differences](LIMITS.md) for details. - If you happen to use a missing feature that `window.fetch` offers, feel free to open an issue. - Pull requests are welcomed too! ## Installation Current stable release (`2.x`) ```sh $ npm install node-fetch ``` ## Loading and configuring the module We suggest you load the module via `require` until the stabilization of ES modules in node: ```js const fetch = require('node-fetch'); ``` If you are using a Promise library other than native, set it through `fetch.Promise`: ```js const Bluebird = require('bluebird'); fetch.Promise = Bluebird; ``` ## Common Usage NOTE: The documentation below is up-to-date with `2.x` releases; see the [`1.x` readme](https://github.com/bitinn/node-fetch/blob/1.x/README.md), [changelog](https://github.com/bitinn/node-fetch/blob/1.x/CHANGELOG.md) and [2.x upgrade guide](UPGRADE-GUIDE.md) for the differences. #### Plain text or HTML ```js fetch('https://github.com/') .then(res => res.text()) .then(body => console.log(body)); ``` #### JSON ```js fetch('https://api.github.com/users/github') .then(res => res.json()) .then(json => console.log(json)); ``` #### Simple Post ```js fetch('https://httpbin.org/post', { method: 'POST', body: 'a=1' }) .then(res => res.json()) // expecting a json response .then(json => console.log(json)); ``` #### Post with JSON ```js const body = { a: 1 }; fetch('https://httpbin.org/post', { method: 'post', body: JSON.stringify(body), headers: { 'Content-Type': 'application/json' }, }) .then(res => res.json()) .then(json => console.log(json)); ``` #### Post with form parameters `URLSearchParams` is available in Node.js as of v7.5.0. See [official documentation](https://nodejs.org/api/url.html#url_class_urlsearchparams) for more usage methods. NOTE: The `Content-Type` header is only set automatically to `x-www-form-urlencoded` when an instance of `URLSearchParams` is given as such: ```js const { URLSearchParams } = require('url'); const params = new URLSearchParams(); params.append('a', 1); fetch('https://httpbin.org/post', { method: 'POST', body: params }) .then(res => res.json()) .then(json => console.log(json)); ``` #### Handling exceptions NOTE: 3xx-5xx responses are *NOT* exceptions and should be handled in `then()`; see the next section for more information. Adding a catch to the fetch promise chain will catch *all* exceptions, such as errors originating from node core libraries, network errors and operational errors, which are instances of FetchError. See the [error handling document](ERROR-HANDLING.md) for more details. 
```js fetch('https://domain.invalid/') .catch(err => console.error(err)); ``` #### Handling client and server errors It is common to create a helper function to check that the response contains no client (4xx) or server (5xx) error responses: ```js function checkStatus(res) { if (res.ok) { // res.status >= 200 && res.status < 300 return res; } else { throw MyCustomError(res.statusText); } } fetch('https://httpbin.org/status/400') .then(checkStatus) .then(res => console.log('will not get here...')) ``` ## Advanced Usage #### Streams The "Node.js way" is to use streams when possible: ```js fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png') .then(res => { const dest = fs.createWriteStream('./octocat.png'); res.body.pipe(dest); }); ``` #### Buffer If you prefer to cache binary data in full, use buffer(). (NOTE: `buffer()` is a `node-fetch`-only API) ```js const fileType = require('file-type'); fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png') .then(res => res.buffer()) .then(buffer => fileType(buffer)) .then(type => { /* ... */ }); ``` #### Accessing Headers and other Meta data ```js fetch('https://github.com/') .then(res => { console.log(res.ok); console.log(res.status); console.log(res.statusText); console.log(res.headers.raw()); console.log(res.headers.get('content-type')); }); ``` #### Extract Set-Cookie Header Unlike browsers, you can access raw `Set-Cookie` headers manually using `Headers.raw()`. This is a `node-fetch` only API. ```js fetch(url).then(res => { // returns an array of values, instead of a string of comma-separated values console.log(res.headers.raw()['set-cookie']); }); ``` #### Post data using a file stream ```js const { createReadStream } = require('fs'); const stream = createReadStream('input.txt'); fetch('https://httpbin.org/post', { method: 'POST', body: stream }) .then(res => res.json()) .then(json => console.log(json)); ``` #### Post with form-data (detect multipart) ```js const FormData = require('form-data'); const form = new FormData(); form.append('a', 1); fetch('https://httpbin.org/post', { method: 'POST', body: form }) .then(res => res.json()) .then(json => console.log(json)); // OR, using custom headers // NOTE: getHeaders() is non-standard API const form = new FormData(); form.append('a', 1); const options = { method: 'POST', body: form, headers: form.getHeaders() } fetch('https://httpbin.org/post', options) .then(res => res.json()) .then(json => console.log(json)); ``` #### Request cancellation with AbortSignal > NOTE: You may cancel streamed requests only on Node >= v8.0.0 You may cancel requests with `AbortController`. A suggested implementation is [`abort-controller`](https://www.npmjs.com/package/abort-controller). An example of timing out a request after 150ms could be achieved as the following: ```js import AbortController from 'abort-controller'; const controller = new AbortController(); const timeout = setTimeout( () => { controller.abort(); }, 150, ); fetch(url, { signal: controller.signal }) .then(res => res.json()) .then( data => { useData(data) }, err => { if (err.name === 'AbortError') { // request was aborted } }, ) .finally(() => { clearTimeout(timeout); }); ``` See [test cases](https://github.com/bitinn/node-fetch/blob/master/test/test.js) for more examples. 
## API ### fetch(url[, options]) - `url` A string representing the URL for fetching - `options` [Options](#fetch-options) for the HTTP(S) request - Returns: <code>Promise&lt;[Response](#class-response)&gt;</code> Perform an HTTP(S) fetch. `url` should be an absolute url, such as `https://example.com/`. A path-relative URL (`/file/under/root`) or protocol-relative URL (`//can-be-http-or-https.com/`) will result in a rejected `Promise`. <a id="fetch-options"></a> ### Options The default values are shown after each option key. ```js { // These properties are part of the Fetch Standard method: 'GET', headers: {}, // request headers. format is the identical to that accepted by the Headers constructor (see below) body: null, // request body. can be null, a string, a Buffer, a Blob, or a Node.js Readable stream redirect: 'follow', // set to `manual` to extract redirect headers, `error` to reject redirect signal: null, // pass an instance of AbortSignal to optionally abort requests // The following properties are node-fetch extensions follow: 20, // maximum redirect count. 0 to not follow redirect timeout: 0, // req/res timeout in ms, it resets on redirect. 0 to disable (OS limit applies). Signal is recommended instead. compress: true, // support gzip/deflate content encoding. false to disable size: 0, // maximum response body size in bytes. 0 to disable agent: null // http(s).Agent instance or function that returns an instance (see below) } ``` ##### Default Headers If no values are set, the following request headers will be sent automatically: Header | Value ------------------- | -------------------------------------------------------- `Accept-Encoding` | `gzip,deflate` _(when `options.compress === true`)_ `Accept` | `*/*` `Connection` | `close` _(when no `options.agent` is present)_ `Content-Length` | _(automatically calculated, if possible)_ `Transfer-Encoding` | `chunked` _(when `req.body` is a stream)_ `User-Agent` | `node-fetch/1.0 (+https://github.com/bitinn/node-fetch)` Note: when `body` is a `Stream`, `Content-Length` is not set automatically. ##### Custom Agent The `agent` option allows you to specify networking related options which are out of the scope of Fetch, including and not limited to the following: - Support self-signed certificate - Use only IPv4 or IPv6 - Custom DNS Lookup See [`http.Agent`](https://nodejs.org/api/http.html#http_new_agent_options) for more information. In addition, the `agent` option accepts a function that returns `http`(s)`.Agent` instance given current [URL](https://nodejs.org/api/url.html), this is useful during a redirection chain across HTTP and HTTPS protocol. ```js const httpAgent = new http.Agent({ keepAlive: true }); const httpsAgent = new https.Agent({ keepAlive: true }); const options = { agent: function (_parsedURL) { if (_parsedURL.protocol == 'http:') { return httpAgent; } else { return httpsAgent; } } } ``` <a id="class-request"></a> ### Class: Request An HTTP(S) request containing information about URL, method, headers, and the body. This class implements the [Body](#iface-body) interface. Due to the nature of Node.js, the following properties are not implemented at this moment: - `type` - `destination` - `referrer` - `referrerPolicy` - `mode` - `credentials` - `cache` - `integrity` - `keepalive` The following node-fetch extension properties are provided: - `follow` - `compress` - `counter` - `agent` See [options](#fetch-options) for exact meaning of these extensions. 
#### new Request(input[, options]) <small>*(spec-compliant)*</small> - `input` A string representing a URL, or another `Request` (which will be cloned) - `options` [Options][#fetch-options] for the HTTP(S) request Constructs a new `Request` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Request/Request). In most cases, directly `fetch(url, options)` is simpler than creating a `Request` object. <a id="class-response"></a> ### Class: Response An HTTP(S) response. This class implements the [Body](#iface-body) interface. The following properties are not implemented in node-fetch at this moment: - `Response.error()` - `Response.redirect()` - `type` - `trailer` #### new Response([body[, options]]) <small>*(spec-compliant)*</small> - `body` A `String` or [`Readable` stream][node-readable] - `options` A [`ResponseInit`][response-init] options dictionary Constructs a new `Response` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Response/Response). Because Node.js does not implement service workers (for which this class was designed), one rarely has to construct a `Response` directly. #### response.ok <small>*(spec-compliant)*</small> Convenience property representing if the request ended normally. Will evaluate to true if the response status was greater than or equal to 200 but smaller than 300. #### response.redirected <small>*(spec-compliant)*</small> Convenience property representing if the request has been redirected at least once. Will evaluate to true if the internal redirect counter is greater than 0. <a id="class-headers"></a> ### Class: Headers This class allows manipulating and iterating over a set of HTTP headers. All methods specified in the [Fetch Standard][whatwg-fetch] are implemented. #### new Headers([init]) <small>*(spec-compliant)*</small> - `init` Optional argument to pre-fill the `Headers` object Construct a new `Headers` object. `init` can be either `null`, a `Headers` object, an key-value map object or any iterable object. ```js // Example adapted from https://fetch.spec.whatwg.org/#example-headers-class const meta = { 'Content-Type': 'text/xml', 'Breaking-Bad': '<3' }; const headers = new Headers(meta); // The above is equivalent to const meta = [ [ 'Content-Type', 'text/xml' ], [ 'Breaking-Bad', '<3' ] ]; const headers = new Headers(meta); // You can in fact use any iterable objects, like a Map or even another Headers const meta = new Map(); meta.set('Content-Type', 'text/xml'); meta.set('Breaking-Bad', '<3'); const headers = new Headers(meta); const copyOfHeaders = new Headers(headers); ``` <a id="iface-body"></a> ### Interface: Body `Body` is an abstract interface with methods that are applicable to both `Request` and `Response` classes. The following methods are not yet implemented in node-fetch at this moment: - `formData()` #### body.body <small>*(deviation from spec)*</small> * Node.js [`Readable` stream][node-readable] Data are encapsulated in the `Body` object. Note that while the [Fetch Standard][whatwg-fetch] requires the property to always be a WHATWG `ReadableStream`, in node-fetch it is a Node.js [`Readable` stream][node-readable]. #### body.bodyUsed <small>*(spec-compliant)*</small> * `Boolean` A boolean property for if this body has been consumed. Per the specs, a consumed body cannot be used again. 
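For illustration, a short sketch of how `bodyUsed` changes as the body is consumed (the URL and flow are illustrative):

```js
fetch('https://github.com/')
  .then(res => {
    console.log(res.bodyUsed); // false
    return res.text().then(() => {
      console.log(res.bodyUsed); // true
      // A second read such as res.json() would now reject,
      // because a consumed body cannot be used again.
    });
  });
```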
#### body.arrayBuffer() #### body.blob() #### body.json() #### body.text() <small>*(spec-compliant)*</small> * Returns: <code>Promise</code> Consume the body and return a promise that will resolve to one of these formats. #### body.buffer() <small>*(node-fetch extension)*</small> * Returns: <code>Promise&lt;Buffer&gt;</code> Consume the body and return a promise that will resolve to a Buffer. #### body.textConverted() <small>*(node-fetch extension)*</small> * Returns: <code>Promise&lt;String&gt;</code> Identical to `body.text()`, except instead of always converting to UTF-8, encoding sniffing will be performed and text converted to UTF-8 if possible. (This API requires an optional dependency of the npm package [encoding](https://www.npmjs.com/package/encoding), which you need to install manually. `webpack` users may see [a warning message](https://github.com/bitinn/node-fetch/issues/412#issuecomment-379007792) due to this optional dependency.) <a id="class-fetcherror"></a> ### Class: FetchError <small>*(node-fetch extension)*</small> An operational error in the fetching process. See [ERROR-HANDLING.md][] for more info. <a id="class-aborterror"></a> ### Class: AbortError <small>*(node-fetch extension)*</small> An Error thrown when the request is aborted in response to an `AbortSignal`'s `abort` event. It has a `name` property of `AbortError`. See [ERROR-HANDLING.MD][] for more info. ## Acknowledgement Thanks to [github/fetch](https://github.com/github/fetch) for providing a solid implementation reference. `node-fetch` v1 was maintained by [@bitinn](https://github.com/bitinn); v2 was maintained by [@TimothyGu](https://github.com/timothygu), [@bitinn](https://github.com/bitinn) and [@jimmywarting](https://github.com/jimmywarting); v2 readme is written by [@jkantr](https://github.com/jkantr). ## License MIT [npm-image]: https://flat.badgen.net/npm/v/node-fetch [npm-url]: https://www.npmjs.com/package/node-fetch [travis-image]: https://flat.badgen.net/travis/bitinn/node-fetch [travis-url]: https://travis-ci.org/bitinn/node-fetch [codecov-image]: https://flat.badgen.net/codecov/c/github/bitinn/node-fetch/master [codecov-url]: https://codecov.io/gh/bitinn/node-fetch [install-size-image]: https://flat.badgen.net/packagephobia/install/node-fetch [install-size-url]: https://packagephobia.now.sh/result?p=node-fetch [discord-image]: https://img.shields.io/discord/619915844268326952?color=%237289DA&label=Discord&style=flat-square [discord-url]: https://discord.gg/Zxbndcm [opencollective-image]: https://opencollective.com/node-fetch/backers.svg [opencollective-url]: https://opencollective.com/node-fetch [whatwg-fetch]: https://fetch.spec.whatwg.org/ [response-init]: https://fetch.spec.whatwg.org/#responseinit [node-readable]: https://nodejs.org/api/stream.html#stream_readable_streams [mdn-headers]: https://developer.mozilla.org/en-US/docs/Web/API/Headers [LIMITS.md]: https://github.com/bitinn/node-fetch/blob/master/LIMITS.md [ERROR-HANDLING.md]: https://github.com/bitinn/node-fetch/blob/master/ERROR-HANDLING.md [UPGRADE-GUIDE.md]: https://github.com/bitinn/node-fetch/blob/master/UPGRADE-GUIDE.md USB Library for Node.JS =============================== [![Build Status](https://github.com/node-usb/node-usb/workflows/prebuild/badge.svg)](https://github.com/node-usb/node-usb/actions) Node.JS library for communicating with USB devices in JavaScript / CoffeeScript. This is a refactoring / rewrite of Christopher Klein's [node-usb](https://github.com/schakko/node-usb). 
The API is not compatible (hopefully you find it an improvement). It's based entirely on libusb's asynchronous API for better efficiency, and provides a stream API for continuously streaming data or events. Installation ============ Libusb is included as a submodule. On Linux, you'll need libudev to build libusb. On Ubuntu/Debian: `sudo apt-get install build-essential libudev-dev` Then, just run npm install usb to install from npm. See the bottom of this page for instructions for building from a git checkout. ### Windows Use [Zadig](http://zadig.akeo.ie/) to install the WinUSB driver for your USB device. Otherwise you will get `LIBUSB_ERROR_NOT_SUPPORTED` when attempting to open devices. API === var usb = require('usb') usb --- Top-level object. ### usb.getDeviceList() Return a list of `Device` objects for the USB devices attached to the system. ### usb.findByIds(vid, pid) Convenience method to get the first device with the specified VID and PID, or `undefined` if no such device is present. ### usb.LIBUSB_* Constant properties from libusb ### usb.setDebugLevel(level : int) Set the libusb debug level (between 0 and 4) Device ------ Represents a USB device. ### .busNumber Integer USB device number ### .deviceAddress Integer USB device address ### .portNumbers Array containing the USB device port numbers, or `undefined` if not supported on this platform. ### .deviceDescriptor Object with properties for the fields of the device descriptor: - bLength - bDescriptorType - bcdUSB - bDeviceClass - bDeviceSubClass - bDeviceProtocol - bMaxPacketSize0 - idVendor - idProduct - bcdDevice - iManufacturer - iProduct - iSerialNumber - bNumConfigurations ### .configDescriptor Object with properties for the fields of the configuration descriptor: - bLength - bDescriptorType - wTotalLength - bNumInterfaces - bConfigurationValue - iConfiguration - bmAttributes - bMaxPower - extra (Buffer containing any extra data or additional descriptors) ### .allConfigDescriptors Contains all config descriptors of the device (same structure as .configDescriptor above) ### .parent Contains the parent of the device, such as a hub. If there is no parent this property is set to `null`. ### .open() Open the device. All methods below require the device to be open before use. ### .close() Close the device. ### .controlTransfer(bmRequestType, bRequest, wValue, wIndex, data_or_length, callback(error, data)) Perform a control transfer with `libusb_control_transfer`. Parameter `data_or_length` can be a integer length for an IN transfer, or a Buffer for an out transfer. The type must match the direction specified in the MSB of bmRequestType. The `data` parameter of the callback is always undefined for OUT transfers, or will be passed a Buffer for IN transfers. A [package is available to calculate bmRequestType](https://www.npmjs.com/package/bmrequesttype) if needed. ### .setConfiguration(id, callback(error)) Set the device configuration to something other than the default (0). To use this, first call `.open(false)` (which tells it not to auto configure), then before claiming an interface, call this method. 
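As an illustration of `.controlTransfer()` described above, here is a sketch of a standard GET_DESCRIPTOR request. It assumes `device` is an already-opened `Device`; since this is an IN transfer, a length is passed in and a Buffer comes back:

```js
// Read the 18-byte device descriptor with a standard control request.
// bmRequestType 0x80 = IN (device-to-host), standard type, device recipient.
device.controlTransfer(
  0x80,   // bmRequestType (MSB set: IN transfer)
  0x06,   // bRequest: GET_DESCRIPTOR
  0x0100, // wValue: descriptor type DEVICE (1) in the high byte, index 0
  0x0000, // wIndex
  18,     // IN transfer: number of bytes to request
  function (error, data) {
    if (error) throw error
    console.log(data) // Buffer containing the raw device descriptor
  }
)
```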
### .getStringDescriptor(index, callback(error, data)) Perform a control transfer to retrieve a string descriptor ### .getBosDescriptor(callback(error, bosDescriptor)) Perform a control transfer to retrieve an object with properties for the fields of the Binary Object Store descriptor: - bLength - bDescriptorType - wTotalLength - bNumDeviceCaps ### .getCapabilities(callback(error, capabilities)) Retrieve a list of Capability objects for the Binary Object Store capabilities of the device. ### .interface(interface) Return the interface with the specified interface number. ### .interfaces List of Interface objects for the interfaces of the default configuration of the device. ### .timeout Timeout in milliseconds to use for control transfers. ### .reset(callback(error)) Performs a reset of the device. Callback is called when complete. Interface --------- ### .endpoint(address) Return the InEndpoint or OutEndpoint with the specified address. ### .endpoints List of endpoints on this interface: InEndpoint and OutEndpoint objects. ### .interface Integer interface number. ### .altSetting Integer alternate setting number. ### .setAltSetting(altSetting, callback(error)) Sets the alternate setting. It updates the `interface.endpoints` array to reflect the endpoints found in the alternate setting. ### .claim() Claims the interface. This method must be called before using any endpoints of this interface. ### .release([closeEndpoints], callback(error)) Releases the interface and resets the alternate setting. Calls callback when complete. It is an error to release an interface with pending transfers. If the optional closeEndpoints parameter is true, any active endpoint streams are stopped (see `Endpoint.stopStream`), and the interface is released after the stream transfers are cancelled. Transfers submitted individually with `Endpoint.transfer` are not affected by this parameter. ### .isKernelDriverActive() Returns `false` if a kernel driver is not active; `true` if active. ### .detachKernelDriver() Detaches the kernel driver from the interface. ### .attachKernelDriver() Re-attaches the kernel driver for the interface. ### .descriptor Object with fields from the interface descriptor -- see libusb documentation or USB spec. - bLength - bDescriptorType - bInterfaceNumber - bAlternateSetting - bNumEndpoints - bInterfaceClass - bInterfaceSubClass - bInterfaceProtocol - iInterface - extra (Buffer containing any extra data or additional descriptors) Capability --------- ### .type Integer capability type. ### .data Buffer capability data. ### .descriptor Object with fields from the capability descriptor -- see libusb documentation or USB spec. - bLength - bDescriptorType - bDevCapabilityType Endpoint -------- Common base for InEndpoint and OutEndpoint, see below. ### .direction Endpoint direction: `"in"` or `"out"`. ### .transferType Endpoint type: `usb.LIBUSB_TRANSFER_TYPE_BULK`, `usb.LIBUSB_TRANSFER_TYPE_INTERRUPT`, or `usb.LIBUSB_TRANSFER_TYPE_ISOCHRONOUS`. ### .descriptor Object with fields from the endpoint descriptor -- see libusb documentation or USB spec. - bLength - bDescriptorType - bEndpointAddress - bmAttributes - wMaxPacketSize - bInterval - bRefresh - bSynchAddress - extra (Buffer containing any extra data or additional descriptors) ### .timeout Sets the timeout in milliseconds for transfers on this endpoint. The default, `0`, is infinite timeout. ### .clearHalt(callback(error)) Clear the halt/stall condition for this endpoint. 
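Putting the Device, Interface, and Endpoint pieces together, here is a minimal sketch of reading from a bulk IN endpoint. The vendor/product IDs and endpoint layout are hypothetical, and the `InEndpoint.transfer()` method used at the end is documented just below:

```js
var usb = require('usb')

// Hypothetical vendor/product IDs; replace with your device's values
var device = usb.findByIds(0x1234, 0x5678)
if (!device) throw new Error('device not found')

device.open()

var iface = device.interfaces[0]
// On Linux a kernel driver may have to be detached before claiming
if (iface.isKernelDriverActive()) iface.detachKernelDriver()
iface.claim()

// Pick the first IN endpoint and read up to 64 bytes from it
var inEndpoint = iface.endpoints.filter(function (e) { return e.direction === 'in' })[0]
inEndpoint.transfer(64, function (error, data) {
  if (error) return console.error(error)
  console.log('received', data)
  iface.release(true, function () { device.close() })
})
```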
InEndpoint
----------

Endpoints in the IN direction (device->PC) have this type.

### .transfer(length, callback(error, data))
Perform a transfer to read data from the endpoint.

If length is greater than maxPacketSize, libusb will automatically split the transfer in multiple packets, and you will receive one callback with all data once all packets are complete.

`this` in the callback is the InEndpoint object.

### .startPoll(nTransfers=3, transferSize=maxPacketSize)
Start polling the endpoint. The library will keep `nTransfers` transfers of size `transferSize` pending in the kernel at all times to ensure continuous data flow. This is handled by the libusb event thread, so it continues even if the Node v8 thread is busy. The `data` and `error` events are emitted as transfers complete.

### .stopPoll(cb)
Stop polling. Further data may still be received. The `end` event is emitted and the callback is called once all transfers have completed or been canceled.

### Event: data(data : Buffer)
Emitted with data received by the polling transfers.

### Event: error(error)
Emitted when polling encounters an error. All in-flight transfers will be automatically canceled and no further polling will be done. You have to wait for the `end` event before you can start polling again.

### Event: end
Emitted when polling has been canceled.

OutEndpoint
-----------

Endpoints in the OUT direction (PC->device) have this type.

### .transfer(data, callback(error))
Perform a transfer to write `data` to the endpoint.

If length is greater than maxPacketSize, libusb will automatically split the transfer in multiple packets, and you will receive one callback once all packets are complete.

`this` in the callback is the OutEndpoint object.

### Event: error(error)
Emitted when the stream encounters an error.

### Event: end
Emitted when the stream has been stopped and all pending requests have been completed.

UsbDetection
------------

### usb.on('attach', function(device) { ... });
Attaches a callback to plugging in a `device`.

### usb.on('detach', function(device) { ... });
Attaches a callback to unplugging a `device`.

### usb.refHotplugEvents();
Restore (re-reference) the hotplug events unreferenced by `unrefHotplugEvents()`.

### usb.unrefHotplugEvents();
Listening to events will prevent the process from exiting. By calling this function, hotplug events will be unreferenced by the event loop, allowing the process to exit even when listening for the `attach` and `detach` events.

Development and testing
=======================

To build from git:

    git clone --recursive https://github.com/node-usb/node-usb.git
    cd node-usb
    npm install

To execute the unit tests, [CoffeeScript](http://coffeescript.org) is required. Run

    npm test

Some tests require an [attached STM32F103 microprocessor USB device with specific firmware](https://github.com/thegecko/node-usb-test-firmware).

    npm run --silent full-test
    npm run --silent valgrind

Limitations
===========

Does not support:

- Configurations other than the default one
- Isochronous transfers

License
=======

MIT

Note that the compiled Node extension includes Libusb, and is thus subject to the LGPL.
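Tying the endpoint and hotplug APIs together, here is a hedged end-to-end sketch, not part of the upstream README. Endpoint layout is device-specific, so the interface number and the IN endpoint address `0x81` are assumptions.

```js
var usb = require('usb')

usb.on('attach', function (device) {
  // In real code you would filter by device.deviceDescriptor.idVendor/idProduct.
  streamFrom(device)
})
usb.on('detach', function () {
  console.log('device detached')
})

function streamFrom (device) {
  device.open()
  var iface = device.interface(0)       // assumed interface number
  iface.claim()

  var inEndpoint = iface.endpoint(0x81) // assumed IN endpoint address

  inEndpoint.on('data', function (data) { console.log('chunk:', data) })
  inEndpoint.on('error', function (err) { console.error('poll error:', err) })
  inEndpoint.on('end', function () { console.log('polling stopped') })

  // Keep 3 transfers of wMaxPacketSize bytes in flight at all times.
  inEndpoint.startPoll(3, inEndpoint.descriptor.wMaxPacketSize)
}
```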
# Polyfill for `Object.setPrototypeOf`

[![NPM Version](https://img.shields.io/npm/v/setprototypeof.svg)](https://npmjs.org/package/setprototypeof)
[![NPM Downloads](https://img.shields.io/npm/dm/setprototypeof.svg)](https://npmjs.org/package/setprototypeof)
[![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg)](https://github.com/standard/standard)

A simple cross-platform implementation to set the prototype of an instantiated object. Supports all modern browsers and at least back to IE8.

## Usage:

```
$ npm install --save setprototypeof
```

```javascript
var setPrototypeOf = require('setprototypeof')

var obj = {}
setPrototypeOf(obj, {
  foo: function () {
    return 'bar'
  }
})
obj.foo() // bar
```

TypeScript is also supported:

```typescript
import setPrototypeOf from 'setprototypeof'
```

Deep Extend
===========

Recursive object extending.

[![Build Status](https://api.travis-ci.org/unclechu/node-deep-extend.svg?branch=master)](https://travis-ci.org/unclechu/node-deep-extend)

[![NPM](https://nodei.co/npm/deep-extend.png?downloads=true&downloadRank=true&stars=true)](https://nodei.co/npm/deep-extend/)

Install
-------

```bash
$ npm install deep-extend
```

Usage
-----

```javascript
var deepExtend = require('deep-extend');
var obj1 = {
  a: 1,
  b: 2,
  d: {
    a: 1,
    b: [],
    c: { test1: 123, test2: 321 }
  },
  f: 5,
  g: 123,
  i: 321,
  j: [1, 2]
};
var obj2 = {
  b: 3,
  c: 5,
  d: {
    b: { first: 'one', second: 'two' },
    c: { test2: 222 }
  },
  e: { one: 1, two: 2 },
  f: [],
  g: (void 0),
  h: /abc/g,
  i: null,
  j: [3, 4]
};

deepExtend(obj1, obj2);

console.log(obj1);
/*
{ a: 1,
  b: 3,
  d:
   { a: 1,
     b: { first: 'one', second: 'two' },
     c: { test1: 123, test2: 222 } },
  f: [],
  g: undefined,
  c: 5,
  e: { one: 1, two: 2 },
  h: /abc/g,
  i: null,
  j: [3, 4] }
*/
```

Unit testing
------------

```bash
$ npm test
```

Changelog
---------

[CHANGELOG.md](./CHANGELOG.md)

Any issues?
-----------

Please, report about issues [here](https://github.com/unclechu/node-deep-extend/issues).

License
-------

[MIT](./LICENSE)

# simple-get [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url]

[travis-image]: https://img.shields.io/travis/feross/simple-get/master.svg
[travis-url]: https://travis-ci.org/feross/simple-get
[npm-image]: https://img.shields.io/npm/v/simple-get.svg
[npm-url]: https://npmjs.org/package/simple-get
[downloads-image]: https://img.shields.io/npm/dm/simple-get.svg
[downloads-url]: https://npmjs.org/package/simple-get
[standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg
[standard-url]: https://standardjs.com

### Simplest way to make http get requests

## features

This module is the lightest possible wrapper on top of node.js `http`, but supporting these essential features:

- follows redirects
- automatically handles gzip/deflate responses
- supports HTTPS
- supports specifying a timeout
- supports convenience `url` key so there's no need to use `url.parse` on the url when specifying options
- composes well with npm packages for features like cookies, proxies, form data, & OAuth

All this in < 100 lines of code.

## install

```
npm install simple-get
```

## usage

Note, all these examples also work in the browser with [browserify](http://browserify.org/).
### simple GET request

Doesn't get easier than this:

```js
const get = require('simple-get')

get('http://example.com', function (err, res) {
  if (err) throw err
  console.log(res.statusCode) // 200
  res.pipe(process.stdout) // `res` is a stream
})
```

### even simpler GET request

If you just want the data, and don't want to deal with streams:

```js
const get = require('simple-get')

get.concat('http://example.com', function (err, res, data) {
  if (err) throw err
  console.log(res.statusCode) // 200
  console.log(data) // Buffer('this is the server response')
})
```

### POST, PUT, PATCH, HEAD, DELETE support

For `POST`, call `get.post` or use option `{ method: 'POST' }`.

```js
const get = require('simple-get')

const opts = {
  url: 'http://example.com',
  body: 'this is the POST body'
}
get.post(opts, function (err, res) {
  if (err) throw err
  res.pipe(process.stdout) // `res` is a stream
})
```

#### A more complex example:

```js
const get = require('simple-get')

get({
  url: 'http://example.com',
  method: 'POST',
  body: 'this is the POST body',

  // simple-get accepts all options that node.js `http` accepts
  // See: http://nodejs.org/api/http.html#http_http_request_options_callback
  headers: {
    'user-agent': 'my cool app'
  }
}, function (err, res) {
  if (err) throw err

  // All properties/methods from http.IncomingMessage are available,
  // even if a gunzip/inflate transform stream was returned.
  // See: http://nodejs.org/api/http.html#http_http_incomingmessage
  res.setTimeout(10000)
  console.log(res.headers)

  res.on('data', function (chunk) {
    // `chunk` is the decoded response, after it's been gunzipped or inflated
    // (if applicable)
    console.log('got a chunk of the response: ' + chunk)
  })
})
```

### JSON

You can serialize/deserialize request and response with JSON:

```js
const get = require('simple-get')

const opts = {
  method: 'POST',
  url: 'http://example.com',
  body: {
    key: 'value'
  },
  json: true
}
get.concat(opts, function (err, res, data) {
  if (err) throw err
  console.log(data.key) // `data` is an object
})
```

### Timeout

You can set a timeout (in milliseconds) on the request with the `timeout` option. If the request takes longer than `timeout` to complete, then the entire request will fail with an `Error`.

```js
const get = require('simple-get')

const opts = {
  url: 'http://example.com',
  timeout: 2000 // 2 second timeout
}

get(opts, function (err, res) {})
```

### One Quick Tip

It's a good idea to set the `'user-agent'` header so the provider can more easily see how their resource is used.
```js const get = require('simple-get') const pkg = require('./package.json') get('http://example.com', { headers: { 'user-agent': `my-module/${pkg.version} (https://github.com/username/my-module)` } }) ``` ### Proxies You can use the [`tunnel`](https://github.com/koichik/node-tunnel) module with the `agent` option to work with proxies: ```js const get = require('simple-get') const tunnel = require('tunnel') const opts = { url: 'http://example.com', agent: tunnel.httpOverHttp({ proxy: { host: 'localhost' } }) } get(opts, function (err, res) {}) ``` ### Cookies You can use the [`cookie`](https://github.com/jshttp/cookie) module to include cookies in a request: ```js const get = require('simple-get') const cookie = require('cookie') const opts = { url: 'http://example.com', headers: { cookie: cookie.serialize('foo', 'bar') } } get(opts, function (err, res) {}) ``` ### Form data You can use the [`form-data`](https://github.com/form-data/form-data) module to create POST request with form data: ```js const fs = require('fs') const get = require('simple-get') const FormData = require('form-data') const form = new FormData() form.append('my_file', fs.createReadStream('/foo/bar.jpg')) const opts = { url: 'http://example.com', body: form } get.post(opts, function (err, res) {}) ``` #### Or, include `application/x-www-form-urlencoded` form data manually: ```js const get = require('simple-get') const opts = { url: 'http://example.com', form: { key: 'value' } } get.post(opts, function (err, res) {}) ``` ### Specifically disallowing redirects ```js const get = require('simple-get') const opts = { url: 'http://example.com/will-redirect-elsewhere', followRedirects: false } // res.statusCode will be 301, no error thrown get(opts, function (err, res) {}) ``` ### OAuth You can use the [`oauth-1.0a`](https://github.com/ddo/oauth-1.0a) module to create a signed OAuth request: ```js const get = require('simple-get') const crypto = require('crypto') const OAuth = require('oauth-1.0a') const oauth = OAuth({ consumer: { key: process.env.CONSUMER_KEY, secret: process.env.CONSUMER_SECRET }, signature_method: 'HMAC-SHA1', hash_function: (baseString, key) => crypto.createHmac('sha1', key).update(baseString).digest('base64') }) const token = { key: process.env.ACCESS_TOKEN, secret: process.env.ACCESS_TOKEN_SECRET } const url = 'https://api.twitter.com/1.1/statuses/home_timeline.json' const opts = { url: url, headers: oauth.toHeader(oauth.authorize({url, method: 'GET'}, token)), json: true } get(opts, function (err, res) {}) ``` ### Throttle requests You can use [limiter](https://github.com/jhurliman/node-rate-limiter) to throttle requests. This is useful when calling an API that is rate limited. ```js const simpleGet = require('simple-get') const RateLimiter = require('limiter').RateLimiter const limiter = new RateLimiter(1, 'second') const get = (opts, cb) => limiter.removeTokens(1, () => simpleGet(opts, cb)) get.concat = (opts, cb) => limiter.removeTokens(1, () => simpleGet.concat(opts, cb)) var opts = { url: 'http://example.com' } get.concat(opts, processResult) get.concat(opts, processResult) function processResult (err, res, data) { if (err) throw err console.log(data.toString()) } ``` ## license MIT. Copyright (c) [Feross Aboukhadijeh](http://feross.org). 
[![Build Status](https://travis-ci.org/isaacs/rimraf.svg?branch=master)](https://travis-ci.org/isaacs/rimraf) [![Dependency Status](https://david-dm.org/isaacs/rimraf.svg)](https://david-dm.org/isaacs/rimraf) [![devDependency Status](https://david-dm.org/isaacs/rimraf/dev-status.svg)](https://david-dm.org/isaacs/rimraf#info=devDependencies) The [UNIX command](http://en.wikipedia.org/wiki/Rm_(Unix)) `rm -rf` for node. Install with `npm install rimraf`, or just drop rimraf.js somewhere. ## API `rimraf(f, [opts], callback)` The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with `opts.disableGlob` (defaults to `false`). This might be handy, for instance, if you have filenames that contain globbing wildcard characters. The callback will be called with an error if there is one. Certain errors are handled for you: * Windows: `EBUSY` and `ENOTEMPTY` - rimraf will back off a maximum of `opts.maxBusyTries` times before giving up, adding 100ms of wait between each attempt. The default `maxBusyTries` is 3. * `ENOENT` - If the file doesn't exist, rimraf will return successfully, since your desired outcome is already the case. * `EMFILE` - Since `readdir` requires opening a file descriptor, it's possible to hit `EMFILE` if too many file descriptors are in use. In the sync case, there's nothing to be done for this. But in the async case, rimraf will gradually back off with timeouts up to `opts.emfileWait` ms, which defaults to 1000. ## options * unlink, chmod, stat, lstat, rmdir, readdir, unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync In order to use a custom file system library, you can override specific fs functions on the options object. If any of these functions are present on the options object, then the supplied function will be used instead of the default fs method. Sync methods are only relevant for `rimraf.sync()`, of course. For example: ```javascript var myCustomFS = require('some-custom-fs') rimraf('some-thing', myCustomFS, callback) ``` * maxBusyTries If an `EBUSY`, `ENOTEMPTY`, or `EPERM` error code is encountered on Windows systems, then rimraf will retry with a linear backoff wait of 100ms longer on each try. The default maxBusyTries is 3. Only relevant for async usage. * emfileWait If an `EMFILE` error is encountered, then rimraf will retry repeatedly with a linear backoff of 1ms longer on each try, until the timeout counter hits this max. The default limit is 1000. If you repeatedly encounter `EMFILE` errors, then consider using [graceful-fs](http://npm.im/graceful-fs) in your program. Only relevant for async usage. * glob Set to `false` to disable [glob](http://npm.im/glob) pattern matching. Set to an object to pass options to the glob module. The default glob options are `{ nosort: true, silent: true }`. Glob version 6 is used in this module. Relevant for both sync and async usage. * disableGlob Set to any non-falsey value to disable globbing entirely. (Equivalent to setting `glob: false`.) ## rimraf.sync It can remove stuff synchronously, too. But that's not so good. Use the async API. It's better. ## CLI If installed with `npm install rimraf -g` it can be used as a global command `rimraf <path> [<path> ...]` which is useful for cross platform support. ## mkdirp If you need to create a directory recursively, check out [mkdirp](https://github.com/substack/node-mkdirp). 
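For completeness, here is a hedged sketch of typical usage based on the API described above; the paths are placeholders.

```js
var rimraf = require('rimraf')

// Async: the first argument is treated as a glob pattern by default.
rimraf('./build/**', function (err) {
  if (err) throw err
  console.log('build output removed')
})

// Sync variant, with globbing disabled (the async API is preferred):
rimraf.sync('./tmp', { disableGlob: true })
```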
# create-hmac [![NPM Package](https://img.shields.io/npm/v/create-hmac.svg?style=flat-square)](https://www.npmjs.org/package/create-hmac) [![Build Status](https://img.shields.io/travis/crypto-browserify/createHmac.svg?branch=master&style=flat-square)](https://travis-ci.org/crypto-browserify/createHmac) [![Dependency status](https://img.shields.io/david/crypto-browserify/createHmac.svg?style=flat-square)](https://david-dm.org/crypto-browserify/createHmac#info=dependencies) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Node style HMACs for use in the browser, with native HMAC functions in node. API is the same as HMACs in node: ```js var createHmac = require('create-hmac') var hmac = createHmac('sha224', Buffer.from('secret key')) hmac.update('synchronous write') //optional encoding parameter hmac.digest() // synchronously get result with optional encoding parameter hmac.write('write to it as a stream') hmac.end() //remember it's a stream hmac.read() //only if you ended it as a stream though ``` # minipass A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough) [It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers. Supports `pipe()`ing (including multi-`pipe()` and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea. There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory. There is also no `unpipe()` method. Once you start piping, there is no stopping it! If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called. `objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set. This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. Once you're done transforming the data however you want, call `super.write()` with the transform output. 
For some examples of streams that extend Minipass in various ways, check out: - [minizlib](http://npm.im/minizlib) - [fs-minipass](http://npm.im/fs-minipass) - [tar](http://npm.im/tar) - [minipass-collect](http://npm.im/minipass-collect) - [minipass-flush](http://npm.im/minipass-flush) - [minipass-pipeline](http://npm.im/minipass-pipeline) - [tap](http://npm.im/tap) - [tap-parser](http://npm.im/tap-parser) - [treport](http://npm.im/treport) - [minipass-fetch](http://npm.im/minipass-fetch) - [pacote](http://npm.im/pacote) - [make-fetch-happen](http://npm.im/make-fetch-happen) - [cacache](http://npm.im/cacache) - [ssri](http://npm.im/ssri) - [npm-registry-fetch](http://npm.im/npm-registry-fetch) - [minipass-json-stream](http://npm.im/minipass-json-stream) - [minipass-sized](http://npm.im/minipass-sized) ## Differences from Node.js Streams There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams. Please read these caveats if you are familiar with node-core streams and intend to use Minipass streams in your programs. ### Timing Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer. Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them. This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately. However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time. This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms. ### No High/Low Water Marks Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value. Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns). If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it. ### Hazards of Buffering (or: Why Minipass Is So Fast) Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller. This minimizes buffering. 
Consider this case: ```js const {PassThrough} = require('stream') const p1 = new PassThrough({ highWaterMark: 1024 }) const p2 = new PassThrough({ highWaterMark: 1024 }) const p3 = new PassThrough({ highWaterMark: 1024 }) const p4 = new PassThrough({ highWaterMark: 1024 }) p1.pipe(p2).pipe(p3).pipe(p4) p4.on('data', () => console.log('made it through')) // this returns false and buffers, then writes to p2 on next tick (1) // p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2) // p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3) // p4 returns false and buffers, pausing p3, then emits 'data' and 'drain' // on next tick (4) // p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and // 'drain' on next tick (5) // p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6) // p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next // tick (7) p1.write(Buffer.alloc(2048)) // returns false ``` Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through! Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket. Consider the Minipass case: ```js const m1 = new Minipass() const m2 = new Minipass() const m3 = new Minipass() const m4 = new Minipass() m1.pipe(m2).pipe(m3).pipe(m4) m4.on('data', () => console.log('made it through')) // m1 is flowing, so it writes the data to m2 immediately // m2 is flowing, so it writes the data to m3 immediately // m3 is flowing, so it writes the data to m4 immediately // m4 is flowing, so it fires the 'data' event immediately, returns true // m4's write returned true, so m3 is still flowing, returns true // m3's write returned true, so m2 is still flowing, returns true // m2's write returned true, so m1 is still flowing, returns true // No event deferrals or buffering along the way! m1.write(Buffer.alloc(2048)) // returns true ``` It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing. As always, the faster implementation is the one that does less stuff and waits less time to do it. ### Immediately emit `end` for empty streams (when not paused) If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately. If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event. ### Emit `end` When Asked One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted. Ie, if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. 
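The synchronous timing described under "Timing" above can be observed directly. This is a small hedged sketch, not from the upstream README: once a `'data'` listener puts the stream into flowing mode, a `write()` hands the chunk to that listener synchronously rather than on the next tick.

```js
const Minipass = require('minipass')

const mp = new Minipass({ encoding: 'utf8' })

let seen = null
mp.on('data', chunk => { seen = chunk }) // attaching a listener resumes the stream

mp.write('hello')
console.log(seen) // 'hello' -- delivered synchronously, no tick needed
```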
(You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.) To prevent calling handlers multiple times who would not expect multiple ends to occur, all listeners are removed from the `'end'` event whenever it is emitted. ### Impact of "immediate flow" on Tee-streams A "tee stream" is a stream piping to multiple destinations: ```js const tee = new Minipass() t.pipe(dest1) t.pipe(dest2) t.write('foo') // goes to both destinations ``` Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer. ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone src.pipe(dest2) // gets nothing! ``` The solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead. ```js // Safe example: tee to both places const src = new Minipass() src.write('foo') const tee = new Minipass() tee.pipe(dest1) tee.pipe(dest2) src.pipe(tee) // tee gets 'foo', pipes to both locations ``` The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second: ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.on('data', handler1) // receives 'foo' right away src.on('data', handler2) // nothing to see here! ``` Using a dedicated tee-stream can be used in this case as well: ```js // Safe example: tee to both data handlers const src = new Minipass() src.write('foo') const tee = new Minipass() tee.on('data', handler1) tee.on('data', handler2) src.pipe(tee) ``` ## USAGE It's a stream! Use it like a stream and it'll most likely do what you want. ```js const Minipass = require('minipass') const mp = new Minipass(options) // optional: { encoding, objectMode } mp.write('foo') mp.pipe(someOtherStream) mp.end('bar') ``` ### OPTIONS * `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`. * `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value. ### API Implements the user-facing portions of Node.js's `Readable` and `Writable` streams. ### Methods * `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or true if it's still in "flowing" mode. * `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed. * `setEncoding(encoding)` - Set the encoding for data coming of the stream. This can only be done once. * `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed. * `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted. * `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations. 
* `on(ev, fn)`, `emit(ev, data)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".)
* `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`.
* `collect()` - Return a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data.
* `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data.
* `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying.
* `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered.

### Properties

* `bufferLength` Read-only. Total number of bytes buffered, or in the case of objectMode, the total number of objects.
* `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.)
* `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted.
* `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediately re-emit it if it has already been emitted.
* `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` is called.
* `readable` Whether the stream is readable. Default `true`.
* `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.)
* `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.)
* `destroyed` A getter that indicates whether the stream was destroyed.
* `paused` True if the stream has been explicitly paused, otherwise false.
* `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`.

### Events

* `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream.
* `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted.
* `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`.
* `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`.
* `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise. * `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream. * `readable` Emitted when data is buffered and ready to be read by a consumer. * `resume` Emitted when stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.) ### Static Methods * `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and false otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.) ## EXAMPLES Here are some examples of things you can do with Minipass streams. ### simple "are you done yet" promise ```js mp.promise().then(() => { // stream is finished }, er => { // stream emitted an error }) ``` ### collecting ```js mp.collect().then(all => { // all is an array of all the data emitted // encoding is supported in this case, so // so the result will be a collection of strings if // an encoding is specified, or buffers/objects if not. // // In an async function, you may do // const data = await stream.collect() }) ``` ### collecting into a single blob This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way: ```js mp.concat().then(onebigchunk => { // onebigchunk is a string if the stream // had an encoding set, or a buffer otherwise. }) ``` ### iteration You can iterate over streams synchronously or asynchronously in platforms that support it. Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration. To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option. ```js const mp = new Minipass({ objectMode: true }) mp.write('a') mp.write('b') for (let letter of mp) { console.log(letter) // a, b } mp.write('c') mp.write('d') for (let letter of mp) { console.log(letter) // c, d } mp.write('e') mp.end() for (let letter of mp) { console.log(letter) // e } for (let letter of mp) { console.log(letter) // nothing } ``` Asynchronous iteration will continue until the end event is reached, consuming all of the data. 
```js
const mp = new Minipass({ encoding: 'utf8' })

// some source of some data
let i = 5
const inter = setInterval(() => {
  if (i-- > 0)
    mp.write(Buffer.from('foo\n', 'utf8'))
  else {
    mp.end()
    clearInterval(inter)
  }
}, 100)

// consume the data with asynchronous iteration
async function consume () {
  for await (let chunk of mp) {
    console.log(chunk)
  }
  return 'ok'
}

consume().then(res => console.log(res))
// logs `foo\n` 5 times, and then `ok`
```

### subclass that `console.log()`s everything written into it

```js
class Logger extends Minipass {
  write (chunk, encoding, callback) {
    console.log('WRITE', chunk, encoding)
    return super.write(chunk, encoding, callback)
  }
  end (chunk, encoding, callback) {
    console.log('END', chunk, encoding)
    return super.end(chunk, encoding, callback)
  }
}

someSource.pipe(new Logger()).pipe(someDest)
```

### same thing, but using an inline anonymous class

```js
// js classes are fun
someSource
  .pipe(new (class extends Minipass {
    emit (ev, ...data) {
      // let's also log events, because debugging some weird thing
      console.log('EMIT', ev)
      return super.emit(ev, ...data)
    }
    write (chunk, encoding, callback) {
      console.log('WRITE', chunk, encoding)
      return super.write(chunk, encoding, callback)
    }
    end (chunk, encoding, callback) {
      console.log('END', chunk, encoding)
      return super.end(chunk, encoding, callback)
    }
  }))
  .pipe(someDest)
```

### subclass that defers 'end' for some reason

```js
class SlowEnd extends Minipass {
  emit (ev, ...args) {
    if (ev === 'end') {
      console.log('going to end, hold on a sec')
      setTimeout(() => {
        console.log('ok, ready to end now')
        super.emit('end', ...args)
      }, 100)
    } else {
      return super.emit(ev, ...args)
    }
  }
}
```

### transform that creates newline-delimited JSON

```js
class NDJSONEncode extends Minipass {
  write (obj, cb) {
    try {
      // JSON.stringify can throw, emit an error on that
      return super.write(JSON.stringify(obj) + '\n', 'utf8', cb)
    } catch (er) {
      this.emit('error', er)
    }
  }
  end (obj, cb) {
    if (typeof obj === 'function') {
      cb = obj
      obj = undefined
    }
    if (obj !== undefined) {
      this.write(obj)
    }
    return super.end(cb)
  }
}
```

### transform that parses newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor (options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write (chunk, encoding, cb) {
    if (typeof chunk === 'string' &&
        typeof encoding === 'string' &&
        encoding !== 'utf8') {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      try {
        // JSON.parse can throw, emit an error on that
        super.write(JSON.parse(jsonData[i]))
      } catch (er) {
        this.emit('error', er)
        continue
      }
    }
    if (cb) cb()
  }
}
```

binaryen.js
===========

**binaryen.js** is a port of [Binaryen](https://github.com/WebAssembly/binaryen) to the Web, allowing you to generate [WebAssembly](https://webassembly.org) using a JavaScript API.
<a href="https://github.com/AssemblyScript/binaryen.js/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/binaryen.js/Build/master?label=build&logo=github" alt="Build status" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen.svg?label=latest&color=007acc&logo=npm" alt="npm version" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen/nightly.svg?label=nightly&color=007acc&logo=npm" alt="npm nightly version" /></a> Usage ----- ``` $> npm install binaryen ``` ```js var binaryen = require("binaryen"); // Create a module with a single function var myModule = new binaryen.Module(); myModule.addFunction("add", binaryen.createType([ binaryen.i32, binaryen.i32 ]), binaryen.i32, [ binaryen.i32 ], myModule.block(null, [ myModule.local.set(2, myModule.i32.add( myModule.local.get(0, binaryen.i32), myModule.local.get(1, binaryen.i32) ) ), myModule.return( myModule.local.get(2, binaryen.i32) ) ]) ); myModule.addFunctionExport("add", "add"); // Optimize the module using default passes and levels myModule.optimize(); // Validate the module if (!myModule.validate()) throw new Error("validation error"); // Generate text format and binary var textData = myModule.emitText(); var wasmData = myModule.emitBinary(); // Example usage with the WebAssembly API var compiled = new WebAssembly.Module(wasmData); var instance = new WebAssembly.Instance(compiled, {}); console.log(instance.exports.add(41, 1)); ``` The buildbot also publishes nightly versions once a day if there have been changes. The latest nightly can be installed through ``` $> npm install binaryen@nightly ``` or you can use one of the [previous versions](https://github.com/AssemblyScript/binaryen.js/tags) instead if necessary. ### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/AssemblyScript/binaryen.js@VERSION/index.js` * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/binaryen@VERSION/index.js` * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/binaryen@VERSION/index.js` Replace `VERSION` with a [specific version](https://github.com/AssemblyScript/binaryen.js/releases) or omit it (not recommended in production) to use master/latest. API --- **Please note** that the Binaryen API is evolving fast and that definitions and documentation provided by the package tend to get out of sync despite our best efforts. It's a bot after all. If you rely on binaryen.js and spot an issue, please consider sending a PR our way by updating [index.d.ts](./index.d.ts) and [README.md](./README.md) to reflect the [current API](https://github.com/WebAssembly/binaryen/blob/master/src/js/binaryen.js-post.js). 
<!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ### Contents - [Types](#types) - [Module construction](#module-construction) - [Module manipulation](#module-manipulation) - [Module validation](#module-validation) - [Module optimization](#module-optimization) - [Module creation](#module-creation) - [Expression construction](#expression-construction) - [Control flow](#control-flow) - [Variable accesses](#variable-accesses) - [Integer operations](#integer-operations) - [Floating point operations](#floating-point-operations) - [Datatype conversions](#datatype-conversions) - [Function calls](#function-calls) - [Linear memory accesses](#linear-memory-accesses) - [Host operations](#host-operations) - [Vector operations 🦄](#vector-operations-) - [Atomic memory accesses 🦄](#atomic-memory-accesses-) - [Atomic read-modify-write operations 🦄](#atomic-read-modify-write-operations-) - [Atomic wait and notify operations 🦄](#atomic-wait-and-notify-operations-) - [Sign extension operations 🦄](#sign-extension-operations-) - [Multi-value operations 🦄](#multi-value-operations-) - [Exception handling operations 🦄](#exception-handling-operations-) - [Reference types operations 🦄](#reference-types-operations-) - [Expression manipulation](#expression-manipulation) - [Relooper](#relooper) - [Source maps](#source-maps) - [Debugging](#debugging) <!-- END doctoc generated TOC please keep comment here to allow auto update --> [Future features](http://webassembly.org/docs/future-features/) 🦄 might not be supported by all runtimes. ### Types * **none**: `Type`<br /> The none type, e.g., `void`. * **i32**: `Type`<br /> 32-bit integer type. * **i64**: `Type`<br /> 64-bit integer type. * **f32**: `Type`<br /> 32-bit float type. * **f64**: `Type`<br /> 64-bit float (double) type. * **v128**: `Type`<br /> 128-bit vector type. 🦄 * **funcref**: `Type`<br /> A function reference. 🦄 * **anyref**: `Type`<br /> Any host reference. 🦄 * **nullref**: `Type`<br /> A null reference. 🦄 * **exnref**: `Type`<br /> An exception reference. 🦄 * **unreachable**: `Type`<br /> Special type indicating unreachable code when obtaining information about an expression. * **auto**: `Type`<br /> Special type used in **Module#block** exclusively. Lets the API figure out a block's result type automatically. * **createType**(types: `Type[]`): `Type`<br /> Creates a multi-value type from an array of types. * **expandType**(type: `Type`): `Type[]`<br /> Expands a multi-value type to an array of types. ### Module construction * new **Module**()<br /> Constructs a new module. * **parseText**(text: `string`): `Module`<br /> Creates a module from Binaryen's s-expression text format (not official stack-style text format). * **readBinary**(data: `Uint8Array`): `Module`<br /> Creates a module from binary data. ### Module manipulation * Module#**addFunction**(name: `string`, params: `Type`, results: `Type`, vars: `Type[]`, body: `ExpressionRef`): `FunctionRef`<br /> Adds a function. `vars` indicate additional locals, in the given order. * Module#**getFunction**(name: `string`): `FunctionRef`<br /> Gets a function, by name, * Module#**removeFunction**(name: `string`): `void`<br /> Removes a function, by name. * Module#**getNumFunctions**(): `number`<br /> Gets the number of functions within the module. * Module#**getFunctionByIndex**(index: `number`): `FunctionRef`<br /> Gets the function at the specified index. 
* Module#**addFunctionImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, params: `Type`, results: `Type`): `void`<br />
  Adds a function import.

* Module#**addTableImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br />
  Adds a table import. There's just one table for now, using name `"0"`.

* Module#**addMemoryImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br />
  Adds a memory import. There's just one memory for now, using name `"0"`.

* Module#**addGlobalImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, globalType: `Type`): `void`<br />
  Adds a global variable import. Imported globals must be immutable.

* Module#**addFunctionExport**(internalName: `string`, externalName: `string`): `ExportRef`<br />
  Adds a function export.

* Module#**addTableExport**(internalName: `string`, externalName: `string`): `ExportRef`<br />
  Adds a table export. There's just one table for now, using name `"0"`.

* Module#**addMemoryExport**(internalName: `string`, externalName: `string`): `ExportRef`<br />
  Adds a memory export. There's just one memory for now, using name `"0"`.

* Module#**addGlobalExport**(internalName: `string`, externalName: `string`): `ExportRef`<br />
  Adds a global variable export. Exported globals must be immutable.

* Module#**getNumExports**(): `number`<br />
  Gets the number of exports within the module.

* Module#**getExportByIndex**(index: `number`): `ExportRef`<br />
  Gets the export at the specified index.

* Module#**removeExport**(externalName: `string`): `void`<br />
  Removes an export, by external name.

* Module#**addGlobal**(name: `string`, type: `Type`, mutable: `number`, value: `ExpressionRef`): `GlobalRef`<br />
  Adds a global instance variable.

* Module#**getGlobal**(name: `string`): `GlobalRef`<br />
  Gets a global, by name.

* Module#**removeGlobal**(name: `string`): `void`<br />
  Removes a global, by name.

* Module#**setFunctionTable**(initial: `number`, maximum: `number`, funcs: `string[]`, offset?: `ExpressionRef`): `void`<br />
  Sets the contents of the function table. There's just one table for now, using name `"0"`.

* Module#**getFunctionTable**(): `{ imported: boolean, segments: TableElement[] }`<br />
  Gets the contents of the function table.

  * TableElement#**offset**: `ExpressionRef`
  * TableElement#**names**: `string[]`

* Module#**setMemory**(initial: `number`, maximum: `number`, exportName: `string | null`, segments: `MemorySegment[]`, flags?: `number[]`, shared?: `boolean`): `void`<br />
  Sets the memory. There's just one memory for now, using name `"0"`. Providing `exportName` also creates a memory export.

  * MemorySegment#**offset**: `ExpressionRef`
  * MemorySegment#**data**: `Uint8Array`
  * MemorySegment#**passive**: `boolean`

* Module#**getNumMemorySegments**(): `number`<br />
  Gets the number of memory segments within the module.

* Module#**getMemorySegmentInfoByIndex**(index: `number`): `MemorySegmentInfo`<br />
  Gets information about the memory segment at the specified index.

  * MemorySegmentInfo#**offset**: `number`
  * MemorySegmentInfo#**data**: `Uint8Array`
  * MemorySegmentInfo#**passive**: `boolean`

* Module#**setStart**(start: `FunctionRef`): `void`<br />
  Sets the start function.

* Module#**getFeatures**(): `Features`<br />
  Gets the WebAssembly features enabled for this module. Note that the return value may be a bitmask indicating multiple features.
Possible feature flags are:

  * Features.**MVP**: `Features`
  * Features.**Atomics**: `Features`
  * Features.**BulkMemory**: `Features`
  * Features.**MutableGlobals**: `Features`
  * Features.**NontrappingFPToInt**: `Features`
  * Features.**SignExt**: `Features`
  * Features.**SIMD128**: `Features`
  * Features.**ExceptionHandling**: `Features`
  * Features.**TailCall**: `Features`
  * Features.**ReferenceTypes**: `Features`
  * Features.**Multivalue**: `Features`
  * Features.**All**: `Features`

* Module#**setFeatures**(features: `Features`): `void`<br />
  Sets the WebAssembly features enabled for this module.

* Module#**addCustomSection**(name: `string`, contents: `Uint8Array`): `void`<br />
  Adds a custom section to the binary.

* Module#**autoDrop**(): `void`<br />
  Enables automatic insertion of `drop` operations where needed. Lets you not worry about dropping when creating your code.

* **getFunctionInfo**(ftype: `FunctionRef`): `FunctionInfo`<br />
  Obtains information about a function.

  * FunctionInfo#**name**: `string`
  * FunctionInfo#**module**: `string | null` (if imported)
  * FunctionInfo#**base**: `string | null` (if imported)
  * FunctionInfo#**params**: `Type`
  * FunctionInfo#**results**: `Type`
  * FunctionInfo#**vars**: `Type`
  * FunctionInfo#**body**: `ExpressionRef`

* **getGlobalInfo**(global: `GlobalRef`): `GlobalInfo`<br />
  Obtains information about a global.

  * GlobalInfo#**name**: `string`
  * GlobalInfo#**module**: `string | null` (if imported)
  * GlobalInfo#**base**: `string | null` (if imported)
  * GlobalInfo#**type**: `Type`
  * GlobalInfo#**mutable**: `boolean`
  * GlobalInfo#**init**: `ExpressionRef`

* **getExportInfo**(export_: `ExportRef`): `ExportInfo`<br />
  Obtains information about an export.

  * ExportInfo#**kind**: `ExternalKind`
  * ExportInfo#**name**: `string`
  * ExportInfo#**value**: `string`

  Possible `ExternalKind` values are:

  * **ExternalFunction**: `ExternalKind`
  * **ExternalTable**: `ExternalKind`
  * **ExternalMemory**: `ExternalKind`
  * **ExternalGlobal**: `ExternalKind`
  * **ExternalEvent**: `ExternalKind`

* **getEventInfo**(event: `EventRef`): `EventInfo`<br />
  Obtains information about an event.

  * EventInfo#**name**: `string`
  * EventInfo#**module**: `string | null` (if imported)
  * EventInfo#**base**: `string | null` (if imported)
  * EventInfo#**attribute**: `number`
  * EventInfo#**params**: `Type`
  * EventInfo#**results**: `Type`

* **getSideEffects**(expr: `ExpressionRef`, features: `FeatureFlags`): `SideEffects`<br />
  Gets the side effects of the specified expression.

  * SideEffects.**None**: `SideEffects`
  * SideEffects.**Branches**: `SideEffects`
  * SideEffects.**Calls**: `SideEffects`
  * SideEffects.**ReadsLocal**: `SideEffects`
  * SideEffects.**WritesLocal**: `SideEffects`
  * SideEffects.**ReadsGlobal**: `SideEffects`
  * SideEffects.**WritesGlobal**: `SideEffects`
  * SideEffects.**ReadsMemory**: `SideEffects`
  * SideEffects.**WritesMemory**: `SideEffects`
  * SideEffects.**ImplicitTrap**: `SideEffects`
  * SideEffects.**IsAtomic**: `SideEffects`
  * SideEffects.**Throws**: `SideEffects`
  * SideEffects.**Any**: `SideEffects`

### Module validation

* Module#**validate**(): `boolean`<br />
  Validates the module. Returns `true` if valid, otherwise prints validation errors and returns `false`.

### Module optimization

* Module#**optimize**(): `void`<br />
  Optimizes the module using the default optimization passes.

* Module#**optimizeFunction**(func: `FunctionRef | string`): `void`<br />
  Optimizes a single function using the default optimization passes.
* Module#**runPasses**(passes: `string[]`): `void`<br /> Runs the specified passes on the module. * Module#**runPassesOnFunction**(func: `FunctionRef | string`, passes: `string[]`): `void`<br /> Runs the specified passes on a single function. * **getOptimizeLevel**(): `number`<br /> Gets the currently set optimize level. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **setOptimizeLevel**(level: `number`): `void`<br /> Sets the optimization level to use. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **getShrinkLevel**(): `number`<br /> Gets the currently set shrink level. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **setShrinkLevel**(level: `number`): `void`<br /> Sets the shrink level to use. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **getDebugInfo**(): `boolean`<br /> Gets whether generating debug information is currently enabled or not. * **setDebugInfo**(on: `boolean`): `void`<br /> Enables or disables debug information in emitted binaries. * **getLowMemoryUnused**(): `boolean`<br /> Gets whether the low 1K of memory can be considered unused when optimizing. * **setLowMemoryUnused**(on: `boolean`): `void`<br /> Enables or disables whether the low 1K of memory can be considered unused when optimizing. * **getPassArgument**(key: `string`): `string | null`<br /> Gets the value of the specified arbitrary pass argument. * **setPassArgument**(key: `string`, value: `string | null`): `void`<br /> Sets the value of the specified arbitrary pass argument. Removes the respective argument if `value` is `null`. * **clearPassArguments**(): `void`<br /> Clears all arbitrary pass arguments. * **getAlwaysInlineMaxSize**(): `number`<br /> Gets the function size at which we always inline. * **setAlwaysInlineMaxSize**(size: `number`): `void`<br /> Sets the function size at which we always inline. * **getFlexibleInlineMaxSize**(): `number`<br /> Gets the function size which we inline when functions are lightweight. * **setFlexibleInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when functions are lightweight. * **getOneCallerInlineMaxSize**(): `number`<br /> Gets the function size which we inline when there is only one caller. * **setOneCallerInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when there is only one caller. ### Module creation * Module#**emitBinary**(): `Uint8Array`<br /> Returns the module in binary format. * Module#**emitBinary**(sourceMapUrl: `string | null`): `BinaryWithSourceMap`<br /> Returns the module in binary format with its source map. If `sourceMapUrl` is `null`, source map generation is skipped. * BinaryWithSourceMap#**binary**: `Uint8Array` * BinaryWithSourceMap#**sourceMap**: `string | null` * Module#**emitText**(): `string`<br /> Returns the module in Binaryen's s-expression text format (not official stack-style text format). * Module#**emitAsmjs**(): `string`<br /> Returns the [asm.js](http://asmjs.org/) representation of the module. * Module#**dispose**(): `void`<br /> Releases the resources held by the module once it isn't needed anymore. ### Expression construction #### [Control flow](http://webassembly.org/docs/semantics/#control-constructs-and-instructions) * Module#**block**(label: `string | null`, children: `ExpressionRef[]`, resultType?: `Type`): `ExpressionRef`<br /> Creates a block. `resultType` defaults to `none`. 
* Module#**if**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse?: `ExpressionRef`): `ExpressionRef`<br /> Creates an if or if/else combination. * Module#**loop**(label: `string | null`, body: `ExpressionRef`): `ExpressionRef`<br /> Creates a loop. * Module#**br**(label: `string`, condition?: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a branch (br) to a label. * Module#**switch**(labels: `string[]`, defaultLabel: `string`, condition: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a switch (br_table). * Module#**nop**(): `ExpressionRef`<br /> Creates a no-operation (nop) instruction. * Module#**return**(value?: `ExpressionRef`): `ExpressionRef` Creates a return. * Module#**unreachable**(): `ExpressionRef`<br /> Creates an [unreachable](http://webassembly.org/docs/semantics/#unreachable) instruction that will always trap. * Module#**drop**(value: `ExpressionRef`): `ExpressionRef`<br /> Creates a [drop](http://webassembly.org/docs/semantics/#type-parametric-operators) of a value. * Module#**select**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse: `ExpressionRef`, type?: `Type`): `ExpressionRef`<br /> Creates a [select](http://webassembly.org/docs/semantics/#type-parametric-operators) of one of two values. #### [Variable accesses](http://webassembly.org/docs/semantics/#local-variables) * Module#**local.get**(index: `number`, type: `Type`): `ExpressionRef`<br /> Creates a local.get for the local at the specified index. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**local.set**(index: `number`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a local.set for the local at the specified index. * Module#**local.tee**(index: `number`, value: `ExpressionRef`, type: `Type`): `ExpressionRef`<br /> Creates a local.tee for the local at the specified index. A tee differs from a set in that the value remains on the stack. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**global.get**(name: `string`, type: `Type`): `ExpressionRef`<br /> Creates a global.get for the global with the specified name. Note that we must specify the type here as we may not have created the global being accessed yet. * Module#**global.set**(name: `string`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a global.set for the global with the specified name. 
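To show how the control-flow and variable-access builders above compose, here is a small hedged sketch (not from the upstream documentation) that builds a countdown loop; it also uses a few of the integer operations listed in the next section.

```js
var binaryen = require("binaryen");

var m = new binaryen.Module();

// A function with no params/results and one i32 local (index 0) that
// counts down from 10 to 0 using loop/if/br and local.get/local.set.
m.addFunction("countdown", binaryen.none, binaryen.none, [ binaryen.i32 ],
  m.block(null, [
    m.local.set(0, m.i32.const(10)),
    m.loop("continue",
      m.if(
        m.i32.ne(m.local.get(0, binaryen.i32), m.i32.const(0)),
        m.block(null, [
          m.local.set(0, m.i32.sub(m.local.get(0, binaryen.i32), m.i32.const(1))),
          m.br("continue") // branch back to the loop head
        ])
      )
    )
  ])
);

if (!m.validate()) throw new Error("validation error");
console.log(m.emitText());
```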
#### [Integer operations](http://webassembly.org/docs/semantics/#32-bit-integer-operators) * Module#i32.**const**(value: `number`): `ExpressionRef` * Module#i32.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i64.**const**(low: `number`, high: `number`): `ExpressionRef` * Module#i64.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` 
* Module#i64.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Floating point operations](http://webassembly.org/docs/semantics/#floating-point-operators) * Module#f32.**const**(value: `number`): `ExpressionRef` * Module#f32.**const_bits**(value: `number`): `ExpressionRef` * Module#f32.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#f64.**const**(value: `number`): `ExpressionRef` * Module#f64.**const_bits**(value: `number`): `ExpressionRef` * Module#f64.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**ceil**(value: `ExpressionRef`): `ExpressionRef` * 
Module#f64.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Datatype conversions](http://webassembly.org/docs/semantics/#datatype-conversions-truncations-reinterpretations-promotions-and-demotions) * Module#i32.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**wrap**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**demote**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**promote**(value: `ExpressionRef`): `ExpressionRef` #### [Function calls](http://webassembly.org/docs/semantics/#calls) * Module#**call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef` Creates a call to a function. 
Note that we must specify the return type here as we may not have created the function being called yet. * Module#**return_call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef`<br /> Like **call**, but creates a tail-call. 🦄 * Module#**call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Similar to **call**, but calls indirectly, i.e., via a function pointer, so an expression replaces the name as the called value. * Module#**return_call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Like **call_indirect**, but creates a tail-call. 🦄 #### [Linear memory accesses](http://webassembly.org/docs/semantics/#linear-memory-accesses) * Module#i32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> > * Module#i64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store32**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Host operations](http://webassembly.org/docs/semantics/#resizing) * Module#**memory.size**(): `ExpressionRef` * Module#**memory.grow**(value: `number`): `ExpressionRef` #### [Vector 
operations](https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md) 🦄 * Module#v128.**const**(bytes: `Uint8Array`): `ExpressionRef` * Module#v128.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#v128.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#v128.**not**(value: `ExpressionRef`): `ExpressionRef` * Module#v128.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**andnot**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**bitselect**(left: `ExpressionRef`, right: `ExpressionRef`, cond: `ExpressionRef`): `ExpressionRef` > * Module#i8x16.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_u**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i16x8.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_u**(value: 
`ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**dot_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): 
`ExpressionRef` * Module#i64x2.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#f32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**lt**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#v8x16.**shuffle**(left: `ExpressionRef`, right: `ExpressionRef`, mask: `Uint8Array`): `ExpressionRef` * Module#v8x16.**swizzle**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v8x16.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v16x8.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v32x4.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v64x2.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` #### [Atomic memory accesses](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#atomic-memory-accesses) 🦄 * Module#i32.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load32_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store32**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` 
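For the plain (non-atomic) linear memory constructors listed further above, the following hedged sketch stores an i32 at a fixed address and loads it back; the `binaryen` package name, the one-page memory setup and the function name are assumptions made for this example:

```js
var binaryen = require("binaryen");

var mod = new binaryen.Module();
mod.setMemory(1, 1); // a single 64 KiB page is enough here

// block (result i32) { i32.store offset=0 align=4 (addr 8, value 123); i32.load offset=0 align=4 (addr 8) }
var body = mod.block(null, [
  mod.i32.store(0, 4, mod.i32.const(8), mod.i32.const(123)),
  mod.i32.load(0, 4, mod.i32.const(8))
], binaryen.i32);

mod.addFunction("roundtrip", binaryen.none, binaryen.i32, [], body);
mod.addFunctionExport("roundtrip", "roundtrip");
console.log(mod.emitText());
mod.dispose();
```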
#### [Atomic read-modify-write operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#read-modify-write) 🦄 * Module#i32.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.add**(offset: 
`number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` #### [Atomic wait and notify operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#wait-and-notify-operators) 🦄 * Module#i32.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#**atomic.notify**(ptr: `ExpressionRef`, notifyCount: `ExpressionRef`): `ExpressionRef` * Module#**atomic.fence**(): `ExpressionRef` #### [Sign extension operations](https://github.com/WebAssembly/sign-extension-ops/blob/master/proposals/sign-extension-ops/Overview.md) 🦄 * Module#i32.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend32_s**(value: `ExpressionRef`): `ExpressionRef` #### [Multi-value 
operations](https://github.com/WebAssembly/multi-value/blob/master/proposals/multi-value/Overview.md) 🦄 Note that these are pseudo instructions enabling Binaryen to reason about multiple values on the stack. * Module#**push**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**pop**(): `ExpressionRef` * Module#i64.**pop**(): `ExpressionRef` * Module#f32.**pop**(): `ExpressionRef` * Module#f64.**pop**(): `ExpressionRef` * Module#v128.**pop**(): `ExpressionRef` * Module#funcref.**pop**(): `ExpressionRef` * Module#anyref.**pop**(): `ExpressionRef` * Module#nullref.**pop**(): `ExpressionRef` * Module#exnref.**pop**(): `ExpressionRef` * Module#tuple.**make**(elements: `ExpressionRef[]`): `ExpressionRef` * Module#tuple.**extract**(tuple: `ExpressionRef`, index: `number`): `ExpressionRef` #### [Exception handling operations](https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md) 🦄 * Module#**try**(body: `ExpressionRef`, catchBody: `ExpressionRef`): `ExpressionRef` * Module#**throw**(event: `string`, operands: `ExpressionRef[]`): `ExpressionRef` * Module#**rethrow**(exnref: `ExpressionRef`): `ExpressionRef` * Module#**br_on_exn**(label: `string`, event: `string`, exnref: `ExpressionRef`): `ExpressionRef` > * Module#**addEvent**(name: `string`, attribute: `number`, params: `Type`, results: `Type`): `Event` * Module#**getEvent**(name: `string`): `Event` * Module#**removeEvent**(name: `string`): `void` * Module#**addEventImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, attribute: `number`, params: `Type`, results: `Type`): `void` * Module#**addEventExport**(internalName: `string`, externalName: `string`): `ExportRef` #### [Reference types operations](https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md) 🦄 * Module#ref.**null**(): `ExpressionRef` * Module#ref.**is_null**(value: `ExpressionRef`): `ExpressionRef` * Module#ref.**func**(name: `string`): `ExpressionRef` ### Expression manipulation * **getExpressionId**(expr: `ExpressionRef`): `ExpressionId`<br /> Gets the id (kind) of the specified expression. 
Possible values are: * **InvalidId**: `ExpressionId` * **BlockId**: `ExpressionId` * **IfId**: `ExpressionId` * **LoopId**: `ExpressionId` * **BreakId**: `ExpressionId` * **SwitchId**: `ExpressionId` * **CallId**: `ExpressionId` * **CallIndirectId**: `ExpressionId` * **LocalGetId**: `ExpressionId` * **LocalSetId**: `ExpressionId` * **GlobalGetId**: `ExpressionId` * **GlobalSetId**: `ExpressionId` * **LoadId**: `ExpressionId` * **StoreId**: `ExpressionId` * **ConstId**: `ExpressionId` * **UnaryId**: `ExpressionId` * **BinaryId**: `ExpressionId` * **SelectId**: `ExpressionId` * **DropId**: `ExpressionId` * **ReturnId**: `ExpressionId` * **HostId**: `ExpressionId` * **NopId**: `ExpressionId` * **UnreachableId**: `ExpressionId` * **AtomicCmpxchgId**: `ExpressionId` * **AtomicRMWId**: `ExpressionId` * **AtomicWaitId**: `ExpressionId` * **AtomicNotifyId**: `ExpressionId` * **AtomicFenceId**: `ExpressionId` * **SIMDExtractId**: `ExpressionId` * **SIMDReplaceId**: `ExpressionId` * **SIMDShuffleId**: `ExpressionId` * **SIMDTernaryId**: `ExpressionId` * **SIMDShiftId**: `ExpressionId` * **SIMDLoadId**: `ExpressionId` * **MemoryInitId**: `ExpressionId` * **DataDropId**: `ExpressionId` * **MemoryCopyId**: `ExpressionId` * **MemoryFillId**: `ExpressionId` * **RefNullId**: `ExpressionId` * **RefIsNullId**: `ExpressionId` * **RefFuncId**: `ExpressionId` * **TryId**: `ExpressionId` * **ThrowId**: `ExpressionId` * **RethrowId**: `ExpressionId` * **BrOnExnId**: `ExpressionId` * **PushId**: `ExpressionId` * **PopId**: `ExpressionId` * **getExpressionType**(expr: `ExpressionRef`): `Type`<br /> Gets the type of the specified expression. * **getExpressionInfo**(expr: `ExpressionRef`): `ExpressionInfo`<br /> Obtains information about an expression, always including: * Info#**id**: `ExpressionId` * Info#**type**: `Type` Additional properties depend on the expression's `id` and are usually equivalent to the respective parameters when creating such an expression: * BlockInfo#**name**: `string` * BlockInfo#**children**: `ExpressionRef[]` > * IfInfo#**condition**: `ExpressionRef` * IfInfo#**ifTrue**: `ExpressionRef` * IfInfo#**ifFalse**: `ExpressionRef | null` > * LoopInfo#**name**: `string` * LoopInfo#**body**: `ExpressionRef` > * BreakInfo#**name**: `string` * BreakInfo#**condition**: `ExpressionRef | null` * BreakInfo#**value**: `ExpressionRef | null` > * SwitchInfo#**names**: `string[]` * SwitchInfo#**defaultName**: `string | null` * SwitchInfo#**condition**: `ExpressionRef` * SwitchInfo#**value**: `ExpressionRef | null` > * CallInfo#**target**: `string` * CallInfo#**operands**: `ExpressionRef[]` > * CallImportInfo#**target**: `string` * CallImportInfo#**operands**: `ExpressionRef[]` > * CallIndirectInfo#**target**: `ExpressionRef` * CallIndirectInfo#**operands**: `ExpressionRef[]` > * LocalGetInfo#**index**: `number` > * LocalSetInfo#**isTee**: `boolean` * LocalSetInfo#**index**: `number` * LocalSetInfo#**value**: `ExpressionRef` > * GlobalGetInfo#**name**: `string` > * GlobalSetInfo#**name**: `string` * GlobalSetInfo#**value**: `ExpressionRef` > * LoadInfo#**isAtomic**: `boolean` * LoadInfo#**isSigned**: `boolean` * LoadInfo#**offset**: `number` * LoadInfo#**bytes**: `number` * LoadInfo#**align**: `number` * LoadInfo#**ptr**: `ExpressionRef` > * StoreInfo#**isAtomic**: `boolean` * StoreInfo#**offset**: `number` * StoreInfo#**bytes**: `number` * StoreInfo#**align**: `number` * StoreInfo#**ptr**: `ExpressionRef` * StoreInfo#**value**: `ExpressionRef` > * ConstInfo#**value**: `number | { low: number, high: number 
}` > * UnaryInfo#**op**: `number` * UnaryInfo#**value**: `ExpressionRef` > * BinaryInfo#**op**: `number` * BinaryInfo#**left**: `ExpressionRef` * BinaryInfo#**right**: `ExpressionRef` > * SelectInfo#**ifTrue**: `ExpressionRef` * SelectInfo#**ifFalse**: `ExpressionRef` * SelectInfo#**condition**: `ExpressionRef` > * DropInfo#**value**: `ExpressionRef` > * ReturnInfo#**value**: `ExpressionRef | null` > * NopInfo > * UnreachableInfo > * HostInfo#**op**: `number` * HostInfo#**nameOperand**: `string | null` * HostInfo#**operands**: `ExpressionRef[]` > * AtomicRMWInfo#**op**: `number` * AtomicRMWInfo#**bytes**: `number` * AtomicRMWInfo#**offset**: `number` * AtomicRMWInfo#**ptr**: `ExpressionRef` * AtomicRMWInfo#**value**: `ExpressionRef` > * AtomicCmpxchgInfo#**bytes**: `number` * AtomicCmpxchgInfo#**offset**: `number` * AtomicCmpxchgInfo#**ptr**: `ExpressionRef` * AtomicCmpxchgInfo#**expected**: `ExpressionRef` * AtomicCmpxchgInfo#**replacement**: `ExpressionRef` > * AtomicWaitInfo#**ptr**: `ExpressionRef` * AtomicWaitInfo#**expected**: `ExpressionRef` * AtomicWaitInfo#**timeout**: `ExpressionRef` * AtomicWaitInfo#**expectedType**: `Type` > * AtomicNotifyInfo#**ptr**: `ExpressionRef` * AtomicNotifyInfo#**notifyCount**: `ExpressionRef` > * AtomicFenceInfo > * SIMDExtractInfo#**op**: `Op` * SIMDExtractInfo#**vec**: `ExpressionRef` * SIMDExtractInfo#**index**: `ExpressionRef` > * SIMDReplaceInfo#**op**: `Op` * SIMDReplaceInfo#**vec**: `ExpressionRef` * SIMDReplaceInfo#**index**: `ExpressionRef` * SIMDReplaceInfo#**value**: `ExpressionRef` > * SIMDShuffleInfo#**left**: `ExpressionRef` * SIMDShuffleInfo#**right**: `ExpressionRef` * SIMDShuffleInfo#**mask**: `Uint8Array` > * SIMDTernaryInfo#**op**: `Op` * SIMDTernaryInfo#**a**: `ExpressionRef` * SIMDTernaryInfo#**b**: `ExpressionRef` * SIMDTernaryInfo#**c**: `ExpressionRef` > * SIMDShiftInfo#**op**: `Op` * SIMDShiftInfo#**vec**: `ExpressionRef` * SIMDShiftInfo#**shift**: `ExpressionRef` > * SIMDLoadInfo#**op**: `Op` * SIMDLoadInfo#**offset**: `number` * SIMDLoadInfo#**align**: `number` * SIMDLoadInfo#**ptr**: `ExpressionRef` > * MemoryInitInfo#**segment**: `number` * MemoryInitInfo#**dest**: `ExpressionRef` * MemoryInitInfo#**offset**: `ExpressionRef` * MemoryInitInfo#**size**: `ExpressionRef` > * MemoryDropInfo#**segment**: `number` > * MemoryCopyInfo#**dest**: `ExpressionRef` * MemoryCopyInfo#**source**: `ExpressionRef` * MemoryCopyInfo#**size**: `ExpressionRef` > * MemoryFillInfo#**dest**: `ExpressionRef` * MemoryFillInfo#**value**: `ExpressionRef` * MemoryFillInfo#**size**: `ExpressionRef` > * TryInfo#**body**: `ExpressionRef` * TryInfo#**catchBody**: `ExpressionRef` > * RefNullInfo > * RefIsNullInfo#**value**: `ExpressionRef` > * RefFuncInfo#**func**: `string` > * ThrowInfo#**event**: `string` * ThrowInfo#**operands**: `ExpressionRef[]` > * RethrowInfo#**exnref**: `ExpressionRef` > * BrOnExnInfo#**name**: `string` * BrOnExnInfo#**event**: `string` * BrOnExnInfo#**exnref**: `ExpressionRef` > * PopInfo > * PushInfo#**value**: `ExpressionRef` * **emitText**(expression: `ExpressionRef`): `string`<br /> Emits the expression in Binaryen's s-expression text format (not official stack-style text format). * **copyExpression**(expression: `ExpressionRef`): `ExpressionRef`<br /> Creates a deep copy of an expression. ### Relooper * new **Relooper**()<br /> Constructs a relooper instance. This lets you provide an arbitrary CFG, and the relooper will structure it for WebAssembly. 
* Relooper#**addBlock**(code: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block to the CFG, containing the provided code as its body.
* Relooper#**addBranch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, condition: `ExpressionRef`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block to another block, with a condition (or nothing, if this is the default branch to take from the origin - each block must have one such branch), and optional code to execute on the branch (useful for phis).
* Relooper#**addBlockWithSwitch**(code: `ExpressionRef`, condition: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block, which ends with a switch/br_table, with provided code and condition (that determines where we go in the switch).
* Relooper#**addBranchForSwitch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, indexes: `number[]`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block ending in a switch, to another block, using an array of indexes that determine where to go, and optional code to execute on the branch.
* Relooper#**renderAndDispose**(entry: `RelooperBlockRef`, labelHelper: `number`, module: `Module`): `ExpressionRef`<br /> Renders and cleans up the Relooper instance. Call this after you have created all the blocks and branches, giving it the entry block (where control flow begins), a label helper variable (an index of a local we can use, necessary for irreducible control flow), and the module. This returns an expression - normal WebAssembly code - that you can use normally anywhere. A short usage sketch follows below, after the Test Strategy notes.

### Source maps

* Module#**addDebugInfoFileName**(filename: `string`): `number`<br /> Adds a debug info file name to the module and returns its index.
* Module#**getDebugInfoFileName**(index: `number`): `string | null` <br /> Gets the name of the debug info file at the specified index.
* Module#**setDebugLocation**(func: `FunctionRef`, expr: `ExpressionRef`, fileIndex: `number`, lineNumber: `number`, columnNumber: `number`): `void`<br /> Sets the debug location of the specified `ExpressionRef` within the specified `FunctionRef`.

### Debugging

* Module#**interpret**(): `void`<br /> Runs the module in the interpreter, calling the start function.

Utility
=======

Various utility functions shared across the codebase.

| Utility  | Description
|----------|-------------------------------------------
| cpu      | Obtains information about the CPU
| find     | Provides support for finding files etc.
| node     | Minimal polyfills for Node.js builtins
| options  | Support for command line options parsing
| terminal | Provides support for terminal colors
| text     | Utility for text processing
| web      | Minimal polyfills for browser builtins

It is possible to reuse the utility in your own project like so:

```ts
import { ... } from "assemblyscript/util/terminal.js";
...
```

Keep in mind, however, that these utilities can change at any time.

## Test Strategy

- tests are copied from the [polyfill implementation](https://github.com/tc39/proposal-temporal/tree/main/polyfill/test)
- tests should be removed if they relate to features that do not make sense for TS/AS, i.e. tests that validate the shape of an object do not make sense in a language with compile-time type checking
- tests that fail because a feature has not been implemented yet should be left as failures.
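As referenced in the Relooper section above, here is a hedged usage sketch. It assumes the `binaryen` npm package and follows the constructor and method signatures documented in this reference (actual binaryen.js releases may differ, e.g. newer versions pass the module to the `Relooper` constructor); the block contents and names are illustrative only.

```js
var binaryen = require("binaryen");

var mod = new binaryen.Module();
var relooper = new binaryen.Relooper(); // signature as documented above

// A trivial CFG: the entry block falls through to a block that returns 42.
var b0 = relooper.addBlock(mod.nop());
var b1 = relooper.addBlock(mod.return(mod.i32.const(42)));
relooper.addBranch(b0, b1, null, null); // default branch, no condition, no extra code

// Local 0 (declared below) serves as the label helper variable.
var body = relooper.renderAndDispose(b0, 0, mod);

mod.addFunction("answer", binaryen.none, binaryen.i32, [binaryen.i32], body);
mod.addFunctionExport("answer", "answer");
console.log(mod.emitText());
mod.dispose();
```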
# js-sha256

[![Build Status](https://travis-ci.org/emn178/js-sha256.svg?branch=master)](https://travis-ci.org/emn178/js-sha256) [![Coverage Status](https://coveralls.io/repos/emn178/js-sha256/badge.svg?branch=master)](https://coveralls.io/r/emn178/js-sha256?branch=master) [![CDNJS](https://img.shields.io/cdnjs/v/js-sha256.svg)](https://cdnjs.com/libraries/js-sha256/)
[![NPM](https://nodei.co/npm/js-sha256.png?stars&downloads)](https://nodei.co/npm/js-sha256/)

A simple SHA-256 / SHA-224 hash function for JavaScript that supports UTF-8 encoding.

## Demo

[SHA256 Online](http://emn178.github.io/online-tools/sha256.html)
[SHA224 Online](http://emn178.github.io/online-tools/sha224.html)

## Download

[Compressed](https://raw.github.com/emn178/js-sha256/master/build/sha256.min.js)
[Uncompressed](https://raw.github.com/emn178/js-sha256/master/src/sha256.js)

## Installation

You can also install js-sha256 by using Bower.

    bower install js-sha256

For node.js, you can use this command to install:

    npm install js-sha256

## Usage

You can use it like this:

```JavaScript
sha256('Message to hash');
sha224('Message to hash');

var hash = sha256.create();
hash.update('Message to hash');
hash.hex();

var hash2 = sha256.update('Message to hash');
hash2.update('Message2 to hash');
hash2.array();

// HMAC
sha256.hmac('key', 'Message to hash');
sha224.hmac('key', 'Message to hash');

var hash = sha256.hmac.create('key');
hash.update('Message to hash');
hash.hex();

var hash2 = sha256.hmac.update('key', 'Message to hash');
hash2.update('Message2 to hash');
hash2.array();
```

If you use node.js, you should require the module first:

```JavaScript
var sha256 = require('js-sha256');
```

or

```JavaScript
var sha256 = require('js-sha256').sha256;
var sha224 = require('js-sha256').sha224;
```

It supports AMD: ```JavaScript require(['your/path/sha256.js'], function(sha256) { // ... 
}); ``` or TypeScript ```TypeScript import { sha256, sha224 } from 'js-sha256'; ``` ## Example ```JavaScript sha256(''); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256('The quick brown fox jumps over the lazy dog'); // d7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592 sha256('The quick brown fox jumps over the lazy dog.'); // ef537f25c895bfa782526529a9b63d97aa631564d5d789c2b765448c8635fb6c sha224(''); // d14a028c2a3a2bc9476102bb288234c415a2b01f828ea62ac5b3e42f sha224('The quick brown fox jumps over the lazy dog'); // 730e109bd7a8a32b1cb9d9a09aa2325d2430587ddbc0c38bad911525 sha224('The quick brown fox jumps over the lazy dog.'); // 619cba8e8e05826e9b8c519c0a5c68f4fb653e8a3d8aa04bb2c8cd4c // It also supports UTF-8 encoding sha256('中文'); // 72726d8818f693066ceb69afa364218b692e62ea92b385782363780f47529c21 sha224('中文'); // dfbab71afdf54388af4d55f8bd3de8c9b15e0eb916bf9125f4a959d4 // It also supports byte `Array`, `Uint8Array`, `ArrayBuffer` input sha256([]); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256(new Uint8Array([211, 212])); // 182889f925ae4e5cc37118ded6ed87f7bdc7cab5ec5e78faef2e50048999473f // Different output sha256(''); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256.hex(''); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256.array(''); // [227, 176, 196, 66, 152, 252, 28, 20, 154, 251, 244, 200, 153, 111, 185, 36, 39, 174, 65, 228, 100, 155, 147, 76, 164, 149, 153, 27, 120, 82, 184, 85] sha256.digest(''); // [227, 176, 196, 66, 152, 252, 28, 20, 154, 251, 244, 200, 153, 111, 185, 36, 39, 174, 65, 228, 100, 155, 147, 76, 164, 149, 153, 27, 120, 82, 184, 85] sha256.arrayBuffer(''); // ArrayBuffer ``` ## License The project is released under the [MIT license](http://www.opensource.org/licenses/MIT). ## Contact The project's website is located at https://github.com/emn178/js-sha256 Author: Chen, Yi-Cyuan (emn178@gmail.com) # isexe Minimal module to check if a file is executable, and a normal file. Uses `fs.stat` and tests against the `PATHEXT` environment variable on Windows. ## USAGE ```javascript var isexe = require('isexe') isexe('some-file-name', function (err, isExe) { if (err) { console.error('probably file does not exist or something', err) } else if (isExe) { console.error('this thing can be run') } else { console.error('cannot be run') } }) // same thing but synchronous, throws errors var isExe = isexe.sync('some-file-name') // treat errors as just "not executable" isexe('maybe-missing-file', { ignoreErrors: true }, callback) var isExe = isexe.sync('maybe-missing-file', { ignoreErrors: true }) ``` ## API ### `isexe(path, [options], [callback])` Check if the path is executable. If no callback provided, and a global `Promise` object is available, then a Promise will be returned. Will raise whatever errors may be raised by `fs.stat`, unless `options.ignoreErrors` is set to true. ### `isexe.sync(path, [options])` Same as `isexe` but returns the value and throws any errors raised. ### Options * `ignoreErrors` Treat all errors as "no, this is not executable", but don't raise them. * `uid` Number to use as the user id * `gid` Number to use as the group id * `pathExt` List of path extensions to use instead of `PATHEXT` environment variable on Windows. NOTE: The default branch has been renamed! 
master is now named main If you have a local clone, you can update it by running: ```shell git branch -m master main git fetch origin git branch -u origin/main main ``` # **node-addon-api module** This module contains **header-only C++ wrapper classes** which simplify the use of the C based [Node-API](https://nodejs.org/dist/latest/docs/api/n-api.html) provided by Node.js when using C++. It provides a C++ object model and exception handling semantics with low overhead. There are three options for implementing addons: Node-API, nan, or direct use of internal V8, libuv and Node.js libraries. Unless there is a need for direct access to functionality which is not exposed by Node-API as outlined in [C/C++ addons](https://nodejs.org/dist/latest/docs/api/addons.html) in Node.js core, use Node-API. Refer to [C/C++ addons with Node-API](https://nodejs.org/dist/latest/docs/api/n-api.html) for more information on Node-API. Node-API is an ABI stable C interface provided by Node.js for building native addons. It is independent from the underlying JavaScript runtime (e.g. V8 or ChakraCore) and is maintained as part of Node.js itself. It is intended to insulate native addons from changes in the underlying JavaScript engine and allow modules compiled for one version to run on later versions of Node.js without recompilation. The `node-addon-api` module, which is not part of Node.js, preserves the benefits of the Node-API as it consists only of inline code that depends only on the stable API provided by Node-API. As such, modules built against one version of Node.js using node-addon-api should run without having to be rebuilt with newer versions of Node.js. It is important to remember that *other* Node.js interfaces such as `libuv` (included in a project via `#include <uv.h>`) are not ABI-stable across Node.js major versions. Thus, an addon must use Node-API and/or `node-addon-api` exclusively and build against a version of Node.js that includes an implementation of Node-API (meaning an active LTS version of Node.js) in order to benefit from ABI stability across Node.js major versions. Node.js provides an [ABI stability guide][] containing a detailed explanation of ABI stability in general, and the Node-API ABI stability guarantee in particular. As new APIs are added to Node-API, node-addon-api must be updated to provide wrappers for those new APIs. For this reason node-addon-api provides methods that allow callers to obtain the underlying Node-API handles so direct calls to Node-API and the use of the objects/methods provided by node-addon-api can be used together. For example, in order to be able to use an API for which the node-addon-api does not yet provide a wrapper. APIs exposed by node-addon-api are generally used to create and manipulate JavaScript values. Concepts and operations generally map to ideas specified in the **ECMA262 Language Specification**. The [Node-API Resource](https://nodejs.github.io/node-addon-examples/) offers an excellent orientation and tips for developers just getting started with Node-API and node-addon-api. 
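For orientation, this is roughly how an addon built with node-addon-api is consumed from JavaScript once it has been compiled (for example with node-gyp, covered below); the addon name, build path and exported function here are hypothetical:

```js
// Load the compiled native addon like any other module and call into the C++ code.
const addon = require('./build/Release/hello_addon.node'); // hypothetical output path
console.log(addon.hello()); // e.g. a greeting string returned by the C++ implementation
```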
- **[Setup](#setup)** - **[API Documentation](#api)** - **[Examples](#examples)** - **[Tests](#tests)** - **[More resource and info about native Addons](#resources)** - **[Badges](#badges)** - **[Code of Conduct](CODE_OF_CONDUCT.md)** - **[Contributors](#contributors)** - **[License](#license)** ## **Current version: 4.3.0** (See [CHANGELOG.md](CHANGELOG.md) for complete Changelog) [![NPM](https://nodei.co/npm/node-addon-api.png?downloads=true&downloadRank=true)](https://nodei.co/npm/node-addon-api/) [![NPM](https://nodei.co/npm-dl/node-addon-api.png?months=6&height=1)](https://nodei.co/npm/node-addon-api/) <a name="setup"></a> node-addon-api is based on [Node-API](https://nodejs.org/api/n-api.html) and supports using different Node-API versions. This allows addons built with it to run with Node.js versions which support the targeted Node-API version. **However** the node-addon-api support model is to support only the active LTS Node.js versions. This means that every year there will be a new major which drops support for the Node.js LTS version which has gone out of service. The oldest Node.js version supported by the current version of node-addon-api is Node.js 12.x. ## Setup - [Installation and usage](doc/setup.md) - [node-gyp](doc/node-gyp.md) - [cmake-js](doc/cmake-js.md) - [Conversion tool](doc/conversion-tool.md) - [Checker tool](doc/checker-tool.md) - [Generator](doc/generator.md) - [Prebuild tools](doc/prebuild_tools.md) <a name="api"></a> ### **API Documentation** The following is the documentation for node-addon-api. - [Full Class Hierarchy](doc/hierarchy.md) - [Addon Structure](doc/addon.md) - Data Types: - [Env](doc/env.md) - [CallbackInfo](doc/callbackinfo.md) - [Reference](doc/reference.md) - [Value](doc/value.md) - [Name](doc/name.md) - [Symbol](doc/symbol.md) - [String](doc/string.md) - [Number](doc/number.md) - [Date](doc/date.md) - [BigInt](doc/bigint.md) - [Boolean](doc/boolean.md) - [External](doc/external.md) - [Object](doc/object.md) - [Array](doc/array.md) - [ObjectReference](doc/object_reference.md) - [PropertyDescriptor](doc/property_descriptor.md) - [Function](doc/function.md) - [FunctionReference](doc/function_reference.md) - [ObjectWrap](doc/object_wrap.md) - [ClassPropertyDescriptor](doc/class_property_descriptor.md) - [Buffer](doc/buffer.md) - [ArrayBuffer](doc/array_buffer.md) - [TypedArray](doc/typed_array.md) - [TypedArrayOf](doc/typed_array_of.md) - [DataView](doc/dataview.md) - [Error Handling](doc/error_handling.md) - [Error](doc/error.md) - [TypeError](doc/type_error.md) - [RangeError](doc/range_error.md) - [Object Lifetime Management](doc/object_lifetime_management.md) - [HandleScope](doc/handle_scope.md) - [EscapableHandleScope](doc/escapable_handle_scope.md) - [Memory Management](doc/memory_management.md) - [Async Operations](doc/async_operations.md) - [AsyncWorker](doc/async_worker.md) - [AsyncContext](doc/async_context.md) - [AsyncWorker Variants](doc/async_worker_variants.md) - [Thread-safe Functions](doc/threadsafe.md) - [ThreadSafeFunction](doc/threadsafe_function.md) - [TypedThreadSafeFunction](doc/typed_threadsafe_function.md) - [Promises](doc/promises.md) - [Version management](doc/version_management.md) <a name="examples"></a> ### **Examples** Are you new to **node-addon-api**? 
Take a look at our **[examples](https://github.com/nodejs/node-addon-examples)** - **[Hello World](https://github.com/nodejs/node-addon-examples/tree/HEAD/1_hello_world/node-addon-api)** - **[Pass arguments to a function](https://github.com/nodejs/node-addon-examples/tree/HEAD/2_function_arguments/node-addon-api)** - **[Callbacks](https://github.com/nodejs/node-addon-examples/tree/HEAD/3_callbacks/node-addon-api)** - **[Object factory](https://github.com/nodejs/node-addon-examples/tree/HEAD/4_object_factory/node-addon-api)** - **[Function factory](https://github.com/nodejs/node-addon-examples/tree/HEAD/5_function_factory/node-addon-api)** - **[Wrapping C++ Object](https://github.com/nodejs/node-addon-examples/tree/HEAD/6_object_wrap/node-addon-api)** - **[Factory of wrapped object](https://github.com/nodejs/node-addon-examples/tree/HEAD/7_factory_wrap/node-addon-api)** - **[Passing wrapped object around](https://github.com/nodejs/node-addon-examples/tree/HEAD/8_passing_wrapped/node-addon-api)** <a name="tests"></a> ### **Tests** To run the **node-addon-api** tests do: ``` npm install npm test ``` To avoid testing the deprecated portions of the API run ``` npm install npm test --disable-deprecated ``` To run the tests targeting a specific version of Node-API run ``` npm install export NAPI_VERSION=X npm test --NAPI_VERSION=X ``` where X is the version of Node-API you want to target. ### **Debug** To run the **node-addon-api** tests with `--debug` option: ``` npm run-script dev ``` If you want faster build, you might use the following option: ``` npm run-script dev:incremental ``` Take a look and get inspired by our **[test suite](https://github.com/nodejs/node-addon-api/tree/HEAD/test)** ### **Benchmarks** You can run the available benchmarks using the following command: ``` npm run-script benchmark ``` See [benchmark/README.md](benchmark/README.md) for more details about running and adding benchmarks. <a name="resources"></a> ### **More resource and info about native Addons** - **[C++ Addons](https://nodejs.org/dist/latest/docs/api/addons.html)** - **[Node-API](https://nodejs.org/dist/latest/docs/api/n-api.html)** - **[Node-API - Next Generation Node API for Native Modules](https://youtu.be/-Oniup60Afs)** - **[How We Migrated Realm JavaScript From NAN to Node-API](https://developer.mongodb.com/article/realm-javascript-nan-to-n-api)** As node-addon-api's core mission is to expose the plain C Node-API as C++ wrappers, tools that facilitate n-api/node-addon-api providing more convenient patterns on developing a Node.js add-ons with n-api/node-addon-api can be published to NPM as standalone packages. It is also recommended to tag such packages with `node-addon-api` to provide more visibility to the community. Quick links to NPM searches: [keywords:node-addon-api](https://www.npmjs.com/search?q=keywords%3Anode-addon-api). <a name="other-bindings"></a> ### **Other bindings** - **[napi-rs](https://napi.rs)** - (`Rust`) <a name="badges"></a> ### **Badges** The use of badges is recommended to indicate the minimum version of Node-API required for the module. This helps to determine which Node.js major versions are supported. Addon maintainers can consult the [Node-API support matrix][] to determine which Node.js versions provide a given Node-API version. 
The following badges are available: ![Node-API v1 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v1%20Badge.svg) ![Node-API v2 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v2%20Badge.svg) ![Node-API v3 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v3%20Badge.svg) ![Node-API v4 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v4%20Badge.svg) ![Node-API v5 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v5%20Badge.svg) ![Node-API v6 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v6%20Badge.svg) ![Node-API v7 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v7%20Badge.svg) ![Node-API v8 Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20v8%20Badge.svg) ![Node-API Experimental Version Badge](https://github.com/nodejs/abi-stable-node/blob/doc/assets/Node-API%20Experimental%20Version%20Badge.svg) ## **Contributing** We love contributions from the community to **node-addon-api**! See [CONTRIBUTING.md](CONTRIBUTING.md) for more details on our philosophy around extending this module. <a name="contributors"></a> ## Team members ### Active | Name | GitHub Link | | ------------------- | ----------------------------------------------------- | | Anna Henningsen | [addaleax](https://github.com/addaleax) | | Chengzhong Wu | [legendecas](https://github.com/legendecas) | | Gabriel Schulhof | [gabrielschulhof](https://github.com/gabrielschulhof) | | Jim Schlight | [jschlight](https://github.com/jschlight) | | Michael Dawson | [mhdawson](https://github.com/mhdawson) | | Kevin Eady | [KevinEady](https://github.com/KevinEady) | Nicola Del Gobbo | [NickNaso](https://github.com/NickNaso) | ### Emeritus | Name | GitHub Link | | ------------------- | ----------------------------------------------------- | | Arunesh Chandra | [aruneshchandra](https://github.com/aruneshchandra) | | Benjamin Byholm | [kkoopa](https://github.com/kkoopa) | | Jason Ginchereau | [jasongin](https://github.com/jasongin) | | Hitesh Kanwathirtha | [digitalinfinity](https://github.com/digitalinfinity) | | Sampson Gao | [sampsongao](https://github.com/sampsongao) | | Taylor Woll | [boingoing](https://github.com/boingoing) | <a name="license"></a> Licensed under [MIT](./LICENSE.md) [ABI stability guide]: https://nodejs.org/en/docs/guides/abi-stability/ [Node-API support matrix]: https://nodejs.org/dist/latest/docs/api/n-api.html#n_api_n_api_version_matrix # create-hash [![Build Status](https://travis-ci.org/crypto-browserify/createHash.svg)](https://travis-ci.org/crypto-browserify/createHash) Node style hashes for use in the browser, with native hash functions in node. 
API is the same as hashes in node: ```js var createHash = require('create-hash') var hash = createHash('sha224') hash.update('synchronous write') // optional encoding parameter hash.digest() // synchronously get result with optional encoding parameter hash.write('write to it as a stream') hash.end() // remember it's a stream hash.read() // only if you ended it as a stream though ``` To get the JavaScript version even in node do `require('create-hash/browser')` # cross-spawn [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Build status][appveyor-image]][appveyor-url] [![Coverage Status][codecov-image]][codecov-url] [![Dependency status][david-dm-image]][david-dm-url] [![Dev Dependency status][david-dm-dev-image]][david-dm-dev-url] [npm-url]:https://npmjs.org/package/cross-spawn [downloads-image]:https://img.shields.io/npm/dm/cross-spawn.svg [npm-image]:https://img.shields.io/npm/v/cross-spawn.svg [travis-url]:https://travis-ci.org/moxystudio/node-cross-spawn [travis-image]:https://img.shields.io/travis/moxystudio/node-cross-spawn/master.svg [appveyor-url]:https://ci.appveyor.com/project/satazor/node-cross-spawn [appveyor-image]:https://img.shields.io/appveyor/ci/satazor/node-cross-spawn/master.svg [codecov-url]:https://codecov.io/gh/moxystudio/node-cross-spawn [codecov-image]:https://img.shields.io/codecov/c/github/moxystudio/node-cross-spawn/master.svg [david-dm-url]:https://david-dm.org/moxystudio/node-cross-spawn [david-dm-image]:https://img.shields.io/david/moxystudio/node-cross-spawn.svg [david-dm-dev-url]:https://david-dm.org/moxystudio/node-cross-spawn?type=dev [david-dm-dev-image]:https://img.shields.io/david/dev/moxystudio/node-cross-spawn.svg A cross platform solution to node's spawn and spawnSync. ## Installation Node.js version 8 and up: `$ npm install cross-spawn` Node.js version 7 and under: `$ npm install cross-spawn@6` ## Why Node has issues when using spawn on Windows: - It ignores [PATHEXT](https://github.com/joyent/node/issues/2318) - It does not support [shebangs](https://en.wikipedia.org/wiki/Shebang_(Unix)) - Has problems running commands with [spaces](https://github.com/nodejs/node/issues/7367) - Has problems running commands with posix relative paths (e.g.: `./my-folder/my-executable`) - Has an [issue](https://github.com/moxystudio/node-cross-spawn/issues/82) with command shims (files in `node_modules/.bin/`), where arguments with quotes and parenthesis would result in [invalid syntax error](https://github.com/moxystudio/node-cross-spawn/blob/e77b8f22a416db46b6196767bcd35601d7e11d54/test/index.test.js#L149) - No `options.shell` support on node `<v4.8` All these issues are handled correctly by `cross-spawn`. There are some known modules, such as [win-spawn](https://github.com/ForbesLindesay/win-spawn), that try to solve this but they are either broken or provide faulty escaping of shell arguments. ## Usage Exactly the same way as node's [`spawn`](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options) or [`spawnSync`](https://nodejs.org/api/child_process.html#child_process_child_process_spawnsync_command_args_options), so it's a drop in replacement. 
```js const spawn = require('cross-spawn'); // Spawn NPM asynchronously const child = spawn('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' }); // Spawn NPM synchronously const result = spawn.sync('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' }); ``` ## Caveats ### Using `options.shell` as an alternative to `cross-spawn` Starting from node `v4.8`, `spawn` has a `shell` option that allows you run commands from within a shell. This new option solves the [PATHEXT](https://github.com/joyent/node/issues/2318) issue but: - It's not supported in node `<v4.8` - You must manually escape the command and arguments which is very error prone, specially when passing user input - There are a lot of other unresolved issues from the [Why](#why) section that you must take into account If you are using the `shell` option to spawn a command in a cross platform way, consider using `cross-spawn` instead. You have been warned. ### `options.shell` support While `cross-spawn` adds support for `options.shell` in node `<v4.8`, all of its enhancements are disabled. This mimics the Node.js behavior. More specifically, the command and its arguments will not be automatically escaped nor shebang support will be offered. This is by design because if you are using `options.shell` you are probably targeting a specific platform anyway and you don't want things to get into your way. ### Shebangs support While `cross-spawn` handles shebangs on Windows, its support is limited. More specifically, it just supports `#!/usr/bin/env <program>` where `<program>` must not contain any arguments. If you would like to have the shebang support improved, feel free to contribute via a pull-request. Remember to always test your code on Windows! ## Tests `$ npm test` `$ npm test -- --watch` during development ## License Released under the [MIT License](https://www.opensource.org/licenses/mit-license.php). <p align="center"> <img width="250" src="/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> [![Build Status][travis-image]][travis-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description : Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments. > <img width="400" src="/screen.png"> * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage : ### Simple Example ```javascript #!/usr/bin/env node const {argv} = require('yargs') if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! 
``` ### Complex Example ```javascript #!/usr/bin/env node require('yargs') // eslint-disable-line .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ## Webpack See usage examples of yargs with webpack in [docs](/docs/webpack.md). ## Community : Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). ## Documentation : ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Contributing](/contributing.md) [travis-url]: https://travis-ci.org/yargs/yargs [travis-image]: https://img.shields.io/travis/yargs/yargs/master.svg [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # jsdiff [![Build Status](https://secure.travis-ci.org/kpdecker/jsdiff.svg)](http://travis-ci.org/kpdecker/jsdiff) [![Sauce Test Status](https://saucelabs.com/buildstatus/jsdiff)](https://saucelabs.com/u/jsdiff) A javascript text differencing implementation. Based on the algorithm proposed in ["An O(ND) Difference Algorithm and its Variations" (Myers, 1986)](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6927). ## Installation ```bash npm install diff --save ``` ## API * `Diff.diffChars(oldStr, newStr[, options])` - diffs two blocks of text, comparing character by character. Returns a list of change objects (See below). Options * `ignoreCase`: `true` to ignore casing difference. Defaults to `false`. * `Diff.diffWords(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, ignoring whitespace. Returns a list of change objects (See below). Options * `ignoreCase`: Same as in `diffChars`. * `Diff.diffWordsWithSpace(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, treating whitespace as significant. Returns a list of change objects (See below). 
* `Diff.diffLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line. Options * `ignoreWhitespace`: `true` to ignore leading and trailing whitespace. This is the same as `diffTrimmedLines` * `newlineIsToken`: `true` to treat newline characters as separate tokens. This allows for changes to the newline structure to occur independently of the line content and to be treated as such. In general this is the more human friendly form of `diffLines` and `diffLines` is better suited for patches and other computer friendly output. Returns a list of change objects (See below). * `Diff.diffTrimmedLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line, ignoring leading and trailing whitespace. Returns a list of change objects (See below). * `Diff.diffSentences(oldStr, newStr[, options])` - diffs two blocks of text, comparing sentence by sentence. Returns a list of change objects (See below). * `Diff.diffCss(oldStr, newStr[, options])` - diffs two blocks of text, comparing CSS tokens. Returns a list of change objects (See below). * `Diff.diffJson(oldObj, newObj[, options])` - diffs two JSON objects, comparing the fields defined on each. The order of fields, etc does not matter in this comparison. Returns a list of change objects (See below). * `Diff.diffArrays(oldArr, newArr[, options])` - diffs two arrays, comparing each item for strict equality (===). Options * `comparator`: `function(left, right)` for custom equality checks Returns a list of change objects (See below). * `Diff.createTwoFilesPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Parameters: * `oldFileName` : String to be output in the filename section of the patch for the removals * `newFileName` : String to be output in the filename section of the patch for the additions * `oldStr` : Original string value * `newStr` : New string value * `oldHeader` : Additional information to include in the old file header * `newHeader` : Additional information to include in the new file header * `options` : An object with options. Currently, only `context` is supported and describes how many lines of context should be included. * `Diff.createPatch(fileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Just like Diff.createTwoFilesPatch, but with oldFileName being equal to newFileName. * `Diff.structuredPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options)` - returns an object with an array of hunk objects. This method is similar to createTwoFilesPatch, but returns a data structure suitable for further processing. Parameters are the same as createTwoFilesPatch. The data structure returned may look like this: ```js { oldFileName: 'oldfile', newFileName: 'newfile', oldHeader: 'header1', newHeader: 'header2', hunks: [{ oldStart: 1, oldLines: 3, newStart: 1, newLines: 3, lines: [' line2', ' line3', '-line4', '+line5', '\\ No newline at end of file'], }] } ``` * `Diff.applyPatch(source, patch[, options])` - applies a unified diff patch. Return a string containing new version of provided data. `patch` may be a string diff or the output from the `parsePatch` or `structuredPatch` methods. The optional `options` object may have the following keys: - `fuzzFactor`: Number of lines that are allowed to differ before rejecting a patch. Defaults to 0. - `compareLine(lineNumber, line, operation, patchContent)`: Callback used to compare to given lines to determine if they should be considered equal when patching. 
Defaults to strict equality but may be overridden to provide fuzzier comparison. Should return false if the lines should be rejected. * `Diff.applyPatches(patch, options)` - applies one or more patches. This method will iterate over the contents of the patch and apply to data provided through callbacks. The general flow for each patch index is: - `options.loadFile(index, callback)` is called. The caller should then load the contents of the file and then pass that to the `callback(err, data)` callback. Passing an `err` will terminate further patch execution. - `options.patched(index, content, callback)` is called once the patch has been applied. `content` will be the return value from `applyPatch`. When it's ready, the caller should call `callback(err)` callback. Passing an `err` will terminate further patch execution. Once all patches have been applied or an error occurs, the `options.complete(err)` callback is made. * `Diff.parsePatch(diffStr)` - Parses a patch into structured data Return a JSON object representation of the a patch, suitable for use with the `applyPatch` method. This parses to the same structure returned by `Diff.structuredPatch`. * `convertChangesToXML(changes)` - converts a list of changes to a serialized XML format All methods above which accept the optional `callback` method will run in sync mode when that parameter is omitted and in async mode when supplied. This allows for larger diffs without blocking the event loop. This may be passed either directly as the final parameter or as the `callback` field in the `options` object. ### Change Objects Many of the methods above return change objects. These objects consist of the following fields: * `value`: Text content * `added`: True if the value was inserted into the new string * `removed`: True if the value was removed from the old string Note that some cases may omit a particular flag field. Comparison on the flag fields should always be done in a truthy or falsy manner. ## Examples Basic example in Node ```js require('colors'); const Diff = require('diff'); const one = 'beep boop'; const other = 'beep boob blah'; const diff = Diff.diffChars(one, other); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; process.stderr.write(part.value[color]); }); console.log(); ``` Running the above program should yield <img src="images/node_example.png" alt="Node Example"> Basic example in a web page ```html <pre id="display"></pre> <script src="diff.js"></script> <script> const one = 'beep boop', other = 'beep boob blah', color = ''; let span = null; const diff = Diff.diffChars(one, other), display = document.getElementById('display'), fragment = document.createDocumentFragment(); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; span = document.createElement('span'); span.style.color = color; span.appendChild(document .createTextNode(part.value)); fragment.appendChild(span); }); display.appendChild(fragment); </script> ``` Open the above .html file in a browser and you should see <img src="images/web_example.png" alt="Node Example"> **[Full online demo](http://kpdecker.github.com/jsdiff)** ## Compatibility [![Sauce Test Status](https://saucelabs.com/browser-matrix/jsdiff.svg)](https://saucelabs.com/u/jsdiff) jsdiff supports all ES3 environments with some known issues on IE8 and below. 
Under these browsers some diff algorithms such as word diff and others may fail due to lack of support for capturing groups in the `split` operation. ## License See [LICENSE](https://github.com/kpdecker/jsdiff/blob/master/LICENSE). # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Specification conformance whatwg-url is currently up to date with the URL spec up to commit [7ae1c69](https://github.com/whatwg/url/commit/7ae1c691c96f0d82fafa24c33aa1e8df9ffbf2bc). For `file:` URLs, whose [origin is left unspecified](https://url.spec.whatwg.org/#concept-url-origin), whatwg-url chooses to use a new opaque origin (which serializes to `"null"`). ## API ### The `URL` and `URLSearchParams` classes The main API is provided by the [`URL`](https://url.spec.whatwg.org/#url-class) and [`URLSearchParams`](https://url.spec.whatwg.org/#interface-urlsearchparams) exports, which follows the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use these. ### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They mostly operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. - [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/origin.html#ascii-serialisation-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` - [Percent decode](https://url.spec.whatwg.org/#percent-decode): `percentDecode(buffer)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore 
slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - [`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - [`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by `null`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ `null`. ## Development instructions First, install [Node.js](https://nodejs.org/). Then, fetch the dependencies of whatwg-url, by running from this directory: npm install To run tests: npm test To generate a coverage report: npm run coverage To build and run the live viewer: npm run build npm run build-live-viewer Serve the contents of the `live-viewer` directory using any web server. ## Supporting whatwg-url The jsdom project (including whatwg-url) is a community-driven project maintained by a team of [volunteers](https://github.com/orgs/jsdom/people). You could support us by: - [Getting professional support for whatwg-url](https://tidelift.com/subscription/pkg/npm-whatwg-url?utm_source=npm-whatwg-url&utm_medium=referral&utm_campaign=readme) as part of a Tidelift subscription. Tidelift helps making open source sustainable for us while giving teams assurances for maintenance, licensing, and security. - Contributing directly to the project. 
# fast-levenshtein - Levenshtein algorithm in Javascript [![Build Status](https://secure.travis-ci.org/hiddentao/fast-levenshtein.png)](http://travis-ci.org/hiddentao/fast-levenshtein) [![NPM module](https://badge.fury.io/js/fast-levenshtein.png)](https://badge.fury.io/js/fast-levenshtein) [![NPM downloads](https://img.shields.io/npm/dm/fast-levenshtein.svg?maxAge=2592000)](https://www.npmjs.com/package/fast-levenshtein) [![Follow on Twitter](https://img.shields.io/twitter/url/http/shields.io.svg?style=social&label=Follow&maxAge=2592000)](https://twitter.com/hiddentao) An efficient Javascript implementation of the [Levenshtein algorithm](http://en.wikipedia.org/wiki/Levenshtein_distance) with locale-specific collator support. ## Features * Works in node.js and in the browser. * Better performance than other implementations by not needing to store the whole matrix ([more info](http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm)). * Locale-sensitive string comparisions if needed. * Comprehensive test suite and performance benchmark. * Small: <1 KB minified and gzipped ## Installation ### node.js Install using [npm](http://npmjs.org/): ```bash $ npm install fast-levenshtein ``` ### Browser Using bower: ```bash $ bower install fast-levenshtein ``` If you are not using any module loader system then the API will then be accessible via the `window.Levenshtein` object. ## Examples **Default usage** ```javascript var levenshtein = require('fast-levenshtein'); var distance = levenshtein.get('back', 'book'); // 2 var distance = levenshtein.get('我愛你', '我叫你'); // 1 ``` **Locale-sensitive string comparisons** It supports using [Intl.Collator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Collator) for locale-sensitive string comparisons: ```javascript var levenshtein = require('fast-levenshtein'); levenshtein.get('mikailovitch', 'Mikhaïlovitch', { useCollator: true}); // 1 ``` ## Building and Testing To build the code and run the tests: ```bash $ npm install -g grunt-cli $ npm install $ npm run build ``` ## Performance _Thanks to [Titus Wormer](https://github.com/wooorm) for [encouraging me](https://github.com/hiddentao/fast-levenshtein/issues/1) to do this._ Benchmarked against other node.js levenshtein distance modules (on Macbook Air 2012, Core i7, 8GB RAM): ```bash Running suite Implementation comparison [benchmark/speed.js]... >> levenshtein-edit-distance x 234 ops/sec ±3.02% (73 runs sampled) >> levenshtein-component x 422 ops/sec ±4.38% (83 runs sampled) >> levenshtein-deltas x 283 ops/sec ±3.83% (78 runs sampled) >> natural x 255 ops/sec ±0.76% (88 runs sampled) >> levenshtein x 180 ops/sec ±3.55% (86 runs sampled) >> fast-levenshtein x 1,792 ops/sec ±2.72% (95 runs sampled) Benchmark done. Fastest test is fast-levenshtein at 4.2x faster than levenshtein-component ``` You can run this benchmark yourself by doing: ```bash $ npm install $ npm run build $ npm run benchmark ``` ## Contributing If you wish to submit a pull request please update and/or create new tests for any changes you make and ensure the grunt build passes. See [CONTRIBUTING.md](https://github.com/hiddentao/fast-levenshtein/blob/master/CONTRIBUTING.md) for details. ## License MIT - see [LICENSE.md](https://github.com/hiddentao/fast-levenshtein/blob/master/LICENSE.md) # fs-constants Small module that allows you to get the fs constants across Node and the browser. 
``` npm install fs-constants ``` Previously you would use `require('constants')` for this in node, but that has been deprecated and changed to `require('fs').constants`, which does not browserify. This module uses `require('constants')` in the browser and `require('fs').constants` in node to work around this. ## Usage ``` js var constants = require('fs-constants') console.log('constants:', constants) ``` ## License MIT
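As a more concrete follow-up to the usage above, one of the exposed flags can be passed straight to Node's `fs` APIs (a small sketch; `O_RDONLY` is one of the standard POSIX constants the module re-exports, and the file name is arbitrary):

```js
var fs = require('fs')
var constants = require('fs-constants')

// constants.O_RDONLY has the same numeric value as fs.constants.O_RDONLY
fs.open('package.json', constants.O_RDONLY, function (err, fd) {
  if (err) throw err
  console.log('opened read-only, fd:', fd)
  fs.close(fd, function () {})
})
```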
Peersyst_dhealth-paper-wallet
README.md index.ts package-lock.json package.json resources encodedBasePdf.js encodedBasePrivateKeyPdf.js encodedFont.js tsconfig.json
# dHealth Paper Wallets A dHealth paper wallet generator. 1. [Installation](#installation) 2. [Usage](#usage) ## Installation <a name="installation"></a> To install the npm module in your TypeScript or Node.js project, run: `npm install dhealth-paper-wallet --save` And install the plugin dependencies: `npm install dhealth-sdk dhealth-hd-wallets --save` ## Usage <a name="usage"></a> Prepare some constants to use the module: ```javascript import { dHealthPaperWallet } from 'dhealth-paper-wallet'; import { NetworkType } from 'dhealth-sdk'; const hdRootAccount = { mnemonic: "guess welcome coconut forum cricket unfold welcome still ticket cluster buddy fan decrease cotton model drive student assault cloth protect random equal this congress", rootAccountPublicKey: "1F032B727E910D69F1B6A3244AD1B065547AA0055BC41CF4285F662182DCC18A", rootAccountAddress: "TC6B74-FLJ5MR-PSXPEM-WDBWX5-VDIXA5-L5UI36-MEA" }; const privateKeyAccount = { name: "My private key account", address: "TCHBDE-NCLKEB-ILBPWP-3JPB2X-NY64OE-7PYHHE-32I", publicKey: "3B6A27BCCEB6A42D62A3A8D02A6F0D73653215771DE243A63AC048A18B59DA29", privateKey: "0000000000000000000000000000000000000000000000000000000000000000" }; const paperWallet = new dHealthPaperWallet(hdRootAccount, [privateKeyAccount], NetworkType.TEST_NET, '57F7DA205008026C776CB6AED843393F04CD458E0AA2D9F1D5F31A402072B2D6'); const uint8Array = await paperWallet.toPdf(); ```
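The `Uint8Array` returned by `toPdf()` can then be persisted however you like — for example, written to disk with Node's `fs` module (a minimal sketch following on from the example above; the output file name is arbitrary):

```javascript
import { writeFileSync } from 'fs';

// uint8Array is the PDF produced by paperWallet.toPdf() above;
// fs accepts a Uint8Array directly as file contents.
writeFileSync('dhealth-paper-wallet.pdf', uint8Array);
```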
joeymach_reputation-social-network
README.md _near account-dev.txt account-state-dev.json reset-account.sh backend ormconfig.json package-lock.json package.json src controller.ts db.ts near.ts server.ts service.ts tsconfig.json docker-compose.yml lens-protocol-frontend .eslintrc.json README.md abi.json api.js next.config.js package-lock.json package.json pages _app.js api hello.js index.js profile [id].js wallet index.js modal.js networks.js providerOptions.js utils.js public vercel.svg styles Home.module.css globals.css mint_program Cargo.toml build.sh deploy.sh mint.sh src lib.rs node_modules .package-lock.json base-x LICENSE.md README.md package.json src index.d.ts index.js bn.js CHANGELOG.md README.md lib bn.js package.json borsh .eslintrc.yml .travis.yml LICENSE-MIT.txt README.md borsh-ts .eslintrc.yml index.ts test .eslintrc.yml fuzz borsh-roundtrip.js transaction-example enums.d.ts enums.js key_pair.d.ts key_pair.js serialize.d.ts serialize.js signer.d.ts signer.js transaction.d.ts transaction.js serialize.test.js lib index.d.ts index.js package.json tsconfig.json bs58 CHANGELOG.md README.md index.js package.json capability Array.prototype.forEach.js Array.prototype.map.js Error.captureStackTrace.js Error.prototype.stack.js Function.prototype.bind.js Object.create.js Object.defineProperties.js Object.defineProperty.js Object.prototype.hasOwnProperty.js README.md arguments.callee.caller.js es5.js index.js lib CapabilityDetector.js definitions.js index.js package.json strict mode.js depd History.md Readme.md index.js lib browser index.js package.json error-polyfill README.md index.js lib index.js non-v8 Frame.js FrameStringParser.js FrameStringSource.js index.js prepareStackTrace.js unsupported.js v8.js package.json http-errors HISTORY.md README.md index.js node_modules depd History.md Readme.md index.js lib browser index.js compat callsite-tostring.js event-listener-count.js index.js package.json package.json inherits README.md inherits.js inherits_browser.js package.json js-sha256 CHANGELOG.md LICENSE.txt README.md build sha256.min.js index.d.ts package.json src sha256.js mustache CHANGELOG.md README.md mustache.js mustache.min.js package.json near-api-js README.md browser-exports.js dist near-api-js.js near-api-js.min.js lib account.d.ts account.js account_creator.d.ts account_creator.js account_multisig.d.ts account_multisig.js browser-connect.d.ts browser-connect.js browser-index.d.ts browser-index.js common-index.d.ts common-index.js connect.d.ts connect.js connection.d.ts connection.js constants.d.ts constants.js contract.d.ts contract.js generated rpc_error_schema.json index.d.ts index.js key_stores browser-index.d.ts browser-index.js browser_local_storage_key_store.d.ts browser_local_storage_key_store.js in_memory_key_store.d.ts in_memory_key_store.js index.d.ts index.js keystore.d.ts keystore.js merge_key_store.d.ts merge_key_store.js unencrypted_file_system_keystore.d.ts unencrypted_file_system_keystore.js near.d.ts near.js providers index.d.ts index.js json-rpc-provider.d.ts json-rpc-provider.js provider.d.ts provider.js res error_messages.d.ts error_messages.json signer.d.ts signer.js transaction.d.ts transaction.js utils enums.d.ts enums.js errors.d.ts errors.js exponential-backoff.d.ts exponential-backoff.js format.d.ts format.js index.d.ts index.js key_pair.d.ts key_pair.js network.d.ts network.js rpc_errors.d.ts rpc_errors.js serialize.d.ts serialize.js setup-node-fetch.d.ts setup-node-fetch.js web.d.ts web.js validators.d.ts validators.js wallet-account.d.ts wallet-account.js package.json node-fetch 
LICENSE.md README.md browser.js lib index.es.js index.js package.json o3 README.md index.js lib Class.js abstractMethod.js index.js package.json safe-buffer README.md index.d.ts index.js package.json setprototypeof README.md index.d.ts index.js package.json test index.js statuses HISTORY.md README.md codes.json index.js package.json text-encoding-utf-8 LICENSE.md README.md lib encoding.js encoding.lib.js package.json src encoding.js polyfill.js toidentifier HISTORY.md README.md index.js package.json tr46 index.js lib mappingTable.json package.json tweetnacl AUTHORS.md CHANGELOG.md PULL_REQUEST_TEMPLATE.md README.md nacl-fast.js nacl-fast.min.js nacl.d.ts nacl.js nacl.min.js package.json u3 README.md index.js lib cache.js eachCombination.js index.js package.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js public-api.js url-state-machine.js utils.js package.json | package-lock.json package.json reputation_algorithm README.md repalgo..go
# Javascript Error Polyfill [![Build Status](https://travis-ci.org/inf3rno/error-polyfill.png?branch=master)](https://travis-ci.org/inf3rno/error-polyfill) Implementing the [V8 Stack Trace API](https://github.com/v8/v8/wiki/Stack-Trace-API) in non-V8 environments as much as possible ## Installation ```bash npm install error-polyfill ``` ```bash bower install error-polyfill ``` ### Environment compatibility Tested on the following environments: Windows 7 - **Node.js** 9.6 - **Chrome** 64.0 - **Firefox** 58.0 - **Internet Explorer** 10.0, 11.0 - **PhantomJS** 2.1 - **Opera** 51.0 Travis - **Node.js** 8, 9 - **Chrome** - **Firefox** - **PhantomJS** The polyfill might work on other environments too due to its adaptive design. I use [Karma](https://github.com/karma-runner/karma) with [Browserify](https://github.com/substack/node-browserify) to test the framework in browsers. ### Requirements ES5 support is required, without that the lib throws an Error and stops working. The ES5 features are tested by the [capability](https://github.com/inf3rno/capability) lib run time. Classes are created by the [o3](https://github.com/inf3rno/o3) lib. Utility functions are implemented in the [u3](https://github.com/inf3rno/u3) lib. ## API documentation ### Usage In this documentation I used the framework as follows: ```js require("error-polyfill"); // <- your code here ``` It is recommended to require the polyfill in your main script. ### Getting a past stack trace with `Error.getStackTrace` This static method is not part of the V8 Stack Trace API, but it is recommended to **use `Error.getStackTrace(throwable)` instead of `throwable.stack`** to get the stack trace of Error instances! Explanation: By non-V8 environments we cannot replace the default stack generation algorithm, so we need a workaround to generate the stack when somebody tries to access it. So the original stack string will be parsed and the result will be properly formatted by accessing the stack using the `Error.getStackTrace` method. Arguments and return values: - The `throwable` argument should be an `Error` (descendant) instance, but it can be an `Object` instance as well. - The return value is the generated `stack` of the `throwable` argument. Example: ```js try { theNotDefinedFunction(); } catch (error) { console.log(Error.getStackTrace(error)); // ReferenceError: theNotDefinedFunction is not defined // at ... // ... } ``` ### Capturing the present stack trace with `Error.captureStackTrace` The `Error.captureStackTrace(throwable [, terminator])` sets the present stack above the `terminator` on the `throwable`. Arguments and return values: - The `throwable` argument should be an instance of an `Error` descendant, but it can be an `Object` instance as well. It is recommended to use `Error` descendant instances instead of inline objects, because we can recognize them by type e.g. `error instanceof UserError`. - The optional `terminator` argument should be a `Function`. Only the calls before this function will be reported in the stack, so without a `terminator` argument, the last call in the stack will be the call of the `Error.captureStackTrace`. - There is no return value, the `stack` will be set on the `throwable` so you will be able to access it using `Error.getStackTrace`. The format of the stack depends on the `Error.prepareStackTrace` implementation. 
Example: ```js var UserError = function (message){ this.name = "UserError"; this.message = message; Error.captureStackTrace(this, this.constructor); }; UserError.prototype = Object.create(Error.prototype); function codeSmells(){ throw new UserError("What's going on?!"); } codeSmells(); // UserError: What's going on?! // at codeSmells (myModule.js:23:1) // ... ``` Limitations: By the current implementation the `terminator` can be only the `Error.captureStackTrace` caller function. This will change soon, but in certain conditions, e.g. by using strict mode (`"use strict";`) it is not possible to access the information necessary to implement this feature. You will get an empty `frames` array and a `warning` in the `Error.prepareStackTrace` when the stack parser meets with such conditions. ### Formatting the stack trace with `Error.prepareStackTrace` The `Error.prepareStackTrace(throwable, frames [, warnings])` formats the stack `frames` and returns the `stack` value for `Error.captureStackTrace` or `Error.getStackTrace`. The native implementation returns a stack string, but you can override that by setting a new function value. Arguments and return values: - The `throwable` argument is an `Error` or `Object` instance coming from the `Error.captureStackTrace` or from the creation of a new `Error` instance. Be aware that in some environments you need to throw that instance to get a parsable stack. Without that you will get only a `warning` by trying to access the stack with `Error.getStackTrace`. - The `frames` argument is an array of `Frame` instances. Each `frame` represents a function call in the stack. You can use these frames to build a stack string. To access information about individual frames you can use the following methods. - `frame.toString()` - Returns the string representation of the frame, e.g. `codeSmells (myModule.js:23:1)`. - `frame.getThis()` - **Cannot be supported.** Returns the context of the call, only V8 environments support this natively. - `frame.getTypeName()` - **Not implemented yet.** Returns the type name of the context, by the global namespace it is `Window` in Chrome. - `frame.getFunction()` - Returns the called function or `undefined` by strict mode. - `frame.getFunctionName()` - **Not implemented yet.** Returns the name of the called function. - `frame.getMethodName()` - **Not implemented yet.** Returns the method name of the called function is a method of an object. - `frame.getFileName()` - **Not implemented yet.** Returns the file name where the function was called. - `frame.getLineNumber()` - **Not implemented yet.** Returns at which line the function was called in the file. - `frame.getColumnNumber()` - **Not implemented yet.** Returns at which column the function was called in the file. This information is not always available. - `frame.getEvalOrigin()` - **Not implemented yet.** Returns the original of an `eval` call. - `frame.isTopLevel()` - **Not implemented yet.** Returns whether the function was called from the top level. - `frame.isEval()` - **Not implemented yet.** Returns whether the called function was `eval`. - `frame.isNative()` - **Not implemented yet.** Returns whether the called function was native. - `frame.isConstructor()` - **Not implemented yet.** Returns whether the called function was a constructor. - The optional `warnings` argument contains warning messages coming from the stack parser. It is not part of the V8 Stack Trace API. - The return value will be the stack you can access with `Error.getStackTrace(throwable)`. 
If it is an object, it is recommended to add a `toString` method, so you will be able to read it in the console. Example: ```js Error.prepareStackTrace = function (throwable, frames, warnings) { var string = ""; string += throwable.name || "Error"; string += ": " + (throwable.message || ""); if (warnings instanceof Array) for (var warningIndex in warnings) { var warning = warnings[warningIndex]; string += "\n # " + warning; } for (var frameIndex in frames) { var frame = frames[frameIndex]; string += "\n at " + frame.toString(); } return string; }; ``` ### Stack trace size limits with `Error.stackTraceLimit` **Not implemented yet.** You can set size limits on the stack trace, so you won't have any problems because of too long stack traces. Example: ```js Error.stackTraceLimit = 10; ``` ### Handling uncaught errors and rejections **Not implemented yet.** ## Differences between environments and modes Since there is no Stack Trace API standard, every browsers solves this problem differently. I try to document what I've found about these differences as detailed as possible, so it will be easier to follow the code. Overriding the `error.stack` property with custom Stack instances - by Node.js and Chrome the `Error.prepareStackTrace()` can override every `error.stack` automatically right by creation - by Firefox, Internet Explorer and Opera you cannot automatically override every `error.stack` by native errors - by PhantomJS you cannot override the `error.stack` property of native errors, it is not configurable Capturing the current stack trace - by Node.js, Chrome, Firefox and Opera the stack property is added by instantiating a native error - by Node.js and Chrome the stack creation is lazy loaded and cached, so the `Error.prepareStackTrace()` is called only by the first access - by Node.js and Chrome the current stack can be added to any object with `Error.captureStackTrace()` - by Internet Explorer the stack is created by throwing a native error - by PhantomJS the stack is created by throwing any object, but not a primitive Accessing the stack - by Node.js, Chrome, Firefox, Internet Explorer, Opera and PhantomJS you can use the `error.stack` property - by old Opera you have to use the `error.stacktrace` property to get the stack Prefixes and postfixes on the stack string - by Node.js, Chrome, Internet Explorer and Opera you have the `error.name` and the `error.message` in a `{name}: {message}` format at the beginning of the stack string - by Firefox and PhantomJS the stack string does not contain the `error.name` and the `error.message` - by Firefox you have an empty line at the end of the stack string Accessing the stack frames array - by Node.js and Chrome you can access the frame objects directly by overriding the `Error.prepareStackTrace()` - by Firefox, Internet Explorer, PhantomJS, and Opera you need to parse the stack string in order to get the frames The structure of the frame string - by Node.js and Chrome - the frame string of calling a function from a module: `thirdFn (http://localhost/myModule.js:45:29)` - the frame strings contain an ` at ` prefix, which is not present by the `frame.toString()` output, so it is added by the `stack.toString()` - by Firefox - the frame string of calling a function from a module: `thirdFn@http://localhost/myModule.js:45:29` - by Internet Explorer - the frame string of calling a function from a module: ` at thirdFn (http://localhost/myModule.js:45:29)` - by PhantomJS - the frame string of calling a function from a module: 
`thirdFn@http://localhost/myModule.js:45:29` - by Opera - the frame string of calling a function from a module: ` at thirdFn (http://localhost/myModule.js:45)` Accessing information by individual frames - by Node.js and Chrome the `frame.getThis()` and the `frame.getFunction()` returns `undefined` by frames originate from [strict mode](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode) code - by Firefox, Internet Explorer, PhantomJS, and Opera the context of the function calls is not accessible, so the `frame.getThis()` cannot be implemented - by Firefox, Internet Explorer, PhantomJS, and Opera functions are not accessible with `arguments.callee.caller` by frames originate from strict mode, so by these frames `frame.getFunction()` can return only `undefined` (this is consistent with V8 behavior) ## License MIT - 2016 Jánszky László Lajos # base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. **WARNING:** This module is **NOT RFC3548** compliant, it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. ## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base. Base | Alphabet ------------- | ------------- 2 | `01` 8 | `01234567` 11 | `0123456789a` 16 | `0123456789abcdef` 32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ` 32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32) 36 | `0123456789abcdefghijklmnopqrstuvwxyz` 58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz` 62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ` 64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/` 67 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~` ## How it works It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. Then for every leading zero in the input (not significant as a number) it will encode as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base. For example, a base58 alphabet packs roughly 5.858 bits per character. This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character. While unusual, this does mean that no padding is required and it works for bases like 43. 
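To make the leading-zero behaviour described above concrete, here is a small sketch using a base16 (`0-f`) alphabet — the alphabet string is taken from the table above and is an input you supply, not an export of the module:

```javascript
var BASE16 = '0123456789abcdef'
var base16 = require('base-x')(BASE16)

// Three leading '0' characters each decode to a zero byte, then 'f' decodes
// as the number 15 — four bytes in total, unlike canonical hex where '000f'
// would pack uniformly into exactly two bytes.
var bytes = base16.decode('000f')
console.log(bytes)                // => 00 00 00 0f (Buffer or Uint8Array, depending on the version)
console.log(base16.encode(bytes)) // => '000f'
```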
## LICENSE [MIT](LICENSE) A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets. # WebIDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [WebIDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). The goal is that you should be able to write code like ```js const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a WebIDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different WebIDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the WebIDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the WebIDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). ## Status All of the numeric types are implemented (float being implemented as double) and some others are as well - check the source for all of them. This list will grow over time in service of the [HTML as Custom Elements](https://github.com/dglazkov/html-as-custom-elements) project, but in the meantime, pull requests welcome! I'm not sure yet what the strategy will be for modifiers, e.g. [`[Clamp]`](http://heycam.github.io/webidl/#Clamp). Maybe something like `conversions["unsigned long"](x, { clamp: true })`? We'll see. We might also want to extend the API to give better error messages, e.g. "Argument 1 of HTMLMediaElement.fastSeek is not a finite floating-point value" instead of "Argument is not a finite floating-point value." This would require passing in more information to the conversion functions than we currently do. ## Background What's actually going on here, conceptually, is pretty weird. Let's try to explain. WebIDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on WebIDL values, i.e. instances of WebIDL types. For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a WebIDL value of [WebIDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules. Separately from its type system, WebIDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given WebIDL operation, how does that get converted into a WebIDL value? For example, a JavaScript `true` passed in the position of a WebIDL `boolean` argument becomes a WebIDL `true`. But, a JavaScript `true` passed in the position of a [WebIDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a WebIDL `1`. And so on. 
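In code, the binding rules just described look roughly like this when exercised through this package (a hedged illustration; the specific inputs are examples only, but the converter names are the ones exported by the module):

```js
const conversions = require("webidl-conversions");

// JavaScript `true` in a WebIDL `boolean` position stays `true`...
console.log(conversions["boolean"](true));        // => true

// ...but in a WebIDL `unsigned long` position it becomes `1`.
console.log(conversions["unsigned long"](true));  // => 1
console.log(conversions["unsigned long"]("42"));  // => 42
console.log(conversions["double"]("3.14"));       // => 3.14
```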
Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the WebIDL algorithms, they don't actually use WebIDL values, since those aren't "real" outside of specs. Instead, implementations apply the WebIDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting WebIDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is payoff of WebIDL, in a nutshell. And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given WebIDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ WebIDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ WebIDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a WebIDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't Use This Seriously, why would you ever use this? You really shouldn't. WebIDL is … not great, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from WebIDL. In general, your JavaScript should not be trying to become more like WebIDL; if anything, we should fix WebIDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in WebIDL. # toidentifier [![NPM Version][npm-image]][npm-url] [![NPM Downloads][downloads-image]][downloads-url] [![Build Status][github-actions-ci-image]][github-actions-ci-url] [![Test Coverage][codecov-image]][codecov-url] > Convert a string of words to a JavaScript identifier ## Install This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): ```bash $ npm install toidentifier ``` ## Example ```js var toIdentifier = require('toidentifier') console.log(toIdentifier('Bad Request')) // => "BadRequest" ``` ## API This CommonJS module exports a single default function: `toIdentifier`. ### toIdentifier(string) Given a string as the argument, it will be transformed according to the following rules and the new string will be returned: 1. Split into words separated by space characters (`0x20`). 2. Upper case the first character of each word. 3. Join the words together with no separator. 4. Remove all non-word (`[0-9a-z_]`) characters. 
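Walking through these rules by hand reproduces the function's output; the example below is illustrative and not part of the upstream README:

```js
var toIdentifier = require('toidentifier')

// 1. split on spaces        -> ["I'm", 'a', 'teapot']
// 2. upper-case first chars -> ["I'm", 'A', 'Teapot']
// 3. join with no separator -> "I'mATeapot"
// 4. drop non-word chars    -> 'ImATeapot'
console.log(toIdentifier("I'm a teapot")) // => 'ImATeapot'
```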
## License [MIT](LICENSE) [codecov-image]: https://img.shields.io/codecov/c/github/component/toidentifier.svg [codecov-url]: https://codecov.io/gh/component/toidentifier [downloads-image]: https://img.shields.io/npm/dm/toidentifier.svg [downloads-url]: https://npmjs.org/package/toidentifier [github-actions-ci-image]: https://img.shields.io/github/workflow/status/component/toidentifier/ci/master?label=ci [github-actions-ci-url]: https://github.com/component/toidentifier?query=workflow%3Aci [npm-image]: https://img.shields.io/npm/v/toidentifier.svg [npm-url]: https://npmjs.org/package/toidentifier ## [npm]: https://www.npmjs.com/ [yarn]: https://yarnpkg.com/ # Lens Protocol Front End Starter This is an example of how to build a front-end application on top of [Lens Protocol](https://docs.lens.xyz/docs). The main queries used in this app are defined in __api.js__: 1. [recommendProfiles](https://docs.lens.xyz/docs/recommended-profiles#api-details) - get popular profiles 2. [getProfiles](https://docs.lens.xyz/docs/get-profiles) - get profiles by passing in an array of `profileIds` 3. [getPublications](https://docs.lens.xyz/docs/get-publications) - returns a list of publications based on your request query 4. [searchProfiles](https://docs.lens.xyz/docs/search-profiles-and-publications) - allows you to search across hashtags on publications or profile handles. This query returns either a Post and Comment or Profile. You can view all of the APIs [here](https://docs.lens.xyz/docs/introduction) ## Running this project You can run this project by following these steps: 1. Clone the repo, change into the directory, and install the dependencies ```sh git clone git@github.com:dabit3/lens-protocol-frontend.git cd lens-protocol-frontend npm install # or yarn ``` 2. Run the project ```sh npm run dev ``` 3. Open the project in your browser at [localhost:3000](http://localhost:3000/) An implementation of the basic EigenTrust algorithm (http://nlp.stanford.edu/pubs/eigentrust.pdf). The algorithm is meant to find the trustworthiness of peers in a decentralized system. A (potentially sparse) matrix is populated with values representing how much peers trust each other. A map is also populated with how much trust is extended by default to a sub-set of peers. From that starting point, the algorithm converges on the global trustworthiness of each peer. text-encoding-utf-8 ============== This is a **partial** polyfill for the [Encoding Living Standard](https://encoding.spec.whatwg.org/) API for the Web, allowing encoding and decoding of textual data to and from Typed Array buffers for binary data in JavaScript. This is fork of [text-encoding](https://github.com/inexorabletash/text-encoding) that **only** support **UTF-8**. Basic examples and tests are included. ### Install ### There are a few ways you can get the `text-encoding-utf-8` library. #### Node #### `text-encoding-utf-8` is on `npm`. Simply run: ```js npm install text-encoding-utf-8 ``` Or add it to your `package.json` dependencies. ### HTML Page Usage ### ```html <script src="encoding.js"></script> ``` ### API Overview ### Basic Usage ```js var uint8array = TextEncoder(encoding).encode(string); var string = TextDecoder(encoding).decode(uint8array); ``` Streaming Decode ```js var string = "", decoder = TextDecoder(encoding), buffer; while (buffer = next_chunk()) { string += decoder.decode(buffer, {stream:true}); } string += decoder.decode(); // finish the stream ``` ### Encodings ### Only `utf-8` and `UTF-8` are supported. 
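As a quick round trip under the `utf-8` label (a hedged sketch for Node; it assumes the module exports `TextEncoder` and `TextDecoder`, and the byte values shown are simply the UTF-8 encoding of the input):

```js
var TextEncoder = require('text-encoding-utf-8').TextEncoder;
var TextDecoder = require('text-encoding-utf-8').TextDecoder;

var bytes = new TextEncoder('utf-8').encode('€5');
console.log(bytes); // Uint8Array [ 226, 130, 172, 53 ]

var text = new TextDecoder('utf-8').decode(bytes);
console.log(text); // '€5'
```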
### Non-Standard Behavior ### Only `utf-8` and `UTF-8` are supported. ### Motivation Binary size matters, especially on a mobile phone. Safari on iOS does not support TextDecoder or TextEncoder. # Reputation Social Network (Notoriety) ![](_doc/reputation-social-network.jpg) # Statuses [![NPM Version][npm-image]][npm-url] [![NPM Downloads][downloads-image]][downloads-url] [![Node.js Version][node-version-image]][node-version-url] [![Build Status][travis-image]][travis-url] [![Test Coverage][coveralls-image]][coveralls-url] HTTP status utility for node. This module provides a list of status codes and messages sourced from a few different projects: * The [IANA Status Code Registry](https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml) * The [Node.js project](https://nodejs.org/) * The [NGINX project](https://www.nginx.com/) * The [Apache HTTP Server project](https://httpd.apache.org/) ## Installation This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): ```sh $ npm install statuses ``` ## API <!-- eslint-disable no-unused-vars --> ```js var status = require('statuses') ``` ### var code = status(Integer || String) If `Integer` or `String` is a valid HTTP code or status message, then the appropriate `code` will be returned. Otherwise, an error will be thrown. <!-- eslint-disable no-undef --> ```js status(403) // => 403 status('403') // => 403 status('forbidden') // => 403 status('Forbidden') // => 403 status(306) // throws, as it's not supported by node.js ``` ### status.STATUS_CODES Returns an object which maps status codes to status messages, in the same format as the [Node.js http module](https://nodejs.org/dist/latest/docs/api/http.html#http_http_status_codes). ### status.codes Returns an array of all the status codes as `Integer`s. ### var msg = status[code] Map of `code` to `status message`. `undefined` for invalid `code`s. <!-- eslint-disable no-undef, no-unused-expressions --> ```js status[404] // => 'Not Found' ``` ### var code = status[msg] Map of `status message` to `code`. `msg` can either be title-cased or lower-cased. `undefined` for invalid `status message`s. <!-- eslint-disable no-undef, no-unused-expressions --> ```js status['not found'] // => 404 status['Not Found'] // => 404 ``` ### status.redirect[code] Returns `true` if a status code is a valid redirect status. <!-- eslint-disable no-undef, no-unused-expressions --> ```js status.redirect[200] // => undefined status.redirect[301] // => true ``` ### status.empty[code] Returns `true` if a status code expects an empty body. <!-- eslint-disable no-undef, no-unused-expressions --> ```js status.empty[200] // => undefined status.empty[204] // => true status.empty[304] // => true ``` ### status.retry[code] Returns `true` if you should retry the rest. 
<!-- eslint-disable no-undef, no-unused-expressions --> ```js status.retry[501] // => undefined status.retry[503] // => true ``` [npm-image]: https://img.shields.io/npm/v/statuses.svg [npm-url]: https://npmjs.org/package/statuses [node-version-image]: https://img.shields.io/node/v/statuses.svg [node-version-url]: https://nodejs.org/en/download [travis-image]: https://img.shields.io/travis/jshttp/statuses.svg [travis-url]: https://travis-ci.org/jshttp/statuses [coveralls-image]: https://img.shields.io/coveralls/jshttp/statuses.svg [coveralls-url]: https://coveralls.io/r/jshttp/statuses?branch=master [downloads-image]: https://img.shields.io/npm/dm/statuses.svg [downloads-url]: https://npmjs.org/package/statuses # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Current Status whatwg-url is currently up to date with the URL spec up to commit [a62223](https://github.com/whatwg/url/commit/a622235308342c9adc7fc2fd1659ff059f7d5e2a). ## API ### The `URL` Constructor The main API is the [`URL`](https://url.spec.whatwg.org/#url) export, which follows the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use this. ### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. - [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/browsers.html#serialization-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority 
slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - [`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - [`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by the string `"failure"`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ the string `"failure"`. 
node-fetch ========== [![npm version][npm-image]][npm-url] [![build status][travis-image]][travis-url] [![coverage status][codecov-image]][codecov-url] [![install size][install-size-image]][install-size-url] [![Discord][discord-image]][discord-url] A light-weight module that brings `window.fetch` to Node.js (We are looking for [v2 maintainers and collaborators](https://github.com/bitinn/node-fetch/issues/567)) [![Backers][opencollective-image]][opencollective-url] <!-- TOC --> - [Motivation](#motivation) - [Features](#features) - [Difference from client-side fetch](#difference-from-client-side-fetch) - [Installation](#installation) - [Loading and configuring the module](#loading-and-configuring-the-module) - [Common Usage](#common-usage) - [Plain text or HTML](#plain-text-or-html) - [JSON](#json) - [Simple Post](#simple-post) - [Post with JSON](#post-with-json) - [Post with form parameters](#post-with-form-parameters) - [Handling exceptions](#handling-exceptions) - [Handling client and server errors](#handling-client-and-server-errors) - [Advanced Usage](#advanced-usage) - [Streams](#streams) - [Buffer](#buffer) - [Accessing Headers and other Meta data](#accessing-headers-and-other-meta-data) - [Extract Set-Cookie Header](#extract-set-cookie-header) - [Post data using a file stream](#post-data-using-a-file-stream) - [Post with form-data (detect multipart)](#post-with-form-data-detect-multipart) - [Request cancellation with AbortSignal](#request-cancellation-with-abortsignal) - [API](#api) - [fetch(url[, options])](#fetchurl-options) - [Options](#options) - [Class: Request](#class-request) - [Class: Response](#class-response) - [Class: Headers](#class-headers) - [Interface: Body](#interface-body) - [Class: FetchError](#class-fetcherror) - [License](#license) - [Acknowledgement](#acknowledgement) <!-- /TOC --> ## Motivation Instead of implementing `XMLHttpRequest` in Node.js to run browser-specific [Fetch polyfill](https://github.com/github/fetch), why not go from native `http` to `fetch` API directly? Hence, `node-fetch`, minimal code for a `window.fetch` compatible API on Node.js runtime. See Matt Andrews' [isomorphic-fetch](https://github.com/matthew-andrews/isomorphic-fetch) or Leonardo Quixada's [cross-fetch](https://github.com/lquixada/cross-fetch) for isomorphic usage (exports `node-fetch` for server-side, `whatwg-fetch` for client-side). ## Features - Stay consistent with `window.fetch` API. - Make conscious trade-off when following [WHATWG fetch spec][whatwg-fetch] and [stream spec](https://streams.spec.whatwg.org/) implementation details, document known differences. - Use native promise but allow substituting it with [insert your favorite promise library]. - Use native Node streams for body on both request and response. - Decode content encoding (gzip/deflate) properly and convert string output (such as `res.text()` and `res.json()`) to UTF-8 automatically. - Useful extensions such as timeout, redirect limit, response size limit, [explicit errors](ERROR-HANDLING.md) for troubleshooting. ## Difference from client-side fetch - See [Known Differences](LIMITS.md) for details. - If you happen to use a missing feature that `window.fetch` offers, feel free to open an issue. - Pull requests are welcomed too! 
## Installation Current stable release (`2.x`) ```sh $ npm install node-fetch ``` ## Loading and configuring the module We suggest you load the module via `require` until the stabilization of ES modules in node: ```js const fetch = require('node-fetch'); ``` If you are using a Promise library other than native, set it through `fetch.Promise`: ```js const Bluebird = require('bluebird'); fetch.Promise = Bluebird; ``` ## Common Usage NOTE: The documentation below is up-to-date with `2.x` releases; see the [`1.x` readme](https://github.com/bitinn/node-fetch/blob/1.x/README.md), [changelog](https://github.com/bitinn/node-fetch/blob/1.x/CHANGELOG.md) and [2.x upgrade guide](UPGRADE-GUIDE.md) for the differences. #### Plain text or HTML ```js fetch('https://github.com/') .then(res => res.text()) .then(body => console.log(body)); ``` #### JSON ```js fetch('https://api.github.com/users/github') .then(res => res.json()) .then(json => console.log(json)); ``` #### Simple Post ```js fetch('https://httpbin.org/post', { method: 'POST', body: 'a=1' }) .then(res => res.json()) // expecting a json response .then(json => console.log(json)); ``` #### Post with JSON ```js const body = { a: 1 }; fetch('https://httpbin.org/post', { method: 'post', body: JSON.stringify(body), headers: { 'Content-Type': 'application/json' }, }) .then(res => res.json()) .then(json => console.log(json)); ``` #### Post with form parameters `URLSearchParams` is available in Node.js as of v7.5.0. See [official documentation](https://nodejs.org/api/url.html#url_class_urlsearchparams) for more usage methods. NOTE: The `Content-Type` header is only set automatically to `x-www-form-urlencoded` when an instance of `URLSearchParams` is given as such: ```js const { URLSearchParams } = require('url'); const params = new URLSearchParams(); params.append('a', 1); fetch('https://httpbin.org/post', { method: 'POST', body: params }) .then(res => res.json()) .then(json => console.log(json)); ``` #### Handling exceptions NOTE: 3xx-5xx responses are *NOT* exceptions and should be handled in `then()`; see the next section for more information. Adding a catch to the fetch promise chain will catch *all* exceptions, such as errors originating from node core libraries, network errors and operational errors, which are instances of FetchError. See the [error handling document](ERROR-HANDLING.md) for more details. ```js fetch('https://domain.invalid/') .catch(err => console.error(err)); ``` #### Handling client and server errors It is common to create a helper function to check that the response contains no client (4xx) or server (5xx) error responses: ```js function checkStatus(res) { if (res.ok) { // res.status >= 200 && res.status < 300 return res; } else { throw MyCustomError(res.statusText); } } fetch('https://httpbin.org/status/400') .then(checkStatus) .then(res => console.log('will not get here...')) ``` ## Advanced Usage #### Streams The "Node.js way" is to use streams when possible: ```js fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png') .then(res => { const dest = fs.createWriteStream('./octocat.png'); res.body.pipe(dest); }); ``` #### Buffer If you prefer to cache binary data in full, use buffer(). (NOTE: `buffer()` is a `node-fetch`-only API) ```js const fileType = require('file-type'); fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png') .then(res => res.buffer()) .then(buffer => fileType(buffer)) .then(type => { /* ... 
*/ }); ``` #### Accessing Headers and other Meta data ```js fetch('https://github.com/') .then(res => { console.log(res.ok); console.log(res.status); console.log(res.statusText); console.log(res.headers.raw()); console.log(res.headers.get('content-type')); }); ``` #### Extract Set-Cookie Header Unlike browsers, you can access raw `Set-Cookie` headers manually using `Headers.raw()`. This is a `node-fetch` only API. ```js fetch(url).then(res => { // returns an array of values, instead of a string of comma-separated values console.log(res.headers.raw()['set-cookie']); }); ``` #### Post data using a file stream ```js const { createReadStream } = require('fs'); const stream = createReadStream('input.txt'); fetch('https://httpbin.org/post', { method: 'POST', body: stream }) .then(res => res.json()) .then(json => console.log(json)); ``` #### Post with form-data (detect multipart) ```js const FormData = require('form-data'); const form = new FormData(); form.append('a', 1); fetch('https://httpbin.org/post', { method: 'POST', body: form }) .then(res => res.json()) .then(json => console.log(json)); // OR, using custom headers // NOTE: getHeaders() is non-standard API const form = new FormData(); form.append('a', 1); const options = { method: 'POST', body: form, headers: form.getHeaders() } fetch('https://httpbin.org/post', options) .then(res => res.json()) .then(json => console.log(json)); ``` #### Request cancellation with AbortSignal > NOTE: You may cancel streamed requests only on Node >= v8.0.0 You may cancel requests with `AbortController`. A suggested implementation is [`abort-controller`](https://www.npmjs.com/package/abort-controller). An example of timing out a request after 150ms could be achieved as the following: ```js import AbortController from 'abort-controller'; const controller = new AbortController(); const timeout = setTimeout( () => { controller.abort(); }, 150, ); fetch(url, { signal: controller.signal }) .then(res => res.json()) .then( data => { useData(data) }, err => { if (err.name === 'AbortError') { // request was aborted } }, ) .finally(() => { clearTimeout(timeout); }); ``` See [test cases](https://github.com/bitinn/node-fetch/blob/master/test/test.js) for more examples. ## API ### fetch(url[, options]) - `url` A string representing the URL for fetching - `options` [Options](#fetch-options) for the HTTP(S) request - Returns: <code>Promise&lt;[Response](#class-response)&gt;</code> Perform an HTTP(S) fetch. `url` should be an absolute url, such as `https://example.com/`. A path-relative URL (`/file/under/root`) or protocol-relative URL (`//can-be-http-or-https.com/`) will result in a rejected `Promise`. <a id="fetch-options"></a> ### Options The default values are shown after each option key. ```js { // These properties are part of the Fetch Standard method: 'GET', headers: {}, // request headers. format is the identical to that accepted by the Headers constructor (see below) body: null, // request body. can be null, a string, a Buffer, a Blob, or a Node.js Readable stream redirect: 'follow', // set to `manual` to extract redirect headers, `error` to reject redirect signal: null, // pass an instance of AbortSignal to optionally abort requests // The following properties are node-fetch extensions follow: 20, // maximum redirect count. 0 to not follow redirect timeout: 0, // req/res timeout in ms, it resets on redirect. 0 to disable (OS limit applies). Signal is recommended instead. compress: true, // support gzip/deflate content encoding. 
false to disable size: 0, // maximum response body size in bytes. 0 to disable agent: null // http(s).Agent instance or function that returns an instance (see below) } ``` ##### Default Headers If no values are set, the following request headers will be sent automatically: Header | Value ------------------- | -------------------------------------------------------- `Accept-Encoding` | `gzip,deflate` _(when `options.compress === true`)_ `Accept` | `*/*` `Connection` | `close` _(when no `options.agent` is present)_ `Content-Length` | _(automatically calculated, if possible)_ `Transfer-Encoding` | `chunked` _(when `req.body` is a stream)_ `User-Agent` | `node-fetch/1.0 (+https://github.com/bitinn/node-fetch)` Note: when `body` is a `Stream`, `Content-Length` is not set automatically. ##### Custom Agent The `agent` option allows you to specify networking related options which are out of the scope of Fetch, including and not limited to the following: - Support self-signed certificate - Use only IPv4 or IPv6 - Custom DNS Lookup See [`http.Agent`](https://nodejs.org/api/http.html#http_new_agent_options) for more information. In addition, the `agent` option accepts a function that returns `http`(s)`.Agent` instance given current [URL](https://nodejs.org/api/url.html), this is useful during a redirection chain across HTTP and HTTPS protocol. ```js const httpAgent = new http.Agent({ keepAlive: true }); const httpsAgent = new https.Agent({ keepAlive: true }); const options = { agent: function (_parsedURL) { if (_parsedURL.protocol == 'http:') { return httpAgent; } else { return httpsAgent; } } } ``` <a id="class-request"></a> ### Class: Request An HTTP(S) request containing information about URL, method, headers, and the body. This class implements the [Body](#iface-body) interface. Due to the nature of Node.js, the following properties are not implemented at this moment: - `type` - `destination` - `referrer` - `referrerPolicy` - `mode` - `credentials` - `cache` - `integrity` - `keepalive` The following node-fetch extension properties are provided: - `follow` - `compress` - `counter` - `agent` See [options](#fetch-options) for exact meaning of these extensions. #### new Request(input[, options]) <small>*(spec-compliant)*</small> - `input` A string representing a URL, or another `Request` (which will be cloned) - `options` [Options][#fetch-options] for the HTTP(S) request Constructs a new `Request` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Request/Request). In most cases, directly `fetch(url, options)` is simpler than creating a `Request` object. <a id="class-response"></a> ### Class: Response An HTTP(S) response. This class implements the [Body](#iface-body) interface. The following properties are not implemented in node-fetch at this moment: - `Response.error()` - `Response.redirect()` - `type` - `trailer` #### new Response([body[, options]]) <small>*(spec-compliant)*</small> - `body` A `String` or [`Readable` stream][node-readable] - `options` A [`ResponseInit`][response-init] options dictionary Constructs a new `Response` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Response/Response). Because Node.js does not implement service workers (for which this class was designed), one rarely has to construct a `Response` directly. #### response.ok <small>*(spec-compliant)*</small> Convenience property representing if the request ended normally. 
Will evaluate to true if the response status was greater than or equal to 200 but smaller than 300. #### response.redirected <small>*(spec-compliant)*</small> Convenience property representing if the request has been redirected at least once. Will evaluate to true if the internal redirect counter is greater than 0. <a id="class-headers"></a> ### Class: Headers This class allows manipulating and iterating over a set of HTTP headers. All methods specified in the [Fetch Standard][whatwg-fetch] are implemented. #### new Headers([init]) <small>*(spec-compliant)*</small> - `init` Optional argument to pre-fill the `Headers` object Construct a new `Headers` object. `init` can be either `null`, a `Headers` object, an key-value map object or any iterable object. ```js // Example adapted from https://fetch.spec.whatwg.org/#example-headers-class const meta = { 'Content-Type': 'text/xml', 'Breaking-Bad': '<3' }; const headers = new Headers(meta); // The above is equivalent to const meta = [ [ 'Content-Type', 'text/xml' ], [ 'Breaking-Bad', '<3' ] ]; const headers = new Headers(meta); // You can in fact use any iterable objects, like a Map or even another Headers const meta = new Map(); meta.set('Content-Type', 'text/xml'); meta.set('Breaking-Bad', '<3'); const headers = new Headers(meta); const copyOfHeaders = new Headers(headers); ``` <a id="iface-body"></a> ### Interface: Body `Body` is an abstract interface with methods that are applicable to both `Request` and `Response` classes. The following methods are not yet implemented in node-fetch at this moment: - `formData()` #### body.body <small>*(deviation from spec)*</small> * Node.js [`Readable` stream][node-readable] Data are encapsulated in the `Body` object. Note that while the [Fetch Standard][whatwg-fetch] requires the property to always be a WHATWG `ReadableStream`, in node-fetch it is a Node.js [`Readable` stream][node-readable]. #### body.bodyUsed <small>*(spec-compliant)*</small> * `Boolean` A boolean property for if this body has been consumed. Per the specs, a consumed body cannot be used again. #### body.arrayBuffer() #### body.blob() #### body.json() #### body.text() <small>*(spec-compliant)*</small> * Returns: <code>Promise</code> Consume the body and return a promise that will resolve to one of these formats. #### body.buffer() <small>*(node-fetch extension)*</small> * Returns: <code>Promise&lt;Buffer&gt;</code> Consume the body and return a promise that will resolve to a Buffer. #### body.textConverted() <small>*(node-fetch extension)*</small> * Returns: <code>Promise&lt;String&gt;</code> Identical to `body.text()`, except instead of always converting to UTF-8, encoding sniffing will be performed and text converted to UTF-8 if possible. (This API requires an optional dependency of the npm package [encoding](https://www.npmjs.com/package/encoding), which you need to install manually. `webpack` users may see [a warning message](https://github.com/bitinn/node-fetch/issues/412#issuecomment-379007792) due to this optional dependency.) <a id="class-fetcherror"></a> ### Class: FetchError <small>*(node-fetch extension)*</small> An operational error in the fetching process. See [ERROR-HANDLING.md][] for more info. <a id="class-aborterror"></a> ### Class: AbortError <small>*(node-fetch extension)*</small> An Error thrown when the request is aborted in response to an `AbortSignal`'s `abort` event. It has a `name` property of `AbortError`. See [ERROR-HANDLING.MD][] for more info. 
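A small sketch, not taken from the node-fetch docs, showing one way to tell the two error classes above apart in a `catch` handler:

```js
const fetch = require('node-fetch');

fetch('https://domain.invalid/')
  .then(res => res.json())
  .catch(err => {
    if (err.name === 'AbortError') {
      console.log('request was aborted');
    } else if (err.name === 'FetchError') {
      // operational error (DNS failure, invalid JSON body, etc.)
      console.log(err.type, err.message);
    } else {
      throw err; // likely a programmer error; re-throw
    }
  });
```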
## Acknowledgement Thanks to [github/fetch](https://github.com/github/fetch) for providing a solid implementation reference. `node-fetch` v1 was maintained by [@bitinn](https://github.com/bitinn); v2 was maintained by [@TimothyGu](https://github.com/timothygu), [@bitinn](https://github.com/bitinn) and [@jimmywarting](https://github.com/jimmywarting); v2 readme is written by [@jkantr](https://github.com/jkantr). ## License MIT [npm-image]: https://flat.badgen.net/npm/v/node-fetch [npm-url]: https://www.npmjs.com/package/node-fetch [travis-image]: https://flat.badgen.net/travis/bitinn/node-fetch [travis-url]: https://travis-ci.org/bitinn/node-fetch [codecov-image]: https://flat.badgen.net/codecov/c/github/bitinn/node-fetch/master [codecov-url]: https://codecov.io/gh/bitinn/node-fetch [install-size-image]: https://flat.badgen.net/packagephobia/install/node-fetch [install-size-url]: https://packagephobia.now.sh/result?p=node-fetch [discord-image]: https://img.shields.io/discord/619915844268326952?color=%237289DA&label=Discord&style=flat-square [discord-url]: https://discord.gg/Zxbndcm [opencollective-image]: https://opencollective.com/node-fetch/backers.svg [opencollective-url]: https://opencollective.com/node-fetch [whatwg-fetch]: https://fetch.spec.whatwg.org/ [response-init]: https://fetch.spec.whatwg.org/#responseinit [node-readable]: https://nodejs.org/api/stream.html#stream_readable_streams [mdn-headers]: https://developer.mozilla.org/en-US/docs/Web/API/Headers [LIMITS.md]: https://github.com/bitinn/node-fetch/blob/master/LIMITS.md [ERROR-HANDLING.md]: https://github.com/bitinn/node-fetch/blob/master/ERROR-HANDLING.md [UPGRADE-GUIDE.md]: https://github.com/bitinn/node-fetch/blob/master/UPGRADE-GUIDE.md # <img src="./logo.png" alt="bn.js" width="160" height="160" /> > BigNum in pure javascript [![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js) ## Install `npm install --save bn.js` ## Usage ```js const BN = require('bn.js'); var a = new BN('dead', 16); var b = new BN('101010', 2); var res = a.add(b); console.log(res.toString(10)); // 57047 ``` **Note**: decimals are not supported in this library. ## Notation ### Prefixes There are several prefixes to instructions that affect the way the work. Here is the list of them in the order of appearance in the function name: * `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs * `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like `mod()`. In such cases if the result will be negative - modulo will be added to the result to make it positive ### Postfixes * `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported. * `rn` - both argument and return value of the function are plain JavaScript Numbers. Decimals are not supported. ### Examples * `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a` * `a.umod(b)` - reduce `a` modulo `b`, returning positive value * `a.iushln(13)` - shift bits of `a` left by 13 ## Instructions Prefixes/postfixes are put in parens at the of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian). 
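Combining the prefixes and postfixes above, for example (a short sketch; the values are arbitrary):

```js
const BN = require('bn.js');

const a = new BN(1000);
a.iaddn(5);                      // `i` + `n`: in-place addition of a plain JavaScript Number
console.log(a.toString(10));     // '1005'
console.log(a.toArray('be', 4)); // big-endian bytes, zero-padded to length 4: [ 0, 0, 3, 237 ]
```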
### Utilities * `a.clone()` - clone number * `a.toString(base, length)` - convert to base-string and pad with zeroes * `a.toNumber()` - convert to Javascript Number (limited to 53 bits) * `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`) * `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding * `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array` * `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)` * `a.bitLength()` - get number of bits occupied * `a.zeroBits()` - return number of less-significant consequent zero bits (example: `1010000` has 4 zero bits) * `a.byteLength()` - return number of bytes occupied * `a.isNeg()` - true if the number is negative * `a.isEven()` - no comments * `a.isOdd()` - no comments * `a.isZero()` - no comments * `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`) * `a.lt(b)` - `a` less than `b` (`n`) * `a.lte(b)` - `a` less than or equals `b` (`n`) * `a.gt(b)` - `a` greater than `b` (`n`) * `a.gte(b)` - `a` greater than or equals `b` (`n`) * `a.eq(b)` - `a` equals `b` (`n`) * `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width * `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width * `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance * `BN.max(a, b)` - return `a` if `a` bigger than `b` * `BN.min(a, b)` - return `a` if `a` less than `b` ### Arithmetics * `a.neg()` - negate sign (`i`) * `a.abs()` - absolute value (`i`) * `a.add(b)` - addition (`i`, `n`, `in`) * `a.sub(b)` - subtraction (`i`, `n`, `in`) * `a.mul(b)` - multiply (`i`, `n`, `in`) * `a.sqr()` - square (`i`) * `a.pow(b)` - raise `a` to the power of `b` * `a.div(b)` - divide (`divn`, `idivn`) * `a.mod(b)` - reduct (`u`, `n`) (but no `umodn`) * `a.divmod(b)` - quotient and modulus obtained by dividing * `a.divRound(b)` - rounded division ### Bit operations * `a.or(b)` - or (`i`, `u`, `iu`) * `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future) * `a.xor(b)` - xor (`i`, `u`, `iu`) * `a.setn(b, value)` - set specified bit to `value` * `a.shln(b)` - shift left (`i`, `u`, `iu`) * `a.shrn(b)` - shift right (`i`, `u`, `iu`) * `a.testn(b)` - test if specified bit is set * `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`) * `a.bincn(b)` - add `1 << b` to the number * `a.notn(w)` - not (for the width specified by `w`) (`i`) ### Reduction * `a.gcd(b)` - GCD * `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., gcd: ... }`) * `a.invm(b)` - inverse `a` modulo `b` ## Fast reduction When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using special algorithm for [Mersenne Prime][1]. ### Reduction context To enable this tricks one should create a reduction context: ```js var red = BN.red(num); ``` where `num` is just a BN instance. Or: ```js var red = BN.red(primeName); ``` Where `primeName` is either of these [Mersenne Primes][1]: * `'k256'` * `'p224'` * `'p192'` * `'p25519'` Or: ```js var red = BN.mont(num); ``` To reduce numbers with [Montgomery trick][0]. 
`.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`. ### Converting numbers Before performing anything in reduction context - numbers should be converted to it. Usually, this means that one should: * Convert inputs to reducted ones * Operate on them in reduction context * Convert outputs back from the reduction context Here is how one may convert numbers to `red`: ```js var redA = a.toRed(red); ``` Where `red` is a reduction context created using instructions above Here is how to convert them back: ```js var a = redA.fromRed(); ``` ### Red instructions Most of the instructions from the very start of this readme have their counterparts in red context: * `a.redAdd(b)`, `a.redIAdd(b)` * `a.redSub(b)`, `a.redISub(b)` * `a.redShl(num)` * `a.redMul(b)`, `a.redIMul(b)` * `a.redSqr()`, `a.redISqr()` * `a.redSqrt()` - square root modulo reduction context's prime * `a.redInvm()` - modular inverse of the number * `a.redNeg()` * `a.redPow(b)` - modular exponentiation ### Number Size Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers. ## LICENSE This software is licensed under the MIT License. [0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication [1]: https://en.wikipedia.org/wiki/Mersenne_prime # js-sha256 [![Build Status](https://travis-ci.org/emn178/js-sha256.svg?branch=master)](https://travis-ci.org/emn178/js-sha256) [![Coverage Status](https://coveralls.io/repos/emn178/js-sha256/badge.svg?branch=master)](https://coveralls.io/r/emn178/js-sha256?branch=master) [![CDNJS](https://img.shields.io/cdnjs/v/js-sha256.svg)](https://cdnjs.com/libraries/js-sha256/) [![NPM](https://nodei.co/npm/js-sha256.png?stars&downloads)](https://nodei.co/npm/js-sha256/) A simple SHA-256 / SHA-224 hash function for JavaScript supports UTF-8 encoding. ## Demo [SHA256 Online](http://emn178.github.io/online-tools/sha256.html) [SHA224 Online](http://emn178.github.io/online-tools/sha224.html) ## Download [Compress](https://raw.github.com/emn178/js-sha256/master/build/sha256.min.js) [Uncompress](https://raw.github.com/emn178/js-sha256/master/src/sha256.js) ## Installation You can also install js-sha256 by using Bower. bower install js-sha256 For node.js, you can use this command to install: npm install js-sha256 ## Usage You could use like this: ```JavaScript sha256('Message to hash'); sha224('Message to hash'); var hash = sha256.create(); hash.update('Message to hash'); hash.hex(); var hash2 = sha256.update('Message to hash'); hash2.update('Message2 to hash'); hash2.array(); // HMAC sha256.hmac('key', 'Message to hash'); sha224.hmac('key', 'Message to hash'); var hash = sha256.hmac.create('key'); hash.update('Message to hash'); hash.hex(); var hash2 = sha256.hmac.update('key', 'Message to hash'); hash2.update('Message2 to hash'); hash2.array(); ``` If you use node.js, you should require the module first: ```JavaScript var sha256 = require('js-sha256'); ``` or ```JavaScript var sha256 = require('js-sha256').sha256; var sha224 = require('js-sha256').sha224; ``` It supports AMD: ```JavaScript require(['your/path/sha256.js'], function(sha256) { // ... 
}); ``` or TypeScript ```TypeScript import { sha256, sha224 } from 'js-sha256'; ``` ## Example ```JavaScript sha256(''); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256('The quick brown fox jumps over the lazy dog'); // d7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592 sha256('The quick brown fox jumps over the lazy dog.'); // ef537f25c895bfa782526529a9b63d97aa631564d5d789c2b765448c8635fb6c sha224(''); // d14a028c2a3a2bc9476102bb288234c415a2b01f828ea62ac5b3e42f sha224('The quick brown fox jumps over the lazy dog'); // 730e109bd7a8a32b1cb9d9a09aa2325d2430587ddbc0c38bad911525 sha224('The quick brown fox jumps over the lazy dog.'); // 619cba8e8e05826e9b8c519c0a5c68f4fb653e8a3d8aa04bb2c8cd4c // It also supports UTF-8 encoding sha256('中文'); // 72726d8818f693066ceb69afa364218b692e62ea92b385782363780f47529c21 sha224('中文'); // dfbab71afdf54388af4d55f8bd3de8c9b15e0eb916bf9125f4a959d4 // It also supports byte `Array`, `Uint8Array`, `ArrayBuffer` input sha256([]); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256(new Uint8Array([211, 212])); // 182889f925ae4e5cc37118ded6ed87f7bdc7cab5ec5e78faef2e50048999473f // Different output sha256(''); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256.hex(''); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256.array(''); // [227, 176, 196, 66, 152, 252, 28, 20, 154, 251, 244, 200, 153, 111, 185, 36, 39, 174, 65, 228, 100, 155, 147, 76, 164, 149, 153, 27, 120, 82, 184, 85] sha256.digest(''); // [227, 176, 196, 66, 152, 252, 28, 20, 154, 251, 244, 200, 153, 111, 185, 36, 39, 174, 65, 228, 100, 155, 147, 76, 164, 149, 153, 27, 120, 82, 184, 85] sha256.arrayBuffer(''); // ArrayBuffer ``` ## License The project is released under the [MIT license](http://www.opensource.org/licenses/MIT). ## Contact The project's website is located at https://github.com/emn178/js-sha256 Author: Chen, Yi-Cyuan (emn178@gmail.com) # safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. 
You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. 
```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. 
The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. 
It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. 
**When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. 
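To make this concrete, here is a minimal sketch of the earlier `toHex` example rewritten against the new API (the `toHex` helper itself is just the illustrative function from above, not part of any package): when the attacker-controlled value is a number, `Buffer.from` throws instead of handing back uninitialized memory.

```js
var Buffer = require('safe-buffer').Buffer

// Convert UTF-8 strings to hex -- same helper as above, now using Buffer.from
function toHex (str) {
  return Buffer.from(str).toString('hex')
}

console.log(toHex('hello')) // '68656c6c6f'

try {
  toHex(1000) // a Number argument no longer allocates uninitialized memory
} catch (err) {
  console.log(err.name) // 'TypeError'
}
```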
## links

- [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514)
- [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67)
- [Node Security Project disclosure for `bittorrent-dht`](https://nodesecurity.io/advisories/68)

## credit

The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/).

Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/).

Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code.

## license

MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org)

# Polyfill for `Object.setPrototypeOf`

[![NPM Version](https://img.shields.io/npm/v/setprototypeof.svg)](https://npmjs.org/package/setprototypeof)
[![NPM Downloads](https://img.shields.io/npm/dm/setprototypeof.svg)](https://npmjs.org/package/setprototypeof)
[![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg)](https://github.com/standard/standard)

A simple cross-platform implementation to set the prototype of an instantiated object. Supports all modern browsers and at least back to IE8.

## Usage:

```bash
$ npm install --save setprototypeof
```

```javascript
var setPrototypeOf = require('setprototypeof')

var obj = {}
setPrototypeOf(obj, {
  foo: function () {
    return 'bar'
  }
})
obj.foo() // bar
```

TypeScript is also supported:

```typescript
import setPrototypeOf from 'setprototypeof'
```

TweetNaCl.js
============

Port of [TweetNaCl](http://tweetnacl.cr.yp.to) / [NaCl](http://nacl.cr.yp.to/) to JavaScript for modern browsers and Node.js. Public domain.

[![Build Status](https://travis-ci.org/dchest/tweetnacl-js.svg?branch=master)](https://travis-ci.org/dchest/tweetnacl-js)

Demo: <https://dchest.github.io/tweetnacl-js/>

Documentation
=============

* [Overview](#overview)
* [Audits](#audits)
* [Installation](#installation)
* [Examples](#examples)
* [Usage](#usage)
* [Public-key authenticated encryption (box)](#public-key-authenticated-encryption-box)
* [Secret-key authenticated encryption (secretbox)](#secret-key-authenticated-encryption-secretbox)
* [Scalar multiplication](#scalar-multiplication)
* [Signatures](#signatures)
* [Hashing](#hashing)
* [Random bytes generation](#random-bytes-generation)
* [Constant-time comparison](#constant-time-comparison)
* [System requirements](#system-requirements)
* [Development and testing](#development-and-testing)
* [Benchmarks](#benchmarks)
* [Contributors](#contributors)
* [Who uses it](#who-uses-it)

Overview
--------

The primary goal of this project is to produce a translation of TweetNaCl to JavaScript which is as close as possible to the original C implementation, plus a thin layer of idiomatic high-level API on top of it.

There are two versions; you can use either of them:

* `nacl.js` is the port of TweetNaCl with minimum differences from the original + high-level API.
* `nacl-fast.js` is like `nacl.js`, but with some functions replaced with faster versions. (Used by default when importing the NPM package.)
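For instance (a small sketch of an assumed-typical setup, not anything prescribed by the project), requiring the npm package in Node.js gives you the fast build described above:

```js
// Loads nacl-fast.js, the default entry point of the npm package.
const nacl = require('tweetnacl');

console.log(typeof nacl.box);  // 'function'
console.log(typeof nacl.sign); // 'function'
```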
Audits ------ TweetNaCl.js has been audited by [Cure53](https://cure53.de/) in January-February 2017 (audit was sponsored by [Deletype](https://deletype.com)): > The overall outcome of this audit signals a particularly positive assessment > for TweetNaCl-js, as the testing team was unable to find any security > problems in the library. It has to be noted that this is an exceptionally > rare result of a source code audit for any project and must be seen as a true > testament to a development proceeding with security at its core. > > To reiterate, the TweetNaCl-js project, the source code was found to be > bug-free at this point. > > [...] > > In sum, the testing team is happy to recommend the TweetNaCl-js project as > likely one of the safer and more secure cryptographic tools among its > competition. [Read full audit report](https://cure53.de/tweetnacl.pdf) Installation ------------ You can install TweetNaCl.js via a package manager: [Yarn](https://yarnpkg.com/): $ yarn add tweetnacl [NPM](https://www.npmjs.org/): $ npm install tweetnacl or [download source code](https://github.com/dchest/tweetnacl-js/releases). Examples -------- You can find usage examples in our [wiki](https://github.com/dchest/tweetnacl-js/wiki/Examples). Usage ----- All API functions accept and return bytes as `Uint8Array`s. If you need to encode or decode strings, use functions from <https://github.com/dchest/tweetnacl-util-js> or one of the more robust codec packages. In Node.js v4 and later `Buffer` objects are backed by `Uint8Array`s, so you can freely pass them to TweetNaCl.js functions as arguments. The returned objects are still `Uint8Array`s, so if you need `Buffer`s, you'll have to convert them manually; make sure to convert using copying: `Buffer.from(array)` (or `new Buffer(array)` in Node.js v4 or earlier), instead of sharing: `Buffer.from(array.buffer)` (or `new Buffer(array.buffer)` Node 4 or earlier), because some functions return subarrays of their buffers. ### Public-key authenticated encryption (box) Implements *x25519-xsalsa20-poly1305*. #### nacl.box.keyPair() Generates a new random key pair for box and returns it as an object with `publicKey` and `secretKey` members: { publicKey: ..., // Uint8Array with 32-byte public key secretKey: ... // Uint8Array with 32-byte secret key } #### nacl.box.keyPair.fromSecretKey(secretKey) Returns a key pair for box with public key corresponding to the given secret key. #### nacl.box(message, nonce, theirPublicKey, mySecretKey) Encrypts and authenticates message using peer's public key, our secret key, and the given nonce, which must be unique for each distinct message for a key pair. Returns an encrypted and authenticated message, which is `nacl.box.overheadLength` longer than the original message. #### nacl.box.open(box, nonce, theirPublicKey, mySecretKey) Authenticates and decrypts the given box with peer's public key, our secret key, and the given nonce. Returns the original message, or `null` if authentication fails. #### nacl.box.before(theirPublicKey, mySecretKey) Returns a precomputed shared key which can be used in `nacl.box.after` and `nacl.box.open.after`. #### nacl.box.after(message, nonce, sharedKey) Same as `nacl.box`, but uses a shared key precomputed with `nacl.box.before`. #### nacl.box.open.after(box, nonce, sharedKey) Same as `nacl.box.open`, but uses a shared key precomputed with `nacl.box.before`. #### Constants ##### nacl.box.publicKeyLength = 32 Length of public key in bytes. ##### nacl.box.secretKeyLength = 32 Length of secret key in bytes. 
##### nacl.box.sharedKeyLength = 32 Length of precomputed shared key in bytes. ##### nacl.box.nonceLength = 24 Length of nonce in bytes. ##### nacl.box.overheadLength = 16 Length of overhead added to box compared to original message. ### Secret-key authenticated encryption (secretbox) Implements *xsalsa20-poly1305*. #### nacl.secretbox(message, nonce, key) Encrypts and authenticates message using the key and the nonce. The nonce must be unique for each distinct message for this key. Returns an encrypted and authenticated message, which is `nacl.secretbox.overheadLength` longer than the original message. #### nacl.secretbox.open(box, nonce, key) Authenticates and decrypts the given secret box using the key and the nonce. Returns the original message, or `null` if authentication fails. #### Constants ##### nacl.secretbox.keyLength = 32 Length of key in bytes. ##### nacl.secretbox.nonceLength = 24 Length of nonce in bytes. ##### nacl.secretbox.overheadLength = 16 Length of overhead added to secret box compared to original message. ### Scalar multiplication Implements *x25519*. #### nacl.scalarMult(n, p) Multiplies an integer `n` by a group element `p` and returns the resulting group element. #### nacl.scalarMult.base(n) Multiplies an integer `n` by a standard group element and returns the resulting group element. #### Constants ##### nacl.scalarMult.scalarLength = 32 Length of scalar in bytes. ##### nacl.scalarMult.groupElementLength = 32 Length of group element in bytes. ### Signatures Implements [ed25519](http://ed25519.cr.yp.to). #### nacl.sign.keyPair() Generates new random key pair for signing and returns it as an object with `publicKey` and `secretKey` members: { publicKey: ..., // Uint8Array with 32-byte public key secretKey: ... // Uint8Array with 64-byte secret key } #### nacl.sign.keyPair.fromSecretKey(secretKey) Returns a signing key pair with public key corresponding to the given 64-byte secret key. The secret key must have been generated by `nacl.sign.keyPair` or `nacl.sign.keyPair.fromSeed`. #### nacl.sign.keyPair.fromSeed(seed) Returns a new signing key pair generated deterministically from a 32-byte seed. The seed must contain enough entropy to be secure. This method is not recommended for general use: instead, use `nacl.sign.keyPair` to generate a new key pair from a random seed. #### nacl.sign(message, secretKey) Signs the message using the secret key and returns a signed message. #### nacl.sign.open(signedMessage, publicKey) Verifies the signed message and returns the message without signature. Returns `null` if verification failed. #### nacl.sign.detached(message, secretKey) Signs the message using the secret key and returns a signature. #### nacl.sign.detached.verify(message, signature, publicKey) Verifies the signature for the message and returns `true` if verification succeeded or `false` if it failed. #### Constants ##### nacl.sign.publicKeyLength = 32 Length of signing public key in bytes. ##### nacl.sign.secretKeyLength = 64 Length of signing secret key in bytes. ##### nacl.sign.seedLength = 32 Length of seed for `nacl.sign.keyPair.fromSeed` in bytes. ##### nacl.sign.signatureLength = 64 Length of signature in bytes. ### Hashing Implements *SHA-512*. #### nacl.hash(message) Returns SHA-512 hash of the message. #### Constants ##### nacl.hash.hashLength = 64 Length of hash in bytes. ### Random bytes generation #### nacl.randomBytes(length) Returns a `Uint8Array` of the given length containing random bytes of cryptographic quality. 
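As a rough end-to-end sketch (an illustration of the APIs above, not taken from the upstream docs), the secretbox functions can be combined with `nacl.randomBytes` for the key and nonce:

```js
const nacl = require('tweetnacl');

// Key and nonce sizes come from the constants documented above.
const key = nacl.randomBytes(nacl.secretbox.keyLength);     // 32 bytes
const nonce = nacl.randomBytes(nacl.secretbox.nonceLength); // 24 bytes

// Messages are plain Uint8Arrays; this one spells "hello".
const message = new Uint8Array([104, 101, 108, 108, 111]);

const box = nacl.secretbox(message, nonce, key);     // ciphertext + 16-byte overhead
const opened = nacl.secretbox.open(box, nonce, key); // Uint8Array, or null on tamper

console.log(opened !== null && nacl.verify(opened, message)); // true
```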
**Implementation note** TweetNaCl.js uses the following methods to generate random bytes, depending on the platform it runs on: * `window.crypto.getRandomValues` (WebCrypto standard) * `window.msCrypto.getRandomValues` (Internet Explorer 11) * `crypto.randomBytes` (Node.js) If the platform doesn't provide a suitable PRNG, the following functions, which require random numbers, will throw exception: * `nacl.randomBytes` * `nacl.box.keyPair` * `nacl.sign.keyPair` Other functions are deterministic and will continue working. If a platform you are targeting doesn't implement secure random number generator, but you somehow have a cryptographically-strong source of entropy (not `Math.random`!), and you know what you are doing, you can plug it into TweetNaCl.js like this: nacl.setPRNG(function(x, n) { // ... copy n random bytes into x ... }); Note that `nacl.setPRNG` *completely replaces* internal random byte generator with the one provided. ### Constant-time comparison #### nacl.verify(x, y) Compares `x` and `y` in constant time and returns `true` if their lengths are non-zero and equal, and their contents are equal. Returns `false` if either of the arguments has zero length, or arguments have different lengths, or their contents differ. System requirements ------------------- TweetNaCl.js supports modern browsers that have a cryptographically secure pseudorandom number generator and typed arrays, including the latest versions of: * Chrome * Firefox * Safari (Mac, iOS) * Internet Explorer 11 Other systems: * Node.js Development and testing ------------------------ Install NPM modules needed for development: $ npm install To build minified versions: $ npm run build Tests use minified version, so make sure to rebuild it every time you change `nacl.js` or `nacl-fast.js`. ### Testing To run tests in Node.js: $ npm run test-node By default all tests described here work on `nacl.min.js`. To test other versions, set environment variable `NACL_SRC` to the file name you want to test. For example, the following command will test fast minified version: $ NACL_SRC=nacl-fast.min.js npm run test-node To run full suite of tests in Node.js, including comparing outputs of JavaScript port to outputs of the original C version: $ npm run test-node-all To prepare tests for browsers: $ npm run build-test-browser and then open `test/browser/test.html` (or `test/browser/test-fast.html`) to run them. To run tests in both Node and Electron: $ npm test ### Benchmarking To run benchmarks in Node.js: $ npm run bench $ NACL_SRC=nacl-fast.min.js npm run bench To run benchmarks in a browser, open `test/benchmark/bench.html` (or `test/benchmark/bench-fast.html`). 
Benchmarks ---------- For reference, here are benchmarks from MacBook Pro (Retina, 13-inch, Mid 2014) laptop with 2.6 GHz Intel Core i5 CPU (Intel) in Chrome 53/OS X and Xiaomi Redmi Note 3 smartphone with 1.8 GHz Qualcomm Snapdragon 650 64-bit CPU (ARM) in Chrome 52/Android: | | nacl.js Intel | nacl-fast.js Intel | nacl.js ARM | nacl-fast.js ARM | | ------------- |:-------------:|:-------------------:|:-------------:|:-----------------:| | salsa20 | 1.3 MB/s | 128 MB/s | 0.4 MB/s | 43 MB/s | | poly1305 | 13 MB/s | 171 MB/s | 4 MB/s | 52 MB/s | | hash | 4 MB/s | 34 MB/s | 0.9 MB/s | 12 MB/s | | secretbox 1K | 1113 op/s | 57583 op/s | 334 op/s | 14227 op/s | | box 1K | 145 op/s | 718 op/s | 37 op/s | 368 op/s | | scalarMult | 171 op/s | 733 op/s | 56 op/s | 380 op/s | | sign | 77 op/s | 200 op/s | 20 op/s | 61 op/s | | sign.open | 39 op/s | 102 op/s | 11 op/s | 31 op/s | (You can run benchmarks on your devices by clicking on the links at the bottom of the [home page](https://tweetnacl.js.org)). In short, with *nacl-fast.js* and 1024-byte messages you can expect to encrypt and authenticate more than 57000 messages per second on a typical laptop or more than 14000 messages per second on a $170 smartphone, sign about 200 and verify 100 messages per second on a laptop or 60 and 30 messages per second on a smartphone, per CPU core (with Web Workers you can do these operations in parallel), which is good enough for most applications. Contributors ------------ See AUTHORS.md file. Third-party libraries based on TweetNaCl.js ------------------------------------------- * [forward-secrecy](https://github.com/alax/forward-secrecy) — Axolotl ratchet implementation * [nacl-stream](https://github.com/dchest/nacl-stream-js) - streaming encryption * [tweetnacl-auth-js](https://github.com/dchest/tweetnacl-auth-js) — implementation of [`crypto_auth`](http://nacl.cr.yp.to/auth.html) * [tweetnacl-sealed-box](https://github.com/whs/tweetnacl-sealed-box) — implementation of [`sealed boxes`](https://download.libsodium.org/doc/public-key_cryptography/sealed_boxes.html) * [chloride](https://github.com/dominictarr/chloride) - unified API for various NaCl modules Who uses it ----------- Some notable users of TweetNaCl.js: * [GitHub](https://github.com) * [MEGA](https://github.com/meganz/webclient) * [Stellar](https://www.stellar.org/) * [miniLock](https://github.com/kaepora/miniLock) bs58 ==== [![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58) JavaScript component to compute base 58 encoding. This encoding is typically used for crypto currencies such as Bitcoin. **Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library. Install ------- npm i --save bs58 API --- ### encode(input) `input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`. **example**: ```js const bs58 = require('bs58') const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex') const address = bs58.encode(bytes) console.log(address) // => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS ``` ### decode(input) `input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html). 
**example**:

```js
const bs58 = require('bs58')

const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS'
const bytes = bs58.decode(address)
console.log(bytes.toString('hex'))
// => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187
```

Hack / Test
-----------

Uses JavaScript standard style. Read more:

[![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard)

Credits
-------

- [Mike Hearn](https://github.com/mikehearn) for original Java implementation
- [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript
- [Stephan Pair](https://github.com/gasteve) for buffer improvements
- [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib
- [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency

License
-------

MIT

# capability.js - javascript environment capability detection

[![Build Status](https://travis-ci.org/inf3rno/capability.png?branch=master)](https://travis-ci.org/inf3rno/capability)

The capability.js library provides capability detection for different javascript environments.

## Documentation

This documentation is still incomplete.

### Installation

```bash
npm install capability
```

```bash
bower install capability
```

#### Environment compatibility

The lib requires only basic javascript features, so it will run in every js environment.

#### Requirements

If you want to use the lib in a browser, you'll need a node module loader, e.g. browserify, webpack, etc.

#### Usage

In this documentation I used the lib as follows:

```js
var capability = require("capability");
```

### Capabilities API

#### Defining a capability

You can define a capability by using the `define(name, test)` function.

```js
capability.define("Object.create", function () {
    return Object.create;
});
```

The `name` parameter should contain the identifier of the capability and the `test` parameter should contain a function which can detect the capability. If the capability is supported by the environment, `test()` should return `true`, otherwise it should return `false`. You don't have to convert the return value into a `Boolean`; the library will do that for you, so you won't have memory leaks because of this.

#### Testing a capability

The `test(name)` function returns a `Boolean` indicating whether the capability is supported by the current environment.

```js
console.log(capability.test("Object.create"));
// true - in recent environments
// false - in pre-ES5 environments without Object.create
```

You can use `capability(name)` instead of `capability.test(name)` if you want shorter code for optional requirements.

#### Checking a capability

The `check(name)` function will throw an Error when the capability is not supported by the current environment.

```js
capability.check("Object.create");
// this will throw an Error in pre-ES5 environments without Object.create
```

#### Checking capability with require and modules

It is possible to check the environment with `require()` by adding a module which calls the `check(name)` function. For each capability definition in this lib I added such a module, so you can do for example `require("capability/es5")`. Of course, you can do fun stuff if you want, e.g. you can call multiple `check`s from a single `requirements.js` file in your lib, as in the sketch below.
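A minimal sketch of that idea (the file name `requirements.js` and the chosen capability names are just examples; any of the definitions listed below would work):

```js
// requirements.js - fail fast if the environment lacks what this lib needs
var capability = require("capability");

capability.check("Object.create");
capability.check("Object.defineProperty");
capability.check("Array.prototype.forEach");
```

Requiring this file once from your entry point (`require("./requirements")`) makes an unsupported environment fail with a clear Error up front instead of a confusing one later.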
### Definitions Currently the following definitions are supported by the lib: - strict mode - `arguments.callee.caller` - es5 - `Array.prototype.forEach` - `Array.prototype.map` - `Function.prototype.bind` - `Object.create` - `Object.defineProperties` - `Object.defineProperty` - `Object.prototype.hasOwnProperty` - `Error.captureStackTrace` - `Error.prototype.stack` ## License MIT - 2016 Jánszky László Lajos # u3 - Utility Functions This lib contains utility functions for e3, dataflower and other projects. ## Documentation ### Installation ```bash npm install u3 ``` ```bash bower install u3 ``` #### Usage In this documentation I used the lib as follows: ```js var u3 = require("u3"), cache = u3.cache, eachCombination = u3.eachCombination; ``` ### Function wrappers #### cache The `cache(fn)` function caches the fn results, so by the next calls it will return the result of the first call. You can use different arguments, but they won't affect the return value. ```js var a = cache(function fn(x, y, z){ return x + y + z; }); console.log(a(1, 2, 3)); // 6 console.log(a()); // 6 console.log(a()); // 6 ``` It is possible to cache a value too. ```js var a = cache(1 + 2 + 3); console.log(a()); // 6 console.log(a()); // 6 console.log(a()); // 6 ``` ### Math #### eachCombination The `eachCombination(alternativesByDimension, callback)` calls the `callback(a,b,c,...)` on each combination of the `alternatives[a[],b[],c[],...]`. ```js eachCombination([ [1, 2, 3], ["a", "b"] ], console.log); /* 1, "a" 1, "b" 2, "a" 2, "b" 3, "a" 3, "b" */ ``` You can use any dimension and number of alternatives. In the current example we used 2 dimensions. By the first dimension we used 3 alternatives: `[1, 2, 3]` and by the second dimension we used 2 alternatives: `["a", "b"]`. ## License MIT - 2016 Jánszky László Lajos # Borsh JS [![Project license](https://img.shields.io/badge/license-Apache2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Project license](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT) [![Discord](https://img.shields.io/discord/490367152054992913?label=discord)](https://discord.gg/Vyp7ETM) [![Travis status](https://travis-ci.com/near/borsh.svg?branch=master)](https://travis-ci.com/near/borsh-js) [![NPM version](https://img.shields.io/npm/v/borsh.svg?style=flat-square)](https://npmjs.com/borsh) [![Size on NPM](https://img.shields.io/bundlephobia/minzip/borsh.svg?style=flat-square)](https://npmjs.com/borsh) **Borsh JS** is an implementation of the [Borsh] binary serialization format for JavaScript and TypeScript projects. Borsh stands for _Binary Object Representation Serializer for Hashing_. It is meant to be used in security-critical projects as it prioritizes consistency, safety, speed, and comes with a strict specification. 
## Examples ### Serializing an object ```javascript const value = new Test({ x: 255, y: 20, z: '123', q: [1, 2, 3] }); const schema = new Map([[Test, { kind: 'struct', fields: [['x', 'u8'], ['y', 'u64'], ['z', 'string'], ['q', [3]]] }]]); const buffer = borsh.serialize(schema, value); ``` ### Deserializing an object ```javascript const newValue = borsh.deserialize(schema, Test, buffer); ``` ## Type Mappings | Borsh | TypeScript | |-----------------------|----------------| | `u8` integer | `number` | | `u16` integer | `number` | | `u32` integer | `number` | | `u64` integer | `BN` | | `u128` integer | `BN` | | `u256` integer | `BN` | | `u512` integer | `BN` | | `f32` float | N/A | | `f64` float | N/A | | fixed-size byte array | `Uint8Array` | | UTF-8 string | `string` | | option | `null` or type | | map | N/A | | set | N/A | | structs | `any` | ## Contributing Install dependencies: ```bash yarn install ``` Continuously build with: ```bash yarn dev ``` Run tests: ```bash yarn test ``` Run linter ```bash yarn lint ``` ## Publish Prepare `dist` version by running: ```bash yarn build ``` When publishing to npm use [np](https://github.com/sindresorhus/np). # License This repository is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See [LICENSE-MIT](LICENSE-MIT.txt) and [LICENSE-APACHE](LICENSE-APACHE) for details. [Borsh]: https://borsh.io Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor). This package exports standard `inherits` from node.js `util` module in node environment, but also provides alternative browser-friendly implementation through [browser field](https://gist.github.com/shtylman/4339901). Alternative implementation is a literal copy of standard one located in standalone module to avoid requiring of `util`. It also has a shim for old browsers with no `Object.create` support. While keeping you sure you are using standard `inherits` implementation in node.js environment, it allows bundlers such as [browserify](https://github.com/substack/node-browserify) to not include full `util` package to your client code if all you need is just `inherits` function. It worth, because browser shim for `util` package is large and `inherits` is often the single function you need from it. It's recommended to use this package instead of `require('util').inherits` for any code that has chances to be used not only in node.js but in browser too. ## usage ```js var inherits = require('inherits'); // then use exactly as the standard one ``` ## note on version ~1.0 Version ~1.0 had completely different motivation and is not compatible neither with 2.0 nor with standard node.js `inherits`. If you are using version ~1.0 and planning to switch to ~2.0, be careful: * new version uses `super_` instead of `super` for referencing superclass * new version overwrites current prototype while old one preserves any existing fields on it # http-errors [![NPM Version][npm-version-image]][npm-url] [![NPM Downloads][npm-downloads-image]][node-url] [![Node.js Version][node-image]][node-url] [![Build Status][ci-image]][ci-url] [![Test Coverage][coveralls-image]][coveralls-url] Create HTTP errors for Express, Koa, Connect, etc. with ease. ## Install This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). 
Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): ```bash $ npm install http-errors ``` ## Example ```js var createError = require('http-errors') var express = require('express') var app = express() app.use(function (req, res, next) { if (!req.user) return next(createError(401, 'Please login to view this page.')) next() }) ``` ## API This is the current API, currently extracted from Koa and subject to change. ### Error Properties - `expose` - can be used to signal if `message` should be sent to the client, defaulting to `false` when `status` >= 500 - `headers` - can be an object of header names to values to be sent to the client, defaulting to `undefined`. When defined, the key names should all be lower-cased - `message` - the traditional error message, which should be kept short and all single line - `status` - the status code of the error, mirroring `statusCode` for general compatibility - `statusCode` - the status code of the error, defaulting to `500` ### createError([status], [message], [properties]) Create a new error object with the given message `msg`. The error object inherits from `createError.HttpError`. ```js var err = createError(404, 'This video does not exist!') ``` - `status: 500` - the status code as a number - `message` - the message of the error, defaulting to node's text for that status code. - `properties` - custom properties to attach to the object ### createError([status], [error], [properties]) Extend the given `error` object with `createError.HttpError` properties. This will not alter the inheritance of the given `error` object, and the modified `error` object is the return value. <!-- eslint-disable no-redeclare --> ```js fs.readFile('foo.txt', function (err, buf) { if (err) { if (err.code === 'ENOENT') { var httpError = createError(404, err, { expose: false }) } else { var httpError = createError(500, err) } } }) ``` - `status` - the status code as a number - `error` - the error object to extend - `properties` - custom properties to attach to the object ### createError.isHttpError(val) Determine if the provided `val` is an `HttpError`. This will return `true` if the error inherits from the `HttpError` constructor of this module or matches the "duck type" for an error this module creates. All outputs from the `createError` factory will return `true` for this function, including if an non-`HttpError` was passed into the factory. ### new createError\[code || name\](\[msg]\)) Create a new error object with the given message `msg`. The error object inherits from `createError.HttpError`. ```js var err = new createError.NotFound() ``` - `code` - the status code as a number - `name` - the name of the error as a "bumpy case", i.e. `NotFound` or `InternalServerError`. 
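Putting these pieces together, here is a hedged sketch (reusing the Express `app` from the example above; the route path and message are made up) of a named constructor flowing into an error handler that uses `isHttpError` and the properties documented earlier:

```js
var createError = require('http-errors')

// somewhere in a route: signal "not found" without touching res directly
app.get('/videos/:id', function (req, res, next) {
  next(new createError.NotFound('This video does not exist!'))
})

// central error handler
app.use(function (err, req, res, next) {
  if (!createError.isHttpError(err)) err = createError(500, err)
  res.status(err.status)
  // `expose` (see Error Properties above) says whether the message is safe to send
  res.send(err.expose ? err.message : 'Internal Server Error')
})
```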
#### List of all constructors |Status Code|Constructor Name | |-----------|-----------------------------| |400 |BadRequest | |401 |Unauthorized | |402 |PaymentRequired | |403 |Forbidden | |404 |NotFound | |405 |MethodNotAllowed | |406 |NotAcceptable | |407 |ProxyAuthenticationRequired | |408 |RequestTimeout | |409 |Conflict | |410 |Gone | |411 |LengthRequired | |412 |PreconditionFailed | |413 |PayloadTooLarge | |414 |URITooLong | |415 |UnsupportedMediaType | |416 |RangeNotSatisfiable | |417 |ExpectationFailed | |418 |ImATeapot | |421 |MisdirectedRequest | |422 |UnprocessableEntity | |423 |Locked | |424 |FailedDependency | |425 |UnorderedCollection | |426 |UpgradeRequired | |428 |PreconditionRequired | |429 |TooManyRequests | |431 |RequestHeaderFieldsTooLarge | |451 |UnavailableForLegalReasons | |500 |InternalServerError | |501 |NotImplemented | |502 |BadGateway | |503 |ServiceUnavailable | |504 |GatewayTimeout | |505 |HTTPVersionNotSupported | |506 |VariantAlsoNegotiates | |507 |InsufficientStorage | |508 |LoopDetected | |509 |BandwidthLimitExceeded | |510 |NotExtended | |511 |NetworkAuthenticationRequired| ## License [MIT](LICENSE) [ci-image]: https://badgen.net/github/checks/jshttp/http-errors/master?label=ci [ci-url]: https://github.com/jshttp/http-errors/actions?query=workflow%3Aci [coveralls-image]: https://badgen.net/coveralls/c/github/jshttp/http-errors/master [coveralls-url]: https://coveralls.io/r/jshttp/http-errors?branch=master [node-image]: https://badgen.net/npm/node/http-errors [node-url]: https://nodejs.org/en/download [npm-downloads-image]: https://badgen.net/npm/dm/http-errors [npm-url]: https://npmjs.org/package/http-errors [npm-version-image]: https://badgen.net/npm/v/http-errors [travis-image]: https://badgen.net/travis/jshttp/http-errors/master [travis-url]: https://travis-ci.org/jshttp/http-errors # Ozone - Javascript Class Framework [![Build Status](https://travis-ci.org/inf3rno/o3.png?branch=master)](https://travis-ci.org/inf3rno/o3) The Ozone class framework contains enhanced class support to ease the development of object-oriented javascript applications in an ES5 environment. Another alternative to get a better class support to use ES6 classes and compilers like Babel, Traceur or TypeScript until native ES6 support arrives. ## Documentation ### Installation ```bash npm install o3 ``` ```bash bower install o3 ``` #### Environment compatibility The framework succeeded the tests on - node v4.2 and v5.x - chrome 51.0 - firefox 47.0 and 48.0 - internet explorer 11.0 - phantomjs 2.1 by the usage of npm scripts under win7 x64. I wasn't able to test the framework by Opera since the Karma launcher is buggy, so I decided not to support Opera. I used [Yadda](https://github.com/acuminous/yadda) to write BDD tests. I used [Karma](https://github.com/karma-runner/karma) with [Browserify](https://github.com/substack/node-browserify) to test the framework in browsers. On pre-ES5 environments there will be bugs in the Class module due to pre-ES5 enumeration and the lack of some ES5 methods, so pre-ES5 environments are not supported. #### Requirements An ES5 capable environment is required with - `Object.create` - ES5 compatible property enumeration: `Object.defineProperty`, `Object.getOwnPropertyDescriptor`, `Object.prototype.hasOwnProperty`, etc. 
- `Array.prototype.forEach` #### Usage In this documentation I used the framework as follows: ```js var o3 = require("o3"), Class = o3.Class; ``` ### Inheritance #### Inheriting from native classes (from the Error class in these examples) You can extend native classes by calling the Class() function. ```js var UserError = Class(Error, { prototype: { message: "blah", constructor: function UserError() { Error.captureStackTrace(this, this.constructor); } } }); ``` An alternative to call Class.extend() with the Ancestor as the context. The Class() function uses this in the background. ```js var UserError = Class.extend.call(Error, { prototype: { message: "blah", constructor: function UserError() { Error.captureStackTrace(this, this.constructor); } } }); ``` #### Inheriting from custom classes You can use Class.extend() by any other class, not just by native classes. ```js var Ancestor = Class(Object, { prototype: { a: 1, b: 2 } }); var Descendant = Class.extend.call(Ancestor, { prototype: { c: 3 } }); ``` Or you can simply add it as a static method, so you don't have to pass context any time you want to use it. The only drawback, that this static method will be inherited as well. ```js var Ancestor = Class(Object, { extend: Class.extend, prototype: { a: 1, b: 2 } }); var Descendant = Ancestor.extend({ prototype: { c: 3 } }); ``` #### Inheriting from the Class class You can inherit the extend() method and other utility methods from the Class class. Probably this is the simplest solution if you need the Class API and you don't need to inherit from special native classes like Error. ```js var Ancestor = Class.extend({ prototype: { a: 1, b: 2 } }); var Descendant = Ancestor.extend({ prototype: { c: 3 } }); ``` #### Inheritance with clone and merge The static extend() method uses the clone() and merge() utility methods to inherit from the ancestor and add properties from the config. ```js var MyClass = Class.clone.call(Object, function MyClass(){ // ... }); Class.merge.call(MyClass, { prototype: { x: 1, y: 2 } }); ``` Or with utility methods. ```js var MyClass = Class.clone(function MyClass() { // ... }).merge({ prototype: { x: 1, y: 2 } }); ``` #### Inheritance with clone and absorb You can fill in missing properties with the usage of absorb. ```js var MyClass = Class(SomeAncestor, {...}); Class.absorb.call(MyClass, Class); MyClass.merge({...}); ``` For example if you don't have Class methods and your class already has an ancestor, then you can use absorb() to add Class methods. #### Abstract classes Using abstract classes with instantiation verification won't be implemented in this lib, however we provide an `abstractMethod`, which you can put to not implemented parts of your abstract class. ```js var AbstractA = Class({ prototype: { doA: function (){ // ... var b = this.getB(); // ... // do something with b // ... }, getB: abstractMethod } }); var AB1 = Class(AbstractA, { prototype: { getB: function (){ return new B1(); } } }); var ab1 = new AB1(); ``` I strongly support the composition over inheritance principle and I think you should use dependency injection instead of abstract classes. ```js var A = Class({ prototype: { init: function (b){ this.b = b; }, doA: function (){ // ... // do something with this.b // ... } } }); var b = new B1(); var ab1 = new A(b); ``` ### Constructors #### Using a custom constructor You can pass your custom constructor as a config option by creating the class. ```js var MyClass = Class(Object, { prototype: { constructor: function () { // ... 
} } }); ``` #### Using a custom factory to create the constructor Or you can pass a static factory method to create your custom constructor. ```js var MyClass = Class(Object, { factory: function () { return function () { // ... } } }); ``` #### Using an inherited factory to create the constructor By inheritance the constructors of the descendant classes will be automatically created as well. ```js var Ancestor = Class(Object, { factory: function () { return function () { // ... } } }); var Descendant = Class(Ancestor, {}); ``` #### Using the default factory to create the constructor You don't need to pass anything if you need a noop function as constructor. The Class.factory() will create a noop constructor by default. ```js var MyClass = Class(Object, {}); ``` In fact you don't need to pass any arguments to the Class function if you need an empty class inheriting from the Object native class. ```js var MyClass = Class(); ``` The default factory calls the build() and init() methods if they are given. ```js var MyClass = Class({ prototype: { build: function (options) { console.log("build", options); }, init: function (options) { console.log("init", options); } } }); var my = new MyClass({a: 1, b: 2}); // build {a: 1, b: 2} // init {a: 1, b: 2} var my2 = my.clone({c: 3}); // build {c: 3} var MyClass2 = MyClass.extend({}, [{d: 4}]); // build {d: 4} ``` ### Instantiation #### Creating new instance with the new operator Ofc. you can create a new instance in the javascript way. ```js var MyClass = Class(); var my = new MyClass(); ``` #### Creating a new instance with the static newInstance method If you want to pass an array of arguments then you can do it the following way. ```js var MyClass = Class.extend({ prototype: { constructor: function () { for (var i in arguments) console.log(arguments[i]); } } }); var my = MyClass.newInstance.apply(MyClass, ["a", "b", "c"]); // a // b // c ``` #### Creating new instance with clone You can create a new instance by cloning the prototype of the class. ```js var MyClass = Class(); var my = Class.prototype.clone.call(MyClass.prototype); ``` Or you can inherit the utility methods to make this easier. ```js var MyClass = Class.extend(); var my = MyClass.prototype.clone(); ``` Just be aware that by default cloning calls only the `build()` method, so the `init()` method won't be called by the new instance. #### Cloning instances You can clone an existing instance with the clone method. ```js var MyClass = Class.extend(); var my = MyClass.prototype.clone(); var my2 = my.clone(); ``` Be aware that this is prototypal inheritance with Object.create(), so the inherited properties won't be enumerable. The clone() method calls the build() method on the new instance if it is given. #### Using clone in the constructor You can use the same behavior both by cloning and by creating a new instance using the constructor ```js var MyClass = Class.extend({ lastIndex: 0, prototype: { index: undefined, constructor: function MyClass() { return MyClass.prototype.clone(); }, clone: function () { var instance = Class.prototype.clone.call(this); instance.index = ++MyClass.lastIndex; return instance; } } }); var my1 = new MyClass(); var my2 = MyClass.prototype.clone(); var my3 = my1.clone(); var my4 = my2.clone(); ``` Be aware that this way the constructor will drop the instance created with the `new` operator. Be aware that the clone() method is used by inheritance, so creating the prototype of a descendant class will use the clone() method as well. 
```js var Descendant = MyClass.clone(function Descendant() { return Descendant.prototype.clone(); }); var my5 = Descendant.prototype; var my6 = new Descendant(); // ... ``` #### Using absorb(), merge() or inheritance to set the defaults values on properties You can use absorb() to set default values after configuration. ```js var MyClass = Class.extend({ prototype: { constructor: function (config) { var theDefaults = { // ... }; this.merge(config); this.absorb(theDefaults); } } }); ``` You can use merge() to set default values before configuration. ```js var MyClass = Class.extend({ prototype: { constructor: function (config) { var theDefaults = { // ... }; this.merge(theDefaults); this.merge(config); } } }); ``` You can use inheritance to set default values on class level. ```js var MyClass = Class.extend({ prototype: { aProperty: defaultValue, // ... constructor: function (config) { this.merge(config); } } }); ``` ## License MIT - 2015 Jánszky László Lajos # near-api-js [![Build Status](https://travis-ci.com/near/near-api-js.svg?branch=master)](https://travis-ci.com/near/near-api-js) [![Gitpod Ready-to-Code](https://img.shields.io/badge/Gitpod-Ready--to--Code-blue?logo=gitpod)](https://gitpod.io/#https://github.com/near/near-api-js) A JavaScript/TypeScript library for development of DApps on the NEAR platform # Documentation [Read the TypeDoc API documentation](https://near.github.io/near-api-js/) --- # Examples ## [Quick Reference](https://github.com/near/near-api-js/blob/master/examples/quick-reference.md) _(Cheat sheet / quick reference)_ ## [Cookbook](https://github.com/near/near-api-js/blob/master/examples/cookbook/README.md) _(Common use cases / more complex examples)_ --- # Contribute to this library 1. Install dependencies yarn 2. Run continuous build with: yarn build -- -w # Publish Prepare `dist` version by running: yarn dist When publishing to npm use [np](https://github.com/sindresorhus/np). --- # Integration Test Start the node by following instructions from [nearcore](https://github.com/nearprotocol/nearcore), then yarn test Tests use sample contract from `near-hello` npm package, see https://github.com/nearprotocol/near-hello # Update error schema Follow next steps: 1. [Change hash for the commit with errors in the nearcore](https://github.com/near/near-api-js/blob/master/gen_error_types.js#L7-L9) 2. Fetch new schema: `node fetch_error_schema.js` 3. `yarn build` to update `lib/**.js` files # License This repository is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See [LICENSE](LICENSE) and [LICENSE-APACHE](LICENSE-APACHE) for details. # mustache.js - Logic-less {{mustache}} templates with JavaScript > What could be more logical awesome than no logic at all? [![Build Status](https://travis-ci.org/janl/mustache.js.svg?branch=master)](https://travis-ci.org/janl/mustache.js) [mustache.js](http://github.com/janl/mustache.js) is a zero-dependency implementation of the [mustache](http://mustache.github.com/) template system in JavaScript. [Mustache](http://mustache.github.com/) is a logic-less template syntax. It can be used for HTML, config files, source code - anything. It works by expanding tags in a template using values provided in a hash or object. We call it "logic-less" because there are no if statements, else clauses, or for loops. Instead there are only tags. Some tags are replaced with a value, some nothing, and others a series of values. 
For a language-agnostic overview of mustache's template syntax, see the `mustache(5)` [manpage](http://mustache.github.com/mustache.5.html). ## Where to use mustache.js? You can use mustache.js to render mustache templates anywhere you can use JavaScript. This includes web browsers, server-side environments such as [Node.js](http://nodejs.org/), and [CouchDB](http://couchdb.apache.org/) views. mustache.js ships with support for the [CommonJS](http://www.commonjs.org/) module API, the [Asynchronous Module Definition](https://github.com/amdjs/amdjs-api/wiki/AMD) API (AMD) and [ECMAScript modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules). In addition to being a package to be used programmatically, you can use it as a [command line tool](#command-line-tool). And this will be your templates after you use Mustache: !['stache](https://cloud.githubusercontent.com/assets/288977/8779228/a3cf700e-2f02-11e5-869a-300312fb7a00.gif) ## Install You can get Mustache via [npm](http://npmjs.com). ```bash $ npm install mustache --save ``` ## Usage Below is a quick example how to use mustache.js: ```js var view = { title: "Joe", calc: function () { return 2 + 4; } }; var output = Mustache.render("{{title}} spends {{calc}}", view); ``` In this example, the `Mustache.render` function takes two parameters: 1) the [mustache](http://mustache.github.com/) template and 2) a `view` object that contains the data and code needed to render the template. ## Templates A [mustache](http://mustache.github.com/) template is a string that contains any number of mustache tags. Tags are indicated by the double mustaches that surround them. `{{person}}` is a tag, as is `{{#person}}`. In both examples we refer to `person` as the tag's key. There are several types of tags available in mustache.js, described below. There are several techniques that can be used to load templates and hand them to mustache.js, here are two of them: #### Include Templates If you need a template for a dynamic part in a static website, you can consider including the template in the static HTML file to avoid loading templates separately. Here's a small example: ```js // file: render.js function renderHello() { var template = document.getElementById('template').innerHTML; var rendered = Mustache.render(template, { name: 'Luke' }); document.getElementById('target').innerHTML = rendered; } ``` ```html <html> <body onload="renderHello()"> <div id="target">Loading...</div> <script id="template" type="x-tmpl-mustache"> Hello {{ name }}! </script> <script src="https://unpkg.com/mustache@latest"></script> <script src="render.js"></script> </body> </html> ``` #### Load External Templates If your templates reside in individual files, you can load them asynchronously and render them when they arrive. Another example using [fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch): ```js function renderHello() { fetch('template.mustache') .then((response) => response.text()) .then((template) => { var rendered = Mustache.render(template, { name: 'Luke' }); document.getElementById('target').innerHTML = rendered; }); } ``` ### Variables The most basic tag type is a simple variable. A `{{name}}` tag renders the value of the `name` key in the current context. If there is no such key, nothing is rendered. All variables are HTML-escaped by default. If you want to render unescaped HTML, use the triple mustache: `{{{name}}}`. You can also use `&` to unescape a variable. 
If you'd like to change HTML-escaping behavior globally (for example, to template non-HTML formats), you can override Mustache's escape function. For example, to disable all escaping: `Mustache.escape = function(text) {return text;};`. If you want `{{name}}` _not_ to be interpreted as a mustache tag, but rather to appear exactly as `{{name}}` in the output, you must change and then restore the default delimiter. See the [Custom Delimiters](#custom-delimiters) section for more information. View: ```json { "name": "Chris", "company": "<b>GitHub</b>" } ``` Template: ``` * {{name}} * {{age}} * {{company}} * {{{company}}} * {{&company}} {{=<% %>=}} * {{company}} <%={{ }}=%> ``` Output: ```html * Chris * * &lt;b&gt;GitHub&lt;/b&gt; * <b>GitHub</b> * <b>GitHub</b> * {{company}} ``` JavaScript's dot notation may be used to access keys that are properties of objects in a view. View: ```json { "name": { "first": "Michael", "last": "Jackson" }, "age": "RIP" } ``` Template: ```html * {{name.first}} {{name.last}} * {{age}} ``` Output: ```html * Michael Jackson * RIP ``` ### Sections Sections render blocks of text zero or more times, depending on the value of the key in the current context. A section begins with a pound and ends with a slash. That is, `{{#person}}` begins a `person` section, while `{{/person}}` ends it. The text between the two tags is referred to as that section's "block". The behavior of the section is determined by the value of the key. #### False Values or Empty Lists If the `person` key does not exist, or exists and has a value of `null`, `undefined`, `false`, `0`, or `NaN`, or is an empty string or an empty list, the block will not be rendered. View: ```json { "person": false } ``` Template: ```html Shown. {{#person}} Never shown! {{/person}} ``` Output: ```html Shown. ``` #### Non-Empty Lists If the `person` key exists and is not `null`, `undefined`, or `false`, and is not an empty list the block will be rendered one or more times. When the value is a list, the block is rendered once for each item in the list. The context of the block is set to the current item in the list for each iteration. In this way we can loop over collections. View: ```json { "stooges": [ { "name": "Moe" }, { "name": "Larry" }, { "name": "Curly" } ] } ``` Template: ```html {{#stooges}} <b>{{name}}</b> {{/stooges}} ``` Output: ```html <b>Moe</b> <b>Larry</b> <b>Curly</b> ``` When looping over an array of strings, a `.` can be used to refer to the current item in the list. View: ```json { "musketeers": ["Athos", "Aramis", "Porthos", "D'Artagnan"] } ``` Template: ```html {{#musketeers}} * {{.}} {{/musketeers}} ``` Output: ```html * Athos * Aramis * Porthos * D'Artagnan ``` If the value of a section variable is a function, it will be called in the context of the current item in the list on each iteration. View: ```js { "beatles": [ { "firstName": "John", "lastName": "Lennon" }, { "firstName": "Paul", "lastName": "McCartney" }, { "firstName": "George", "lastName": "Harrison" }, { "firstName": "Ringo", "lastName": "Starr" } ], "name": function () { return this.firstName + " " + this.lastName; } } ``` Template: ```html {{#beatles}} * {{name}} {{/beatles}} ``` Output: ```html * John Lennon * Paul McCartney * George Harrison * Ringo Starr ``` #### Functions If the value of a section key is a function, it is called with the section's literal block of text, un-rendered, as its first argument. The second argument is a special rendering function that uses the current view as its view argument. 
It is called in the context of the current view object. View: ```js { "name": "Tater", "bold": function () { return function (text, render) { return "<b>" + render(text) + "</b>"; } } } ``` Template: ```html {{#bold}}Hi {{name}}.{{/bold}} ``` Output: ```html <b>Hi Tater.</b> ``` ### Inverted Sections An inverted section opens with `{{^section}}` instead of `{{#section}}`. The block of an inverted section is rendered only if the value of that section's tag is `null`, `undefined`, `false`, *falsy* or an empty list. View: ```json { "repos": [] } ``` Template: ```html {{#repos}}<b>{{name}}</b>{{/repos}} {{^repos}}No repos :({{/repos}} ``` Output: ```html No repos :( ``` ### Comments Comments begin with a bang and are ignored. The following template: ```html <h1>Today{{! ignore me }}.</h1> ``` Will render as follows: ```html <h1>Today.</h1> ``` Comments may contain newlines. ### Partials Partials begin with a greater than sign, like {{> box}}. Partials are rendered at runtime (as opposed to compile time), so recursive partials are possible. Just avoid infinite loops. They also inherit the calling context. Whereas in ERB you may have this: ```html+erb <%= partial :next_more, :start => start, :size => size %> ``` Mustache requires only this: ```html {{> next_more}} ``` Why? Because the `next_more.mustache` file will inherit the `size` and `start` variables from the calling context. In this way you may want to think of partials as includes, imports, template expansion, nested templates, or subtemplates, even though those aren't literally the case here. For example, this template and partial: base.mustache: <h2>Names</h2> {{#names}} {{> user}} {{/names}} user.mustache: <strong>{{name}}</strong> Can be thought of as a single, expanded template: ```html <h2>Names</h2> {{#names}} <strong>{{name}}</strong> {{/names}} ``` In mustache.js an object of partials may be passed as the third argument to `Mustache.render`. The object should be keyed by the name of the partial, and its value should be the partial text. ```js Mustache.render(template, view, { user: userTemplate }); ``` ### Custom Delimiters Custom delimiters can be used in place of `{{` and `}}` by setting the new values in JavaScript or in templates. #### Setting in JavaScript The `Mustache.tags` property holds an array consisting of the opening and closing tag values. Set custom values by passing a new array of tags to `render()`, which gets honored over the default values, or by overriding the `Mustache.tags` property itself: ```js var customTags = [ '<%', '%>' ]; ``` ##### Pass Value into Render Method ```js Mustache.render(template, view, {}, customTags); ``` ##### Override Tags Property ```js Mustache.tags = customTags; // Subsequent parse() and render() calls will use customTags ``` #### Setting in Templates Set Delimiter tags start with an equals sign and change the tag delimiters from `{{` and `}}` to custom strings. Consider the following contrived example: ```html+erb * {{ default_tags }} {{=<% %>=}} * <% erb_style_tags %> <%={{ }}=%> * {{ default_tags_again }} ``` Here we have a list with three items. The first item uses the default tag style, the second uses ERB style as defined by the Set Delimiter tag, and the third returns to the default style after yet another Set Delimiter declaration. 
## Pre-parsing and Caching Templates

By default, when mustache.js first parses a template it keeps the full parsed token tree in a cache. The next time it sees that same template it skips the parsing step and renders the template much more quickly. If you'd like, you can do this ahead of time using `mustache.parse`.

```js
Mustache.parse(template);

// Then, sometime later.
Mustache.render(template, view);
```

## Command line tool

mustache.js ships with a Node.js based command line tool. It can be installed as a global tool on your computer to render a mustache template:

```bash
$ npm install -g mustache

$ mustache dataView.json myTemplate.mustache > output.html
```

It also supports reading the view data from stdin:

```bash
$ cat dataView.json | mustache - myTemplate.mustache > output.html
```

Alternatively, install it as a `devDependency` in `package.json` and run it as part of a build process:

```bash
$ npm install mustache --save-dev
```

```json
{
  "scripts": {
    "build": "mustache dataView.json myTemplate.mustache > public/output.html"
  }
}
```

```bash
$ npm run build
```

The command line tool is essentially a wrapper around `Mustache.render`, so you get all of its features.

If your templates use partials, pass the paths to those partials using the `-p` flag:

```bash
$ mustache -p path/to/partial1.mustache -p path/to/partial2.mustache dataView.json myTemplate.mustache
```

## Plugins for JavaScript Libraries

mustache.js may be built specifically for several different client libraries, including the following:

- [jQuery](http://jquery.com/)
- [MooTools](http://mootools.net/)
- [Dojo](http://www.dojotoolkit.org/)
- [YUI](http://developer.yahoo.com/yui/)
- [qooxdoo](http://qooxdoo.org/)

These may be built using [Rake](http://rake.rubyforge.org/) and one of the following commands:

```bash
$ rake jquery
$ rake mootools
$ rake dojo
$ rake yui3
$ rake qooxdoo
```

## TypeScript

Since the source code of this package is written in JavaScript, we follow the [TypeScript publishing docs](https://www.typescriptlang.org/docs/handbook/declaration-files/publishing.html) preferred approach by having type definitions available via [@types/mustache](https://www.npmjs.com/package/@types/mustache).

## Testing

In order to run the tests you'll need to install [Node.js](http://nodejs.org/).

You also need to install the sub module containing [Mustache specifications](http://github.com/mustache/spec) in the project root.

```bash
$ git submodule init
$ git submodule update
```

Install dependencies.

```bash
$ npm install
```

Then run the tests.

```bash
$ npm test
```

The test suite consists of both unit and integration tests. If a template isn't rendering correctly for you, you can make a test for it by doing the following:

1. Create a template file named `mytest.mustache` in the `test/_files` directory. Replace `mytest` with the name of your test.
2. Create a corresponding view file named `mytest.js` in the same directory. This file should contain a JavaScript object literal enclosed in parentheses. See any of the other view files, or the sketch after this list, for an example.
3. Create a file with the expected output in `mytest.txt` in the same directory.
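For illustration, a view file for such a test might look like the following; the file name follows the `mytest` placeholder above and the contents are hypothetical:

```js
// test/_files/mytest.js (hypothetical): a JavaScript object literal
// enclosed in parentheses, used as the view for mytest.mustache.
// If mytest.mustache contained "Hello, {{name}}!", then mytest.txt
// (the expected output) would contain "Hello, Chris!".
({
  name: 'Chris'
})
```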
Then, you can run the test with:

```bash
$ TEST=mytest npm run test-render
```

### Browser tests

Browser tests are not included in `npm test` because they take too long to run; they are run automatically on Travis when changes are merged into master.

Run the browser tests locally in any browser:

```bash
$ npm run test-browser-local
```

then point your browser to `http://localhost:8080/__zuul`.

## Who uses mustache.js?

An updated list of mustache.js users is kept [on the GitHub wiki](https://github.com/janl/mustache.js/wiki/Beard-Competition). Add yourself or your company if you use mustache.js!

## Contributing

mustache.js is a mature project, but it continues to actively invite maintainers. You can help out a high-profile project that is used in a lot of places on the web. No big commitment required: if all you do is review a single [Pull Request](https://github.com/janl/mustache.js/pulls), you are a maintainer. And a hero.

### Your First Contribution

- review a [Pull Request](https://github.com/janl/mustache.js/pulls)
- fix an [Issue](https://github.com/janl/mustache.js/issues)
- update the [documentation](https://github.com/janl/mustache.js#usage)
- make a website
- write a tutorial

## Thanks

mustache.js wouldn't kick ass if it weren't for these fine souls:

* Chris Wanstrath / defunkt
* Alexander Lang / langalex
* Sebastian Cohnen / tisba
* J Chris Anderson / jchris
* Tom Robinson / tlrobinson
* Aaron Quint / quirkey
* Douglas Crockford
* Nikita Vasilyev / NV
* Elise Wood / glytch
* Damien Mathieu / dmathieu
* Jakub Kuźma / qoobaa
* Will Leinweber / will
* dpree
* Jason Smith / jhs
* Aaron Gibralter / agibralter
* Ross Boucher / boucher
* Matt Sanford / mzsanford
* Ben Cherry / bcherry
* Michael Jackson / mjackson
* Phillip Johnsen / phillipj
* David da Silva Contín / dasilvacontin